Cdbi wishlist

has_a semantics
This discussion has been moved to has_a semantics wish list, as it's getting rather long!

Relationships metadata
Moved to relationship metadata.

Columns metadata
In subclassing Class::DBI::mysql I'm doing something like this, and I'd love to see some of this kind of functionality in the base class. You could then do something like @columns = $cdbi->get_columns; and then, per column, things like $col->is_nullable, $col->enum_set_list or $col->default. It makes it really easy to put a CGI front end on top of it.

sub get_columns {
    require Fields;
    my ( $self, $table ) = @_;
    $table ||= $self->table;
    unless ( $CACHE{$table}{COLUMNS} ) {
        ( my $sth = $self->sql_desc_table )->execute;
        for ( $sth->fetchall ) {
            my ($xtra) = $_->[1] =~ /\((.+)\)/;    # capture size/info
            my @vals;                              # set/enum values, if any
            @vals = map { s/^'|'$//g; $_ } split "','", $xtra
                if $_->[1] =~ /^(set|enum)/i;
            my $size = scalar @vals || $xtra;
            push @{ $CACHE{$table}{COLUMNS} }, Fields->new( [
                $_->[0],                  # name
                $_->[1],                  # type
                $_->[4],                  # default
                $size,                    # size
                @vals ? \@vals : undef,   # set/enum arrayref
                $_->[5],                  # extra
                $_->[2],                  # null
                $_->[3],                  # key
                $table,                   # table
            ] );
        }
    }
    return wantarray ? @{ $CACHE{$table}{COLUMNS} } : $CACHE{$table}{COLUMNS};
}

NOTE: You should probably be playing with Class::DBI::Column, rather than throwing this in as a list...

(Will try that. Thanks for the pointer.)

Class::DBI::Pager improvements
Wanted: A CDBI pager that uses LIMIT,OFFSET (or ROWNUM) mechanisms from the database, not filtering the results of the query in Perl.

This is going to be difficult, because you won't be able to tell the Pager the total number of records until after you run the query, and then it's too late to add the proper LIMIT statement in there.

I don't understand why you need to know the number of records before running the query. With recent versions of MySQL, you can ask it how many rows would have been returned if you had not added the "LIMIT" statement.
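For reference, that MySQL mechanism can be driven from plain DBI like this (a sketch: $dbh, the article table and its columns are assumptions, and SQL_CALC_FOUND_ROWS needs MySQL 4.0 or later):

```perl
# Fetch one page, then ask MySQL how many rows the query would have
# returned without the LIMIT clause.
my $page     = 3;
my $per_page = 20;
my $offset   = ( $page - 1 ) * $per_page;

my $rows = $dbh->selectall_arrayref(
    'SELECT SQL_CALC_FOUND_ROWS id, title FROM article
     ORDER BY id LIMIT ? OFFSET ?',
    { Slice => {} }, $per_page, $offset,
);
my ($total) = $dbh->selectrow_array('SELECT FOUND_ROWS()');

# If $page was past the end, $rows is simply empty, but $total still
# tells you how many pages there really are, so you can clamp afterwards.
```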

You need to know the number of records before running a query with a LIMIT clause. Otherwise, you cannot check the validity of the offset value, which you calculate from the current page number and the entries per page. If you don't know the total number of records, there is no way to validate the page number the user gives you in his input. The only way to do it with one big SELECT query is to check the correctness of the given page number after the query itself.

Well, I don't know why you really have to check that. I think this check belongs in the application: do a COUNT query first and then a LIMIT/OFFSET query. If it still fails after that, you can handle it as an empty result or reduce the page number, whichever you prefer.

This whole question is about whether or not to do a COUNT query before the real one. If you try to get by with only one SELECT query, these dilemmas will catch you. Obviously, if you do a COUNT query first, there is nothing to discuss.

Additional discussion at CDBI pagers.

Binding types
Wish there was a method to supply the binding type for placeholders, like you can with DBI:

$sth->bind_param(1, $value, SQL_INTEGER);

Why is this necessary?

This could for example partially solve the problem above with the OFFSET, LIMIT clause in PostgreSQL (which requires that the placeholders for OFFSET and LIMIT be explicitly typed). I am sure there are other cases where it would be useful as well. You just need to look for the cases where you need that feature in pure DBI.
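In pure DBI the wished-for feature looks like this (a sketch; $dbh and the article table are assumptions, but bind_param and the :sql_types constants are standard DBI):

```perl
use DBI qw(:sql_types);

# PostgreSQL wants explicitly typed placeholders for LIMIT/OFFSET,
# which plain positional binding can't express.
my $sth = $dbh->prepare(
    'SELECT id, title FROM article ORDER BY id LIMIT ? OFFSET ?'
);
$sth->bind_param( 1, 20, SQL_INTEGER );    # limit
$sth->bind_param( 2, 40, SQL_INTEGER );    # offset
$sth->execute;
```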

Trigger access
Triggers should have access to the old values as well as the new

In Oracle you have :old and :new automatic variables in certain triggers. I recently tried to do something with an after_update trigger, but I could no longer retrieve the pre-update value of the field I wanted to act on. Changing discard_columns into a changed_columns list of field names with their old and new values would be most helpful.

I've attempted a fix for this for my own use. There's a patch available at http://www.geekthing.com/~robf/perl/Class-DBI-oldvalues.diff.txt. I don't use after_update triggers, so haven't verified when the old values are cleared relative to when an after_update trigger would run. This patch was last updated 18 March 2006.

An alternative solution which does not require modification of CDBI is to use a global variable and the 'select' trigger. For example, imagine a table with a counter field (n) plus another, linked table (metastats), with a field containing the sum of all counters (tot_n). The following snippet illustrates how to change the tot_n sum whenever a field n is changed:

our $orig_n;

__PACKAGE__->add_trigger( select        => \&remember_orig_n );
__PACKAGE__->add_trigger( before_update => \&update_tot_n );

sub remember_orig_n {
    my $this = shift;
    $orig_n = $this->n;
}

sub update_tot_n {
    my $this = shift;
    return unless ref($this);    # different triggers used for create and delete (not shown)
    my $metastats = $this->metastats;
    my $tot_n     = $metastats->tot_n - $orig_n + $this->n;
    $metastats->tot_n($tot_n);
    $metastats->update;
}

Cache mixins
Wish Class::DBI::mysql and Class::DBI::Cacheable were mixins and not subclasses

I want to make my Class::DBI::mysql stuff cacheable. Currently I'm mixing by hand.

ANSWER: Class::DBI::mysql is really a subclass. It overrides behavior in the parent class. It's not likely to become a mixin any time soon.

DBIx::Sequence and primary keys
Wish DBIx::Sequence could be used to provide primary key values

I wish something like DBIx::Sequence was used to automagically provide primary key values where appropriate (where it's not supported by the DB).

Maybe there is already something like that. If so, a pointer would be helpful.

-- JohanLindstrom

ANSWER: This should be added to database specific subclasses for any database that doesn't support this. It doesn't belong in CDBI itself.

Validation should check all columns
&validate_column_values currently goes through the hash of column values, kicking off triggers based on the keys to that hash. This means NOT NULL triggers won't get triggered unless an empty value has been dropped into the hash. I suggest that instead the full column hash should be iterated through.

NOTE: before you iterate through the full hash, you should have default values that automatically get dropped in. Otherwise, you force people to specify every value upfront. I depend on the database doing the storage to plug in default values, and I'm not the only one, so your idea needs column defaults as well or it will break a lot of existing code.

NOTE^2: Default values are not always desirable; in these cases, validation should be able to check for null values.

prepare_cached should be optional
I'm an Ingres user and when using Class::DBI and transactions, I get complaints about prepared statements being done after a commit. Apparently Ingres (or DBD::Ingres) won't use a prepared statement after a commit. I fixed this by hacking Ima::DBI and changed the default to not cached.

My original hope was that I could simply override set_sql to turn off caching. Unfortunately Class::DBI doesn't call my overridden set_sql, so I had to go all the way down to Ima::DBI.

You can turn off caching when writing your own SQL, but not when using the built in SQL.

ANSWER: You can also override the built in SQL...

There's no other sensible way that I can think of to make something like this happen, as CDBI sets up its SQL at compile time and can't know what you're planning to do with it.

This problem is also present if you are a Sybase user. prepare_cached does not work properly with Sybase/DBD::Sybase (this is documented in DBD::Sybase, and the module author is aware of the problem), so it would be nice to be able to turn off caching entirely somehow. The perldoc seems to imply that this should be relatively simple, but I had to modify the Class::DBI code to get it to work; efforts at overriding set_sql in my own subclass of Class::DBI::Sybase didn't work. It seems feasible not to have to modify the SQL, but just the cache flag that is passed down.

The change in Class::DBI::set_sql was from:

$class->SUPER::set_sql($name, $sql, $db, @others);

to

$class->SUPER::set_sql($name, $sql, $db, 0, @others);

Table name prefixing
I'd like an easy way to add a prefix to a group of table names for a Class::DBI application (i.e. siesta_*). This is a pretty basic requirement for multiple apps to co-exist in a single database. I know that of course one can use multiple db's, if you have access to multiples, but it's often much easier just to use one. Makes joins easier too.

ANSWER: Huh? CDBI doesn't have any concept of 'applications' or groups of table names.

Retrieve values instead of objects
It would be nice to be able to use retrieve, retrieve_all, search and search_like to get data structures instead of objects. For example, retrieve_all could return an arrayref which contains a bunch of hashrefs of the actual data. This is useful for passing to template engines like HTML::Template.
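Until something like that exists, one way to get plain hashrefs today is to flatten the objects yourself (a sketch; the My::CD class is an assumption, but get() and columns('All') are standard Class::DBI):

```perl
# Flatten each object into a hashref of column => value.
my @rows = map {
    my $obj = $_;
    { map { $_ => $obj->get($_) } $obj->columns('All') }
} My::CD->retrieve_all;

# \@rows can now be handed straight to HTML::Template:
# $template->param( cds => \@rows );
```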

THOUGHT: Doesn't this really just indicate that the template engine's a bit naff? Template::Toolkit, HTML::Mason etc. are quite happy with objects - and if you simply must use a template engine that doesn't understand what an object is, why not just wrap the objects in tied hashes that call accessor methods on FETCH etc.

IMPLEMENTATION:

package Tie::Hash::Object;
use strict;
use base 'Tie::Hash';

sub TIEHASH {
    my ( $class, $obj ) = @_;
    return bless \$obj, $class;
}

sub FETCH {
    my ( $self, $key ) = @_;
    return $$self->$key();    # delegate to the object's accessor method
}

1;

Allow specific mapping of accessor names to column names
It would be very nice to allow specifying a mapping of accessor names to column names. It is possible to do this in accessor_name, but it seems rather kludgy.
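For comparison, here is the kludgy-but-working version via accessor_name (the column and accessor names are made up; note that newer Class::DBI versions spell this hook accessor_name_for, with mutator_name/mutator_name_for as the write-side equivalent):

```perl
package My::Customer;
use base 'Class::DBI';

# Map ugly column names to nicer accessors; anything not listed
# falls through to the column name itself.
my %nice_name = ( cust_fname => 'first_name', cust_lname => 'last_name' );

sub accessor_name {
    my ( $class, $column ) = @_;
    return $nice_name{$column} || $column;
}
```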

Allow searching columns of related tables in search and search_like
Will Hawes info@whawes.co.uk

I regularly need to locate Order objects based on values stored in related tables, e.g. InvoiceAddress, Customer. At present you either have to return all the Order objects in an array and then examine properties of their related classes (memory-intensive and slow), write custom SQL to include columns from the related tables (a pain to do over and over), or use views/stored procedures in the database (not supported by some RDBMS software). Having Class::DBI add the table for each related class to the FROM clause of its generated SQL statement would give access to improved search capabilities using minimal code.

E.g.

SELECT [table.fieldlist] FROM [table], [related_table_1], [related_table_n] WHERE [table.foreign_key_1] = [related_table_1.pk] AND [table.foreign_key_n] = [related_table_n.pk] AND [%s].

ANSWER: I don't understand this question. Perhaps you could take it up on the mailing list?

This has been discussed on the mailing list recently under the heading "Allow searching columns of related tables in search and search_like".

ANSWER #2: Class::DBI::Plugin::DeepAbstractSearch allows you to search on fields in joined classes.
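A usage sketch (the class, relationship and column names here are assumptions; the dotted paths follow the plugin's convention of naming has_a columns along the join, and the operators come from SQL::Abstract):

```perl
package My::Order;
use base 'Class::DBI';
use Class::DBI::Plugin::DeepAbstractSearch;

# elsewhere in the application:
my @orders = My::Order->deep_search_where( {
    'customer.surname'     => { like => 'Smith%' },
    'invoice_address.city' => 'London',
} );
```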

DISTINCT
I would love to see a native distinct method (especially on a column). Using DeepAbstractSearch, sometimes you are joining tables, but if one table is a cross reference table (many to many), you will get duplicate entries. In SQL you can solve this with distinct:

SELECT distinct mytable.id from mytable,mytable2 where mytable.fid = mytable2.id

As a hack, I've been looking into overriding the iterator to provide a distinct method, but I haven't solved it yet.
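Until a native method exists, one workaround is a custom query inflated with sth_to_objects (a sketch against the tables above; it assumes id is the class's primary key, since sth_to_objects needs the essential columns in the result set):

```perl
__PACKAGE__->set_sql( distinct_by_fid => qq{
    SELECT DISTINCT mytable.id
    FROM   mytable, mytable2
    WHERE  mytable.fid = mytable2.id
} );

# Returns each matching row exactly once, despite the join.
sub retrieve_distinct {
    my $class = shift;
    return $class->sth_to_objects( $class->sql_distinct_by_fid );
}
```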

A LazyReconnectionHandler
Look here LazyReconnectionHandler

Automatic Schema Generation
It would be great to be able to automatically generate the database schema from the Class::DBI objects, rather than requiring two separate files that can get out of sync with each other.

Add more wishes/gripes here...
... preferably as separate pages, linked from here, rather than more text on this page ...