Hi
I want to use a dynamic SQL dataset (a SQLite ":memory:" database) to link to some non-SQL tables, which I'll populate into the dataset at runtime.
The theory works fine, but I need a way of determining the tables and fields before the real SQL is run (they'll need to be populated at that point, and there's too much data to populate all the tables at once).
The following code gets the initial tables and fields from a basic SQL select statement:
var
  idx: Integer;
  ZSQLiteStatementAnalyser: TZSQLiteStatementAnalyser;
  ZSQLiteTokenizer: TZSQLiteTokenizer;
  ZSelectSchema: IZSelectSchema;
begin
  ZSQLiteTokenizer := TZSQLiteTokenizer.Create;
  ZSQLiteStatementAnalyser := TZSQLiteStatementAnalyser.Create;
  // Parse the statement into a schema of tables and selected fields.
  ZSelectSchema := ZSQLiteStatementAnalyser.DefineSelectSchemaFromQuery(
    ZSQLiteTokenizer,
    'select Firstname, Lastname from Table1, Table2 ' +
    'where Table1.id = Table2.id and Age > 30');
  for idx := 0 to ZSelectSchema.TableCount - 1 do
    ShowMessage(ZSelectSchema.Tables[idx].Table);
  for idx := 0 to ZSelectSchema.FieldCount - 1 do
    ShowMessage(ZSelectSchema.Fields[idx].Field);
end;
This works, but it doesn't pick up the "joined" fields or the fields used only in the "where" clause.
Is there a way of doing this?
Chrs
Parsing SQL to get fields & tables
mdaems (Zeos Project Manager, Brussels, Belgium) replied:
I'm afraid this isn't implemented. For the zeoslib project only the table names and the selected field names are important; the fields that are only used in joins or in where clauses are not important to us. So: just add them to your select list if you need their definitions.
Or write the necessary code to parse the where and/or join parts. This isn't an easy thing to do, however...
If you only want to be sure you know all the field definitions that could be involved, you can try this:
- Parse the SQL to know what tables are involved.
- Change the initial query so that the selected fields are replaced by <table a>.*, <table b>.*, ...
- Parse the new query -> now all fields of all involved tables should be available (I hope).
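The two-pass idea above might be sketched like this, reusing the Zeos classes from the snippet earlier in the thread (object lifetime management is omitted, as in that snippet; note that the second pass yields `Table.*` entries, which still need database metadata to resolve into concrete columns):

```pascal
var
  idx: Integer;
  Analyser: TZSQLiteStatementAnalyser;
  Tokenizer: TZSQLiteTokenizer;
  Schema, WideSchema: IZSelectSchema;
  SelectList, WideQuery: string;
begin
  Tokenizer := TZSQLiteTokenizer.Create;
  Analyser := TZSQLiteStatementAnalyser.Create;

  // Pass 1: find out which tables are involved.
  Schema := Analyser.DefineSelectSchemaFromQuery(Tokenizer,
    'select Firstname, Lastname from Table1, Table2 ' +
    'where Table1.id = Table2.id and Age > 30');

  // Build a "Table1.*, Table2.*, ..." select list from the tables found.
  SelectList := '';
  for idx := 0 to Schema.TableCount - 1 do begin
    if SelectList <> '' then
      SelectList := SelectList + ', ';
    SelectList := SelectList + Schema.Tables[idx].Table + '.*';
  end;

  // Pass 2: re-parse the widened query, so every involved table
  // contributes its fields to the schema.
  WideQuery := 'select ' + SelectList +
    ' from Table1, Table2 where Table1.id = Table2.id and Age > 30';
  WideSchema := Analyser.DefineSelectSchemaFromQuery(Tokenizer, WideQuery);
  for idx := 0 to WideSchema.FieldCount - 1 do
    ShowMessage(WideSchema.Fields[idx].Field);
end;
```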
If you think about writing your own parser:
- think about adding the additionally found fields to ZSelectSchema.Fields;
- try to develop good structures for 'table links' and possibly (if you need them) 'where conditions'.
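One possible shape for the 'table links' and 'where conditions' structures suggested above (purely illustrative; these types are not part of Zeos, and a real parser would likely need nested conditions and expression operands too):

```pascal
type
  // One join/link between two tables, e.g. Table1.id = Table2.id.
  TTableLink = record
    LeftTable, LeftField: string;
    RightTable, RightField: string;
  end;

  // One simple where condition, e.g. Age > 30.
  TWhereCondition = record
    Table, Field: string;  // Table is empty for unqualified fields
    Op: string;            // '=', '>', 'LIKE', ...
    Value: string;         // literal or parameter as written in the SQL
  end;
```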
It would be nice if we could extend our parser classes to include these, but at the moment we have no need for it. Please let us know if you do write one; if your implementation looks good, we could add it to the library.
Mark
Hi Mark
I needed a fairly quick solution, so I've hacked together a combination of the code above: get the initial field list and table list, then iterate again through the list of tokens from the TokenizeQuery function. First, any identifier following a "JOIN" keyword is added to the table list; then anything else not already in the table list is treated as a candidate field, after discarding anything that couldn't be an identifier (only certain characters are allowed) or is a reserved word.
One interesting thing I had to do was re-merge qualified field names (as in "table1.fieldname"), since the tokenizer breaks these into separate tokens ("table1", ".", "fieldname").
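The post-processing described above could look something like this sketch. It assumes the tokens from TokenizeQuery have been copied, in order, into a TStringList (the actual return type of TokenizeQuery may differ in your Zeos version), and it only handles the JOIN-table and qualified-name cases, not the reserved-word filtering:

```pascal
uses
  Classes, SysUtils;

procedure PostProcessTokens(Tokens, TableList: TStringList);
var
  idx: Integer;
begin
  idx := 0;
  while idx < Tokens.Count do begin
    // Re-merge qualified names: "table1", ".", "fieldname" -> "table1.fieldname".
    if (idx + 2 < Tokens.Count) and (Tokens[idx + 1] = '.') then begin
      Tokens[idx] := Tokens[idx] + '.' + Tokens[idx + 2];
      Tokens.Delete(idx + 2);
      Tokens.Delete(idx + 1);
      Continue; // re-check the same position, e.g. for "db.table.field"
    end;
    // Collect the identifier that follows a JOIN keyword as a table name.
    if SameText(Tokens[idx], 'JOIN') and (idx + 1 < Tokens.Count) then
      if TableList.IndexOf(Tokens[idx + 1]) < 0 then
        TableList.Add(Tokens[idx + 1]);
    Inc(idx);
  end;
end;
```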
It's a bit of a hack, but it seems to work (at least for my current needs). I did look around for other free source code that would do this, but I think all the ones I came across have some issues.
For the longer term it would be better to adopt a more universal approach. One possibility is to do what the SQLite code itself does and use the "Lemon" parser generator to generate code supporting the grammar, but this is beyond the time I have available at the moment. Hopefully I can get back to this relatively soon if people consider this approach a good one.
Thanks.
Chrs