Zeos truncates data
Moderators: gto, EgonHugeist
Using other software I can see that there is more than 255 characters in the database, but using ZTable or ZQuery only gives the first 255 characters.
How do I get to see all my data?
-
- Expert Boarder
- Posts: 164
- Joined: 18.03.2008, 13:03
- Contact:
vannus,
I can't help you much with SQLite as I'm not using it, but maybe there is some analogy with PostgreSQL. I noticed that I also get long varchar fields truncated to 255. The answer to "why?" lies in libpq, the client library for Postgres. Somehow, if the field is bigger than 255, PQfsize returns -1, which means "... the data type is variable-length."
Maybe it's the same (or similar) with SQLite? You should investigate the procedure TZSQLiteResultSet.Open. That is where ColumnsInfo is defined.
It would be nice to hear from you how it is in SQLite...
Good luck!
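The PQfsize observation above has a direct parallel in SQLite: the client API reports no fixed column size at all, so a wrapper library has to invent a default. A quick illustration (Python's stdlib sqlite3 here, just as a runnable stand-in for what any SQLite binding sees):

```python
import sqlite3

# SQLite columns carry no fixed size: even a column declared
# VARCHAR(200) reports no length through the client API, and the
# declared length is not enforced on insert either.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (name VARCHAR(200))")
conn.execute("INSERT INTO t VALUES (?)", ("x" * 5000,))

cur = conn.execute("SELECT name FROM t")
# cursor.description is a 7-tuple (name, type_code, display_size,
# internal_size, precision, scale, null_ok); everything but the
# name is None - the moral equivalent of PQfsize returning -1.
print(cur.description[0])
print(len(cur.fetchone()[0]))  # the full 5000 characters come back
```

So any fixed limit like 255 is the access layer's own choice, not something SQLite reports.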
Thanks for the tip. I thought I would get it sorted when I first looked at TZSQLiteResultSet.Open, but the problem looks to be a lot deeper in the code :S
I've found that
1- Zeos defaults to 255 characters for fields with an undefined field size (in SQLite)
2- replacing that with the SQLite default field size (1,000,000,000) causes an EOutOfMemory exception
3- going with a large enough field size (5000 in my case) gets replaced somewhere after TZSQLiteResultSet.Open
I've tried to find the code that shrinks the field size, but can't.
It seems to be between
. InternalInitFieldDefs;
. ColumnList := ConvertFieldsToColumnInfo(Fields);
in
. TZAbstractRODataset.InternalOpen;
Strangely, I can't debug BindFields(True);
I think I'll have to use a nasty workaround of splitting the data over multiple fields.
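The findings above come down to how a driver picks a field size when the declared type carries none. A minimal sketch of that strategy (Python, with a hypothetical helper name; Zeos's own logic lives in ConvertSQLiteTypeToSQLType, discussed below): parse the length out of the declared type, and fall back to a fixed default only for columns with no declared length.

```python
import re

# Hypothetical sketch of a driver's size-selection strategy:
# take the length from the declared type if one was given,
# otherwise fall back to a fixed default (Zeos uses 255).
DEFAULT_SIZE = 255

def field_size(decltype: str) -> int:
    m = re.match(r"\s*\w+\s*\(\s*(\d+)", decltype or "")
    return int(m.group(1)) if m else DEFAULT_SIZE

print(field_size("VARCHAR(5000)"))  # 5000
print(field_size("TEXT"))           # 255 (fallback)
print(field_size(""))               # 255 (no declared type at all)
```

With a strategy like this, only genuinely undeclared columns would hit the 255 default, rather than every long field.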
vannus,
Did you look into

ColumnType := ConvertSQLiteTypeToSQLType(TypeName^,
  FieldPrecision, FieldDecimals);

Does it return the correct field type and precision?
If you set the field size & precision in the SQLite database, then ConvertSQLiteTypeToSQLType returns the correct field type, size & precision.
However, during TZAbstractRODataset.InternalOpen, which happens later on, those values get changed:
- my 200-length fields become 255
- my 5000-length fields become 1000
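To confirm that the correct sizes really are available at open time, you can inspect the declared column types SQLite keeps for the table. An illustrative check (Python sqlite3, not Zeos code): PRAGMA table_info returns each declared type verbatim, which is exactly the string a driver parses for size and precision.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE notes (title VARCHAR(200), body VARCHAR(5000))")

# PRAGMA table_info exposes each column's declared type verbatim,
# so the 200 and 5000 lengths are recoverable by the driver.
for cid, name, decltype, notnull, dflt, pk in conn.execute("PRAGMA table_info(notes)"):
    print(name, decltype)
# title VARCHAR(200)
# body VARCHAR(5000)
```

Which matches the observation above: the result set layer gets the right sizes, and they only change later in InternalOpen.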
This is really strange, as I observed no such behaviour with Postgres (with fields less than 255 characters long). I expect Postgres and SQLite behave alike in the abstract units... Now I'm at work, so I can't test this, but maybe tonight I'll investigate. No promises though.
Good luck!
I created a test application from scratch and tried the same db (with field sizes defined) - and it reads the data correctly.
The changing field sizes problem must be something to do with my original application and not Zeos.
I'll post something in the SQLite section to discuss the 255 being the default string length.
Thanks for your help!
vannus,
I would discourage you from using ZTable on tables that are growing, as it will slow your application down in the future. ZTable is good for tables of data that doesn't change - that you write once and where new records are added rarely. If the table is used for day-to-day operation (sales, logging and so on), then ZQuery should be used with a WHERE condition that filters only the relevant records. But I'm sure you already know this.
Good luck!
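The advice above - filter in SQL rather than loading the whole table - can be sketched in plain SQLite terms (Python here for brevity; with Zeos, the same parameterized SELECT would go into a ZQuery's SQL property, with table and column names being made-up examples):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (id INTEGER PRIMARY KEY, day TEXT, total REAL)")
conn.executemany("INSERT INTO sales (day, total) VALUES (?, ?)",
                 [("2008-03-01", 10.0), ("2008-03-02", 25.0), ("2008-04-01", 5.0)])

# Instead of opening the whole table (ZTable-style), fetch only the
# relevant rows with a parameterized WHERE clause (ZQuery-style).
rows = conn.execute(
    "SELECT day, total FROM sales WHERE day LIKE ? ORDER BY day",
    ("2008-03-%",)).fetchall()
print(rows)  # [('2008-03-01', 10.0), ('2008-03-02', 25.0)]
```

Only the filtered rows cross the driver, so the cost stays proportional to the result set rather than to the table.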