Definition of TZRowBuffer.Columns (ZDbcCache.pas) too small

Posted: 23.05.2007, 10:35
by xtra
Hi,
we just stumbled across a little oddity when executing a rather large SELECT statement (lots of joins etc.).
The resulting rows exceeded the hardcoded limit of 32 KB per row in Zeos.
The limit is defined in ZDbcCache.pas, where Columns is declared as a TByteArray (TByteArray is defined as array[0..32767] of Byte in SysUtils.pas).
So Delphi raised a range-check error when executing the query.
The exception originated in TZRowAccessor.SetInt(), but it could occur in any of the SetXYZ methods, as there is no explicit range checking in them.
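
For illustration, a minimal sketch of the kind of explicit guard the setters could use. CheckColumnOffset is my own hypothetical helper, not existing Zeos code; without something like it, the overflow is only caught when the compiler's range checking ({$R+}) happens to be enabled:

Code:

  program RangeCheckSketch;
  {$APPTYPE CONSOLE}

  uses SysUtils;

  // Hypothetical guard: reports an overflow clearly instead of relying
  // on compiler range checks.
  procedure CheckColumnOffset(Offset, RowSize: Integer);
  begin
    if (Offset < 0) or (Offset >= RowSize) then
      raise ERangeError.CreateFmt(
        'Column offset %d is outside the row buffer (size %d)',
        [Offset, RowSize]);
  end;

  begin
    try
      CheckColumnOffset(40000, 32768);  // 40000 > 32 KB row: must fail
    except
      on E: ERangeError do
        WriteLn(E.Message);
    end;
  end.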

So I decided to simply widen the declaration of TZRowBuffer.Columns to 128 KB instead of 32 KB.
Should I expect any negative side effects from doing so, or is it OK?
For now it seems to work like a charm.

Code:

  (*  Previous declaration, limited to 32 KB by TByteArray:
  {** Defines a header for row buffer. }
  TZRowBuffer = packed record
    Index: Integer;
    UpdateType: TZRowUpdateType;
    BookmarkFlag: Byte;
    Columns: TByteArray;
  end;
  *)

  // changed by c.schiffler because 32 KB was simply too small; we use 128 KB now.
  TColumnArray = array[0..131071] of Byte;  // 131072 bytes = 128 KB

  {** Defines a header for row buffer. }
  TZRowBuffer = packed record
    Index: Integer;
    UpdateType: TZRowUpdateType;
    BookmarkFlag: Byte;
    Columns: TColumnArray;
  end;

Posted: 31.05.2007, 23:34
by mdaems
Hi,

I added this patch; I only hope it doesn't eat too much memory. If it does, we'll have to roll back this change. Rev. 257 of the SVN Testing Branch.

Mark

Posted: 01.06.2007, 09:26
by btrewern
Mark,

Is this per table, per row, or what? If it's per row, then I'm not sure it's a good idea to quadruple the memory used just for this case.

Regards,

Ben

Posted: 01.06.2007, 10:24
by mdaems
Ben,

As far as I can see, memory is allocated only for the real row size that is needed (see the Alloc and AllocBuffer methods of TZRowAccessor).
If you have an idea how to verify this in the real world, please feel free to do so. I don't know enough about ways to measure memory usage.
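
To illustrate what I mean, here is a minimal self-contained sketch of that allocation pattern (the record layout matches this thread; the enum values, RowHeaderSize, and the demo program itself are my own assumptions, not the actual Zeos sources). If the buffer is allocated with the computed row size, the declared upper bound of Columns matters only for compile-time range checks; if, however, some code path allocates SizeOf(TZRowBuffer) instead, the declared bound inflates every cached row, which would explain a large difference in memory use:

Code:

  program RowBufferAllocSketch;
  {$APPTYPE CONSOLE}

  uses SysUtils;

  type
    // Simplified stand-ins; the real declarations live in ZDbcCache.pas.
    TZRowUpdateType = (utUnmodified, utModified, utInserted, utDeleted);
    TZRowBuffer = packed record
      Index: Integer;
      UpdateType: TZRowUpdateType;
      BookmarkFlag: Byte;
      Columns: TByteArray;  // declared bound; not necessarily allocated
    end;
    PZRowBuffer = ^TZRowBuffer;

  const
    // Size of everything that precedes Columns.
    RowHeaderSize = SizeOf(TZRowBuffer) - SizeOf(TByteArray);

  var
    Buffer: PZRowBuffer;
    FRowSize: Integer;
  begin
    // Allocation by real row size: a row with 1000 bytes of column data
    // costs about 1 KB, no matter how large the declared bound is.
    FRowSize := RowHeaderSize + 1000;
    GetMem(Buffer, FRowSize);
    Buffer^.Columns[999] := 0;  // still inside the allocated block
    FreeMem(Buffer);

    // Allocation by record size: every row costs the full declared bound
    // (32 KB with TByteArray, 128 KB with the patched TColumnArray).
    GetMem(Buffer, SizeOf(TZRowBuffer));
    FreeMem(Buffer);
  end.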

Mark

Posted: 01.06.2007, 14:47
by btrewern
Mark,

Just tested this using Task Manager and some quite large query results, and it does seem to make quite a difference. I could easily use up all my swap space using

Code:

  TColumnArray = array[0..131071] of Byte;  // 131072 bytes = 128 KB
  TZRowBuffer = packed record
    Index: Integer;
    UpdateType: TZRowUpdateType;
    BookmarkFlag: Byte;
    Columns: TColumnArray;
  end;
but with

Code:

  TZRowBuffer = packed record
    Index: Integer;
    UpdateType: TZRowUpdateType;
    BookmarkFlag: Byte;
    Columns: TByteArray;
  end;
I had no problems at all. I'm wondering whether, for my usage, it might be worth using

Code:

  TColumnArray = array[0..16386] of Byte;
Regards,

Ben

Posted: 03.06.2007, 13:25
by mdaems
@Ben,

You're absolutely right. I tested using
'select * from mysql.help_relation a,mysql.help_relation b'
(a nice self-join to get a lot of records).
Just executing this brought my test program to its knees (out of memory).
So I ran a test with AllocMemSize and
'select * from mysql.help_relation a'
-> 79,900,688 bytes. Wow. I can't explain it, but this is a lot.
With the old declaration restored:
-> 664,524 bytes; the self-join took 32,739,148 bytes.
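
For anyone who wants to repeat the comparison: AllocMemSize is a global variable in Delphi's System unit that holds the total size of the currently allocated memory blocks, so reading it before and after opening the query is enough. A rough sketch, assuming a form with already configured components named ZConnection1 and ZQuery1 (hypothetical names):

Code:

  // Sketch only; ZQuery1 is a TZQuery already wired to a connection.
  procedure TForm1.MeasureFetchClick(Sender: TObject);
  var
    BytesBefore: Integer;
  begin
    BytesBefore := AllocMemSize;  // total allocated bytes before the fetch
    ZQuery1.SQL.Text := 'select * from mysql.help_relation a';
    ZQuery1.Open;                 // fetching fills the row cache
    ShowMessage(Format('Fetch allocated roughly %d bytes',
      [AllocMemSize - BytesBefore]));
  end;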

Conclusion: I'm rolling back this change immediately.

@xtra:
Thanks for the patch. It can be helpful for people who need datasets with a lot of columns, but we can't take it as the standard. I think my small (not very scientific, but clear) test proves this.

Rolled back in revision 258.