Ten times slower and ten times smaller


In class NetCdfVariables, I noticed this piece of code:

    switch (dims.Length)
    {
        case 1: chunk = 1000000; break;
        case 2: chunk = 1000; break;
        case 3: chunk = 100; break;
        default: chunk = 10; break;
    }
I made a 3-dimensional data file and a 4-dimensional one. In my test, I found that access to the 4-dimensional data was about ten times faster than to the 3-dimensional file.
Do you have any clue why?


dvoits wrote Aug 1, 2011 at 10:01 AM

The access speed depends on both the chunk sizes and the shape of your data (both when writing and reading).
We tried to find default chunk sizes suitable for the "general" case, so that they would work for both small and large data. Nevertheless, performance can still be affected by the chunk size selection.
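The interplay between chunk size and read pattern can be sketched numerically. Assuming, as the snippet above suggests, that the default chunk edge length is applied per dimension (100 per axis for 3-D, 10 per axis for 4-D), a minimal model counts how many chunks an origin-aligned read intersects and how much raw data must therefore be fetched. This is a hypothetical illustration of chunked-storage cost, not the library's actual I/O path:

```python
import math

def chunks_touched(read_shape, chunk_shape):
    # Number of chunks an origin-aligned read of `read_shape` intersects.
    return math.prod(math.ceil(r / c) for r, c in zip(read_shape, chunk_shape))

def elements_fetched(read_shape, chunk_shape):
    # Every touched chunk is read in full, so total elements fetched is
    # chunks touched times elements per chunk.
    return chunks_touched(read_shape, chunk_shape) * math.prod(chunk_shape)

# A 10000-element line read from a 3-D variable with 100x100x100 chunks:
three_d = elements_fetched((10000, 1, 1), (100, 100, 100))
# The same line from a 4-D variable with 10x10x10x10 chunks:
four_d = elements_fetched((10000, 1, 1, 1), (10, 10, 10, 10))
print(three_d, four_d, three_d // four_d)  # 100000000 10000000 10
```

Under these assumed defaults the 3-D read drags in 10x more raw data than the 4-D read, which would be consistent with the tenfold difference reported above: the smaller 4-D chunks waste far less bandwidth on a thin slice, even though more of them are touched.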

In the next release an interface will be provided to adjust chunk sizes explicitly.

chris_wen_11 wrote Aug 1, 2011 at 10:12 AM

Can you say when you are going to release the new version? I really like this project and follow it closely.

dvoits wrote Aug 1, 2011 at 10:29 AM

We expect to publish the new release in September.
