I have chunks of 256 blocks (16x16), where each block is 8 pixels in size. And each block holds tons of info about that block.
You will always run into limits with regard to memory and/or file I/O when processing lots of data. You have several options:
- Reduce the size required per block (this decreases both memory usage and the delay when reading & writing chunks)
- Speed up file I/O, for example by using Kryo, or by writing your own fast chunk save/load routines
- Increase available memory (e.g. using the JVM's `-Xmx<size>` parameter)
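For the first option, the biggest win is usually replacing one-object-per-block storage with packed primitive arrays. As a minimal sketch (the class and field names here are hypothetical, and I'm assuming your per-block data can be squeezed into a few bit fields):

```java
// Sketch: store per-block data packed into a primitive int array instead
// of allocating one object per block. All names here are illustrative.
public class Chunk {
    public static final int SIZE = 16; // 16x16 = 256 blocks per chunk

    // One int per block: bits 0-15 = block type, bits 16-23 = light level,
    // bits 24-31 left free for flags. This is far more compact than 256
    // separate block objects with headers and references.
    private final int[] blocks = new int[SIZE * SIZE];

    private static int index(int x, int y) {
        return y * SIZE + x;
    }

    public void set(int x, int y, int type, int light) {
        blocks[index(x, y)] = (type & 0xFFFF) | ((light & 0xFF) << 16);
    }

    public int getType(int x, int y) {
        return blocks[index(x, y)] & 0xFFFF;
    }

    public int getLight(int x, int y) {
        return (blocks[index(x, y)] >>> 16) & 0xFF;
    }
}
```

A flat array like this is also trivially serializable, which feeds directly into the second option.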
I'd personally also try to optimize things in this order, especially if you're storing 'tons' of data per block.
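To illustrate the "write your own fast save/load routines" option: plain buffered binary streams are already much faster than default Java serialization. A hedged sketch (assuming each block's data fits in one `int`; the file format and class name are made up for this example):

```java
import java.io.BufferedInputStream;
import java.io.BufferedOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;

// Sketch of hand-rolled chunk I/O: write the block array as raw ints
// through a buffered stream, avoiding object-serialization overhead.
public class ChunkIO {
    public static void save(int[] blocks, File file) throws IOException {
        try (DataOutputStream out = new DataOutputStream(
                new BufferedOutputStream(new FileOutputStream(file)))) {
            out.writeInt(blocks.length); // simple header: block count
            for (int b : blocks) {
                out.writeInt(b);
            }
        }
    }

    public static int[] load(File file) throws IOException {
        try (DataInputStream in = new DataInputStream(
                new BufferedInputStream(new FileInputStream(file)))) {
            int[] blocks = new int[in.readInt()];
            for (int i = 0; i < blocks.length; i++) {
                blocks[i] = in.readInt();
            }
            return blocks;
        }
    }
}
```

Kryo would buy you a similar (or better) result with less code if your block data is more complex than a primitive array.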