Ended up writing my own implementation, as the problem solved in the SPGL code is subtly different from my requirements.
(Cas is packing a set number of images into a variable number of 256x256 textures,
I, on the other hand, need to pack a set number of images into the smallest possible area.)
There was also a minor complication: in my datasets, the input rectangles could be superimposed (several sprites sharing the same pixels).
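One simple way to handle the superimposed case is to collapse sprites that reference the same source region down to a single rectangle before packing, then remap each sprite to the shared placement afterwards. This is just an illustrative sketch (the function and data shapes here are my own invention, not the actual code):

```python
# Hypothetical sketch: several sprites may share the same source pixels,
# so collapse duplicate regions to one packed rect and remember which
# sprite maps to which unique region.
def dedupe_regions(sprites):
    """sprites: list of (sprite_id, region) where region = (x, y, w, h).

    Returns the unique regions to hand to the packer, plus a
    sprite_id -> unique-region-index map for remapping afterwards."""
    index = {}    # region -> position in `unique`
    unique = []   # regions to actually pack
    mapping = {}  # sprite_id -> index into `unique`
    for sid, region in sprites:
        if region not in index:
            index[region] = len(unique)
            unique.append(region)
        mapping[sid] = index[region]
    return unique, mapping
```

After packing `unique`, every sprite looks up its placement via `mapping`, so shared pixels are only stored once on the texture page.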
Ended up with an acceptable solution: given a random dataset of rectangles with dimensions in the range [1 <= x <= 26, 1 <= y <= 26], it attains 91% packing efficiency on average.
With a real-life dataset the variance is a lot larger (typically between 80-100%), which is acceptable given the time constraint I had to write it in....1 day.
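For anyone curious what a bare-bones packer of this kind looks like, here's a sketch of a simple shelf heuristic with the efficiency measured the same way (occupied pixels over bounding area). To be clear, this is not my actual implementation, just a minimal illustration; the strip width of 256 and the random dataset are assumptions for the demo:

```python
# Minimal shelf-packing sketch: sort rects tallest-first, place them left to
# right on horizontal "shelves", start a new shelf when the current one is
# full, then report occupied-area / bounding-area efficiency.
import random

def pack_shelves(rects, strip_width):
    """rects: list of (w, h). Returns (placements, used_height),
    where each placement is (x, y, w, h)."""
    order = sorted(rects, key=lambda r: r[1], reverse=True)  # tallest first
    placements = []
    shelf_y = shelf_h = cursor_x = 0
    for w, h in order:
        if cursor_x + w > strip_width:   # current shelf is full: open a new one
            shelf_y += shelf_h
            cursor_x = shelf_h = 0
        placements.append((cursor_x, shelf_y, w, h))
        cursor_x += w
        shelf_h = max(shelf_h, h)        # shelf is as tall as its tallest rect
    return placements, shelf_y + shelf_h

def efficiency(rects, strip_width, used_height):
    """Occupied pixels as a percentage of the bounding strip area."""
    occupied = sum(w * h for w, h in rects)
    return 100.0 * occupied / (strip_width * used_height)

if __name__ == "__main__":
    random.seed(1)
    # Random dataset matching the dimension range mentioned above.
    rects = [(random.randint(1, 26), random.randint(1, 26)) for _ in range(200)]
    placed, height = pack_shelves(rects, 256)
    print(f"used height: {height}, efficiency: {efficiency(rects, 256, height):.1f}%")
```

A plain shelf packer like this wastes space above short rects on each shelf, which is exactly the kind of thing the modular structure mentioned below leaves room to improve on.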
Most importantly - it generates better texture pages than the artists usually manage. (One step closer to making the artists redundant.)
The modularity of the problem does leave the algorithm open to future improvements too - so I'm happy.