pjt33, care to share how you're compressing the dictionary?
It relies to some extent on the fact that all the words are the same length. Then it's a case of manipulating the dictionary so that kzip can do a good job with it.
Step 1. Split into digraphs and group by head:
e.g. ab->[ed,et,le,ly,ut], ac->[ed,es,he,hy,id,me,ne,re,ts], ad->[ds,ze], ae->[on,ry], af->[ar], ...
Step 2. Sort by number of tails, ascending (so all heads like "af", which have only one completion, come first, etc).
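A minimal Java sketch of steps 1 and 2, using a tiny hypothetical sample of four-letter words (the real code runs over the whole dictionary):

```java
import java.util.*;

public class Digraphs {
    // Steps 1-2: group word tails by head digraph, then order heads by tail count.
    static List<Map.Entry<String, List<String>>> group(String[] words) {
        Map<String, List<String>> groups = new TreeMap<>();
        for (String w : words)
            groups.computeIfAbsent(w.substring(0, 2), k -> new ArrayList<>())
                  .add(w.substring(2));
        List<Map.Entry<String, List<String>>> entries = new ArrayList<>(groups.entrySet());
        entries.sort(Comparator.comparingInt(e -> e.getValue().size()));
        return entries;
    }

    public static void main(String[] args) {
        // Hypothetical word list; "abed".."abut" share the head "ab".
        String[] words = {"abed", "abet", "able", "ably", "abut", "afar", "ajar"};
        for (Map.Entry<String, List<String>> e : group(words))
            System.out.println(e.getKey() + " -> " + e.getValue());
    }
}
```

Running this prints the one-completion heads "af" and "aj" before the five-completion head "ab".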
Step 3. Each head+tails group can be encoded as the size of the set, then the head, then the tails. E.g. 1afar1ajar...5abedetlelyut... However, by difference-encoding the sizes you can make most of them 0 or 1: 1afar0ajar... Since 0 and 1 are already pretty frequent bytes in the .pack file, the difference-encoded lengths don't need any further treatment.
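A sketch of step 3's difference encoding, fed the toy groups from the example above (the group data here is illustrative):

```java
import java.util.*;

public class DiffEncode {
    // Encode sorted (head -> tails) groups, difference-encoding the set sizes.
    static String encode(List<Map.Entry<String, List<String>>> groups) {
        StringBuilder sb = new StringBuilder();
        int prev = 0;
        for (Map.Entry<String, List<String>> e : groups) {
            int size = e.getValue().size();
            sb.append(size - prev).append(e.getKey()); // delta, then head
            for (String tail : e.getValue()) sb.append(tail);
            prev = size;
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        List<Map.Entry<String, List<String>>> groups = List.of(
            Map.entry("af", List.of("ar")),
            Map.entry("aj", List.of("ar")),
            Map.entry("ab", List.of("ed", "et", "le", "ly", "ut")));
        System.out.println(encode(groups)); // 1afar0ajar4abedetlelyut
    }
}
```

Note the "ab" group's size is emitted as 4 (5 minus the previous size of 1) rather than the absolute 5.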
Step 4. Count letter frequencies. Count bytecodes 0-25 in the .pack file produced with an empty string in place of the data. Assign letters accordingly (i.e. 's' -> 21, which is the most common byte - not surprising, because it's the opcode for iload; 'e' -> 0; etc).
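The assignment in step 4 is a rank-matching: the most frequent letter gets the most frequent byte value, and so on. A sketch with hypothetical frequency counts (the real counts come from the dictionary and the baseline .pack file):

```java
import java.util.*;

public class AssignCodes {
    // Map each letter (0='a'..25='z') to a byte value 0-25 so that letter
    // frequency ranks match byte-value frequency ranks.
    static int[] assign(int[] letterFreq, int[] byteFreq) {
        Integer[] letters = rankDesc(letterFreq);
        Integer[] bytes = rankDesc(byteFreq);
        int[] code = new int[26];
        for (int i = 0; i < 26; i++) code[letters[i]] = bytes[i];
        return code;
    }

    // Indices 0-25 ordered by descending frequency (stable for ties).
    static Integer[] rankDesc(int[] freq) {
        Integer[] idx = new Integer[26];
        for (int i = 0; i < 26; i++) idx[i] = i;
        Arrays.sort(idx, (a, b) -> freq[b] - freq[a]);
        return idx;
    }

    public static void main(String[] args) {
        int[] letterFreq = new int[26], byteFreq = new int[26];
        // Hypothetical counts: 's' the most common letter, byte 21 the most common byte.
        letterFreq['s' - 'a'] = 100; letterFreq['e' - 'a'] = 90;
        byteFreq[21] = 500; byteFreq[0] = 400;
        int[] code = assign(letterFreq, byteFreq);
        System.out.println("s -> " + code['s' - 'a']); // s -> 21
    }
}
```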
Step 5. Emit the alphabet* ordered so that a single charAt inverts the mapping set up in step 4, then emit the string from step 3 encoded as specified in step 4.
* Actually 1-26 where 1 corresponds to 'a' and 26 to 'z'.
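On the decoding side, the 1-26 scheme from the footnote means one charAt suffices to invert the mapping. A sketch with a hypothetical alphabet ordering (the real ordering falls out of the byte counts in step 4):

```java
public class Decode {
    public static void main(String[] args) {
        // Hypothetical frequency-ordered alphabet: the letter encoded as
        // value b (1-26) sits at position b - 1.
        String alphabet = "esialtrnoducpmghybfkwvzxqj";
        int[] values = {1, 3, 4, 2};          // hypothetical encoded values
        StringBuilder sb = new StringBuilder();
        for (int v : values) sb.append(alphabet.charAt(v - 1));
        System.out.println(sb);
    }
}
```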