I have an app that uses the File.createTempFile method, which has always worked for me, but it refuses to work on Debian: every attempt to use a file created that way fails with "permission denied".
[Presumably this is either a bug in the JVM (not working with the OS as it should) or in Debian (being over-zealous, with some smart-arse making temporary files not work the way they're supposed to in a standard OS, possibly breaking POSIX compliance or something like that?), but I know it works fine on both Windows and other Linux distros. EDIT: ...or some other, completely unrelated app that I happened to run previously from the same BASH session, which set the umask to something restrictive and didn't restore it... although I have NO idea why this breaks the /tmp directory!]
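For what it's worth, the umask theory is easy to test: create a file the same way the app does and print the permissions it actually received. A minimal diagnostic sketch, assuming Java 7+'s java.nio.file is available (the class name is mine):

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;

public class TempFilePermCheck {
    public static void main(String[] args) throws IOException {
        // Create a temp file the same way the app does, then inspect
        // what permissions it actually ended up with.
        File f = File.createTempFile("permcheck", ".tmp");
        System.out.println("Created: " + f.getAbsolutePath());
        System.out.println("Permissions: "
                + Files.getPosixFilePermissions(f.toPath()));
        System.out.println("canRead=" + f.canRead()
                + " canWrite=" + f.canWrite());
        f.delete();
    }
}
```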
Now I'm stuck trying to think of a workaround: how do you get temporary files to work when the OS gives "permission denied" every time you open one? This is for an app that creates thousands of files on every run, as a form of primitive file-based memory mapping.
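One workaround I'm considering is to stop relying on /tmp entirely and have the app own its scratch directory, forcing the permissions after creation (chmod isn't filtered through the umask, so this should hold even if the umask theory turns out to be right). A rough sketch; names like OwnedTempFiles and .myapp-scratch are placeholders, not anything from my code:

```java
import java.io.IOException;
import java.nio.file.*;
import java.nio.file.attribute.PosixFilePermission;
import java.nio.file.attribute.PosixFilePermissions;
import java.util.Set;

public final class OwnedTempFiles {
    // App-private scratch directory; the location is arbitrary.
    private static final Path SCRATCH =
            Paths.get(System.getProperty("user.home"), ".myapp-scratch");

    public static Path createTempFile(String prefix) throws IOException {
        Files.createDirectories(SCRATCH);
        Path f = Files.createTempFile(SCRATCH, prefix, ".tmp");
        // chmod is not subject to the umask, so this guarantees the
        // process can read/write its own file regardless of whatever
        // the shell's umask was set to.
        Set<PosixFilePermission> rw =
                PosixFilePermissions.fromString("rw-------");
        Files.setPosixFilePermissions(f, rw);
        return f;
    }
}
```

A side benefit: cleanup between runs becomes a simple recursive delete of the scratch directory.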
The only other thing I can think of is to rewrite it from the ground up to use NIO memory-mapping - but then, we already *know* that that part of NIO doesn't work properly at the moment, so... ARGH!
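For reference, here's roughly what the NIO route would look like. This is only a sketch (SectionMapper is a made-up name), and it runs straight into the well-known caveat mentioned above: a MappedByteBuffer can't be explicitly unmapped, so the file handle is only released when the buffer is garbage-collected, which is painful when you're churning through thousands of files.

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class SectionMapper {
    /** Map one sub-section of the big input file, read-only. */
    public static MappedByteBuffer mapSection(Path file, long offset, long length)
            throws IOException {
        try (FileChannel ch = FileChannel.open(file, StandardOpenOption.READ)) {
            // The mapping stays valid after the channel is closed, but
            // there is no explicit unmap: the underlying file handle is
            // held until the MappedByteBuffer is garbage-collected.
            return ch.map(FileChannel.MapMode.READ_ONLY, offset, length);
        }
    }
}
```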
In case someone can think of an easier refactoring (or better design), the basic process is this:
1. recursively split a very, very large (hundreds of megabytes) input file into smaller sub-sections using a computationally intensive algorithm (i.e. it's not something plain and simple like a sort); there's a structural sketch of this step after the list
2. use those sub-sections as source data for running a variety of algorithms. Most algorithms "cherry-pick" a small subset of the sub-sections to work on (this gives us a fast, effective, simplistic form of virtual-memory management). Some algorithms work on all sub-sections, but use the tree-structured grouping to deal with them in large chunks. Various magic goes on to order algorithm execution so as to maximise use of any given section while it's in memory.
3. delete everything, and return to 1, using a DIFFERENT recursive split algorithm and a different set of algorithms in step 2 (many will be the same as ones used previously, but many aren't).
4. ...continues until all the splitting algorithms are complete...
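To make the shape of step 1 concrete, here's a rough structural sketch of a splitter. RecursiveSplitter and splitOnce are invented names, the actual splitting algorithms (the computationally intensive part) aren't shown, and the byte[] sections are a simplification; the real thing would stream rather than hold hundreds of megabytes in memory:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

public abstract class RecursiveSplitter {
    private final Path workDir;        // where sub-section files go
    private final long minSectionSize; // stop recursing below this

    protected RecursiveSplitter(Path workDir, long minSectionSize) {
        this.workDir = workDir;
        this.minSectionSize = minSectionSize;
    }

    /** The expensive, algorithm-specific part: carve one section into children. */
    protected abstract List<byte[]> splitOnce(byte[] section) throws IOException;

    /** Recursively split, writing each leaf section to its own file. */
    public List<Path> split(byte[] section, String label) throws IOException {
        List<Path> leaves = new ArrayList<>();
        if (section.length <= minSectionSize) {
            Path f = workDir.resolve(label + ".section");
            Files.write(f, section);
            leaves.add(f);
        } else {
            int i = 0;
            for (byte[] child : splitOnce(section)) {
                // Encode the tree position in the file name so the
                // processing algorithms can group sections by subtree.
                leaves.addAll(split(child, label + "-" + i++));
            }
        }
        return leaves;
    }
}
```

The point is that every temp-file creation funnels through one place (here, Files.write into workDir), so whatever fix solves the permission problem only has to be applied once.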
With 2 or 3 splitting algorithms and a dozen or so processing algorithms this works fine and fast. It needs to scale to 20-odd splitters and 30-odd processors, but only over time (as I get around to adding the extra algorithms).