Small necro here... I took a week off around the new year and didn't get back to this until now.
@nsigma Why yes... I'll check out your efforts more soon! I'll probably take a look at your gstreamer integration, which I'll likely use for initial testing of my video engine efforts with Vulkan on the desktop. Glad to hear that someone capable is dealing with the Java bindings for Jack!
No issue with that! You seemed to be saying that running audio in a separate process was beneficial / the only way to make best use of multi-core, without really saying why processes rather than threads. It's a more complicated way of working which can offer benefits in certain cases, sure, but I'm not sure I'd advocate it as the go-to solution in all cases. I am genuinely interested in your reasoning.
I was just pointing out that with SuperCollider you get audio in a separate process "for free". Indeed a bit more complicated per se, but not really bad. In moving my library / framework / engine work toward being highly multithreaded, I generally favor protocols over standard APIs.
In regard to the OP's desire for realistic audio: using SuperCollider, it would be darn neat to work with Ambisonics, particularly for "I also of course could adjust this pan to match my head movement." One of the fantastic properties of Ambisonics is that while the audio is in the encoded state, all one has to do is multiply by the rotation matrix of the player's head and this automatically rotates the entire audio scene. No need to track individual sounds and move them around. Ambisonics can also be decoded to binaural audio or discrete speaker arrangements.
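To make the rotation property concrete, here is a minimal sketch for first-order (B-format) Ambisonics. The channel layout (W, X, Y, Z) and the plain 3x3 matrix are my assumptions for illustration, not tied to any particular library: W is the omnidirectional component and is unaffected by rotation, while X, Y, Z transform together as an ordinary vector, so rotating the whole scene per sample frame is just one matrix multiply regardless of how many sources were encoded.

```java
// Minimal sketch: rotating a first-order (B-format) Ambisonic sample frame.
// Channel order [W, X, Y, Z] and the raw 3x3 matrix are illustrative
// assumptions; a real engine would use its own buffer and matrix types.
public class AmbisonicRotate {

    // Rotate one sample frame by the listener's head rotation matrix r.
    // W (index 0) is omnidirectional and passes through unchanged;
    // X, Y, Z rotate as a 3-vector.
    static double[] rotateFrame(double[] wxyz, double[][] r) {
        double x = wxyz[1], y = wxyz[2], z = wxyz[3];
        return new double[] {
            wxyz[0],                                        // W unchanged
            r[0][0] * x + r[0][1] * y + r[0][2] * z,        // X'
            r[1][0] * x + r[1][1] * y + r[1][2] * z,        // Y'
            r[2][0] * x + r[2][1] * y + r[2][2] * z         // Z'
        };
    }

    public static void main(String[] args) {
        // A 90-degree yaw about the Z (up) axis.
        double[][] yaw90 = {
            { 0, -1, 0 },
            { 1,  0, 0 },
            { 0,  0, 1 }
        };
        // A frame with energy purely on the X axis ends up on the Y axis;
        // the entire encoded scene turns with one multiply.
        double[] out = rotateFrame(new double[] { 1, 1, 0, 0 }, yaw90);
        System.out.printf("W=%.1f X=%.1f Y=%.1f Z=%.1f%n",
                out[0], out[1], out[2], out[3]);
    }
}
```

The key point is that this cost is fixed per frame: whether the scene holds one source or a hundred, head tracking stays a single matrix multiply on the encoded channels.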
It actually was neat to read that OpenAL Soft uses Ambisonics internally to provide better results, but this is an internal implementation detail. A quick review of the HRTF example code shows a sound being moved around manually. I'm not sure whether you can provide a rotation matrix as described above to apply to the frame of reference, which would manipulate the internal Ambisonic implementation and rotate the entire audio scene. It would be neat if that is possible with OpenAL Soft.
>Would you agree libraries like nanomsg are pointless given that scaling is done through sockets and multiple processes coordinating?
Ahh... that was just a rhetorical question, as I didn't understand where you were coming from with the multi-thread / process angle.
Original poster! Have you made any progress?