OSC is something I plan to look into in more depth. It ranks high on the to-study queue. All I know about it is that, as a spec, it's transport independent.
Indeed, a transport-independent messaging protocol. Keep in mind that SuperCollider doesn't follow the OSC spec exactly, but its messaging is essentially similar. You could send MIDI over OSC, for instance.
One consideration: making synths that implement OSC or MIDI with any pretense to completeness is out of scope.
Essentially you'd want your synth to know nothing about OSC or MIDI. You'd marshal OSC or MIDI data into whatever internal representation the synth understands. Ideally this internal representation is event driven.
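To make that concrete, here's a minimal Java sketch of the idea: a thin adapter decodes raw MIDI bytes into a protocol-agnostic event, and the synth only ever sees the event. All names here (NoteOnEvent, SynthEventListener, MidiAdapter) are hypothetical, not from any real library.

```java
// Sketch: the synth knows nothing about MIDI or OSC; an adapter
// translates raw protocol bytes into internal events.
public class MidiAdapter {

    /** Internal, protocol-agnostic event the synth understands. */
    public record NoteOnEvent(int channel, double frequencyHz, double velocity) {}

    /** The synth implements this; it never sees MIDI or OSC data. */
    public interface SynthEventListener {
        void onNoteOn(NoteOnEvent e);
    }

    private final SynthEventListener synth;

    public MidiAdapter(SynthEventListener synth) {
        this.synth = synth;
    }

    /** Equal-tempered MIDI note number to Hz (A4 = note 69 = 440 Hz). */
    static double midiNoteToHz(int note) {
        return 440.0 * Math.pow(2.0, (note - 69) / 12.0);
    }

    /** Translate a raw 3-byte MIDI message; only note-on is handled in this sketch. */
    public void handleMidiBytes(byte status, byte data1, byte data2) {
        int command = status & 0xF0;
        if (command == 0x90 && data2 != 0) {         // note-on with nonzero velocity
            int channel = status & 0x0F;
            synth.onNoteOn(new NoteOnEvent(
                channel,
                midiNoteToHz(data1),
                data2 / 127.0));                     // normalize velocity to 0..1
        }
        // An OSC adapter would produce the same NoteOnEvent type,
        // so the synth stays identical either way.
    }
}
```

An OSC front end would just be a second adapter feeding the same listener, which is the whole point of the split.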
Theoretically, inverse square makes more sense. Ears and art don't always agree with physics, so I like to verify this sort of thing. (Who should I believe, experienced audio engineers or my own lying ears?)
In this case, for spherical wavefronts, physics matches perception, i.e. a sound source with no walls / reflections adding to the direct sound. You may potentially futz with the coefficients to change things somewhat, but still follow an inverse-square relationship.
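A small sketch of what that attenuation looks like in code, assuming a free-field source with spherical spreading and no reflections. One wrinkle worth noting: intensity falls as 1/r², so the pressure amplitude a gain multiplier scales falls as 1/r, which is the familiar -6 dB per doubling of distance. Names here are illustrative.

```java
// Inverse-square distance attenuation for a free-field point source.
// Intensity ~ 1/d^2, so amplitude gain ~ 1/d (gain^2 tracks intensity).
public class DistanceGain {

    /** Amplitude gain for a source at `distance`, unity at `refDistance`. */
    static double amplitudeGain(double refDistance, double distance) {
        double d = Math.max(distance, refDistance); // clamp so gain never exceeds 1 near the source
        return refDistance / d;
    }

    /** The same attenuation expressed in decibels. */
    static double gainDb(double refDistance, double distance) {
        return 20.0 * Math.log10(amplitudeGain(refDistance, distance));
    }
}
```

Doubling the distance from the reference gives a gain of 0.5, i.e. about -6.02 dB; futzing with the coefficients (e.g. a rolloff factor on the exponent) changes the curve while keeping the same basic relationship.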
No matrices involved, beyond what JavaFX manages behind the scenes via the PerspectiveCamera.
Matrices are involved under everything in JavaFX. Every Node, and PerspectiveCamera inherits from Node, has a transform. The rotation of the PerspectiveCamera is all you need for ambisonics.
Having not used JavaFX myself, I'd guess you'd call: https://docs.oracle.com/javase/8/javafx/api/javafx/scene/Node.html#getLocalToSceneTransform--
And that transform is what you'd send to SuperCollider or what have you to manipulate ambisonic rotation.
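For illustration, here's a minimal, JavaFX-free sketch of the rotation itself. In first-order ambisonics (B-format), W is omnidirectional and untouched by rotation, while the first-order components (X, Y, Z) transform like an ordinary 3-D vector. In a real app the yaw angle would be extracted from the camera's localToScene transform; here it's just a parameter, and the class name and channel ordering are assumptions of the sketch.

```java
// Rotating a first-order ambisonic (B-format) field [W, X, Y, Z]
// to follow a listener/camera yaw about the vertical axis.
public class FoaRotator {

    /** Rotate [W, X, Y, Z] by `yawRadians` about the vertical (Z) axis. */
    static double[] rotateYaw(double[] wxyz, double yawRadians) {
        double c = Math.cos(yawRadians);
        double s = Math.sin(yawRadians);
        return new double[] {
            wxyz[0],                    // W: omnidirectional, rotation-invariant
            c * wxyz[1] - s * wxyz[2],  // X' = X cos - Y sin
            s * wxyz[1] + c * wxyz[2],  // Y' = X sin + Y cos
            wxyz[3]                     // Z: unchanged by a pure yaw
        };
    }
}
```

In practice you'd more likely ship the angles (or the whole matrix) over OSC and let the SuperCollider side do this, but the math is the same either way.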
Maybe in a few days I'll get to the PD, SuperCollider or OSC research. But it seems like a good thing to build stuff and learn from the experience. No strategy is always right.
Like anything, it may take some time to explore. It definitely doesn't hurt to know what has come before, and depending on goals one might find that an existing solution is solid enough; the task then becomes filling in the gaps. Way back when, I thought I'd have to spend a bunch of time creating a DSP engine, then SC3 dropped and the split audio DSP server architecture fit my needs.
I fully understand the thinking of independents who don't want to spend anything.
Developers are fickle, considerably more fickle than consumers in many respects. You'll find plenty of developers, independent or otherwise, who will not be willing to pay for X developer tool. Rather than single tools, developers will pay for platforms / ecosystems. The trick then becomes providing more value than one captures, creating a symbiosis that brings in more developers while extracting enough funds, preferably on an ongoing basis (the capture angle), to make it all viable.

As mentioned, a free platform with some services kept behind a paywall fits that pattern. Lumberyard, with its integration with AWS and other for-pay services, is a big example: tools for free, but hey, isn't it easy to integrate with our other for-pay products. As things go, an interactive audio engine for games is not an easy platform to deliver as a service.
Money, money, money. Whatever.
At the end of the day rent has to be paid, food needs to get on the table, and health of all involved needs to be maintained.
Otherwise, no charge. It is a crazy business model, perhaps, trying to maximize the chance of participating in a black swan rather than up-front income.
IMHO that relies on the other party being honest and keeping things on the up and up. Collecting anything, even from a moderate success, depends on the other party's ethics, which is a risk. If it were a truly black swan event, it'd be easy for the other party to simply not pay and make it a legal situation. Anything over what could be collected in small claims court could get locked up in a costly legal battle.
I'm not saying don't take this approach; just consider that other parties may not play by any agreed-upon rules, sad as that may be.