I'm making a major change to the API.
I used to process audio in little buffers of 32 samples. The reasoning was that since I eventually need to send a buffer of a given size to the sound card to play anyway, I might as well create those buffers up front, mix and filter them along the way, and finally send the same buffer (converted to bytes) to the sound card.
Every 32 samples (or whatever the buffer size was set to), the controllers (volume settings of tone generators, envelopes etc.) were updated, adding a tiny bit of 'zipper noise' in the process.
Bad Idea :-/
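A minimal sketch of where that zipper noise comes from (the class and method names here are hypothetical, not from the actual API): if a gain ramp is only re-read at block boundaries, the gain stays flat for 32 samples and then jumps, producing a stair-stepped curve instead of a smooth one.

```java
// Hypothetical illustration: a linear volume ramp sampled at block rate
// vs. at sample rate. The block-rate version is the 'zipper'.
public class ZipperDemo {
    static final int BLOCK = 32;

    // Gain ramps linearly from 0.0 to 1.0 over `total` samples,
    // but is only re-evaluated at the start of each 32-sample block.
    public static float blockGain(int sampleIndex, int total) {
        int blockStart = (sampleIndex / BLOCK) * BLOCK;
        return (float) blockStart / total;
    }

    // Per-sample version: the ramp moves on every sample, no steps.
    public static float sampleGain(int sampleIndex, int total) {
        return (float) sampleIndex / total;
    }

    public static void main(String[] args) {
        // Within one block the block-rate gain is frozen (the 'step'),
        // while the per-sample gain keeps moving.
        System.out.println(blockGain(0, 128) + " " + blockGain(31, 128));
        System.out.println(sampleGain(0, 128) + " " + sampleGain(31, 128));
    }
}
```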
In the process I created unneeded overhead, because of all the arrays everywhere. Java's bounds checks do take a performance hit (on the client VM, anyway), and the need to copy arrays in certain places (in panning, for example, which has to split a mono signal into stereo) doesn't help either.
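To show the kind of overhead I mean, here's a rough sketch (hypothetical names, not the real API) of an array-based pan: splitting one mono block into stereo forces two fresh arrays, and every element access gets bounds-checked.

```java
// Hypothetical sketch of the old array-based design: panning a mono block
// allocates two new arrays per call, plus a loop full of bounds checks.
public class ArrayPan {
    // pos is the pan position, 0.0 = hard left, 1.0 = hard right
    // (simple linear pan for illustration).
    public static float[][] pan(float[] mono, float pos) {
        float[] left = new float[mono.length];
        float[] right = new float[mono.length];
        for (int i = 0; i < mono.length; i++) {
            left[i] = mono[i] * (1f - pos);
            right[i] = mono[i] * pos;
        }
        return new float[][] { left, right };
    }
}
```

With per-sample processing the same operation is just two multiplications per sample, with no allocation at all.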
The API also became more complex than needed, because I had to have AudioInput/AudioOutput interfaces (which write arrays) and ControlInput/ControlOutput interfaces (which write just a single float), and of course the two weren't compatible with each other.
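Roughly, the split looked like this (a sketch with assumed signatures, not the actual interfaces): one side traffics in whole blocks, the other in single values, so an envelope output could never be plugged into an audio input.

```java
// Hypothetical sketch of the old split: block-based audio ports and
// value-based control ports with incompatible signatures.
interface AudioOutput {
    void read(float[] buffer); // fills a whole block of samples
}

interface ControlOutput {
    float getValue(); // one control value per block
}
```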
So, I got rid of all those little arrays and now only have arrays where they're absolutely needed (sending samples to the sound card, getting sound from the sound card, and so on).
And I can get rid of the control interfaces, because there's now just one type of signal that can be used for both audio and controllers, which means more flexibility in the connections you can make.
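A minimal sketch of what such a unified design can look like (again, hypothetical names): everything is just a per-sample float source, so an envelope, an LFO or another oscillator can all drive the same input.

```java
// Hypothetical sketch of a unified per-sample signal: audio and control
// share one interface, so any output can feed any input.
interface Signal {
    float next(); // produce the next sample (audio or control alike)
}

class Gain implements Signal {
    private final Signal input;
    private final Signal amount; // could be an envelope, an LFO, a constant...

    Gain(Signal input, Signal amount) {
        this.input = input;
        this.amount = amount;
    }

    public float next() {
        return input.next() * amount.next();
    }
}
```

Because `Signal` has a single method, a constant control value is just a lambda like `() -> 0.5f`, and the controller is re-evaluated on every sample instead of once per block.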
So in short, the score is:
+ improved performance in audio processing
+ simpler API
+ more flexible API
+ no zipper noise = better audio quality
- controllers like envelopes are now updated 32 times as often, which causes a small performance hit. The improved sound quality, better API and faster audio processing easily make up for that.