Java-Gaming.org
manipulating SourceDataLine  (Read 7052 times)
Offline DayTripperID
Senior Devvie | Medals: 8 | Projects: 1 | Exp: 1-3 months
Living is good!
« Posted 2016-09-13 00:30:40 »

Hello all,

I have some songs I am playing through a SourceDataLine, and I was wondering whether there is a way to start playback from a specific point in the song. One song has a couple of bars of intro before going into the main part, and I want to play the whole song once, then loop starting just after the intro instead of playing the entire intro again. Here is the class I'm using, for reference. Sorry for the wall of code! Also, any constructive criticism of my code is welcome, as I am new to Java's sound API and coded this as a first effort, referencing members of this site, the javadocs, and stackoverflow.com.

package com.noah.breakit.assets;

import java.net.URL;

import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.DataLine;
import javax.sound.sampled.FloatControl;
import javax.sound.sampled.SourceDataLine;

import com.noah.breakit.util.Util;

public class Song {

   // music credit to sketchylogic
   public static final Song titlesong = new Song("songs/titlesong.wav");
   public static final Song playfieldsong = new Song("songs/playfieldsong.wav");
   public static final Song briefingsong = new Song("songs/briefingsong.wav");
   public static final Song gameoversong = new Song("songs/gameoversong.wav");

   private static boolean playing;
   private boolean looping;
   private boolean killThread;

   private URL url;

   private AudioInputStream ais;
   private AudioFormat baseFormat;
   private AudioFormat decodeFormat;
   private DataLine.Info info;
   private SourceDataLine sdl;
   private FloatControl gainControl;
   
   private String name;

   private Song(String filename) {
      name = filename;
     
      try {
         url = this.getClass().getClassLoader().getResource(filename);
      } catch (Exception e) {
         e.printStackTrace();
      }
   }

   public synchronized void loopSong() {      
      SoundThreadPool.execute(new Runnable() {
         public void run() {
            while (playing){
               System.out.println(name + " waiting...");
            }//wait for any other song threads to finish executing...
           
            playing = true;
            looping = true;
           
            while (looping)
               play();
            playing = false;
         }
      });
   }

   public synchronized void playSong() {
      playing = true;
      SoundThreadPool.execute(new Runnable() {
         public void run() {
            play();
            playing = false;
         }
      });
   }

   private void play() {
      try {

         ais = AudioSystem.getAudioInputStream(url);

         baseFormat = ais.getFormat();
         decodeFormat = new AudioFormat(AudioFormat.Encoding.PCM_SIGNED, baseFormat.getSampleRate(), 16,
               baseFormat.getChannels(), baseFormat.getChannels() * 2, baseFormat.getSampleRate(), false);

         info = new DataLine.Info(SourceDataLine.class, decodeFormat);

         sdl = (SourceDataLine) AudioSystem.getLine(info);
         sdl.open();
         gainControl = (FloatControl) sdl.getControl(FloatControl.Type.MASTER_GAIN);

         sdl.start();
         int nBytesRead = 0;
         byte[] data = new byte[sdl.getBufferSize()];
         int offset;
         while ((nBytesRead = ais.read(data, 0, data.length)) >= 0) {
            offset = 0;
            System.out.println(name + " reading...");
            while (offset < nBytesRead){
               System.out.println(name + " writing...");
               offset += sdl.write(data, offset, nBytesRead - offset); // continue from offset so a partial write isn't repeated from 0
            }
            if(killThread){
               System.out.println(name + " killing...");
               break;
            }
            System.out.println(name + " reading...");
         }
         
         System.out.println(name + " draining, stopping, closing...");
         
         sdl.drain();
         sdl.stop();
         sdl.close();
         
         System.out.println(name + " drain, stop, close complete!");
         
         if(killThread){
            looping = false;
            killThread = false;
         }
      } catch (Exception e) {
         e.printStackTrace();
      }
   }

   public void adjustGain(float gain) {
      if (gainControl == null) return;
      float value = Util.clamp((gainControl.getValue() + gain), gainControl.getMinimum(), gainControl.getMaximum());
      gainControl.setValue(value);
   }

   public void setGain(float gain) {
      gainControl.setValue(Util.clamp(gain, gainControl.getMinimum(), gainControl.getMaximum()));
   }

   public boolean atMin() {
      return gainControl.getValue() == gainControl.getMinimum();
   }

   public boolean atMax() {
      return gainControl.getValue() == gainControl.getMaximum();
   }

   public boolean isPlaying() {
      return playing;
   }

   public boolean fadeToBlack() {
      adjustGain(-0.4f);
      if (atMin()){
         killThread = true;
         System.out.println(name + " killThread set to true...");
      }
      return atMin();
   }
}

Offline philfrei
« Reply #1 - Posted 2016-09-13 01:37:25 »

When reading a file via AudioInputStream, I think one has to pretty much start at the beginning and go until the end, or until quitting, whichever comes first.

It is possible to read and throw away input data until you get to the desired starting point. You'd have to count elapsed sound frames in order to know when to switch over to actually streaming data to the SourceDataLine instead of throwing it away.

Another thing is to just take the cue itself and edit it down to exactly where you want to start it. I use Audacity for this sort of thing. If you don't intend to use the first few seconds, clipping off the data will reduce the size of the file which is also a good thing.

Since you want to repeat the cue, you could either append the repeat, again using Audacity, or programmatically put in place a LineListener to determine when the cue ends and use that notification to start another iteration.

Simplest, though, if there is enough RAM to hold the entire cue, would be to go back to making the cue a Clip. Clips allow the programmer to set the "playback head" to any starting point as well as allowing looping.
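For illustration, here is a minimal sketch of the Clip approach philfrei describes. The class and helper names are mine, not from the Sound API; only getClip, open, setLoopPoints, and loop are real API calls.

```java
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.Clip;

public class ClipLoopSketch {

    // Convert a time offset into a frame index for setLoopPoints.
    static int secondsToFrames(double seconds, float frameRate) {
        return (int) Math.round(seconds * frameRate);
    }

    // Play the whole cue once, then keep looping from just after the intro.
    static void playWithIntro(AudioInputStream ais, double introSeconds) throws Exception {
        Clip clip = AudioSystem.getClip();
        clip.open(ais); // the entire cue is loaded into RAM
        int loopStart = secondsToFrames(introSeconds, ais.getFormat().getFrameRate());
        clip.setLoopPoints(loopStart, -1);  // -1 means "loop back from the last frame"
        clip.loop(Clip.LOOP_CONTINUOUSLY);  // first pass plays the intro, later passes skip it
    }
}
```

At 44100 frames per second, for example, a 2-second intro puts the loop start at frame 88200.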

music and music apps: http://adonax.com
Offline ndnwarrior15
« Reply #2 - Posted 2016-09-13 04:00:12 »

If this were my project I would just have two separate audio files, one with the intro and the other without. Then just start looping the second after the first has finished.

Either that or, like philfrei suggested, use a Clip.
Offline Icecore
« Reply #3 - Posted 2016-09-13 10:50:24 »

SourceDataLine works like a data buffer:
https://docs.oracle.com/javase/7/docs/api/javax/sound/sampled/SourceDataLine.html
Its size depends on the AudioFormat: frame size * channels corresponds to a playback time (about a second, or some milliseconds; I don't remember for sure).

So you need to load the wav file and skip past the header data (the header data is needed for creating the AudioFormat); then you can send any data, in any order you want, to the SourceDataLine.

One thing to remember: don't break this loop; it's important that it stays like this (because of multithreading):

while (offset < nBytesRead){
   offset += sdl.write(data, offset, nBytesRead - offset);
}

Update: I reread the documentation for SourceDataLine.write and remembered that it blocks the thread until all the requested data has been written, so in theory the while loop can be skipped.

Last known State: Reassembled in Cyberspace
End Transmission....
..
.
Journey began Now)
Offline DayTripperID
« Reply #4 - Posted 2016-09-13 13:25:47 »

Well, I tried separating the intro from the body and playing the intro, then the body, but the problem I'm having with this approach is that they are separate threads, and syncing them correctly seems impossible, especially with that kind of precision. In fact, when I tried it, they just played on top of each other. I used the same class as posted previously, but with this extra method:

public synchronized void playIntroLoopBody(Song body) {
      playing = true;
      SoundThreadPool.execute(new Runnable() {
         public void run() {
            while (playing) {
               System.out.println(name + " waiting...");
            } // wait for any other song threads to finish executing...

            play();
            playing = false;
         }
      });
      body.loopSong();
   }


So, since boolean playing is static, the new thread is SUPPOSED to spinlock and wait for the other thread to finish, but it totally didn't work! They just overlapped each other and it sounded really bad. Thread concurrency is really advanced, so if anybody knows a better way to make one thread wait for the other, and play seamlessly, I'm all ears. I'm doubtful, though, since you have to drain the line, stop it, close it, then wait for the thread to finish executing, then wait for the thread to be removed from the thread pool.

Other than that, I think using a Clip would be better for something like this, since at least it would all be on the same thread.

Also, I would be interested to know if anybody has experimented with MIDI playback, since that is probably the best format for tight looping and controlling the flow of a song. Thanks as always for your help, guys!

Offline DayTripperID
« Reply #5 - Posted 2016-09-13 14:18:37 »

Just a quick fix: I placed a method call in the wrong scope. After fixing, the intro and the body no longer play on top of each other, but it is still seamy. I'll play around with the Audacity edits and see if I can get it to transition more smoothly. Here is the updated code:

   public synchronized void playIntroLoopBody(Song body) {
      playing = true;
      SoundThreadPool.execute(new Runnable() {
         public void run() {
            while (playing) {
               System.out.println(name + " waiting...");
            } // wait for any other song threads to finish executing...

            play();
            playing = false;
            body.loopSong();//this was in the wrong scope
         }
      });
      //it was right here...
   }

Offline philfrei
« Reply #6 - Posted 2016-09-13 17:33:50 »

I think using a LineListener is going to be both more accurate and more efficient than polling. But if you are trying to make two files play perfectly contiguously and seamlessly, I don't know if that is going to be possible without frame counting.

If you have the MIDI data, I assume that approach can work. You then have to decide whether to provide your own samples or rely on those provided by sound cards. I've only just started working with Java MIDI myself, so I can't offer much in the way of advice on that topic.

Offline DayTripperID
« Reply #7 - Posted 2016-09-13 18:53:59 »

Yes, I need them to play contiguously and seamlessly because the seam between the intro and the body is very obvious and jarring.

The same goes for the looping of the body: in Audacity, the body loops seamlessly and is completely transparent. However, looping the SourceDataLine requires draining and closing the old DataLine, shutting down the thread, starting the new thread, and creating, opening, and starting the new DataLine. The time it takes to do all that is enough to create a brief moment of silence, which creates a sound gap and throws the rhythm off just enough to be very noticeable.

I am going to try a LineListener and see if it makes any difference. I'm not sure how to do frame counting, but I'll research it.

If all else fails, I'll just ditch the intro portion completely, hard code a fade to black into the wav file using Audacity, and then just loop it normally. Not the ideal solution, but at least it will be listenable.

Offline philfrei
« Reply #8 - Posted 2016-09-13 20:44:19 »

It is possible to run two SourceDataLines at the same time from the same file, but each requires its own AudioInputStream instance.

Theoretically, if you put a LineListener on one and have it launch the other SDL, many of the intervening tasks you mention can occur independently, on their respective threads, and not contribute to a gap. But there will likely still be some sort of gap. I've not tried this myself except in very forgiving situations.

There are some notes about LineListeners here, and the tutorials touch on what I'm calling frame counting in the very last section ("Manipulating the Audio Data Directly") of the tutorial Processing Audio with Controls. Actually, the best code example is in the tutorial Using Files and Format Converters, in the section "Reading Sound Files" -- where the example code has the comment
      // Here, do something useful with the audio data that's 
      // now in the audioBytes array..."



I'm guessing you won't want to get in that deep. Best will probably be just pre-process the sound files into the exact forms that you wish to have them play back as, in Audacity, and load them as Clips when you want to use seamless looping.

Offline Icecore
« Reply #9 - Posted 2016-09-13 21:07:28 »

Quote
However, looping the SourceDataLine requires draining and closing the old DataLine, shutting down the thread, starting the new thread, and creating, opening, and starting the new DataLine.
Who told you that? :)

As I said, you can send any data, in any order you want, to the SourceDataLine:

//Thread 1
AudioInputStream ais_swap = null;
while(true){
   synchronized(sdl){
      if(ais_swap != null){
         ais = ais_swap;
         ais_swap = null;
      }
   }
   nBytesRead = ais.read(data, 0, data.length);
   offset = 0;
   System.out.println(name + " reading...");
   while (offset < nBytesRead){
      System.out.println(name + " writing...");
      offset += sdl.write(data, offset, nBytesRead - offset);
   }
   if(killThread){
      System.out.println(name + " killing...");
      break;
   }
}

//Thread 2
synchronized(sdl){
   ais_swap = ... // create the new AudioInputStream here
}

*This assumes the AudioInputStreams have the same AudioFormat.

Ideally, after the swap you would drain the previous data, but this step can be skipped, because only a couple of milliseconds of the old sound remain before the new track starts.

About byte[] data: the SourceDataLine depends only on the AudioFormat; it doesn't care what data you write into it.

P.S. Audio in Java is hard, and MIDI is even harder. The easiest way is to take existing audio tutorial code and change it as little as possible ;)

Offline DayTripperID
« Reply #10 - Posted 2016-09-13 21:33:28 »

Thanks for that clarification, Icecore. I had a vague notion that that was what you meant; I just wasn't sure exactly how to implement it. This will definitely speed up the swapping! :)

Offline Icecore
« Reply #11 - Posted 2016-09-13 22:22:58 »

I forgot to add an end-of-stream check :)
while(true){
   boolean swap = false;
   synchronized(sdl){
      if(ais_swap != null){
         ais = ais_swap;
         ais_swap = null;
         swap = true;
      }
   }
   nBytesRead = ais.read(data, 0, data.length);
   if(nBytesRead < 0){
      break;
   }
   if(swap){
      sdl.drain();
   }
   // ... write to sdl as before ...
}


Technically, you can mix the audio in the byte array before sending it, but I have no idea how to mix raw byte audio data :) (I believe simply adding the two is wrong.)

Offline DayTripperID
« Reply #12 - Posted 2016-09-14 00:15:28 »

Thanks for that. Now I have a question about the synchronized block. It synchronizes on sdl, but there is nothing inside the block that actually references sdl. Can you please explain to me how this works? Forgive my ignorance!

Offline philfrei
« Reply #13 - Posted 2016-09-14 00:17:23 »

Quote
Technically, you can mix the audio in the byte array before sending it, but I have no idea how to mix raw byte audio data :) (I believe simply adding the two is wrong.)

1) Convert the byte data to PCM values (very likely to -32768 to 32767 range if 16-bit data).
2) Add the values from each input (and check to prevent going out of range).
3) Convert back to byte data and ship it out.
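As a sketch of those three steps for 16-bit little-endian data (the mix16 helper is hypothetical, not from any library):

```java
public class PcmMix {

    // Mix two equal-length buffers of 16-bit little-endian PCM with clamping.
    static byte[] mix16(byte[] a, byte[] b) {
        byte[] out = new byte[a.length];
        for (int i = 0; i < a.length; i += 2) {
            // 1) bytes -> signed 16-bit samples
            int sa = (short) ((a[i] & 0xFF) | (a[i + 1] << 8));
            int sb = (short) ((b[i] & 0xFF) | (b[i + 1] << 8));
            // 2) add, clamping to the legal range
            int sum = Math.max(-32768, Math.min(32767, sa + sb));
            // 3) sample -> bytes
            out[i] = (byte) (sum & 0xFF);
            out[i + 1] = (byte) (sum >> 8);
        }
        return out;
    }
}
```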

Icecore's basic example with multiple AudioInputStreams is a good one. And, actually, it is okay if the incoming audio formats differ, as long as you make the necessary conversions before writing the data.

You get to pick when you read from either AIS. Another way to code would be to test if the read from the AIS returns -1. If it does, flip a switch and read from the other AIS without dropping a beat. That would eliminate the need for using a LineListener.
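That flip can live inside the one write loop, so the line never gets torn down between the intro and the body. A sketch of the idea (writing to a plain OutputStream here so it stays hardware-free; in the actual player the target would be sdl.write):

```java
import java.io.IOException;
import java.io.OutputStream;
import javax.sound.sampled.AudioInputStream;

public class StreamChain {

    // Stream the intro, then switch to the body the moment read() returns -1,
    // inside a single loop -- no drain/stop/close between the two cues.
    static void streamIntroThenBody(AudioInputStream intro, AudioInputStream body,
                                    OutputStream out) throws IOException {
        byte[] buf = new byte[4096];
        AudioInputStream current = intro;
        while (true) {
            int n = current.read(buf, 0, buf.length);
            if (n < 0) {
                if (current == intro) { current = body; continue; } // flip without a gap
                break; // body finished too
            }
            out.write(buf, 0, n);
        }
    }
}
```

For seamless looping of the body, the break branch could instead reopen the body stream and keep going; both streams are assumed to share one AudioFormat.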

Where I was talking about counting frames, I'm thinking you can also do that by using the skip(long n) method. Let's say you want to start exactly 2 seconds in. If the frame rate is 44100 fps, that would be 88200 frames. If the format is stereo, 16-bit, then there would be 4 bytes per frame, so the number of bytes to read before starting would be 88200 * 4 or 352800 bytes.

Starting or stopping abruptly in the middle of a sound can create a click. To avoid that, do a fade in. Even as few as 32 or 64 frames can suffice. (In the 3-step chart above, the middle step would be to multiply the PCM data by a factor that ranges from 0 to 1 over 64 or however many steps.)
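A sketch of such a fade-in for 16-bit mono little-endian data (the helper name is mine; the ramp is the 0-to-1 factor described above):

```java
public class FadeIn {

    // Apply a linear 0 -> 1 gain ramp over the first rampFrames frames,
    // in place, to 16-bit mono little-endian PCM.
    static void apply(byte[] pcm, int rampFrames) {
        for (int frame = 0; frame < rampFrames && frame * 2 + 1 < pcm.length; frame++) {
            int i = frame * 2;
            int sample = (short) ((pcm[i] & 0xFF) | (pcm[i + 1] << 8));
            sample = (int) (sample * (frame / (float) rampFrames)); // factor 0..1
            pcm[i] = (byte) (sample & 0xFF);
            pcm[i + 1] = (byte) (sample >> 8);
        }
    }
}
```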


I think we are beyond "Newbie & Debugging..." and that this thread is a good candidate to move over to the Audio part of the Forum.

Offline philfrei
« Reply #14 - Posted 2016-09-14 00:18:59 »

Quote
Thanks for that. Now I have a question about the synchronized block. It synchronizes on sdl, but there is nothing inside the block that actually references sdl. Can you please explain to me how this works? Forgive my ignorance!

It's not my example, but I'm not seeing why synchronization is needed.

Offline Icecore
« Reply #15 - Posted 2016-09-14 10:03:03 »

For synchronization you need any unchanging (non-null) object that you can access from both threads:
https://docs.oracle.com/javase/tutorial/essential/concurrency/locksync.html

I used sdl as the lock in my example because in many cases you don't want to create a new SourceDataLine object in the audio thread. But to make the code cleaner, it's better to create a separate lock object:

public static final Object Syn = new Object();

Synchronizing directly on an object another thread uses is crude, and so is a static lock object, but for a rough example it's okay, and it works just fine.

Quote
but I'm not seeing why synchronization is needed.

It's preferable to synchronize; here is why:
//Thread 2 set ais_swap
if(ais_swap != null){//Thread 1
   //Thread 2 set ais_swap = null
   //but Thread 1 already pass null check
   ais = ais_swap;//Thread 1 ais = null;
   ais_swap = null;
   swap = true;
}

Yes, it's rare, very rare, but you can simulate it in debug mode (pause the threads and step line by line through Thread 1 and Thread 2 however you want). A synchronized block prevents this.

Offline Icecore
« Reply #16 - Posted 2016-09-14 10:22:22 »

1) Convert the byte data to PCM values (very likely to -32768 to 32767 range if 16-bit data).
2) Add the values from each input (and check to prevent going out of range).
3) Convert back to byte data and ship it out.
I'm not sure about step 2 :)
Yes, it works and almost everyone uses it, but is it right? It's like adding the bytes of two red colors when you should be adding the luminance of the colors...

I'd be interested to know: if you record from a mic placed between two instruments playing the same note, how does the sound change? I doubt it's as simple as multiplying the played note by 2. At the very least, raw adding should follow some exponential curve, or, more accurately, something like the LAB space for color.

Update: I found these:
http://atastypixel.com/blog/how-to-mix-audio-samples-properly-on-ios/
http://www.voegler.eu/pub/audio/digital-audio-mixing-and-normalization.html

A little off-topic :)
Warcraft II - Tides of Darkness - Human 2 Midi
https://www.youtube.com/watch?v=GU7UWhPn-pQ
https://www.youtube.com/watch?v=V_FYOI91eLE&list

Offline DayTripperID
« Reply #17 - Posted 2016-09-14 12:54:34 »

That explains a lot, but can somebody please explain to me why the block is synchronized on sdl, and not on ais or ais_swap?

Offline philfrei
« Reply #18 - Posted 2016-09-14 16:03:10 »

Quote
I'm not sure about step 2 :)
Yes, it works and almost everyone uses it, but is it right? It's like adding the bytes of two red colors when you should be adding the luminance of the colors...

That is a reasonable question to ask. But in fact, from what I have learned from working through this resource, audio signals are indeed linear and can be added. The math supports this.

Quote
I doubt it's as simple as multiplying the played note by 2. At the very least, raw adding should follow some exponential curve, or, more accurately, something like the LAB space for color.

You are correct in that the relationship between what we hear as a progression from silent to loud and the magnitude of the waves is not linear. However, in the specific application (goal is to avoid creating a click from the discontinuity in the data), linear progression works and executes at less of a cost than using a power curve. Here I am speculating, but I bet that one could shorten the number of frames needed for the transition from silent to full volume by using a power curve, maybe by as much as half or even more. Whether the benefit of using a sweep of 32 instead of 128 frames is worth it is debatable. 128 frames = 3 milliseconds, and at that point, sensory events are next to impossible to discriminate.

But the best test is to try it out and listen to the results.

The links that you provide are for the situation where the volumes of the contributing signals overflow. Yes, compensating for that on the fly requires significant complexity in that one wants to reduce the components in a way that preserves as much of the tonal content as possible.

But my point of view is that if you are getting signals that are too hot to mix, the sanest solution is to just turn them down! Then, all mixing can proceed linearly and all of those complexities (which can be a drag on a limited budget for on-the-fly audio processing) can be avoided. In my conception of how to run things, the person responsible for implementing the audio simply has to review "loudest case" scenarios and listen, checking for the distortion that arises from overflowing. If there is distortion, adjust volumes so that this doesn't happen. If the low end of sounds get lost this way, send the cue back to the sound designer for compression or some other means of narrowing the dynamic range of the cue.

A good sound designer knows how to use a tool like Audacity to provide the desired amount of compression or whatever is needed to best make a sound with levels that "play well" with others. (I would make this a hiring point --> somewhere on the chain from musician or sf/x creator to audio implementer, the knowledge and ability to mix sounds without overflowing.)

There is also the safety mechanism of clamping to a max and min (for example, -32768 and 32767 if that is the DSP range), which is a reasonable choice as well. A little bit of overshooting here can cause clipping, but in some contexts the sound is an interesting effect, especially if you like metal guitar playing.

Offline philfrei
« Reply #19 - Posted 2016-09-14 16:42:24 »

Quote from: Icecore
For synchronization you need any unchanging (non-null) object that you can access from both threads. [...] Yes, it's rare, very rare, but you can simulate it in debug mode. A synchronized block prevents this.

Given that the preparation of the cue should probably be on a different thread than the audio playback thread, guaranteeing that a concurrency conflict does not occur is needed. On this I agree with Icecore.

As with most things in programming, there is more than one way. :)

My biases come from when I "got religion" via nsigma about making it a high priority to never block the audio thread. Thus, I avoid using synchronization in the audio thread if I can figure out an efficient non-blocking algorithm. If nothing else, maybe provide a boolean latch and have the audio thread check the latch and "fail" if the AIS is not ready rather than block and wait. An "IllegalStateException" is often thrown in this case.
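For what it's worth, one non-blocking way to hand the audio thread a new stream, instead of a synchronized block, would be an AtomicReference hand-off (a sketch of the idea, not code from this thread):

```java
import java.util.concurrent.atomic.AtomicReference;
import javax.sound.sampled.AudioInputStream;

public class StreamSwap {

    // The game/UI thread publishes a stream; the audio thread polls for it.
    private final AtomicReference<AudioInputStream> pending = new AtomicReference<>();

    public void publish(AudioInputStream next) { // called from any thread
        pending.set(next);
    }

    public AudioInputStream takePending() { // called by the audio thread; never blocks
        return pending.getAndSet(null); // null means "keep playing the current stream"
    }
}
```

At the top of the read loop, the audio thread would do: AudioInputStream next = swap.takePending(); if (next != null) ais = next;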

Also, as the programmer and architect of the sound design, you have the ability to set things up so that the "open" and the "play" of this special sound object (employing multiple AIS and other code) never enter into a race condition. This sort of concurrency requirement would normally be prominently documented in the class, and it would be up to the programmer to implement safely.

But I can also see that if the only audio that is being blocked is the one cue, then using synchronization and waiting is reasonable. This sort of thing is more of a concern in a scenario where all the audio is being mixed down to a single audio thread, as I do with the mixing system I wrote, or with a system like TinySound that also funnels all sound through a single output. There, a single block can delay the entire sound mixing process and contribute to dropouts. (This assumes that the native code that plays back audio will continue to process other cues while the one cue blocks. I don't know if that is how audio works on all implementations.)

Offline DayTripperID
« Reply #20 - Posted 2016-09-14 16:49:04 »

OK, I finished implementing the changes, and there is a lot of improvement. There is still a little seam, but I think it just boils down to getting that Audacity edit perfect, because there is no more skip in the rhythm, which IMHO is more noticeable than a little sound dropout. This is what I have now:

package com.noah.breakit.assets;

import java.io.IOException;
import java.net.URL;

import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.DataLine;
import javax.sound.sampled.FloatControl;
import javax.sound.sampled.SourceDataLine;

import com.noah.breakit.util.Util;

public class Song {

   // music credit to sketchylogic
   public static final Song titlesong = new Song("songs/titlesong.wav");
   public static final Song playfieldsong = new Song("songs/playfieldsongintro.wav", "songs/playfieldsongbody.wav");
   public static final Song briefingsong = new Song("songs/briefingsong.wav");
   public static final Song gameoversong = new Song("songs/gameoversong.wav");

   private static volatile boolean playing;
   private boolean killThread;

   private URL url;
   private URL url2;

   private AudioInputStream ais;
   private AudioFormat baseFormat;
   private AudioFormat decodeFormat;
   private DataLine.Info info;
   private SourceDataLine sdl;
   private FloatControl gainControl;

   private String name;

   private Song(String filename) {
      name = filename;

      // getResource() does not throw on a missing file; it returns null, so check for that instead
      url = this.getClass().getClassLoader().getResource(filename);
      if (url == null) System.err.println("Song resource not found: " + filename);
   }

   private Song(String filename1, String filename2) {
      this(filename1);

      url2 = this.getClass().getClassLoader().getResource(filename2);
      if (url2 == null) System.err.println("Song resource not found: " + filename2);

   }

   public synchronized void loopSong() {
      SoundThreadPool.execute(new Runnable() {
         public void run() {
            while (playing) {
            } // wait for any other song threads to finish executing...
            playing = true;
            loop();
            playing = false;
         }
      });
   }

   public synchronized void playSong() {
      SoundThreadPool.execute(new Runnable() {
         public void run() {
            while (playing) {
            } // wait for any other song threads to finish executing...
            playing = true;
            play();
            playing = false;
         }
      });
   }

   public synchronized void playIntroLoopBody() {
      SoundThreadPool.execute(new Runnable() {
         public void run() {
            while (playing) {
            } // wait for any other song threads to finish executing...

            playing = true;
            play_intro_loop_body();
            playing = false;
         }
      });
   }

   private void setup() {
      try {
         ais = AudioSystem.getAudioInputStream(url);

         baseFormat = ais.getFormat();
         decodeFormat = new AudioFormat(AudioFormat.Encoding.PCM_SIGNED, baseFormat.getSampleRate(), 16,
               baseFormat.getChannels(), baseFormat.getChannels() * 2, baseFormat.getSampleRate(), false);

         info = new DataLine.Info(SourceDataLine.class, decodeFormat);

         sdl = (SourceDataLine) AudioSystem.getLine(info);
         sdl.open();

         gainControl = (FloatControl) sdl.getControl(FloatControl.Type.MASTER_GAIN);

         sdl.start();

      } catch (Exception e) {
         e.printStackTrace();
      }
   }
   
   private void teardown(){
      sdl.drain();
      sdl.stop();
      sdl.close();
     
      try {
         ais.close();
      } catch (IOException e) {
         e.printStackTrace();
      }

      if (killThread) killThread = false;
   }

   private void play() {

      try {
         setup();

         int nBytesRead = 0;
         byte[] data = new byte[sdl.getBufferSize()];
         int offset;

         while (true) {

            nBytesRead = ais.read(data, 0, data.length);

            if (nBytesRead < 0) break;

            offset = 0;

            while (offset < nBytesRead) {
               offset += sdl.write(data, offset, nBytesRead - offset);
            }
            if (killThread) break;
         }

         teardown();

      } catch (Exception e) {
         e.printStackTrace();
      }
   }

   private void loop() {

      try {
         setup();

         int nBytesRead = 0;
         byte[] data = new byte[sdl.getBufferSize()];
         int offset;

         // Thread 1
         AudioInputStream ais_swap = null;
         while (true) {
            boolean swap = false;
            synchronized (sdl) {
               if (ais_swap != null) {
                  ais = ais_swap;
                  ais_swap = null;
                  swap = true;
               }
            }

            nBytesRead = ais.read(data, 0, data.length);

            if (nBytesRead < 0) {
               ais_swap = AudioSystem.getAudioInputStream(url);
               swap = true;
            }

            if (swap) sdl.drain();
            offset = 0;

            while (offset < nBytesRead) {
               offset += sdl.write(data, offset, nBytesRead - offset);
            }
            if (killThread) break;
         }

         teardown();

      } catch (Exception e) {
         e.printStackTrace();
      }
   }
   
   private void play_intro_loop_body() {
      try {
         setup();

         int nBytesRead = 0;
         byte[] data = new byte[sdl.getBufferSize()];
         int offset;

         // Thread 1
         AudioInputStream ais_swap = null;
         while (true) {
            boolean swap = false;
            synchronized (sdl) {
               if (ais_swap != null) {
                  ais = ais_swap;
                  ais_swap = null;
                  swap = true;
               }
            }

            nBytesRead = ais.read(data, 0, data.length);

            if (nBytesRead < 0) {
               ais_swap = AudioSystem.getAudioInputStream(url2);
               swap = true;
            }

            if (swap) sdl.drain();
            offset = 0;

            while (offset < nBytesRead) {
               offset += sdl.write(data, offset, nBytesRead - offset);
            }
            if (killThread) break;
         }

         teardown();

      } catch (Exception e) {
         e.printStackTrace();
      }
   }

   public void adjustGain(float gain) {
      if (gainControl == null) return;
      float value = Util.clamp((gainControl.getValue() + gain), gainControl.getMinimum(), gainControl.getMaximum());
      gainControl.setValue(value);
   }

   public void setGain(float gain) {
      if (gainControl == null) return;
      gainControl.setValue(Util.clamp(gain, gainControl.getMinimum(), gainControl.getMaximum()));
   }

   public boolean atMin() {
      if (gainControl == null) return false;
      return gainControl.getValue() == gainControl.getMinimum();
   }

   public boolean atMax() {
      if (gainControl == null) return false;
      return gainControl.getValue() == gainControl.getMaximum();
   }

   public boolean isPlaying() {
      return playing;
   }

   public boolean fadeToBlack() {
      adjustGain(-0.4f);
      if (atMin()) {
         killThread = true;
         System.out.println(name + " killThread set to true...");
      }
      return atMin();
   }
}
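
As an aside, if you ever want to start playback partway through a single file instead of swapping in a second file, you can compute the byte offset of the start point from the AudioFormat and skip() that many bytes on the AudioInputStream before entering the write loop. A minimal sketch (the bytesForSeconds helper and the 2-second intro length are just for illustration, not part of the class above):

```java
import javax.sound.sampled.AudioFormat;

public class SkipDemo {

    // Bytes from the start of the stream to a given time, rounded down to a
    // whole frame so ais.skip() always lands on a frame boundary.
    // frameSize is bytes per frame; frameRate is frames per second.
    static long bytesForSeconds(AudioFormat fmt, double seconds) {
        long frames = (long) (fmt.getFrameRate() * seconds);
        return frames * fmt.getFrameSize();
    }

    public static void main(String[] args) {
        // 44.1 kHz, 16-bit, stereo, signed PCM: 4 bytes per frame
        AudioFormat fmt = new AudioFormat(44100f, 16, 2, true, false);
        System.out.println(bytesForSeconds(fmt, 2.0)); // prints 352800
    }
}
```

In loop(), the call would go right after reopening the stream: ais.skip(bytesForSeconds(ais.getFormat(), introSeconds)). Skipping a non-integer number of frames would corrupt the channel alignment, which is why the helper rounds to whole frames.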


I'm pleased with the results. I think the only thing I can do now is go into Audacity and get those edits juuust right...

Major thanks to @philfrei and @Icecore for their willingness to share knowledge and expertise!

Living is good!
Offline Icecore
« Reply #21 - Posted 2016-09-14 18:29:58 »

I'm pleased with the results. I think the only thing I can do now is go into Audacity and get those edits juuust right...

Major thanks to @philfrei and @Icecore for their willingness to share knowledge and expertise!
 
You're welcome

That explains a lot, but can somebody please explain to me why the block is synchronized on sdl, and not ais or ais_swap?  Huh
Because ais and ais_swap are changed by both threads while synchronized.

Here is how it works, roughly explained:
every object has a monitor, which you can think of as a boolean flag,
is_synchronized

When a thread enters a synchronized block, it effectively checks:

if (is_synchronized) {
   wait;
}
is_synchronized = true;
// ...and on leaving the block:
is_synchronized = false;


null object don’t have any Object data
and for changeable object
you have
obj1, obj2

Thread1 synchronized(obj1)

then swap links
obj1 = obj2 

Thread2 try use synchronized(obj1)
and shi use it, because
Technically obj1 is different object hi have
is_synchronized = false

Even when Thread1 still in synchronized block above
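
The same point can be demonstrated without any audio code at all, using Thread.holdsLock (a small self-contained sketch; the variable names are mine):

```java
public class LockSwapDemo {
    public static void main(String[] args) {
        Object a = new Object();
        Object b = new Object();
        Object ref = a;                      // a field both threads would read

        synchronized (ref) {                 // locks the object ref points to NOW: a
            ref = b;                         // reassigning the field does not move the lock
            System.out.println(Thread.holdsLock(a));   // true  - we still hold a's monitor
            System.out.println(Thread.holdsLock(ref)); // false - b's monitor is untouched
        }
    }
}
```

Another thread doing synchronized(ref) at this moment would happily acquire b's monitor and run concurrently with the first thread. That is exactly the hazard with locking on ais, and why sdl, which is assigned once in setup() and never swapped, is the safe choice.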

I also reread the docs on
drain()
You should only use it when stopping playback.
drain() waits until all data already written to the line has played to the end.
It prevents the sound from clipping on a forced thread stop, but it has no effect on filling new data into the buffer.

Last known State: Reassembled in Cyberspace
End Transmission....
..
.
Journey began Now)