  Show Posts
1  Java Game APIs & Engines / Java Sound & OpenAL / Re: Should sound be on its own thread? on: 2016-09-25 18:34:49
If a thread is playing sound, it will not do anything else until the sound is done. So yes, sound playback has to take place in its own thread.
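For instance, the blocking playback call can be wrapped in its own thread so the caller keeps running. This is only a minimal sketch; SoundRunner and the playBlocking stand-in are hypothetical names, not part of any library:

```java
// Minimal sketch: run a blocking playback call on its own thread so the
// caller (e.g. the game loop) is free to continue. "playBlocking" stands
// in for whatever blocking call your sound code makes.
public class SoundRunner {

    // Pretend-blocking playback; a real version would write audio data
    // to a SourceDataLine until the cue finishes.
    static void playBlocking(Runnable cue) {
        cue.run();
    }

    // Fire-and-forget: the returned Thread can be joined if needed.
    public static Thread playAsync(Runnable cue) {
        Thread t = new Thread(() -> playBlocking(cue), "sound-playback");
        t.setDaemon(true); // don't keep the JVM alive for sound alone
        t.start();
        return t;
    }
}
```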

I think it makes sense to organize control of the sound as a whole into a single class. Is that what you are doing with SoundHandler? Or is SoundHandler part of some sort of library that you are using?

Mostly, I've been making the sound handling class static. But in a couple of situations I've done some sound management in the game loop as well. (Your question makes me think maybe I should consider doing more of this.)

For example, game-loop based management makes sense in a situation where the volumes or panning are being updated by the positions of objects on the screen (more common with 3D, but can apply to 2D as well). Another case would be reading the mouse position and putting the [X,Y] values in variables that are only consulted once per game loop and used with the sound methods on that basis. This often makes more sense than triggering the sound with every mouse update.
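As a concrete sketch of that idea (hypothetical helper; the screen width parameter is just for illustration): map a screen X coordinate, once per game loop, into the -1..1 range that javax.sound.sampled uses for FloatControl.Type.PAN:

```java
// Map a screen X coordinate to a stereo pan value in [-1, 1], the range
// used by FloatControl.Type.PAN. Intended to be called once per game
// loop rather than on every mouse event.
public class PanUtil {
    public static float panFromX(int x, int screenWidth) {
        float pan = (2f * x / screenWidth) - 1f; // x=0 -> -1 (left), x=width -> +1 (right)
        return Math.max(-1f, Math.min(1f, pan)); // clamp in case x is off-screen
    }
}
```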

I'm still learning, though. And a lot depends on the specifics of the game and the sound library you are using (I mostly use javax.sound.sampled).
2  Java Game APIs & Engines / Engines, Libraries and Tools / Re: Starting game development, need guidance? on: 2016-09-15 21:07:32
I recommend putting the late-comer JavaFX into consideration, as well. (I'm referring to the Java 8 iteration of JavaFX, not the original mess.) I think it is easier to learn and use than Swing/Java2D and has a lot of 3D implemented as well. It is part of the base Java language now. I wrote a tutorial to help get started, if you want to get a taste of it.

I'm not clear what is going on with LWJGL-based game engines right now. A version 3.0 has been created, but the game engines (Libgdx, JMonkeyEngine, Flash) seem to be sticking with 2.9.2 or whatever the last version 2 is. The jump from 2 to 3 is significant. I don't know the extent to which the various game engines shield the user from the OpenGL implementation. Maybe it is a non-issue for the application programmer. Am looking forward to comments that might clarify this situation! Thanks Brynn.

There is also JOGL to consider. Folks that actually have experience with it will have to give you its selling points. I've not tried it myself.
3  Discussions / Miscellaneous Topics / Re: What I did today on: 2016-09-14 17:25:49
People believe what they're told to believe... especially when it's convenient and relieves them of any responsibility.

If you would like to get even more depressed, check out this book (I am currently half way through it)
The Crisis Caravan
4  Game Development / Newbie & Debugging Questions / Re: manipulating SourceDataLine on: 2016-09-14 16:42:24
For synchronization you need any immutable (non-null) object that both threads can access:
https://docs.oracle.com/javase/tutorial/essential/concurrency/locksync.html

I used sdl in the example because in many cases you don't want to create a new SourceDataLine object in the audio thread.
But for cleaner code it's better to create a separate sync object:

public static final Object Syn = new Object();

Synchronizing directly on an object from another thread is crude, as is a static sync object,
but for a rough example it's OK, and it works just fine.


Synchronization is preferable; here is why:

//Thread 2 sets ais_swap
if(ais_swap != null){//Thread 1
   //Thread 2 sets ais_swap = null,
   //but Thread 1 has already passed the null check
   ais = ais_swap;//Thread 1: ais = null;
   ais_swap = null;
   swap = true;
}

Yes, it's rare, very rare, but you can simulate this in debug mode
(pause the threads and step line by line through Thread 1 and Thread 2 as you like).
A synchronized block prevents this.
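A sketch of how a synchronized block closes that window (the names follow the quoted example, with Object standing in for the AudioInputStream; this is an illustration, not Icecore's exact code):

```java
// Guarding the check-then-swap with one lock makes the null check and the
// assignment atomic: the offering thread cannot clear aisSwap in between.
public class SwapGuard {
    private static final Object SYNC = new Object();
    private static Object ais;     // stands in for the AudioInputStream in use
    private static Object aisSwap; // stands in for ais_swap in the quoted code

    // Thread 2: offer a new stream.
    public static void offer(Object newAis) {
        synchronized (SYNC) {
            aisSwap = newAis;
        }
    }

    // Thread 1: take the offered stream if present; returns true on swap.
    public static boolean takeIfPresent() {
        synchronized (SYNC) {
            if (aisSwap != null) {
                ais = aisSwap;   // cannot be nulled mid-check by the other thread
                aisSwap = null;
                return true;
            }
            return false;
        }
    }

    public static Object current() { return ais; }
}
```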

Given that the preparation of the cue should probably happen on a different thread than the audio playback thread, some guarantee that a concurrency conflict cannot occur is needed. On this I agree with Icecore.

As with most things in programming, there is more than one way.  Smiley

My biases come from when I "got religion" via nsigma about making it a high priority to never block the audio thread. Thus, I avoid using synchronization in the audio thread if I can figure out an efficient non-blocking algorithm. If nothing else, maybe provide a boolean latch and have the audio thread check the latch and "fail" if the AIS is not ready rather than block and wait. An "IllegalStateException" is often thrown in this case.
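A sketch of that latch idea (hypothetical class and method names; the point is only that the audio thread never waits):

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Non-blocking alternative to synchronization: the audio thread checks a
// latch and fails fast instead of waiting on a lock.
public class CueLatch {
    private final AtomicBoolean ready = new AtomicBoolean(false);

    // Called from the preparation thread once the AIS is set up.
    public void markReady() { ready.set(true); }

    // Called from the audio thread; never blocks.
    public void play() {
        if (!ready.get()) {
            throw new IllegalStateException("cue not ready"); // fail, don't wait
        }
        // ... start playback here ...
    }
}
```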

Also, as the programmer and architect of the sound design, you have the ability to set things up so that the "open" and the "play" of this special sound object (employing multiple AIS and other code) never enter into a race condition. This sort of concurrency requirement would normally be prominently documented in the class, and it would be up to the programmer to implement safely.

But I can also see that if the only audio that is being blocked is the one cue, then using synchronization and waiting is reasonable. This sort of thing is more of a concern in a scenario where all the audio is being mixed down to a single audio thread, as I do with the mixing system I wrote, or with a system like TinySound that also funnels all sound through a single output. There, a single block can delay the entire sound mixing process and contribute to dropouts. (This assumes that the native code that plays back audio will continue to process other cues while the one cue blocks. I don't know if that is how audio works on all implementations.)
5  Game Development / Newbie & Debugging Questions / Re: manipulating SourceDataLine on: 2016-09-14 16:03:10
Quote
I'm not sure about step 2 Smiley
Yes, it works and almost everyone uses it, but is it right?
It's the same as adding the bytes of two red colors when you should be adding the luminance of the colors...
That is a reasonable question to ask. But in fact, from what I have learned from working through this resource, audio signals are indeed linear and can be added. The math supports this.

Quote
I doubt it is as simple as multiplying the Hz of the played note by 2.
At the least there must be some exponential curve for raw adding,
but to be more accurate it should be something like LAB space for color.

You are correct in that the relationship between what we hear as a progression from silent to loud and the magnitude of the waves is not linear. However, in the specific application (goal is to avoid creating a click from the discontinuity in the data), linear progression works and executes at less of a cost than using a power curve. Here I am speculating, but I bet that one could shorten the number of frames needed for the transition from silent to full volume by using a power curve, maybe by as much as half or even more. Whether the benefit of using a sweep of 32 instead of 128 frames is worth it is debatable. 128 frames = 3 milliseconds, and at that point, sensory events are next to impossible to discriminate.
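For what it's worth, the linear ramp under discussion can be sketched like this (names and the frame count are just for illustration; a power curve would replace the linear factor as noted in the comment):

```java
// Apply a linear fade-in over the first rampFrames samples of a mono PCM
// buffer (floats in [-1, 1]). For a power curve, replace the factor with
// e.g. (float) Math.pow(i / (double) rampFrames, 2).
public class Fader {
    public static void fadeIn(float[] pcm, int rampFrames) {
        int n = Math.min(rampFrames, pcm.length);
        for (int i = 0; i < n; i++) {
            pcm[i] *= i / (float) rampFrames; // 0.0 at the start, approaching 1.0
        }
    }
}
```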

But the best test is to try it out and listen to the results.

The links that you provide are for the situation where the volumes of the contributing signals overflow. Yes, compensating for that on the fly requires significant complexity in that one wants to reduce the components in a way that preserves as much of the tonal content as possible.

But my point of view is that if you are getting signals that are too hot to mix, the sanest solution is to just turn them down! Then, all mixing can proceed linearly and all of those complexities (which can be a drag on a limited budget for on-the-fly audio processing) can be avoided. In my conception of how to run things, the person responsible for implementing the audio simply has to review "loudest case" scenarios and listen, checking for the distortion that arises from overflowing. If there is distortion, adjust volumes so that this doesn't happen. If the low end of sounds get lost this way, send the cue back to the sound designer for compression or some other means of narrowing the dynamic range of the cue.

A good sound designer knows how to use a tool like Audacity to provide the desired amount of compression or whatever is needed to best make a sound with levels that "play well" with others. (I would make this a hiring point --> somewhere on the chain from musician or sf/x creator to audio implementer, the knowledge and ability to mix sounds without overflowing.)

There is also the safety mechanism of clamping to a max and min (for example, if the DSP range is -32768 to 32767), which is a reasonable choice as well. A little bit of overshooting here can cause clipping, but in some contexts the sound is an interesting effect, especially if you like metal guitar playing.
6  Game Development / Newbie & Debugging Questions / Re: manipulating SourceDataLine on: 2016-09-14 00:18:59
Thanks for that. Now I have a question about the synchronized block. It synchronizes on sdl, but there is nothing inside the block that actually references sdl. Can you please explain to me how this works? Forgive my ignorance!

It's not my example, but I'm not seeing why synchronization is needed.
7  Game Development / Newbie & Debugging Questions / Re: manipulating SourceDataLine on: 2016-09-14 00:17:23
Quote
Technically you can mix audio in a byte array before sending it,
but I have no idea how to mix "byte audio data" )
(I believe simply adding the two sets of data is wrong)

1) Convert the byte data to PCM values (very likely to -32768 to 32767 range if 16-bit data).
2) Add the values from each input (and check to prevent going out of range).
3) Convert back to byte data and ship it out.
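Those three steps can be sketched as follows for 16-bit little-endian data (a toy illustration under that assumed format and equal-length inputs, not production mixing code):

```java
// Mix two 16-bit little-endian PCM byte arrays: decode to int, add,
// clamp to the 16-bit range, and re-encode.
public class Mixer {
    public static byte[] mix16le(byte[] a, byte[] b) {
        byte[] out = new byte[a.length];
        for (int i = 0; i < a.length; i += 2) {
            // 1) bytes -> PCM value in [-32768, 32767]
            int sa = (short) ((a[i] & 0xFF) | (a[i + 1] << 8));
            int sb = (short) ((b[i] & 0xFF) | (b[i + 1] << 8));
            // 2) add the inputs, clamping to avoid wrap-around distortion
            int sum = Math.max(-32768, Math.min(32767, sa + sb));
            // 3) PCM value -> bytes
            out[i] = (byte) sum;
            out[i + 1] = (byte) (sum >> 8);
        }
        return out;
    }
}
```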

Icecore's basic example with multiple AudioInputStreams is a good one. And, actually, it is okay if the incoming audio formats differ, as long as you make the necessary conversions before writing the data.

You get to pick when you read from either AIS. Another way to code would be to test if the read from the AIS returns -1. If it does, flip a switch and read from the other AIS without dropping a beat. That would eliminate the need for using a LineListener.
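A sketch of that switch-on-minus-one idea using plain InputStreams (the class name is hypothetical; the same pattern applies to AudioInputStream reads, assuming the two streams have matching formats):

```java
import java.io.IOException;
import java.io.InputStream;

// Read from the first stream until it returns -1, then flip to the
// second stream on the same read call -- no LineListener needed.
public class StreamChain {
    private final InputStream first, second;
    private boolean onSecond = false;

    public StreamChain(InputStream first, InputStream second) {
        this.first = first;
        this.second = second;
    }

    public int read(byte[] buf) throws IOException {
        if (!onSecond) {
            int n = first.read(buf);
            if (n != -1) return n;
            onSecond = true; // first stream exhausted: switch immediately
        }
        return second.read(buf);
    }
}
```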

Where I was talking about counting frames, I'm thinking you can also do that by using the skip(long n) method. Let's say you want to start exactly 2 seconds in. If the frame rate is 44100 fps, that would be 88200 frames. If the format is stereo, 16-bit, then there would be 4 bytes per frame, so the number of bytes to read before starting would be 88200 * 4 or 352800 bytes.
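That arithmetic as a tiny helper (hypothetical names; note that in practice skip() may skip fewer bytes than requested, so you loop until the total is reached):

```java
// Convert a start offset in seconds to a byte count suitable for
// AudioInputStream.skip(). frameRate is in frames per second; frameSize
// is bytes per frame (4 for 16-bit stereo).
public class SkipMath {
    public static long bytesToSkip(double seconds, float frameRate, int frameSize) {
        long frames = (long) (seconds * frameRate); // e.g. 2.0 s * 44100 = 88200 frames
        return frames * frameSize;                  // 88200 * 4 = 352800 bytes
    }
}
```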

Starting or stopping abruptly in the middle of a sound can create a click. To avoid that, do a fade in. Even as few as 32 or 64 frames can suffice. (In the 3-step chart above, the middle step would be to multiply the PCM data by a factor that ranges from 0 to 1 over 64 or however many steps.)


I think we are beyond "Newbie & Debugging..." and that this thread is a good candidate to move over to the Audio part of the Forum.
8  Game Development / Newbie & Debugging Questions / Re: manipulating SourceDataLine on: 2016-09-13 20:44:19
It is possible to run two SourceDataLines at the same time from the same file, but each requires its own AudioInputStream instance.

Theoretically, if you put a LineListener on one and have it launch the other SDL, many of the intervening tasks you mention can occur independently, on their respective threads, and not contribute to a gap. But there will likely still be some sort of gap. I've not tried this myself except in very forgiving situations.

There are some notes about LineListeners here, and the tutorials touch on what I'm calling frame counting in the very last section ("Manipulating the Audio Data Directly") of the tutorial Processing Audio with Controls. Actually, the best code example is in the tutorial Using Files and Format Converters, in the section "Reading Sound Files" -- where the example code has the comment
      // Here, do something useful with the audio data that's 
      // now in the audioBytes array..."



I'm guessing you won't want to get in that deep. The best bet will probably be to just pre-process the sound files in Audacity into the exact forms you wish them to play back as, and load them as Clips when you want seamless looping.
9  Game Development / Networking & Multiplayer / Re: Why Threads for the Client on the Server? on: 2016-09-13 17:49:21
Different context, but related?

I was making and using individual threads for audio event streams where certain sound effects schedule their next playbacks. Moving to an ExecutorService with a FixedThreadPool improved performance. But I only need to run a pool of 10, so far.

Kevin Workman's explanation is to the point. I remember this coming up in a lecture in college about operating systems back in the 1980's (I had a work-study job video-taping lectures, I wasn't actually a CS major), where they made a point about there being drawbacks with strict first-in first-out scheduling.

10  Game Development / Newbie & Debugging Questions / Re: manipulating SourceDataLine on: 2016-09-13 17:33:50
I think using a LineListener is going to be both more accurate and more efficient than polling. But if you are trying to make two files play perfectly contiguously and seamlessly, I don't know if that is going to be possible without frame counting.

If you have the midi data I assume that approach can work. You then have to decide whether to provide your own samples or rely on those provided by sound cards. I've only just started working with Java midi myself, so I can't offer much in the way of advice on that topic.
11  Discussions / Miscellaneous Topics / Re: What I did today on: 2016-09-13 09:06:52
I got depressed a bit.

http://xkcd.com/1732

Should display that at the Smithsonian's David H. Koch Hall of Human Origins. Facts would be refreshing in a place where one learns that it is only a matter of time before we evolve more sweat glands in response to global warming.



https://thinkprogress.org/smithsonian-stands-by-wildly-misleading-climate-change-exhibit-paid-for-by-kochs-bd3105ef354b#.nuz1tuh87
12  Game Development / Newbie & Debugging Questions / Re: manipulating SourceDataLine on: 2016-09-13 01:37:25
When reading a file via AudioInputStream, I think one has to pretty much start at the beginning and go until the end, or until quitting, whichever comes first.

It is possible to read and throw away input data until you get to the desired starting point. You'd have to count elapsed sound frames in order to know when to switch over to actually streaming data to the SourceDataLine instead of throwing it away.

Another thing is to just take the cue itself and edit it down to exactly where you want to start it. I use Audacity for this sort of thing. If you don't intend to use the first few seconds, clipping off the data will reduce the size of the file which is also a good thing.

Since you want to repeat the cue, you could either append the repeat, again using Audacity, or programmatically put in place a LineListener to determine when the cue ends and use that notification to start another iteration.

Simplest, though, if there is enough RAM to hold the entire cue, would be to go back to making the cue a Clip. Clips allow the programmer to set the "playback head" to any starting point as well as allowing looping.
13  Java Game APIs & Engines / Java Sound & OpenAL / OpenAL and current LWJGL-based tools on: 2016-09-12 07:38:28
I made a working endless, non-repeating campfire sf/x, and it runs on the OpenAL that comes with LWJGL 3. First get it to work, then improve it, right?

If I understand correctly, there are basically four "layers" that require handling for sound:
> device
   > context
      > source
         > buffers

I was puzzling out what should be handled automatically by the class and instance, and what should be provided by the programmer. I'm figuring, since the "source" is given a 3D location, maybe it should be accessible to the programmer (sometimes audio cues need to be moved around). Also, possibly the programmer may be organizing sources into various "contexts."

I'm thinking the cue instance should require the programmer to provide these values as arguments to the constructor. The constructor would then handle setting up the streaming buffers so that the game developer wouldn't ever have to deal with that level of detail, they would just start or stop the CampfireSFX.

But it seemed to me that this could be a lot to ask of the game developer, especially if they had signed up for a library to help shield them from managing these details.

So...I thought the thing to do would be to see how the various game engines handle "device" and "context" and "source". First look was at a Slick audio example and...full stop: it is using LWJGL 2, not 3.

A quick look at our JGO "OpenGL" forum shows the first entry as LWJGL 2.9.2.
Does Libgdx also still use 2.9.2?
Does JMonkeyEngine also still use 2.9.2?
If so, are there plans to migrate to 3?

It is not clear to me from the documentation what versions of LWJGL these engines are using. I guess I just need to go ahead and download them and see what I get...
14  Games Center / WIP games, tools & toy projects / Re: PFTheremin on: 2016-09-09 19:56:09
Nice app, worked fine for me on windows 10 java 8.
The default settings make a spooky wail. Will be interesting to see how you and others tune and use it.
I have no experience in how to make or use sound but intend to learn one day.

Good to hear!

Re learning audio: no time like the present!

Your comment brings up a point, though, which is that I haven't made a place where people can share patches. Maybe, since they are text files (XML, that is), the simplest thing is to post patches on this thread if anyone wants to share? I don't think I can support a forum on my website. Maybe it is possible, but I haven't figured out how.

I've been meaning to post more myself, but I'm letting "perfect pictures" get in the way. I've been intending to do things like rent "The Day the Earth Stood Still" or "The Red Planet" and try and replicate the effects used there, or maybe try and match the theremin in "Good Vibrations" or "Dark Shadows". Too ambitious, should just post some that illustrate basic capabilities.

I did make one upgrade to the program a few weeks ago. There's a trade-off between screen size, pitch range, and pitch precision: the wider the range, the more difficult it is to hit specific notes. What I came up with is this: while the shift key is held, the mouse pitch stays the same, playing or not, wherever you move. Thus, either during silences or on held notes, holding shift and maneuvering lets you move the screen's pitch range to the area you wish to play in.
15  Games Center / WIP games, tools & toy projects / Re: PFTheremin on: 2016-09-09 19:43:04
Very cool.

Do you happen to have the source for this on github or a similar site?

Thank you for the compliment!

The only code that is posted has been posted here on JGO over the years. I've been working on getting better at using Java Sound (javax.sound.sampled) for several years now, and the postings reflect progress made and issues dealt with. Some of the later code is better than the earlier stuff I posted.

I am happy to answer questions on it or on how to make things using Java Sound. I have a notification attached to the jgo audio forum and always check on anything posted there.
16  Game Development / Newbie & Debugging Questions / Re: Reading mp3's from executable jar file on: 2016-09-08 01:48:09
I haven't looked closely at your solution. If it works, great! Very glad to hear it is all running more smoothly. Java's sound classes are much maligned (mostly due to difficult-to-assimilate documentation), but they are capable of performing quite well, imho.

Quote
I have some questions about how to properly implement this. Since exec is static, it would be senseless to have more than one instance of SoundThreadPool, so it would be instantiated somewhere like the main Game class, correct?

What I do is to make a class called SoundHandler or GameSound or something like that and make it the top organizational point for all game sound. The executor is created in the SoundHandler constructor:

    new SoundThreadPool(10);

That is all that is needed.

The SoundHandler has (among other things) two methods, a start() and a stop(). I put the shutdown code in the stop() method:
    SoundThreadPool.shutdown();


Quote
Is there an event listener that detects a program shutdown, like clicking to close the window?

I've only made GUIs with JavaFX since coming up with this scheme. The main JavaFX GUI element is type Stage. It has the following method for detecting closes:
    stage.setOnCloseRequest( e -> cleanShutdown() );


The method cleanShutdown() is where I put soundHandler.stop() which in turn calls SoundThreadPool.shutdown().

I haven't tried this with Swing yet. I did a search and found this which looks like just the way to go:
http://docs.oracle.com/javase/tutorial/uiswing/events/windowlistener.html


Working with the Runtime thread looks interesting! I haven't delved into that area yet, and know little about its mysteries or capabilities. Just goes to show, though, that there's almost always more than one way to accomplish something.
17  Game Development / Newbie & Debugging Questions / Re: Reading mp3's from executable jar file on: 2016-09-07 21:03:32
Looks solid. I could see where it might take a few seconds to decode and load all these cues. That could happen on a background thread, perhaps. Depending upon their length, some of the "songs" might be better off as SourceDataLines. Unlike a Clip, though, you can only use one once before having to close it and open a new one.

Since each playback requires a new Thread, you might experiment with using an ExecutorService.

public class SoundThreadPool {

   private static ExecutorService exec;
   
   public SoundThreadPool(int nThreads)
   {
      exec = Executors.newFixedThreadPool(nThreads);
   }
   
   public static void shutdown()
   {
      exec.shutdown();
   }
   
   public static void execute(Runnable command)
   {
      exec.execute(command);
   }
}


Initialize it to something that covers your maximum use case (however many concurrent sounds, then add a couple more for safety). Then, code something like the following in your play and loop methods:

     SoundThreadPool.execute(new Runnable() {
         public void run() {
             // ... play or loop the cue here ...
         }
     });


Making and destroying Threads carries more overhead than it does for most Objects, so Threads often benefit from pooling. Note: the Executor will need to be shut down as part of exiting the app.

I think my pool class above is reasonably efficient. I haven't exposed it to scrutiny before. It is pretty simple and bare bones. I'm finding it has improved the performance of my soundscapes, and it has also made coding the individual cues a bit easier. Maybe this will alleviate some of the latencies (if they are related to the overhead of making new Threads).
18  Game Development / Newbie & Debugging Questions / Re: Reading mp3's from executable jar file on: 2016-09-07 19:33:35
Glad to hear you are getting this solved.

You probably already know this, but it is not clear from your post or the code fragments:

A long file like a song should probably be played directly as a SourceDataLine, not loaded into memory as a Clip. Usually, no more than one "decoding" stream is played at one time since decoding adds to the cost of playback. But having one decoding stream going is usually fine. If there are dropouts, it might be due to the buffer being too small (the buffer used in the SourceDataLine operations). The SourceDataLine's buffer can be larger since for background song playback, a little extra latency is not a significant issue.

A Clip should be preloaded. If you combine open() and start(), the Clip will not commence until the entire file has first been loaded into memory, creating considerable latency. Often, first-time users of Java sound reload the Clip data with each start(), which repeats the costly loading over and over again. A Clip, once loaded, can be reset so that it can be played again without reloading. Once loaded and set to the starting position, the start() method should commence virtually immediately. If it doesn't, something probably needs debugging.

With these limits in mind, it should be possible to use javax.sound.sampled libraries and get things to work. But TinySound can make handling all this easier and adds some additional functionality.
19  Game Development / Newbie & Debugging Questions / Re: Reading mp3's from executable jar file on: 2016-09-07 04:03:40
If I am reading this post correctly:
http://stackoverflow.com/questions/13374469/trying-to-get-audioinputstream-of-an-audio-file
perhaps the mp3spi jar is not included in your export?

It seems to me there are some extra steps sometimes required when exporting a project with jars. I'm not sure how to check for this. Maybe rename your jar to .zip and step inside and look and see if the jars are there.

This might be a clue:
http://stackoverflow.com/questions/26858843/include-external-jars-while-exporting-java-project-in-eclipse

The tool for Ogg/Vorbis is here:
http://www.jcraft.com/jorbis/
I went through some major headaches getting it to work, but that is because I'm not using it in a normal way. I import and decode the file into raw PCM and store it in a normalized float array, for use with some other custom tools I made. This required some tinkering with their example code at the edges of my skill level. The work path that resulted is not very friendly.

You might consider using TinySound. It supports ogg/vorbis, and functions as a way to mix multiple sound sources and play them back at the same time. On the plus side, ogg/vorbis sounds just as good, if not better than mp3, and doesn't trigger licensing requirements from Fraunhofer.
http://www.java-gaming.org/topics/need-a-really-simple-library-for-playing-sounds-and-music-try-tinysound/25974/view.html
20  Game Development / Newbie & Debugging Questions / Re: Reading mp3's from executable jar file on: 2016-09-07 02:06:26
When something works in Eclipse but not in a jar, it is usually because the file system cannot address locations that were within the file system in Eclipse but are now within the jar.

A URL can read from a jar location. I recommend building a URL and using that to get your AudioInputStream:
    URL url = this.getClass().getResource("sfx/" + filename);
    AudioInputStream ais = AudioSystem.getAudioInputStream(url);
    DataLine.Info info = new DataLine.Info(Clip.class, ais.getFormat());  // you might want to specify your own format
    Clip clip = (Clip) AudioSystem.getLine(info);
    clip.open(ais);


Another problem used to come up in that an InputStream is assumed to support mark and reset, but audio data files often don't support this. This is another reason to get your AudioInputStream using a URL rather than an InputStream, so as to avoid the intermediate step where an exception might be thrown due to the lack of mark and reset. You can compare the APIs to see what I'm talking about:
http://docs.oracle.com/javase/7/docs/api/javax/sound/sampled/AudioSystem.html#getAudioInputStream(java.net.URL)
http://docs.oracle.com/javase/7/docs/api/javax/sound/sampled/AudioSystem.html#getAudioInputStream(java.io.InputStream)

I'm not seeing where you are decoding the mp3 and I didn't include that step in my example as I've not done that. (Been sticking with ogg/vorbis when using compression.)

21  Discussions / Miscellaneous Topics / Re: What I did today on: 2016-09-06 19:35:21
Got an endless, non-repeating campfire sfx to play using LWJGL/OpenAL streaming.

The mechanic of loading direct ByteBuffers is a bit alien to what I've been doing in my Java audio. So, getting this has been a bit of a conceptual leap.

I'm wishing the underlying code or natives provided some sort of "notify" when a ByteBuffer is consumed. I've read that, for providing a good throttle (audio data can be generated much faster than it will be consumed), a pull/notify scheme rather than continually re-polling for an open slot should be a more efficient way to go. If this were implemented, the latencies in OpenAL perhaps could be improved overall. But I'm still too new to this to know if my thoughts are on base or not.

Maybe this isn't provided because the underlying native is written in C, not Java, and there is no intervening Java layer for this function.  
22  Discussions / General Discussions / Re: Comparison between 2 IDE's - Netbeans and Eclipse on: 2016-08-31 06:13:50
I had good luck working with Android Studio on Ubuntu. It was hell to get it all installed, but once running, including the Emulators, it worked well. The Emulators worked so well, code ran faster on it than a device I was using for testing. I thought it was a big improvement over the time I tried Eclipse + Android plugins (really slow performance). But it's been about 3/4 year since I used it.

I assume using Android Studio would not be the best way to organize a project meant to run on both desktop and Android systems, something I haven't attempted yet. True/false?
23  Discussions / Miscellaneous Topics / Re: What I did today on: 2016-08-30 16:56:40
@philfrei, I'm liking the Allenspace Generator! It's like a Geiger counter on acid.

It would be a good background track for a science fiction / hacker / creepy aliens kind of game.

Edit: But what does the "Release" button do? I'm scared to press it. I'm also scared to not press it.

Thanks!

I love your description--it's the best I've heard yet.

I think it would be neat to see the tone-cluster sound data sent to some sort of color synthesizer or graphic algorithm. But yes, I'm thinking it would be a component of a deep-space (deep inner space) stealth sort of scenario.

The "Start" and "Release" buttons refer to the auto-generation of tone clusters. "Release" just prevents any new sounds from occurring. Existing sounds will play out. To be relabeled.

When a "real" gui gets developed, it will be important to clarify what pertains to the "next" tone cluster that is going to be produced (most of the sliders) and what pertains to present-time controls like the flibber button and the bottom volume control. (Should also have a way to save/load settings and to export wav files, as well as better fitting slider equations than linear in certain instances.)

These are the longest envelope durations I've worked with, with sounds set up to crescendo/decrescendo for as long as a minute. But that is also part of the design, to have a slow evolution. However, it does mean that you have to wait for the next cluster, which can take a while, to hear what the new settings are producing.

I'm thinking of adding an echo control, as that worked very well to give the theremin extra depth. A reverb would be preferable but I'm not ready to tackle that challenge yet.
24  Discussions / General Discussions / Re: Comparison between 2 IDE's - Netbeans and Eclipse on: 2016-08-30 02:13:10
This probably doesn't affect your choice, but I think NetBeans has a graphical tool for use with JavaFX GUI building. I haven't used it. Eclipse doesn't offer this as far as I know, though there may be a plug-in. I'm in the minority here, but I do like JavaFX more than Swing and have not gotten very far with LWJGL despite a couple of starts. (Am hoping the 3rd time is the charm.)

I'd consider asking this question over on the LWJGL board, too, and reporting back what you find out. Maybe Spasi will weigh in. He's been very helpful for me over on that board and is a JGO member as well.
25  Discussions / Miscellaneous Topics / Re: What I did today on: 2016-08-29 06:52:01
Most of two days spent on a very elusive bug. Due to its intermittent nature, and not having a whole lot of confidence in my multi-threading chops, I spent a lot of time checking things like possible race conditions, lack of atomic operations (where I thought they shouldn't really be required), deep-vs-shallow copy errors, that sort of thing. For the first time, I also employed a debugging technique I hadn't used before: catching and re-throwing the error at several levels of the call stack, allowing each level to report on the state of its local variables.
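That catch-and-rethrow technique might look something like this toy sketch (the class and method names here are hypothetical, not the actual synth code): each level logs its own local state before letting the exception continue up the call stack.

```java
// Toy sketch of catch-and-rethrow debugging: each stack frame reports its
// local variables, then rethrows so the next level up can report too.
public class RethrowDemo {

    static double mix(float[] data, int frame) {
        return data[frame];  // throws ArrayIndexOutOfBoundsException if frame is out of range
    }

    static double process(float[] data, int frame) {
        try {
            return mix(data, frame);
        } catch (RuntimeException e) {
            // Report this level's state before passing the error along.
            System.err.println("process(): frame=" + frame + ", data.length=" + data.length);
            throw e;
        }
    }

    public static void main(String[] args) {
        try {
            process(new float[4], 7);  // deliberately out of range
        } catch (ArrayIndexOutOfBoundsException e) {
            System.err.println("main(): caught after lower level reported: " + e);
        }
    }
}
```

The payoff is that one failure produces a trail of state snapshots at every level, which is exactly what you want for an intermittent bug.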

It turns out, though, that the problem was simple accumulation of arithmetic error. I discovered that if you add a float value in the E-6 range 400,000 times or so (via +=), the result can carry a pretty sizable error compared to just doing the multiplication once.
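A minimal sketch of the effect (not the synth code itself): a float accumulator rounds on every addition, and with a fixed tiny increment those roundings drift in one direction, while a single multiplication or a double accumulator stays essentially exact.

```java
// Demonstrates accumulation error: adding a small float 400,000 times via +=
// drifts away from the value a single multiplication gives.
public class FloatDrift {

    /** Returns {floatSum, doubleSum, exact} after n additions of step. */
    public static double[] accumulate(int n, float step) {
        float floatSum = 0f;     // 32-bit accumulator, rounds on every add
        double doubleSum = 0.0;  // 64-bit accumulator, same increments
        for (int i = 0; i < n; i++) {
            floatSum += step;
            doubleSum += step;
        }
        double exact = (double) step * n;  // one multiplication, no drift
        return new double[] { floatSum, doubleSum, exact };
    }

    public static void main(String[] args) {
        double[] r = accumulate(400_000, 1e-6f);
        System.out.println("float  sum: " + r[0] + " (error " + Math.abs(r[0] - r[2]) + ")");
        System.out.println("double sum: " + r[1] + " (error " + Math.abs(r[1] - r[2]) + ")");
    }
}
```

The float error ends up orders of magnitude larger than the increment itself, which is why switching the envelope math to doubles fixed it.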

I don't know how to describe it any better without going into details no one wants to read about.

Result: envelope operations on the synths are now being done via doubles rather than floats. I was afraid I'd break everything when I converted to doubles, but it only took about 20 minutes, once the changes were made, to fix the downstream errors that were generated. Am back to a pristine Eclipse: 0 Items in the "Problems" tab.

The "flibber" effect on the Allenspace generator is now much more like the original. Intermixing via selecting tracks at random has a much more satisfying aural glitchiness than iterating through the tracks, even when the time spent per track was also subject to variation.
26  Discussions / Miscellaneous Topics / Re: What I did today on: 2016-08-26 03:39:47
I got some good debugging done for something I'm calling an "Allenspace Sound Generator". It makes a continuous, spacey collection of sine waves, with some controls for the generation rate, ...

That sounds really neat, I've been wanting to get into audio for a little while now. I've always wondered, how do you take the audio data you generate and actually use it and output a sound? Is there something in the Java SDK that allows you to create sound from custom data?

Yes. The standard "Java way" is to use javax.sound.sampled.Clip or javax.sound.sampled.SourceDataLine for output. A Clip plays back a loaded file such as a wav, but with a SourceDataLine you don't have to use a wav as the source: you can feed it your own sound data.

I'm happy to get into specifics or answer questions, but they are probably best put on the Sound/Audio thread. Or feel free to message me.
27  Discussions / Miscellaneous Topics / Re: What I did today on: 2016-08-25 04:51:39
I got some good debugging done for something I'm calling an "Allenspace Sound Generator". It makes a continuous, spacey collection of sine waves, with some controls for the generation rate, the envelopes, the number of tones per cluster and basic pitch spread and pan spread. There is also a weird gizmo called a "flibber" which basically intermixes tracks rather than mixing them. This gizmo was an attempt to recreate a faulty version made unintentionally that had some bizarre multi-threading issues and is very glitchy sounding.

At this point the GUI is totally placeholder, making use of a single slider tool that only shows a label and outputs a normalized value (0..1). It has NOT been given much testing and will turn non-functional if you just hit "Start" without moving ALL the sliders first. If you want to give it a try, I recommend first putting all the sliders near the middle and working from there. Or wait until I figure out a reasonable GUI.
The GUI is now more informative, if not particularly friendly, and has reasonable initial settings, so you can just hit play and get something. One highly intermittent known bug remains, though.

http://www.java-gaming.org/user-generated-content/members/27722/allenspace.jar

I am a bit fried and will give the GUI more work as a side priority over the next week or two, probably. It could also potentially be run from the jar via an API, if you have a need for some spacey atmospherics.

Sounds pretty cool: get two running at the same time, and have one "flibber" and the other straight. I want to expose some controls over the "flibber" parameters, and investigate adding various forms of lfo modulation to the sines.

*

Did some research on freesound.org, looking at rain and wind-in-trees recordings as candidate effects for Vangard. I think I have found a couple of improvements over the first wind-in-trees attempt. In the next while, I'll be expanding the API to allow ags1 to directly trigger audio events that are part of the sound-scapes; right now she can only manipulate the volumes, not the trigger events or the stochastic timing algos that schedule them. For the first pass (built over the last few weeks), I just wanted something that would work, sound decent, and serve as a placeholder for a while. (The jar is 1MB and runs solo at about 1% CPU on my PC.)
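For readers curious what a "stochastic timing algo" for ambient triggers might look like: one common approach (a hypothetical sketch, not Vangard's actual code) is to draw exponentially distributed waits between events, which gives natural-sounding, unclustered randomness.

```java
import java.util.Random;

// Hypothetical sketch: exponentially distributed inter-event delays for
// triggering ambient one-shots (rain drips, gusts, etc.).
public class StochasticTrigger {
    private final Random rand;
    private final double meanSeconds;

    public StochasticTrigger(double meanSeconds, long seed) {
        this.meanSeconds = meanSeconds;
        this.rand = new Random(seed);
    }

    /** Next wait in milliseconds; exponential, averaging meanSeconds. */
    public long nextDelayMillis() {
        // Inverse-transform sampling: 1 - nextDouble() lies in (0, 1], so the log is finite.
        return (long) (-Math.log(1.0 - rand.nextDouble()) * meanSeconds * 1000.0);
    }
}
```

A playback loop would sleep for nextDelayMillis(), fire one randomly chosen sample, and repeat; exposing meanSeconds through the API is one way an AI like ags1 could steer the density of a soundscape.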

*

So what I really wanted to get to today was this:

> downloaded LWJGL
> set up a project in Eclipse
> got the red block "HelloWorld" program to run (with some fussing, trying to figure out what they meant by "your natives" and how to specify launch arguments, which I hadn't done before)

The hello world program didn't run on my laptop (GLFW_API_UNAVAILABLE error), but it IS running on my desktop. Yay!

Also gathered up locations of documents for study including an example of audio playback via OpenAL. The goal is to be able to output my audio over OpenAL, via LWJGL.
28  Discussions / Miscellaneous Topics / Re: What I did today on: 2016-08-23 21:20:59
Started rewriting my entire graphics engine, to be perfectly honest the previous one was pretty shoddy now looking at it...
It is so much better to have tried, and learned from the process, than done nothing at all.
29  Game Development / Newbie & Debugging Questions / Re: getters and setters vs public vars on: 2016-08-22 00:13:39
A setter/getter construction ensures that every time the information is accessed, it goes through a single, centralized set of rules established by the instance of the class itself. These rules can even be written to reflect state changes of the class instance which would complicate the coding needed from classes that access these variables.

Also, setters/getters let you freely change what is behind that wall or API without having to track down every place other classes access it. Suppose you do something simple like change an internal double to an int, or vice versa: the setter/getter can keep the inputs and outputs the same (perhaps employing a cast) while the innards change, so none of the calling code needs to be touched.

(The nature of external classes is that they tend to multiply in quantity, and it can become increasingly hard or annoying to track down every place where the direct access is made.)

Using setters/getters is a looser form of coupling between classes than allowing direct access. Loose coupling between classes can be a big benefit, especially when getting into multiple-thread programming.
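To illustrate the doubles-to-ints point with a toy sketch (the Volume class and its 0..1000 internal scaling are hypothetical): callers keep seeing a double-based 0..1 API even after the internal storage changed, and the clamping rule lives in exactly one place.

```java
// Hypothetical Volume class: the external API stays double 0..1 while the
// internal representation has been switched to an int without touching callers.
public class Volume {
    private int level;  // stored as 0..1000 (was once a raw double field)

    public void setLevel(double v) {
        if (v < 0.0) v = 0.0;  // centralized rule: clamp once, here,
        if (v > 1.0) v = 1.0;  // instead of in every calling class
        this.level = (int) Math.round(v * 1000.0);
    }

    public double getLevel() {
        return level / 1000.0;  // convert back; callers never notice the int
    }
}
```

With a public field instead, every class that wrote to it would need its own clamping code, and the double-to-int change would ripple through all of them.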

In general, I'd recommend direct access to variables only if:
* they are in a private inner class (or an additional class in the same file) and are only accessed directly by the outer or main class in that file;
* they belong to a class that is a plain data holder (only fields and values), whose instances serve as arguments to a more "functional" class or method and are tightly managed.

But maybe, since you are curious and skeptical, it would be good to just go ahead and do it (skip the setters/getters) and see what sort of trouble you get into (or don't get into) down the road. Sometimes the only way to understand is to do and grapple with the consequences.
30  Discussions / Miscellaneous Topics / Re: What I did today on: 2016-08-21 20:08:47
A young barista at the coffee shop where I often do my morning coding wants to "learn everything there is to know" about game coding. Shades of Dunning-Kruger! He is very much into playing games and is looking at saving up to attend some tech school specifically for game coding. I tried to suggest a decent community college (more affordable) and free online courses of study (e.g., w3.org tutorials) while he saves up for the private college, and stressed the importance of a strong math background.

But he seems mesmerized by the promise of the private school to help graduates find actual work in the industry. He is a nice fellow and was polite, but I think he's pretty set on his plan and is putting me into the old-fogey category of unsolicited advice givers. I assume that as he starts to learn more about programming, he'll get a better concept of just how big the field is and the impossibility of "knowing everything there is to know," and also that there are decent strategies for various degrees of specialization. I don't know if anyone "gets" that trade schools generally fall short on "placement" promises until they experience it for themselves.

Just saw the DVD of "The Big Short". Highly recommended. In going all-in on Java procedural audio, I identify (more Dunning-Kruger, but on my part?) with the fellows who stick to their long-shot strategies despite people telling them they are crazy. I don't expect a huge payout, but advancing my capabilities still seems like it could lead to something real.