1  Game Development / Newbie & Debugging Questions / Re: design pattern to avoid exponential number of extensions on: 2016-01-27 18:47:20
Quote from: nsigma
Hope that makes some sense.

It does, and I appreciate the advice, and I've thought this through a couple of times. Sadly, the complexity of the coding involved, given my modest skills, would likely set me back several months, maybe even a year or more. I am going to stick with the adage: first get it to work, then make it more efficient. The "it" that needs to work is the higher-level, procedural music/sound tools. At that level, what I'm bringing to the party are insights and ideas arising from my experience, knowledge, craft and vision as a composer.

If something can be accomplished that allows for modest arrangements involving maybe a half-dozen synths or 32-note concurrency, then that will be proof of concept, and will justify devoting more time to coding to double or triple the number of notes, say. (Or lead to financing that allows hiring a pro to do it!)

Quote from: sazkul7c1
...advice to use interfaces.
Interfaces are great. I have been preferring them (and composition) over extensions for a while now. Getting better with them, and understanding how to use them with functions, seems to me a core Java programming skill, as well as one that builds functional programming chops that are potentially helpful with almost any language. Seems very much worth the investment.
2  Game Development / Newbie & Debugging Questions / Re: design pattern to avoid exponential number of extensions on: 2016-01-27 04:43:45
When you get a chance and can possibly post your recent coding efforts I'd be glad to do a code review and see if there are any suggestions I can offer.  Are you targeting JDK 8?
Cool!
Yes, I'm using JDK 8. I guess that means I have "targeted" it. I did not think through a decision based on who might be using or running the code--just on my attempt to stay current with skills and learn new stuff.
3  Game Development / Newbie & Debugging Questions / Re: design pattern to avoid exponential number of extensions on: 2016-01-27 00:59:07
I'm skimming on cell.  This sounds like a good place to use a functional interface

I was kind of thinking the same thing. Again, I will have to do this slowly, as it is relatively unfamiliar territory, and it's unclear to me whether I can do it without adding a significant cpu cost, since the composed function sits within the most expensive while loop.

[Example of an alternate while(): I'm thinking about using a 32-frame collector size as an alternative to the current 1-frame size. All the polyphonic Note values are "collected" by the associated Synth, summed, and handed as a single value to the audio mixer. The new-to-me idea is to make the 32-frame collector a circular queue and iterate through it one frame at a time (continuing to process 1 frame at a time), but where the size of the collector allows one to put the left or right stereo track data back to where it will be read at a later iteration, based on the panning value. It seems to me that, at 44100 fps, 32 frames will be large enough to accommodate a reasonable approximation of sound traveling the width of one's head, which theoretically is the amount of time delay relevant for temporal binaural effects. To be experimented on soon! This is based upon another one of Riven's suggestions, from another thread.]
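
Roughly what I have in mind, as an untested sketch (the class name, the 32-frame size, and the pan-to-delay mapping are all placeholders):

// Untested sketch: a 32-frame circular stereo collector that delays the
// far-ear track by up to ~0.7 ms (31 frames at 44100 fps), based on pan.
public class PanDelayCollector
{
   private static final int SIZE = 32; // power of two, so we can mask
   private final float[] left = new float[SIZE];
   private final float[] right = new float[SIZE];
   private int readPos;

   // pan in [-1..1]: positive = toward the right ear, negative = toward the left.
   // Call add() for every sounding note, then nextFrame() once per frame.
   public void add(float value, float pan)
   {
      int delay = (int)(Math.abs(pan) * (SIZE - 1));
      if (pan >= 0) // near ear gets the sample now, far ear gets it later
      {
         right[readPos] += value;
         left[(readPos + delay) & (SIZE - 1)] += value;
      }
      else
      {
         left[readPos] += value;
         right[(readPos + delay) & (SIZE - 1)] += value;
      }
   }

   // Hand one summed stereo frame to the mixer and advance the queue.
   public float[] nextFrame()
   {
      float[] frame = { left[readPos], right[readPos] };
      left[readPos] = 0;
      right[readPos] = 0;
      readPos = (readPos + 1) & (SIZE - 1);
      return frame;
   }
}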

Will have a chance to look deeper after I get some stuff about controlling timbre better organized/architected, which seems to be occupying my brain right now.

Quote from: nsigma
A comment on your specific use case - most polyphony code I've seen does both these things.  They use a fixed pool of SynthNote, and search for an available note in the following way.

Useful! Thanks. I will probably make that set of steps one of several options. One reason not to: I want to give the environmental placement and individual timbre as much weight as pitch in the first step. In that case, might as well just go for the first unused note. But this is getting off topic. Better to message me if there is more you want to say about the getNote() process, unless it involves composition/functional interface aspects.
4  Discussions / Miscellaneous Topics / Re: What I did today on: 2016-01-25 05:49:37
Hell yes, SATs were delayed because of snow. Hoping to get a 2000+ this go :) got an 1890-something last time.

Good luck!

When I took the SAT (long ago) there were only two tests, English and math, 800 pts possible each. I "studied" for the English by reading lots of Huxley and looking up every word I didn't know. Only managed 700 on the English, but strangely it was at a higher percentile (of UCB applicants) than the 750 I got on the math. It got me into UC Berkeley, my goal. (But the lack of study skills--why study if you can get ok grades via cramming?--made getting through college very difficult.)
5  Game Development / Newbie & Debugging Questions / Re: design pattern to avoid exponential number of extensions on: 2016-01-24 23:51:12
Riven, I just finished a successful implementation of your suggestion, and it works quite well.

What I did:

1) Created an interface each for methods aa(), bb(), cc(), named InterfaceAA, InterfaceBB, InterfaceCC.
2) Created a default implementation for each, as classes CoreAA, CoreBB, CoreCC.
3) Created alternative implementations for each, as classes AltAA, AltBB, AltCC.

4) In my AbstractCoreClass, I created three variables: interfaceA, interfaceB, interfaceC and instantiated them with CoreAA, CoreBB, CoreCC, as the default implementations.

5) In my AbstractCoreClass, the method aa() was rewritten to execute the following: "interfaceA.aa();" Parallel writing for bb() & cc().

Now, when I create a new class that extends AbstractCoreClass, I have the option of instantiating the alternative implementation classes and loading them into the corresponding variables. I also still have the ability to override methods in AbstractCoreClass and to write additional methods.
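
In skeleton form, the arrangement looks like this (only one of the three method families shown; SpecialSynth is a made-up example subclass):

// Step 1: one interface per method family (only aa() shown).
interface InterfaceAA
{
   void aa();
}

// Step 2: the default implementation.
class CoreAA implements InterfaceAA
{
   @Override
   public void aa() { /* default behavior */ }
}

// Step 3: an alternative implementation.
class AltAA implements InterfaceAA
{
   @Override
   public void aa() { /* alternate behavior */ }
}

// Steps 4 & 5: the abstract core holds a variable of the interface type,
// defaults it to the core implementation, and delegates to it.
abstract class AbstractCoreClass
{
   protected InterfaceAA interfaceA = new CoreAA();

   public void aa()
   {
      interfaceA.aa();
   }
}

// A concrete subclass that opts in to the alternative implementation.
class SpecialSynth extends AbstractCoreClass
{
   SpecialSynth()
   {
      interfaceA = new AltAA();
   }
}

With this in place, each combination of overrides is just a matter of which implementations get loaded, rather than a new abstract subclass.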

Is there an existing name for this pattern?

Now to figure out if I can make this work with my specific case! I didn't get into describing the specifics because of not wanting to complicate things.

>> stop here for tldr version <<

This is for use with the synthesizers I have been writing. I had been cutting and pasting lots of code, much of it duplicate, over the course of the last year and a half or more, making around 20 individual synths. I came up with some changes that I wanted to apply to all of them, but was resistant to making the change in each and every synth. So, I created an abstract CoreSynth with the desired new capabilities, and have been rewriting my synths as extensions of this CoreSynth (am about 2/3rds through the first pass).

During the course of the rewrite, I started seeing where it would be beneficial to have something like another layer of abstract synths with certain "extras" or "special features" so that the extra or special feature would NOT have to be rewritten for each concrete synth that uses it. That is what motivated this thread. With the pattern you describe, the functionality in the synth will be rewritten to have a default implementation and an alternative, optional implementation. I won't have to actually create a proliferation of intermediate abstract synths, but instead can optionally put in the alternate methods as needed.

Here is an example. The synth has a premade pool of SynthNotes, which matches the polyphonic capability of the synth. The default implementation searches through this collection of notes for one that is flagged "isAvailable".

An alternative implementation defines the pool of SynthNotes by creating one for each permitted pitch. When a new SynthNote is needed, the search pulls the SynthNote for that specific pitch and restrikes it (whether the note is playing or not).

The first case is good for music where few notes play at the same time, but the choice of pitch is wide or undetermined. The second case is good where many notes are heard at once, but from a known limited set of pitches. The need for many notes at once is a common result where there are long decay times. The restriking of a note that is in the process of decaying can sound perfectly clean and requires less cpu than having multiple instances of the same note, where the loudest effectively masks (aurally) those that are more decayed.
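
In terms of the pattern above, the two search behaviors could be expressed as two implementations of one interface. A rough sketch (SynthNote is reduced to a stub here, and all names are placeholders):

import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in; the real SynthNote holds oscillators, envelopes, etc.
class SynthNote
{
   private boolean available = true;
   boolean isAvailable() { return available; }
}

// The pluggable search behavior.
interface NoteAllocator
{
   SynthNote getNote(int pitch);
}

// Default: search a fixed pool, matching the polyphony, for an available note.
class PooledAllocator implements NoteAllocator
{
   private final SynthNote[] pool;

   PooledAllocator(int polyphony)
   {
      pool = new SynthNote[polyphony];
      for (int i = 0; i < polyphony; i++) pool[i] = new SynthNote();
   }

   @Override
   public SynthNote getNote(int pitch)
   {
      for (SynthNote note : pool)
      {
         if (note.isAvailable()) return note;
      }
      return null; // pool exhausted; caller decides what to steal
   }
}

// Alternative: one note per permitted pitch, restruck whether playing or not.
class PerPitchAllocator implements NoteAllocator
{
   private final Map<Integer, SynthNote> notesByPitch = new HashMap<>();

   @Override
   public SynthNote getNote(int pitch)
   {
      return notesByPitch.computeIfAbsent(pitch, p -> new SynthNote());
   }
}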

Other examples are things like real-time volume or real-time timbre response, which require slightly more steps in relatively costly while loops (where target levels are reconciled with actual levels). So, instead of making those capabilities "default" and present on every synth, I wanted to make them optional.
6  Game Development / Newbie & Debugging Questions / Re: design pattern to avoid exponential number of extensions on: 2016-01-24 20:23:36
Thanks Riven! I'm not a good enough programmer to just read and imagine the solution you proposed. I'm going to have to use my fingers and toes and Eclipse to try to program a simple case of what you described. Exercises like these will eventually make it easier to visualize via reading the verbal description. (I hope!)

It does seem that Decorator (which I HAD come across before) isn't exactly what I'm looking for, as the result of that pattern is an instance of a class, not an abstract class that itself can be subclassed.
7  Game Development / Newbie & Debugging Questions / Re: design pattern to avoid exponential number of extensions on: 2016-01-21 23:01:07
OK, just spotted the "decorator" pattern and am reading up on it. Maybe it will help.
8  Game Development / Newbie & Debugging Questions / design pattern to avoid exponential number of extensions on: 2016-01-21 22:58:38
This conundrum is making me feel like a newbie. I'm trying to follow the standard advice of limiting the amount of duplicate code.

Let's say we have a fairly involved abstract class as a starting point.
Let's say it has methods aa(), bb(), cc(), dd(), ee().

Now, let's say that of the subclasses being made, some use identical code to override aa(), others use identical code to override bb(), and others use identical code to override cc().

One solution would be to make three abstract subclasses to match the three common cases. Then the overriding code is only written once for each case.

My conundrum, though, is that the subclasses that require overrides of aa(), bb(), and/or cc() may do so in any combination. For example, one requires the new aa() and bb() but not the new cc(); another requires the new bb() and cc() but not aa(). All in all, there are (2^3)-1 possible combinations of these method overrides. Writing an abstract subclass with duplicate code for each combination is what I am hoping to avoid. (Am also worried about the growth pattern getting worse if yet more common overrides prove useful and independent.)

At that point, maybe it makes sense to just store the overriding methods as text file templates and paste them in when making the subclasses directly from the abstract class.

Seems like this would be an issue that has been solved many times in the past, but I am failing to think of keywords to search for how others have handled this or if there is a design pattern for it.

Anyone else familiar with how this might be best implemented? Or do I just live with the duplicate code?
9  Game Development / Newbie & Debugging Questions / Re: Problem with serialization on: 2016-01-17 23:41:51
No way to tell without the actual error stacktraces. Ask your users to start the game on the console and post the full output. Also, are you packaging your own copy of the JRE? If not, it's highly recommended, so you don't have to debug for a myriad of JRE versions. Also, start the game from a wrapper to set the required heap and stack sizes yourself.
This is what I was going to recommend, but cylab got there first.
10  Game Development / Game Play & Game Design / Re: Risk design on: 2016-01-17 23:39:56
I am assuming you are referring to the board game "Risk". Is that correct? If so, are there licensing issues?

Just a reminder: JavaFX is very powerful, easier to use than Swing, and well integrated with Java now (compared to the first attempt several years ago). You don't mention it as a possible way to program the graphics, but I think it would be easier and cleaner than Swing. Worth taking a bit of time to consider, especially since there seems to be more emphasis on supporting and developing JavaFX than Swing going forward.
11  Java Game APIs & Engines / Java 2D / Re: Difficulties with using RadialGradientPaint as a lighting system on: 2016-01-17 23:34:55
A quick question or two to clarify: (1) Does this only happen while moving? Does the clipping go away after the objects are stationary at the new locations for a number of frames? (2) Can this clipping occur on any edge (top, bottom, left, right) or multiple edges?
12  Discussions / Miscellaneous Topics / Re: What I did today on: 2016-01-16 06:28:12
Why is it so hard to send out invoices sometimes? You'd think a person would be impatient to get the ball rolling for the income to show up. Got my bill sent to UCSF today for an ongoing part-time contract. I had finished a stage of the Shepard Chord improvements yesterday, but was trying to use discipline to not post until my "day job" work was invoiced.

So, now up: the Shepard Chord program can glissando "forever" either up or down. I also posted a version of the InputStream I wrote for saving these sounds as wavs for use in games.

Coming up next, an API for use of the jar as a library. Have to set up and test that sort of use first, though. Am open also to requests to modify things for specific game sf/x's.
13  Game Development / Shared Code / Re: Audio: Write generated PCM to file on: 2016-01-16 05:57:15
The following is one of two InputStreams that I am now using with my audio code, and with the new Shepard Chord builder, when saving data to wav files. The other is specifically tied to the output of my mixer, so it won't be as useful to others. But this one allows you to load an array and save it as a playable stereo wav file. More precisely, it can be used as the input parameter when creating an AudioInputStream, which in turn can be used for writing via AudioSystem.write().

The expected data array should be stereo floats (i.e., left track, right track), one pair per frame, where the floats range over [-1..1].

import java.io.IOException;
import java.io.InputStream;

public class StereoPcmInputStream extends InputStream
{
   private float[] dataFrames;
   private int framesCounter;
   private int cursor;
   private int[] pcmOut = new int[2];
   private int[] frameBytes = new int[4];
   private int idx;
   
   private int framesToRead;
   
   public void setDataFrames(float[] dataFrames)
   {
      this.dataFrames = dataFrames;
      framesToRead = dataFrames.length / 2;
   }
   
   @Override
   public int read() throws IOException
   {
      while(available() > 0)
      {
         idx &= 3;
         if (idx == 0) // set up next frame's worth of data
         {
            framesCounter++; // count elapsing frames
           
            // scale to 16 bits
            pcmOut[0] = (int)(dataFrames[cursor++] * Short.MAX_VALUE);
            pcmOut[1] = (int)(dataFrames[cursor++] * Short.MAX_VALUE);
           
            // output as unsigned bytes, in range [0..255]
            // (mask with 0xFF rather than cast, to avoid sign extension)
            frameBytes[0] = pcmOut[0] & 0xFF;
            frameBytes[1] = (pcmOut[0] >> 8) & 0xFF;
            frameBytes[2] = pcmOut[1] & 0xFF;
            frameBytes[3] = (pcmOut[1] >> 8) & 0xFF;
           
         }
         return frameBytes[idx++];
      }
      return -1;
   }

   @Override
   public int available()
   {
      // NOTE: not concurrency safe.
      // 1st term: 4 bytes per frame for the frames not yet started
      // 2nd term: bytes of the current frame that remain to be read
      // (the modulo keeps the count right before the first read, when
      // idx == 0, and at the end of each frame, when idx == 4)
      return 4 * (framesToRead - framesCounter) + ((4 - idx) % 4);
   }

   @Override
   public void reset()
   {
      cursor = 0;
      framesCounter = 0;
      idx = 0;
   }
   
   @Override
   public void close()
   {
      // nothing to close, actually
   }
}
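
For reference, a minimal sketch of using the class above to write a file (the demo class and tone data are placeholders; the format is the stereo analog of the mono example further down):

import java.io.File;
import java.io.IOException;

import javax.sound.sampled.AudioFileFormat;
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;

public class StereoWriteDemo
{
   public static void main(String[] args) throws IOException
   {
      // One second of a 330 Hz tone, panned a bit to the left (placeholder data).
      float[] data = new float[44100 * 2];
      for (int i = 0; i < 44100; i++)
      {
         float val = (float)Math.sin((i * 330) / 44100f * 2 * Math.PI);
         data[i * 2] = val * 0.6f;     // left track
         data[i * 2 + 1] = val * 0.4f; // right track
      }

      StereoPcmInputStream stream = new StereoPcmInputStream();
      stream.setDataFrames(data);

      // Stereo analog of the mono format used in the later example:
      // 2 channels, 4 bytes per frame, little-endian.
      AudioFormat format = new AudioFormat(AudioFormat.Encoding.PCM_SIGNED,
            44100, 16, 2, 4, 44100, false);

      AudioInputStream ais = new AudioInputStream(stream, format, data.length / 2);
      AudioSystem.write(ais, AudioFileFormat.Type.WAVE, new File("stereoTest.wav"));
   }
}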
14  Games Center / WIP games, tools & toy projects / Re: Audio library demos on: 2016-01-16 05:38:31
More than 60 days since last post! Dang, it took a long time to finish the modifications/improvements to the Shepard Chord builder.

New features:
1) the glissando function works now
2) real time volume changes are working
3) two 3-octave ranges (high, low) and a 4-octave version (wide) to choose from
4) ability to take a loop and save it to a wav file now in place

About the last feature: some improvement could be useful, as the "Moog52" patch has a distinct attack and the LERP that is done to smooth out the loop (for the gliss version) doesn't eliminate the change in timbre.

But, maybe this is cool: if you load the wav as a Clip and play it via Loop, it will ascend or descend smoothly and continuously. No breaks.

Last thing: it should be possible to link the jar to a project as an external library, and to generate tones procedurally. The nice thing is that you are not limited to the parameters I set. For example, you can set the ascending or descending going so fast that it turns into a sort of frequency modulation effect. Also, you can set your own pitch range and starting pitches, rather than relying on the values hard coded into the gui.

I will test this shortly and post an API.

A spinning skin would be nice...am using JavaFX for the gui.
15  Game Development / Shared Code / Re: Audio: Write generated PCM to file on: 2016-01-12 22:00:19
Thanks for the compliment! :)
I put a bunch of time revising the code so it would be a readable expression of the algorithm. Nice to have this recognized. And happy to have improvements posted!

InputStream has important requirements. It restricts the output of read() to ints in the range 0..255, with -1 reserved to signal end-of-file. A (byte) cast returns values from -128 to 127: in addition to corrupting the data (the sign bits are mangled), it returns occasional -1's, which prematurely signal the end of the InputStream. Masking with & 0xFF keeps only the low eight bits, which guarantees a value in 0..255. (A (char) cast avoids negatives, since char is an unsigned 16-bit type, but it can still pass along bits above the low byte.)

I think what happens at the bit level can be illustrated with the audio byte [1000 0011]. When that byte is placed into an int, it is sign-extended: the upper 24 bits are filled with copies of the sign bit, producing a negative int rather than the intended unsigned 131. Worst case, the byte [1111 1111] becomes the int -1, which reads as a premature end-of-file. Masking with & 0xFF clears those upper bits, so [1000 0011] arrives as the correct unsigned value.
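
A tiny standalone demo of the difference (a hypothetical class, just for illustration):

public class SignExtensionDemo
{
   public static void main(String[] args)
   {
      byte b = (byte)0b10000011;  // the audio byte from the example above

      int viaCast = b;            // sign-extended: upper 24 bits filled with 1's
      int viaMask = b & 0xFF;     // masked: safely inside the 0..255 range read() requires

      System.out.println(viaCast + " vs " + viaMask); // prints: -125 vs 131
   }
}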

In fact, I made this very mistake, trying to use (byte) cast and getting truncated noise in my test files. Being thrown by this was part of what made me miss my "What-I-did-today" deadline.

Also, I think in order to compile, you need the first line to be as follows:

    short pcmValue = (short)(audioVal * 32767);


Maybe the following is a step in the right direction:
    int pcmValue = (int)(audioValue * Short.MAX_VALUE); // scale value to signed 16 bits
    frameBytes[0] = pcmValue & 0xFF;         // "little" byte, masked to [0..255]
    frameBytes[1] = (pcmValue >> 8) & 0xFF;  // "big" byte, masked to [0..255]


The >> operator does accept a short operand, but Java promotes short (and byte and char) values to int before shifting or doing arithmetic, so declaring pcmValue as a short wouldn't save anything. Hence the overly long debugging session!
16  Discussions / Miscellaneous Topics / Re: What I did today on: 2016-01-12 19:18:32
The task I declared I would finish on Sunday was completed yesterday morning on the bus to work, and formatted for posting here just now.

http://www.java-gaming.org/topics/what-i-did-today/33622/msg/352716/view.html#msg352716

On to integrating the new functionality for writing audio to my ShepardChord generator! Maybe I can get that posted today (might be ambitious, will require modifying for stereo, testing).
17  Game Development / Shared Code / Audio: Write generated PCM to file on: 2016-01-12 18:58:14
The following code will allow you to write a wav file from procedurally generated audio data.

For example, maybe you have written a mono synthesizer or are mixing sounds and want to save the result, as opposed to playing it back. The function that Java provides for writing audio data to file is a method belonging to javax.sound.sampled.AudioSystem:

static int    write(AudioInputStream stream, AudioFileFormat.Type fileType, File out)

Writes a stream of bytes representing an audio file of the specified file type to the external file provided.


A tricky aspect is creating an AudioInputStream from procedurally generated audio data (assumed to be PCM values encoded as signed, normalized floats). AudioInputStream takes either TargetDataLine or InputStream as a parameter. Instead of streaming, this implementation subclasses InputStream, for outputting a predefined number of frames. The inner class PCMInputStream can be extracted and modified from the example code. Feel free to modify to work on stereo or different audio formats or to accept the audio source function as a parameter instead of being hard coded.

The example code is in the "get it to work" stage, with extra comments. It creates a 2-second long note, pitched at E above middle C. As it starts and stops abruptly, there will probably be a click at the beginning and end.


import java.io.File;
import java.io.IOException;
import java.io.InputStream;

import javax.sound.sampled.AudioFileFormat;
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;

public class DevAudioWrite
{
   PCMInputStream ps;

   public static void main(String[] args)
   {
      DevAudioWrite dw = new DevAudioWrite();
      dw.ps = dw.new PCMInputStream();
      dw.ps.framesToFetch = 44100 * 2; // two seconds at 44100 fps
     
      // MONO wav format used in example
      AudioFormat audioFormat = new AudioFormat(
         AudioFormat.Encoding.PCM_SIGNED,
         44100, 16, 1, 2, 44100, false);

      // params: InputStream, AudioFormat, length in frames
      AudioInputStream ais = new AudioInputStream(dw.ps, audioFormat, dw.ps.framesToFetch);
     
      try {
         System.out.println("ais.format()=" + ais.getFormat());
         System.out.println("ais.frameLength()=" + ais.getFrameLength());  
         System.out.println("ais.available()=" + ais.available());
      } catch (IOException e1) {
         e1.printStackTrace();
      }        
     
      File file = new File("test.wav");
      System.out.println("file is at following location:");
      System.out.println("" + file.getAbsolutePath());
     
      try {
         
         AudioSystem.write(ais, AudioFileFormat.Type.WAVE, file);

         System.out.println("finished AIS, available() = " + ais.available());

      } catch (IOException e) {
         e.printStackTrace();
      }
   }

   class PCMInputStream extends InputStream
   {
      private int cursor, idx;
      private int[] frameBytes = new int[2];
      int framesToFetch;
     
      @Override
      public int read() throws IOException
      {
         while(available() > 0)
         {
            idx &= 1;
            if (idx == 0) // set up next frame's worth of data
            {
               cursor++; // count elapsing frames
               
               // Your audio data source call goes here.
               float audioVal = audioGet(cursor);
               
               // convert signed, normalized float to unsigned bytes:
               int pcmVal = (int)(audioVal * 32767); // scale value to 16 bits
               frameBytes[0] = pcmVal & 0xFF;        // little byte, masked to [0..255]
               frameBytes[1] = (pcmVal >> 8) & 0xFF; // big byte, masked to [0..255]
            }
            return frameBytes[idx++]; // but only return one of the bytes per read()
         }
         return -1;
      }  

      // Following is a substitute for your audio data source. Can be
      // an external audio call instead.
      // Input: if your function needs no inputs, eliminate the input param
      // Output: must be normalized signed float, one track of one frame.
      private float audioGet(long ii)
      {
         int frequency = 330;
         return (float)Math.sin((ii * frequency) / 44100f * 2 * Math.PI);
      }
     
      @Override
      public int available()
      {
         // Took a while to get this!
         // NOTE: not concurrency safe.
         // 1st term: 2 bytes per frame for the frames not yet started
         // 2nd term: bytes of the current frame that remain to be read
         // (the modulo keeps the count right before the first read, when
         // idx == 0, and at the end of each frame, when idx == 2)
         return 2 * (framesToFetch - cursor) + ((2 - idx) % 2);
      }
     
      @Override
      public void reset()
      {
         cursor = 0;
         idx = 0;
      }
   }
}
18  Discussions / Miscellaneous Topics / Re: What I did today on: 2016-01-10 19:14:24
What I am GOING TO DO today (dang it):

Make an extension of InputStream that I can tie to a data source (such as a synth that outputs stereo PCM) and use as a way to save that data as a wav file, using AudioSystem.write(audioInputStream, fileFormatType, file). Hopefully this will work rather than having to code a solution implementing a TargetDataLine, another possible path but a bit messier and not needed since I know the length of the destination wav file.

If I get a general solution working I will post the code, along with an implementation using the JavaFX FileChooser.

[EDIT: struggled with this for several hours, did not complete the task.]


[EDIT 2 (1/12/16): got it working finally. In addition to some off-by-one-fu, I overlooked two requirements of InputStream: *needs to send -1 when done, *needs to send ints in range 0..255, not signed bytes. Code is here.]
19  Discussions / Community & Volunteer Projects / Re: Programmer looking for team on: 2016-01-08 21:46:22

When you go from a one-man team to a two-man team, you suddenly have a lot of communication to do, which you did not need before (because you only had to communicate with yourself).

It seems to me that much of what is mentioned should be documented, regardless, if you want to have any chance of efficiently modifying/maintaining the code after having taken time off, or disengaged to go deep into any other project. There may be drawbacks to two-person teams, but the fact that it enforces some communication/documentation discipline is a benefit, imho.

[EDIT: and "hell" can result when the other team member drops the ball in this regard.]
20  Discussions / Miscellaneous Topics / Re: What I did today on: 2015-12-29 20:04:05
P.S. I looked at my old source code (open source) - it looks horrible XD
In my memory it was much better.
I feel sorry for it, but it was the best I could write back then.

Yeah you're truly never the best you can be. You can always get better. A year from now you will look at your current code and say the same thing!

Good example: Stephen Curry compared to last year (when he won MVP). Go Warriors!
21  Game Development / Game Play & Game Design / Re: Is a Binaural Sound Engine in Java possible? on: 2015-12-29 05:43:08
Uh...Mike, I'd like you to meet Neil. Neil, Mike. I believe you two have some interests in common...
 :)
22  Game Development / Game Play & Game Design / Re: Fatigue as a design element on: 2015-12-28 04:47:18
@CommanderKeith Interesting link, thanks! I'm going to have to read the entire 8 Core Drives.

I am just now remembering that when I played FarCry2 a while back, the character had to go to sleep now and again. I can't recall if it was mandatory or if there was degradation in abilities or performance. I do remember there were some long waits for scheduled events on occasion and sleeping was a good way to kill time. Fortunately, when sleeping, it all happens at once--you don't have to actually wait a few hours to play again. The strategic element is deciding whether you can "just do it" whatever the task is, if you are on a roll, even with the fatigue, or if you decide to make a retreat to rest and try again, taking the loss in elapsed time. (And in this it kind of resembles binge programming.)
23  Game Development / Game Play & Game Design / Fatigue as a design element on: 2015-12-27 03:54:52
I came across a debate recently about whether audio should be pristine (exclusively present the game elements recorded as nicely as possible), or if it should be more "realistic". In the real world, there is a tremendous amount of background noise, for example.

Because brains have mechanisms for focusing on sounds to the exclusion of other sounds (see "cocktail party effect"), it seems to me that background noise can be kept pretty minimal without disturbing the sense of realism. I don't know the specifics of how this happens (I recall terms like "gamma system," "reticular formation & habituation," "lateral inhibition" -- terms I came across back in the years I was studying cognitive psychology and audio perception at UC Berkeley), but in effect, our auditory system can to some extent turn down the perceived amplitude of some sounds and bring up the volume of others. The biggest component may be whether you can link the sound to a visual correlate of its source. The visual reinforcement helps the brain sync to the target audio and select it for cognitive amplification/focus.

At the same time, background sound can add a lot of texture and sense of place, as well as have an emotional influence. So it should often be a good idea to make it part of the sound design. But most of the time, background sounds are ignored and/or not even noticed, so maybe it is more "real" to have them be very quiet or at subliminal volumes, whatever that means. (Example, people are able to sleep in a busy city, even with lots of noisy traffic 24 hours a day. The brain habituates to the noise, recognizes it as not important and dials it down.)

The new thought, for me, was that the amplitude of the background sounds could be linked to a "health" or "fatigue" reading. I think most agree that as we get more tired, it often becomes harder to focus or concentrate. Introducing additional distracting elements or just making the background noise louder relative to more important sound content could simulate the effect of being tired. An avatar, upon waking, could have the background effects be quieter and less attention grabbing.

There might be ways to also do this visually. Somehow, the graphical representation of the world would have to be something where you could modulate how busy or cluttered the look is, while ostensibly keeping the appearance the same. For example, irrelevant texture details on a surface might get progressively more contrast or edge enhancement with fatigue.

I'm wondering if there are games where this is done and how useful this might be as a design idea.

Another component might be adding a bit of lag and/or introducing an additional lack of precision to the GUI controls for movement. Reaction times do get worse with tiredness. But this could easily be overdone and result in something that is just not fun anymore.

Kind of a weird notion when I first came across it, that in effect we have faders in our brains that we don't even know we are using.
24  Game Development / Game Play & Game Design / Re: Is a Binaural Sound Engine in Java possible? on: 2015-12-27 03:22:34
Interesting concept, using ray tracing to get real-time audio information.

https://www.youtube.com/watch?v=05EL5SumE_E

Comments indicate that the effort wasn't a complete success. Though it still seems impressive to me.
25  Discussions / Miscellaneous Topics / Re: What I did today on: 2015-12-27 03:17:24
Aaargh!

Been working on various fine points on the "Shepard Chord" audio demo. I now have it able to pitch the effect at different octaves (am going to probably have a "high" and a "low" in the gui). I also put in an option to select either a 3-octave or 4-octave effect.

But somewhere in there, because I neglected a basic and important design principle, the pitch gets progressively wonky over time. There are many oscillators that are having their frequency altered, and because they should all maintain fixed harmonic relationships, I should have just one stem source for the pitch changes, and generate the set via fixed multipliers.

Have to go back to figuring out the right 'architecture' and rebuild, once again. Third pass is the charm?
26  Game Development / Game Play & Game Design / Re: Is a Binaural Sound Engine in Java possible? on: 2015-12-27 02:56:50
I think Riven is mostly, but not completely, right about timing being a source of binaural location. It is the most important component for low to middle frequencies. As you get to wavelengths that are smaller than the size of your head, though, amplitude becomes increasingly important, especially for steady-state high sounds. This is what I remember from when I worked as a work-study lab assistant at a binaural lab at UC Berkeley back in the 1980's.

Another consideration is the frequency content. As sounds travel larger distances through air, the high frequency components die out quicker than the low components. You can significantly enhance the effect of distance by doing some low pass filtering.

Yet another consideration is that our ear shapes tend to "color" sound in a subtle way, depending on the direction of approach, and this can also help with correlating an incoming sound with its source.

Can Java handle this? I think so. I am trying to do so. On the "easy" side, when mixing sounds, one can make use of stereo PCM coding. It's not at all hard to take a sound value from some source and multiply it by, say 0.4 for the right and 0.6 for the left and have it sound like it is some degree towards the left. If you want to play with the timings, then it is mostly a matter of creating an array to use as a holding area and cursor through it with linear interpolation if you want to get smooth variations (smoother than that which can be done at 44100 fps increments). I've created arrays such as this and used them for echo and/or flanging effects. Easy to do, relatively.
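
To illustrate, here's a bare-bones sketch of that kind of holding array, cursored with linear interpolation (untested placeholder code, not lifted from my library):

// Sketch of a delay line whose read position is cursored with linear
// interpolation, allowing delays finer than whole 1/44100-second steps.
public class FractionalDelay
{
   private final float[] buffer;
   private int writePos;

   public FractionalDelay(int maxFrames)
   {
      buffer = new float[maxFrames];
   }

   public void write(float sample)
   {
      buffer[writePos] = sample;
      writePos = (writePos + 1) % buffer.length;
   }

   // delayFrames may be fractional (e.g. 13.25) and should be at least 1
   // and less than the buffer length.
   public float read(float delayFrames)
   {
      float pos = writePos - delayFrames;
      if (pos < 0) pos += buffer.length;
      int i0 = (int)pos;
      int i1 = (i0 + 1) % buffer.length;
      float weight = pos - i0;
      return buffer[i0] * (1 - weight) + buffer[i1] * weight; // LERP
   }
}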

I have a thread where I am showing java audio programs as I write them, with a couple sample programs which you can download and hear for yourself. In the first (CirclesDemo) there are synthesized musical motifs that are played with a panning setting that correlates to a ball's location off of the center axis, and a volume that correlates to the distance from the center. Six sound sources (all generated in real time) are shown, moving about the screen, where the position data is sent in real time to the mixer. In that demo, I'm only using volumes to create the binaural effect. I guess I should consider putting a little timing adjustment as well. Hmmm. Interesting idea. (Might be necessary to filter out the high components before the timing adjustment in order to prevent some comb-filtering artifacts. Worth a test when I get a chance.)

http://www.java-gaming.org/topics/audio-library-demos/36682/view.html

I haven't done more than the crudest of filtering so far. It seems costly in terms of CPU, but a lot of that is probably my ignorance and trepidation. Someone like Neil Smith (nsigma) has done this (and much, much, more -- check out his Praxis site!) and can be of more help with that. Just give him a day or three to notice this thread.
27  Game Development / Game Play & Game Design / Re: Using noise generation to generate map on: 2015-12-15 08:16:06
I like the Simplex Noise implementation from Gustafson, myself. But whichever one you use, mostly the same principles apply.

If you want more abrupt changes, then you need to add higher frequency content. If you want to stay within the Simplex-random-generative algorithm, you can add more octaves, via a fractal weighting, to get steeper slopes. But it is also perfectly legit to "modulate" the simplex algorithm with your own functions over a given area.

For example, I made the graphic below by modulating the function with a sine function on the x-axis and with a radial gradient (getting the slight arch effect; the center pole is higher than the end poles).

[image: simplex height map modulated by a sine on the x-axis and a radial gradient]



Or if you want a specific geography, you can just generate it and put it in place (as ShadedVertex suggests), but I'd also use some linear interpolation for cross-fading at the boundary between the two sources.
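
In code, the kind of modulation I mean might look roughly like this (assuming a static SimplexNoise.noise(x, y) method, as in Gustafson's implementation; the class and all constants are placeholders):

// Sketch: fractally weighted simplex noise, modulated by a sine on the
// x-axis and a radial gradient (1 at the center, 0 at maxRadius).
public class NoiseModulationSketch
{
   public static float modulatedHeight(float x, float y,
         float centerX, float centerY, float maxRadius)
   {
      // two octaves of simplex noise, fractally weighted
      float noise = 0.5f * (float)SimplexNoise.noise(x * 0.01, y * 0.01)
                  + 0.25f * (float)SimplexNoise.noise(x * 0.02, y * 0.02);

      // sine modulation along the x-axis
      float sine = (float)Math.sin(x * 0.05);

      // radial gradient, producing the slight arch effect
      float dx = x - centerX;
      float dy = y - centerY;
      float dist = (float)Math.sqrt(dx * dx + dy * dy);
      float radial = Math.max(0f, 1f - dist / maxRadius);

      return noise * sine * radial;
   }
}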
28  Game Development / Game Mechanics / Re: Simplex noise for reasonable maps on: 2015-12-15 08:05:24
Ken Perlin describes using an ABS function on the [-1...1] output, and mapping the resulting [0..1] to a color map or color function. With the ABS function, he calls the result "Turbulent" noise.

When the [-1..1] output is instead mapped to [0..1] via f(x) = (x + 1) / 2, and the color mapping function is then applied, he calls it "Smooth" noise.

To me, the "Turbulent" noise does a better job of making landscapes/maps. There is more activity at the 0 end, and a sort of "crease" or "fold" in the function, and as it heads towards 1, there is an ever decreasing likelihood of occurrence.

Octaves relate to the rate of change. The higher the octave, the more variability to the result. Or another way to think about it, the octave relates to how close the reference poles are to each other. The curvature results from angles being assigned at the poles and lines computed to connect the space so that each pole is intersected at the random angle assigned to it.

When you combine octaves, you are combining (via weights) the differing rates of randomness. If the higher octave is given a higher weight, there will be a finer grain to the randomness. For some reason, the "fractal" progression of weights seems to be one of the most satisfying, including for creating topology. In that case, the weight is inversely proportional to the octave's frequency: 1/2 * octave 1 + 1/4 * octave 2 + 1/8 * octave 3, where octave n has frequency 2^n. Adding up octaves with weights in this sequence always stays safely within the bounds of [0..1], whether using the "smooth" or "turbulent" algos above.
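
A sketch of that combination (again assuming a static SimplexNoise.noise(x, y); the holder class is hypothetical and untested):

public class FractalNoiseSketch
{
   // Weight 1/2^n at frequency 2^n. "Turbulent" takes the absolute value;
   // "smooth" maps [-1..1] to [0..1]. Either way, since the weights sum to
   // less than 1, the result stays within [0..1].
   public static double fractalNoise(double x, double y, int octaves, boolean turbulent)
   {
      double sum = 0;
      for (int n = 1; n <= octaves; n++)
      {
         double freq = Math.pow(2, n);  // octave n
         double weight = 1.0 / freq;    // inversely proportional weight
         double val = SimplexNoise.noise(x * freq, y * freq);
         sum += weight * (turbulent ? Math.abs(val) : (val + 1) / 2);
      }
      return sum; // ready for a color map or height map
   }
}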

My brain is a little too full right now to closely read the code you posted. I hope my above lines are actually helpful somehow and not off topic.

By the way, the Hugo Elias article is a useful read, but the noise he describes is not Perlin noise or even gradient noise, if I remember correctly. With gradient noise, random gradients (angles) are assigned to the pole points, not random values. I think each pole point actually has a value of 0, but the line or plane passing through it is at a random angle.
29  Discussions / Miscellaneous Topics / Re: What I did today on: 2015-12-09 21:22:47
Finished writing an abstract "core" phase modulation synth today (have been working on this for two weeks?). It was a bit of a stretch for me, as I am used to using "implements" rather than "extends", and the synth has some important inner classes which complicated the use of abstract a bit. The new structure eliminates a LOT of duplicate code and allows external "EnvelopDataSet" objects to be loaded to alter selected operator envelopes, so a single "main" synth can have multiple versions that make use of extended EnvelopDataSets for that synth.

Am toying with the idea of having a GUI made at this point. The data points and coding points are approaching a clarity where a GUI could be used to make and tweak these synths. But first, I want to convert the synths used in the ShepardTone demo, and post the revision that has the glissandi synths.
30  Games Center / WIP games, tools & toy projects / Re: Vangard on: 2015-12-04 19:48:04
I'm curious about the recipe system, how it is going, what the complications are.
Does one specify a series of "actions" or a series of "states"?
When I cook, I rarely follow the recipe in a completely accurate way, for better and often for worse. :P