  3D Sound Engine  (Read 75312 times)
Offline paulscode (Senior Member, Medals: 11)
« Reply #150 - Posted 2011-08-31 17:17:18 »

It should be a fun math exercise, anyway. I'll see what I can come up with. Besides, who knows - at some point some kind of head-orientation device may become widely used, and then you could simply feed the listener orientation to the SoundSystem and the problem is solved :)

Offline paulscode (Senior Member, Medals: 11)
« Reply #151 - Posted 2011-08-31 17:56:16 »

This conversation just gave me a great idea for my next project: a pattern-recognition algorithm that picks out the eyes and nose from webcam input. Knowing their positions in the 2D webcam image could be used to calculate a fairly accurate orientation for the player's head, and that could drive the positional audio data for a scene. I am totally excited about this!

Online Riven « League of Dukes » (JGO Overlord, Medals: 605, Projects: 4, Exp: 16 years)
« Reply #152 - Posted 2011-08-31 18:02:37 »

You'd also have to know the position and orientation of the webcam.

Offline paulscode (Senior Member, Medals: 11)
« Reply #153 - Posted 2011-08-31 18:11:12 »

Yep. I'm thinking a simple "please look directly at the dot on the screen" configuration step to generate a correction matrix to apply to the orientations calculated later.

Offline teletubo « League of Dukes » (JGO Ninja, Medals: 48, Projects: 6, Exp: 8 years)
« Reply #154 - Posted 2011-08-31 19:02:23 »

Speaking of ideas: if you implement the "cetera" algorithm in your engine, it could also open up new possibilities for producing more complex games for visually impaired people. Or even a game meant to be played with your eyes closed (and no more excuses for not finishing a game because of the art ;) )

Offline paulscode (Senior Member, Medals: 11)
« Reply #155 - Posted 2011-08-31 19:50:17 »

I've started coding this. The math for calculating the timing and gain differences is actually surprisingly simple (which kind of worries me for some reason). What I have to figure out now is how to take those calculated values and apply them. I'm thinking a basic two-mono-inputs-to-one-stereo-output mixer. Gain differences are easy. Phase differences are going to be trickier (specifically, changing the phase difference dynamically while the sound is playing). I'm not sure whether this should be done with slight sample-rate changes or by "throwing out" data from whichever side needs to be ahead of the other. I suppose I'll just play around and see what sounds best.
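
A rough idea of the mixer I have in mind - a minimal sketch only, with made-up gains and whole-sample delays, not the SoundSystem's actual code:

Code:
/** Hypothetical "two gains + two delays" mono-to-stereo mixer sketch. */
public class ItdIldMixerSketch {

    /**
     * Mixes one mono signal into an interleaved stereo buffer.
     * gainLeft/gainRight model the level difference between the ears,
     * delayLeftSamples/delayRightSamples model the timing difference
     * (implemented by simply reading the mono data at an offset).
     */
    public static float[] mixToStereo(float[] mono,
                                      float gainLeft, float gainRight,
                                      int delayLeftSamples, int delayRightSamples) {
        float[] stereo = new float[mono.length * 2];
        for (int i = 0; i < mono.length; i++) {
            int li = i - delayLeftSamples;   // delayed read index for the left ear
            int ri = i - delayRightSamples;  // delayed read index for the right ear
            stereo[2 * i]     = (li >= 0 ? mono[li] : 0f) * gainLeft;   // left sample
            stereo[2 * i + 1] = (ri >= 0 ? mono[ri] : 0f) * gainRight;  // right sample
        }
        return stereo;
    }

    public static void main(String[] args) {
        float[] mono = new float[100];
        java.util.Arrays.fill(mono, 1f);  // constant dummy signal
        // Delay the right ear by 13 samples (~0.3 ms at 44100 Hz, on the order of a
        // real interaural delay) and attenuate it to 70%.
        float[] stereo = mixToStereo(mono, 1.0f, 0.7f, 0, 13);
        System.out.println("right ear at sample 12: " + stereo[2 * 12 + 1]
                + ", at sample 13: " + stereo[2 * 13 + 1]);
    }
}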

Offline paulscode (Senior Member, Medals: 11)
« Reply #156 - Posted 2011-08-31 20:19:40 »

Thinking about these formulas, there isn't any difference between the values you get if the sound is playing in front of you or behind you. I think there needs to be a bit more to it. The phase difference shouldn't change, so I think the problem is in the gain difference calculation. My initial thought is that sounds from the front should have a greater gain difference (due to the shape of the ear, which amplifies incoming sound waves originating within a more-or-less cone-shaped region extending out from the ear). I'll have to think about this some more...
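
To spell out why the numbers come out identical front and back, here is a tiny illustration using a simplified sin-based timing formula (a stand-in of my own, not the engine's math): a source 30 degrees to the front-right and its mirror image to the back-right produce exactly the same value.

Code:
/** Hypothetical illustration of the front/back ambiguity in a simple timing model. */
public class FrontBackSymmetrySketch {

    static final double SPEED_OF_SOUND = 343.0; // m/s in air
    static final double EAR_SPACING    = 0.18;  // m, rough distance between the ears

    /** Interaural time difference for a distant source at the given azimuth
     *  (radians, 0 = straight ahead, positive = to the right). */
    static double itdSeconds(double azimuth) {
        return (EAR_SPACING / SPEED_OF_SOUND) * Math.sin(azimuth);
    }

    public static void main(String[] args) {
        double frontRight = Math.toRadians(30);        // 30 degrees, front-right
        double backRight  = Math.toRadians(180 - 30);  // mirrored position, back-right
        System.out.printf("front-right ITD: %.6f s%n", itdSeconds(frontRight));
        System.out.printf("back-right  ITD: %.6f s%n", itdSeconds(backRight));
        // Both lines print the same value, because sin(30 degrees) == sin(150 degrees):
        // the timing (and a symmetric gain model) cannot tell front from back.
    }
}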

Online Riven « League of Dukes » (JGO Overlord, Medals: 605, Projects: 4, Exp: 16 years)
« Reply #157 - Posted 2011-08-31 22:46:23 »

As said, you have to simulate the sound waves bouncing off the skull, or it will never be realistic. Besides, in real life it is actually very hard to distinguish a sound coming from directly in front of you from one directly behind you if there aren't any obstacles reflecting the sound and giving the brain extra clues.

Offline paulscode (Senior Member, Medals: 11)
« Reply #158 - Posted 2011-08-31 23:30:20 »

Ah, of course. I was focusing so much on the differences between the ears that I left out the "echo back to the ear" part of the equation. The phase difference per side is easy enough to calculate (it adds one more line to mix per side, so four total now). The gain difference for the echo is a bit more complicated, though. I wonder what an acceptable attenuation would be for the inside of the skull. Different from through the air, obviously, but I have no idea where I might find that. Maybe I could use an attenuation measurement made through water, since the head is mostly full of fluid after all.

Hmm... Even with an echo off the inside of the skull (which is more or less spherical), the values are still all the same whether the sound is in front or behind (i.e. symmetric across the x/y plane, so front-right would still sound exactly like back-right, etc.). I could add another echo per side off some imaginary sphere or cube that the listener is inside, to give the brain more information to process. But that would again be symmetric across the x/y plane, so mathematically there is no difference. Even an echo off some completely randomly positioned object would still be indistinguishable without some visual or other sensory cue to define whether that object is in front of or behind the listener. I'm really back to thinking this must have something to do with the extra gain added by the ear as a sound approaches a direction directly in front of it. Am I understanding the concept incorrectly?
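
For what the "one more line to mix per side" might look like, here is a toy feed-forward echo per channel. The delay and gain are placeholders; whether they should come from an in-air or an in-fluid attenuation figure is exactly the open question.

Code:
/** Hypothetical per-channel echo: adds one delayed, attenuated copy of the signal. */
public class SkullEchoSketch {

    static float[] addEcho(float[] channel, int delaySamples, float echoGain) {
        float[] out = new float[channel.length];
        for (int i = 0; i < channel.length; i++) {
            float echo = (i >= delaySamples) ? channel[i - delaySamples] * echoGain : 0f;
            out[i] = channel[i] + echo;  // dry signal plus its single "skull echo"
        }
        return out;
    }

    public static void main(String[] args) {
        float[] left = new float[64];
        left[0] = 1f;                              // an impulse
        float[] echoed = addEcho(left, 10, 0.3f);  // placeholder: 10-sample delay, 30% gain
        System.out.println("direct: " + echoed[0] + ", echo: " + echoed[10]);
    }
}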

Offline teletubo « League of Dukes » (JGO Ninja, Medals: 48, Projects: 6, Exp: 8 years)
« Reply #159 - Posted 2011-09-01 02:04:30 »

Quote:
The pinna, the outer part of the ear, serves to "catch" the sound waves. Your outer ear is pointed forward and it has a number of curves. This structure helps you determine the direction of a sound. If a sound is coming from behind you or above you, it will bounce off the pinna in a different way than if it is coming from in front of you or below you. This sound reflection alters the pattern of the sound wave. Your brain recognizes distinctive patterns and determines whether the sound is in front of you, behind you, above you or below you.

http://science.howstuffworks.com/environmental/life/human-biology/hearing1.htm

I think that will be a tough one to simulate.

Offline paulscode (Senior Member, Medals: 11)
« Reply #160 - Posted 2011-09-01 03:42:19 »

Quote:
I think that will be a tough one to simulate.

Agreed. So it "alters the pattern of the sound wave", but what does that really mean? I'm thinking this isn't something that can just be worked out theoretically - it will require some actual measurements and comparisons, first to recognize what those pattern differences are, and second to come up with a filter that recreates them and can be applied to the audio data. How realistic can a system be without this component? Well, I'll just have to see, I suppose. I'll continue working on the other components and maybe come back to this one later.

To counter my earlier argument, the skull actually isn't "more or less spherical". It's more like an upside-down bowl (a better representation might be a half-sphere with a flat bottom). The ears themselves are positioned toward the lower back, not smack in the middle. So the echo will take longer to return to the ear if a sound is coming from behind than if it is coming from the front (and longer if it is coming from below than from above). Likewise, it will pass through more space and attenuate more if it is coming from behind or below. So even without the pinna component, the brain can probably make the distinction by taking the phase and attenuation differences between the ears and comparing them to the phase and attenuation differences between the initial sound and its echo on each side.

All of this is really driving home just how complex positional audio is in the real world. As far as we've come with 3D graphics and virtual reality, on the audio side we are still practically in the stone age, simulating positional sound with the cosine function! It is about time some advances were made in this area.

Online Riven « League of Dukes » (JGO Overlord, Medals: 605, Projects: 4, Exp: 16 years)
« Reply #161 - Posted 2011-09-01 08:45:16 »

You have to treat it like photon mapping: you hit the skull with a bunch of 'audio rays' which bounce around the skull, producing more rays. The more accurate the shape of the head/ears, the more realistic the result. The nice part is that you don't actually have to use any complex math anywhere; 'pattern manipulation' is a side effect of what I just described.
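
A very rough toy version of that idea, with the head reduced to a sphere, the ears to two small capture points, and all step sizes and energy losses made up - just to show the shape of the algorithm, not a working acoustic simulation:

Code:
import java.util.Random;

/** Toy "audio ray" bouncer in the spirit of photon mapping.  All geometry and
 *  energy values are placeholders for illustration only. */
public class AudioRaySketch {

    static final double HEAD_RADIUS  = 0.09;   // m, head as a sphere at the origin
    static final double EAR_RADIUS   = 0.02;   // m, capture radius around each ear
    static final double STEP         = 0.005;  // m per marching step
    static final double MAX_DISTANCE = 3.0;    // m before a ray is discarded
    static final double BOUNCE_LOSS  = 0.6;    // energy kept per bounce (made up)

    static final double[] LEFT_EAR  = { -HEAD_RADIUS, 0, 0 };
    static final double[] RIGHT_EAR = {  HEAD_RADIUS, 0, 0 };

    public static void main(String[] args) {
        double[] source = { 0.0, 0.0, 1.0 };  // 1 m straight ahead of the head
        Random rng = new Random(42);
        int leftHits = 0, rightHits = 0;
        double leftEnergy = 0, rightEnergy = 0;

        for (int ray = 0; ray < 20000; ray++) {
            double[] d = { rng.nextGaussian(), rng.nextGaussian(), rng.nextGaussian() };
            normalize(d);                     // random unit direction from the source
            double[] p = source.clone();
            double energy = 1.0, travelled = 0.0;

            while (travelled < MAX_DISTANCE && energy > 1e-3) {
                for (int k = 0; k < 3; k++) p[k] += d[k] * STEP;  // march the ray
                travelled += STEP;

                if (dist(p, LEFT_EAR) < EAR_RADIUS)  { leftHits++;  leftEnergy  += energy; break; }
                if (dist(p, RIGHT_EAR) < EAR_RADIUS) { rightHits++; rightEnergy += energy; break; }

                if (length(p) < HEAD_RADIUS) {        // stepped into the skull: bounce
                    double[] n = p.clone();
                    normalize(n);                     // sphere normal (points outward)
                    double dot = d[0]*n[0] + d[1]*n[1] + d[2]*n[2];
                    if (dot < 0) {                    // only reflect rays heading inward
                        for (int k = 0; k < 3; k++) d[k] -= 2 * dot * n[k];
                        energy *= BOUNCE_LOSS;
                    }
                }
            }
        }
        System.out.println("left ear:  " + leftHits + " arrivals, energy " + leftEnergy);
        System.out.println("right ear: " + rightHits + " arrivals, energy " + rightEnergy);
    }

    static void normalize(double[] v) {
        double len = length(v);
        for (int i = 0; i < 3; i++) v[i] /= len;
    }
    static double length(double[] v) { return Math.sqrt(v[0]*v[0] + v[1]*v[1] + v[2]*v[2]); }
    static double dist(double[] a, double[] b) {
        double dx = a[0]-b[0], dy = a[1]-b[1], dz = a[2]-b[2];
        return Math.sqrt(dx*dx + dy*dy + dz*dz);
    }
}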

Offline cylab (JGO Knight, Medals: 34)
« Reply #162 - Posted 2011-09-01 09:41:38 »

Quote:
You have to treat it like photon mapping: you hit the skull with a bunch of 'audio rays' which bounce around the skull, producing more rays. The more accurate the shape of the head/ears, the more realistic the result. The nice part is that you don't actually have to use any complex math anywhere; 'pattern manipulation' is a side effect of what I just described.

I think you have to incorporate frequency absorption and resonance on every bounce, too. Additionally, there is also the sound (mostly lower frequencies) contributed by the skull itself, but I don't know whether that would be significant for spatial detection, since lower frequencies are usually considered less important for localization (hence you only have one subwoofer but multiple satellite speakers).

Offline paulscode (Senior Member, Medals: 11)
« Reply #163 - Posted 2011-09-01 13:51:58 »

Quote:
The nice part is that you don't actually have to use any complex math anywhere; 'pattern manipulation' is a side effect of what I just described.

Doing this on the fly would require mixing a massive number of lines if you are going for a truly "photo-realistic" audio effect. I'm not all that sure it could be done without tens or even hundreds of milliseconds of buffering. That being said, I think this type of rig could be used to come up with the "complex math" I'd need to formulate a filter that could be used on the fly (along the lines of texture-mapping a low-poly model by baking from a high-poly version).

Offline teletubo « League of Dukes » (JGO Ninja, Medals: 48, Projects: 6, Exp: 8 years)
« Reply #164 - Posted 2011-09-01 15:44:09 »

I think this is what you should research:
http://en.wikipedia.org/wiki/Head-related_transfer_function

There are various papers on the subject on Google, but as far as I've looked, they are very, very complex :/
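
In practice, applying an HRTF mostly comes down to convolving the mono signal with a measured impulse response (HRIR) per ear for the source direction. A bare-bones sketch, with made-up three-tap responses standing in for real measured data:

Code:
/** Minimal HRIR convolution sketch; the impulse responses are placeholders,
 *  real ones come from measured HRTF data sets. */
public class HrtfConvolutionSketch {

    /** Plain (non-FFT) convolution of a signal with an impulse response. */
    static float[] convolve(float[] signal, float[] impulseResponse) {
        float[] out = new float[signal.length + impulseResponse.length - 1];
        for (int i = 0; i < signal.length; i++)
            for (int j = 0; j < impulseResponse.length; j++)
                out[i + j] += signal[i] * impulseResponse[j];
        return out;
    }

    public static void main(String[] args) {
        float[] mono      = { 1f, 0.5f, 0.25f, 0f, 0f };  // dummy input signal
        float[] hrirLeft  = { 0.9f, 0.2f, 0.05f };        // placeholder left-ear response
        float[] hrirRight = { 0.0f, 0.6f, 0.3f };         // placeholder right-ear response (quieter, later)
        float[] left  = convolve(mono, hrirLeft);
        float[] right = convolve(mono, hrirRight);
        System.out.println("left[0]=" + left[0] + "  right[0]=" + right[0]);
    }
}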

Offline philfrei
« Reply #165 - Posted 2011-09-01 20:53:37 »

I was taught in a psychoacoustics class on hearing, twenty-something years ago at UC Berkeley, that for low pitches the ear uses phase differences to stereo-locate, and that for high pitches (wavelength smaller than the head) it uses the relative amplitudes.

The point was that waves larger than the head will simply go around it, while smaller ones tend to be blocked/attenuated. There's probably not a sharp cutoff between the two regions, but then it is hard to make a sharp cutoff with digital filters anyway. Still, a high-pass filter that attenuates L or R based on angle might be sufficient for 3D game programming, perhaps in combination with a slight phase-emulating delay for the low-pass-filtered portion (the delay accounts for the travel time of sound in air from one ear to the other).

Echoes matter, but it is the first sounds to reach the ear that are the most important for binaural hearing.

This refers to how you would treat a mono sound being added to a mix if you wanted to go beyond normal panning, which seems sufficient for most 3D games. It's theoretical on my part, as I have never experimented with this. But maybe I will after reading a few more chapters of this awesome book! It is a little dated, but the explanations are the clearest I've found on DSP: http://www.dspguide.com/pdfbook.htm
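
As a sketch of that crossover idea (the one-pole filter, the 1500 Hz split and the gain/delay values are all placeholder choices on my part, for a source off to the listener's right):

Code:
/** Crossover sketch: low frequencies get an interaural delay,
 *  high frequencies get an interaural level difference. */
public class CrossoverBinauralSketch {

    /** Simple one-pole low-pass filter. */
    static float[] lowPass(float[] in, float cutoffHz, float sampleRate) {
        float alpha = (float) (1 - Math.exp(-2 * Math.PI * cutoffHz / sampleRate));
        float[] out = new float[in.length];
        float state = 0f;
        for (int i = 0; i < in.length; i++) {
            state += alpha * (in[i] - state);
            out[i] = state;
        }
        return out;
    }

    /** Produces interleaved stereo from mono for a source on the listener's right. */
    static float[] binauralize(float[] mono, float sampleRate) {
        float[] low  = lowPass(mono, 1500f, sampleRate);  // "goes around the head" band
        float[] high = new float[mono.length];
        for (int i = 0; i < mono.length; i++) high[i] = mono[i] - low[i];  // remaining band

        int itdSamples = Math.round(0.0003f * sampleRate);  // ~0.3 ms extra travel to the far ear
        float farEarHighGain = 0.4f;                        // head shadow on the far (left) ear

        float[] stereo = new float[mono.length * 2];
        for (int i = 0; i < mono.length; i++) {
            float lowDelayed = (i >= itdSamples) ? low[i - itdSamples] : 0f;
            stereo[2 * i]     = lowDelayed + high[i] * farEarHighGain;  // left (far) ear
            stereo[2 * i + 1] = low[i] + high[i];                       // right (near) ear
        }
        return stereo;
    }

    public static void main(String[] args) {
        float[] mono = new float[256];
        for (int i = 0; i < mono.length; i++)
            mono[i] = (float) Math.sin(2 * Math.PI * 440 * i / 44100.0);  // dummy 440 Hz tone
        System.out.println("stereo samples: " + binauralize(mono, 44100f).length);
    }
}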
Offline paulscode (Senior Member, Medals: 11)
« Reply #166 - Posted 2011-09-01 21:41:02 »

Yes, my current plan is to go with a three-phase process (a rough sketch follows the list):

1) Overall logarithmic attenuation based on distance from the listener, plus the Doppler effect if enabled (the normal way of doing 3D audio, minus the panning)

2) Phase and attenuation adjustments per side (as described earlier) based on the direction vector, using average values for the speed of sound, attenuation, and the distance between the ears

3) Additional filtering per side to simulate echoing (either using a formula derived from "audio ray tracing" a high-poly model of the ears and skull, or by doing the ray tracing and mixing in real time if that turns out to be fast enough)
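
In outline, the three phases might chain together something like this - purely a skeleton, with every formula and constant a stand-in for the real thing (Doppler omitted):

Code:
/** Skeleton of the three-phase plan; all formulas here are placeholders. */
public class ThreePhasePipelineSketch {

    static final float SAMPLE_RATE    = 44100f;
    static final float SPEED_OF_SOUND = 343f;   // m/s
    static final float EAR_SPACING    = 0.18f;  // m

    static float[] process(float[] mono, float distance, float azimuthRadians) {
        // Phase 1: overall attenuation with distance (stand-in rolloff curve).
        float distanceGain = 1f / (1f + 0.3f * distance);
        float[] attenuated = new float[mono.length];
        for (int i = 0; i < mono.length; i++) attenuated[i] = mono[i] * distanceGain;

        // Phase 2: per-ear gain and delay derived from the direction (azimuth).
        float itd = (EAR_SPACING / SPEED_OF_SOUND) * (float) Math.sin(azimuthRadians);
        int delayLeft  = Math.max(0, Math.round( itd * SAMPLE_RATE)); // source on the right delays the left ear
        int delayRight = Math.max(0, Math.round(-itd * SAMPLE_RATE)); // source on the left delays the right ear
        float gainRight = 0.75f + 0.25f * (float) Math.sin(azimuthRadians); // crude level difference
        float gainLeft  = 0.75f - 0.25f * (float) Math.sin(azimuthRadians);

        float[] stereo = new float[mono.length * 2];
        for (int i = 0; i < mono.length; i++) {
            stereo[2 * i]     = (i >= delayLeft  ? attenuated[i - delayLeft]  : 0f) * gainLeft;
            stereo[2 * i + 1] = (i >= delayRight ? attenuated[i - delayRight] : 0f) * gainRight;
        }

        // Phase 3: extra filtering per ear; here a single fixed echo stands in for it.
        int echoDelayFrames = Math.round(0.0005f * SAMPLE_RATE);
        for (int i = stereo.length - 1; i >= 2 * echoDelayFrames; i--)
            stereo[i] += stereo[i - 2 * echoDelayFrames] * 0.25f;  // same-channel echo (interleaved)
        return stereo;
    }

    public static void main(String[] args) {
        float[] mono = new float[512];
        mono[0] = 1f;  // impulse
        float[] out = process(mono, 5f, (float) Math.toRadians(45));  // 5 m away, 45 degrees right
        System.out.println("output samples: " + out.length);
    }
}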

Offline zammbi (JGO Coder, Medals: 4)
« Reply #167 - Posted 2011-09-07 10:44:19 »

That sounds awesome. That's a project I would donate to.

Offline gouessej « In padded room » (TUER)
« Reply #168 - Posted 2011-09-08 00:21:16 »

Quote:
That sounds awesome. That's a project I would donate to.

Me too.

Offline Rejechted (Senior Member, Medals: 1, Projects: 1)
« Reply #169 - Posted 2011-09-08 02:55:43 »

I will actually say that something strange has happened. When running the compatibility check, I found that my SoundSystem was actually not compatible with JavaSound. It's weird, since this is considered a "backup" library. JOAL worked just fine... But this is still kind of strange to me, as I think I use JavaSound to play sound effects right now in some way. Wondering if anyone has thoughts.

Offline paulscode (Senior Member, Medals: 11)
« Reply #170 - Posted 2011-09-08 22:42:20 »

Quote:
I will actually say that something strange has happened. When running the compatibility check, I found that my SoundSystem was actually not compatible with JavaSound. It's weird, since this is considered a "backup" library. JOAL worked just fine... But this is still kind of strange to me, as I think I use JavaSound to play sound effects right now in some way. Wondering if anyone has thoughts.

Which version of Java are you running, and what operating system?

Could you provide the console output after running the following applet:
Bullet / Target Collision Applet
(I feel silly pushing this dumb applet, but it's the easiest test case for LibraryJavaSound I have at the moment)

Offline Rejechted (Senior Member, Medals: 1, Projects: 1)
« Reply #171 - Posted 2011-09-08 23:33:20 »

Quote:
I will actually say that something strange has happened. When running the compatibility check, I found that my SoundSystem was actually not compatible with JavaSound. It's weird, since this is considered a "backup" library. JOAL worked just fine... But this is still kind of strange to me, as I think I use JavaSound to play sound effects right now in some way. Wondering if anyone has thoughts.

Which version of Java are you running, and what operating system?

Could you provide the console output after running the following applet:
Bullet / Target Collision Applet
(I feel silly pushing this dumb applet, but it's the easiest test case for LibraryJavaSound I have at the moment)

Running Java 7 on Windows 7, 64-bit. I can't figure out where to obtain the console output, but I heard sounds being played by the applet.

Offline paulscode (Senior Member, Medals: 11)
« Reply #172 - Posted 2011-09-09 13:24:29 »

Oh, well that applet uses the LibraryJavaSound plug-in. Make sure you are using the latest version in your project:

LibraryJavaSound.jar

If that doesn't work, could you put together a simple test case that reproduces the problem, and post the code? It could be a bug I haven't encountered yet.
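
For reference, a bare-bones test case would look something like this (method names as I remember them from the SoundSystem tutorial, so double-check them against the jars you are using; "boom.wav" is a placeholder file):

Code:
import paulscode.sound.SoundSystem;
import paulscode.sound.SoundSystemConfig;
import paulscode.sound.SoundSystemException;
import paulscode.sound.codecs.CodecWav;
import paulscode.sound.libraries.LibraryJavaSound;

public class JavaSoundTestCase {
    public static void main(String[] args) throws SoundSystemException {
        // Ask whether the JavaSound plug-in is usable on this machine.
        boolean compatible = SoundSystem.libraryCompatible(LibraryJavaSound.class);
        System.out.println("LibraryJavaSound compatible: " + compatible);

        // Register the wav codec and force the JavaSound plug-in.
        SoundSystemConfig.setCodec("wav", CodecWav.class);
        SoundSystem soundSystem = new SoundSystem(LibraryJavaSound.class);

        // Fire a one-off sound at the origin with rolloff attenuation.
        soundSystem.quickPlay(false, "boom.wav", false, 0, 0, 0,
                SoundSystemConfig.ATTENUATION_ROLLOFF,
                SoundSystemConfig.getDefaultRolloff());

        try { Thread.sleep(3000); } catch (InterruptedException ignored) {}
        soundSystem.cleanup();
    }
}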

Offline Rejechted (Senior Member, Medals: 1, Projects: 1)
« Reply #173 - Posted 2011-09-09 14:57:10 »

Quote:
Oh, well that applet uses the LibraryJavaSound plug-in. Make sure you are using the latest version in your project:

LibraryJavaSound.jar

If that doesn't work, could you put together a simple test case that reproduces the problem, and post the code? It could be a bug I haven't encountered yet.

Weird - I made a new project that works when I force the SoundSystem to be instantiated with LibraryJavaSound. I'll do a bit of fiddling to see why the compatibility check says no in my main project.

In other news, my game is 2D, and I actually position the listener at (playerx, playery, -25) and play my sounds at z = 0; otherwise the panning is all-or-nothing as to which speaker the sound comes out of, which isn't the most realistic (it should play partially out of each speaker, with a percentage based on how far the source is to the left or right of the listener in 2D, right?). The z = -25 fix works all right, but I'm wondering if anyone has had similar experiences?

Offline paulscode (Senior Member, Medals: 11)
« Reply #174 - Posted 2011-09-09 16:34:16 »

The left/right thing is due to the way I calculate the panning (it's just a simple cosine formula). In a 2D situation where you want panning based on position on the screen, the way you are doing it is how a number of others have done it as well - by playing with the z value (closer values pan faster, more distant values more gradually). You can also set the attenuation to zero if the sounds end up too quiet at the distance you are listening from. It is a 3D sound library, so you sometimes have to get a little creative when using it for 2D.
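
For anyone curious, "a simple cosine formula" can look like a generic equal-power pan along these lines (not necessarily the library's exact math):

Code:
/** Generic equal-power (cosine) panning sketch. */
public class CosinePanSketch {

    /** pan ranges from -1 (hard left) to +1 (hard right); returns {leftGain, rightGain}. */
    static float[] panGains(float pan) {
        double angle = (pan + 1) * Math.PI / 4;  // map -1..1 to 0..pi/2
        return new float[] { (float) Math.cos(angle), (float) Math.sin(angle) };
    }

    public static void main(String[] args) {
        for (float pan : new float[] { -1f, 0f, 1f }) {
            float[] g = panGains(pan);
            System.out.printf("pan %+4.1f -> L %.3f  R %.3f%n", pan, g[0], g[1]);
        }
        // With the listener in the same plane as the source, the computed pan value jumps
        // to +/-1 almost immediately, which is why pulling the listener back on z softens it.
    }
}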

Offline Rejechted (Senior Member, Medals: 1, Projects: 1)
« Reply #175 - Posted 2011-09-09 17:45:02 »

Yeah, I'm thinking of just sticking with the z value. For anyone else playing with this: our world coordinates are based on pixels (one screen width = 1440 world units), and I'm getting a decent pan with a default rolloff of 0.003, the ROLLOFF attenuation mode, the listener positioned at (playerx, playery, -25), and all sounds played from (sourcex, sourcey, 0).

I'm having a weird issue, though, with certain .wav files not obeying the settings I've described. I can be well over 2000 units away from a particular .wav source and still hear it as though I were using ATTENUATION_NONE. I'm guessing this is just a quirk of the .wav file, because other files work fine.
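
Putting those settings into code, the setup looks roughly like this (playerx/playery, sourcex/sourcey and "step.wav" are placeholders, and the method names should be checked against the SoundSystem version in use):

Code:
import paulscode.sound.SoundSystem;
import paulscode.sound.SoundSystemConfig;
import paulscode.sound.SoundSystemException;
import paulscode.sound.codecs.CodecWav;
import paulscode.sound.libraries.LibraryJavaSound;

public class TwoDPanningSetup {
    public static void main(String[] args) throws SoundSystemException {
        SoundSystemConfig.setCodec("wav", CodecWav.class);
        SoundSystem soundSystem = new SoundSystem(LibraryJavaSound.class);

        float playerx = 720, playery = 400;    // placeholder player position, in pixels
        float sourcex = 1100, sourcey = 400;   // placeholder sound position, in pixels

        // Pull the listener back "behind" the screen plane so panning changes gradually.
        soundSystem.setListenerPosition(playerx, playery, -25);

        // A point source on the z = 0 screen plane with explicit rolloff attenuation.
        soundSystem.newSource(false, "step", "step.wav", false,
                sourcex, sourcey, 0,
                SoundSystemConfig.ATTENUATION_ROLLOFF, 0.003f);
        soundSystem.play("step");

        try { Thread.sleep(2000); } catch (InterruptedException ignored) {}
        soundSystem.cleanup();
    }
}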

Offline paulscode (Senior Member, Medals: 11)
« Reply #176 - Posted 2011-09-09 17:55:41 »

For attenuation to work, be sure to use mono (single-channel) files (I can't remember if that affected only the OpenAL plug-ins or also the JavaSound plug-in). If that doesn't help, post a link to one of the problem files and I'll take a look to see if there is anything "special" about it.
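
If you would rather check a file programmatically than by ear, javax.sound.sampled can report the channel count ("problem.wav" is a placeholder path):

Code:
import java.io.File;
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioSystem;

/** Prints whether a .wav file is mono or stereo. */
public class WavChannelCheck {
    public static void main(String[] args) throws Exception {
        AudioFormat format = AudioSystem.getAudioFileFormat(new File("problem.wav")).getFormat();
        System.out.println("channels: " + format.getChannels()
                + (format.getChannels() == 1 ? " (mono)" : " (stereo or more)"));
    }
}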

Offline Rejechted (Senior Member, Medals: 1, Projects: 1)
« Reply #177 - Posted 2011-09-09 19:12:16 »

Quote:
For attenuation to work, be sure to use mono (single-channel) files (I can't remember if that affected only the OpenAL plug-ins or also the JavaSound plug-in). If that doesn't help, post a link to one of the problem files and I'll take a look to see if there is anything "special" about it.

If they're not mono, would there be an easy way with Audacity or something to correct the issue with the file?

Offline paulscode (Senior Member, Medals: 11)
« Reply #178 - Posted 2011-09-09 20:14:45 »

Quote:
If they're not mono, would there be an easy way with Audacity or something to correct the issue with the file?

In Audacity, I believe it is an option in the Tracks menu (something like "convert stereo to mono"). Note: mono is only required for point sources (ones you want to pan and attenuate). For ambient sources like music, use stereo instead.
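
If converting by hand in Audacity gets tedious, a stereo 16-bit PCM .wav can also be downmixed in code by averaging the two channels. A rough sketch with placeholder paths; for anything more exotic than plain 16-bit PCM, Audacity is the safer route:

Code:
import java.io.ByteArrayInputStream;
import java.io.File;
import javax.sound.sampled.AudioFileFormat;
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;

/** Downmixes a 16-bit PCM stereo .wav to mono by averaging the two channels. */
public class StereoToMono {
    public static void main(String[] args) throws Exception {
        AudioInputStream in = AudioSystem.getAudioInputStream(new File("stereo.wav"));
        AudioFormat src = in.getFormat();
        if (src.getChannels() != 2 || src.getSampleSizeInBits() != 16)
            throw new IllegalArgumentException("expected 16-bit stereo PCM");

        byte[] stereo = in.readAllBytes();
        byte[] mono = new byte[stereo.length / 2];
        boolean bigEndian = src.isBigEndian();  // .wav files are normally little-endian

        for (int i = 0, o = 0; i + 3 < stereo.length; i += 4, o += 2) {
            int mixed = (sample(stereo, i, bigEndian) + sample(stereo, i + 2, bigEndian)) / 2;
            if (bigEndian) { mono[o] = (byte) (mixed >> 8); mono[o + 1] = (byte) mixed; }
            else           { mono[o] = (byte) mixed;        mono[o + 1] = (byte) (mixed >> 8); }
        }

        AudioFormat dst = new AudioFormat(src.getSampleRate(), 16, 1, true, bigEndian);
        AudioInputStream out = new AudioInputStream(
                new ByteArrayInputStream(mono), dst, mono.length / dst.getFrameSize());
        AudioSystem.write(out, AudioFileFormat.Type.WAVE, new File("mono.wav"));
        in.close();
    }

    /** Reads one signed 16-bit sample starting at the given byte offset. */
    static int sample(byte[] data, int offset, boolean bigEndian) {
        int lo = data[bigEndian ? offset + 1 : offset] & 0xff;
        int hi = data[bigEndian ? offset : offset + 1];
        return (hi << 8) | lo;
    }
}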

Offline Rejechted (Senior Member, Medals: 1, Projects: 1)
« Reply #179 - Posted 2011-09-09 20:36:51 »

Quote:
If they're not mono, would there be an easy way with Audacity or something to correct the issue with the file?

In Audacity, I believe it is an option in the Tracks menu (something like "convert stereo to mono"). Note: mono is only required for point sources (ones you want to pan and attenuate). For ambient sources like music, use stereo instead.

Right, this makes sense. I'm assuming this is definitely why the issue is happening; I can think of no other logical reason.

In that case, if you have a mono sound and play it without your sound library and it comes out of both speakers, does it just play the sound out of both speakers at 50% volume each, or something?
