Java-Gaming.org
Neural Network Help (Read 2312 times)
SkyAphid
 « Posted 2012-11-07 02:33:30 »

Anyone here good at neural nets? I've been studying some of the algorithms behind them, but since I haven't taken calculus yet, I've had to pick up a lot of the concepts along the way.

Anyway, I generate the weights and put them into the net; it produces an output, which I then send to be backpropagated so that the weights can be corrected for maximum accuracy.

Unfortunately, all it achieves is making the results worse.

Weights are generated, and 1 and 0 are input for an ideal output of 1. This is based on the XOR setup for testing networks.

Here's the code:

```java
import java.util.Random;

public class ANN {
   private static final int HIDDEN_WEIGHT = 0;
   private static final int HIDDEN_OLD_WEIGHT = 1;
   private static final int HIDDEN_SUM = 2;
   private static final int HIDDEN_OUT = 3;

   private static final int OUTPUT_SUM = 0;
   private static final int OUTPUT_OUT = 1;

   float[][] weights;
   float[][] weightHistory;

   float[][] hlayers;

   int neurons; //Rows
   int hiddenLayers; //Columns
   float idealOutput;

   float learningRate = 0.8f;
   float momentum = 0.6f;

   public ANN(int neurons, int hiddenLayers, float idealOutput){
      this.neurons = neurons;
      this.hiddenLayers = hiddenLayers;
      this.idealOutput = idealOutput;
      weights = new float[neurons+1][hiddenLayers];
      weightHistory = new float[neurons][hiddenLayers];

      hlayers = new float[hiddenLayers+1][4];

      //Randomize weights, can be overwritten
      Random r = new Random();

      for (int i = 0; i < neurons+1; i++){
         for (int j = 0; j < hiddenLayers; j++){
            weights[i][j] = r.nextFloat();
            hlayers[j][HIDDEN_WEIGHT] = r.nextFloat();
         }
      }

      hlayers[hiddenLayers][HIDDEN_WEIGHT] = r.nextFloat();
   }

   public void overwriteWeights(float[][] newWeights, float[] hweights){
      weights = newWeights;

      for (int a = 0; a < hlayers.length; a++){
         hlayers[a][HIDDEN_WEIGHT] = hweights[a];
      }
   }

   public float[] recall(float[] input){
      float output[] = new float[2];

      for (int j = 0; j < hiddenLayers; j++){
         float sum = 0f;

         //Apply weights to inputs
         for (int i = 0; i < neurons; i++){
            sum += weights[i][j] * input[i];
         }

         //Add bias
         sum += weights[neurons][j];

         //Send to hidden layer and apply sigmoid
         hlayers[j][HIDDEN_SUM] = sum;
         hlayers[j][HIDDEN_OUT] = sigmoidActivation(sum);
      }

      //Apply weights to outputs
      for (int j = 0; j < hiddenLayers; j++){
         output[OUTPUT_SUM] += hlayers[j][HIDDEN_OUT] * hlayers[j][HIDDEN_WEIGHT];
      }

      //Apply bias 2 and apply sigmoid
      output[OUTPUT_SUM] += hlayers[hiddenLayers][OUTPUT_SUM];
      output[OUTPUT_OUT] = sigmoidActivation(output[0]);

      System.out.println("SUM: "+output[OUTPUT_SUM] + " OUTPUT: "+output[OUTPUT_OUT]);
      backPropagate(output, input);

      return output;
   }

   public void backPropagate(float[] output, float[] input){
      float error = getError(output[OUTPUT_OUT], idealOutput);
      float outDelta = getLayerDelta(output[0], error);
      float[] gradient = new float[hiddenLayers];

      System.out.println("Error is at "+error);

      //Now we back propagate to the output
      for (int j = 0; j < hiddenLayers; j++){
         //float hiddenDelta = (sigmoidDerivative(hlayers[i][HIDDEN_SUM]) * hlayers[i][HIDDEN_WEIGHT]) * outDelta;

         gradient[j] = outDelta * hlayers[j][HIDDEN_OUT];

         float newWeight = (learningRate * gradient[j]) + (hlayers[j][HIDDEN_WEIGHT] * hlayers[j][HIDDEN_OLD_WEIGHT]);

         hlayers[j][HIDDEN_OLD_WEIGHT] = hlayers[j][HIDDEN_WEIGHT];
         hlayers[j][HIDDEN_WEIGHT] = newWeight;
      }

      for (int j = 0; j < hiddenLayers; j++){
         for (int i = 0; i < neurons; i++){
            float newWeight = (learningRate * gradient[j]) + (weights[i][j] * weightHistory[i][j]);
            weightHistory[i][j] = weights[i][j];
            weights[i][j] = newWeight;
         }
      }
   }

   public float getError(float output, float idealOutput){
      return output - idealOutput;
   }

   public float getLayerDelta(float sum, float error){
      return -error * sigmoidDerivative(sum);
   }

   public float sigmoidActivation(float x){
      return 1f / (float) (1f + Math.exp(-x));
   }

   public float sigmoidDerivative(float sum){
      return (sigmoidActivation(sum) * (1f - sigmoidActivation(sum)));
   }
}
```

Here's the output when tested:

```
Commencing test...
Initializing network.
Overwriting weights for control variable.
Recalling [1.0, 0.0]
Test 0
SUM: 1.1265055 OUTPUT: 0.7551935
Error is at -0.24480653
Test 1
SUM: 0.7995221 OUTPUT: 0.68987226
Error is at -0.31012774
Test 2
SUM: 0.8105837 OUTPUT: 0.6922339
Error is at -0.30776608
Test 3
SUM: 0.80390143 OUTPUT: 0.6908084
Error is at -0.30919158
Test 4
SUM: 0.8040238 OUTPUT: 0.6908346
Error is at -0.30916542
Test 5
SUM: 0.803821 OUTPUT: 0.69079125
Error is at -0.30920875
Test 6
SUM: 0.80381775 OUTPUT: 0.69079053
Error is at -0.30920947
Test 7
SUM: 0.8038115 OUTPUT: 0.6907892
Error is at -0.30921078
Test 8
SUM: 0.8038112 OUTPUT: 0.6907891
Error is at -0.3092109
Test 9
SUM: 0.80381095 OUTPUT: 0.6907891
Error is at -0.3092109
(Tests 10 through 19 print the same values: SUM: 0.80381095 OUTPUT: 0.6907891, Error is at -0.3092109)
```

The error is supposed to be negative in some cases, but I'm not sure whether I did it right.
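For comparison, here is a minimal sketch of the textbook delta-rule update with momentum; every name and number in it is illustrative, not taken from the code above. The key points are that the correction is added to the existing weight, and that momentum scales the previous weight change rather than a product of the old and new weights:

```java
// Textbook delta-rule weight update with momentum.
// All values below are illustrative placeholders, not from the thread's code.
public class DeltaRuleSketch {
    static double sigmoid(double x) { return 1.0 / (1.0 + Math.exp(-x)); }

    // Derivative of the sigmoid, expressed in terms of its output value.
    static double sigmoidDerivative(double out) { return out * (1.0 - out); }

    // Returns the updated weight for one hidden-to-output connection.
    public static double updatedWeight(double weight, double prevDeltaW,
                                       double ideal, double out, double hiddenOut,
                                       double lr, double momentum) {
        double error = ideal - out;                       // ideal minus actual
        double delta = error * sigmoidDerivative(out);    // output-layer delta
        double deltaW = lr * delta * hiddenOut            // gradient step
                      + momentum * prevDeltaW;            // momentum on the PREVIOUS change
        return weight + deltaW;                           // add the correction to the old weight
    }

    public static void main(String[] args) {
        // One step for a weight of 0.5, with output 0.69 vs ideal 1.0.
        double w = updatedWeight(0.5, 0.0, 1.0, 0.69, 0.6, 0.8, 0.6);
        System.out.println(w);
    }
}
```

Since the error here is positive, the weight is nudged upward; on the next step, `prevDeltaW` would be the `deltaW` just computed.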

Please point out flaws; there are bound to be a bunch, given how shaky my calculus is.

Also, a few questions:

• Can Neural Networks only learn one pattern for each network?
• What's the point of having more than one set of hidden layer nodes?
• Where's a good place to learn some basic Calculus fundamentals?

Thank you very much.

“Life is pretty simple: You do some stuff. Most fails. Some works. You do more of what works. If it works big, others quickly copy it. Then you do something else. The trick is the doing something else.” ~Leonardo da Vinci
ReBirth
 « Reply #1 - Posted 2012-11-07 03:59:45 »

This is one of the subjects I'm studying, and fortunately my strongest one. ANN is kind of a general term. What are you planning: backpropagation? Kohonen?

1. Yes, it can.
2. The more hidden layers you have, the better its adaptive/learning ability will be. For example, if you use it to recognize patterns, it can spot minor details. It can also reduce the error margin between iterations.
3. College; nobody wants to read calculus books at home.*

*) Applies to common people, especially non-gamers.

SkyAphid
 « Reply #2 - Posted 2012-11-08 00:47:27 »

This is one of the subjects I'm studying, and fortunately my strongest one. ANN is kind of a general term. What are you planning: backpropagation? Kohonen?

1. Yes, it can.
2. The more hidden layers you have, the better its adaptive/learning ability will be. For example, if you use it to recognize patterns, it can spot minor details. It can also reduce the error margin between iterations.
3. College; nobody wants to read calculus books at home.*

*) Applies to common people, especially non-gamers.

For one, I'm glad to have someone who's experienced in this, because I need a lot of help! Hahah.

Anyway, I need it to recognize mostly photos, as I'm working on an adaptive AI. Essentially it will be trained to recognize people and things, along with text. Every person/thing will be marked with a good/bad meter so it knows how to react to certain stimuli, and so on. I've chosen to give it "eyes" because I plan on welding some parts and making a nifty little robot arm or something for fun over the summer.

But yeah, I need it to recognize places and things. Problem is, I have no real idea how, and a lot of the examples are written so mathematically that I struggle to comprehend much of it. I wanted to use backpropagation because it seemed best for allowing it to learn by itself in some cases.

theagentd


 « Reply #3 - Posted 2012-11-08 01:17:03 »

3. College; nobody wants to read calculus books at home.*
Huh? I did calculus in my second and third years of high school...

Myomyomyo.
ReBirth
 « Reply #4 - Posted 2012-11-08 04:18:32 »

@theagentd
Homework doesn't count.

Backpropagation is best used for prediction or data mining. For patterns like you said, you may need a Hopfield network. Actually, rather than making one yourself, there's already the Neuroph library, which is quite powerful; you'll have a working network in less than 20 lines of code.

SkyAphid
 « Reply #5 - Posted 2012-11-09 01:25:42 »

@theagentd
Homework doesn't count.

Backpropagation is best used for prediction or data mining. For patterns like you said, you may need a Hopfield network. Actually, rather than making one yourself, there's already the Neuroph library, which is quite powerful; you'll have a working network in less than 20 lines of code.

It's a lot cooler to do it yourself. I thought Hopfield networks were slow learners?

ReBirth
 « Reply #6 - Posted 2012-11-09 03:14:25 »

Yes, a Hopfield network needs more learning and training data.

SkyAphid
 « Reply #7 - Posted 2012-11-11 22:52:46 »

Yes, a Hopfield network needs more learning and training data.
Sorry to reply so late, I've been busy.

Anyway, I've read in a lot of places that Hopfield networks are a bad technique because they can't fix errors. Is that true? Plus, the bigger ones apparently get pretty slow.

ReBirth
 « Reply #8 - Posted 2012-11-12 00:34:53 »

I've never read about that, but considering that a Hopfield network uses a matrix as its "memory", it may be true.

Actually, the same problem applies to neural networks too, but you can do a quick fix by adjusting the number of layers and the neurons in each.
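To make the "matrix as memory" idea concrete, here is a minimal Hopfield sketch; the class and its names are purely illustrative, not from any library. A bipolar (+1/-1) pattern is stored in the weight matrix with the Hebbian rule, and recall repeatedly thresholds the weighted sums until the state settles on a stored pattern:

```java
import java.util.Arrays;

// Minimal Hopfield network sketch: patterns live in the weight matrix.
public class HopfieldSketch {
    final int n;
    final double[][] w;

    HopfieldSketch(int n) { this.n = n; this.w = new double[n][n]; }

    // Hebbian storage of a bipolar pattern: w[i][j] += p[i]*p[j], zero diagonal.
    void store(int[] p) {
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                if (i != j) w[i][j] += p[i] * p[j];
    }

    // Synchronous recall: each unit takes the sign of its weighted input sum.
    int[] recall(int[] state, int steps) {
        int[] s = state.clone();
        for (int t = 0; t < steps; t++) {
            int[] next = new int[n];
            for (int i = 0; i < n; i++) {
                double sum = 0;
                for (int j = 0; j < n; j++) sum += w[i][j] * s[j];
                next[i] = sum >= 0 ? 1 : -1;
            }
            s = next;
        }
        return s;
    }

    public static void main(String[] args) {
        HopfieldSketch net = new HopfieldSketch(4);
        net.store(new int[]{1, -1, 1, -1});
        int[] noisy = {1, 1, 1, -1};                     // one flipped bit
        System.out.println(Arrays.toString(net.recall(noisy, 3)));  // recovers [1, -1, 1, -1]
    }
}
```

With one stored pattern, a single corrupted bit is repaired in one recall step; capacity limits (roughly 0.14 * n patterns) are where the "can't fix errors" complaints tend to come from.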

SkyAphid
 « Reply #9 - Posted 2012-11-12 03:14:41 »

I've never read about that, but considering that a Hopfield network uses a matrix as its "memory", it may be true.

Actually, the same problem applies to neural networks too, but you can do a quick fix by adjusting the number of layers and the neurons in each.
Alright, thanks. Any specific reads you'd recommend? Preferably stuff I can get off the internet, because I'm an impatient person and I don't want to order a book. Hahah

ReBirth
 « Reply #10 - Posted 2012-11-12 03:19:26 »

Unfortunately, I got them from books and college lectures.

SkyAphid
 « Reply #11 - Posted 2012-11-12 03:44:11 »

Unfortunately, I got them from books and college lectures.
You wouldn't happen to be interested in PM'ing me some of your notes, would you? lol

ReBirth
 « Reply #12 - Posted 2012-11-12 04:02:32 »

I have no problem PM'ing/copying them; the problem is translating. They're not written in English.

SkyAphid
 « Reply #13 - Posted 2012-11-14 00:32:35 »

I have no problem PM'ing/copying them; the problem is translating. They're not written in English.
If you could translate the important parts and send them to me, I'd be thankful. If you write in a language with an alphabet relatively similar to English's, I can translate them myself, though.

ReBirth
 « Reply #14 - Posted 2012-11-14 01:56:36 »

I can't promise anything. Keep searching for other sources, though.

Jono
 « Reply #15 - Posted 2012-11-14 08:02:01 »

One thing is that it looks like the number of nodes in your hidden layer(s) is the same as in your input layer. Two nodes probably won't be enough to represent the XOR function; try 3 or more.

Also, I've never heard of any value in more than two hidden layers, and I'm pretty sure that theoretically two is sufficient for any mapping from inputs to outputs (though the layers might have to be large in some cases).
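The "3 or more hidden nodes" suggestion can be sketched like this: a 2-3-1 network trained with plain stochastic backpropagation on XOR. This is an illustrative rewrite, not the thread's code; the seed, learning rate, and epoch count are arbitrary choices of mine:

```java
import java.util.Random;

// Illustrative 2-3-1 feed-forward net trained with plain backpropagation on XOR.
public class XorSketch {
    static final int IN = 2, HID = 3;
    final double[][] wIn = new double[HID][IN + 1];  // hidden weights, last slot is the bias
    final double[] wOut = new double[HID + 1];       // output weights, last slot is the bias
    final double[] hidden = new double[HID];
    final Random rng = new Random(42);               // fixed seed for repeatability

    XorSketch() {
        for (int h = 0; h < HID; h++)
            for (int i = 0; i <= IN; i++) wIn[h][i] = rng.nextDouble() - 0.5;
        for (int h = 0; h <= HID; h++) wOut[h] = rng.nextDouble() - 0.5;
    }

    static double sigmoid(double x) { return 1.0 / (1.0 + Math.exp(-x)); }

    double forward(double[] x) {
        for (int h = 0; h < HID; h++) {
            double sum = wIn[h][IN];                  // bias term
            for (int i = 0; i < IN; i++) sum += wIn[h][i] * x[i];
            hidden[h] = sigmoid(sum);
        }
        double sum = wOut[HID];                       // bias term
        for (int h = 0; h < HID; h++) sum += wOut[h] * hidden[h];
        return sigmoid(sum);
    }

    void train(double[] x, double target, double lr) {
        double out = forward(x);
        double outDelta = (target - out) * out * (1 - out);
        for (int h = 0; h < HID; h++) {
            // Hidden delta uses the OLD output weight, computed before updating it.
            double hDelta = outDelta * wOut[h] * hidden[h] * (1 - hidden[h]);
            wOut[h] += lr * outDelta * hidden[h];
            for (int i = 0; i < IN; i++) wIn[h][i] += lr * hDelta * x[i];
            wIn[h][IN] += lr * hDelta;
        }
        wOut[HID] += lr * outDelta;
    }

    public static void main(String[] args) {
        double[][] xs = {{0, 0}, {0, 1}, {1, 0}, {1, 1}};
        double[] ts = {0, 1, 1, 0};
        XorSketch net = new XorSketch();
        for (int epoch = 0; epoch < 20000; epoch++)
            for (int k = 0; k < 4; k++) net.train(xs[k], ts[k], 0.5);
        for (int k = 0; k < 4; k++)
            System.out.printf("%.0f XOR %.0f -> %.3f%n", xs[k][0], xs[k][1], net.forward(xs[k]));
    }
}
```

One structural point worth noting against the original class: each training example is paired with its own target, rather than the net carrying a single `idealOutput` field, which is what lets one network learn all four XOR cases.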
krasse
 « Reply #16 - Posted 2012-11-14 09:44:22 »

ANNs are a specialized case of non-linear optimization, which is a very tricky area, full of black-art tricks.

Also, there is often a better alternative to NNs for a given problem if you can figure out good features (there are some really good features for images that you can use), precompute them, and use as much linear optimization as possible.

Here is also a good resource:
http://sourceforge.net/projects/weka/

Joshua Waring
 « Reply #17 - Posted 2012-11-14 10:38:53 »

Are you doing the Neural Network course at www.coursera.com?

The world is big, so learn it in small bytes.