Neural Network Help
SkyAphid
« Posted 2012-11-07 02:33:30 »

Anyone here good at neural nets? I've been studying some of the algorithms behind them, but since I haven't taken calculus yet, I've had to pick up a lot of the concepts along the way.

Anyway, I generate and load weights into the net; it produces an output, which I then send to be backpropagated so the weights can be corrected.

Unfortunately, all that achieves is making the results worse.

Weights are generated, and 1 and 0 are fed in with an ideal output of 1. This is based on the XOR test commonly used to verify networks.
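
(For reference, the XOR truth table the test is based on:

0 XOR 0 = 0
0 XOR 1 = 1
1 XOR 0 = 1
1 XOR 1 = 0)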

Here's the code:

import java.util.Random;

public class ANN {
   private static final int HIDDEN_WEIGHT = 0;
   private static final int HIDDEN_OLD_WEIGHT = 1;
   private static final int HIDDEN_SUM = 2;
   private static final int HIDDEN_OUT = 3;
   
   private static final int OUTPUT_SUM = 0;
   private static final int OUTPUT_OUT = 1;
   
   float[][] weights;       //input-to-hidden weights, [input neuron][hidden node]; row 'neurons' holds each hidden node's bias
   float[][] weightHistory; //previous input-to-hidden weights, kept for the update step
   
   float[][] hlayers;       //per hidden node: output weight, old output weight, sum, activated output
   
   int neurons;       //input neurons (rows)
   int hiddenLayers;  //hidden nodes (columns)
   float idealOutput;
   
   float learningRate = 0.8f;
   float momentum = 0.6f; //note: declared but never used below
   
   public ANN(int neurons, int hiddenLayers, float idealOutput){
      this.neurons = neurons;
      this.hiddenLayers = hiddenLayers;
      this.idealOutput = idealOutput;

      weights = new float[neurons+1][hiddenLayers];
      weightHistory = new float[neurons][hiddenLayers];
     
      hlayers = new float[hiddenLayers+1][4];
     
      //Randomize weights, can be overwritten
      Random r = new Random();
     
      for (int i = 0; i < neurons+1; i++){
         for (int j = 0; j < hiddenLayers; j++){
            weights[i][j] = r.nextFloat();
            hlayers[j][HIDDEN_WEIGHT] = r.nextFloat();
         }
      }
     
      hlayers[hiddenLayers][HIDDEN_WEIGHT] = r.nextFloat();
   }
   
   public void overwriteWeights(float[][] newWeights, float[] hweights){
      weights = newWeights;
     
      for (int a = 0; a < hlayers.length; a++){
         hlayers[a][HIDDEN_WEIGHT] = hweights[a];
      }
   }
   
   public float[] recall(float[] input){
      float[] output = new float[2]; //[OUTPUT_SUM, OUTPUT_OUT]
     
      for (int j = 0; j < hiddenLayers; j++){
         float sum = 0f;
         
         //Apply weights to inputs
         for (int i = 0; i < neurons; i++){
            sum += weights[i][j] * input[i];
         }
         
         //Add bias
         sum += weights[neurons][j];
         
         //Send to hidden layer and apply sigmoid
         hlayers[j][HIDDEN_SUM] = sum;
         hlayers[j][HIDDEN_OUT] = sigmoidActivation(sum);
      }
     
      //Apply weights to outputs
      for (int j = 0; j < hiddenLayers; j++){
         output[OUTPUT_SUM] += hlayers[j][HIDDEN_OUT] * hlayers[j][HIDDEN_WEIGHT];
      }
     
      //Add the output bias weight, then apply sigmoid
      output[OUTPUT_SUM] += hlayers[hiddenLayers][HIDDEN_WEIGHT]; //bias weight (same index value as OUTPUT_SUM, but this name says what it is)
      output[OUTPUT_OUT] = sigmoidActivation(output[OUTPUT_SUM]);
     
      System.out.println("SUM: "+output[OUTPUT_SUM] + " OUTPUT: "+output[OUTPUT_OUT]);
      backPropagate(output, input);
     
      return output;
   }
   
   public void backPropagate(float[] output, float[] input){
      float error = getError(output[OUTPUT_OUT], idealOutput);
      float outDelta = getLayerDelta(output[OUTPUT_SUM], error);

      float[] gradient = new float[hiddenLayers];
     
      System.out.println("Error is at "+error);
     
      //Now we back propagate to the output
      for (int j = 0; j < hiddenLayers; j++){
         //float hiddenDelta = (sigmoidDerivative(hlayers[i][HIDDEN_SUM]) * hlayers[i][HIDDEN_WEIGHT]) * outDelta;
         
         gradient[j] = outDelta * hlayers[j][HIDDEN_OUT];
         
         float newWeight = (learningRate * gradient[j]) + (hlayers[j][HIDDEN_WEIGHT] * hlayers[j][HIDDEN_OLD_WEIGHT]);
         
         hlayers[j][HIDDEN_OLD_WEIGHT] = hlayers[j][HIDDEN_WEIGHT];
         hlayers[j][HIDDEN_WEIGHT] = newWeight;
      }  
     
      for (int j = 0; j < hiddenLayers; j++){
         for (int i = 0; i < neurons; i++){
             float newWeight = (learningRate * gradient[j]) + (weights[i][j] * weightHistory[i][j]);
             weightHistory[i][j] = weights[i][j];
             weights[i][j] = newWeight;
         }
      }
   }

   public float getError(float output, float idealOutput){
      return output - idealOutput; //already a float, no cast needed
   }
   
   public float getLayerDelta(float sum, float error){
      return -error * sigmoidDerivative(sum);
   }
   
   public float sigmoidActivation(float x){
      return 1f / (float) (1f + Math.exp(-x));
   }
   
   public float sigmoidDerivative(float sum){
      return (sigmoidActivation(sum) * (1f - sigmoidActivation(sum)));
   }
}
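
The test driver isn't shown in the post; for reference, a minimal harness consistent with the log below might look like this (everything outside the ANN class itself, including the fixed weight values, is hypothetical):

public class ANNTest {
   public static void main(String[] args){
      System.out.println("Commencing test...\n");
      System.out.println("Initializing network.\n");
      ANN ann = new ANN(2, 2, 1f); //2 inputs, 2 hidden nodes, ideal output 1

      System.out.println("Overwriting weights for control variable.\n");
      //fixed values so every run starts identically (values hypothetical);
      //weights is [neurons+1][hiddenLayers], hweights matches hlayers.length
      ann.overwriteWeights(
         new float[][]{{0.5f, 0.5f}, {0.5f, 0.5f}, {0.5f, 0.5f}},
         new float[]{0.5f, 0.5f, 0.5f});

      System.out.println("Recalling [1.0, 0.0]\n");
      float[] input = {1f, 0f};
      for (int i = 0; i < 20; i++){
         System.out.println("Test " + i);
         ann.recall(input); //recall() also backpropagates internally
      }
   }
}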


Here's the output when tested:

Commencing test...

Initializing network.

Overwriting weights for control variable.

Recalling [1.0, 0.0]

Test 0
SUM: 1.1265055 OUTPUT: 0.7551935
Error is at -0.24480653
Test 1
SUM: 0.7995221 OUTPUT: 0.68987226
Error is at -0.31012774
Test 2
SUM: 0.8105837 OUTPUT: 0.6922339
Error is at -0.30776608
Test 3
SUM: 0.80390143 OUTPUT: 0.6908084
Error is at -0.30919158
Test 4
SUM: 0.8040238 OUTPUT: 0.6908346
Error is at -0.30916542
Test 5
SUM: 0.803821 OUTPUT: 0.69079125
Error is at -0.30920875
Test 6
SUM: 0.80381775 OUTPUT: 0.69079053
Error is at -0.30920947
Test 7
SUM: 0.8038115 OUTPUT: 0.6907892
Error is at -0.30921078
Test 8
SUM: 0.8038112 OUTPUT: 0.6907891
Error is at -0.3092109
Test 9
SUM: 0.80381095 OUTPUT: 0.6907891
Error is at -0.3092109
(Tests 10 through 19 repeat these values unchanged.)


The error is supposed to be negative in some cases, but I'm not sure whether I've handled that correctly.

Please point out any flaws; there are bound to be plenty, given how shaky my calculus is.
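
For what it's worth, one flaw stands out in backPropagate() above: the update multiplies each weight by its previous value (weights[i][j] * weightHistory[i][j]) instead of adjusting the weight, and the momentum field is never used. A conventional delta-rule update with momentum looks roughly like this (a sketch only, not the original code; lastChange is a hypothetical float[][] the same size as weights, holding each weight's previous change):

float change = learningRate * gradient[j] * input[i] + momentum * lastChange[i][j];
weights[i][j] += change;   //nudge the weight rather than replace it
lastChange[i][j] = change; //remember the change for the next momentum term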

Also, a few questions:

  • Can a neural network only learn one pattern per network?
  • What's the point of having more than one set of hidden-layer nodes?
  • Where's a good place to learn some basic calculus fundamentals?


Thank you very much.

ReBirth
« Reply #1 - Posted 2012-11-07 03:59:45 »

This is one of my subjects of study, and fortunately my strongest one. ANN is kinda general; what are you planning: backpropagation? Kohonen?

To your questions, IMO:
1. Yes, it can.
2. The more hidden layers you have, the better the adaptive/learning ability will be. For example, if you use it to recognize patterns, it can spot minor details. It can also reduce the error margin between iterations.
3. College :) Nobody wants to read calculus books at home*.

*) Applies to common people, especially non-gamer ones.

SkyAphid
« Reply #2 - Posted 2012-11-08 00:47:27 »

This is one of my subjects of study, and fortunately my strongest one. ANN is kinda general; what are you planning: backpropagation? Kohonen?

To your questions, IMO:
1. Yes, it can.
2. The more hidden layers you have, the better the adaptive/learning ability will be. For example, if you use it to recognize patterns, it can spot minor details. It can also reduce the error margin between iterations.
3. College :) Nobody wants to read calculus books at home*.

*) Applies to common people, especially non-gamer ones.

For one, I'm glad to have someone who's experienced in this, because I need a lot of help! Hahah.

Anyway, I need it to recognize mostly photos, as I'm working on an adaptive AI. Essentially it will be trained to recognize people and things, along with text. Every person/thing will be marked with a good/bad meter so it knows how to react to certain stimuli, and so on. I've chosen to give it "eyes" because I plan on welding some parts and making a nifty little robot arm or something for fun over the summer.

But yeah, I need it to recognize places and things. Problem is, I have no real idea how to, and a lot of the examples are written so mathematically that I struggle to comprehend them. I wanted to use backpropagation because it seemed best for allowing it to learn by itself in some cases.

theagentd
« Reply #3 - Posted 2012-11-08 01:17:03 »

3. College :) Nobody wants to read calculus books at home*.
Huh? I did calculus in my second and third years of high school...

ReBirth
« Reply #4 - Posted 2012-11-08 04:18:32 »

@theagentd
Homework doesn't count :)

Backpropagation is best used for prediction or data mining. For patterns like you said, you may need a Hopfield network. Actually, rather than making one yourself, there's already the Neuroph library, which is quite powerful, and you'll have a working network in less than 20 lines of code :)
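
For reference, a rough sketch of what that looks like with Neuroph's MultiLayerPerceptron on XOR (the calls shown are from Neuroph's documented API, though exact class names vary between versions):

import org.neuroph.core.data.DataSet;
import org.neuroph.core.data.DataSetRow;
import org.neuroph.nnet.MultiLayerPerceptron;

public class NeurophXor {
   public static void main(String[] args){
      //2 inputs, 3 hidden neurons, 1 output
      MultiLayerPerceptron mlp = new MultiLayerPerceptron(2, 3, 1);

      DataSet trainingSet = new DataSet(2, 1);
      trainingSet.addRow(new DataSetRow(new double[]{0, 0}, new double[]{0}));
      trainingSet.addRow(new DataSetRow(new double[]{0, 1}, new double[]{1}));
      trainingSet.addRow(new DataSetRow(new double[]{1, 0}, new double[]{1}));
      trainingSet.addRow(new DataSetRow(new double[]{1, 1}, new double[]{0}));

      mlp.learn(trainingSet); //trains with backpropagation by default

      mlp.setInput(1, 0);
      mlp.calculate();
      System.out.println("Output for [1, 0]: " + mlp.getOutput()[0]);
   }
}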

SkyAphid
« Reply #5 - Posted 2012-11-09 01:25:42 »

@theagentd
Homework doesn't count :)

Backpropagation is best used for prediction or data mining. For patterns like you said, you may need a Hopfield network. Actually, rather than making one yourself, there's already the Neuroph library, which is quite powerful, and you'll have a working network in less than 20 lines of code :)

It's a lot cooler to do it yourself. I thought Hopfield networks were slow learners?

ReBirth
« Reply #6 - Posted 2012-11-09 03:14:25 »

Yes, Hopfield needs more learning and training data.

SkyAphid
« Reply #7 - Posted 2012-11-11 22:52:46 »

Yes, Hopfield needs more learning and training data.
Sorry to reply so late, I've been busy.

Anyway, I've read in a lot of places that Hopfield is a bad technique because it can't fix errors. Is that true? Plus, the bigger ones apparently get pretty slow.

ReBirth
« Reply #8 - Posted 2012-11-12 00:34:53 »

I've never read about that, but considering that a Hopfield network uses a matrix as its "memory", it may be true.

Actually the same problem applies to neural networks too, but you can do a quick fix by adjusting the number of layers and the number of neurons in each.
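
To illustrate the "matrix as memory" point, here's a minimal Hopfield sketch, assuming bipolar (-1/+1) patterns: training is one-shot Hebbian learning into the weight matrix, and recall repeatedly thresholds the state against it until it settles:

public class Hopfield {
   private final int n;
   private final float[][] w; //the weight matrix is the network's "memory"

   public Hopfield(int n){
      this.n = n;
      this.w = new float[n][n];
   }

   //one-shot Hebbian learning: w += p * p^T, with a zero diagonal
   public void train(int[] pattern){ //entries must be -1 or +1
      for (int i = 0; i < n; i++){
         for (int j = 0; j < n; j++){
            if (i != j) w[i][j] += pattern[i] * pattern[j];
         }
      }
   }

   //recall: repeatedly threshold the weighted sums until the state stops changing
   public int[] recall(int[] input, int maxSteps){
      int[] state = input.clone();
      for (int step = 0; step < maxSteps; step++){
         boolean changed = false;
         for (int i = 0; i < n; i++){
            float sum = 0f;
            for (int j = 0; j < n; j++) sum += w[i][j] * state[j];
            int next = (sum >= 0) ? 1 : -1;
            if (next != state[i]){ state[i] = next; changed = true; }
         }
         if (!changed) break; //settled into a stored (or spurious) pattern
      }
      return state;
   }
}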

SkyAphid
« Reply #9 - Posted 2012-11-12 03:14:41 »

I've never read about that, but considering that a Hopfield network uses a matrix as its "memory", it may be true.

Actually the same problem applies to neural networks too, but you can do a quick fix by adjusting the number of layers and the number of neurons in each.
Alright, thanks. Any specific reads you recommend? Preferably stuff I can get off the internet, because I'm an impatient person and I don't want to order a book. Hahah

ReBirth
« Reply #10 - Posted 2012-11-12 03:19:26 »

Unfortunately I got them from books and college chairs :)

SkyAphid
« Reply #11 - Posted 2012-11-12 03:44:11 »

Unfortunately I got them from books and college chairs :)
You wouldn't happen to be interested in PM'ing me some of your notes, would you? lol

ReBirth
« Reply #12 - Posted 2012-11-12 04:02:32 »

I have no problem PM'ing/copying them; the problem is translating. They're not written in English :D

SkyAphid
« Reply #13 - Posted 2012-11-14 00:32:35 »

I have no problem PM'ing/copying them; the problem is translating. They're not written in English :D
If you could translate the important stuff and send it to me, I'd be very thankful. If they're written in a language with an alphabet relatively similar to English's, I can translate them myself, though.

ReBirth
« Reply #14 - Posted 2012-11-14 01:56:36 »

I can't promise anything. Keep searching for other sources though :D

Jono
« Reply #15 - Posted 2012-11-14 08:02:01 »

One thing is that the number of nodes in your hidden layer(s) looks the same as in your input layer. Two hidden nodes are the theoretical minimum for representing the XOR function, but training rarely converges with so few; try 3 or more.

Also, I've never heard of any value in more than two hidden layers, and I'm pretty sure that two is theoretically sufficient for any mapping from inputs to outputs (though the layers might have to be large in some cases).
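
(In terms of the ANN class above, whose hiddenLayers parameter effectively counts the nodes in a single hidden layer, that suggestion would look something like:

ANN ann = new ANN(2, 3, 1f); //2 inputs, 3 hidden nodes, ideal output 1 for the [1, 0] case
)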
krasse
« Reply #16 - Posted 2012-11-14 09:44:22 »

ANNs are a specialized case of non-linear optimization, which is a very tricky area, filled with black-art tricks.

Also, there is often a better alternative to an NN for a given problem, if you can figure out good features (there are some really good features for images that you can use), precalculate them, and use as much linear optimization as possible, etc.

Here is also a good resource:
http://sourceforge.net/projects/weka/
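
(For reference, a rough sketch of training one of Weka's built-in classifiers on a prepared feature file; the calls are from Weka's documented API, and the dataset path is hypothetical:

import weka.classifiers.functions.MultilayerPerceptron;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class WekaExample {
   public static void main(String[] args) throws Exception {
      //load a dataset in Weka's ARFF format (path hypothetical)
      Instances data = DataSource.read("features.arff");
      data.setClassIndex(data.numAttributes() - 1); //last attribute is the class

      MultilayerPerceptron mlp = new MultilayerPerceptron();
      mlp.buildClassifier(data);

      //classify the first instance
      double label = mlp.classifyInstance(data.instance(0));
      System.out.println("Predicted class: " + label);
   }
}
)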

Joshua Waring
« Reply #17 - Posted 2012-11-14 10:38:53 »

Are you doing the Neural Network course at www.coursera.com?
