Java-Gaming.org    
  Back propagation algorithm  (Read 4832 times)
Offline JavaSnob77
Junior Newbie

You got JavaServed!
« Posted 2005-01-23 21:28:04 »

When using the backpropagation algorithm for training a neural network, what are good values for learning rate and momentum?  Thanks in advance.

-Rob
Offline digitprop
Junior Member
« Reply #1 - Posted 2005-01-26 08:53:22 »

Impossible to say without knowing more about your network. Other factors such as the number of neurons in each layer are equally important and interdependent with the learning rate.

I suggest setting up an analysis tool to see how the network performs over a range of parameter values, and determining the optimal learning rate experimentally.

In the simplest case, the 'analysis tool' is just a for() loop that increases the learning rate at each step and logs whether, and how fast, the network converges.
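As a sketch of such a sweep (purely illustrative, not code from this thread): a throwaway 2-2-1 sigmoid net trained on XOR, with a for() loop stepping the learning rate and logging the final error. All class and method names here are made up for the example:

```java
import java.util.Random;

public class LrSweep {
    // Train a tiny 2-2-1 sigmoid net on XOR for a fixed number of epochs
    // and return the final mean squared error over the four patterns.
    static double trainXor(double lr, int epochs, long seed) {
        double[][] x = {{0,0},{0,1},{1,0},{1,1}};
        double[] t = {0, 1, 1, 0};
        Random rnd = new Random(seed);
        double[][] w1 = new double[2][2];       // input -> hidden weights
        double[] b1 = new double[2];            // hidden biases
        double[] w2 = new double[2];            // hidden -> output weights
        double b2 = 0;                          // output bias
        for (int i = 0; i < 2; i++) {
            b1[i] = rnd.nextGaussian();
            w2[i] = rnd.nextGaussian();
            for (int j = 0; j < 2; j++) w1[i][j] = rnd.nextGaussian();
        }
        double mse = 0;
        for (int e = 0; e < epochs; e++) {
            mse = 0;
            for (int p = 0; p < 4; p++) {
                // Forward pass
                double[] h = new double[2];
                for (int i = 0; i < 2; i++)
                    h[i] = sigmoid(w1[i][0]*x[p][0] + w1[i][1]*x[p][1] + b1[i]);
                double y = sigmoid(w2[0]*h[0] + w2[1]*h[1] + b2);
                double err = t[p] - y;
                mse += err * err;
                // Backward pass (plain online backprop, no momentum)
                double dy = err * y * (1 - y);               // output delta
                for (int i = 0; i < 2; i++) {
                    double dh = dy * w2[i] * h[i] * (1 - h[i]); // hidden delta
                    w2[i] += lr * dy * h[i];
                    w1[i][0] += lr * dh * x[p][0];
                    w1[i][1] += lr * dh * x[p][1];
                    b1[i] += lr * dh;
                }
                b2 += lr * dy;
            }
            mse /= 4;
        }
        return mse;
    }

    static double sigmoid(double z) { return 1 / (1 + Math.exp(-z)); }

    public static void main(String[] args) {
        // The 'analysis tool': step the learning rate and log the final error.
        for (double lr = 0.1; lr <= 2.0; lr += 0.4)
            System.out.printf("lr=%.1f  final MSE=%.4f%n",
                              lr, trainXor(lr, 5000, 42));
    }
}
```

Fixing the random seed makes runs comparable across learning rates; in practice you would also average over several seeds, since a single initialization can make one rate look unfairly good or bad.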

M. Fischer . www.digitprop.com
Offline bodoelod
Innocent Bystander

Java games rock!
« Reply #2 - Posted 2005-02-15 08:33:18 »

I would really need a full backprop algorithm implemented in Java. I hope somebody can help me... thanks.
My mail address: bodoelod@yahoo.com
Offline t_larkworthy
Senior Member
Medals: 1
Projects: 1

Google App Engine Rocks!
« Reply #3 - Posted 2005-09-12 17:45:52 »

It's an absolute nightmare trying to train a NN. Really it needs to be done by hand, and you need to monitor the weight changes by eye. You can then bump the network when it gets stuck in a local minimum. Luckily there is a pretty good tool you can do this with: JOONE.

Runesketch: an Online CCG built on Google App Engine where players draw their cards and trade. Fight, draw or trade yourself to success.
Offline barfy
Junior Member

The evidence of things not seen
« Reply #4 - Posted 2005-10-19 10:11:07 »

You can think of a standard, feed forward, multi-layer neural net as simply a function approximator.

A high 'learning rate' will mean that your NN converges faster. The tradeoff is a larger chance of settling at sub-optimal values in a local minimum.

A high 'momentum' will mean that your NN has a greater tendency to "climb" out of a local minimum, so it has a lesser chance of converging on a sub-optimal minimum. The tradeoff is that it could also get pushed out of the global minimum (which is where you want the solution to end up).

The tricky thing is to find magic values for both the 'learning rate' and 'momentum' that give you the best results for your NN - that usually requires a lot of experimentation and tweaking. I would suggest starting with a small 'learning rate' and a high 'momentum'.
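As a concrete illustration (not from this thread): the momentum update is usually written as dw(t) = -lr * gradient + momentum * dw(t-1), so each step carries some of the previous step's direction. The sketch below applies it to a toy 1D error surface E(w) = (w - 3)^2; all names are illustrative:

```java
public class MomentumDemo {
    // Gradient of a toy 1D error surface E(w) = (w - 3)^2, minimum at w = 3.
    static double grad(double w) { return 2 * (w - 3); }

    // Gradient descent with momentum: dw = -lr*grad(w) + momentum*prevDw.
    // With momentum = 0 this reduces to plain gradient descent.
    static double descend(double lr, double momentum, int steps) {
        double w = 0, dw = 0;
        for (int i = 0; i < steps; i++) {
            dw = -lr * grad(w) + momentum * dw;
            w += dw;
        }
        return w;
    }

    public static void main(String[] args) {
        // With the same small learning rate, the momentum run gets much
        // closer to the minimum at w = 3 in the same number of steps.
        System.out.println("plain    : " + descend(0.01, 0.0, 200));
        System.out.println("momentum : " + descend(0.01, 0.9, 200));
    }
}
```

On this single-minimum surface momentum only speeds convergence; the "climbing out of a local minimum" effect described above shows up on surfaces with several dips, where the accumulated dw can carry the weight over a shallow one.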
Offline lowlife
Junior Newbie
« Reply #5 - Posted 2005-12-20 16:58:44 »

When it comes to neural networks, there are no definite answers; empirical studies and techniques are frequently used. Regarding your question about momentum and learning rate, the situation is as described in the previous answers. I would suggest you start with low values for both (in a typical feedforward MLP, 0.3 for the learning rate and 0.2 for momentum are 'nice' values to start with). Monitor your network and estimate its performance over a separate testing set (kept hidden from training) to see which values work best for your case, and tune accordingly.

Moreover, there are a couple of other things you can do to combat the local minima problem:

- Use the stochastic approximation to gradient descent, which traverses multiple error surfaces (one distinct surface per training example) and in effect averages over them in order to converge.

- Train multiple neural networks with different parameters (learning rate, momentum, initial weights) over the same training set, then evaluate and choose the best-performing one.

- Use a separate validation set in order to stop training in time and avoid overfitting.
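The early-stopping idea in the last point could be sketched like this (illustrative only; `valErrorAtEpoch` is a stand-in for evaluating the net on the held-out validation set after each training epoch):

```java
import java.util.function.IntToDoubleFunction;

public class EarlyStopping {
    // Train for up to maxEpochs, but stop once the validation error has not
    // improved for 'patience' consecutive epochs; return the best epoch seen.
    static int train(IntToDoubleFunction valErrorAtEpoch,
                     int maxEpochs, int patience) {
        double bestErr = Double.MAX_VALUE;
        int bestEpoch = 0, sinceBest = 0;
        for (int epoch = 0; epoch < maxEpochs; epoch++) {
            // (one epoch of backprop over the training set would go here)
            double err = valErrorAtEpoch.applyAsDouble(epoch);
            if (err < bestErr) {
                bestErr = err; bestEpoch = epoch; sinceBest = 0;
            } else if (++sinceBest >= patience) {
                break;   // validation error rising: likely overfitting, stop
            }
        }
        return bestEpoch;
    }

    public static void main(String[] args) {
        // Toy validation curve: improves until epoch 30, then worsens.
        int best = train(e -> Math.abs(e - 30) / 30.0 + 0.1, 1000, 5);
        System.out.println("best epoch = " + best);
    }
}
```

In a real setup you would also snapshot the weights at each new best epoch, so that when training stops you can restore the network from the best point rather than the last one.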
Offline tkr
Senior Newbie

Java games rock!
« Reply #6 - Posted 2005-12-21 06:51:28 »

Try out http://www-ra.informatik.uni-tuebingen.de/SNNS/ . It's a really cool tool for setting up and understanding NNs.