  Back propagation algorithm  (Read 5371 times)
Offline JavaSnob77

Junior Newbie

You got JavaServed!

« Posted 2005-01-23 21:28:04 »

When using the backpropagation algorithm for training a neural network, what are good values for learning rate and momentum?  Thanks in advance.

Offline digitprop

Junior Devvie

« Reply #1 - Posted 2005-01-26 08:53:22 »

Impossible to say without knowing more about your network. Other factors such as the number of neurons in each layer are equally important and interdependent with the learning rate.

I suggest setting up an analysis tool to see how the network performs for a range of parameter values, and determining the optimal learning rate experimentally.

In the simplest case, the 'analysis tool' is just a for() loop where you increase the learning rate in each step and log if and how fast the network converges.
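To make that loop concrete, here is a toy sketch: plain gradient descent on f(w) = w² stands in for the real network (the class name, cutoffs, and rate range are all invented for the example), and we log how many steps each learning rate needs before converging.

```java
// Toy learning-rate sweep: gradient descent on f(w) = w^2, logging how many
// steps each rate needs to reach |w| < 1e-3 (capped at 10000 steps).
// Rates at or above 1.0 diverge/oscillate on this function.
public class RateSweep {
    static int stepsToConverge(double rate) {
        double w = 1.0;                       // starting weight
        for (int step = 1; step <= 10000; step++) {
            w -= rate * 2 * w;                // gradient of w^2 is 2w
            if (Math.abs(w) < 1e-3) return step;
        }
        return -1;                            // did not converge
    }

    public static void main(String[] args) {
        for (double rate = 0.1; rate < 1.05; rate += 0.1) {
            int steps = stepsToConverge(rate);
            System.out.printf("rate=%.1f -> %s%n", rate,
                steps < 0 ? "no convergence" : steps + " steps");
        }
    }
}
```

With a real network you would replace `stepsToConverge` with a full training run and log the epoch count at which the error drops below a threshold.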

M. Fischer.
Offline bodoelod

Innocent Bystander

Java games rock!

« Reply #2 - Posted 2005-02-15 08:33:18 »

I would really need a full backprop algorithm implemented in Java. I hope somebody can help me... thanks.
My mail address:
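For reference, a minimal from-scratch backprop sketch in Java (a 2-3-1 sigmoid network; the class name, layer sizes, and initialization are all chosen for the example, not taken from any library):

```java
import java.util.Random;

// Minimal single-hidden-layer backprop network (2 inputs, 3 hidden, 1 output,
// sigmoid activations). An illustrative sketch, not a production library.
public class TinyBackprop {
    final int nIn = 2, nHid = 3;
    double[][] wHid = new double[nHid][nIn + 1];   // last column is the bias
    double[] wOut = new double[nHid + 1];          // last entry is the bias
    double[] hidden = new double[nHid];            // activations from forward()

    TinyBackprop(long seed) {
        Random r = new Random(seed);
        for (double[] row : wHid)
            for (int i = 0; i < row.length; i++) row[i] = r.nextDouble() - 0.5;
        for (int i = 0; i < wOut.length; i++) wOut[i] = r.nextDouble() - 0.5;
    }

    static double sigmoid(double x) { return 1.0 / (1.0 + Math.exp(-x)); }

    double forward(double[] in) {
        for (int h = 0; h < nHid; h++) {
            double sum = wHid[h][nIn];             // bias term
            for (int i = 0; i < nIn; i++) sum += wHid[h][i] * in[i];
            hidden[h] = sigmoid(sum);
        }
        double sum = wOut[nHid];                   // output bias
        for (int h = 0; h < nHid; h++) sum += wOut[h] * hidden[h];
        return sigmoid(sum);
    }

    // One backprop step on a single example; returns the squared error.
    double train(double[] in, double target, double rate) {
        double out = forward(in);
        double dOut = (target - out) * out * (1 - out);      // output delta
        for (int h = 0; h < nHid; h++) {
            // hidden delta must use the OLD output weight, so compute it first
            double dHid = dOut * wOut[h] * hidden[h] * (1 - hidden[h]);
            wOut[h] += rate * dOut * hidden[h];
            for (int i = 0; i < nIn; i++) wHid[h][i] += rate * dHid * in[i];
            wHid[h][nIn] += rate * dHid;                     // hidden bias
        }
        wOut[nHid] += rate * dOut;                           // output bias
        return (target - out) * (target - out);
    }
}
```

Trained on XOR for a few thousand epochs with a rate around 0.5, the error usually drops well below its initial value; whether it fully converges depends on the random seed, as the replies below discuss.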
Offline t_larkworthy

Senior Devvie

Medals: 1
Projects: 1

Google App Engine Rocks!

« Reply #3 - Posted 2005-09-12 17:45:52 »

It's an absolute nightmare trying to train a NN. Really it needs to be done by hand, and you need to monitor the weight changes by eye. You can then bump the network when it gets stuck in a local minimum. Luckily there is a pretty good tool you can do this with, JOONE.

Runesketch: an Online CCG built on Google App Engine where players draw their cards and trade. Fight, draw or trade yourself to success.
Offline barfy

Junior Devvie

The evidence of things not seen

« Reply #4 - Posted 2005-10-19 10:11:07 »

You can think of a standard, feed forward, multi-layer neural net as simply a function approximator.

A high 'learning rate' will mean that your NN converges faster. The tradeoff is a larger chance of converging on sub-optimal values in a local minimum.

A high 'momentum' will mean that your NN has a greater tendency to "climb" out of a local minimum, so it has less chance of converging on a sub-optimal minimum. The tradeoff is that it could also get pushed out of the global minimum (which is where you want the solution to converge).

The tricky thing is to find magic values for both the 'learning rate' and 'momentum' that give you the best results for your NN - that usually requires a lot of experimentation and tweaking. I would suggest starting with a small 'learning rate' and high 'momentum'.
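The standard momentum update can be sketched as follows: each weight change is the gradient step plus a fraction of the previous change, delta_w(t) = -rate * gradient + momentum * delta_w(t-1). The class and field names here are invented for the example, and the starting values reflect the "small rate, high momentum" suggestion above.

```java
// Sketch of the classic momentum weight update:
//   delta_w(t) = -rate * gradient + momentum * delta_w(t-1)
public class MomentumUpdate {
    double rate = 0.1;       // small learning rate
    double momentum = 0.9;   // high momentum
    double[] velocity;       // previous delta_w, one entry per weight

    MomentumUpdate(int nWeights) { velocity = new double[nWeights]; }

    void step(double[] weights, double[] gradient) {
        for (int i = 0; i < weights.length; i++) {
            velocity[i] = -rate * gradient[i] + momentum * velocity[i];
            weights[i] += velocity[i];
        }
    }
}
```

Because the velocity carries over between steps, a weight that keeps receiving gradients in the same direction accelerates, which is exactly what lets it coast through shallow local minima.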


Offline lowlife

Junior Newbie

« Reply #5 - Posted 2005-12-20 16:58:44 »

When it comes to neural networks, there are no definite answers; empirical studies and techniques are frequently used. As for momentum and learning rate, the situation is as described in the previous answers. I would suggest you start with low values for both (in a typical feedforward MLP, 0.3 for the learning rate and 0.2 for momentum are 'nice' values to start with). Monitor your network and estimate its performance over a separate test set (held out from training) to see which values work best for your case, and tune accordingly.

Moreover there are a couple of other things you can do to combat the local minima problem:

Try the stochastic approximation to gradient descent, which descends a distinct error surface for each training example; averaged over many examples it approximates true gradient descent, and the per-example noise can help it slip past shallow local minima.

Train multiple neural networks using different parameters (rate, momentum, initial weights) over the same
training set. Evaluate and choose the best performing one.

Use a separate validation set in order to stop training in time and avoid overfitting.
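The third point, validation-based early stopping, can be sketched like this. The toy `valError` curve below (falling, then rising as the network overfits) stands in for evaluating a real network on the validation set; all names and the `patience` scheme are choices made for the example.

```java
// Sketch of validation-based early stopping: stop when the validation error
// has not improved for `patience` consecutive epochs, keeping the best epoch.
public class EarlyStopping {
    // Toy validation error: falls to a minimum at epoch 20, then rises
    // again (overfitting). A real run would evaluate the network here.
    static double valError(int epoch) {
        return Math.pow(epoch - 20, 2) / 400.0 + 0.1;
    }

    static int bestEpoch(int maxEpochs, int patience) {
        double best = Double.MAX_VALUE;
        int bestAt = 0, sinceBest = 0;
        for (int epoch = 0; epoch < maxEpochs; epoch++) {
            double err = valError(epoch);
            if (err < best) { best = err; bestAt = epoch; sinceBest = 0; }
            else if (++sinceBest >= patience) break;   // stop early
        }
        return bestAt;                                 // weights to keep
    }

    public static void main(String[] args) {
        System.out.println("best epoch = " + bestEpoch(100, 5));
    }
}
```

In practice you would also snapshot the weights at each new best epoch, so that "stopping" means restoring the snapshot rather than keeping the overfitted final weights.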
Offline tkr

Senior Newbie

Java games rock!

« Reply #6 - Posted 2005-12-21 06:51:28 »

Try out: . It's a really cool tool for setting up and understanding NNs.