Evolving neural network. Survival agents. Part II
Offline UnuntuMDJ
« Posted 2018-01-06 14:25:13 »



A regular video report on my neural network results. I added enemies, and you can see that the agents start to avoid them. In the video, the agents trained for about 40 minutes, and the 8 best were chosen to continue training. I used a genetic algorithm combined with a neural network. The agent's input signals are: 1 - the agent's health, 2 - distance to the closest food, 3 - angle to the closest food, 4 - distance to the closest enemy, 5 - angle to the closest enemy. An agent can only see within its field of view (the gray circle).
An agent needs to increase its fitness in order to reproduce; fitness depends on food eaten (fitness increases) and enemy contact (fitness decreases).
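
For anyone curious how those five signals and the fitness rule might look in code, here is a minimal Java sketch. The class, field names, the distance normalization, and the +1/-1 fitness values are my own assumptions for illustration, not the author's actual implementation.

Code:
// Hypothetical sketch of the five input signals and the fitness rule described above.
public class Agent {

    double health;            // signal 1: the agent's health
    double fitness;           // accumulated fitness, used to pick parents
    double viewRadius = 100;  // the gray circle: the agent only senses inside this radius

    /** Builds the 5-element input vector fed to the neural network. */
    double[] senseInputs(double foodDist, double foodAngle,
                         double enemyDist, double enemyAngle) {
        boolean foodVisible  = foodDist  <= viewRadius;
        boolean enemyVisible = enemyDist <= viewRadius;
        return new double[] {
            health,                                       // 1: agent's health
            foodVisible  ? foodDist  / viewRadius : 1.0,  // 2: closest food distance (normalized)
            foodVisible  ? foodAngle : 0.0,               // 3: closest food angle
            enemyVisible ? enemyDist / viewRadius : 1.0,  // 4: closest enemy distance (normalized)
            enemyVisible ? enemyAngle : 0.0               // 5: closest enemy angle
        };
    }

    /** Fitness goes up when food is eaten and down on enemy contact. */
    void onFoodEaten()    { fitness += 1.0; }
    void onEnemyContact() { fitness -= 1.0; }
}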

P.S. This is a continuation of the topic "Neural Network and Qlearning. Survival agents."
Thx.

http://www.youtube.com/v/VtVXNMB5DJE?version=3&hl=en_US&start=
Offline meva

Senior Newbie


Exp: 3 years



« Reply #1 - Posted 2018-01-13 09:28:19 »

Great job:)!

The bots behave very realistically.
Did you compare your solution with other AI algorithms?

I saw in the video that you are using four layers: an input layer, two hidden layers, and an output layer.
Usually three layers are enough for a multilayer perceptron. Did you experiment with a different number of layers?
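
Just to make the "three vs. four layers" point concrete, here is a toy Java forward pass with a configurable layer layout; this is not the author's code, and the layer sizes, random initialization and tanh activation are my own assumptions.

Code:
import java.util.Random;

/** Toy multilayer perceptron with a configurable layer layout, for example
 *  new int[]{5, 8, 8, 4} for a four-layer net (input, two hidden, output)
 *  or new int[]{5, 8, 4} for a three-layer one. Purely illustrative. */
public class Mlp {

    private final double[][][] weights; // weights[layer][outputNeuron][inputNeuron (+ bias)]

    public Mlp(int[] layerSizes, Random rnd) {
        weights = new double[layerSizes.length - 1][][];
        for (int l = 0; l < weights.length; l++) {
            weights[l] = new double[layerSizes[l + 1]][layerSizes[l] + 1]; // last column is the bias
            for (double[] row : weights[l])
                for (int i = 0; i < row.length; i++)
                    row[i] = rnd.nextGaussian() * 0.5;
        }
    }

    /** Feeds the input (e.g. the agent's five signals) through every layer. */
    public double[] forward(double[] input) {
        double[] activation = input;
        for (double[][] layer : weights) {
            double[] next = new double[layer.length];
            for (int o = 0; o < layer.length; o++) {
                double sum = layer[o][activation.length];          // bias term
                for (int i = 0; i < activation.length; i++)
                    sum += layer[o][i] * activation[i];
                next[o] = Math.tanh(sum);
            }
            activation = next;
        }
        return activation;
    }
}

Usage would then be something like: double[] outputs = new Mlp(new int[]{5, 8, 8, 4}, new Random()).forward(inputs);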

Offline UnuntuMDJ
« Reply #2 - Posted 2018-01-13 20:21:22 »

Thanks for the reply :) Yes, the four layers are just for testing. In the future I want every agent to start training with a randomly sized network (a random number of layers and random layer widths), so that after reproduction, two agents with different neural networks produce a new agent with a new neural network through crossover. At the moment I only mutate the weights, but the result makes me happy anyway, LOL :)
In the first version I tried to implement Q-learning combined with a neural network, but a lack of knowledge didn't let me do it :)
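
Since only the weights are mutated at the moment, here is a rough Java sketch of what that step of a genetic algorithm typically looks like; the mutation rate and noise scale are invented values, not taken from the project.

Code:
import java.util.Random;

/** Rough sketch of a "mutate the weights only" step in a genetic algorithm. */
public class WeightMutation {

    private static final double MUTATION_RATE = 0.05; // chance of mutating each weight (assumed)
    private static final double NOISE_SCALE   = 0.3;  // size of the Gaussian perturbation (assumed)

    /** Returns a mutated copy of a parent's flattened weight vector. */
    public static double[] mutate(double[] parentWeights, Random rnd) {
        double[] child = parentWeights.clone();
        for (int i = 0; i < child.length; i++) {
            if (rnd.nextDouble() < MUTATION_RATE) {
                child[i] += rnd.nextGaussian() * NOISE_SCALE;
            }
        }
        return child;
    }
}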
Offline meva

Senior Newbie


Exp: 3 years



« Reply #3 - Posted 2018-01-16 12:16:40 »

Hi

Thanks, really nice job.
Are you going to prepare a library based on your solution, or do you just want to use it in your games?
Offline UnuntuMDJ
« Reply #4 - Posted 2018-01-16 17:24:57 »

To be honest, I have not thought about this :)
Thanks again for your comment.