I was thinking about mutation of neurons, for which GAs are perfect
.. somehow an NN needs to evolve; how would you go about it otherwise?
(I hope we're not derailing your thread, Ecumene)
NNs have dedicated training algos that are specialized to exploit NNs' structure; one of the better known is backpropagation. GAs can work as well, as seen in the link I provided, but I don't think they are optimal, since they assume nothing about the problem they are optimizing (the NN).
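Here's a minimal sketch of what GA-as-trainer looks like for a fixed-topology net, just to make the contrast concrete (my own toy example, not from the link): backprop would follow the gradient of the loss through the network, while the GA below only ever sees fitness scores and perturbs the weights blindly.

```python
# Mutation-only GA training a tiny 2-2-1 net on XOR (toy sketch).
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

def forward(w, x):
    # weights packed into a flat vector of length 9: w1 (2x2), b1 (2), w2 (2), b2 (1)
    w1, b1 = w[:4].reshape(2, 2), w[4:6]
    w2, b2 = w[6:8], w[8]
    h = np.tanh(x @ w1 + b1)
    return 1 / (1 + np.exp(-(h @ w2 + b2)))    # sigmoid output

def fitness(w):
    return -np.mean((forward(w, X) - y) ** 2)  # negative MSE: higher is better

pop = rng.normal(size=(50, 9))                 # 50 random weight vectors
for gen in range(500):
    scores = np.array([fitness(w) for w in pop])
    elite = pop[np.argsort(scores)[-10:]]      # keep the 10 fittest
    children = elite[rng.integers(0, 10, size=40)] + rng.normal(scale=0.1, size=(40, 9))
    pop = np.vstack([elite, children])         # elitism + mutated offspring

best = max(pop, key=fitness)
print(np.round(forward(best, X), 2))           # typically close to [0. 1. 1. 0.]
```

Note the GA never needs the network to be differentiable, which is its one real advantage here; the price is that it ignores all the gradient information backprop exploits.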
A guy I watch on Twitch has been experimenting with GA-on-top-of-NN for learning to play Super Mario: http://www.twitch.tv/sethbling/profile/past_broadcasts
(the last few broadcasts)
The NN is what plays the game (a function from tile data around Mario -> controller inputs)
The GA optimizes this function towards the goal (getting as far right in the level as possible)
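The outer loop is roughly the sketch below. The emulator interface here (load_state, read_tiles_around_mario, step, is_dead, mario_x) is hypothetical, made up for illustration, not SethBling's actual code:

```python
def evaluate(genome, emulator, max_frames=2000):
    """Fitness of one genome = how far right Mario gets before dying or timing out."""
    emulator.load_state("level_start")        # hypothetical emulator API
    net = genome.build_network()              # genome encodes the NN's weights
    for _ in range(max_frames):
        tiles = emulator.read_tiles_around_mario()
        buttons = net.forward(tiles)          # tile data -> controller inputs
        emulator.step(buttons)
        if emulator.is_dead():
            break
    return emulator.mario_x()                 # rightmost x position reached
```

The GA then just breeds genomes with high evaluate() scores, exactly like any other fitness function.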
GA = known result, unknown data
NN = known result, unknown function
Specifically, the GA's 'data' is the argument to a known function whose return value you are trying to optimize, whereas the NN's 'function' is itself the unknown being searched for.
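A toy example of the first case, to make the distinction concrete (again my own example, nothing from the thread):

```python
# "Known result, unknown data": f is fixed and known; the GA searches
# for the input x that maximizes it.
import numpy as np

rng = np.random.default_rng(1)

def f(x):
    return -(x - 3.0) ** 2   # maximum is at x = 3

xs = rng.normal(size=20)                             # 20 candidate inputs
for gen in range(100):
    best = xs[np.argsort(f(xs))[-5:]]                # keep the 5 best inputs
    xs = best.repeat(4) + rng.normal(scale=0.1, size=20)
print(xs[np.argmax(f(xs))])                          # -> approx 3.0

# "Known result, unknown function": the (x, y) pairs are fixed and known,
# and the search is over the weights of a parametric function, as in the
# XOR sketch above. The thing being optimized is the function itself,
# not its argument.
```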
NNs approximate functions but do not themselves perform optimization; that is the job of the training algo, which of course can be any optimization algorithm, including a GA.
In this sense I say they are orthogonal.