Starting with a single neuron, no useful evolution can happen for many generations #34
Thanks @Ploppz, I noticed this behavior too, and I agree with your solution. We should be able to send a predefined net to the algorithm. For me, NEAT is only the base for the project; if we find better ways to do things, we should add them to the project.
Edit: I am talking about ways to boost the algorithm in general. We could also make it possible to start with a predefined genome of course. The paper splits neurons into input, hidden and output neurons.
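A minimal sketch of what seeding from a predefined genome could look like. The `Genome` struct and `seed_population` function here are hypothetical illustrations of the idea, not this crate's actual API; the perturbation is a deterministic stand-in for random weight mutation:

```rust
#[derive(Clone, Debug)]
struct Genome {
    // Hypothetical flat representation: one weight per connection gene.
    weights: Vec<f64>,
}

/// Hypothetical seeding step: instead of starting every organism from a
/// single unconnected neuron, clone a user-supplied genome and apply a
/// small perturbation so the initial population is already diverse.
fn seed_population(seed: &Genome, size: usize) -> Vec<Genome> {
    (0..size)
        .map(|i| {
            let mut g = seed.clone();
            // Toy deterministic perturbation; a real implementation
            // would mutate weights randomly.
            let delta = (i as f64) * 0.01;
            for w in g.weights.iter_mut() {
                *w += delta;
            }
            g
        })
        .collect()
}

fn main() {
    let seed = Genome { weights: vec![0.5, -0.3] };
    let pop = seed_population(&seed, 10);
    assert_eq!(pop.len(), 10);
    // The clones differ, so selection has variation to act on immediately.
    assert!(pop[0].weights[0] != pop[9].weights[0]);
    println!("population of {} seeded from one genome", pop.len());
}
```

The point of the sketch is only that the user supplies the starting genome; everything downstream (speciation, crossover, mutation) would stay unchanged.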
I'm at exactly the same point. When I started the project I thought the algorithm should be as standard as possible; I mean that the user should not configure the net, only call the algorithm with inputs and outputs and get results. I think that connecting inputs with outputs by default will improve performance; as you say, this implementation takes a lot of generations to make any improvement. I think the CTRNN discriminates between input and hidden neurons. This section is the most obscure one for me; I did some tests changing it, and I'm not sure it's well implemented. In the readme of the function_aproximation branch there is a link to the CTRNN paper.
Both in the XOR example and in my own attempts, I noticed something: for the first 100-200 generations, the output of `organism.activate` is `0.0`. So we are essentially waiting for any connection between input and output to appear, while no evolution other than random mutation can happen, because fitness will be constant for the organisms that only output `0.0`.

So I suggest either starting from a slightly more connected starting point, or finding a way to make the algorithm more 'eager' to add connections early on (but maybe this is not in line with the original algorithm), or letting the user specify a starting point (that is, a genome or NN architecture to start with).
Maybe it would already be a good improvement to connect the single starting neuron with all inputs and outputs.
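A sketch of the more connected starting point suggested above: build the initial genome with every input already wired to every output, so the first generation produces non-zero outputs and fitness differences appear immediately. The `ConnectionGene`/`Genome` structs and `fully_connected_seed` function are hypothetical, assuming inputs are numbered before outputs:

```rust
#[derive(Clone, Debug)]
struct ConnectionGene {
    in_node: usize,
    out_node: usize,
    weight: f64,
    enabled: bool,
    innovation: usize,
}

#[derive(Clone, Debug)]
struct Genome {
    connections: Vec<ConnectionGene>,
}

/// Build a starting genome where every input is connected to every
/// output, instead of starting from a single unconnected neuron.
fn fully_connected_seed(n_inputs: usize, n_outputs: usize) -> Genome {
    let mut connections = Vec::with_capacity(n_inputs * n_outputs);
    let mut innovation = 0;
    for i in 0..n_inputs {
        for o in 0..n_outputs {
            connections.push(ConnectionGene {
                in_node: i,
                out_node: n_inputs + o, // outputs numbered after inputs
                weight: 1.0,            // a real seed would randomize this
                enabled: true,
                innovation,
            });
            innovation += 1;
        }
    }
    Genome { connections }
}

fn main() {
    // XOR-sized network: 2 inputs, 1 output -> 2 initial connections.
    let g = fully_connected_seed(2, 1);
    println!("seed has {} connections", g.connections.len());
}
```

Because every seed connection gets the same innovation numbering, crossover between initial organisms would still line up gene-by-gene as in standard NEAT.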