
Starting with a single neuron, no useful evolution can happen for many generations #34

Ploppz opened this issue Feb 20, 2019 · 3 comments

Ploppz commented Feb 20, 2019

Both in the XOR example and in my own attempts, I noticed something: for the first 100-200 generations, the output of organism.activate is 0.0. So we are essentially waiting for any connection between input and output to appear, and in the meantime no evolution other than random mutation can happen, because fitness is constant across all the organisms that only output 0.0.
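To make the observation concrete, here is a hypothetical sketch of an XOR fitness function (xor_fitness is my own name, not this repo's API): while every organism outputs 0.0, they all receive exactly the same score, so selection has nothing to act on.

```rust
/// Score a candidate on the four XOR cases: 4.0 minus the total squared error.
fn xor_fitness(activate: impl Fn(f64, f64) -> f64) -> f64 {
    let cases = [(0.0, 0.0, 0.0), (0.0, 1.0, 1.0), (1.0, 0.0, 1.0), (1.0, 1.0, 0.0)];
    let error: f64 = cases
        .iter()
        .map(|&(a, b, target)| (activate(a, b) - target).powi(2))
        .sum();
    4.0 - error
}

fn main() {
    // A net with no input->output path always returns 0.0, so every such
    // organism scores exactly 2.0 and selection cannot distinguish them.
    println!("{}", xor_fitness(|_, _| 0.0)); // prints 2
}
```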

So I suggest either starting from a slightly more connected starting point, finding a way to make the algorithm more 'eager' to add connections early on (though maybe that is not in line with the original algorithm), or letting the user specify a starting point (that is, a genome or NN architecture to start with).
Maybe it would already be a good improvement to connect the one start neuron to all inputs and outputs.
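As a rough illustration of that seeded starting point, here is a sketch of a genome initializer that wires every input to every output from generation 0 (Genome, ConnectionGene and fully_connected are my own hypothetical names, not this crate's actual API):

```rust
struct ConnectionGene {
    in_neuron: usize,
    out_neuron: usize,
    weight: f64,
    innovation: usize,
}

struct Genome {
    n_inputs: usize,
    n_outputs: usize,
    connections: Vec<ConnectionGene>,
}

/// Build a genome with one connection gene from every input to every output,
/// so fitness can differ between organisms from the very first generation.
fn fully_connected(n_inputs: usize, n_outputs: usize) -> Genome {
    let mut connections = Vec::new();
    for i in 0..n_inputs {
        for o in 0..n_outputs {
            connections.push(ConnectionGene {
                in_neuron: i,
                out_neuron: n_inputs + o, // outputs are numbered after the inputs
                weight: 1.0,              // in practice this would be randomized
                innovation: i * n_outputs + o,
            });
        }
    }
    Genome { n_inputs, n_outputs, connections }
}
```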

TLmaK0 commented Feb 21, 2019

Thanks @Ploppz, I noticed this behavior too, and I agree with your solution. We should be able to send a predefined net to the algorithm. For me, NEAT is only the base for the project; if we find better ways to do things, we should add them to the project.

Ploppz commented Feb 21, 2019

Edit: I am talking about ways to boost the algorithm in general. We could also make it possible to start with a predefined genome of course.
Edit 2: I rewrote this after reading the paper.

The paper splits neurons into input, hidden and output neurons.
I think we should in any case start with n_inputs + n_outputs neurons.
Then some questions are:

  • Whether we should discriminate between input, hidden and output neurons.
    Specifically, I'm thinking that maybe no connections should be added among input neurons, nor among output neurons (see the sketch after this list). But maybe I'm wrong; I'm not sure how the Ctrnn works, and maybe such connections do have some effect.

  • Whether we should start with connections between inputs and outputs. The paper seems to do that, but it also seems OK not to, because the paper also talks about starting as minimally as possible (and in that case, it would probably be beneficial to disallow connections within the input and the output neuron groups).
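If we do go with the role split, a minimal sketch of the mutation guard could look like this (NeuronKind and connection_allowed are hypothetical names, not rustneat's API):

```rust
#[derive(Clone, Copy, PartialEq)]
enum NeuronKind {
    Input,
    Hidden,
    Output,
}

/// Decide whether an add-connection mutation may create `from -> to`.
/// Connections within the input group and within the output group are
/// rejected; everything else (input->output, input->hidden, recurrent
/// hidden->hidden, ...) is allowed.
fn connection_allowed(from: NeuronKind, to: NeuronKind) -> bool {
    !(from == NeuronKind::Input && to == NeuronKind::Input)
        && !(from == NeuronKind::Output && to == NeuronKind::Output)
}
```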

TLmaK0 commented Feb 22, 2019

I'm at exactly the same point. When I started the project, I was thinking that the algorithm should be as standard as possible: the user should not configure the net, only call the algorithm with inputs and outputs and get results. I think that connecting inputs with outputs by default will improve performance; as you say, this implementation takes a lot of generations to make any improvement.

I think the Ctrnn discriminates between input and hidden neurons. That part is the most obscure one for me; I did some tests changing it, and I'm not sure it's well implemented. In the readme of the function_aproximation branch there is a link to the Ctrnn paper.
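For reference, the standard Ctrnn dynamics are tau_i * dy_i/dt = -y_i + sum_j w_ij * sigma(y_j + theta_j) + I_i; below is a minimal forward-Euler sketch of one step (all names are mine, not this repo's). In this formulation, input neurons differ from hidden ones only in receiving a nonzero external input I:

```rust
fn sigmoid(x: f64) -> f64 {
    1.0 / (1.0 + (-x).exp())
}

/// One forward-Euler step of
/// tau[i] * dy[i]/dt = -y[i] + sum_j w[i][j] * sigma(y[j] + theta[j]) + input[i].
fn ctrnn_step(
    y: &mut [f64],  // membrane potentials, updated in place
    w: &[Vec<f64>], // w[i][j]: weight from neuron j into neuron i
    theta: &[f64],  // per-neuron bias
    tau: &[f64],    // per-neuron time constant
    input: &[f64],  // external input; nonzero only for input neurons
    dt: f64,
) {
    let n = y.len();
    let mut dy = vec![0.0; n];
    for i in 0..n {
        let drive: f64 = (0..n).map(|j| w[i][j] * sigmoid(y[j] + theta[j])).sum();
        dy[i] = (dt / tau[i]) * (-y[i] + drive + input[i]);
    }
    for i in 0..n {
        y[i] += dy[i];
    }
}
```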
