Generate token #375
Conversation
If a token is not explicitly provided, let the first server generate a random one. Such a token is saved on the first server, and the playbook can retrieve it from there and store it as a fact. All other servers and agents can then use that token to join the cluster. It will be saved into their environment file as usual. Signed-off-by: Marko Vukovic <[email protected]>
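A minimal sketch of that flow, assuming k3s's default node-token path; the task names and the `token` variable are illustrative, not necessarily the exact tasks in this PR:

```yaml
# Hedged sketch: read the token the first server generated and share it
# with every host in the play. Task names and the `token` variable are
# assumptions, not necessarily what this PR uses.
- name: Read the token generated by the first server
  ansible.builtin.slurp:
    src: /var/lib/rancher/k3s/server/node-token
  register: node_token_b64
  run_once: true
  delegate_to: "{{ groups['server'][0] }}"

- name: Store the token as a fact on all hosts
  # set_fact combined with run_once propagates the fact to every host
  # in the current play batch.
  ansible.builtin.set_fact:
    token: "{{ node_token_b64.content | b64decode | trim }}"
  run_once: true
```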
This has been tried before. You need to test the case of …
Maybe I do not understand how Ansible works then. Doesn't the task "Init first server node" from roles/k3s_server/tasks/main.yml run first and terminate before "Start other server if any and verify status" runs? The former task will save the token, and it will always be available for the others being set up in the latter, or at least that is how I thought it would work. Setting up three servers all at once was actually the first test I ran, although it is possible that it was a fluke that it worked.
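For reference, that reading matches Ansible's default `linear` strategy: each task completes on every targeted host before the next task starts, so a fact registered on the first server can later be read from any host. A hedged sketch, with the variable name assumed:

```yaml
# Hedged sketch: a later task (or the agent role) looks up the fact
# that was registered on the first server via hostvars.
- name: Reuse the first server's token when none was provided
  ansible.builtin.set_fact:
    token: "{{ hostvars[groups['server'][0]].token }}"
  when: token is not defined
```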
If you got it working, that's great! I'm gonna pull down your PR and check it out sometime later today or Monday.
The CNCF requires that all commits be signed. Just follow the instructions: https://github.com/k3s-io/k3s-ansible/pull/375/checks?check_run_id=32818692853
Signed-off-by: Marko Vukovic <[email protected]>
Force-pushed from 0ed6ea6 to 9ee4f3f
When testing with the Vagrantfile, I see the following error: …
It's possible that the vagrant ansible provisioner works differently than a regular `ansible-playbook` run.
So, interesting results. For the 3-Pi cluster, the first time I tested with 3 servers, it installed fine. Then I ran the …
This is the exact same issue I ran into the first time I attempted to implement auto-generating tokens.
Run a server + agent inventory on the Raspberry Pi cluster and the playbook works, because those are separate roles, so they run sequentially (i.e. the server role gets executed, then the agent role). But the Vagrant provisioner just runs everything in parallel, so this system will never work there. I'm less concerned whether the Vagrantfile works; that can just be noted in the README.

You might want to look into https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_strategies.html#restricting-execution-with-throttle or other ways to control execution on nodes. It's possible there is some way of achieving …
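For reference, a play-level `serial: 1` is one hedged way to force sequential execution, while `throttle` caps per-task concurrency; a minimal sketch, with the role and group names assumed to match this repo:

```yaml
# Option 1 (play level): bring servers up one at a time, so the first
# server has finished and holds a token before the others start.
- name: Bring up k3s servers sequentially
  hosts: server
  serial: 1
  roles:
    - k3s_server

# Option 2 (task level): adding `throttle: 1` to a single task limits it
# to one host at a time, though it restricts concurrency rather than
# guaranteeing execution order.
```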
Do you still have the complete log of the playbook execution that you can attach here?
Okay, never mind, I just read my own error logs. Let me fix it.
I seem to have found a separate issue around …
OK, I shall push another commit to address the new batch of lint errors.
Can you add a comment above https://github.com/k3s-io/k3s-ansible/blob/master/Vagrantfile#L31 noting that a token variable is required for the vagrant ansible provisioner?
I'm happy to accept this PR if it works on "real" Ansible playbooks and Vagrant is just weird.
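One hedged way to satisfy that for Vagrant is to declare the token up front in the inventory; a sketch assuming the repo's `k3s_cluster` inventory layout, with a placeholder value:

```yaml
# inventory.yml (sketch): a fixed token declared for the whole cluster,
# since the Vagrant ansible provisioner configures nodes in parallel
# and cannot wait for a generated one. The value is a placeholder.
k3s_cluster:
  vars:
    token: "changeme-fixed-cluster-token"
```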
Signed-off-by: Marko Vukovic <[email protected]>
The token is still required when using Vagrant. Signed-off-by: Marko Vukovic <[email protected]>
Changes

If a token is not explicitly provided, let the first server generate a random one. Such a token is saved on the first server, and the playbook can retrieve it from there and store it as a fact. All other servers and agents can then use that token to join the cluster. It will be saved into their environment file as usual.

I tested this by creating a cluster of one server and then adding two more servers and one agent. Please let me know if I should try some other tests as well.

Linked Issues

#307