Database returns: Duplicate entry '2147483647' for key 'PRIMARY' #6176
Comments
Running a count on the kine table: since we just had 20k active rows, we dropped the indexes and ran an ALTER TABLE to change the id column to a larger type. The control-plane nodes start successfully after this.
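For reference, a minimal sketch of that row-count check, assuming the default kine table name used by k3s:

    -- count the active rows in the key-value table
    SELECT COUNT(*) FROM kine;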
This has been brought up in the past, although I'm not finding the issue at the moment. If we're going to migrate to another datatype, it should probably be something like BIGINT UNSIGNED. Can I ask how long it took to run the ALTER TABLE?
@brandond Here is the full output from the commands. Note: we did shut down k3s on all 3 control-plane servers before running anything.
Hit this problem, fixed it with an ALTER TABLE on the id column.
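A sketch of the likely form of that fix for MySQL, assuming the default kine table (the exact statement used here may have differed); per the discussion above, run it with k3s stopped on all servers:

    -- widen the primary key from signed 32-bit INT to BIGINT UNSIGNED,
    -- keeping the auto-increment behavior
    ALTER TABLE kine MODIFY COLUMN id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT;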
I need to evaluate the impact of running that statement while multiple servers are connected and active. I would love to proactively address this, but I am concerned that it will cause surprising outages during an upgrade if we just throw the switch by default.
Just for my own notes: it appears that we do not need to make any changes for sqlite, since its INTEGER PRIMARY KEY columns are already 64-bit.
So we only need to migrate mysql and postgres over to a larger integer column type.
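For postgres, a manual conversion would be along these lines (a sketch assuming the default kine table; postgres has no unsigned types, so signed 64-bit bigint is the target):

    -- widen the primary key from integer to bigint
    ALTER TABLE kine ALTER COLUMN id TYPE bigint;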
WIP: k3s-io/kine#273. The current idea is that new databases will be created with the new schema, while old databases are left alone. Users who want to opt in to manually converting an existing database to the new schema can set an option to do so.
Bumping out to v1.30
Validated using mysql as db on rke2 version v1.30.0-rc2+rke2r1
Environment Details
Infrastructure
Node(s) CPU architecture, OS, and Version:
Cluster Configuration:
Config.yaml:
Steps to validate
Validation results: type is seen as bigint unsigned
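One way to confirm the column type in mysql (a sketch; the exact check used in validation is not shown above):

    -- inspect the definition of the id column
    SHOW COLUMNS FROM kine LIKE 'id';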
Validated with RC v1.30.0-rc2+rke2r1 with PostgreSQL 15 as External DB
Environment Details
Infrastructure
Node(s) CPU architecture, OS, and Version:
Cluster Configuration:
Config.yaml:
Steps to validate
Validation results
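One way to confirm the column type in PostgreSQL (a sketch; the exact check used in validation is not shown above):

    -- look up the declared type of the id column
    SELECT data_type
    FROM information_schema.columns
    WHERE table_name = 'kine' AND column_name = 'id';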
Environmental Info:
K3s Version:
k3s version v1.22.12+k3s1 (17b9454)
go version go1.16.10
Node(s) CPU architecture, OS, and Version:
OS: Ubuntu 22.04.1 LTS
CPU: AMD EPYC 7R32
Linux ip-10-21-14-218 5.15.0-1020-aws #24-Ubuntu SMP Thu Sep 1 16:04:17 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
Cluster Configuration:
3 control-plane nodes
~200 agents
Database: AWS Aurora MySQL (5.7.mysql_aurora.2.10.2)
Cluster managed with Rancher, using Rancher Fleet for deployments to the nodes.
The cluster has been up and running for almost 18 months.
Describe the bug:
The control-plane nodes went crazy and were hammering the database with almost 200k connections. I have tried debug logs but they did not give me the root cause; using tcpdump I can see that MySQL replies: #23000 Duplicate entry '2147483647' for key 'PRIMARY'
We have reached the maximum allowed number for the id field: 2147483647 is the upper limit of MySQL's signed 32-bit INT type.
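A minimal check for how close a cluster is to that limit, assuming the default kine table; compare the result against 2147483647:

    -- highest id handed out so far
    SELECT MAX(id) FROM kine;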
Expected behavior:
control-plane nodes should be up and running, but if they encounter this error it should be visible in the logs.
Actual behavior:
control-plane nodes cannot start, and the error from MySQL was not visible in any logs.