
changes between 1.28 and 1.29+ lead to decrease in docker/daemon.json options (aka daemon.json is being always overwritten) #16323

Closed
dmpe opened this issue Apr 14, 2023 · 14 comments
Labels
co/runtime/docker Issues specific to a docker runtime kind/bug Categorizes issue or PR as related to a bug.

Comments

@dmpe
Contributor

dmpe commented Apr 14, 2023

What Happened?

Hi,

We are using Centos 7 VM where we install minikube using something like this:

minikube start --cni="calico" \
                      --container-runtime="docker" \
                      --driver=none \
                      --delete-on-failure=true

This works perfectly fine with both the 1.28 and 1.29+ minikube releases. However, while in 1.28 /etc/docker/daemon.json was not replaced during installation, in 1.29+ it is replaced by this function:
https://github.com/kubernetes/minikube/blob/v1.29.0/pkg/minikube/cruntime/docker.go#L524.
See for 1.28 here: https://github.com/kubernetes/minikube/blob/v1.28.0/pkg/minikube/cruntime/docker.go#L511

Of course, since we install minikube and docker ourselves and use the none driver, we have already set up docker in a way that supports such a configuration. :)

In fact, we already set the cgroup driver to systemd, and not just that: we also customize many other settings to make it work in the strictly regulated environment at our company, so the file looks more like this:

{
  "data-root": "/srv/docker",
  "storage-driver": "overlay2",
  "bip": "10.xx.xx.xx/24",
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}

Now we are stuck on minikube 1.28, because with 1.29 minikube replaces the daemon.json file. Even though minikube then starts properly, the result is not usable for our purposes: the docker data (images, etc.) is stored in /var/lib/docker, and neither bip nor the logging options can be adjusted, which makes it hard to work in our environment.

What would be your suggestion here? Should we maybe add a new option that disables this overwrite (e.g. minikube start --disable-daemon-overwrite=true, with the default being false, i.e. the current behavior)? Or maybe fix it differently: for example, check whether the driver is already systemd; if it is, do nothing, and if it is not, attempt to change it.

cc @prezha @afbjorklund

Thank you for any suggestions, or help.

Attach the log file

Minikube runs perfectly fine; it is just that the daemon.json options for docker are no longer configurable.

Operating System

Other

Driver

None

@dmpe dmpe changed the title changes between 1.28 and 1.29+ lead to decrease in docker/daemon.json options (aka daemon.json is being overwritten) changes between 1.28 and 1.29+ lead to decrease in docker/daemon.json options (aka daemon.json is being always overwritten) Apr 14, 2023
@afbjorklund
Collaborator

afbjorklund commented Apr 14, 2023

I was sure that config file handling was much older than that, but in general minikube should only change the config settings it actually cares about, not replace the existing config (I made some similar changes to cri-o, but not to docker).

Especially for external platforms, those not managed by minikube itself

@afbjorklund afbjorklund added the kind/bug Categorizes issue or PR as related to a bug. label Apr 14, 2023
@afbjorklund
Collaborator

afbjorklund commented Apr 14, 2023

There was a similar issue with "overlay2" recently, causing conflicts with exec flags on some platforms

@afbjorklund
Collaborator

afbjorklund commented Apr 14, 2023

It got broken here: e59d621 (when the forceSystemd was changed)

-// ForceSystemd forces the docker daemon to use systemd as cgroup manager
-func (r *Docker) forceSystemd() error {
-       klog.Infof("Forcing docker to use systemd as cgroup manager...")
+// setCGroup configures the docker daemon to use driver as cgroup manager
+// ref: https://docs.docker.com/engine/reference/commandline/dockerd/#options-for-the-runtime
+func (r *Docker) setCGroup(driver string) error {
+       if driver == constants.UnknownCgroupDriver {
+               return fmt.Errorf("unable to configure docker to use unknown cgroup driver")
+       }
+
+       klog.Infof("configuring docker to use %q as cgroup driver...", driver)

@afbjorklund afbjorklund added the co/runtime/docker Issues specific to a docker runtime label Apr 14, 2023
@dmpe
Contributor Author

dmpe commented Apr 14, 2023

Great, thanks for a quick look and confirmation.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@willsu

willsu commented Oct 31, 2023

Is there any known workaround to this issue? My minikube is failing to start because the docker service fails at startup. To fix the docker service startup, I need to change the 'storage-driver' to 'vfs', which I can test in the minikube container itself and it works (by modifying /etc/docker/daemon.json). Due to this issue, daemon.json is regenerated every time I run 'minikube start --docker-opt storage-driver=vfs', and the resulting storage driver is 'overlay2', which causes the startup failure. Since the '--docker-opt' values seem to be discarded, is there any way to set the docker daemon's 'storage-driver' in the minikube container?

@dmpe
Contributor Author

dmpe commented Oct 31, 2023

And I already thought I was alone with this issue. :) Indeed, I can also confirm that --docker-opt does not work as one would expect. Hopefully we get a reply one day.

@willsu

willsu commented Nov 1, 2023

Thanks for the follow-up @dmpe. I see that the fix has been lingering for quite a long time, but has not been merged. I hope that the minikube team will fix '--docker-opt' and the other options that affect the daemon.json file, because it is misleading to document them as options. Maybe the team can change the documentation to describe the current state more clearly, because these issues can be very hard to diagnose for folks who don't (want to) know the internals of minikube.

@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jan 20, 2024
@lunarfs

lunarfs commented Feb 16, 2024

Hi, I was also facing this issue. My workaround was to:

  1. Create /etc/docker/alternate-daemon.json with the config I like
  2. Create /etc/systemd/system/docker.service.d/docker.conf with the following content:

[Service]
ExecStart=
ExecStart=/usr/bin/dockerd --config-file=/etc/docker/alternate-daemon.json
TimeoutStartSec=5min

  3. Reload systemd: systemctl daemon-reload
  4. Restart docker: sudo systemctl restart docker

Now my docker is running with my preferred config.
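The steps above can be combined into a small script. This is a sketch of the workaround, not an official fix; the DOCKER_DIR/DROPIN_DIR defaults here point at a local demo directory so the script can be dry-run safely, and on a real host you would set them to /etc/docker and /etc/systemd/system/docker.service.d (running as root):

```shell
#!/bin/sh
# Sketch of the workaround above: point dockerd at an alternate config file
# via a systemd drop-in, so minikube's rewrite of /etc/docker/daemon.json
# no longer matters.
set -eu

# On a real host:
#   DOCKER_DIR=/etc/docker
#   DROPIN_DIR=/etc/systemd/system/docker.service.d
DOCKER_DIR="${DOCKER_DIR:-./demo/etc/docker}"
DROPIN_DIR="${DROPIN_DIR:-./demo/etc/systemd/system/docker.service.d}"

mkdir -p "$DOCKER_DIR" "$DROPIN_DIR"

# 1. The config docker should actually use (out of minikube's reach).
#    The contents here are just an example; put your own settings in.
cat > "$DOCKER_DIR/alternate-daemon.json" <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

# 2. Override ExecStart so dockerd reads the alternate file. The empty
#    ExecStart= line clears the original command before replacing it.
cat > "$DROPIN_DIR/docker.conf" <<'EOF'
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd --config-file=/etc/docker/alternate-daemon.json
TimeoutStartSec=5min
EOF

# 3. Apply on a real host (not run here, since this is a dry-run sketch):
echo "Files written. Now run: sudo systemctl daemon-reload && sudo systemctl restart docker"
```

This works because a drop-in's empty ExecStart= line resets the unit's start command, and dockerd ignores /etc/docker/daemon.json entirely when --config-file points elsewhere.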

@dmpe
Contributor Author

dmpe commented Feb 16, 2024

Thanks, that's creative. We, on the other hand, have taken the opportunity to migrate to k3s.
Granted, minikube and k3s are different projects with different objectives. However, we were already using our minikube cluster much more like a "stable" cluster, and k3s fits that purpose even better.

@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot closed this as not planned Won't fix, can't repro, duplicate, stale Mar 17, 2024
@k8s-ci-robot
Contributor

@numbpun: You can't reopen an issue/PR unless you authored it or you are a collaborator.

In response to this:

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Jun 6, 2024