
[BUG] After the k3d cluster is restarted, the Naked Pod on the server node is automatically removed. #1486

Open
braveantony opened this issue Aug 14, 2024 · 1 comment
Labels
bug Something isn't working

Comments


What did you do

  • How was the cluster created?

    • sudo k3d cluster create bobo --servers 1 --agents 2
  • What did you do afterwards?

# 1. Generate a simple Pod Yaml file
$ sudo kubectl run nginx --image=nginx --dry-run=client -o yaml > nginx.yaml

# 2. Add a NodeSelector to specify that the pod runs on the server node.
$ nano nginx.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  containers:
  - image: nginx
    name: nginx
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
  nodeSelector:                                        # add
    node-role.kubernetes.io/control-plane: "true"      # add
status: {}
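
# (Optional check, assuming the default k3s node labels: confirm the server
# node actually carries the control-plane label that the nodeSelector matches.)
$ sudo kubectl get node k3d-bobo-server-0 --show-labels | grep control-plane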

# 3. Run Pod
$ sudo kubectl apply -f nginx.yaml

# 4. Check Pod is running on server node
$ sudo kubectl get pods -o wide
NAME    READY   STATUS    RESTARTS   AGE   IP           NODE                NOMINATED NODE   READINESS GATES
nginx   1/1     Running   0          4s    10.42.2.12   k3d-bobo-server-0   <none>           <none>

# 5. Stop K3d Cluster
$ sudo k3d cluster stop bobo

# 6. Start K3d Cluster
$ sudo k3d cluster start bobo

# 7. Check Pod Status
$ sudo kubectl get pods -o wide
No resources found in default namespace.

# 8. Check Events
$ sudo kubectl get events --field-selector involvedObject.name=nginx --sort-by='{.metadata.creationTimestamp}'
LAST SEEN   TYPE     REASON      OBJECT      MESSAGE
119s        Normal   Scheduled   pod/nginx   Successfully assigned default/nginx to k3d-bobo-server-0
119s        Normal   Pulling     pod/nginx   Pulling image "nginx"
105s        Normal   Pulled      pod/nginx   Successfully pulled image "nginx" in 13.736s (13.736s including waiting)
105s        Normal   Created     pod/nginx   Created container nginx
105s        Normal   Started     pod/nginx   Started container nginx
86s         Normal   Killing     pod/nginx   Stopping container nginx
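
# 9. (A possible follow-up, assuming k3d's default container naming and the
#    docker-compatible CLI shown in runtime-info below: the server container's
#    k3s logs may record why the Pod was removed on restart.)
$ sudo docker logs k3d-bobo-server-0 2>&1 | grep -i nginx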

What did you expect to happen

The naked Pod should still be running on the server node after the k3d cluster is restarted. I tested running a naked Pod on an agent node with the exact same steps, and that Pod was still running after the cluster restarted.
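
A possible workaround (a sketch, not a fix for the underlying issue): manage the Pod through a Deployment so the ReplicaSet recreates it if a restart removes it. This assumes the same nodeSelector from step 2 is copied under spec.template.spec:

$ sudo kubectl create deployment nginx --image=nginx --dry-run=client -o yaml > nginx-deploy.yaml
# edit nginx-deploy.yaml: add the nodeSelector from step 2 under spec.template.spec
$ sudo kubectl apply -f nginx-deploy.yaml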

Which OS & Architecture

$ sudo k3d runtime-info
arch: amd64
cgroupdriver: cgroupfs
cgroupversion: "2"
endpoint: /var/run/docker.sock
filesystem: extfs
infoname: rch155
name: docker
os: alpine
ostype: linux
version: 5.0.3

$ cat /etc/os-release
NAME="Alpine Linux"
ID=alpine
VERSION_ID=3.20.2
PRETTY_NAME="Alpine Linux v3.20"
HOME_URL="https://alpinelinux.org/"
BUG_REPORT_URL="https://gitlab.alpinelinux.org/alpine/aports/-/issues"

Which version of k3d

$ k3d version
k3d version v5.7.2
k3s version v1.29.6-k3s2 (default)

Which version of podman

$ podman version
Client:       Podman Engine
Version:      5.0.3
API Version:  5.0.3
Go Version:   go1.22.5
Built:        Mon Jul  8 01:34:20 2024
OS/Arch:      linux/amd64

$ podman info
host:
  arch: amd64
  buildahVersion: 1.35.4
  cgroupControllers:
  - cpuset
  - cpu
  - io
  - memory
  - hugetlb
  - pids
  cgroupManager: cgroupfs
  cgroupVersion: v2
  conmon:
    package: conmon-2.1.12-r0
    path: /usr/bin/conmon
    version: 'conmon version 2.1.12, commit: unknown'
  cpuUtilization:
    idlePercent: 89.84
    systemPercent: 4.95
    userPercent: 5.21
  cpus: 4
  databaseBackend: sqlite
  distribution:
    distribution: alpine
    version: 3.20.2
  eventLogger: file
  freeLocks: 2048
  hostname: rch155
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 500000
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 500000
      size: 65536
  kernel: 6.6.45-0-lts
  linkmode: dynamic
  logDriver: k8s-file
  memFree: 2194477056
  memTotal: 12515233792
  networkBackend: netavark
  networkBackendInfo:
    backend: netavark
    dns:
      package: aardvark-dns-1.10.0-r0
      path: /usr/libexec/podman/aardvark-dns
      version: aardvark-dns 1.10.0
    package: netavark-1.10.3-r0
    path: /usr/libexec/podman/netavark
    version: netavark 1.10.3
  ociRuntime:
    name: crun
    package: crun-1.15-r0
    path: /usr/bin/crun
    version: |-
      crun version 1.15
      commit: e6eacaf4034e84185fd8780ac9262bbf57082278
      rundir: /tmp/storage-run-1000/crun
      spec: 1.0.0
      +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +YAJL
  os: linux
  pasta:
    executable: /usr/bin/pasta
    package: passt-2024.06.07-r0
    version: |
      pasta unknown version
      Copyright Red Hat
      GNU General Public License, version 2 or later
        <https://www.gnu.org/licenses/old-licenses/gpl-2.0.html>
      This is free software: you are free to change and redistribute it.
      There is NO WARRANTY, to the extent permitted by law.
  remoteSocket:
    exists: false
    path: /tmp/storage-run-1000/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /etc/containers/seccomp.json
    selinuxEnabled: false
  serviceIsRemote: false
  slirp4netns:
    executable: ""
    package: ""
    version: ""
  swapFree: 4294963200
  swapTotal: 4294963200
  uptime: 2h 2m 0.00s (Approximately 0.08 days)
  variant: ""
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  search:
  - docker.io
store:
  configFile: /home/bigred/.config/containers/storage.conf
  containerStore:
    number: 0
    paused: 0
    running: 0
    stopped: 0
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /home/bigred/.local/share/containers/storage
  graphRootAllocated: 1051196825600
  graphRootUsed: 13800706048
  graphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Supports shifting: "false"
    Supports volatile: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 0
  runRoot: /tmp/storage-run-1000/containers
  transientStore: false
  volumePath: /home/bigred/.local/share/containers/storage/volumes
version:
  APIVersion: 5.0.3
  Built: 1720373660
  BuiltTime: Mon Jul  8 01:34:20 2024
  GitCommit: ""
  GoVersion: go1.22.5
  Os: linux
  OsArch: linux/amd64
  Version: 5.0.3

braveantony commented Aug 14, 2024

I also found that if I shut down the Podman host machine directly and start the k3d cluster after the host boots back up, the naked Pod on the server node is not automatically deleted.
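
To narrow this down, one experiment (a sketch, assuming k3d's default container names and the docker-compatible CLI shown above) is to stop and start the node containers directly, which approximates the host-shutdown path that left the Pod intact, and compare the result with k3d cluster stop/start:

$ sudo docker stop k3d-bobo-server-0 k3d-bobo-agent-0 k3d-bobo-agent-1
$ sudo docker start k3d-bobo-server-0 k3d-bobo-agent-0 k3d-bobo-agent-1
$ sudo kubectl get pods -o wide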
