
exec container process /wasi_example_main.wasm: Exec format error #27

Open
Tej-Singh-Rana opened this issue Sep 25, 2023 · 17 comments

@Tej-Singh-Rana

Hi Team,

I was playing with the kwasm operator. I followed the steps in the official docs but encountered an error. Can you please assist me with this?
A few days ago, it was working fine. Today, I checked again and encountered an error.

Environment Details:

Kubernetes Version: v1.28
Environment Type: Sandbox
CRI: containerd 1.6.6

Working:

image (216)

Not Working:

image (217)

Error Output:

controlplane ~ ➜  kubectl logs pod/wasm-test-gmzpv
{"msg":"exec container process `/wasi_example_main.wasm`: Exec format error","level":"error","time":"2023-09-25T17:42:14.765200Z"}
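For context (an editorial side note, not from the thread): "Exec format error" is what the kernel returns when exec() is handed a file that is not a native executable. A Wasm module begins with the magic bytes `\0asm` rather than ELF's `\x7fELF`, so this error typically means the container process was started by runc instead of a Wasm shim. A quick illustration, using a throwaway file:

```shell
# Write the 8-byte Wasm header ("\0asm" magic plus version 1) to a scratch file.
printf '\0asm\001\000\000\000' > /tmp/example.wasm

# Dump the first four bytes: this is the Wasm magic, not an ELF header,
# so exec()ing such a file directly fails with "Exec format error".
head -c 4 /tmp/example.wasm | od -An -c
```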

Earlier, crun was managed by the kwasm operator; now I have to install it manually. Was anything updated recently? Can you share a URL where I can read about this?

Thanks & Regards,

0xE282B0 added a commit to KWasm/kwasm.github.io that referenced this issue Sep 26, 2023
0xE282B0 added a commit to KWasm/kwasm-node-installer that referenced this issue Sep 26, 2023
@0xE282B0
Member

Hi @Tej-Singh-Rana,
I'm sorry for the inconvenience. The installer has been updated to 0.3.0. To streamline the installation process, crun has been removed in favor of containerd-shim-wasmedge. Also, all other runtimes can now work with sidecars. All examples that used crun before should still work if you change the RuntimeClass to wasmedge.

The website had not been updated yet; I fixed it a minute ago so the examples work again.

Thanks for your detailed bug reports! If you find anything that does not work with the wasmedge RuntimeClass, or if you explicitly need crun as a runtime (the shims use libcontainer from the youki project), please let me know!
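A minimal sketch of the change (editorial note: image and names here are illustrative placeholders, not the exact example from the docs) — only the `runtimeClassName` needs to change:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: wasm-test
spec:
  template:
    spec:
      runtimeClassName: wasmedge  # was: crun
      containers:
        - name: wasm-test
          image: ghcr.io/example/wasi-example:latest  # placeholder image
          command: ["/wasi_example_main.wasm"]
      restartPolicy: Never
```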

@Tej-Singh-Rana
Author

Hi @0xE282B0 ,
Thanks for your reply.

I tried the above steps, but it is still failing for me.

image (218)

Updated the installer:

image (219)

Here is the config.toml content.

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.lunatic]
    runtime_type = "/opt/kwasm/bin/containerd-shim-lunatic-v1"
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.slight]
    runtime_type = "/opt/kwasm/bin/containerd-shim-slight-v1"
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.spin]
    runtime_type = "/opt/kwasm/bin/containerd-shim-spin-v1"
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.wws]
    runtime_type = "/opt/kwasm/bin/containerd-shim-wws-v1"
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.wasmedge]
    runtime_type = "/opt/kwasm/bin/containerd-shim-wasmedge-v1"
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.wasmer]
    runtime_type = "/opt/kwasm/bin/containerd-shim-wasmer-v1"
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.wasmtime]
    runtime_type = "/opt/kwasm/bin/containerd-shim-wasmtime-v1"
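For reference, each of these containerd runtime entries is selected from Kubernetes through a RuntimeClass whose handler matches the entry name — a sketch for the wasmedge entry:

```yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: wasmedge
handler: wasmedge  # must match the runtimes.wasmedge key in config.toml
```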

Binaries:

image (220)

Thanks & Regards,

@0xE282B0
Member

Interesting, everything looks as it should:

  • Binaries at /opt/kwasm/bin
  • Containerd config.toml entries
  • RuntimeClass wasmedge
  • According to the error message, the pod is configured with runtimeClassName: wasmedge

Which Kubernetes distribution do you use? I tested some here: KWasm/kwasm-node-installer#43
Did containerd restart correctly after the config change?

@Tej-Singh-Rana
Author

I deployed the K8s cluster using the kubeadm tool. Yes, I restarted the containerd service.

@0xE282B0
Member

Do you have a default config.toml, or is the plugin configuration the only content in the file?

@Tej-Singh-Rana
Author

> is the plugin configuration the only content in the conf file?

Only this one.

@0xE282B0
Member

Could you generate a default config and run the installer again?
containerd config default > /etc/containerd/config.toml
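One thing worth noting (an editorial assumption, not confirmed in the thread): if config.toml contains only the runtime entries, containerd may parse it as a legacy v1 config, in which case the CRI `runtimes` tables would not take effect. The generated default includes the version marker near the top:

```toml
version = 2  # selects containerd's v2 config format
```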

@Tej-Singh-Rana
Author

Sure, I will try this and let you know.

Regards,

@Tej-Singh-Rana
Author

Tej-Singh-Rana commented Sep 26, 2023

  1. Ran the following command:
containerd config default > /etc/containerd/config.toml
  2. Restarted the containerd service.

Content of the config.toml file:

disabled_plugins = []
imports = []
oom_score = 0
plugin_dir = ""
required_plugins = []
root = "/var/lib/containerd"
state = "/run/containerd"
temp = ""
version = 2

[cgroup]
  path = ""

[debug]
  address = ""
  format = ""
  gid = 0
  level = ""
  uid = 0

[grpc]
  address = "/run/containerd/containerd.sock"
  gid = 0
  max_recv_message_size = 16777216
  max_send_message_size = 16777216
  tcp_address = ""
  tcp_tls_ca = ""
  tcp_tls_cert = ""
  tcp_tls_key = ""
  uid = 0

[metrics]
  address = ""
  grpc_histogram = false

[plugins]

  [plugins."io.containerd.gc.v1.scheduler"]
    deletion_threshold = 0
    mutation_threshold = 100
    pause_threshold = 0.02
    schedule_delay = "0s"
    startup_delay = "100ms"

  [plugins."io.containerd.grpc.v1.cri"]
    device_ownership_from_security_context = false
    disable_apparmor = false
    disable_cgroup = false
    disable_hugetlb_controller = true
    disable_proc_mount = false
    disable_tcp_service = true
    enable_selinux = false
    enable_tls_streaming = false
    enable_unprivileged_icmp = false
    enable_unprivileged_ports = false
    ignore_image_defined_volumes = false
    max_concurrent_downloads = 3
    max_container_log_line_size = 16384
    netns_mounts_under_state_dir = false
    restrict_oom_score_adj = false
    sandbox_image = "k8s.gcr.io/pause:3.6"
    selinux_category_range = 1024
    stats_collect_period = 10
    stream_idle_timeout = "4h0m0s"
    stream_server_address = "127.0.0.1"
    stream_server_port = "0"
    systemd_cgroup = false
    tolerate_missing_hugetlb_controller = true
    unset_seccomp_profile = ""

    [plugins."io.containerd.grpc.v1.cri".cni]
      bin_dir = "/opt/cni/bin"
      conf_dir = "/etc/cni/net.d"
      conf_template = ""
      ip_pref = ""
      max_conf_num = 1

    [plugins."io.containerd.grpc.v1.cri".containerd]
      default_runtime_name = "runc"
      disable_snapshot_annotations = true
      discard_unpacked_layers = false
      ignore_rdt_not_enabled_errors = false
      no_pivot = false
      snapshotter = "overlayfs"

      [plugins."io.containerd.grpc.v1.cri".containerd.default_runtime]
        base_runtime_spec = ""
        cni_conf_dir = ""
        cni_max_conf_num = 0
        container_annotations = []
        pod_annotations = []
        privileged_without_host_devices = false
        runtime_engine = ""
        runtime_path = ""
        runtime_root = ""
        runtime_type = ""

        [plugins."io.containerd.grpc.v1.cri".containerd.default_runtime.options]

      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes]

        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
          base_runtime_spec = ""
          cni_conf_dir = ""
          cni_max_conf_num = 0
          container_annotations = []
          pod_annotations = []
          privileged_without_host_devices = false
          runtime_engine = ""
          runtime_path = ""
          runtime_root = ""
          runtime_type = "io.containerd.runc.v2"

          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            BinaryName = ""
            CriuImagePath = ""
            CriuPath = ""
            CriuWorkPath = ""
            IoGid = 0
            IoUid = 0
            NoNewKeyring = false
            NoPivotRoot = false
            Root = ""
            ShimCgroup = ""
            SystemdCgroup = false

      [plugins."io.containerd.grpc.v1.cri".containerd.untrusted_workload_runtime]
        base_runtime_spec = ""
        cni_conf_dir = ""
        cni_max_conf_num = 0
        container_annotations = []
        pod_annotations = []
        privileged_without_host_devices = false
        runtime_engine = ""
        runtime_path = ""
        runtime_root = ""
        runtime_type = ""

        [plugins."io.containerd.grpc.v1.cri".containerd.untrusted_workload_runtime.options]

    [plugins."io.containerd.grpc.v1.cri".image_decryption]
      key_model = "node"

    [plugins."io.containerd.grpc.v1.cri".registry]
      config_path = ""

      [plugins."io.containerd.grpc.v1.cri".registry.auths]

      [plugins."io.containerd.grpc.v1.cri".registry.configs]

      [plugins."io.containerd.grpc.v1.cri".registry.headers]

      [plugins."io.containerd.grpc.v1.cri".registry.mirrors]

    [plugins."io.containerd.grpc.v1.cri".x509_key_pair_streaming]
      tls_cert_file = ""
      tls_key_file = ""

  [plugins."io.containerd.internal.v1.opt"]
    path = "/opt/containerd"

  [plugins."io.containerd.internal.v1.restart"]
    interval = "10s"

  [plugins."io.containerd.internal.v1.tracing"]
    sampling_ratio = 1.0
    service_name = "containerd"

  [plugins."io.containerd.metadata.v1.bolt"]
    content_sharing_policy = "shared"

  [plugins."io.containerd.monitor.v1.cgroups"]
    no_prometheus = false

  [plugins."io.containerd.runtime.v1.linux"]
    no_shim = false
    runtime = "runc"
    runtime_root = ""
    shim = "containerd-shim"
    shim_debug = false

  [plugins."io.containerd.runtime.v2.task"]
    platforms = ["linux/amd64"]
    sched_core = false

  [plugins."io.containerd.service.v1.diff-service"]
    default = ["walking"]

  [plugins."io.containerd.service.v1.tasks-service"]
    rdt_config_file = ""

  [plugins."io.containerd.snapshotter.v1.aufs"]
    root_path = ""

  [plugins."io.containerd.snapshotter.v1.btrfs"]
    root_path = ""

  [plugins."io.containerd.snapshotter.v1.devmapper"]
    async_remove = false
    base_image_size = ""
    discard_blocks = false
    fs_options = ""
    fs_type = ""
    pool_name = ""
    root_path = ""

  [plugins."io.containerd.snapshotter.v1.native"]
    root_path = ""

  [plugins."io.containerd.snapshotter.v1.overlayfs"]
    root_path = ""
    upperdir_label = false

  [plugins."io.containerd.snapshotter.v1.zfs"]
    root_path = ""

  [plugins."io.containerd.tracing.processor.v1.otlp"]
    endpoint = ""
    insecure = false
    protocol = ""

[proxy_plugins]

[stream_processors]

  [stream_processors."io.containerd.ocicrypt.decoder.v1.tar"]
    accepts = ["application/vnd.oci.image.layer.v1.tar+encrypted"]
    args = ["--decryption-keys-path", "/etc/containerd/ocicrypt/keys"]
    env = ["OCICRYPT_KEYPROVIDER_CONFIG=/etc/containerd/ocicrypt/ocicrypt_keyprovider.conf"]
    path = "ctd-decoder"
    returns = "application/vnd.oci.image.layer.v1.tar"

  [stream_processors."io.containerd.ocicrypt.decoder.v1.tar.gzip"]
    accepts = ["application/vnd.oci.image.layer.v1.tar+gzip+encrypted"]
    args = ["--decryption-keys-path", "/etc/containerd/ocicrypt/keys"]
    env = ["OCICRYPT_KEYPROVIDER_CONFIG=/etc/containerd/ocicrypt/ocicrypt_keyprovider.conf"]
    path = "ctd-decoder"
    returns = "application/vnd.oci.image.layer.v1.tar+gzip"

[timeouts]
  "io.containerd.timeout.bolt.open" = "0s"
  "io.containerd.timeout.shim.cleanup" = "5s"
  "io.containerd.timeout.shim.load" = "5s"
  "io.containerd.timeout.shim.shutdown" = "3s"
  "io.containerd.timeout.task.state" = "2s"

[ttrpc]
  address = ""
  gid = 0
  uid = 0
  3. Installed the operator from the link below:

https://kwasm.sh/quickstart/#installation

  4. Updated the installer to tag v0.3.0:

https://github.com/KWasm/kwasm-node-installer/releases/tag/v0.3.0

  5. Restarted the containerd service.

Content of the config.toml file (identical to the default config shown above, with the following entries appended by the installer):

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.lunatic]
    runtime_type = "/opt/kwasm/bin/containerd-shim-lunatic-v1"
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.slight]
    runtime_type = "/opt/kwasm/bin/containerd-shim-slight-v1"
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.spin]
    runtime_type = "/opt/kwasm/bin/containerd-shim-spin-v1"
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.wws]
    runtime_type = "/opt/kwasm/bin/containerd-shim-wws-v1"
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.wasmedge]
    runtime_type = "/opt/kwasm/bin/containerd-shim-wasmedge-v1"
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.wasmer]
    runtime_type = "/opt/kwasm/bin/containerd-shim-wasmer-v1"
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.wasmtime]
    runtime_type = "/opt/kwasm/bin/containerd-shim-wasmtime-v1"
  6. Ran the sample tests:

https://kwasm.sh/quickstart/#wasm-runtime-configuration

  7. This time, I got a different error.
image (221)

@0xE282B0
Member

That's odd!
The good news is that the error message comes from the shim, but it's a very unspecific error message.
Is the Spin container working, or any of the others?

You can test that with these manifests:

kubectl apply -f https://github.com/KWasm/kwasm-node-installer/releases/download/v0.3.0/deployment.yaml
kubectl apply -f https://github.com/KWasm/kwasm-node-installer/releases/download/v0.3.0/runtimeclass.yaml

@Tej-Singh-Rana
Author

I will check and let you know.

@Tej-Singh-Rana
Author

I tried it again and deployed the above manifest files.

Runtimeclasses:

controlplane ~ ➜  kubectl get runtimeclasses
NAME       HANDLER    AGE
lunatic    lunatic    4m20s
slight     slight     4m20s
spin       spin       4m20s
wasmedge   wasmedge   4m20s
wasmer     wasmer     4m20s
wasmtime   wasmtime   4m20s
wws        wws        4m20s

Deployments:

controlplane ~ ➜  kubectl get po -A
NAMESPACE      NAME                                   READY   STATUS              RESTARTS      AGE
default        lunatic-demo-6475554875-7vxml          0/2     RunContainerError   6 (9s ago)    79s
default        wasm-slight-6467bcc5bc-w5h5d           0/2     CrashLoopBackOff    6 (13s ago)   78s
default        wasm-spin-74c4cf5c77-lf25l             0/2     CrashLoopBackOff    6 (14s ago)   80s
default        wasm-wws-888f6bc4b-22kwz               0/2     RunContainerError   6 (14s ago)   79s
default        wasmedge-demo-5ff758d79-gb2xx          0/2     CrashLoopBackOff    6 (18s ago)   80s
default        wasmer-demo-857f947cb7-tm9m9           0/2     CrashLoopBackOff    6 (16s ago)   80s
default        wasmtime-demo-56c78ddd95-grm2v         0/2     CrashLoopBackOff    6 (15s ago)   79s

Inspected the pods:

Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  5m11s                  default-scheduler  Successfully assigned default/wasm-wws-888f6bc4b-22kwz to controlplane
  Warning  Failed     5m5s                   kubelet            Error: failed to start containerd task "wws-hello": Others("failed to create container: failed to receive. \"waiting for intermediate process\". BrokenChannel"): unknown
  Normal   Pulled     5m5s                   kubelet            Successfully pulled image "ghcr.io/deislabs/containerd-wasm-shims/examples/wws-js-hello:latest" in 902ms (4.077s including waiting)
  Normal   Pulled     4m53s                  kubelet            Successfully pulled image "redis" in 314ms (11.079s including waiting)
  Normal   Pulled     4m52s                  kubelet            Successfully pulled image "ghcr.io/deislabs/containerd-wasm-shims/examples/wws-js-hello:latest" in 226ms (1.445s including waiting)
  Normal   Created    4m49s (x2 over 4m53s)  kubelet            Created container redis
  Warning  Failed     4m49s (x2 over 4m53s)  kubelet            Error: failed to create containerd task: failed to create shim task: Other: mount process exit unexpectedly, exit code: ENOENT: No such file or directory: unknown
  Normal   Pulled     4m49s                  kubelet            Successfully pulled image "redis" in 331ms (2.28s including waiting)
  Warning  BackOff    4m48s (x2 over 4m49s)  kubelet            Back-off restarting failed container redis in pod wasm-wws-888f6bc4b-22kwz_default(fa4e4906-cd03-4a61-aa4e-6f701d212849)
  Normal   Pulling    4m36s (x3 over 5m5s)   kubelet            Pulling image "redis"
  Normal   Created    4m36s (x3 over 5m5s)   kubelet            Created container wws-hello
  Normal   Pulling    4m36s (x3 over 5m9s)   kubelet            Pulling image "ghcr.io/deislabs/containerd-wasm-shims/examples/wws-js-hello:latest"
  Warning  Failed     4m36s (x2 over 4m52s)  kubelet            Error: failed to create containerd task: failed to create shim task: Other: mount process exit unexpectedly, exit code: ENOENT: No such file or directory: unknown
  Normal   Pulled     4m36s                  kubelet            Successfully pulled image "ghcr.io/deislabs/containerd-wasm-shims/examples/wws-js-hello:latest" in 256ms (256ms including waiting)
  Warning  BackOff    3s (x25 over 4m49s)    kubelet            Back-off restarting failed container wws-hello in pod wasm-wws-888f6bc4b-22kwz_default(fa4e4906-cd03-4a61-aa4e-6f701d212849)

@Tej-Singh-Rana
Author

> Is the Spin container working or one of the others?

Sorry, I didn't get your point. Which containers?

@0xE282B0
Member

Spin is one of the test containers. I wanted to see whether it is a problem with the wasmedge shim or whether all shims are affected. Could you please describe how you set up your cluster, so that I can reproduce the issue?

  • Machine type (arm64/amd64) and RAM size
  • Linux distribution (uname -a output)
  • kubeadm version

@Tej-Singh-Rana
Author

Machine type (arm64/amd64) and RAM size - amd64

Linux distribution (uname -a output) - Ubuntu 20.04.6 LTS (Focal Fossa)

controlplane ~ ➜  uname -a
Linux controlplane 5.4.0-1106-gcp #115~18.04.1-Ubuntu SMP Mon May 22 20:46:39 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux

kubeadm version - v1.28

@0xE282B0
Member

Hi @Tej-Singh-Rana,

I tried to replicate your setup but could not reproduce your issue.

Here is what I did:
I got a VM from civo.com. I chose a Medium instance (2 CPU amd64 / 4 GB RAM / 50 GB Disk / Ubuntu 20.04) and set up a single-node cluster following the official docs: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/

Detailed steps

Install prerequisites

apt-get update
# apt-transport-https may be a dummy package; if so, you can skip that package
apt-get install -y apt-transport-https ca-certificates curl
mkdir /etc/apt/keyrings/
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
# This overwrites any existing configuration in /etc/apt/sources.list.d/kubernetes.list
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /' | tee /etc/apt/sources.list.d/kubernetes.list
apt-get update
apt-get install -y kubelet kubeadm kubectl containerd
apt-mark hold kubelet kubeadm kubectl containerd

# Install helm
curl -L https://get.helm.sh/helm-v3.13.0-rc.1-linux-amd64.tar.gz | tar xzf -
mv linux-amd64/helm /usr/local/bin/

Set up cluster

modprobe br_netfilter
echo 1 > /proc/sys/net/ipv4/ip_forward
kubeadm init --pod-network-cidr 10.244.0.0/16

export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
kubectl taint node kwasm-test-85a2-49a40b node-role.kubernetes.io/control-plane:NoSchedule-

# Prepare config.toml
mkdir -p /etc/containerd/
containerd config default > /etc/containerd/config.toml

Install and test KWasm operator

# install latest shims with kwasm installer
helm repo add kwasm http://kwasm.sh/kwasm-operator/ --force-update
helm repo update
helm upgrade --install -n kwasm --create-namespace kwasm-operator kwasm/kwasm-operator \
  --set kwasmOperator.autoProvision="true" \
  --set kwasmOperator.installerImage="ghcr.io/kwasm/kwasm-node-installer:v0.3.0"

# deploy sidecar smoke tests
kubectl apply -f https://github.com/KWasm/kwasm-node-installer/releases/download/v0.3.0/deployment.yaml
kubectl apply -f https://github.com/KWasm/kwasm-node-installer/releases/download/v0.3.0/runtimeclass.yaml

Test results

kubectl get all
NAME                                 READY   STATUS    RESTARTS      AGE
pod/lunatic-demo-6475554875-whcvs    2/2     Running   0             15m
pod/wasm-slight-6467bcc5bc-j9thq     2/2     Running   1 (11m ago)   15m
pod/wasm-spin-74c4cf5c77-xz6mn       2/2     Running   0             15m
pod/wasm-wws-888f6bc4b-9jxqc         2/2     Running   0             15m
pod/wasmedge-demo-5ff758d79-qqlz6    2/2     Running   0             15m
pod/wasmer-demo-857f947cb7-v9df7     2/2     Running   0             15m
pod/wasmtime-demo-56c78ddd95-5mnxn   2/2     Running   0             15m

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   33m

NAME                            READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/lunatic-demo    1/1     1            1           15m
deployment.apps/wasm-slight     1/1     1            1           15m
deployment.apps/wasm-spin       1/1     1            1           15m
deployment.apps/wasm-wws        1/1     1            1           15m
deployment.apps/wasmedge-demo   1/1     1            1           15m
deployment.apps/wasmer-demo     1/1     1            1           15m
deployment.apps/wasmtime-demo   1/1     1            1           15m

NAME                                       DESIRED   CURRENT   READY   AGE
replicaset.apps/lunatic-demo-6475554875    1         1         1       15m
replicaset.apps/wasm-slight-6467bcc5bc     1         1         1       15m
replicaset.apps/wasm-spin-74c4cf5c77       1         1         1       15m
replicaset.apps/wasm-wws-888f6bc4b         1         1         1       15m
replicaset.apps/wasmedge-demo-5ff758d79    1         1         1       15m
replicaset.apps/wasmer-demo-857f947cb7     1         1         1       15m
replicaset.apps/wasmtime-demo-56c78ddd95   1         1         1       15m
> kubectl logs deployment/wasmedge-demo

This is a song that never ends.
Yes, it goes on and on my friends.
Some people started singing it not knowing what it was,
So they'll continue singing it forever just because...

This is a song that never ends.
Yes, it goes on and on my friends.
Some people started singing it not knowing what it was,
So they'll continue singing it forever just because...

This is a song that never ends.
Yes, it goes on and on my friends.
Some people started singing it not knowing what it was,
So they'll continue singing it forever just because...

This is a song that never ends.
Yes, it goes on and on my friends.
Some people started singing it not knowing what it was,
So they'll continue singing it forever just because...

KWasm.sh quickstart

I also tried the example from KWasm.sh, without issues:

> kubectl apply -f https://raw.githubusercontent.com/KWasm/kwasm-node-installer/main/example/test-job.yaml
runtimeclass.node.k8s.io/wasmedge unchanged
job.batch/wasm-test created

> kubectl logs job/wasm-test
Random number: 595037507
Random bytes: [45, 107, 189, 139, 130, 108, 35, 23, 246, 32, 34, 148, 196, 243, 92, 219, 16, 22, 100, 119, 30, 119, 26, 147, 228, 206, 237, 72, 3, 146, 78, 145, 87, 168, 48, 105, 104, 42, 241, 228, 25, 3, 145, 238, 57, 2, 241, 70, 249, 5, 31, 3, 171, 46, 224, 153, 86, 14, 136, 225, 103, 225, 38, 177, 113, 177, 29, 50, 222, 210, 98, 211, 59, 39, 186, 157, 178, 32, 99, 189, 197, 133, 11, 205, 241, 73, 195, 88, 229, 0, 233, 109, 106, 250, 39, 210, 17, 112, 164, 67, 51, 213, 86, 59, 17, 227, 72, 1, 109, 126, 228, 90, 183, 159, 83, 136, 203, 30, 210, 47, 9, 244, 227, 134, 8, 128, 78, 0]
Printed from wasi: This is from a main function
This is from a main function
The env vars are as follows.
KUBERNETES_SERVICE_PORT_HTTPS: 443
PATH: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
KUBERNETES_PORT_443_TCP_ADDR: 10.96.0.1
KUBERNETES_PORT_443_TCP: tcp://10.96.0.1:443
KUBERNETES_PORT_443_TCP_PORT: 443
KUBERNETES_SERVICE_HOST: 10.96.0.1
KUBERNETES_PORT: tcp://10.96.0.1:443
HOSTNAME: wasm-test-lgdk9
KUBERNETES_SERVICE_PORT: 443
KUBERNETES_PORT_443_TCP_PROTO: tcp
The args are as follows.
/wasi_example_main.wasm
File content is This is in a file

Summary

I used a CIVO Medium VM instance (2 CPU amd64 / 4 GB RAM / 50 GB Disk / Ubuntu 20.04) and here are the versions of the tools I used:

> uname -a
Linux kwasm-test-85a2-49a40b 5.4.0-132-generic #148-Ubuntu SMP Mon Oct 17 16:02:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
> containerd --version
containerd github.com/containerd/containerd 1.7.2
> kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"28", GitVersion:"v1.28.2", GitCommit:"89a4ea3e1e4ddd7f7572286090359983e0387b2f", GitTreeState:"clean", BuildDate:"2023-09-13T09:34:32Z", GoVersion:"go1.20.8", Compiler:"gc", Platform:"linux/amd64"}
> helm version
version.BuildInfo{Version:"v3.13.0-rc.1", GitCommit:"825e86f6a7a38cef1112bfa606e4127a706749b1", GitTreeState:"clean", GoVersion:"go1.20.8"}

I have not been able to reproduce your problem. Is there anything I did that is different from your setup?

@Tej-Singh-Rana
Author

Thanks for your time and efforts. I will dig more into it.

Regards,
