Troubleshooting Talos Linux Integration: Challenges with k3d Image Pulling #57

Open
byteshiva opened this issue Apr 2, 2024 · 16 comments

@byteshiva

byteshiva commented Apr 2, 2024

Description:
I would like to request the use of Talos Linux instead of k3d for the deployment. Currently, I am facing issues when trying to pull k3d images. Here are the details of the problem:

I would like the Deployment and Service below to work on Talos Linux:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: wasm-cluster
spec:
  replicas: 2
  selector:
    matchLabels:
      app: wasm-cluster
  template:
    metadata:
      labels:
        app: wasm-cluster
    spec:
      containers:
      - name: wasm-cluster
        image: ghcr.io/spinkube/containerd-shim-spin/k3d:v0.13.1
        ports:
        - containerPort: 8081
---
apiVersion: v1
kind: Service
metadata:
  name: wasm-cluster
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 8081
  selector:
    app: wasm-cluster

I followed the instructions provided in https://www.spinkube.dev/docs/spin-operator/installation/installing-with-helm/.

Please assist in resolving this issue and provide guidance on how to use Talos Linux for the deployment.

PS: I plan to use Talos Linux instead of k3d.
https://www.spinkube.dev/docs/spin-operator/quickstart/

Error:
The k3d example fails in my NixOS setup. It gets stuck at the point shown below:

k3d cluster create wasm-cluster   --image ghcr.io/spinkube/containerd-shim-spin/k3d:v0.13.1   --port "8081:80@loadbalancer"  
INFO[0000] portmapping '8081:80' targets the loadbalancer: defaulting to [servers:*:proxy agents:*:proxy] 
INFO[0000] Prep: Network                                
INFO[0000] Created network 'k3d-wasm-cluster'           
INFO[0000] Created image volume k3d-wasm-cluster-images 
INFO[0000] Starting new tools node...                   
INFO[0000] Starting Node 'k3d-wasm-cluster-tools'       
INFO[0001] Creating node 'k3d-wasm-cluster-server-0'    
INFO[0001] Creating LoadBalancer 'k3d-wasm-cluster-serverlb' 
INFO[0001] Using the k3d-tools node to gather environment information 
INFO[0001] HostIP: using network gateway 172.23.0.1 address 
INFO[0001] Starting cluster 'wasm-cluster'              
INFO[0001] Starting servers...                          
INFO[0001] Starting Node 'k3d-wasm-cluster-server-0'    
@rajatjindal
Contributor

Hi @byteshiva,

Could you please re-run the cluster create command with the --verbose flag to see what it is waiting for?

@byteshiva
Author

byteshiva commented Apr 2, 2024

k3d cluster create wasm-cluster --image ghcr.io/spinkube/containerd-shim-spin/k3d:v0.13.1 --port "8081:80@loadbalancer"

Sure, here are the detailed logs after applying the --verbose flag:

Detailed logs

k3d cluster create wasm-cluster   --image ghcr.io/spinkube/containerd-shim-spin/k3d:v0.13.1   --port "8081:80@loadbalancer"   --verbose 
 
DEBU[0000] DOCKER_SOCK=/var/run/docker.sock             
DEBU[0000] Runtime Info:
&{Name:docker Endpoint:/var/run/docker.sock Version:20.10.25 OSType:linux OS:NixOS 23.05 (Stoat) Arch:x86_64 CgroupVersion:2 CgroupDriver:systemd Filesystem:extfs InfoName:nixos} 
DEBU[0000] Additional CLI Configuration:
cli:
  api-port: ""
  env: []
  k3s-node-labels: []
  k3sargs: []
  ports:
  - 8081:80@loadbalancer
  registries:
    create: ""
  runtime-labels: []
  runtime-ulimits: []
  volumes: []
hostaliases: [] 
DEBU[0000] Configuration:
agents: 0
image: ghcr.io/spinkube/containerd-shim-spin/k3d:v0.13.1
network: ""
options:
  k3d:
    disableimagevolume: false
    disableloadbalancer: false
    disablerollback: false
    loadbalancer:
      configoverrides: []
    timeout: 0s
    wait: true
  kubeconfig:
    switchcurrentcontext: true
    updatedefaultkubeconfig: true
  runtime:
    agentsmemory: ""
    gpurequest: ""
    hostpidmode: false
    serversmemory: ""
registries:
  config: ""
  use: []
servers: 1
subnet: ""
token: "" 
DEBU[0000] ========== Simple Config ==========
{TypeMeta:{Kind:Simple APIVersion:k3d.io/v1alpha5} ObjectMeta:{Name:} Servers:1 Agents:0 ExposeAPI:{Host: HostIP: HostPort:} Image:ghcr.io/spinkube/containerd-shim-spin/k3d:v0.13.1 Network: Subnet: ClusterToken: Volumes:[] Ports:[] Options:{K3dOptions:{Wait:true Timeout:0s DisableLoadbalancer:false DisableImageVolume:false NoRollback:false NodeHookActions:[] Loadbalancer:{ConfigOverrides:[]}} K3sOptions:{ExtraArgs:[] NodeLabels:[]} KubeconfigOptions:{UpdateDefaultKubeconfig:true SwitchCurrentContext:true} Runtime:{GPURequest: ServersMemory: AgentsMemory: HostPidMode:false Labels:[] Ulimits:[]}} Env:[] Registries:{Use:[] Create:<nil> Config:} HostAliases:[]}
========================== 
DEBU[0000] ========== Merged Simple Config ==========
{TypeMeta:{Kind:Simple APIVersion:k3d.io/v1alpha5} ObjectMeta:{Name:} Servers:1 Agents:0 ExposeAPI:{Host: HostIP: HostPort:42365} Image:ghcr.io/spinkube/containerd-shim-spin/k3d:v0.13.1 Network: Subnet: ClusterToken: Volumes:[] Ports:[{Port:8081:80 NodeFilters:[loadbalancer]}] Options:{K3dOptions:{Wait:true Timeout:0s DisableLoadbalancer:false DisableImageVolume:false NoRollback:false NodeHookActions:[] Loadbalancer:{ConfigOverrides:[]}} K3sOptions:{ExtraArgs:[] NodeLabels:[]} KubeconfigOptions:{UpdateDefaultKubeconfig:true SwitchCurrentContext:true} Runtime:{GPURequest: ServersMemory: AgentsMemory: HostPidMode:false Labels:[] Ulimits:[]}} Env:[] Registries:{Use:[] Create:<nil> Config:} HostAliases:[]}
========================== 
INFO[0000] portmapping '8081:80' targets the loadbalancer: defaulting to [servers:*:proxy agents:*:proxy] 
DEBU[0000] generated loadbalancer config:
ports:
  80.tcp:
  - k3d-wasm-cluster-server-0
  6443.tcp:
  - k3d-wasm-cluster-server-0
settings:
  workerConnections: 1024 
DEBU[0000] ===== Merged Cluster Config =====
&{TypeMeta:{Kind: APIVersion:} Cluster:{Name:wasm-cluster Network:{Name:k3d-wasm-cluster ID: External:false IPAM:{IPPrefix:invalid Prefix IPsUsed:[] Managed:false} Members:[]} Token: Nodes:[0xc000000340 0xc0000004e0] InitNode:<nil> ExternalDatastore:<nil> KubeAPI:0xc0001f0f00 ServerLoadBalancer:0xc00042fed0 ImageVolume: Volumes:[]} ClusterCreateOpts:{DisableImageVolume:false WaitForServer:true Timeout:0s DisableLoadBalancer:false GPURequest: ServersMemory: AgentsMemory: NodeHooks:[] GlobalLabels:map[app:k3d] GlobalEnv:[] HostAliases:[] Registries:{Create:<nil> Use:[] Config:<nil>}} KubeconfigOpts:{UpdateDefaultKubeconfig:true SwitchCurrentContext:true}}
===== ===== ===== 
DEBU[0000] '--kubeconfig-update-default set: enabling wait-for-server 
INFO[0000] Prep: Network                                
INFO[0000] Created network 'k3d-wasm-cluster'           
INFO[0000] Created image volume k3d-wasm-cluster-images 
DEBU[0000] [Docker] DockerHost: '' (unix:///run/user/1000/docker.sock) 
INFO[0000] Starting new tools node...                   
DEBU[0000] DOCKER_SOCK=/var/run/docker.sock             
DEBU[0000] DOCKER_SOCK=/var/run/docker.sock             
DEBU[0000] DOCKER_SOCK=/var/run/docker.sock             
DEBU[0000] Detected CgroupV2, enabling custom entrypoint (disable by setting K3D_FIX_CGROUPV2=false) 
DEBU[0000] Created container k3d-wasm-cluster-tools (ID: 08763c6eae24df178150a9ac34e9607f87aa075aae5b7094b6da8cb57ef878d4) 
DEBU[0000] Node k3d-wasm-cluster-tools Start Time: 2024-04-02 07:53:22.884297784 +0530 IST m=+0.069487660 
INFO[0000] Starting Node 'k3d-wasm-cluster-tools'       
DEBU[0000] Truncated 2024-04-02 02:23:23.052773229 +0000 UTC to 2024-04-02 02:23:23 +0000 UTC 
INFO[0001] Creating node 'k3d-wasm-cluster-server-0'    
DEBU[0001] Created container k3d-wasm-cluster-server-0 (ID: 19beb48f9e24f31c403902f7fb1f59e31bf7c5bfb146fdcd5bb5b2246bd90799) 
DEBU[0001] Created node 'k3d-wasm-cluster-server-0'     
INFO[0001] Creating LoadBalancer 'k3d-wasm-cluster-serverlb' 
DEBU[0001] Created container k3d-wasm-cluster-serverlb (ID: 26f40673af81a72930dd7ba0a358bf2b35eabcfb4a571bdb50d9add39338fc14) 
DEBU[0001] Created loadbalancer 'k3d-wasm-cluster-serverlb' 
DEBU[0001] DOCKER_SOCK=/var/run/docker.sock             
INFO[0001] Using the k3d-tools node to gather environment information 
DEBU[0001] no netlabel present on container /k3d-wasm-cluster-tools 
DEBU[0001] failed to get IP for container /k3d-wasm-cluster-tools as we couldn't find the cluster network 
DEBU[0001] Deleting node k3d-wasm-cluster-tools ...     
DEBU[0001] DOCKER_SOCK=/var/run/docker.sock             
INFO[0001] HostIP: using network gateway 172.29.0.1 address 
INFO[0001] Starting cluster 'wasm-cluster'              
INFO[0001] Starting servers...                          
DEBU[0001] >>> enabling cgroupsv2 magic                 
DEBU[0001] Node k3d-wasm-cluster-server-0 Start Time: 2024-04-02 07:53:24.065536612 +0530 IST m=+1.250726488 
INFO[0001] Starting Node 'k3d-wasm-cluster-server-0'    
DEBU[0001] Truncated 2024-04-02 02:23:24.265896393 +0000 UTC to 2024-04-02 02:23:24 +0000 UTC 
DEBU[0001] Waiting for node k3d-wasm-cluster-server-0 to get ready (Log: 'k3s is up and running'

Error encountered while attempting to delete cluster using k3d

Error Details

k3d cluster delete wasm-cluster --verbose

DEBU[0000] DOCKER_SOCK=/var/run/docker.sock             
DEBU[0000] Runtime Info:
&{Name:docker Endpoint:/var/run/docker.sock Version:20.10.25 OSType:linux OS:NixOS 23.05 (Stoat) Arch:x86_64 CgroupVersion:2 CgroupDriver:systemd Filesystem:extfs InfoName:nixos} 
DEBU[0000] Configuration:
{}                            
ERRO[0000] error getting loadbalancer config from k3d-wasm-cluster-serverlb: runtime failed to read loadbalancer config '/etc/confd/values.yaml' from node 'k3d-wasm-cluster-serverlb': Error response from daemon: Could not find the file /etc/confd/values.yaml in container 26f40673af81a72930dd7ba0a358bf2b35eabcfb4a571bdb50d9add39338fc14: file not found 
INFO[0000] Deleting cluster 'wasm-cluster'              
DEBU[0000] Cluster Details: &{Name:wasm-cluster Network:{Name:k3d-wasm-cluster ID: External:false IPAM:{IPPrefix:invalid Prefix IPsUsed:[] Managed:false} Members:[]} Token:DtZzZqxgaZUKhJeSfpDL Nodes:[0xc00001f6c0 0xc000334340] InitNode:<nil> ExternalDatastore:<nil> KubeAPI:<nil> ServerLoadBalancer:0xc000476450 ImageVolume:k3d-wasm-cluster-images Volumes:[k3d-wasm-cluster-images]} 
DEBU[0000] Deleting node k3d-wasm-cluster-serverlb ...  
DEBU[0000] Deleting node k3d-wasm-cluster-server-0 ...  
INFO[0000] Deleting cluster network 'k3d-wasm-cluster'  
INFO[0000] Deleting 1 attached volumes...               
DEBU[0000] Deleting volume k3d-wasm-cluster-images...   
INFO[0000] Removing cluster details from default kubeconfig... 
DEBU[0000] Using default kubeconfig 'kubeconfig'        
DEBU[0000] Wrote kubeconfig to 'kubeconfig'             
INFO[0000] Removing standalone kubeconfig file (if there is one)... 
INFO[0000] Successfully deleted cluster wasm-cluster!   


Utilizing Nix for Isolation

$ cat run.sh 
# Allow unfree packages during nixpkgs evaluation.
export NIXPKGS_ALLOW_UNFREE=1

# Drop into an ephemeral shell with k3d, kubectl, helm and docker from nixpkgs
# unstable, keeping the kubeconfig local to the current directory.
nix-shell -E '
let
  nixpkgs = import (fetchTarball "https://github.com/NixOS/nixpkgs/archive/nixos-unstable.tar.gz") {};
in
nixpkgs.mkShell {
  buildInputs = with nixpkgs; [ k3d kubectl kubernetes-helm docker ];
  shellHook = "export KUBECONFIG=kubeconfig";
}' 
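
Typical usage, as an illustration (run from the directory containing run.sh):

sh run.sh
# then run the k3d / kubectl / helm commands from inside the resulting nix-shell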

@rajatjindal
Contributor

I just tried it on my laptop (MacBook M2 Pro), and it completed successfully.

k3d cluster create wasm-cluster   --image ghcr.io/spinkube/containerd-shim-spin/k3d:v0.13.1   --port "8081:80@loadbalancer" --verbose
DEBU[0000] DOCKER_SOCK=/var/run/docker.sock
DEBU[0000] Runtime Info:
&{Name:docker Endpoint:/var/run/docker.sock Version:25.0.1 OSType:linux OS:Docker Desktop Arch:aarch64 CgroupVersion:2 CgroupDriver:cgroupfs Filesystem:extfs InfoName:docker-desktop}

--omitted some logs for readability--

DEBU[0023] Setting new current-context 'k3d-wasm-cluster'
DEBU[0023] Wrote kubeconfig to 'kubeconfig'
INFO[0023] You can now use it like this:
kubectl cluster-info

[nix-shell:~/nix-spinkube]$

@byteshiva
Author

byteshiva commented Apr 2, 2024

I initiated a discussion on the K3d GitHub repository at k3d-io/k3d#1422.

I followed the steps below to set up a SpinKube cluster in a local environment on Talos Linux.

But I see an error:

kubectl logs -f simple-spinapp-56687588d9-jcpvt 
Error from server (BadRequest): container "simple-spinapp" in pod "simple-spinapp-56687588d9-jcpvt" is waiting to start: ContainerCreating

Steps to Create Sample SpinKube App

Step 1:

kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.14.3/cert-manager.yaml
kubectl apply -f https://github.com/spinkube/spin-operator/releases/download/v0.1.0/spin-operator.crds.yaml
kubectl apply -f https://github.com/spinkube/spin-operator/releases/download/v0.1.0/spin-operator.runtime-class.yaml
kubectl apply -f https://github.com/spinkube/spin-operator/releases/download/v0.1.0/spin-operator.shim-executor.yaml

helm install spin-operator \
  --namespace spin-operator \
  --create-namespace \
  --version 0.1.0 \
  --wait \
  oci://ghcr.io/spinkube/charts/spin-operator
helm repo add kwasm http://kwasm.sh/kwasm-operator/

helm install \
  kwasm-operator kwasm/kwasm-operator \
  --namespace kwasm \
  --create-namespace \
  --set kwasmOperator.installerImage=ghcr.io/spinkube/containerd-shim-spin/node-installer:v0.13.1

kubectl annotate node --all kwasm.sh/kwasm-node=true
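
A few optional sanity checks after step 1, before deploying an app (resource names taken from the manifests above; adjust if your versions differ):

kubectl get runtimeclass wasmtime-spin-v2     # created by spin-operator.runtime-class.yaml
kubectl -n spin-operator get pods             # operator controller-manager should be Running
kubectl -n kwasm get pods                     # kwasm-operator should be Running
kubectl get crds | grep spinoperator.dev      # SpinApp / SpinAppExecutor CRDs are installed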

Step 2:

wget https://raw.githubusercontent.com/spinkube/spin-operator/main/config/samples/simple.yaml
kubectl apply -f  simple.yaml

Step 3:

kubectl get pods,svc,nodes  -A 
NAMESPACE       NAME                                                    READY   STATUS              RESTARTS      AGE
cert-manager    pod/cert-manager-5b54fc556f-dd7fw                       1/1     Running             0             12m
cert-manager    pod/cert-manager-cainjector-7d8b6cf7b9-7sv66            1/1     Running             0             12m
cert-manager    pod/cert-manager-webhook-7d4744b5ff-4vxts               1/1     Running             0             12m
default         pod/simple-spinapp-56687588d9-jcpvt                     0/1     ContainerCreating   0             7m1s
kube-system     pod/coredns-85b955d87b-fmfjw                            1/1     Running             0             14m
kube-system     pod/coredns-85b955d87b-kz5h4                            1/1     Running             0             14m
kube-system     pod/kube-apiserver-talos-xwn-6bf                        1/1     Running             0             14m
kube-system     pod/kube-controller-manager-talos-xwn-6bf               1/1     Running             2 (15m ago)   13m
kube-system     pod/kube-flannel-h95hg                                  1/1     Running             0             14m
kube-system     pod/kube-flannel-lqv4w                                  1/1     Running             0             14m
kube-system     pod/kube-proxy-ngzch                                    1/1     Running             0             14m
kube-system     pod/kube-proxy-t7fh7                                    1/1     Running             0             14m
kube-system     pod/kube-scheduler-talos-xwn-6bf                        1/1     Running             2 (15m ago)   14m
kwasm           pod/kwasm-operator-6c76c5f94b-cb5wt                     1/1     Running             0             11m
spin-operator   pod/spin-operator-controller-manager-565945c6f5-zhwx8   2/2     Running             0             12m

NAMESPACE       NAME                                                       TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                  AGE
cert-manager    service/cert-manager                                       ClusterIP   10.107.105.237   <none>        9402/TCP                 12m
cert-manager    service/cert-manager-webhook                               ClusterIP   10.96.242.218    <none>        443/TCP                  12m
default         service/kubernetes                                         ClusterIP   10.96.0.1        <none>        443/TCP                  15m
default         service/simple-spinapp                                     ClusterIP   10.110.159.114   <none>        80/TCP                   7m1s
kube-system     service/kube-dns                                           ClusterIP   10.96.0.10       <none>        53/UDP,53/TCP,9153/TCP   15m
kwasm           service/kwasm-operator                                     ClusterIP   10.100.176.177   <none>        80/TCP                   11m
spin-operator   service/spin-operator-controller-manager-metrics-service   ClusterIP   10.107.217.215   <none>        8443/TCP                 12m
spin-operator   service/spin-operator-webhook-service                      ClusterIP   10.98.156.138    <none>        443/TCP                  12m

References:

  1. https://www.spinkube.dev/docs/spin-operator/installation/installing-with-helm/
  2. https://www.spinkube.dev/docs/spin-operator/tutorials/integrating-with-docker-desktop/#spinkube
  3. https://www.spinkube.dev/docs/spin-operator/tutorials/integrating-with-rancher-desktop/#spinkube
  4. https://www.spinkube.dev/docs/spin-operator/tutorials/running-locally/#running-the-sample-application
  5. https://www.spinkube.dev/docs/spin-operator/tutorials/package-and-deploy/#creating-a-new-spin-app
  6. https://medium.com/@byteshiva/restoring-filesystem-hierarchy-standard-to-fix-spin-a-step-by-step-guide-66a40feb355b

@byteshiva
Author

byteshiva commented Apr 2, 2024

I deployed a simple JavaScript Spin application on the GitHub Container Registry and made it public.

https://github.com/users/byteshiva/packages/container/package/hello-k3s

However, I'm encountering difficulties in creating the container, and I'm unsure of the reason behind it.

Steps/Details

cat  spinapp.yaml 
apiVersion: core.spinoperator.dev/v1alpha1
kind: SpinApp
metadata:
  name: hello-k3s
spec:
  image: "ghcr.io/byteshiva/hello-k3s:0.0.1"
  executor: containerd-shim-spin
  replicas: 2

kubectl apply -f spinapp.yaml
[nix-shell:~/]$ kubectl get pod -A   
NAMESPACE       NAME                                                READY   STATUS              RESTARTS       AGE
cert-manager    cert-manager-5b54fc556f-dd7fw                       1/1     Running             0              109m
cert-manager    cert-manager-cainjector-7d8b6cf7b9-7sv66            1/1     Running             0              109m
cert-manager    cert-manager-webhook-7d4744b5ff-4vxts               1/1     Running             0              109m
default         hello-k3s-6f8f596bb9-6ctpz                          0/1     ContainerCreating   0              6s
default         hello-k3s-6f8f596bb9-rb4g7                          0/1     ContainerCreating   0              6s
kube-system     coredns-85b955d87b-fmfjw                            1/1     Running             0              111m
kube-system     coredns-85b955d87b-kz5h4                            1/1     Running             0              111m
kube-system     kube-apiserver-talos-xwn-6bf                        1/1     Running             0              111m
kube-system     kube-controller-manager-talos-xwn-6bf               1/1     Running             2 (111m ago)   110m
kube-system     kube-flannel-h95hg                                  1/1     Running             0              111m
kube-system     kube-flannel-lqv4w                                  1/1     Running             0              111m
kube-system     kube-proxy-ngzch                                    1/1     Running             0              111m
kube-system     kube-proxy-t7fh7                                    1/1     Running             0              111m
kube-system     kube-scheduler-talos-xwn-6bf                        1/1     Running             2 (112m ago)   110m
kwasm           kwasm-operator-6c76c5f94b-cb5wt                     1/1     Running             0              107m
spin-operator   spin-operator-controller-manager-565945c6f5-zhwx8   2/2     Running             0              108m

[nix-shell:~/]$ kubectl logs -f hello-k3s-6f8f596bb9-6ctpz
Error from server (BadRequest): container "hello-k3s" in pod "hello-k3s-6f8f596bb9-6ctpz" is waiting to start: ContainerCreating

kubectl describe pod hello-k3s-6f8f596bb9-6ctpz
  Warning  FailedCreatePodSandBox  69s (x26 over 6m23s)  kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to get sandbox runtime: no runtime for "spin" is configured
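
For context, this kubelet error means the node's CRI containerd has no runtime handler named "spin" registered, so the wasmtime-spin-v2 RuntimeClass cannot be satisfied. On distributions where the shim is installed, the registration in containerd's CRI config looks roughly like the snippet below (a sketch only; the exact config file and shim version depend on the installer):

# Illustrative entry, not present on the Talos nodes discussed here: it maps the
# RuntimeClass handler "spin" to the containerd-shim-spin binary.
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.spin]
  runtime_type = "io.containerd.spin.v2"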

@0xE282B0

0xE282B0 commented Apr 3, 2024

Hi @byteshiva,
how does your RuntimeClass look?

@byteshiva
Author

Hi @byteshiva, how does your RuntimeClass look?

The RuntimeClass configuration looks like this

$cat spin-runtime-class.yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: wasmtime-spin-v2
handler: spin

RuntimeClass Details


$ kubectl describe runtimeclass wasmtime-spin-v2
Name:         wasmtime-spin-v2
Namespace:    
Labels:       <none>
Annotations:  <none>
API Version:  node.k8s.io/v1
Handler:      spin
Kind:         RuntimeClass
Metadata:
  Creation Timestamp:  2024-04-03T11:34:26Z
  Resource Version:    9017
  UID:                 afc19f60-***2-9c53968b0b04
Events:                <none>

$ kubectl get runtimeclass wasmtime-spin-v2 -o yaml
apiVersion: node.k8s.io/v1
handler: spin
kind: RuntimeClass
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"node.k8s.io/v1","handler":"spin","kind":"RuntimeClass","metadata":{"annotations":{},"name":"wasmtime-spin-v2"}}
  creationTimestamp: "2024-04-03T11:34:26Z"
  name: wasmtime-spin-v2
  resourceVersion: "9017"
  uid: afc19f60-***2-9c53968b0b04

@0xE282B0

0xE282B0 commented Apr 3, 2024

How did you set up K3s?
Do you have access to the /var/lib/rancher/k3s/agent/etc/containerd/config.toml file?

@byteshiva
Author

byteshiva commented Apr 3, 2024

How did you set up K3s? Do you have access to the /var/lib/rancher/k3s/agent/etc/containerd/config.toml file?

Please note that I'm not using K3d/k3s. Instead, I have set it up on Talos Linux.
You can set up Talos Linux using the guide at: https://www.talos.dev/v1.6/talos-guides/install/local-platforms/virtualbox/

Containerd config / Other Details

Controller Node Config

$ talosctl cat  /etc/containerd/config.toml   -n "${CONTROLER_PLANE_IP}"

version = 2

disabled_plugins = [
    "io.containerd.grpc.v1.cri",
    "io.containerd.internal.v1.opt",
    "io.containerd.internal.v1.tracing",
    "io.containerd.nri.v1.nri",
    "io.containerd.snapshotter.v1.blockfile",
    "io.containerd.tracing.processor.v1.otlp",
]

[debug]
level = "info"
format = "json"

Agent/Worker Node Config

$ talosctl cat  /etc/containerd/config.toml   -n "${WORKER_IP}"
version = 2

disabled_plugins = [
    "io.containerd.grpc.v1.cri",
    "io.containerd.internal.v1.opt",
    "io.containerd.internal.v1.tracing",
    "io.containerd.nri.v1.nri",
    "io.containerd.snapshotter.v1.blockfile",
    "io.containerd.tracing.processor.v1.otlp",
]

[debug]
level = "info"
format = "json"

$ talosctl services    -n "${CONTROLER_PLANE_IP}"
SERVICE      STATE     HEALTH   LAST CHANGE   LAST EVENT
apid         Running   OK       18m55s ago    Health check successful
containerd   Running   OK       18m57s ago    Health check successful
cri          Running   OK       18m54s ago    Health check successful
dashboard    Running   ?        18m56s ago    Process Process(["/sbin/dashboard"]) started with PID 1699
etcd         Running   OK       18m20s ago    Health check successful
kubelet      Running   OK       19m9s ago     Health check successful
machined     Running   OK       19m2s ago     Health check successful
trustd       Running   OK       20m9s ago     Health check successful
udevd        Running   OK       19m2s ago     Health check successful
$ talosctl ls  /etc/   -n "${CONTROLER_PLANE_IP}"
 NAME
 .
 ca-certificates
 cni
 containerd
 cri
 extensions.yaml
 hosts
 kubernetes
 localtime
 lvm
 machine-id
 nfsmount.conf
 os-release
 pki
 resolv.conf
 ssl
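
As an aside (an assumption worth verifying against your Talos version): the config.toml shown above belongs to Talos' system containerd, which explicitly disables the CRI plugin; the kubelet talks to a separate CRI containerd whose configuration lives under /etc/cri/ (visible in the listing above). That is where a "spin" runtime handler would have to appear, e.g.:

# file names may differ between Talos versions
talosctl cat /etc/cri/containerd.toml -n "${CONTROLER_PLANE_IP}"
talosctl ls /etc/cri/conf.d -n "${CONTROLER_PLANE_IP}"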

@squillace

I like Talos! But realistically we would not have an "instead" sample; no harm in having several. :-)

@0xE282B0

0xE282B0 commented Apr 3, 2024

Please note that I'm not using K3d/k3s. Instead, I have set it up on Talos Linux.

In that case the error message is correct: the containerd shim is not installed.

With KWasm or the runtime-class-manager, the spin shim can be installed on a variety of Kubernetes distributions, but that does not work with Talos. We can, however, build a Talos extension that adds the shim.

Only Talos on Docker can't run system extensions, afaik. Are you using Talos on Docker?

@byteshiva
Author

byteshiva commented Apr 3, 2024

Only Talos on Docker can't execute extensions afaik. Are you using Talos on Docker?

No, I'm not using Talos on Docker. I'm currently running Talos on VirtualBox using an ISO (metal-amd64.iso).

@0xE282B0

0xE282B0 commented Apr 3, 2024

Great!
@saiyam1814 already added an extension for WasmEdge, so we just need to do the same for Spin 👍

@byteshiva byteshiva changed the title Request to use Talos Linux instead of k3d and experiencing issues pulling k3d images Troubleshooting Talos Linux Integration: Challenges with k3d Image Pulling Apr 3, 2024
@0xE282B0

0xE282B0 commented Apr 3, 2024

Hi @byteshiva
I was not able to verify it today, but this should work:
siderolabs/extensions@main...0xE282B0:talos-extensions:feat-spin-extension
I'll open a PR once I've made sure it runs on real Talos.

@0xE282B0

0xE282B0 commented Apr 7, 2024

Hi @byteshiva,
I opened a PR for the official Talos extensions repo: siderolabs/extensions#355

I tested it on Digital Ocean and a Raspberry Pi 4. Steps to reproduce:

  1. check out the feature branch:
    git clone https://github.com/0xE282B0/talos-extensions.git -b feat-spin-extension
  2. build and push the extension image (adapt registry and username):
    cd talos-extensions && make REGISTRY=docker.io USERNAME=0xe282b0 TARGETS="spin" PUSH=true
    If you get the error message "Makefile:106: *** missing separator. Stop.", you need to delete lines 99-147 from the Makefile.
  3. build an image for your desired platform:

Digital Ocean

docker run --rm -t -v $PWD/_out:/out -v /dev:/dev \
  --privileged ghcr.io/siderolabs/imager:v1.6.7 digital-ocean \
    --arch amd64 \
    --system-extension-image docker.io/0xe282b0/spin:v0.13.1
  • go to https://cloud.digitalocean.com/images/custom_images and create a custom image using the _out/digital-ocean-amd64.raw.gz file
  • Create a Talos droplet from the custom image (by default only the private IP can be used to reach the node)
  • Create a second Ubuntu droplet in the same VPC
  • install talosctl
  • configure the node, e.g. talosctl apply-config --insecure --mode=interactive --nodes <PRIVATE_IP> (see the bring-up sketch below)
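
A rough sketch of finishing cluster bring-up after apply-config (assuming a talosconfig for the new node is available locally; endpoints and flags may differ in your setup):

# bootstrap etcd on the first control-plane node, then fetch a kubeconfig
talosctl bootstrap --nodes <PRIVATE_IP>
talosctl kubeconfig --nodes <PRIVATE_IP>
kubectl get nodes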

Raspberry Pi

docker run --rm -t -v $PWD/_out:/out -v /dev:/dev \
  --privileged ghcr.io/siderolabs/imager:v1.6.7 metal \
    --arch arm64 \
    --system-extension-image docker.io/0xe282b0/spin:v0.13.1 \
    --board rpi_generic

Use the _out/metal-rpi_generic-arm64.raw.xz image to flash an SD card, e.g. with Raspberry Pi Imager

Verify extension

❯ talosctl --nodes 192.168.188.58 get extensions                                       
NODE             NAMESPACE   TYPE              ID   VERSION   NAME   VERSION
192.168.188.58   runtime     ExtensionStatus   0    1         spin   v0.13.1

Apply RuntimeClass

apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: wasmtime-spin-v2
handler: spin
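
Assuming the manifest above is saved locally (the filename is illustrative):

kubectl apply -f wasmtime-spin-runtimeclass.yaml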

Create a Spin test Pod

apiVersion: v1
kind: Pod
metadata:
  name: spin-test
spec:
  containers:
  - command:
    - /
    image: ghcr.io/spinkube/containerd-shim-spin/examples/spin-rust-hello
    name: spin-test
  runtimeClassName: wasmtime-spin-v2
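
Again assuming the Pod manifest above is saved locally (filename illustrative):

kubectl apply -f spin-test-pod.yaml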

Verify deployment

kubectl get pod
NAME        READY   STATUS    RESTARTS   AGE
spin-test   1/1     Running   0          56m
# Finally test the hello spin app 🥳
kubectl port-forward pod/spin-test 8000:80    
curl localhost:8000/hello
> Hello world from Spin!

Now everything is set up to use SpinKube! 🚀

@tpmccallum is SpinKube on Talos Linux something we should add to the documentation?

@saiyam1814

saiyam1814 commented Apr 8, 2024

Nice! @0xE282B0 let me know if any help is needed!
