
mirrord container - running from another container with mounted docker sock #2778

Open
miob-miob opened this issue Sep 24, 2024 · 11 comments
Labels
bug Something isn't working

Comments

@miob-miob

Bug Description

First, let me explain my setup:

I have everything installed in a container (kubectl, the Google Cloud SDK) in order to work with our infrastructure running in GKE; the docker sock is mounted from the host. I have also installed mirrord in the container. Although kubectl and docker work, I get errors when trying to use mirrord container.

I know it's a bit of a weird setup, but I'd love to see it working. In the same setup I can use mirrord exec without issues.

mirrord command:
mirrord container -f mirror_dee_tmp_conf.json -- docker run -p 8383:8080 mendhak/http-https-echo

Content of mirror_dee_tmp_conf.json:

{
  "target": {
    "pod": "cocoaas-backoffice-755dfb895f-mrc4z",
    "container": "cocoaas-backoffice"
  },
  "agent": {
    "labels": {
      "app": "mirrord",
      "team": "mirrord"
    },
    "ephemeral": true,
    "image": "***our mirror***/metalbear-co/mirrord:3.117.0",
    "annotations": {
      "cluster-autoscaler.kubernetes.io/safe-to-evict": "true"
    },
    "privileged": false
  },
  "feature": {
    "hostname": true,
    "network": {
      "incoming": {
        "mode": "steal",
        "ignore_localhost": false
      }
    }
  }
}

The error I got:

! mirrord container is currently an unstable feature
  x ! mirrord container is currently an unstable feature
    ✓ preparing to launch process
      ✓ operator not found
      ✓ container created
Error:   × Command failed to execute command [docker logs 2953aafee575e1d2aac9cf97ecff7f232803a44c823cb9fcbda4676804babaaa]: Error: No such container:
  │ 2953aafee575e1d2aac9cf97ecff7f232803a44c823cb9fcbda4676804babaaa
  │
  help: This is a bug. Please report it in our Discord or GitHub repository.

        >> Please open a new bug report at https://github.com/metalbear-co/mirrord/issues/new/choose

        >> Or join our Discord https://discord.gg/metalbear and request help in #mirrord-help

        >> Or email us at [email protected]

Logs from the docker daemon running in debug mode:

[2024-09-23T12:45:28.155391260Z][docker][I] [2024-09-23T12:45:28.153207885Z][lifecycle-server][I] (1398bb15) 4a0b4295-VMDockerdAPI S<-C 1ece1538-stats GET /vm/ram-cpu-usage
[2024-09-23T12:45:29.162459344Z][docker][I] [2024-09-23T12:45:29.158596511Z][lifecycle-server][I] (1398bb15) 4a0b4295-VMDockerdAPI S->C 1ece1538-stats GET /vm/ram-cpu-usage (1.005274209s): {"cpuPercentage":0.625,"memBytes":9749704000}
[2024-09-23T12:45:33.152810971Z][docker][I] [2024-09-23T12:45:33.150191680Z][lifecycle-server][I] (53ccfa6e) 4a0b4295-VMDockerdAPI S<-C 1ece1538-stats GET /vm/ram-cpu-usage
[2024-09-23T12:45:33.781555180Z][docker][I] [2024-09-23T12:45:33.780962138Z][lifecycle-server][I] proxy >> HEAD /_ping
[2024-09-23T12:45:33.782003638Z][docker][I] [2024-09-23T12:45:33.781681972Z][lifecycle-server][I] proxy << HEAD /_ping (739.125µs)
[2024-09-23T12:45:33.811316180Z][docker][I] [2024-09-23T12:45:33.810921222Z][lifecycle-server][I] proxy >> POST /v1.41/containers/create
[2024-09-23T12:45:33.812417513Z][docker][I] [2024-09-23T12:45:33.812157513Z][lifecycle-server][I] (440f346c) e479d393-LifecycleServer C->S BackendAPI GET /settings
[2024-09-23T12:45:33.818369013Z][docker][I] [2024-09-23T12:45:33.817394638Z][lifecycle-server][I] (440f346c) e479d393-LifecycleServer C<-S 7d5dcbfc-BackendAPI GET /settings (3.823875ms): {"acceptCanaryUpdates":false,"activeOrganizationName":"","allowExperimentalFeatures":true,"analyticsEnabled":true,"autoDownloadUpdates":false,"autoStart":false,"backupData":false,"containerTerminal":"integrated","cpus":8,"credentialHelper":"docker-credential-osxkeychain","customWslDistroDir":"","dataFolder":"<HOME>/Library/Containers/com.docker.docker/Data/vms/0/data","deprecatedCgroupv1":false,"disableHardwareAcceleration":false,"disableTips":false,"disableUpdate":false,"diskFlush":"","diskQcowCompactAfter":0,"diskQcowKeepErased":0,"diskQcowRuntimeAsserts":false,"diskSizeMiB":61035,"diskStats":"","diskTRIM":true,"displayRestartDialog":true,"displayedDeprecate1014":false,"displayedElectronPopup":[],"displayedWelcomeSurvey":true,"dockerAppLaunchPath":"","dockerBinInstallPath":"system","enableDefaultDockerSocket":true,"enableSegmentDebug":false,"enhancedCon
[2024-09-23T12:45:33.985246597Z][docker][I] [2024-09-23T12:45:33.984788805Z][lifecycle-server][I] proxy << POST /v1.41/containers/create (173.877ms)
[2024-09-23T12:45:33.988537597Z][docker][I] [2024-09-23T12:45:33.988188263Z][lifecycle-server][I] proxy >> POST /v1.41/containers/2953aafee575e1d2aac9cf97ecff7f232803a44c823cb9fcbda4676804babaaa/wait?condition=removed
[2024-09-23T12:45:33.991622138Z][docker][I] [2024-09-23T12:45:33.991264847Z][lifecycle-server][I] proxy >> GET /v1.41/containers/2953aafee575e1d2aac9cf97ecff7f232803a44c823cb9fcbda4676804babaaa/json
[2024-09-23T12:45:33.992570763Z][docker][I] [2024-09-23T12:45:33.992274930Z][lifecycle-server][I] proxy << GET /v1.41/containers/2953aafee575e1d2aac9cf97ecff7f232803a44c823cb9fcbda4676804babaaa/json (1.010833ms)
[2024-09-23T12:45:34.010541222Z][docker][I] [2024-09-23T12:45:34.010237972Z][lifecycle-server][I] proxy >> POST /v1.41/containers/2953aafee575e1d2aac9cf97ecff7f232803a44c823cb9fcbda4676804babaaa/start
[2024-09-23T12:45:34.153099138Z][docker][I] [2024-09-23T12:45:34.152740847Z][lifecycle-server][I] (53ccfa6e) 4a0b4295-VMDockerdAPI S->C 1ece1538-stats GET /vm/ram-cpu-usage (1.002546834s): {"cpuPercentage":7.3232323232323235,"memBytes":9753408000}
[2024-09-23T12:45:34.199192805Z][docker][I] [2024-09-23T12:45:34.198678430Z][lifecycle-server][I] proxy << POST /v1.41/containers/2953aafee575e1d2aac9cf97ecff7f232803a44c823cb9fcbda4676804babaaa/start (188.445375ms)
[2024-09-23T12:45:34.199716180Z][docker][I] [2024-09-23T12:45:34.199323555Z][lifecycle-server][I] proxy >> GET /v1.24/containers/2953aafee575e1d2aac9cf97ecff7f232803a44c823cb9fcbda4676804babaaa/json
[2024-09-23T12:45:34.199935180Z][docker][I] [2024-09-23T12:45:34.199357097Z][lifecycle-server][I] proxy >> GET /v1.24/containers/2953aafee575e1d2aac9cf97ecff7f232803a44c823cb9fcbda4676804babaaa/json
[2024-09-23T12:45:34.200722263Z][docker][I] [2024-09-23T12:45:34.200434013Z][lifecycle-server][I] proxy << GET /v1.24/containers/2953aafee575e1d2aac9cf97ecff7f232803a44c823cb9fcbda4676804babaaa/json (1.112792ms)
[2024-09-23T12:45:34.201183305Z][docker][I] [2024-09-23T12:45:34.200695013Z][lifecycle-server][I] proxy << GET /v1.24/containers/2953aafee575e1d2aac9cf97ecff7f232803a44c823cb9fcbda4676804babaaa/json (1.336875ms)
[2024-09-23T12:45:34.258277055Z][docker][I] [2024-09-23T12:45:34.257988305Z][lifecycle-server][I] proxy >> HEAD /_ping
[2024-09-23T12:45:34.258925263Z][docker][I] [2024-09-23T12:45:34.258532305Z][lifecycle-server][I] proxy << HEAD /_ping (548.334µs)
[2024-09-23T12:45:34.260140305Z][docker][I] [2024-09-23T12:45:34.259849638Z][lifecycle-server][I] proxy >> GET /v1.41/containers/2953aafee575e1d2aac9cf97ecff7f232803a44c823cb9fcbda4676804babaaa/json
[2024-09-23T12:45:34.301764388Z][docker][I] [2024-09-23T12:45:34.301401680Z][lifecycle-server][I] proxy << GET /v1.41/containers/2953aafee575e1d2aac9cf97ecff7f232803a44c823cb9fcbda4676804babaaa/json (41.547167ms)
[2024-09-23T12:45:34.306163263Z][docker][I] [2024-09-23T12:45:34.305826847Z][lifecycle-server][I] proxy >> GET /v1.41/containers/2953aafee575e1d2aac9cf97ecff7f232803a44c823cb9fcbda4676804babaaa/logs?stderr=1&stdout=1&tail=all
[2024-09-23T12:45:34.306657847Z][docker][I] [2024-09-23T12:45:34.306421972Z][lifecycle-server][I] proxy << GET /v1.41/containers/2953aafee575e1d2aac9cf97ecff7f232803a44c823cb9fcbda4676804babaaa/logs?stderr=1&stdout=1&tail=all (596.708µs)
[2024-09-23T12:45:34.322973847Z][docker][I] [2024-09-23T12:45:34.322616888Z][lifecycle-server][I] proxy << POST /v1.41/containers/2953aafee575e1d2aac9cf97ecff7f232803a44c823cb9fcbda4676804babaaa/wait?condition=removed (334.422875ms)
[2024-09-23T12:45:38.153850668Z][docker][I] [2024-09-23T12:45:38.152237376Z][lifecycle-server][I] (f2925a10) 4a0b4295-VMDockerdAPI S<-C 1ece1538-stats GET /vm/ram-cpu-usage
[2024-09-23T12:45:39.160603793Z][docker][I] [2024-09-23T12:45:39.156643918Z][lifecycle-server][I] (f2925a10) 4a0b4295-VMDockerdAPI S->C 1ece1538-stats GET /vm/ram-cpu-usage (1.004381875s): {"cpuPercentage":0.24968789013732834,"memBytes":9750320000}
[2024-09-23T12:45:43.153380795Z][docker][I] [2024-09-23T12:45:43.151016253Z][lifecycle-server][I] (6282ca3a) 4a0b4295-VMDockerdAPI S<-C 1ece1538-stats GET /vm/ram-cpu-usage
[2024-09-23T12:45:44.158241087Z][docker][I] [2024-09-23T12:45:44.155585962Z][lifecycle-server][I] (6282ca3a) 4a0b4295-VMDockerdAPI S->C 1ece1538-stats GET /vm/ram-cpu-usage (1.004468667s): {"cpuPercentage":0.12484394506866417,"memBytes":9748564000}
[2024-09-23T12:45:48.152234297Z][docker][I] [2024-09-23T12:45:48.150301297Z][lifecycle-server][I] (3a473b25) 4a0b4295-VMDockerdAPI S<-C 1ece1538-stats GET /vm/ram-cpu-usage

Steps to Reproduce

mirrord installed in a container
kubectl installed in the container
docker sock mounted from the host

running mirrord container ...

Backtrace

No response

Relevant Logs

No response

Your operating system and version

container os: Linux 67bb1e5e3dc1 5.15.49-linuxkit #1 SMP PREEMPT Tue Sep 13 07:51:32 UTC 2022 aarch64 GNU/Linux computer os: Darwin Michals-MacBook-Pro.local 22.6.0 Darwin Kernel Version 22.6.0: Mon Jun 24 01:22:14 PDT 2024; root:xnu-8796.141.3.706.2~1/RELEASE_ARM64_T6000 arm64

Local process

running mirrord container

Local process version

No response

Additional Info

No response

@miob-miob miob-miob added the bug Something isn't working label Sep 24, 2024
@aviramha
Member

Hi, what mirrord version are you using?

@miob-miob
Author

miob-miob commented Sep 24, 2024

mirrord 3.117.0

same as agent

Member

This looks like an issue we fixed in mirrord 3.118.0. Can you please try it?

@miob-miob
Author

mirrord container -f mirror_dee_tmp_conf.json -- docker run -p 8383:3333 mendhak/http-https-echo
! mirrord container is currently an unstable feature
x ! mirrord container is currently an unstable feature
✓ preparing to launch process
✓ operator not found
✓ container created
✓ container is ready Error: × Command failed to execute command [docker logs f28c1b6b0de277f79feb429dd29804baf8400d76ee87091f913766783950fc86]: Error: No such container:
│ f28c1b6b0de277f79feb429dd29804baf8400d76ee87091f913766783950fc86

help: This is a bug. Please report it in our Discord or GitHub repository.

still same on version mirrord 3.118.0

@DmitryDodzin
Member

Oh, I've just noticed you have a custom repo for the mirrord agent. The issue might be pulling the sidecar image that we spawn for the container feature. Could you try adding this to your config, with the updated registry?

"container": {
  "cli_image": "***our mirror***/metalbear-co/mirrord-cli:3.118.0"
}

@miob-miob
Author

The error still persists.

As the error is

docker logs a12cf054f069f17cd830eb0be27ead026167ae1a311954a1aa0baf5cbf368818]: Error: No such container: a12cf054f069f17cd830eb0be27ead026167ae1a311954a1aa0baf5cbf368818

I would say the container was alive for some time (the image was pulled), but that's just my guess.

The configuration now looks like:

{
  "target": {
    "pod": "cocoaas-backoffice-ccdb58b7f-kvwvh",
    "container": "cocoaas-backoffice"
  },
  "container": {
    "cli_image": "*******/mirror-eu/ghcr.io/metalbear-co/mirrord:3.118.0"
  },
  "agent": {
    "labels": {
      "app": "mirrord",
      "team": "mirrord"
    },
    "ephemeral": true,
    "image": "*****mirror-eu/ghcr.io/metalbear-co/mirrord:3.118.0",
    "annotations": {
      "cluster-autoscaler.kubernetes.io/safe-to-evict": "true"
    },
    "privileged": false
  },
  "feature": {
    "hostname": true,
    "network": {
      "incoming": {
        "mode": "steal",
        "ignore_localhost": false
      }
    }
  }
}

@DmitryDodzin
Member

Hi, I may have missed this, but just to clarify: the cli image is a bit different from the agent image.

ghcr.io/metalbear-co/mirrord-cli - cli image
ghcr.io/metalbear-co/mirrord - agent image

Can you double-check? I see that the provided config has the same image in both config values.
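For reference, with a mirrored registry the two settings would then look something like this (`INTERNAL-MIRROR` is a placeholder for the private registry prefix):

```json
{
  "container": {
    "cli_image": "INTERNAL-MIRROR/ghcr.io/metalbear-co/mirrord-cli:3.118.0"
  },
  "agent": {
    "image": "INTERNAL-MIRROR/ghcr.io/metalbear-co/mirrord:3.118.0"
  }
}
```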

@miob-miob
Author

Wow, thanks, I totally missed the fact that there are two different images. Let me add it to my corporate mirror and try it.
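A quick sanity check along these lines (a hypothetical helper, not part of mirrord) can catch the mixup, since the CLI sidecar image path contains `mirrord-cli` while the agent image path ends in plain `mirrord`:

```python
def check_mirrord_images(config: dict) -> list:
    """Return a list of suspected problems with the cli/agent image fields."""
    problems = []
    cli_image = config.get("container", {}).get("cli_image", "")
    agent_image = config.get("agent", {}).get("image", "")
    # The CLI sidecar image is ghcr.io/metalbear-co/mirrord-cli,
    # while the agent image is ghcr.io/metalbear-co/mirrord.
    if cli_image and "mirrord-cli" not in cli_image:
        problems.append("cli_image does not reference a mirrord-cli image")
    if cli_image and cli_image == agent_image:
        problems.append("cli_image and agent image are identical")
    return problems

# Example: the agent image mistakenly reused as the CLI image.
bad = {
    "container": {"cli_image": "mirror.example/metalbear-co/mirrord:3.118.0"},
    "agent": {"image": "mirror.example/metalbear-co/mirrord:3.118.0"},
}
print(check_mirrord_images(bad))  # both checks fire for this config
```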

@miob-miob
Author

The error still persists:

command:

mirrord container -f mirror_dee_tmp_conf.json -- docker run -p 8383:8080 mendhak/http-https-echo

mirror_dee_tmp_conf.json:

{
  "target": {
    "pod": "cocoaas-backoffice-64f4bcd458-v9575",
    "container": "cocoaas-backoffice"
  },
  "container": {
    "cli_image": "INTERNAL-MIRROR/ghcr.io/metalbear-co/mirrord-cli:3.118.0"
  },
  "agent": {
    "labels": {
      "app": "mirrord",
      "team": "mirrord"
    },
    "ephemeral": true,
    "image": "INTERNAL-MIRROR/ghcr.io/metalbear-co/mirrord:3.118.0",
    "annotations": {
      "cluster-autoscaler.kubernetes.io/safe-to-evict": "true"
    },
    "privileged": false
  },
  "feature": {
    "hostname": true,
    "network": {
      "incoming": {
        "mode": "steal",
        "ignore_localhost": false
      }
    }
  }
}

error:

! mirrord container is currently an unstable feature
x ! mirrord container is currently an unstable feature
✓ preparing to launch process
✓ operator not found
✓ container created
✓ container is ready Error: × Command failed to execute command [docker logs
│ 486768d9040d698a3544344b740f56690f1a1e64bd80564ae9676f940ffe45f2]: Error: No such container:
│ 486768d9040d698a3544344b740f56690f1a1e64bd80564ae9676f940ffe45f2

@DmitryDodzin
Member

DmitryDodzin commented Oct 22, 2024

Ah, that's a bummer. There is one fix we just released in version 3.121.1 that should produce a more accurate error message. I'm afraid the `docker logs <id>` error just means the sidecar container was deleted before the cli managed to fetch its error logs (the latest release should fix that).

Could you try updating to the latest version? It might shed more light on what the problem is.

3 participants