
Make golem-worker-executor depend on redis in Docker Compose #348

Merged

Conversation

asavelyev01
Contributor

I bumped into this error when trying to bootstrap the Golem infra locally:

golem-worker-executor_1               | 2024-04-02T15:42:06.188727Z  INFO golem_worker_executor_base::services::template: Using template API at http://golem-template-service:9090/
golem-worker-executor_1               | 2024-04-02T15:42:06.188706Z  INFO Server::run{addr=0.0.0.0:8082}: warp::server: listening on http://0.0.0.0:8082/
golem-worker-executor_1               | thread 'main' panicked at /home/runner/work/golem/golem/golem-worker-executor-base/src/services/oplog.rs:59:35:
golem-worker-executor_1               | failed to get the number of replicas from Redis: IO Error: Os { code: 111, kind: ConnectionRefused, message: "Connection refused" }
golem-worker-executor_1               | stack backtrace:
golem-worker-executor_1               |    0: rust_begin_unwind
golem-worker-executor_1               |              at /rustc/7cf61ebde7b22796c69757901dd346d0fe70bd97/library/std/src/panicking.rs:647:5
golem-worker-executor_1               |    1: core::panicking::panic_fmt
golem-worker-executor_1               |              at /rustc/7cf61ebde7b22796c69757901dd346d0fe70bd97/library/core/src/panicking.rs:72:14
golem-worker-executor_1               |    2: golem_worker_executor_base::Bootstrap::run::{{closure}}
golem-worker-executor_1               |    3: golem_worker_executor::run::{{closure}}
golem-worker-executor_1               |    4: tokio::runtime::context::runtime::enter_runtime
golem-worker-executor_1               |    5: tokio::runtime::runtime::Runtime::block_on
golem-worker-executor_1               |    6: worker_executor::main
golem-worker-executor_1               | note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
golem-worker-executor_1               | 2024-04-02T15:42:06.238006Z  INFO golem_worker_executor_base::http_server: Stopping Http server...
golem-template-compilation-service_1  | 2024-04-02T15:42:06.934255Z  INFO golem_worker_executor_base::http_server: Http server started on 0.0.0.0:8084
golem-template-compilation-service_1  | 2024-04-02T15:42:06.934319Z  INFO Server::run{addr=0.0.0.0:8084}: warp::server: listening on http://0.0.0.0:8084/
redis_1                               | 1:C 02 Apr 2024 15:42:06.250 * oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
redis_1                               | 1:C 02 Apr 2024 15:42:06.250 * Redis version=7.2.4, bits=64, commit=00000000, modified=0, pid=1, just started
redis_1                               | 1:C 02 Apr 2024 15:42:06.250 * Configuration loaded
redis_1                               | 1:M 02 Apr 2024 15:42:06.250 * monotonic clock: POSIX clock_gettime
redis_1                               | 1:M 02 Apr 2024 15:42:06.255 * Running mode=standalone, port=6380.
redis_1                               | 1:M 02 Apr 2024 15:42:06.259 * Server initialized
redis_1                               | 1:M 02 Apr 2024 15:42:06.260 * Ready to accept connections tcp
golem-template-service_1              | 2024-04-02T15:42:06.179038Z  INFO golem_template_service: Starting cloud server on ports: http: 8083, grpc: 9090
golem-template-service_1              | 2024-04-02T15:42:06.179103Z  INFO golem_template_service::db: DB migration: sqlite:///app/golem_db/golem.sqlite
golem-template-service_1              | 2024-04-02T15:42:06.250322Z  INFO golem_template_service::db: DB Pool: sqlite:///app/golem_db/golem.sqlite
golem-template-service_1              | 2024-04-02T15:42:06.252545Z  INFO golem_service_base::service::template_object_store: FS Template Object Store root: /template_store, prefix:
golem-template-service_1              | 2024-04-02T15:42:06.265104Z  INFO poem::server: listening addr=socket://0.0.0.0:8083
golem-template-service_1              | 2024-04-02T15:42:06.265122Z  INFO poem::server: server started
golem-shard-manager_1                 | 2024-04-02T15:42:06.876860Z  INFO golem_shard_manager: Golem Shard Manager starting up...
golem-shard-manager_1                 | 2024-04-02T15:42:06.876960Z  INFO golem_shard_manager: Using Redis at redis://redis:6380/0
golem-shard-manager_1                 | 2024-04-02T15:42:06.877090Z  INFO Server::run{addr=0.0.0.0:8081}: warp::server: listening on http://0.0.0.0:8081/
golem-shard-manager_1                 | 2024-04-02T15:42:06.877212Z  INFO golem_shard_manager: The port read from env is 9002
golem-shard-manager_1                 | 2024-04-02T15:42:06.877245Z  INFO golem_shard_manager: Listening on port 9002
golem-shard-manager_1                 | 2024-04-02T15:42:06.886463Z  INFO golem_shard_manager: Starting health check process...
golem-shard-manager_1                 | 2024-04-02T15:42:06.886497Z  INFO golem_shard_manager: Shard Manager is fully operational.
docker-examples_golem-worker-executor_1 exited with code 101

From the looks of it, the Worker Executor sometimes tries to connect to Redis before the latter has started up. A depends_on clause should fix it; see the sketch below.
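
For context, a minimal sketch of the kind of change the PR title describes, assuming the Compose service names match the container prefixes in the log above (golem-worker-executor, redis); the image names and exact file layout here are placeholders, not the repository's actual docker-compose file:

```yaml
# Hypothetical docker-compose.yaml excerpt: start redis before the worker executor.
services:
  redis:
    image: redis:7.2
    # The log above shows Redis listening on 6380 rather than the default 6379.
    command: ["redis-server", "--port", "6380"]

  golem-worker-executor:
    image: golemcloud/golem-worker-executor:latest   # placeholder image name
    depends_on:
      - redis   # short form: only guarantees startup order, not readiness
```

Note that the short `depends_on` form only orders container startup; a readiness-gated variant is sketched after the next comment.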

@asavelyev01
Contributor Author

PS: I tried starting both docker-compose files with the fix in place and saw no error in 2-3 attempts, so it probably fixes the issue and most likely doesn't break anything else.
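
If startup ordering alone ever proves insufficient (the short `depends_on` form does not wait for Redis to actually accept connections), the Compose specification also supports gating on a healthcheck. This is a hypothetical stricter variant, not part of this PR:

```yaml
# Hypothetical variant: block the worker executor until Redis answers PING,
# not merely until its container has been started.
services:
  redis:
    image: redis:7.2
    command: ["redis-server", "--port", "6380"]
    healthcheck:
      test: ["CMD", "redis-cli", "-p", "6380", "ping"]
      interval: 2s
      timeout: 2s
      retries: 15

  golem-worker-executor:
    image: golemcloud/golem-worker-executor:latest   # placeholder image name
    depends_on:
      redis:
        condition: service_healthy
```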

@justcoon (Contributor) left a comment


Thanks

@afsalthaj afsalthaj merged commit 9e4bcaf into golemcloud:main Apr 3, 2024
6 checks passed