Reduce cache TTL to avoid getting stuck in failure for too long (#114)
CoreyEWood authored Oct 16, 2024
1 parent 7f5ea72 commit 96c3692
Showing 1 changed file with 1 addition and 1 deletion.
2 changes: 1 addition & 1 deletion app/core/edge_inference.py
@@ -19,7 +19,7 @@

# Simple TTL cache for is_edge_inference_ready checks to avoid having to re-check every time a request is processed.
# This will be process-specific, so each edge-endpoint worker will have its own cache instance.
- ttl_cache = TTLCache(maxsize=128, ttl=30)
+ ttl_cache = TTLCache(maxsize=128, ttl=2)


@cached(ttl_cache)
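The effect of shrinking the TTL from 30 to 2 seconds can be sketched without the real readiness check. The snippet below is a minimal stdlib illustration, not the project's code: `ttl_cached` is a hypothetical stand-in for `cachetools.cached(TTLCache(...))`, and `is_ready` stands in for `is_edge_inference_ready`. It shows why a long TTL keeps serving a stale (possibly failed) result, while a short one re-checks quickly.

```python
import time
from functools import wraps

def ttl_cached(ttl: float):
    """Minimal per-argument TTL cache: returns the cached result while it is
    younger than `ttl` seconds, otherwise calls the function again."""
    def decorator(fn):
        cache = {}  # args -> (timestamp, result)
        @wraps(fn)
        def wrapper(*args):
            now = time.monotonic()
            hit = cache.get(args)
            if hit is not None and now - hit[0] < ttl:
                return hit[1]  # fresh enough: skip the real check
            result = fn(*args)
            cache[args] = (now, result)
            return result
        return wrapper
    return decorator

check_count = 0

@ttl_cached(ttl=2)  # the value this commit settles on
def is_ready(name):
    # Hypothetical stand-in for is_edge_inference_ready; counts real checks.
    global check_count
    check_count += 1
    return True

is_ready("detector")
is_ready("detector")   # within the 2 s TTL: served from cache
assert check_count == 1
time.sleep(2.1)        # let the TTL entry expire
is_ready("detector")   # TTL expired: the check actually runs again
assert check_count == 2
```

With `ttl=30`, a cached "not ready" (or stale "ready") answer would persist for up to 30 seconds per worker process; at `ttl=2` each worker re-verifies within a couple of seconds, at the cost of slightly more frequent checks.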
