
Prod deploy 06/10/23 #271

Merged
merged 6 commits into from
Oct 6, 2023

Commits on Oct 2, 2023

  1. Commit 36ea4ab

Commits on Oct 3, 2023

  1. feat: put toggle + disclaimer (#237)

    Adds a toggle to switch which GraphQL endpoint the logs and state are
    pulled from.
    
     
    
    ![gipp](https://github.com/near/queryapi/assets/25015977/65c0a7dd-7e0d-4d0d-8a66-fd35a2e9d0b3)
    roshaans authored Oct 3, 2023 · commit 2fc4e96
  2. Commit 9f13148

Commits on Oct 5, 2023

  1. Store Real Time Streamer Messages in Redis (#241)

    The streamer message is used by both the coordinator and the runner,
    but both currently poll it from S3, which adds significant latency. To
    improve this, the streamer message is now cached in Redis with a TTL;
    the runner reads it from Redis and falls back to S3 only on a cache
    miss.
    
    Pulling from S3 currently takes 200-500ms, roughly 80-85% of a
    function's overall execution time in runner. With caching, a cache hit
    loads the data in 1-3ms in local testing, which corresponds to about
    3-5% of the execution time, a 1100% improvement in latency.
    
    Reducing network activity to a much smaller share of execution time
    also greatly reduces the variability of a function's execution time.
    Cache hits and misses will be logged so the TTL can be tuned to reduce
    misses. In addition, processing the block takes around 1-3ms; this
    processing now happens before caching, saving an extra 1-3ms each time
    the block is read from the cache. That improvement will matter for
    historical backfill, which is planned to be optimized soon.
    
    Tracking Issue: #262
    Parent Issue: #204
    darunrs authored Oct 5, 2023 · commit 9678087
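The flow described above is the classic cache-aside pattern. A minimal sketch, assuming a generic cache interface and an injected S3 fetcher (`CacheLike`, `getStreamerMessage`, and `fetchFromS3` are illustrative names, not the actual queryapi interfaces):

```typescript
// Hypothetical cache-aside sketch: try the Redis-like cache first,
// fall back to S3 on a miss, then repopulate the cache with a TTL.
interface CacheLike {
  get(key: string): Promise<string | null>;
  set(key: string, value: string, ttlSeconds: number): Promise<void>;
}

async function getStreamerMessage(
  blockHeight: number,
  cache: CacheLike,
  fetchFromS3: (height: number) => Promise<string>,
  ttlSeconds = 60,
): Promise<string> {
  const key = `streamer:${blockHeight}`;
  const cached = await cache.get(key);
  if (cached !== null) {
    return cached; // cache hit: avoids the slow S3 round trip
  }
  // Cache miss: pull from S3 and repopulate the cache with a TTL,
  // so subsequent readers of the same block hit the cache.
  const message = await fetchFromS3(blockHeight);
  await cache.set(key, message, ttlSeconds);
  return message;
}
```

With this shape, the S3 fetcher is called at most once per block while the TTL holds, which is what makes hit/miss logging useful for tuning the TTL.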

Commits on Oct 6, 2023

  1. Commit ebadd12
  2. fix: Handle errors emitted within worker threads (#270)

    Uncaught errors thrown within a worker kill the entire application.
    This PR handles this by listening for those errors and terminating the
    affected thread when they arise.
    morgsmccauley authored Oct 6, 2023 · commit 29e77c3
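In Node, the pattern described in that fix amounts to attaching an `"error"` listener to the `Worker` and terminating it. A minimal sketch of that pattern (the `runWorker` helper is hypothetical, not the actual queryapi code):

```typescript
import { Worker } from "node:worker_threads";

// Run an inline worker script and contain any uncaught error it
// throws, instead of letting it crash the whole process.
function runWorker(script: string): Promise<string> {
  return new Promise((resolve) => {
    const worker = new Worker(script, { eval: true });
    worker.on("error", async (err) => {
      // Without this listener, the uncaught worker error would
      // propagate and bring down the entire application.
      await worker.terminate();
      resolve(`worker failed: ${err.message}`);
    });
    worker.on("exit", (code) => {
      if (code === 0) resolve("worker exited cleanly");
    });
  });
}
```

For example, `runWorker('throw new Error("boom")')` resolves with the failure message rather than killing the process.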