It seems to me that it makes the most logical sense to put the functionality of running the benchmarks into the client project (let me know if you disagree, @WrathfulSpatula).
At present, the base-level functionality for running the benchmarking pipeline is located here:
https://github.com/unitaryfund/metriq-api/blob/main/benchmark/run.py
We should create a folder in the root directory named `benchmark/` with the contents of `run.py` contained within as a first step. We should also ensure that the benchmarking pipeline can still be run after the move.

@srilakshmip03 / @QuantumAli I think focusing work on the benchmarks would take priority for the client, so it probably makes sense to sync on this for further details.
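For illustration only, here is a minimal sketch of what a `benchmark/run.py` entry point could look like once it lives in the client repo. The `run_benchmarks` helper and the `--output-dir` flag are hypothetical placeholders, not the actual interface of the existing `run.py`:

```python
"""Hypothetical sketch of a benchmark/run.py entry point (not the real interface)."""
import argparse
from pathlib import Path


def run_benchmarks(output_dir: Path) -> None:
    """Placeholder for the benchmarking pipeline (assumed name, not from the repo)."""
    output_dir.mkdir(parents=True, exist_ok=True)
    # The actual benchmark logic from metriq-api's run.py would be invoked here.
    print(f"Benchmark results would be written to {output_dir}")


def main() -> None:
    parser = argparse.ArgumentParser(description="Run the benchmarking pipeline.")
    parser.add_argument(
        "--output-dir",
        type=Path,
        default=Path("results"),
        help="Directory where benchmark results are written.",
    )
    args = parser.parse_args()
    run_benchmarks(args.output_dir)


if __name__ == "__main__":
    main()
```

Keeping a single CLI entry point like this would make it straightforward to wire the pipeline into CI for the client project, but the exact structure should follow whatever `run.py` already does.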