A benchmark and resources for evaluating LLM agents on setting up and executing ML/NLP tasks from research repositories found in the wild on GitHub.
[arXiv]

The benchmark's tasks are available on the HuggingFace Hub 🤗.
We provide three task sets: Expert (45 problems), Masked (152 problems), and AutoGen (602 problems).
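For example, the task sets can be loaded with the 🤗 `datasets` library. The sketch below is a minimal example; the dataset ID `allenai/super` and the assumption that the three sets are exposed as configurations are guesses, so check the Hub dataset card before relying on them.

```python
# Minimal sketch: load SUPER tasks from the HuggingFace Hub.
# The dataset ID below is an assumption; check the Hub page for the exact ID
# and for how the Expert / Masked / AutoGen sets are exposed (configs vs. splits).
from datasets import get_dataset_config_names, load_dataset

DATASET_ID = "allenai/super"  # assumed identifier

configs = get_dataset_config_names(DATASET_ID)
print("Available configurations:", configs)

tasks = load_dataset(DATASET_ID, configs[0])
for split, data in tasks.items():
    print(f"{split}: {len(data)} problems")
```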
Agent trajectories from the paper's experiments are available here.
```bash
git clone https://github.com/allenai/super-benchmark.git
cd super-benchmark
pip install -r requirements.txt
echo "OPENAI_API_KEY=your-openai-api-key" > .env
```
The following command runs the agent locally, which may incur some risk since the agent executes code directly on your machine. We also provide options to run the agent inside a Docker container or on modal.com; we use the latter for the benchmark evaluation.
```bash
python -m super.run_single_query --env-backend local --query "Download the OpenBookQA dataset at https://github.com/allenai/OpenBookQA and tell me how many examples are in the train, dev, and test splits of the datasets."
```
We provide code to evaluate the agents implemented in this repository on SUPER.
To run tasks safely and concurrently, we use modal.com. Modal isn't free, but it is quite cheap: running an average problem from the benchmark should generally cost 2-3 cents (assuming CPU-only execution), so a full pass over the 45-problem Expert set comes to roughly $1-$1.40. In addition, users receive $30 of credit per month, which should be enough to run the benchmark evaluation multiple times.
```bash
python -m super.run_on_benchmark --set Expert
```