Update global_benchmark.py #22

Status: Open. Wants to merge 3 commits into base branch: main.
74 changes: 74 additions & 0 deletions .github/workflows/benchmarks.yml
@@ -0,0 +1,74 @@
name: Performance benchmarks

on:
  pull_request_target:
    types: [opened, ready_for_review]
    branches:
      - main
  issue_comment:
    types: [created]

permissions:
  issues: write
  pull-requests: write

jobs:
  run-benchmarks:
    if: >
      github.event_name == 'pull_request_target' ||
      (github.event_name == 'issue_comment' && contains(github.event.comment.body, '/rerun-benchmarks'))
    runs-on: ubuntu-latest
    steps:
      - name: Checkout main branch
        uses: actions/checkout@v4
        with:
          ref: main
          repository: EwoutH/mesa
      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.12'
      - name: Add project directory to PYTHONPATH
        run: echo "PYTHONPATH=$PYTHONPATH:$(pwd)" >> $GITHUB_ENV
      - name: Install dependencies
        run: pip install numpy pandas tqdm tabulate
      - name: Run benchmarks on main branch
        working-directory: benchmarks
        run: python global_benchmark.py
      - name: Upload benchmark results
        uses: actions/upload-artifact@v4
        with:
          name: timings-main
          path: benchmarks/timings_1.pickle
      - name: Checkout PR branch
        uses: actions/checkout@v4
        with:
          ref: refs/pull/${{ github.event.pull_request.number }}/merge
          repository: ${{ github.event.pull_request.head.repo.full_name }}
          token: ${{ secrets.GITHUB_TOKEN }}
          clean: false
      - name: Download benchmark results
        uses: actions/download-artifact@v4
        with:
          name: timings-main
          path: benchmarks
      - name: Run benchmarks on PR branch
        working-directory: benchmarks
        run: python global_benchmark.py
      - name: Compare timings and encode output
        working-directory: benchmarks
        run: |
          TIMING_COMPARISON=$(python compare_timings.py | base64 -w 0)  # Base64-encode the multi-line output
          echo "TIMING_COMPARISON=$TIMING_COMPARISON" >> $GITHUB_ENV
      - name: Comment PR
        uses: actions/github-script@v7
        with:
          script: |
            const output = Buffer.from(process.env.TIMING_COMPARISON, 'base64').toString('utf-8');
            const issue_number = context.issue.number;
            github.rest.issues.createComment({
              issue_number: issue_number,
              owner: context.repo.owner,
              repo: context.repo.repo,
              body: 'Benchmark Comparison:\n\n' + output
            });
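The "Compare timings and encode output" step base64-encodes the comparison table because entries appended to $GITHUB_ENV are line-oriented, while the table from compare_timings.py spans multiple lines. A minimal Python sketch of the same round-trip; the table content here is made up for illustration:

```python
import base64

# $GITHUB_ENV takes one "KEY=value" entry per line, so a multi-line
# timings table must be flattened first; the workflow does this with
# `base64 -w 0` in the shell step.
table = "Model | Init | Run\nSchelling | 1.02x | 0.98x"  # made-up example content

encoded = base64.b64encode(table.encode("utf-8")).decode("ascii")
assert "\n" not in encoded  # now safe to store as a single env-file line

# Equivalent of Buffer.from(..., 'base64').toString('utf-8') in the
# github-script step:
decoded = base64.b64decode(encoded).decode("utf-8")
assert decoded == table
```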
4 changes: 2 additions & 2 deletions benchmarks/global_benchmark.py
@@ -12,13 +12,13 @@
 def run_model(model_class, seed, parameters):
     start_init = timeit.default_timer()
     model = model_class(seed=seed, **parameters)
-    # time.sleep(0.001)
+    time.sleep(0.001)

     end_init_start_run = timeit.default_timer()

     for _ in range(config["steps"]):
         model.step()
-        # time.sleep(0.0001)
+        time.sleep(0.0001)
     end_run = timeit.default_timer()

     return (end_init_start_run - start_init), (end_run - end_init_start_run)
Expand Down
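The run_model function being modified takes three timestamps to report init time and run time separately. A self-contained sketch of that pattern, using a hypothetical DummyModel in place of a real Mesa model (both DummyModel and time_model are illustrative names, not part of the PR):

```python
import timeit


class DummyModel:
    """Hypothetical stand-in for a Mesa model class."""

    def __init__(self, seed=None):
        self.seed = seed
        self.steps_taken = 0

    def step(self):
        self.steps_taken += 1


def time_model(model_class, seed, steps):
    # Three timestamps split the measurement into construction time
    # and stepping time, mirroring run_model() in global_benchmark.py.
    start_init = timeit.default_timer()
    model = model_class(seed=seed)
    end_init_start_run = timeit.default_timer()

    for _ in range(steps):
        model.step()
    end_run = timeit.default_timer()

    return (end_init_start_run - start_init), (end_run - end_init_start_run)


init_time, run_time = time_model(DummyModel, seed=42, steps=100)
```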