Continuous benchmarking #51

Status: Open
Wants to merge 28 commits into base: main

Commits (28)
97334e1
added continuous benchmarking workflow file
LDeng0205 Feb 12, 2022
0a98ccc
udpated output file path
LDeng0205 Feb 13, 2022
063c921
modifiede checkout file path
LDeng0205 Feb 14, 2022
0df54bf
modified file path to match updated result generation script
LDeng0205 Feb 14, 2022
c5211de
adjusted workflow settings
LDeng0205 Feb 14, 2022
85f7845
running all benchmarks
LDeng0205 Feb 14, 2022
adb55e5
tweaked workflow file
LDeng0205 Feb 15, 2022
7398aa4
test
LDeng0205 Feb 15, 2022
45f337b
changed iteration number
LDeng0205 Feb 16, 2022
cd2f1d4
continue on error
LDeng0205 Mar 1, 2022
b8568b1
changed action and runner to russell
LDeng0205 Mar 8, 2022
4201ce1
checking out current version of reactor-c
LDeng0205 Mar 8, 2022
c30ac24
checking out lf from lf-lang
LDeng0205 Mar 8, 2022
3d4e91e
fixed file name
LDeng0205 Mar 8, 2022
87c2811
modified to use benchmark repository
LDeng0205 Mar 11, 2022
aa71a55
testing on github machine
LDeng0205 Mar 11, 2022
9105e37
added comments
LDeng0205 Mar 11, 2022
d740d33
[benchmark] Update cb workflow.
petervdonovan Jul 9, 2022
ceab594
[benchmarks] Update workflow.
petervdonovan Jul 10, 2022
3ac8ecf
REVERT ME
petervdonovan Jul 13, 2022
4ca49de
[benchmarks] Benchmark different runtime versions.
petervdonovan Jul 13, 2022
dd98802
[benchmarks] Use size=fast.
petervdonovan Jul 14, 2022
b9f341a
Update continuous-benchmark.yml.
petervdonovan Jul 15, 2022
5f89f93
[benchmarks] Adjust threshold for warning.
petervdonovan Jul 15, 2022
3812932
[benchmarks] Rename a step; update comments.
petervdonovan Jul 15, 2022
72edc94
[benchmarks] Update to match collect_results.py.
petervdonovan Jul 20, 2022
62c2692
[benchmarks] Update ref.
petervdonovan Jul 28, 2022
6f23b39
Merge branch 'main' into continuous-benchmarking
lhstrh Feb 1, 2024
96 changes: 96 additions & 0 deletions .github/workflows/continuous-benchmark.yml
@@ -0,0 +1,96 @@
name: Continuous Benchmarking

on:
  pull_request:
  workflow_dispatch:
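  # The workflow runs on every pull request and can also be started manually
  # from the Actions tab via workflow_dispatch.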


permissions:
  contents: write
  deployments: write
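  # Write access is presumably needed by github-action-benchmark: contents for
  # auto-pushing stored results, deployments for publishing the results page.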


jobs:
  benchmark:
    name: Run C Benchmarks
    runs-on: ubuntu-latest #FIXME: change to self-hosted after russel is set up.
Review comment (Member), suggested change:
-    runs-on: ubuntu-latest #FIXME: change to self-hosted after russel is set up.
+    runs-on: Linux


    steps:
      - name: Set up Python
        uses: actions/setup-python@v2
        with:
          python-version: 3.8

      - name: Checkout benchmark repository
        uses: actions/checkout@v2
        with:
          repository: lf-lang/benchmarks-lingua-franca
          ref: automated-full-benchmark # FIXME: delete this line

      - name: Checkout Lingua Franca repository
        uses: actions/checkout@v2
        with:
          repository: lf-lang/lingua-franca
          path: lf

      - name: Prepare LF build environment
        uses: ./lf/.github/actions/prepare-build-env

      - name: Checkout current version of reactor-c
        uses: actions/checkout@v2
        with:
          path: lf/org.lflang/src/lib/c/reactor-c
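        # No repository is specified here, so this checks out the repository the
        # workflow runs in (presumably this reactor-c branch/PR) into the path
        # where lingua-franca vendors reactor-c, overriding the pinned version.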

      - name: Install Python dependencies
        run: pip3 install -r runner/requirements.txt

      - name: Build lfc
        run: |
          cd lf
          ./gradlew buildLfc
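        # buildLfc produces the Lingua Franca command-line compiler (lfc), which
        # the benchmark runner uses to compile the C benchmarks.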

      - name: Set LF_PATH and LF_BENCHMARKS_PATH environment variables
        run: |
          echo "LF_PATH=$GITHUB_WORKSPACE/lf" >> $GITHUB_ENV
          echo "LF_BENCHMARKS_PATH=$GITHUB_WORKSPACE" >> $GITHUB_ENV

      - name: Run C Benchmarks (multithreaded)
        run: |
          python3 runner/run_benchmark.py -m continue_on_error=True iterations=12 problem_size=small \
            benchmark="glob(*)" target=lf-c target.params.scheduler=GEDF_NP,NP,adaptive threads=0

      - name: Collect results
        run: python3 runner/collect_results.py continuous-benchmarking-results-multi-threaded.json
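        # collect_results.py gathers the runner's output into the JSON format
        # expected by github-action-benchmark (see the sketch after this file).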

      - name: Store Benchmark Result
        uses: benchmark-action/github-action-benchmark@v1
        with:
          name: Lingua Franca C Benchmark -- Multithreaded
          tool: customSmallerIsBetter
          output-file-path: continuous-benchmarking-results-multi-threaded.json
          github-token: ${{ secrets.GITHUB_TOKEN }}
          auto-push: true
          alert-threshold: '200%' # FIXME: After russel is set up, lower the threshold
          comment-on-alert: true
          fail-on-alert: false
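        # The action compares each entry with the previous stored run; with
        # alert-threshold '200%' it comments on the commit when a benchmark
        # becomes at least twice as slow, and fail-on-alert: false keeps the
        # job from failing in that case.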

      - name: Run C Benchmarks (unthreaded)
        run: |
          python3 runner/run_benchmark.py -m continue_on_error=True iterations=12 problem_size=small \
            benchmark="glob(*)" target=lf-c-unthreaded

      - name: Collect results
        run: python3 runner/collect_results.py continuous-benchmarking-results-single-threaded.json

      - name: Store Benchmark Result
        uses: benchmark-action/github-action-benchmark@v1
        with:
          name: Lingua Franca C Benchmark -- Single-Threaded
          tool: customSmallerIsBetter
          output-file-path: continuous-benchmarking-results-single-threaded.json
          github-token: ${{ secrets.GITHUB_TOKEN }}
          auto-push: true
          alert-threshold: '200%' # FIXME: After russel is set up, lower the threshold
          comment-on-alert: true
          fail-on-alert: false
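
For reference, the "Collect results" steps above feed github-action-benchmark's
customSmallerIsBetter tool, which reads a JSON array of entries with "name",
"unit", and "value" fields and treats smaller values as better. The sketch below
is only illustrative: the real aggregation done by runner/collect_results.py is
not part of this diff, and the benchmark names and timings are invented.

# Illustrative sketch of a collect_results.py-style script that writes the JSON
# consumed by benchmark-action/github-action-benchmark (tool: customSmallerIsBetter).
# The field names ("name", "unit", "value") are what the action expects; the
# measurements dictionary below is hypothetical example data.
import json
import statistics
import sys
from typing import Dict, List


def to_benchmark_entries(raw_results: Dict[str, List[float]]) -> List[dict]:
    """Turn per-benchmark execution times (ms) into the action's JSON entries."""
    return [
        {
            "name": benchmark,                  # label shown on the results page
            "unit": "ms",                       # free-form unit string
            "value": statistics.median(times),  # smaller is better
        }
        for benchmark, times in sorted(raw_results.items())
    ]


if __name__ == "__main__":
    # Hypothetical measurements; in the workflow these come from the benchmark runner.
    measurements = {
        "PingPong (scheduler=NP, threads=0)": [120.3, 118.9, 121.4],
        "Philosophers (scheduler=GEDF_NP, threads=0)": [310.0, 305.2, 298.7],
    }
    out_path = sys.argv[1] if len(sys.argv) > 1 else "continuous-benchmarking-results.json"
    with open(out_path, "w") as f:
        json.dump(to_benchmark_entries(measurements), f, indent=2)

The resulting file can be passed directly as output-file-path; on each run the
action appends the new values to its stored data series and compares them
against the previous run when checking the alert threshold.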