The results are not stable when using sel4bench. #37

Open
zqyzsj opened this issue Apr 16, 2023 · 2 comments

Comments

zqyzsj commented Apr 16, 2023

I am currently using sel4bench to run some benchmarks. I evaluate the performance of my benchmark in a for loop, and the results show that the first iteration is about 20 times slower than the following iterations, which are stable. The following are the APIs I used in my experiment.

seL4_BenchmarkResetThreadUtilisation(simple_get_tcb(&env.simple));
seL4_BenchmarkResetLog();
test_suits();
seL4_BenchmarkFinalizeLog();
seL4_BenchmarkGetThreadUtilisation(simple_get_tcb(&env.simple));
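In each iteration of the loop these calls wrap the workload, roughly like the sketch below (the kernel is built with utilisation tracking enabled, CONFIG_BENCHMARK_TRACK_UTILISATION; N_RUNS, results[] and the seL4_GetMR(0) read are placeholders/assumptions for illustration, not my exact code):

#define N_RUNS 10

seL4_Word results[N_RUNS];

for (int i = 0; i < N_RUNS; i++) {
    /* clear the utilisation counters and the kernel log before each run */
    seL4_BenchmarkResetThreadUtilisation(simple_get_tcb(&env.simple));
    seL4_BenchmarkResetLog();

    test_suits();    /* the workload being measured */

    seL4_BenchmarkFinalizeLog();
    /* the kernel places the utilisation figures in this thread's IPC buffer */
    seL4_BenchmarkGetThreadUtilisation(simple_get_tcb(&env.simple));
    results[i] = seL4_GetMR(0);    /* assumption: utilisation is the first message register */
}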

And the following are the evaluation results.
[screenshot of the evaluation results]

axel-h (Member) commented Apr 16, 2023

Might be due to cold caches in the first run?
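A quick way to check would be to run the workload once untimed before the measured loop, so the caches, TLB and branch predictors are already warm when the first measured iteration starts; a rough sketch, reusing the names from your snippet:

/* warm-up pass: prime instruction/data caches, TLB and branch predictors;
 * the result of this run is discarded */
test_suits();

/* ...then enter the measured loop (ResetThreadUtilisation / ResetLog,
 * test_suits(), FinalizeLog, GetThreadUtilisation) as before */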

zqyzsj (Author) commented Apr 18, 2023

@axel-h Hi axel-h, thanks for your reply. At first I also thought it was caused by cache misses or TLB misses in the first run. To check whether that is the case, I ran test_suits() once first and then ran it in a for loop, and the first loop iteration is still slower than the following ones. I will check my test_suits(), thanks.
BTW, I run my experiments on a Raspberry Pi 4B.
