
Discussion: Which stressors to use for the RT benchmarks #39

Open
carlossvg opened this issue Apr 12, 2022 · 1 comment

Comments

@carlossvg
Contributor

Follow-up from this comment: ros-realtime/ros-realtime-rpi4-image#30 (comment)
Connected to: ros-realtime/ros2_realtime_benchmarks#4

As I said, I'm interested in defining which kind of stressors we should use by default when we run the benchmarks. I see 4 different approaches:

  1. Use CPU stress only
  2. Use different stressors applied one by one in different experiments
  3. Use different stressors applied sequentially
  4. Apply all the stressors simultaneously

Option 1 is the one we typically see in simple benchmarks, but it's quite limited and doesn't reflect real scenarios. This could lead to incorrect assumptions about the expected worst-case latency.
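For reference, a minimal sketch of option 1 could look like the following, assuming stress-ng generates the CPU load and cyclictest (from rt-tests) measures scheduling latency; flags and durations are illustrative, not a settled proposal:

```shell
#!/bin/sh
# Option 1 sketch: CPU-only stress while measuring latency.
# Requires stress-ng and rt-tests installed; cyclictest needs root.

# One CPU stress worker per core (0 = all CPUs), for 5 minutes
stress-ng --cpu 0 --timeout 300s &

# Measure latency on all cores at RT priority 95 for the same window
cyclictest -m --smp -p 95 -D 300 -q

wait
```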

Option 2 is the one shown in this guide. See the snapshots below.

(Snapshots from the guide showing per-stressor latency results)

This is a good approach because it provides useful information about which kind of stress is affecting the measurements. The obvious downside is that it multiplies the number of experiments. This might be a big problem if we plan to run many different experiments, so I would restrict the number of test cases to which we apply this for regular benchmarking.
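Option 2 could be scripted as a loop that applies one stressor per run, as in this sketch (the stressor list and the `latency_*.log` output names are just examples for illustration):

```shell
#!/bin/sh
# Option 2 sketch: one stressor per experiment, separate latency log each.
# Requires stress-ng and rt-tests; cyclictest needs root.

for s in cpu matrix io vm hdd; do
    # Start this stressor only (0 = one instance per CPU)
    stress-ng --"$s" 0 --timeout 60s &

    # Record latency for this stressor into its own log
    cyclictest -m --smp -p 95 -D 60 -q > "latency_${s}.log"

    wait
done
```

This keeps the per-stressor attribution that makes option 2 valuable, at the cost of one full measurement window per stressor.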

Option 3 is similar to the approach shown in the OSADL QA farm. This can easily be implemented by using stress-ng with the --seq option. The problem is that it's not easy to identify which stressor was responsible for a specific worst case.
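A sketch of option 3, using the --seq option mentioned above (the 30-second period per stressor is an arbitrary example value):

```shell
# Option 3 sketch: run every stress-ng stressor one after another,
# 30s each, one instance per CPU (0 = all CPUs), with a summary at the end.
stress-ng --seq 0 -t 30s --metrics-brief
```

cyclictest would run in parallel over the whole sequence, which is why the worst case cannot easily be attributed to a single stressor.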

Option 4 is the same approach used with hackbench. The same could be achieved using stress-ng --all or by specifying the stressors we want to run. It has the same problem as option 3, but it could be interesting to run on a regular basis and then switch to option 2 to identify the stressor responsible for the issue.
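Both variants of option 4 can be sketched as below; the subset in the second command (and its worker counts and memory size) is a hypothetical example, not a proposed default:

```shell
# Option 4 sketch (a): every stressor simultaneously, one instance each.
# Heavy load -- may destabilize small boards such as the RPi4.
stress-ng --all 1 --timeout 300s

# Option 4 sketch (b): a chosen subset of stressors run at the same time.
stress-ng --cpu 0 --io 4 --vm 2 --vm-bytes 128M --timeout 300s
```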

Feel free to add your input here and let's discuss all these options in the next meeting.

@carlossvg
Contributor Author

cc @LanderU @razr @shuhaowu

@carlossvg carlossvg changed the title Define which stressors to use for the RT benchmarks Discussion: Which stressors to use for the RT benchmarks Apr 12, 2022