Follow-up from this comment: ros-realtime/ros-realtime-rpi4-image#30 (comment)
Connected to: ros-realtime/ros2_realtime_benchmarks#4
As I said, I'm interested in defining which kinds of stressors we should use by default when we run the benchmarks. I see four different approaches:
1. Use CPU stress only
2. Use different stressors applied one by one in different experiments
3. Use different stressors applied sequentially
4. Apply all the stressors simultaneously
Option 1 is what we typically see in simple benchmarks, but it's quite limited and doesn't reflect real scenarios. This could lead to incorrect assumptions about the expected worst-case latency.
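As a point of reference, a minimal sketch of option 1; the worker count, priority, and duration below are placeholders, not settings we've agreed on:

```sh
# Option 1: CPU stress only (hypothetical core count and duration).
# Start 4 CPU workers in the background for 60 seconds...
stress-ng --cpu 4 --timeout 60s &

# ...and measure latency while the stress runs:
# -m locks memory, -p90 sets RT priority 90, -D 60 runs for 60 s.
sudo cyclictest -m -p90 -D 60
```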
Option 2 is the one shown in this guide. This is a good approach because it tells us which kind of stress is affecting the measurements. The obvious downside is that it multiplies the number of experiments, which could be a big problem if we plan to run many different experiments, so for regular benchmarking I would restrict the set of test cases where we apply it.
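Option 2 could be scripted along these lines; the stressor list and timings are illustrative assumptions:

```sh
# Option 2: one stressor per experiment (stressor set chosen for illustration).
for stressor in cpu vm io hdd; do
    echo "=== experiment with --${stressor} ==="
    stress-ng --"${stressor}" 4 --timeout 60s &
    sudo cyclictest -m -p90 -D 60 -q   # -q: only print the final summary
    wait                               # let the stressor exit before the next run
done
```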
Option 3 is similar to the approach shown in the OSADL QA farm. It can easily be implemented with stress-ng using the --seq option. The problem is that it's not easy to identify which stressor was responsible for a specific worst case.
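For example, the following runs every available stressor one at a time (the instance count and per-stressor timeout are placeholders):

```sh
# Option 3: cycle through all stressors sequentially, 4 instances each,
# 60 s per stressor; --metrics-brief prints a summary at the end.
stress-ng --seq 4 --timeout 60s --metrics-brief
```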
Option 4 is the same approach used with hackbench. The same could be achieved using stress-ng --all, or by specifying the stressors we want to run. It has the same problem as option 3, but it could be interesting to run on a regular basis and then switch to option 2 to identify the stressor responsible for an issue.
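Two possible forms of option 4, again just a sketch; the explicit stressor mix in the second command is an assumption:

```sh
# Option 4a: every available stressor in parallel, 1 instance of each.
stress-ng --all 1 --timeout 60s

# Option 4b: an explicit mix of stressors (CPU, I/O sync, and memory).
stress-ng --cpu 4 --io 2 --vm 2 --vm-bytes 128M --timeout 60s
```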
Feel free to add your input here and let's discuss all these options in the next meeting.