We could create tests on OPEN-CUTS for every action to make it easier to see whether the installer's features are actually covered and where problems occur most frequently. The plugin index in the installer could keep track of every action that has been executed and report a run for each of those tests, with the action's config included as a comment. If an action fails, only the error message would be included as the run's log.
This should not replace the "main" test of installing a device, but rather augment it. The runs for the individual actions could be reported together with the main run after the user selects a result.
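The tracking side of this could be fairly small. A minimal sketch, assuming a hypothetical `ActionTracker` helper (the class and field names are illustrative, not the installer's real API):

```javascript
// Hypothetical tracker for executed actions; names are assumptions, not the
// installer's actual API. Each record becomes one run on the action's test:
// the config is attached as a comment, and on error only the message is kept
// as the run's "log".
class ActionTracker {
  constructor() {
    this.actions = [];
  }

  record(name, config, error = null) {
    this.actions.push({
      name,
      comment: JSON.stringify(config),
      log: error ? error.message : null,
    });
  }
}

const tracker = new ActionTracker();
tracker.record("unpack", { group: "firmware" });
tracker.record("flash", { partition: "boot" }, new Error("device not found"));
```

Keeping only the error message (rather than a full log) limits the amount of potentially sensitive data that leaves the user's machine.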
Cases:

- The installation completed and the user selected PASS: Report PASS runs for every action that ran without error. If any errors in previous actions were ignored, report FAIL runs for those actions.
- The installation completed and the user selected WONKY or FAIL: Report WONKY runs for every action that ran without error, as it is unclear where exactly the problem originated. If any errors in previous actions were ignored, report FAIL runs for those actions.
- The user reports an error: Report WONKY runs for all previous actions that did not throw an error, and a FAIL run for the action that did.
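The case analysis above boils down to a small mapping from the overall result to per-action results. A sketch, under the assumption that each tracked action carries a `failed` flag (covering both thrown and ignored errors):

```javascript
// Map the user-selected overall result onto per-action run results,
// following the cases above. Field names are assumptions for illustration.
// userResult: "PASS" | "WONKY" | "FAIL" | "ERROR"
function runsForActions(actions, userResult) {
  return actions.map((action) => {
    // An action that errored (thrown or ignored) always reports FAIL.
    if (action.failed) return { name: action.name, result: "FAIL" };
    // On a user-reported error, earlier clean actions are merely suspect.
    if (userResult === "ERROR") return { name: action.name, result: "WONKY" };
    // PASS only propagates when the whole installation passed;
    // otherwise the clean actions are reported as WONKY.
    return {
      name: action.name,
      result: userResult === "PASS" ? "PASS" : "WONKY",
    };
  });
}

const runs = runsForActions(
  [
    { name: "unpack", failed: false },
    { name: "flash", failed: true },
  ],
  "PASS"
);
// → [{ name: "unpack", result: "PASS" }, { name: "flash", result: "FAIL" }]
```

Treating ignored errors the same as thrown ones keeps the FAIL signal honest even when the user pushed past a problem.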
Note: As with all things crowdsourcing, the goal here is not to achieve the most accurate result possible, but to give the devs a big-picture view of what's going on.