All of the current grading infrastructure is based on the output from stderr/stdout. This is suboptimal for a couple of reasons. First, it means that the output must be parsed by grading scripts, which is error-prone and subject to potential mischief. Second, stderr/stdout necessarily contain all of students' logging statements. Asking students to disable all logging before submitting has historically not had a high success rate, and sufficiently verbose logging can make the output of a test run very large.
There should be a JUnit test listener which, when a global flag is set, logs results as they happen and at the end of a test run outputs them to a file in a structured format (probably JSON). This output should optionally contain a copy of stdout/stderr, which can be obtained with the TeeStdOutErr utility. I'm not sure what the default should be.
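For concreteness, here's a rough sketch of what that listener could look like, assuming JUnit 4's RunListener API. The JsonReportListener name, the grading.jsonReport system property, and the test-results.json path are all placeholders, not proposals:

```java
import org.junit.runner.Description;
import org.junit.runner.Result;
import org.junit.runner.notification.Failure;
import org.junit.runner.notification.RunListener;

import java.io.FileWriter;
import java.io.IOException;
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Illustrative only: a real implementation would use a JSON library
// (e.g. Gson) rather than hand-formatted strings.
public class JsonReportListener extends RunListener {

    // Hypothetical global flag, read from a system property.
    private static final boolean ENABLED =
            Boolean.getBoolean("grading.jsonReport");

    private final List<String> entries = new ArrayList<>();
    private final Set<Description> failed = new HashSet<>();

    @Override
    public void testFailure(Failure failure) {
        if (!ENABLED) return;
        // Fires between testStarted and testFinished for a failing test.
        // getMessage() can be null and can contain quotes; a JSON
        // library would handle the escaping this sketch skips.
        failed.add(failure.getDescription());
        entries.add(String.format(
                "{\"test\": \"%s\", \"passed\": false, \"message\": \"%s\"}",
                failure.getDescription().getDisplayName(),
                failure.getMessage()));
    }

    @Override
    public void testFinished(Description description) {
        if (!ENABLED) return;
        // Fires for every test, pass or fail, so only record passes here.
        if (!failed.contains(description)) {
            entries.add(String.format(
                    "{\"test\": \"%s\", \"passed\": true}",
                    description.getDisplayName()));
        }
    }

    @Override
    public void testRunFinished(Result result) throws IOException {
        if (!ENABLED) return;
        // Attaching captured stdout/stderr from TeeStdOutErr would
        // happen here; omitted since its API isn't shown in this issue.
        try (FileWriter out = new FileWriter("test-results.json")) {
            out.write("[\n" + String.join(",\n", entries) + "\n]");
        }
    }
}
```

Registering it would just be a `JUnitCore.addListener(new JsonReportListener())` call in whatever runner the grading scripts already use (assuming JUnit 4; JUnit 5's TestExecutionListener would look different but follow the same shape).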
Both of these configuration options should probably be accessible through run_tests.py. Students might want to use this test output themselves.
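If those options surfaced as run_tests.py flags, the invocation might look something like this (both flag names are hypothetical, not existing options):

```
./run_tests.py --json-report results.json --capture-output
```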
Ultimately, some sort of schema for the test output would be really useful. It would likely evolve over time, but at least we would have something that grading scripts (in this repo and developed by other instructors for their use cases) could reference.
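To make the schema discussion concrete, a single result entry might look something like the following. Every field name here is a suggestion rather than a settled format, the class and failure message are made up for illustration, and the stdout/stderr fields would only be present when output capture is enabled:

```json
{
  "testClass": "GraphTest",
  "testName": "testShortestPath",
  "passed": false,
  "message": "expected:<4> but was:<5>",
  "timeMillis": 12,
  "stdout": "",
  "stderr": ""
}
```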