Answer Test based on validator function #1243

Closed
georgekinnear opened this issue Aug 7, 2024 · 4 comments

Comments

@georgekinnear
Collaborator

I think it could be useful to have a new answer test, something like ATValidator, that takes a validator function (https://docs.stack-assessment.org/en/CAS/Validator/) and an expression, and returns 0/1 according to whether the expression is invalid/valid. Also, the non-quiet version of the answer test would append the validator output to the feedback message.

One example might be a validator function that checks for a "+C" in the student answer (perhaps relevant to #1229). In some contexts, the teacher might want to use that validator function for validation exactly as currently described in the docs, so that students are not allowed to submit an answer that is missing a "+C". In other contexts, the teacher might want to check for the "+C" only when grading the answer. ATValidator would allow them to simply reuse the validator function and (if they wish) the feedback message that it provides.
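
For concreteness, such a "+C" validator might look something like the following sketch (purely illustrative; the name validate_plusC and the exact check are assumptions, not code from the docs or from #1229):

validate_plusC(ex) :=
  /* Valid (empty string) if the constant of integration c appears among the
     variables of ex; otherwise return a feedback string. */
  if member(c, listofvars(ex))
    then ""
    else "Your answer appears to be missing the constant of integration, +c.";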

I think this would enable efficient reuse of validator functions in PRTs (and perhaps encourage sharing of these in the community!). It would already be possible to achieve the sort of thing I'm suggesting here with a little bit of work in the PRT, but having it as a built-in answer test would make it simpler to write PRTs that build on existing validator functions.

One point that I wasn't sure about is whether it would make sense to pass the validator function to ATValidator using the "teacher answer" box when setting up a PRT node. Since the test would only have two arguments (the validator function, and the expression to be validated) it seems reasonable to me that the teacher/student answers would be used for these (unless the teacher answer could be left blank, and the validator function is passed through the options field).

@sangwinc
Member

sangwinc commented Aug 7, 2024

@georgekinnear thanks for a very interesting idea which potentially makes it easy for people to work within the PRT question model, and build flexible answer tests for edge cases.

Now to the specifics. The traditional answer test returns three things, as far as a user is concerned:

  1. true/false (which is the outcome of the test)
  2. A string, which may be empty, that is appended to the branch feedback
  3. A note for statistical purposes.

This is documented here: https://docs.stack-assessment.org/en/Authoring/Answer_Tests/

Rather than "reuse" a bespoke validator function, I'd suggest having an answer test which returns a list of 4 things, consistent with the existing answer tests. Then, you pass the name of that function to the STACK (PHP) wrapper as the optional argument. I appreciate that's more complex for an author to write than a validator, but they can easily enough ignore the errors and notes. With a validator alone, I can't see a way to have a true/false result from the call together with separate feedback: the validator uses a non-empty string to signal an error, so the empty string is the signal for "valid", where feedback is never needed. With an answer test we might want to return "true" and still have feedback.

I think you need to flesh out the design. Adding in "ATUser" to STACK, with a function as an optional argument to the test, would be easy enough.

@LukeLongworth
Contributor

Kia ora @sangwinc,

Would the following (pseudocode) solution work? George and I had a chat and whipped up a basic prototype.

ATValidator(sa, ta, [opt]) := block(
  [validation, errors, result, feedback, note],
  validation: ta(sa),
  errors: "",    /* I don't know how errors are handled normally */
  result: is(validation = ""),
  feedback: "",
  if not prt_quiet then feedback: sconcat(feedback, validation),  /* prt_quiet stands for "PRT node set to Quiet"; I don't know how feedback is appended normally */
  note: "",      /* I don't know how the note is handled normally */
  return([errors, result, feedback, note])
);

In plain language:

  • sa is the student's answer (as usual)
  • ta is the validation function and must follow the normal validator rules (take a single input, return a string or boolean)
  • No optional inputs used
  • It would run the validation function on the student's answer. If this returns "", then the result is True and no feedback is given. If this returns a non-empty string, then the result is False and the validator output is appended to any existing feedback.
  • If the node is set to Quiet, then don't append validation to the feedback.
  • The above code just ignores the possibility of boolean outputs from the validator, but I assume that would be easy to catch (see the sketch below).
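
One way those boolean outputs could be caught (an illustrative sketch only; coerce_validation is not an existing STACK function):

coerce_validation(v) := block(
  /* Map true -> "" (valid) and false -> a generic message; pass strings through unchanged. */
  if is(v = true) then ""
  elseif is(v = false) then "Your answer did not pass the validation check."
  else v
);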

This is easily recreated using existing functionality, as we did in the attached question export:
questions-Maths and Stats Sandbox-Validation as AT-20240808-1333.zip

@sangwinc
Member

sangwinc commented Aug 8, 2024

Thanks @LukeLongworth, @georgekinnear

I appreciate this discussion started with "validators" in mind, but the model for answer tests is a little different. Therefore calling it ATValidator is probably a misnomer. I propose the user-level name (in the STACK authoring interface) would be ATUser (rather than ATValidator).

ATUser(sans, tans, ATfun)

The option to the STACK answer test ATUser would be a user-defined Maxima function ATfun, much as you suggest above. Details tbc, but the docs could provide a Maxima template for this function. If you want to re-use your validator function it could be embedded within the answer test wrapper.
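
To illustrate, such a template might look roughly like this (a sketch only, under the assumption that ATfun receives the student's and teacher's answers; ATfun_from_validator and validate_plusC are illustrative names, and the exact conventions are still to be confirmed):

ATfun_from_validator(sans, tans) := block([v],
  /* Run an existing validator on the student's answer and repackage the outcome
     in the four-element answer-test format [errors, result, feedback, note]. */
  v: validate_plusC(sans),
  if is(v = "")
    then ["", true,  "", "ATUser_valid"]
    else ["", false, v,  "ATUser_invalid"]
);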

Do you have a compelling use-case which (i) would justify all the work in PHP, and (ii) can act as an example in the docs and provide test-cases?

@georgekinnear
Collaborator Author

Thanks Chris, it looks like your commit does exactly what I had in mind!

As well as the "+C" example I mentioned in the original post, I think another compelling use-case is in example-generation questions, e.g. "give an example of a polynomial that has a root when x=1". I often write multi-part tasks with different properties or constraints in each part, but there are usually properties in common across the parts - like being a polynomial. In those cases, it would make sense to write a validate_polynomial validator function, to reuse across the different PRTs (including for the feedback messages when the property fails).
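
For instance, a shared validate_polynomial function could be as simple as the following sketch (illustrative only, not taken from an existing question; it uses Maxima's polynomialp):

validate_polynomial(ex) :=
  /* Valid (empty string) if ex is a polynomial in x; otherwise return a feedback string. */
  if polynomialp(ex, [x])
    then ""
    else "Your answer should be a polynomial in x.";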

@sangwinc added this to the 4.8.0 milestone Nov 7, 2024
@sangwinc closed this as completed Nov 7, 2024