Our current approach relies too heavily on manual authoring of common tests. We need to leverage reflection and comments.
Introduce "exercisers", TP.test.Exerciser types which focus on exercising a particular set of objects using automated logic.
An exerciser should take in a reflection filter that lets it find the list of target objects (if not provided directly).
The exerciser then runs its exercise* methods (in random order) against that list.
Each exercise* method can be coded to reflect on the target object to get information such as its input contract (from doc comments, if it's a function), its "default response", etc., so it can automate execution and evaluation of the function in question.
Objects being exercised should be queried via reflection (including new APIs injected by the exerciser(s) as needed) such that they can individually respond to I/O, mocking hooks, etc. during the exercise process.
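To make the shape concrete, here is a minimal sketch of that core loop in plain JavaScript. TP.test.Exerciser does not exist yet (it is the proposal), so every name here (Exerciser, resolveTargets, getExerciseNames, run) is hypothetical:

```js
class Exerciser {
    constructor(options = {}) {
        // An explicit target list takes precedence; otherwise a reflection
        // filter predicate is used to locate targets in a candidate pool.
        this.targets = options.targets || null;
        this.filter = options.filter || null;
    }

    // Resolve the target list: explicit targets if provided, otherwise
    // scan the pool with the filter (or take the whole pool).
    resolveTargets(pool) {
        if (this.targets) {
            return this.targets;
        }
        return this.filter ? pool.filter(this.filter) : pool;
    }

    // Collect exercise* method names from the instance's immediate
    // prototype (kept shallow for brevity) and shuffle them with
    // Fisher-Yates so each run uses a random order.
    getExerciseNames() {
        const names = Object.getOwnPropertyNames(
            Object.getPrototypeOf(this)).filter(
                (name) => /^exercise[A-Z]/.test(name) &&
                    typeof this[name] === 'function');
        for (let i = names.length - 1; i > 0; i--) {
            const j = Math.floor(Math.random() * (i + 1));
            [names[i], names[j]] = [names[j], names[i]];
        }
        return names;
    }

    // Run every exercise* method, in random order, against every target.
    run(pool) {
        const targets = this.resolveTargets(pool);
        for (const name of this.getExerciseNames()) {
            for (const target of targets) {
                this[name](target);
            }
        }
    }
}
```

Randomizing the method order also helps surface hidden ordering dependencies between exercise* methods.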
Examples:
"is" tests (find all TP.is* methods... ask them their default response... then pass them one of everything in the system)
"as" tests (find all "TP.as* methods.... follow pattern above...
expansion tests... find all tag types and run their expand process using provided input and output xhtml strings.
binding tests... find all tag types, inject them with one or more binds to standard data sets, verify results
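To illustrate the first of those examples, here is a hedged sketch of an "is" exerciser built on the Exerciser class above. The host argument (standing in for TP), the oneOfEverything fixture, and the per-function defaultResponse hook are all illustrative assumptions, not existing TIBET APIs:

```js
// A fixture standing in for "one of everything in the system".
const oneOfEverything = [
    null, undefined, 0, 1, NaN, '', 'str', true, [], {}, new Date()
];

class IsExerciser extends Exerciser {
    exerciseIs(host) {
        // Reflect on the host to find every is* test method.
        const isNames = Object.keys(host).filter(
            (name) => /^is[A-Z]/.test(name) &&
                typeof host[name] === 'function');

        for (const name of isNames) {
            const fn = host[name];

            // Ask the method for its default response (an assumed
            // reflection hook) and confirm it holds with no argument.
            const fallback = fn.defaultResponse !== undefined ?
                fn.defaultResponse : false;
            if (fn() !== fallback) {
                console.error(`${name}: unexpected default response`);
            }

            // Pass one of everything; an is* test should always answer
            // with a boolean and never throw.
            for (const value of oneOfEverything) {
                const result = fn(value);
                if (typeof result !== 'boolean') {
                    console.error(`${name}: non-boolean result for`, value);
                }
            }
        }
    }
}
```

With explicit targets, something like new IsExerciser({targets: [TP]}).run() would exercise the host directly, without any pool scan.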
Labels added by idearat on Dec 12, 2021: Authoring (authoring enhancements / simplifications), TEST (author tests and/or manually test), LIMIT (not quite a bug... but frustrating/limiting).