This came up in discussing PR #2838, so opening an issue here for longer discussion/evaluation.
We currently "hand validate" example python code used in introduction.md, about.md, and instruction.md and similar documents. This leads to errors where certain code will not work in the REPL, or syntax or other errors get made and published. As we scale up exercises, this doesn't feel like a sustainable solution, hence this issue to propose, evaluate, and track possible tools and strategies for verifying code , and (possibly) adding that verification to the track CI.
Below are three applicable libraries, but I'd warmly welcome more. Of the three below, `phmdoctest` feels like the nicest solution, and I've run the `comparisons` concept exercise through it with reasonable results. But I'd like to see if there are other strategies/libraries out there.
- `doctest` - this is the old-school original, but doesn't really work well in markdown fences.
- `mkcodes` - have not tried this yet.
- `phmdoctest` - reasonably good, but requires some weird quirks with code fence language names and/or extra `>>>` in code fences to make parsing work.
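For reference, driving the "old-school" stdlib option would look roughly like this minimal sketch (the filename is just an example):

```python
import doctest

# Run every ">>> " example found in the markdown file as a doctest.
# doctest knows nothing about fenced code blocks, so a closing fence line
# that directly follows an example tends to get read as expected output,
# which is the main reason it doesn't play well with markdown fences.
results = doctest.testfile("introduction.md", module_relative=False, verbose=True)
print(f"{results.attempted} examples attempted, {results.failed} failed")
```

`phmdoctest`, by contrast, generates a pytest file from the markdown (roughly `phmdoctest introduction.md --outfile test_introduction.py`, then `pytest`), which is what makes it look like a reasonable fit for the track CI.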
If you are requesting support, we will be along shortly to help (generally within 72 hours, often more quickly).
Found a problem with tests, exercises or something else? 🎉
◦ We'll take a look as soon as we can & identify what work is needed to fix it (generally within 72 hours).
◦ If you'd also like to make a PR to fix the issue, please have a quick look at the Pull Requests doc. We 💙 PRs that follow our Exercism & Track contributing guidelines!
Here because of an obvious (and small set of) spelling, grammar, or punctuation issues with one exercise, concept, or Python document? 🌟 Please feel free to submit a PR, linking to this issue. 🎉
‼️ Please Do Not ‼️
❗ Run checks on the whole repo & submit a bunch of PRs.
This creates longer review cycles & exhausts reviewers' energy & time.
It may also conflict with ongoing changes from other contributors.
❗ Insert only blank lines, make a closing bracket drop to the next line, change a word to a synonym without obvious reason, or add trailing whitespace that's not a final EOL at the very end of text files.
❗ Introduce arbitrary changes "just to change things".
...These sorts of things are not considered helpful, and will likely be closed by reviewers.
For anything complicated or ambiguous, let's discuss things -- we will likely welcome a PR from you.
Here to suggest a feature or new exercise? Hooray! Please keep in mind Chesterton's Fence. Thoughtful suggestions will likely result in faster & more enthusiastic responses from maintainers.
💛 💙 While you are here... If you decide to help out with other open issues, you have our gratitude 🙌 🙌🏽.
Anything tagged with [help wanted] and without [Claimed] is up for grabs.
Comment on the issue and we will reserve it for you. 🌈 ✨