run the interpreter tests when no generator tests are available #1094
base: maintenance/mps20223
Conversation
Some questions first:
- Why only catch the exception without at least logging it here? http://127.0.0.1:63320/node?ref=r%3A7961970e-5737-42e2-b144-9bef3ad8d077%28org.iets3.core.expr.tests.behavior%29%2F327386200772325207
- Would you mind explaining why loading the "generator-related" class via the ReloadableModule helps in this case?
I try to load the generator test class to check whether it is available (i.e. generated). One use case of this new approach that I forgot to mention: when you want to execute tests for concepts that don't have a Java generator, you can now still execute them as part of the generator tests. Missing generated classes were not treated as an error in the past, either.
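To illustrate the idea, here is a minimal sketch of the described fallback. All class and method names are hypothetical (the actual implementation goes through MPS's ReloadableModule): try to load the generated test class; if it is absent because no generator ran, fall back to the interpreter tests instead of reporting a broken test.

```java
// Hypothetical sketch of the fallback: check whether the generated test
// class exists on the classpath and choose the test flavor accordingly.
public final class TestSelector {

    /** Returns true if a class with the given name can be loaded. */
    static boolean isGenerated(String generatedTestClassName) {
        try {
            Class.forName(generatedTestClassName);
            return true;
        } catch (ClassNotFoundException e) {
            // Not an error: the concept may simply have no Java generator.
            return false;
        }
    }

    public static void main(String[] args) {
        // "org.example.GeneratedExprTest" is a made-up class name.
        String name = "org.example.GeneratedExprTest";
        System.out.println(isGenerated(name)
                ? "run generator tests"
                : "run interpreter tests");
    }
}
```

Whether swallowing the `ClassNotFoundException` silently (as above) or logging it is the right call is exactly the open question from the review comment.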
However, I think the previous behaviour in that case was: broken test generation due to missing generators for concepts that do not have one. In my humble opinion this was a "poor man's" indicator that helped us figure out that generators were missing. I think a better and cleaner solution would be to have both things:
One possibility to get 1) would be to implement assessments that figure out whether we are missing generators for concepts, and fail the build (CI) if we do. Having 1) in place would enable 2). Not having any indicator for missing generators on newly created concepts feels risky.
On the CI we didn't have an indicator in the past, so there is no difference. Some languages or concepts will always only be interpreted, I think. You also can't differentiate between not having a generator and forgetting to add the devkit. I'm not sure you can really create an assessment for that. What would you check for?
I looked into creating an assessment, and it is not trivial at all.
Can we merge this PR and create a separate ticket for the feature request? I noticed that in 2024.1 if I try to run the tests in
It is so annoying to have red (not failing) tests because you are trying to execute generator tests but have not added the corresponding devkit to the model. Let's execute the interpreter tests instead in those cases. You can still tell from the test name which tests you are executing (generator tests have the word "generate" in their name).