run the interpreter tests when no generator tests are available #1094

Open
alexanderpann wants to merge 2 commits into maintenance/mps20223

Conversation

alexanderpann (Member)

It is so annoying to have red (but not actually failing) tests because you are trying to execute generator tests but have not added the corresponding devkit to the model. Let's execute the interpreter tests instead in those cases. You can still see, based on the name of the test, which tests you are executing (generator tests have the word "generate" in their name).

arimer (Member) left a comment

Some questions first:

alexanderpann (Member Author)

I try to load the generator test to check whether it is available (i.e. generated). One use case of this new approach that I forgot to mention: when you want to execute tests where some concepts don't have a Java generator, you can now still execute them as part of the generator tests. The missing generated classes were also not an error in the past.
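
For illustration, here is a minimal sketch of that fallback logic, assuming a plain classpath lookup; the class and test names are hypothetical and this is not the code from the PR:

    // Minimal sketch of the fallback idea (class and test names are hypothetical,
    // not the actual PR code): try to load the generated test class and fall back
    // to the interpreter test when it is missing from the classpath.
    public class GeneratorTestFallbackSketch {

        static boolean isGenerated(String fqClassName) {
            try {
                // The generated class only exists if the Java generator ran for this model.
                Class.forName(fqClassName);
                return true;
            } catch (ClassNotFoundException e) {
                return false;
            }
        }

        public static void main(String[] args) {
            String generatedTest = "test.in.expr.os.generated.AlgebraicTest"; // hypothetical name
            if (isGenerated(generatedTest)) {
                System.out.println("Running generator test: " + generatedTest);
            } else {
                System.out.println("Generated class missing, running interpreter test instead");
            }
        }
    }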

arimer (Member) commented Oct 30, 2024

"One use case of this new approach that I forgot to mention: when you want to execute tests where some concepts don't have a Java generator, you can now still execute them as part of the generator tests."

However, I think the previous behaviour in that case was broken test generation due to missing generators for concepts that do not have one.

In my humble opinion, this was a "poor man's" indicator that helped us figure out that generators were missing.

I think a better and cleaner solution would be to have both things:

  1. an indicator that helps us figure out when generators are missing for new concepts
  2. a proper way to run tests without running into problems due to missing generators (your contribution here)

One possibility for 1) would be to implement assessments that figure out whether we are missing generators for concepts and fail the CI build if we do. Having 1) in place would enable 2).

Not having any indicator that shows missing generators for newly created concepts feels risky.

alexanderpann (Member Author) commented Oct 30, 2024

On the CI we didn't have an indicator in the past either, so there is no difference. Some languages or concepts will always only be interpreted, I think. You also can't differentiate between not having a generator and forgetting to add the devkit. I'm not sure you can really create an assessment for that. What would you check for?

alexanderpann (Member Author)

I looked into creating an assessment, and this is not trivial at all.

alexanderpann (Member Author)

Can we merge this PR and create a separate ticket for the feature request? I noticed that in 2024.1, if I try to run the tests in test.in.expr.os, it just stops all tests because, for example, there are no generated algebraic tests, so no tests are executed at all.
