jest-junit now reports when suites fail to run #47
Conversation
Tests broke locally because jest was not pinned to a version.
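For readers unfamiliar with the fix: "pinning" means declaring an exact jest version in package.json rather than a semver range, so local and CI installs resolve the same release. A minimal sketch; the version number here is illustrative, not necessarily the one this PR used:

```json
{
  "devDependencies": {
    "jest": "21.2.1"
  }
}
```

With a caret range like `"^21.0.0"`, a fresh install can pull a newer minor release and break tests that previously passed.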
@phawxby this looks great! I LOVE that you added the xsd validation! Since this may introduce some breaking changes, I want to take some additional time to review it, so I'll take a more in-depth look tomorrow.
Cool. I mostly added it because I couldn't tell if the object I was producing would actually convert to XML the way I wanted. Then I realised it wasn't actually doing that check anywhere, so it seemed safe and sensible to expand all the tests slightly.
@phawxby so at this point I'm not comfortable merging this PR, but there are a lot of aspects of it that I would love in a separate PR, because you've done a lot of great work! The main thing I'm not comfortable merging is the core logic around creating a fake test case and renaming a test suite to a filename in the case of uncaught errors. It's a novel approach, and I agree that right now it's the only way to achieve this functionality.

But many systems, like TeamCity and the reporting systems we have at Spotify, will get really confused if these tests run on CI. Say I have a test suite named "foo" with one test case named "bar": in many systems I'm able to see the history of those test runs over time thanks to the junit.xml output. But if there is suddenly an uncaught error, that same test suite/case won't be reported; it will appear under an entirely different name. Because of that, I don't think it's correct to merge this, and I would instead encourage us both to contribute fixes upstream to Jest itself so it does a better job of passing error information to testResultProcessors and we can keep junit.xml files consistent.

If you want, I am happy to start a GitHub review and specify the pieces of this PR I'd like removed, or you can send another PR. I absolutely love the xsd checking and the changes you've made to clean up the unit/integration tests of jest-junit itself! Excellent work!
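To make that concern concrete, here is a hypothetical pair of junit.xml fragments (suite, case, and file names are all invented for illustration):

```xml
<!-- Normal run: tracking systems key history on suite/case names -->
<testsuite name="foo" tests="1" failures="0">
  <testcase classname="foo" name="bar"/>
</testsuite>

<!-- The same file after an uncaught error, under this PR's approach:
     the failure is reported under the filename, so the history for
     "foo > bar" silently goes dark instead of showing a failure -->
<testsuite name="src/foo.test.js" tests="1" failures="1">
  <testcase classname="src/foo.test.js" name="Test suite failed to run">
    <failure>ReferenceError: someUndefinedVariable is not defined</failure>
  </testcase>
</testsuite>
```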
@phawxby I've created a detailed issue on the jest repo which highlights the problem we're trying to solve. Please follow the progress there. Thanks again.
That makes sense. So maybe for now I should strip this back and give you the xsd schema validation and the other little fixes as a separate PR; then, when jest updates their status codes, we can revisit this.
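For context on the piece being split out: the idea is to validate the generated report against the JUnit XML schema inside jest-junit's own tests. A minimal sketch, assuming the libxmljs package and a local copy of the schema saved as junit.xsd (both assumptions; the actual PR may be structured differently):

```javascript
const fs = require('fs');
const libxmljs = require('libxmljs');

// Parse the JUnit schema and the report produced by jest-junit.
// File paths are illustrative placeholders.
const xsd = libxmljs.parseXml(fs.readFileSync('junit.xsd', 'utf8'));
const report = libxmljs.parseXml(fs.readFileSync('junit.xml', 'utf8'));

// validate() returns true when the report conforms to the schema;
// otherwise validationErrors lists the offending elements/attributes.
if (!report.validate(xsd)) {
  throw new Error(report.validationErrors.join('\n'));
}
```

This kind of check catches exactly the problem described above: an object that serializes to XML, but not to XML that downstream junit.xml consumers will accept.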
@phawxby that would be excellent and I would be very grateful. Thanks again!
Closing this PR for now. Feel free to open another one that contains the xsd validation. |
I'm still facing the same issue.
But the junit xml says 0 failures.
Am I missing something?
Hey @shahp00ja! Thanks for reporting. You're correct that jest-junit doesn't report on suites that completely fail to run. The reason is that when a test suite fails to even execute, jest does not pass enough information to reporters for me to uniquely identify what failed to run. Using your suite as an example: if jest-junit were to report on this failure, we wouldn't know the name of any of the "describe", "it", or "test" blocks, and if we reported a failure by inventing some kind of identifier, it would be very difficult for you to reason about what failed. So I've chosen not to report on test suites where I can't uniquely identify them. jest-junit will still exit 1 if tests fail in any way (including your use case).
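The gap is visible in what jest hands to a testResultsProcessor. A rough sketch, assuming the per-file result exposes testFilePath and a testExecError when the suite fails to execute (field names can vary across jest versions):

```javascript
// Hypothetical testResultsProcessor showing why the failure can't be
// reported under the original suite/case names.
module.exports = (results) => {
  results.testResults.forEach((fileResult) => {
    if (fileResult.testExecError) {
      // All we know is which file failed and why; the per-file
      // testResults array is empty, so no describe/it/test names
      // exist to key a junit <testcase> the way passing runs are keyed.
      console.error(fileResult.testFilePath, fileResult.testExecError.message);
    }
  });
  return results;
};
```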
I talk with the jest core team about this regularly and it's always on my mind. But right now, if we were to build something with the current state of things, it would cause more confusion than it would solve. Happy to talk about this more if you want.
Any updates on this issue? I'm hitting the same problem: suites that fail to run are not counted as failed, which affects the automated tests in our Azure CI pipeline.
The thread makes it abundantly clear this is not a fixable issue.
Any potential workarounds at least? I'm using this in a GitLab pipeline and would love one, even if it's not 💯%.
There's no workaround, because if jest can't even parse the file, what do I report on to tell you which test(s) failed? That's what I've described in this thread multiple times. I'm open to new ideas around this, but I've thought about it for years and it's a tough problem. If you can think of a way to report on tests that jest can't even parse, while respecting the jest-junit configurations people have selected, please open a PR.
Fair enough.
@imolorhe @xueyongg @shahp00ja the issue has been addressed in the latest release.
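The release itself isn't quoted in this thread, so the exact output shape is an assumption, but one plausible way to report an unrunnable suite while keeping identifiers stable is to key a synthetic test case on the file path:

```xml
<!-- Hypothetical sketch; the real output of the release may differ. -->
<testsuite name="src/broken.test.js" tests="1" errors="1">
  <testcase classname="src/broken.test.js" name="Test suite failed to run">
    <error>SyntaxError: Unexpected token in src/broken.test.js</error>
  </testcase>
</testsuite>
```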
Fixes #46