
[mutmut3] Forced fail does not fail. #337

Open
awgymer opened this issue Oct 21, 2024 · 11 comments

Comments

awgymer commented Oct 21, 2024

Tried updating to mutmut 3.0.5, but the forced-fail test seems to pass and thus (ironically) fail.

running forced fail test
========================================================================================== test session starts ===========================================================================================

...

========================================================================================== 1440 passed in 8.16s ==========================================================================================


FAILED
boxed (Owner) commented Oct 21, 2024

Could you supply the project so I can test?

awgymer (Author) commented Oct 21, 2024

I cannot, as this is not an open-source library; it's a proprietary one for work.

I looked at the code for run_forced_fail, though, and I couldn't figure out why it is assumed to fail. AFAICT it runs pytest as normal and just doesn't supply any test args?

boxed (Owner) commented Oct 21, 2024

It sets the environment variable that makes all trampolines fail immediately. Think of it like turning on an exception being thrown in all functions in your code base.
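A minimal sketch of the idea boxed describes (the function names and the exact dispatch protocol here are illustrative assumptions, not mutmut's actual generated code): every function in the mutated copy of the code base is wrapped in a trampoline that inspects an environment variable before deciding what to run, and forced-fail mode makes every trampoline raise.

```python
import os

def _original_add(a, b):
    # The user's real function body, preserved unchanged.
    return a + b

def add(a, b):
    # Hypothetical trampoline: dispatch on MUTANT_UNDER_TEST before
    # calling through to the original code.
    mutant = os.environ.get('MUTANT_UNDER_TEST', '')
    if mutant == 'fail':
        # Forced-fail mode: every trampoline raises, so any test that
        # actually executes instrumented code must fail. If the suite
        # still passes, the tests never ran the mutated copy at all.
        raise RuntimeError('forced fail')
    return _original_add(a, b)
```

So a fully passing suite under forced fail (as in the log above) suggests the tests are not importing the instrumented code in the mutants directory.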

awgymer (Author) commented Oct 22, 2024

I tried again with the latest version of my codebase.

Now I am getting an error on the stats collection:

mutmut run
generating mutants
    done in 77ms
⠏ running stats
    done
failed to collect stats, no active tests found

I see a mutants folder and inside that I see htmlcov and tests (which appears to contain a copy of my tests dir).

My pkg structure is:

pkg-name
    - pkg_name/<source code here>
    - tests/<test code and resources here>
    - pyproject.toml

boxed (Owner) commented Oct 22, 2024

Did you try with 3.1.0? I just released it with some fixes for stuff found by other users.

awgymer (Author) commented Oct 22, 2024

Yes, it ran on 3.1.0.

awgymer (Author) commented Oct 22, 2024

OK. It turns out I had left an empty src dir from some previous testing, so I solved that issue.

However during stats I get:

FAILED tests/test_main.py::test_rich_force_colours[env_var2-True] - KeyError: 'MUTANT_UNDER_TEST'

Which I think might be due to the test itself mocking os.environ?
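A sketch of how that KeyError can arise (the variable being patched and the setup here are assumptions based on the failing test's name): replacing os.environ wholesale with `mock.patch.dict(..., clear=True)` wipes out any variable mutmut set for the worker process, so a later lookup of `MUTANT_UNDER_TEST` raises.

```python
import os
from unittest import mock

# Stand-in for the variable mutmut's runner sets in the worker process.
os.environ['MUTANT_UNDER_TEST'] = 'some_mutant'

# A test that clears the environment to control colour detection:
with mock.patch.dict(os.environ, {'FORCE_COLOR': '1'}, clear=True):
    # Inside the patch, only FORCE_COLOR exists, so mutmut's variable
    # is gone and reading it raises KeyError.
    assert 'MUTANT_UNDER_TEST' not in os.environ

# patch.dict restores the original environment on exit.
assert os.environ['MUTANT_UNDER_TEST'] == 'some_mutant'
```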

boxed (Owner) commented Oct 22, 2024

Ah, nice that you solved it.

> mocking os.environ

That's an interesting situation, yeah. Hmm... actually, I don't think I need to use os.environ at all, now that I think of it. I think that's a leftover from some earlier idea.

boxed (Owner) commented Oct 27, 2024

I realized I do need to use os.environ for multiprocessing support later.

Could you make your mock at least fall back to reading the real environ if there's a KeyError?
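One way to get that fallback behaviour, assuming the test uses `unittest.mock.patch.dict` (the variable names are illustrative): omitting `clear=True` overlays the patched keys on the real environment instead of replacing it, so anything mutmut sets stays readable.

```python
import os
from unittest import mock

# Stand-in for the variable mutmut's runner sets in the worker process.
os.environ['MUTANT_UNDER_TEST'] = 'some_mutant'

# Without clear=True, patch.dict only adds/overrides the given keys;
# the rest of the real environment falls through untouched.
with mock.patch.dict(os.environ, {'FORCE_COLOR': '1'}):
    assert os.environ['FORCE_COLOR'] == '1'                   # patched value
    assert os.environ['MUTANT_UNDER_TEST'] == 'some_mutant'   # real env intact
```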

awgymer (Author) commented Oct 27, 2024

I'm not sure how that would work. Would a patch, rather than a mock, fall back to the real object except for the value I am patching?

I can probably avoid mocking/patching it anyway. The method under test simply checks an env var and returns a bool based on the value. I can probably just set os.environ directly and call it good.
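A sketch of that approach (the function under test and the `FORCE_COLOR` variable are assumptions inferred from the failing test's name, not awgymer's actual code): set the variable directly and restore it afterwards, leaving the rest of the environment, including anything mutmut set, untouched.

```python
import os

# Hypothetical version of the method under test: reads an env var and
# returns a bool based on its value.
def rich_force_colours() -> bool:
    return os.environ.get('FORCE_COLOR', '') == '1'

def test_rich_force_colours():
    # Set the variable directly instead of mocking os.environ, then
    # restore the previous value so other tests are unaffected.
    old = os.environ.get('FORCE_COLOR')
    os.environ['FORCE_COLOR'] = '1'
    try:
        assert rich_force_colours()
    finally:
        if old is None:
            os.environ.pop('FORCE_COLOR', None)
        else:
            os.environ['FORCE_COLOR'] = old
```

Under pytest, the `monkeypatch` fixture's `setenv` does the same set-and-restore dance automatically.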

boxed (Owner) commented Oct 28, 2024

You don't have to use Mock to do mocking :)

But yeah, just setting it directly seems more sane.
