Restructure acceptance tests #908

Open · wants to merge 1 commit into master
Conversation

@ekohl (Member) commented Dec 5, 2020

This restructures the tests into focused files. By defining an order, there is less need to purge Foreman on every test, which greatly speeds up execution.
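
As a rough illustration of what one such focused file could look like (a minimal sketch assuming a beaker-rspec style harness where apply_manifest and the serverspec matchers are available; the file name, manifest, and checks are made up and not the code in this PR):

    # spec/acceptance/foreman_basic_spec.rb (hypothetical)
    require 'spec_helper_acceptance'

    describe 'foreman: base install' do
      it 'applies idempotently' do
        pp = 'include foreman'
        # Install once; later, ordered files build on this state instead of
        # purging and reinstalling Foreman between tests.
        apply_manifest(pp, catch_failures: true)
        apply_manifest(pp, catch_changes: true)
      end

      describe service('foreman') do
        it { is_expected.to be_enabled }
        it { is_expected.to be_running }
      end
    end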

@ekohl (Member, Author) commented Jul 9, 2021

@ehelms @wbclark I still think this is a good idea but before I invest time in redoing this, please have a look. Do you agree with the general direction?

@theforeman-bot (Member)

@ekohl, this pull request is currently not mergeable. Please rebase against the master branch and push again.

If you have a remote called 'upstream' that points to this repository, you can do this by running:

    $ git pull --rebase upstream master

This message was auto-generated by Foreman's prprocessor

@wbclark (Contributor) commented Jul 12, 2021

@ekohl I'm definitely in favor of tests that are more organized, more readable (and produce more readable output), and run faster.

What isn't clear to me in trying to review this is, in what cases / for what reasons was cleanup before tests necessary in the current design? And how does moving to lots of context blocks avoid that situation?

@ekohl (Member, Author) commented Jul 12, 2021

> What isn't clear to me in trying to review this is, in what cases / for what reasons was cleanup before tests necessary in the current design? And how does moving to lots of context blocks avoid that situation?

Today we are very careful to produce a fresh environment between tests, but this makes the tests a lot slower (removing and reinstalling a few times is obviously slower than installing once and making a few small additions).

You can argue it gives less coverage. Maybe this helps. Imagine a path:

  • Empty server
  • Install Foreman without any plugins
  • Migrate DB
  • Run server
  • Test

This is actually different from:

  • Empty server
  • Install Foreman
  • Install a bunch of plugins
  • Migrate DB
  • Run server
  • Test

And yet again different from:

  • Empty server
  • Install Foreman without any plugins
  • Migrate DB
  • Run server
  • Test
  • Install a bunch of plugins
  • Migrate DB
  • Run server
  • Test

So that's how I view it: there are several ways of going through these paths, and the question is which of them we want to cover. Covering everything is best in terms of assurances, but also the most costly. If we don't find anything with the extra coverage, or don't analyze the results, we might as well go for the quick option.
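
As a minimal sketch of how the last path above could be expressed with ordered context blocks (the plugin class and manifests are hypothetical, again assuming a beaker-rspec style harness), the second context simply builds on whatever the first one left behind:

    describe 'foreman with a plugin added later' do
      context 'base install' do
        it 'applies idempotently' do
          apply_manifest('include foreman', catch_failures: true)
          apply_manifest('include foreman', catch_changes: true)
        end
      end

      context 'with a plugin added on top' do
        # No purge: this runs against the state left by the previous context,
        # so only the plugin install and its DB migration are exercised.
        it 'applies idempotently' do
          pp = "include foreman\ninclude foreman::plugin::tasks"
          apply_manifest(pp, catch_failures: true)
          apply_manifest(pp, catch_changes: true)
        end
      end
    end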

Does that help?

@wbclark (Contributor) commented Jul 12, 2021

> Does that help?

Yes, I think it's helpful to get a conversation started.

I think in a lot of cases it's simply not realistic to test every possible scenario; for example, given enough plugins and the varying orders in which you can install Foreman, install plugins, migrate the DB, and run the server, we probably aren't going to cover every single permutation. So I think it's helpful to look for cases that should be covered in two ways:

  1. What is important to users (both individual end users and other software projects in the ecosystem)? Examples: an initial install with a "standard" set of plugins enabled from the start, enabling/disabling individual plugins on a server that has been running for some time and has already run some DB migrations, ...

  2. What is likely to break / what are we likely to break? Are there any tightly coupled plugins that have non-trivial differences in behavior in the presence/absence of another plugin? Special cases in the code to handle different scenarios? Specific types of bugs that we are at heightened risk of (re)introducing and want to avoid?

I'm not as familiar with this module as with some of the other modules in the ecosystem I've been more directly involved in, so I'm probably lacking a lot of the context and history needed to give very specific thoughts on those points, but that at least lays out how I would try to approach it.

Given your experience with this module, I'm curious to know where that line of reasoning would take you.

@ekohl (Member, Author) commented Jul 12, 2021

I think I'm now leaning towards at least this path:

  • Empty server
  • Install Foreman
    • Test
  • Install Foreman with feature X
    • Test
  • Install Foreman with feature Y
    • Test

So essentially: mostly stop cleaning up, unless it really matters.
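
A minimal sketch of how that ordering could be enforced (an assumption about the setup, not code from this PR): run the examples in the order they are defined, so the layered specs can rely on the state left behind by earlier ones.

    # spec/spec_helper_acceptance.rb (hypothetical excerpt)
    RSpec.configure do |c|
      # Run examples in defined order rather than randomly, so later specs
      # build on the Foreman installation set up by earlier ones.
      c.order = :defined
    end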
