Updated "makereport" fixture description #154

Merged
35 changes: 27 additions & 8 deletions tests/conftest.py
@@ -12,15 +12,34 @@

@pytest.hookimpl(tryfirst=True, hookwrapper=True)
def pytest_runtest_makereport(item, call):
'''
This hook wrapper is executed after each phase of a test run and is
responsible for post-processing the generated test report.

The "when" attribute of the report identifies the phase:
- "setup": the report was generated for the setup phase of the test.
- "call": the report was generated for the actual execution of the test.
- "teardown": the report was generated for the teardown phase of the test.

The "outcome" attribute of the report takes one of three values:
- "passed": the phase completed successfully.
- "failed": the phase failed; an error raised during setup or teardown
  is also reported as "failed" for that phase.
- "skipped": the test was skipped intentionally; xfailed tests are
  likewise reported as "skipped" (with the "wasxfail" attribute set),
  while unexpectedly passing (xpassed) tests are reported as "passed".
'''

outcome = yield
rep = outcome.get_result()

global _previous_test_failed
if rep.when == "setup":
# Store initial outcome of the test
_previous_test_failed = rep.outcome not in ["passed", "skipped"]
elif not _previous_test_failed:
# Update the outcome only in case all previous phases were successful
_previous_test_failed = rep.outcome not in ["passed", "skipped"]


@pytest.fixture
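The phase logic of the hook above can be sketched outside pytest as a plain state update, assuming a report that exposes `when` and `outcome` as described in the docstring (the helper name `update_previous_failed` is hypothetical, not part of the PR):

```python
def update_previous_failed(previous_failed, when, outcome):
    """Return the new value of the previous-test-failed flag.

    Mirrors the hook: the setup report seeds the flag, and later
    phases (call, teardown) may only update it while all earlier
    phases of the same test were successful.
    """
    phase_failed = outcome not in ("passed", "skipped")
    if when == "setup":
        return phase_failed
    if not previous_failed:
        return phase_failed
    return previous_failed


# Walk the three phases of one test, as pytest would report them.
flag = False
for when, outcome in [("setup", "passed"),
                      ("call", "failed"),
                      ("teardown", "passed")]:
    flag = update_previous_failed(flag, when, outcome)
# flag is now True: the "call" phase failed, and a clean teardown
# does not clear the flag.
```

Note the asymmetry: once any phase fails, later reports for the same test cannot flip the flag back to `False`, which is exactly what a "did the previous test fail" fixture needs.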
6 changes: 6 additions & 0 deletions tests/ut/test_bridge_ut.py
@@ -16,6 +16,12 @@ def skip_all(testbed_instance):
pytest.skip("invalid for \"{}\" testbed".format(testbed.name))


@pytest.fixture(autouse=True)
def on_prev_test_failure(prev_test_failed, npu):
if prev_test_failed:
npu.reset()


@pytest.fixture(scope="module")
def sai_bport_obj(npu):
bport_oid = npu.dot1q_bp_oids[0]