Different ways of calling pytest or using python manage.py test lead to different test counts being reported. Note that with parallel testing the warnings are not collated, so we get a nice multiple of them. This is to be expected.
$ find mreg* hostpolicy -iname '*test*' |grep -v __pycache__
mreg/api/v1/tests
mreg/api/v1/tests/test_txts.py
mreg/api/v1/tests/test_host_permissions.py
mreg/api/v1/tests/test_srvs.py
mreg/api/v1/tests/test_zonefile.py
mreg/api/v1/tests/tests_zones.py
mreg/api/v1/tests/test_hostgroups.py
mreg/api/v1/tests/test_labels.py
mreg/api/v1/tests/test_nameservers.py
mreg/api/v1/tests/test_networks.py
mreg/api/v1/tests/tests.py
mreg/api/v1/tests/tests_bacnet.py
mreg/api/v1/tests/test_history.py
mreg/api/v1/tests/test_permissions.py
mreg/tests.py
hostpolicy/api/v1/tests.py
hostpolicy/tests.py
$ pytest -n 6 -vv -s $( find mreg* hostpolicy -iname '*test*' |grep -v __pycache__ )
[...]
==== 1038 passed, 4 skipped, 51 warnings in 112.95s (0:01:52) ====
$ pytest -n 12 mreg/api/v1/tests mreg/tests.py hostpolicy/tests.py hostpolicy/api/v1/tests.py
[...]
==== 368 passed, 89 warnings in 35.81s ====
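The gap is pytest's default collection pattern: when handed a directory, pytest only collects files matching python_files (by default test_*.py and *_test.py), so tests.py, tests_zones.py and tests_bacnet.py are silently skipped, while files named explicitly on the command line are always collected regardless of the pattern. (The find-based invocation above also passes both the mreg/api/v1/tests directory and the files inside it; whether that double-counts anything depends on pytest's deduplication behaviour.) Overriding the pattern should bring the directory-based run in line with the explicit file list; the exact pattern value here is a guess:

$ pytest -n 12 -o 'python_files=test_*.py tests*.py' \
    mreg/api/v1/tests mreg/tests.py hostpolicy/tests.py hostpolicy/api/v1/tests.py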
# Just to show that xdist/parallel makes no difference for the test count
$ pytest -n 4 mreg/api/v1/tests mreg/tests.py hostpolicy/tests.py hostpolicy/api/v1/tests.py
[...]
==== 368 passed, 33 warnings in 58.44s ====
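Collection happens before pytest-xdist hands tests to workers, so the count is expected to be independent of -n. To compare what each invocation collects without running anything:

$ pytest --collect-only -q mreg/api/v1/tests mreg/tests.py \
    hostpolicy/tests.py hostpolicy/api/v1/tests.py | tail -n 1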
$ coverage run manage.py test && coverage report -m
Creating test database for alias 'default'...
System check identified no issues (0 silenced).
...........................................................................................................................................................................................................................................................................................................................................................................................................s......................................................s.......................................................................................................................................................
----------------------------------------------------------------------
Ran 602 tests in 357.215s
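Django's DiscoverRunner uses unittest discovery with the default pattern test*.py, which matches every module listed above, tests.py and the tests_*.py files included. The remaining difference against the 1038-test pytest run is therefore most likely pytest-style tests (plain test_* functions, or Test* classes that do not subclass TestCase) that unittest cannot see. For a parallel counterpart on the Django side (available since Django 1.9):

$ python manage.py test --parallel 4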
$ python3 -m unittest discover 2>&1 | grep 'unittest.loader._' | awk '{print $2}'
hostpolicy.api.v1.tests
hostpolicy.tests
mreg.api.v1.tests.test_history
mreg.api.v1.tests.test_host_permissions
mreg.api.v1.tests.test_hostgroups
mreg.api.v1.tests.test_labels
mreg.api.v1.tests.test_nameservers
mreg.api.v1.tests.test_networks
mreg.api.v1.tests.test_permissions
mreg.api.v1.tests.test_srvs
mreg.api.v1.tests.test_txts
mreg.api.v1.tests.test_zonefile
mreg.api.v1.tests.tests
mreg.api.v1.tests.tests_bacnet
mreg.api.v1.tests.tests_zones
mreg.tests
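These are the modules that raw unittest discovery fails to import: the grep picks out the unittest.loader._FailedTest placeholders the loader substitutes when an import raises. Without DJANGO_SETTINGS_MODULE set and django.setup() called, every module touching Django models fails at import time. A minimal sketch of what discovery needs in order to at least import the modules (the settings module name is an assumption, and database-backed tests would still need Django's runner to create the test database):

import os
import unittest

import django

# assumption: the project's settings module; adjust as needed
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "mregsite.settings")
django.setup()  # populate the app registry before test modules import models

suite = unittest.defaultTestLoader.discover(".")
print(suite.countTestCases())  # discovery should now find real tests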
All in all, fairly inconsistent. Ideally, we should support both ways of running the tests, especially when it comes to parallelisation.
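One way to get there (a sketch, assuming pytest-django is already in use, which the pytest runs above suggest; the settings module name is a guess) is to pin both the settings and the collection pattern in pytest's config so that pytest, pytest -n N and manage.py test all agree on which modules are tests:

# pytest.ini (sketch)
[pytest]
DJANGO_SETTINGS_MODULE = mregsite.settings
# match unittest's default test*.py so both runners collect the same files
python_files = test_*.py tests.py tests_*.py

Alternatively, renaming tests.py, tests_zones.py and tests_bacnet.py to the test_*.py convention would make the defaults of both runners agree with no configuration at all.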