Some repos have both unit and integration tests, and each produces its own code coverage report.
Decide on a standard for naming and processing the multiple sets of coverage reports so that they do not overwrite each other, but still provide a reading of the code coverage at each level of testing.
(note: acceptance tests are another level up - "external" - there is not currently any coverage measure for those)
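One way to keep the reports from stomping on each other is to bake the test level into the report filename. A minimal sketch, assuming PHPUnit test suites named `unit` and `integration` (the suite names and filename convention are assumptions, not an existing standard in these repos):

```shell
# Derive a distinct coverage report name per test level so the reports
# cannot overwrite each other.
coverage_file() {
  level="$1"   # e.g. "unit" or "integration"
  echo "coverage-${level}.xml"
}

# Intended usage (commented out; requires PHPUnit and the test suites):
#   ./vendor/bin/phpunit --testsuite unit --coverage-clover "$(coverage_file unit)"
#   ./vendor/bin/phpunit --testsuite integration --coverage-clover "$(coverage_file integration)"

coverage_file unit          # -> coverage-unit.xml
coverage_file integration   # -> coverage-integration.xml
```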
To utilize Codecov's tagging (flags) functionality, the reports need to be uploaded separately (similar to what is done in core).
This can be done either by splitting the integration and unit tests per matrix permutation and providing a tag, or by having separate upload targets with fixed tags.
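With the fixed-tags variant, the separate uploads could look roughly like this, using the `-f` (file) and `-F` (flag) options of Codecov's bash uploader. The report filenames are assumptions; the uploader invocation itself is only sketched in comments since it needs network access and a repository token:

```shell
# Build the uploader arguments for one test level: its own report file
# plus a matching Codecov flag, so the two coverage readings stay separate.
upload_args() {
  level="$1"
  echo "-f coverage-${level}.xml -F ${level}"
}

# Intended usage (commented out; requires the reports and a Codecov token):
#   bash <(curl -s https://codecov.io/bash) $(upload_args unit)
#   bash <(curl -s https://codecov.io/bash) $(upload_args integration)

upload_args unit   # -> -f coverage-unit.xml -F unit
```

With the matrix variant, the same `-F` value would instead be derived from the matrix permutation rather than hard-coded.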
(note: acceptance tests are another level up - "external" - there is not currently any coverage measure for those)
Generating code coverage for acceptance tests is usually not done - I can understand that this is a nice-to-have for the QA team.
The difficulty here would be setting up the remote Apache instance with Xdebug and collecting the coverage logs from there, while Behat drives the testing from outside.
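For the Xdebug part, the remote Apache host would at minimum need a PHP ini fragment enabling coverage mode. A sketch, assuming Xdebug 3 and a conf.d-style PHP configuration (the fragment filename and target path are assumptions; collecting and shipping the logs back is the harder, unsketched part):

```shell
# Write an ini fragment that would need to land in the remote Apache
# host's PHP conf.d directory so Xdebug can collect coverage while
# Behat drives the acceptance tests externally.
cat > 99-xdebug-coverage.ini <<'EOF'
zend_extension=xdebug.so
xdebug.mode=coverage
EOF
```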