/* c8 ignore next */ does not ignore classes or functions #544
I ran into this recently. c8's …
@guilhermehn actually, this re-opens the question: why would we want to ignore the fact that a part of the code is not covered? The question already makes sense for lines, but it is even more critical for classes: why would we want to exclude an entire class from coverage? What is the benefit of this? Why write a class if it is never tested anywhere?
@ericmorand I agree: most of the time you're doing something wrong when a piece of code is ignored rather than tested. I'm not a fan of ignoring code coverage, but there are legitimate use cases. The two I've seen most often are environment related, like using different values depending on the environment the application is running in (homolog/production, or even browser/server), and analytics related. There are easy ways to work around the former (without duplicating tests), but the latter can be more of a hassle, especially when dealing with third-party APIs.

In the company I work for there is a layer of abstraction over the Google Analytics API, which is accessible through a global variable. One could argue that I could create tests for those cases, mocking the global API, but there are a lot of other instances of this, so I would end up with twenty-some tests just to cover lines that aren't really testing much (they would mostly be testing the mock I created), or that there are better ways to do these things, like creating an abstraction for this analytics-related code.

But, coming back to your question, I'll return another question: why should I create tests that are only there for coverage? You could say that such a test verifies that the analytics API is actually called under the right conditions, but how valuable is that test in reality, when the mock was created by me (i.e. it isn't real code) and the if statement is guaranteed by the runtime to work properly? Should I refactor lots of lines just so the required coverage percentage is met?

I should be clear here: I don't like ignoring code coverage; 99.99% of the time I'm against it, and I always aim for 100% coverage. But there are real cases where the cost of covering a single if statement is far higher than the value it yields. When the options are writing tests that only exercise my own mocks, refactoring working code just to satisfy a percentage, or ignoring the lines, ignoring could be a valid answer, depending on how much each option would cost the project/company.
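A minimal sketch of the kind of analytics guard described above, with hypothetical names (`ga`, `trackCheckout`) standing in for the real abstraction, and assuming a c8 version that supports the `ignore next <count>` form:

```js
// Hypothetical names: `ga` is a global injected by a third-party script,
// so this branch only runs when that script has actually loaded.
export function trackCheckout(orderId) {
  /* c8 ignore next 3 */
  if (typeof ga !== 'undefined') {
    ga('send', 'event', 'checkout', 'completed', orderId);
  }
}
```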
There are a couple of use cases I run into. One is with classes and inheritance. Setting aside any/all "inheritance is bad" arguments, JS doesn't have a good way to express methods of a class that are required to be implemented by subclasses, so I tend to do things like this:
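The code sample this comment refers to did not survive extraction. A minimal sketch of the pattern being described, with placeholder names (`Repository`, `save`); under istanbul/nyc, `/* istanbul ignore next */` placed in front of the method excludes the whole method, which is the behaviour this issue asks c8 to match:

```js
class Repository {
  // Subclasses are expected to override this; the throw exists only to
  // produce a clearer runtime error than the default "not a function".
  /* istanbul ignore next */
  save(entity) {
    throw new Error(`${this.constructor.name} must implement save()`);
  }
}

class UserRepository extends Repository {
  save(entity) {
    // Real implementation, exercised by tests.
    return Promise.resolve(entity);
  }
}
```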
I don't expect this error to ever be thrown, but it's helpful to detect such mistakes at runtime with a meaningful message. Maybe it's reasonable to say that this behavior should actually be tested, but I'm not sure I really care. It's a mild improvement over the default "no such method" error (or whatever JS spits out), and I don't think I really care if it doesn't do what it's supposed to.

The second use case is more complex to describe, but involves convoluted error cases in async browser operations. It's notoriously difficult to mock the behavior of native browser objects (e.g. IndexedDB), whose APIs are predominantly callback based. Without a detailed explanation (which I could give, but it would take a lot of words), suffice it to say that sometimes the effort needed to orchestrate a mock of the conditions required to exercise the code under test is so large that the risk of leaving untested code that is very, very unlikely to fire is preferable to spending the time to create a valid test. In those cases, I might just ignore a block and get on with life.

Temporary ignores are another case. I find I'm constantly chasing coverage (I don't subscribe to the TDD philosophy at all), and I sometimes have classes or methods that are not fully baked and not yet complete. I tend to want to ignore them until I get around to implementing them completely.

There are a few other examples I could cite, all of which are (frankly) debatable in their validity, but the main point (and the point of this issue) is that istanbul has/had this facility. Maybe it's valid to say "it shouldn't exist" (I don't think it is), but even if we take that viewpoint, it's really just a compatibility issue. If c8 wants to be a drop-in replacement for istanbul (which it can be, and should be, IMO), then it should support the same syntax and semantics, independent of any philosophical stance on what's "right" or "wrong" about how to unit test correctly.
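For the async-browser case, c8's block-level hints (`/* c8 ignore start */` / `/* c8 ignore stop */`) can fence off such error paths; the IndexedDB snippet below is a hypothetical sketch, not code from this thread:

```js
// Hypothetical sketch: the error/blocked paths would require simulating
// low-level browser failures that are impractical to mock in tests.
function openDatabase(name) {
  return new Promise((resolve, reject) => {
    const request = indexedDB.open(name, 1);
    request.onsuccess = () => resolve(request.result);
    /* c8 ignore start */
    request.onerror = () => reject(request.error);
    request.onblocked = () => reject(new Error('database blocked by another connection'));
    /* c8 ignore stop */
  });
}
```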
The `/* c8 ignore next */` comment annotation does not seem to ignore classes or functions. By contrast, the `/* istanbul ignore next */` annotation will ignore classes, functions, blocks, and lines. For example:
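The original snippet was lost in extraction; an illustrative reconstruction of the reported behaviour (class and method names are placeholders):

```js
/* c8 ignore next */
class UnusedHelper {
  constructor() {
    this.ready = false;
  }

  warmUp() {
    this.ready = true;
  }
}
```

With istanbul/nyc the hint in front of the class excludes the whole class body; with c8 only the line immediately following the comment is excluded, so the constructor and `warmUp` still show up as uncovered.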
Ideally the `ignore next` would ignore the entire block, function, or class. Similarly:
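Again, the original example did not survive extraction; an illustrative stand-in showing the same problem for a plain function:

```js
/* c8 ignore next */
function reportFatalError(err) {
  console.error('fatal:', err);
  process.exit(1);
}
```

Under istanbul the whole function would be skipped; under c8 only the line after the comment is, leaving the function body reported as uncovered.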