
/* c8 ignore next */ does not ignore classes or functions #544

Open
jasonpolites opened this issue Sep 23, 2024 · 4 comments

@jasonpolites

  • Version: v22.9.0
  • Platform: Ubuntu 20.04.6 LTS

The /* c8 ignore next */ comment annotation does not seem to ignore classes or functions.

By contrast, the /* istanbul ignore next */ annotation will ignore classes, functions, blocks and lines.

For example:

/* c8 ignore next */
class Foo {  // <== ignored!
  foo = 'foo'; // <== not ignored :(
  bar = 'bar'; // <== not ignored :(
} // <== not ignored :(

Ideally, ignore next would ignore the entire block, function, or class.

Similarly:

class Foo {
  /* c8 ignore next */
  foo() {  // <== ignored!
    console.log('bar'); // <== not ignored :(
  } // <== not ignored :(
} // <== not ignored :(
@guilhermehn

I ran into this recently. c8's ignore next doesn't work like Istanbul's. As stated in the readme, it only ignores the next line. Sadly, c8 doesn't seem to support ignoring the next whole block or specified blocks like ignore next branch.
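
The closest workaround I know of, assuming a reasonably current c8, is the range markers documented in the same readme; they can exclude a whole class or method, at the cost of an explicit closing comment. A rough sketch using the class from the original report:

/* c8 ignore start */
class Foo {
  foo = 'foo';
  bar = 'bar';
}
/* c8 ignore stop */

There is also /* c8 ignore next N */ to skip a fixed number of lines, but that breaks as soon as the block changes length.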

@ericmorand

@guilhermehn Actually, this re-opens the question: why would we want to ignore the fact that part of the code is not covered? The question already applies to lines, but it is even more pressing for classes: why would we want to exclude an entire class from coverage? What is the benefit of this? Why write a class if it is never tested anywhere?

@guilhermehn

@ericmorand I agree; most of the time, if a piece of code is ignored and untested, you're doing something wrong. I'm not a fan of ignoring code coverage, but there are legitimate use cases. The two I've seen most often are environment-related code, such as using different values depending on the environment the application is running in (staging/production, or even browser/server), and analytics code. There are easy ways to work around the former (without duplicating tests), but the latter can be more of a hassle, especially when dealing with third-party APIs. At the company I work for there is a layer of abstraction over the Google Analytics API, accessible through a global variable on the window (something like window.myCompanyAnalytics). The login page can take multiple branching paths depending on the user and their condition, and we have to send multiple events detailing which step the user is on and what actions they took, so there is a lot of analytics code mixed in with the login front-end logic, and certain events are only sent when a specific condition is met, e.g. if (userAccount.hasSpecificRestriction) window.myCompanyAnalytics(eventData).
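
To make that concrete, here is a hypothetical sketch of the kind of branch I mean (the analytics names are the ones above; everything else is illustrative):

function trackRestrictedLogin(userAccount, eventData) {
  // Only fires for a specific account condition; the call goes to a
  // third-party global, so a test would really only exercise a mock of it.
  /* c8 ignore next */
  if (userAccount.hasSpecificRestriction) window.myCompanyAnalytics(eventData);
}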

One could argue that I could write tests for those cases by mocking the global API, but there are a lot of other instances of this, so I would end up with twenty-some tests that cover a few lines without really testing much (they would just be testing the mock I created). Or one could argue there are better ways to do these things, like putting the analytics-related code behind an abstraction. But, coming back to your question, I return another question: why should I create tests that exist only for coverage? You could say such a test verifies that the analytics API is actually called under the necessary conditions, but how valuable is that test in reality, when the mock was created by me (i.e. not real code) and the if statement is guaranteed by the runtime to work? Should I refactor lots of lines just so that the required coverage percentage is met?

I should be clear here: I don't like ignoring code coverage; 99.99% of the time I'm against it, and I always aim for 100% coverage. But there are real cases where the cost of covering a single if statement is far higher than the value it yields. When the available options are:

  • create a test to check that a mock is called;
  • refactor a lot of code to abstract some if;
  • ignore said if block;

then ignoring it can be a valid answer, depending on how much the alternatives would cost the project or company.

@jasonpolites (Author)

There are a couple of use cases I run into.

One is with classes and inheritance. Setting aside any/all "inheritance is bad" arguments, JS doesn't have a good way to express methods of a class that are required to be implemented by subclasses, so I tend to do things like this:

class Foo { 
  /* c8 ignore next */
  subclassShouldDefine() {
    throw new Error(`Class ${this.constructor.name} does not implement subclassShouldDefine`);
  }
}

I don't expect this error to ever be thrown, but it's helpful to catch such mistakes at runtime with a meaningful message. Maybe it's reasonable to say that this behavior should actually be tested, but I'm not sure I care: it's a mild improvement over the default "no such method" error (or whatever JS spits out), and it's not a disaster if it doesn't do exactly what it's supposed to.

The second use case is complex to describe, but involves convoluted error cases in async browser operations. It's notoriously difficult to mock the behavior of native browser objects (e.g. IndexedDB), whose APIs are predominantly callback-based. Without going into a detailed explanation (which I could do, but it would take a lot of words), suffice it to say that sometimes the effort needed to orchestrate a mock of the conditions required to exercise the code under test is so large that having untested code which is very, very unlikely to fire is preferable to spending the time creating a valid test. In these cases, I might just ignore a block and get on with life.
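
As a rough, hypothetical sketch of what I mean (IndexedDB and its request callbacks are real; the function and its error handling are purely illustrative, and this uses c8's range markers rather than ignore next):

function openDatabase(name) {
  return new Promise((resolve, reject) => {
    const request = indexedDB.open(name, 1);
    request.onsuccess = () => resolve(request.result);
    /* c8 ignore start */
    // Reaching these callbacks means simulating a storage failure or a
    // version conflict in the browser, which is very hard to mock reliably.
    request.onerror = () => reject(request.error);
    request.onblocked = () => reject(new Error(`open of ${name} was blocked`));
    /* c8 ignore stop */
  });
}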

Temporary ignores are another case. I find I'm constantly chasing coverage (I don't subscribe to the TDD philosophy at all), and I sometimes have classes or methods that are not yet fully baked. I tend to want to ignore them until I get around to implementing them completely.

There are a few other examples I could cite, all of which are (frankly) debatable in their validity, but the main point (and the point of this issue) is that istanbul has/had this facility. Maybe it's valid to say "it shouldn't exist" (I don't think it is), but even from that viewpoint, it's really just a compatibility issue. If c8 wants to be a drop-in replacement for istanbul (which it can be, and should be IMO), then it should support the same syntax and semantics, independent of any philosophical stance on what's "right" or "wrong" about how to correctly unit test.
