
Sentry-related spans are appearing in dns/net opentelemetry traces #13466

Closed
3 tasks done
Bruno-DaSilva opened this issue Aug 26, 2024 · 5 comments · Fixed by #13491
Labels
Package: node Issues related to the Sentry Node SDK

Comments

@Bruno-DaSilva

Is there an existing issue for this?

How do you use Sentry?

Sentry Saas (sentry.io)

Which SDK are you using?

@sentry/node

SDK Version

8.26.0

Framework Version

Node 20.12.2

Link to Sentry event

No response

Reproduction Example/SDK Setup

// (imports added for completeness; the OTLP exporter package is an assumption,
// the grpc variant would work the same way)
import * as Sentry from '@sentry/node';
import * as SentryOtel from '@sentry/opentelemetry';
import { SentryPropagator, SentryContextManager } from '@sentry/opentelemetry';
import type { Context, Link, SpanAttributes, SpanKind } from '@opentelemetry/api';
import type { Sampler } from '@opentelemetry/sdk-trace-node';
import {
  NodeTracerProvider,
  BatchSpanProcessor,
  SamplingDecision,
} from '@opentelemetry/sdk-trace-node';
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http';
import { registerInstrumentations } from '@opentelemetry/instrumentation';
import { NetInstrumentation } from '@opentelemetry/instrumentation-net';
import { DnsInstrumentation } from '@opentelemetry/instrumentation-dns';

// We want a 100% sample rate but without telling Sentry to init tracing
// (rolling our own tracing, using Sentry for errors only).
Sentry.init({
  dsn: process.env.SENTRY_DSN,
  skipOpenTelemetrySetup: true,
});

export class CustomSentrySampler implements Sampler {
  shouldSample(
    context: Context,
    _traceId: string,
    _spanName: string,
    _spanKind: SpanKind,
    attributes: SpanAttributes,
    _links: Link[],
  ) {
    const decision = SamplingDecision.RECORD_AND_SAMPLED;

    // wrap the result, necessary for sentry
    return SentryOtel.wrapSamplingDecision({
      decision,
      context,
      spanAttributes: attributes,
    });
  }

  toString() {
    return CustomSentrySampler.name;
  }
}

// can use SentrySampler if you init sentry with a sample rate instead
const provider = new NodeTracerProvider({ sampler: new CustomSentrySampler() });
// Only required if you are sending traces up to sentry
// provider.addSpanProcessor(new SentrySpanProcessor());
// an alternative exporter if you don't want to send traces to sentry
provider.addSpanProcessor(
  new BatchSpanProcessor(
    new OTLPTraceExporter({
      url: TRACING_EXPORTER_URL,
    }),
  ),
);

const propagator = new SentryPropagator();
const contextManager = new SentryContextManager();

// register additional instrumentations outside of what sentry uses by default.
// For this bug report, just adding net/dns instrumentations.
registerInstrumentations({
  tracerProvider: provider,
  instrumentations: [
    new NetInstrumentation(),
    new DnsInstrumentation(),
  ],
});
provider.register({ propagator, contextManager });

Steps to Reproduce

  1. Setup SDK as above
  2. Run the app, hit an endpoint that triggers a captureException or captureMessage
  3. Check the trace for this request

Expected Result

The trace should NOT have any sentry related spans in it

Actual Result

Despite @opentelemetry/instrumentation-http ignoring (not creating spans for) the outgoing Sentry requests, the dns and net instrumentations still add their spans to the trace:

[screenshots: traces showing dns/net spans produced by outgoing Sentry requests]

@github-actions github-actions bot added the Package: node Issues related to the Sentry Node SDK label Aug 26, 2024
@Bruno-DaSilva Bruno-DaSilva changed the title Sentry-related spans should not appear in opentelemetry traces Sentry-related spans are appearing in dns/net opentelemetry traces Aug 26, 2024
@Bruno-DaSilva
Author

Bruno-DaSilva commented Aug 26, 2024

Hopefully it's clear that this behaviour makes traces extra spammy with unwanted spans, so we should find a way around it.

This is because @opentelemetry/instrumentation-http does not suppress tracing on outgoing ignored requests.
I'm hoping (if y'all agree with this approach) that this feature request in instrumentation-http can be pushed for: open-telemetry/opentelemetry-js#4926

It asks for a config flag to suppress tracing for ignored outgoing requests, which would prevent this issue. I figure the Sentry SDK could then either let a user pass that as a custom flag to the underlying instrumentation (no change required in the Sentry SDK) or simply enable the flag by default (which might be considered a breaking change, up to y'all).
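For illustration, the requested option might look roughly like this. Note that `ignoreOutgoingRequestHook` is a real option of `@opentelemetry/instrumentation-http`, but `suppressTracingForIgnoredRequests` is only the *proposed* flag from that feature request and does not exist today, and `isSentryRequest` is a hypothetical predicate:

```typescript
import { HttpInstrumentation } from '@opentelemetry/instrumentation-http';

// Sketch only: `suppressTracingForIgnoredRequests` is the proposed flag from
// open-telemetry/opentelemetry-js#4926 and is not an existing option.
// `isSentryRequest` is a hypothetical predicate for outgoing Sentry requests.
new HttpInstrumentation({
  ignoreOutgoingRequestHook: (request) => isSentryRequest(request),
  // Proposed: also suppress tracing for ignored outgoing requests, so that
  // child instrumentations (dns, net, ...) do not create spans either.
  suppressTracingForIgnoredRequests: true,
});
```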

@lforst
Member

lforst commented Aug 27, 2024

Hi, we are already using suppressTracing for SDK requests which makes this look like a bug in the dns/net instrumentation. I am not sure if a feature request makes sense in that case.

@Bruno-DaSilva
Author

Bruno-DaSilva commented Aug 27, 2024

> Hi, we are already using suppressTracing for SDK requests which makes this look like a bug in the dns/net instrumentation. I am not sure if a feature request makes sense in that case.

Hi Luca - could you point me to where this happens in the Sentry SDK? Due to the way instrumentation-http is designed, there's no place in its various hooks to suppress tracing in a way that would affect child spans.
EDIT: found it, see next comment :)

@Bruno-DaSilva
Author

I have found this:

// This ensures we do not generate any spans in OpenTelemetry for the transport
return suppressTracing(() => {
  const requestExecutor = createRequestExecutor(options, options.httpModule ?? nativeHttpModule, agent);
  return createTransport(options, requestExecutor);
});

This calls two functions, createRequestExecutor and createTransport. Neither function actually sends any HTTP requests; they simply create callbacks that will be invoked later:

return function makeRequest(request: TransportRequest): Promise<TransportMakeRequestResponse> {

and:
const flush = (timeout?: number): PromiseLike<boolean> => buffer.drain(timeout);

function send(envelope: Envelope): PromiseLike<TransportMakeRequestResponse> {
  // ...
}

return {
  send,
  flush,
};

Now I think the problem is in how suppressTracing is being called here. Tracing is suppressed JUST during the creation of these callbacks, NOT during their execution. So I don't think we are suppressing tracing at all when the outgoing HTTP requests are actually sent.
I think you'd probably have to call suppressTracing(...) within the flush function, assuming that's the only place we actually send the HTTP requests.
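To make the failure mode concrete, here is a minimal model of context-scoped suppression. The `withSuppressedTracing` helper is hand-rolled for illustration, not the real OpenTelemetry or Sentry API: the factory runs under suppression, but the callback it returns executes later, after suppression has been restored.

```typescript
// Minimal model (hypothetical helper, not the real suppressTracing API):
// the flag is only active for the duration of the wrapped call.
let suppressed = false;

function withSuppressedTracing<T>(fn: () => T): T {
  const prev = suppressed;
  suppressed = true;
  try {
    return fn();
  } finally {
    suppressed = prev;
  }
}

// The transport bug in miniature: *creating* the callback happens under
// suppression, but nothing suppresses the later call.
const send = withSuppressedTracing(() => {
  return () => suppressed; // only created under suppression
});

console.log(send()); // false – suppression ended before send() ran
```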


I think the reason we haven't noticed this so far is that we ignore Sentry requests in the http integration:

if (isSentryRequestUrl(url, getClient())) {
  return true;
}

While that prevents the http spans from being created, it doesn't suppress tracing, so the dns/net spans still get through (hence the original otel feature request).

So with all that said: if we can fix how we call suppressTracing in the Sentry SDK, I agree that the otel feature request wouldn't be required.

@lforst
Member

lforst commented Aug 28, 2024

I think you're absolutely right. TBH, looking at the code in retrospect, I would probably have expected the AsyncContext for trace suppression to be preserved down to the code where we execute the request, but looking a bit closer it makes sense that it is not suppressed, because we invoke that code from a completely different async context.

I opened a PR that should hopefully fix this: #13491

lforst added a commit that referenced this issue Aug 28, 2024
…an transport creation (#13491)

#13466 (comment)
correctly points out that we are suppressing tracing for the transport
creation instead of the transport execution.

This PR wraps the code that is actually conducting the request with
`suppressTracing` instead of the transport creation.

Fixes #13466
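Under the same kind of context-scoped suppression model, the shape of the fix can be sketched as follows. The `withSuppressedTracing` helper is hand-rolled for illustration and not the SDK's actual code: suppression is entered inside the callback, so it is active at the moment the request actually runs.

```typescript
// Hand-rolled stand-in for context-scoped suppression (hypothetical helper,
// not the actual SDK code): the flag is active only inside the wrapped call.
let suppressed = false;

function withSuppressedTracing<T>(fn: () => T): T {
  const prev = suppressed;
  suppressed = true;
  try {
    return fn();
  } finally {
    suppressed = prev;
  }
}

// After the fix: each invocation enters suppression itself, so tracing is
// suppressed while the request is actually being conducted.
const send = (): boolean => withSuppressedTracing(() => suppressed);

console.log(send()); // true – suppression is active while the request runs
```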