2.14.0rc1

Pre-release
@erikayasuda released this 24 Sep 13:55
d4a7535

Deprecation Notes

  • tracing: Deprecates the DD_TRACE_SPAN_AGGREGATOR_RLOCK environment variable. It will be removed in 3.0.0.
  • tracing: Deprecates support for APM Legacy App Analytics. This feature and its associated configuration options are deprecated and will be removed in a future release.
  • tracing: The DD_HTTP_CLIENT_TAG_QUERY_STRING configuration is deprecated and will be removed in v3.0.0. Use DD_TRACE_HTTP_CLIENT_TAG_QUERY_STRING instead.
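Migrating off the deprecated variable is a one-line rename; a minimal sketch (the `true` value is illustrative):

```shell
# Deprecated (removed in v3.0.0):
# export DD_HTTP_CLIENT_TAG_QUERY_STRING=true

# Replacement:
export DD_TRACE_HTTP_CLIENT_TAG_QUERY_STRING=true
```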

New Features

  • google_generativeai: Introduces tracing support for Google Gemini API generate_content calls.
    See the docs for more information.

  • LLM Observability: Adds support to automatically submit Gemini Python SDK calls to LLM Observability.

  • LLM Observability: The OpenAI integration now captures tool calls returned from streamed responses when making calls to the chat completions endpoint.

  • LLM Observability: The LangChain integration now submits tool spans to LLM Observability.

  • openai: The OpenAI integration now includes a new openai.request.client tag with the possible values OpenAI/AzureOpenAI to help differentiate whether the request was made to Azure OpenAI or OpenAI.

  • LLM Observability: LLM Observability spans generated by the OpenAI integration now have updated span name and model_provider values. Span names are now prefixed with the OpenAI client name (possible values: OpenAI/AzureOpenAI) instead of the default openai prefix to better differentiate whether the request was made to Azure OpenAI or OpenAI. The model_provider field also now corresponds to openai or azure_openai based on the OpenAI client.

  • openai: The OpenAI integration now captures token data from streamed completions and chat completions, if provided in the streamed response. To ensure accurate token data in the traced streamed operation, ensure that the stream_options={"include_usage": True} option is set on the completion or chat completion call.

  • LLM Observability: The OpenAI integration now ensures accurate token data from streamed OpenAI completions and chat completions, if provided in the streamed response. To ensure accurate token data in the traced streamed operation, ensure that the stream_options={"include_usage": True} option is set on the completion or chat completion call.
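A minimal sketch of the option described above (the model name and messages are illustrative, and the commented-out call assumes an `OpenAI()` client from the openai SDK):

```python
# Request arguments for a streamed chat completion. The stream_options
# entry is what makes the final streamed chunk carry token usage data,
# which the integration needs for accurate token counts.
request_kwargs = {
    "model": "gpt-4o-mini",  # illustrative model name
    "messages": [{"role": "user", "content": "Hello"}],
    "stream": True,
    "stream_options": {"include_usage": True},  # required for token data
}

# client.chat.completions.create(**request_kwargs)  # 'client' = openai.OpenAI()
print(request_kwargs["stream_options"])
```

Without `stream_options`, the streamed chunks omit usage data and the traced operation cannot report token counts.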

  • DSM: Introduces new tracing and datastreams monitoring functionality for Avro Schemas.

  • DSM: Introduces new tracing and datastreams monitoring functionality for Google Protobuf.

  • LLM Observability: Introduces the LLMObs.annotation_context() context manager method which allows modifying the tags of integration generated LLM Observability spans created while the context manager is active.

  • profiling: Captures asyncio.Lock acquisitions made through context managers.
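A pure-stdlib illustration of the usage pattern in question: the lock below is acquired via the async context-manager form (`async with`), which is the shape the profiler can now capture.

```python
import asyncio

results = []

async def worker(name: str, lock: asyncio.Lock) -> None:
    # Acquiring the lock via "async with" is the context-manager usage
    # the profiler can now observe (versus calling acquire()/release()).
    async with lock:
        results.append(name)

async def main() -> None:
    lock = asyncio.Lock()
    await asyncio.gather(worker("a", lock), worker("b", lock))

asyncio.run(main())
print(results)
```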

  • pymongo: Adds support for pymongo>=4.9.0.

  • botocore: Adds span pointers to some successful AWS botocore spans. Currently only supports S3 PutObject.

  • LLM Observability: Introduces prompt template annotation, which can be passed as an argument to LLMObs.annotate(prompt={...}) for LLM span kinds. For more information on prompt annotations, see Annotating a Span.

Bug Fixes

  • library injection: Resolves an issue where the version of attrs installed by default on some Ubuntu installations was treated as incompatible with library injection.
  • anthropic: Resolves an issue where attempting to tag non-JSON serializable request arguments caused a TypeError. The Anthropic integration now safely tags non-JSON serializable arguments with a default placeholder text.
  • LLM Observability: Fixes an issue where the OpenAI and LangChain integrations would still submit integration metrics even in agentless mode. Integration metrics are now disabled if using agentless mode via LLMObs.enable(agentless_enabled=True) or setting DD_LLMOBS_AGENTLESS_ENABLED=1.
  • LLM Observability: Resolves an issue in the LLMObs.annotate() method where non-JSON serializable arguments were discarded entirely. Now, the LLMObs.annotate() method safely handles non-JSON-serializable arguments by defaulting to a placeholder text.
  • LLM Observability: Resolves an issue where attempting to tag non-JSON serializable request/response parameters resulted in a TypeError in the OpenAI, LangChain, Bedrock, and Anthropic integrations.
  • langchain: Resolves an issue where attempting to tag non-JSON serializable tool config arguments resulted in a TypeError. The LangChain integration now safely tags non-JSON serializable arguments with a default placeholder text.
  • SSI: This fix ensures the injection denylist is included in the published OCI package.
  • postgres: Fixes circular imports raised when psycopg automatic instrumentation is enabled.
  • ASM: This fix resolves an issue where exploit prevention was not properly blocking requests with custom redirection actions.
  • Code Security: This fixes a bug in the IAST patching process where AttributeError exceptions were being caught, interfering with the proper application cycle.
  • kafka: Fixes an issue where a TypeError exception would be raised if the first message's topic() returned None during consumption.
  • kinesis: Resolves an issue where unparsable data in a Kinesis record would cause a NoneType error.
  • pymongo: Ensures instances of pymongo.MongoClient can be patched after pymongo is imported.
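The agentless mode referenced in the LLM Observability fix above can be enabled via the environment or in code; a hedged sketch (the in-code form assumes ddtrace's `LLMObs.enable()` entry point named in that fix):

```shell
# Environment-variable form:
export DD_LLMOBS_AGENTLESS_ENABLED=1

# Equivalent in-code form (Python):
#   from ddtrace.llmobs import LLMObs
#   LLMObs.enable(agentless_enabled=True)
```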