[Feature Request] Support for llm-emit-token-metric in AIProjectClient ChatCompletionsClient for Azure OpenAI Token Tracking #39385
Labels:
- AI
- Client: This issue points to a problem in the data-plane of the library.
- customer-reported: Issues that are reported by GitHub users external to the Azure organization.
- feature-request: This issue requires a new behavior in the product in order to be resolved.
- needs-team-attention: Workflow: This issue needs attention from the Azure service team or SDK team.
- Service Attention: Workflow: This issue is the responsibility of the Azure service team.
Feature Request: Enable `llm-emit-token-metric` or `azure-openai-emit-token-metric` in AIProjectClient's ChatCompletionsClient

Is your feature request related to a problem? Please describe.

I am looking to use the `llm-emit-token-metric` or `azure-openai-emit-token-metric` headers within Azure API Management (APIM) to capture token usage. My setup uses Azure Foundry alongside the new `azure.ai.projects` `AIProjectClient` and its corresponding `ChatCompletionsClient`. While I understand how to enable token-metric headers when performing regular POST requests to Azure OpenAI endpoints, I have been unable to find functionality to do this within the `ChatCompletionsClient`.
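For reference, the "regular POST request" approach mentioned above can be sketched with only the standard library. The endpoint path, API version, and header value are placeholders/assumptions; the request is only constructed here, not sent.

```python
import json
import urllib.request

# Placeholder endpoint; path and api-version are assumptions for illustration.
url = ("https://<deployment>.openai.azure.com/openai/deployments"
       "/<model>/chat/completions?api-version=2024-02-01")

body = json.dumps({"messages": [{"role": "user", "content": "Hello"}]}).encode()

req = urllib.request.Request(
    url,
    data=body,
    headers={
        "Content-Type": "application/json",
        "api-key": "<key>",                    # placeholder credential
        "llm-emit-token-metric": "true",       # assumed header value
    },
    method="POST",
)

# urllib stores header keys via str.capitalize(), hence the lookup key below.
# urllib.request.urlopen(req) would actually send the request.
print(req.get_header("Llm-emit-token-metric"))
```

With the raw request it is trivial to attach the header; the feature request is for an equivalent hook in the `ChatCompletionsClient`.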
The goal is to configure an APIM instance for the endpoint `https://<deployment>.openai.azure.com/` and log token metrics to Azure Log Analytics.

Describe the solution you'd like
I would like the `ChatCompletionsClient` (from `azure.ai.projects`) to include functionality for enabling and utilizing the `llm-emit-token-metric` or `azure-openai-emit-token-metric` headers when interacting with Azure OpenAI services. This would allow token usage tracking to integrate seamlessly with APIM and Log Analytics while leveraging the new client library.
Describe alternatives you've considered

- … the `llm-emit-token-metric` or `azure-openai-emit-token-metric` headers.
- … `azure.ai.projects`.

Additional context
… `AIProjectClient` or `ChatCompletionsClient` from the `azure.ai.projects` library.
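For completeness, on the APIM side the token metrics would be emitted by a policy in the inbound section; a minimal sketch of the `azure-openai-emit-token-metric` policy is below (the `namespace` value and dimension name are assumptions, not a tested configuration):

```xml
<inbound>
    <azure-openai-emit-token-metric namespace="openai">
        <dimension name="API ID" />
    </azure-openai-emit-token-metric>
</inbound>
```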