Structured outputs `response_format` requires `strict` function calling JSON Schema? #1733
Hi @moonbox3,

The docs are right that there isn't any coupling between function calling and response formats, as the OpenAI API doesn't require any coupling — it's just the […]. Does that help answer your question?
Hi @RobertCraigie, thanks for your reply, I appreciate it. I'm really hoping to support just the json_schema response format right now, without having to manage strict tools. Given that this is present/allowed in .NET, I would expect the same in other OpenAI SDKs like Python. Is there another way I can handle a chat completion json_schema response format without using the […]? Thanks for your help.
We'll be shipping a public API for this shortly, but for now your best bet would be to use our internal API for converting a type to a `response_format` param:

```python
from typing import List

import rich
from pydantic import BaseModel

from openai import OpenAI
from openai.lib._parsing._completions import type_to_response_format_param


class Step(BaseModel):
    explanation: str
    output: str


class MathResponse(BaseModel):
    steps: List[Step]
    final_answer: str


client = OpenAI()
completion = client.chat.completions.create(
    model="gpt-4o-2024-08-06",
    messages=[
        {"role": "system", "content": "You are a helpful math tutor."},
        {"role": "user", "content": "solve 8x + 31 = 2"},
    ],
    response_format=type_to_response_format_param(MathResponse),
)
message = completion.choices[0].message
if message.content:
    parsed = MathResponse.model_validate_json(message.content)
    rich.print(parsed)
else:
    print(message.refusal)
```

This will work as the […]
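For reference, the param that `type_to_response_format_param(MathResponse)` produces is roughly the following shape — this is a hand-written sketch based on the documented json_schema response format, so the helper's exact output (key order, titles, descriptions) may differ:

```python
# Sketch of the json_schema response_format param for MathResponse.
# In strict mode every object schema sets additionalProperties: False
# and lists all of its properties as required.
response_format = {
    "type": "json_schema",
    "json_schema": {
        "name": "MathResponse",
        "strict": True,
        "schema": {
            "type": "object",
            "properties": {
                "steps": {
                    "type": "array",
                    "items": {
                        "type": "object",
                        "properties": {
                            "explanation": {"type": "string"},
                            "output": {"type": "string"},
                        },
                        "required": ["explanation", "output"],
                        "additionalProperties": False,
                    },
                },
                "final_answer": {"type": "string"},
            },
            "required": ["steps", "final_answer"],
            "additionalProperties": False,
        },
    },
}
```

A dict of this shape can be passed directly as `response_format=` to `client.chat.completions.create(...)` without involving tools at all.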
@RobertCraigie thanks for your help! I will have a look at this.
Confirm this is an issue with the Python library and not an underlying OpenAI API

Describe the bug

I am using the OpenAI Python 1.47.0 library and the model gpt-4o-2024-08-06. I've got the json_schema response format working with Pydantic/non-Pydantic models (non-Pydantic meaning I manually create the proper response format JSON schema) without tool calling. However, when I attempt to send tools in the payload to the method `client.beta.chat.completions.parse(...)`,
I am getting a 400 because the tool's JSON schema does not have `strict`/`additionalProperties` set. The error shows as: […]

When I do add `strict: True` and `additionalProperties: False`, I get a 200.

In your docs, I don't see this coupling between the function calling schema and the json_schema response format called out (if it is there, I am obviously missing it).
The docs say: […]

This makes it seem like they're able to be used independently.
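To make the coupling concrete, here is a minimal sketch of a tool definition that passes strict-mode validation — `get_weather` and its parameters are hypothetical names invented for illustration, not from this issue. Strict function calling puts `"strict": True` on the function and requires each object schema to set `additionalProperties: False` and list every property in `required`:

```python
# Hypothetical strict-mode tool definition. Without "strict": True,
# "additionalProperties": False, and a complete "required" list, the
# parse(...) call returns the 400 described above.
tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "strict": True,
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string"},
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["city", "unit"],
            "additionalProperties": False,
        },
    },
}
```

Passing `tools=[tool]` alongside a json_schema `response_format` avoids the 400.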
As an additional note: in .NET, I can use the OpenAI library and make a call to the normal chat completions endpoint, configure the proper `strict` JSON Schema for the json_schema response format, and not need to manipulate the function calling JSON schema to include `strict` or `additionalProperties`, and the calls work fine. No 400s encountered. Something like this: […]

To Reproduce
1. Configure the json_schema `response_format`.
2. Call `client.beta.chat.completions.parse(...)` with a tool whose JSON schema does not contain the `strict`/`additionalProperties` keys/values.
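The failing condition in step 2 can be checked locally before sending a request. The helper below is a rough sketch, not the API's actual validator, and `is_strict_compatible` is a name invented here:

```python
def is_strict_compatible(schema: dict) -> list:
    """Return a list of problems that would likely make this tool
    parameter schema fail strict-mode validation (a rough local
    check of the documented rules, not the API's own validator)."""
    problems = []

    def walk(node, path):
        if not isinstance(node, dict):
            return
        if node.get("type") == "object":
            if node.get("additionalProperties") is not False:
                problems.append(f"{path}: additionalProperties must be false")
            props = node.get("properties", {})
            missing = set(props) - set(node.get("required", []))
            if missing:
                problems.append(f"{path}: properties not required: {sorted(missing)}")
            for name, sub in props.items():
                walk(sub, f"{path}.{name}")
        if node.get("type") == "array":
            walk(node.get("items", {}), f"{path}[]")

    walk(schema, "$")
    return problems


# A schema missing both strict-mode requirements yields two problems.
bad = {"type": "object", "properties": {"city": {"type": "string"}}}
print(is_strict_compatible(bad))
```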
No response
OS
macOS
Python version
Python 3.12.5
Library version
openai 1.47.0