extract_json_between_markers doesn't handle responses missing json markers #53
Comments
Hi! Sure thing, if you think there would be a measurable improvement to the success rate. From our observations this could happen, but it is unclear whether it happens that often, or more often than other types of JSON formatting issues.
I encountered raw JSON running perform_review with GPT-4o on Azure. The change in the PR worked there, but I have not run it against other models.
Interesting - this is quite a pathological case, since we ask for chain-of-thought steps as well as the JSON. We didn't observe formatting failures more than perhaps 1% of the time; are you observing a higher rate?
I only ran perform_review on a paper using GPT-4o on Azure. The response was always returned as well-formatted JSON, but the method was returning None because the response did not contain the JSON markers. With this fallback mechanism it is returned as valid JSON.
I used strictjson to reformat the responses. Since I'm running locally, token usage is no longer an issue.
I found when using this with Azure OpenAI that the LLM responses were coming back as valid JSON, but extract_json_between_markers was rejecting them because it enforces a check for the JSON markers, which were not present in the response.
I can submit a PR that adds a fallback to try parsing the output as JSON without those markers, if you want it.
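The proposed fallback could be sketched roughly as follows. This is not the actual patch from the PR, just a minimal illustration of the idea: first look for a fenced ```json block, and if none is found, attempt to parse the whole response as bare JSON before giving up. The function name `extract_json_with_fallback` is hypothetical.

```python
import json
import re

def extract_json_with_fallback(llm_output):
    """Extract a JSON object from an LLM response.

    Hypothetical sketch of the fallback: try a fenced ```json block
    first; if the model returned bare JSON without markers, parse the
    whole response instead of returning None.
    """
    # Primary path: look for a ```json ... ``` fenced block.
    match = re.search(r"```json\s*(.*?)\s*```", llm_output, re.DOTALL)
    if match:
        candidate = match.group(1)
    else:
        # Fallback path: no markers present; treat the whole
        # response as the JSON payload.
        candidate = llm_output.strip()
    try:
        return json.loads(candidate)
    except json.JSONDecodeError:
        return None
```

With this, a marker-free response such as `{"Overall": 5}` parses successfully instead of being rejected, while responses that are neither fenced nor valid JSON still return None.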