Batch requests (https://platform.openai.com/docs/api-reference/batch) are a new OpenAI API feature. They will allow us to avoid rate limits and reduce token costs, at the expense of up to 24 hours of waiting time. For some assignments this might be acceptable, so we can add a toggle to allow the use of the batch API.
Currently only /v1/chat/completions is supported.
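As a sketch of what the toggle would drive under the hood, a batch submission against /v1/chat/completions builds a .jsonl file with one request per line and uploads it. The model name, prompts, and file name below are illustrative, not taken from our codebase:

```python
import json

# Hypothetical sketch: build a batch input file for /v1/chat/completions.
# Each line of the .jsonl file is one request; custom_id ties results
# back to our items once the batch completes.
prompts = ["Summarize assignment 1", "Summarize assignment 2"]

batch_lines = [
    json.dumps({
        "custom_id": f"request-{i}",
        "method": "POST",
        "url": "/v1/chat/completions",
        "body": {
            "model": "gpt-4o-mini",  # illustrative model choice
            "messages": [{"role": "user", "content": prompt}],
        },
    })
    for i, prompt in enumerate(prompts)
]

with open("batch_input.jsonl", "w") as f:
    f.write("\n".join(batch_lines))

# Submitting the file would then look roughly like this (requires the
# `openai` package and an API key, so it is not executed here):
#
#   from openai import OpenAI
#   client = OpenAI()
#   input_file = client.files.create(
#       file=open("batch_input.jsonl", "rb"), purpose="batch"
#   )
#   batch = client.batches.create(
#       input_file_id=input_file.id,
#       endpoint="/v1/chat/completions",
#       completion_window="24h",  # results arrive within up to 24 hours
#   )
```

Results come back as another .jsonl file keyed by `custom_id`, which is why each request line needs a stable identifier.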
This is on hold, however, until embeddings are supported as well.
Embeddings are supported as of 2024-05.