I usually have ChatGPT write them for me by copying and pasting from the OpenAI API reference (example chat), but double-check everything, because ChatGPT always makes mistakes, especially around adding @JsonProperty annotations.
- Make all Java variables camel case, and use @JsonProperty for fields that OpenAI returns as snake case
- Include comments for each variable; I take these directly from the OpenAI website
- Include @Data on every response class, and @Builder @NoArgsConstructor @AllArgsConstructor @Data on every request
- Include basic class-level documentation and a link to the OpenAI reference page, example
- Add a JSON test for every new Java object; this ensures that your definition and variable name overrides are correct.
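Putting the rules above together, a response object looks roughly like the sketch below. The `Widget` class and its fields are invented for illustration; the `JsonProperty` annotation here is a local stand-in so the sketch compiles without Jackson on the classpath (in the library, import the real `com.fasterxml.jackson.annotation.JsonProperty`), and Lombok's `@Data`/`@Builder` are shown only in comments for the same reason.

```java
import java.lang.annotation.*;
import java.lang.reflect.Field;

public class WidgetSketch {
    // Local stand-in for com.fasterxml.jackson.annotation.JsonProperty so this
    // sketch compiles without Jackson; use the real annotation in the library.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.FIELD)
    @interface JsonProperty { String value(); }

    /**
     * A hypothetical widget returned by a fictional endpoint.
     * In the library, add Lombok's @Data here (and @Builder
     * @NoArgsConstructor @AllArgsConstructor @Data on request classes),
     * plus a link to the matching OpenAI reference page.
     */
    public static class Widget {
        /** The object type (comment copied from the OpenAI reference). */
        public String object;

        /** Creation time; snake_case in the API, camelCase in Java. */
        @JsonProperty("created_at")
        public long createdAt;
    }

    public static void main(String[] args) throws Exception {
        // The JSON test boils down to checking mappings like this one.
        Field f = Widget.class.getField("createdAt");
        System.out.println(f.getAnnotation(JsonProperty.class).value()); // prints created_at
    }
}
```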
Add to OpenAiApi
This is usually straightforward; use OpenAiResponse for endpoints that return lists.
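The list wrapper has roughly the shape sketched below (a simplified stand-in, not the library's exact class): OpenAI list endpoints return `{ "object": "list", "data": [ ... ] }`, and the generic wrapper captures that once so each endpoint in OpenAiApi can declare something like `Single<OpenAiResponse<Model>> listModels()`.

```java
import java.util.Arrays;
import java.util.List;

public class OpenAiResponseSketch {
    // Simplified sketch of the generic list wrapper; endpoints that return
    // { "object": "list", "data": [ ... ] } deserialize into this shape.
    public static class OpenAiResponse<T> {
        /** Always "list" for list endpoints. */
        public String object;

        /** The returned items. */
        public List<T> data;
    }

    public static void main(String[] args) {
        OpenAiResponse<String> response = new OpenAiResponse<>();
        response.object = "list";
        response.data = Arrays.asList("model-a", "model-b");
        System.out.println(response.data.size()); // prints 2
    }
}
```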
Add to OpenAiService
Since 99% of the work of this library is done on OpenAI's servers, the objective of these tests is to call each endpoint at least once.
Specify every available parameter to make sure that OpenAI accepts everything, but don't create extra test cases unless a parameter drastically affects the results.
For example, CompletionTest has one test for normal completions, and one for streaming.
If your test relies on creating and retrieving external resources, FineTuningTest is a good example of how to share resources between tests and clean up afterwards.
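JUnit specifics aside, the shape of that pattern is: create the shared resource once before the tests run, use it from each test, and delete it in a finally block so cleanup happens even when a test fails. A framework-free sketch (the names are illustrative, not taken from FineTuningTest; in real tests, JUnit's @BeforeAll/@AfterAll play the setup/cleanup roles):

```java
public class SharedResourceSketch {
    // Stands in for a remote resource, e.g. an uploaded fine-tune file id.
    static String fileId;

    static void setup()   { fileId = "file-123"; }  // create once, before all tests
    static void cleanup() { fileId = null; }        // delete, after all tests

    static void testCreateJob() { require(fileId != null); }
    static void testListJobs()  { require(fileId != null); }

    static void require(boolean ok) {
        if (!ok) throw new AssertionError("test failed");
    }

    public static void main(String[] args) {
        setup();
        try {
            testCreateJob();
            testListJobs();
        } finally {
            cleanup(); // runs even if a test throws, so nothing leaks
        }
        System.out.println("cleaned up: " + (fileId == null)); // prints cleaned up: true
    }
}
```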