A Rust client library for the OpenAI API.
Supports multiple OpenAI endpoints, including Chat, Completions, Embeddings, Models, Moderations, Files, Fine-tunes, and more. Built with async-first design using Tokio and Reqwest, featuring robust error handling and SSE streaming for real-time responses.
Important: This release introduces breaking changes. The project has been significantly refactored, and the changes are too extensive for a step-by-step migration guide. If you have existing code, you will likely need to adapt your function calls and data structures to the new design; see the updated examples folder or the documentation for guidance.
- Features
- Installation
- Quick Start
- API Highlights
- Environment Variables
- Streaming (SSE)
- Example Projects
- Contributing
- License
- Async-first: Built on Tokio + Reqwest.
- Complete Coverage of major OpenAI API endpoints:
- Chat (with streaming SSE for partial responses)
- Completions
- Models (list and retrieve)
- Embeddings
- Moderations
- Files (upload, list, download, delete)
- Fine-Tunes (create, list, retrieve, cancel, events, delete models)
- Rustls for TLS: avoids system dependencies like OpenSSL.
- Thorough Error Handling with a custom `OpenAIError` type (see the short sketch after this list).
- Typed request/response structures (Serde-based).
- Extensive Documentation and usage examples, including SSE streaming.
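As a quick illustration of `OpenAIError` in practice, here is a minimal sketch of handling the `Result` explicitly instead of propagating it with `?`; it only constructs the client and relies on nothing beyond the items shown in the examples below.

use chat_gpt_lib_rs::OpenAIClient;

fn main() {
    // OpenAIError implements Debug (the Quick Start below returns it from `main`),
    // so it can be printed with `{:?}` when you want to handle it yourself.
    match OpenAIClient::new(None) {
        Ok(_client) => println!("Client ready."),
        Err(e) => eprintln!("Could not create the OpenAI client: {e:?}"),
    }
}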
In your `Cargo.toml`, under `[dependencies]`:
chat-gpt-lib-rs = "x.y.z" # Replace x.y.z with the latest version
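The examples in this README also use `#[tokio::main]`, and the streaming example pulls in `futures_util::StreamExt`, so you will typically want these alongside it (the version numbers below are illustrative):

tokio = { version = "1", features = ["macros", "rt-multi-thread"] }
futures-util = "0.3" # only needed for the SSE streaming example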
Then build your project:
cargo build
Below is a minimal example using Completions:
use chat_gpt_lib_rs::{OpenAIClient, OpenAIError};
use chat_gpt_lib_rs::api_resources::completions::{create_completion, CreateCompletionRequest};
#[tokio::main]
async fn main() -> Result<(), OpenAIError> {
// Pass your API key directly or rely on the OPENAI_API_KEY environment variable
let client = OpenAIClient::new(None)?;
// Prepare a request to generate text completions
let request = CreateCompletionRequest {
model: "text-davinci-003".to_string(),
prompt: Some("Write a short advertisement for ice cream.".into()),
max_tokens: Some(50),
temperature: Some(0.7),
..Default::default()
};
// Call the Completions API
let response = create_completion(&client, &request).await?;
println!("Completion Response:\n{:?}", response);
Ok(())
}
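The Quick Start simply Debug-prints the whole response. If the typed response mirrors the OpenAI Completions payload, with a `choices` vector whose entries carry the generated `text`, extracting the first completion inside that `main` might look like the sketch below; those field names are an assumption here, so check the crate documentation if it does not compile as written.

// Hypothetical: assumes a `choices` Vec whose entries expose a `text` field,
// mirroring the OpenAI Completions response shape.
if let Some(choice) = response.choices.first() {
    println!("First completion: {}", choice.text);
}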
// Models: list every model available to your account, or retrieve one by ID.
use chat_gpt_lib_rs::api_resources::models;
let all_models = models::list_models(&client).await?;
println!("Available Models: {:?}", all_models);
let model_details = models::retrieve_model(&client, "text-davinci-003").await?;
println!("Model details: {:?}", model_details);
// Completions: classic prompt-in, text-out requests.
use chat_gpt_lib_rs::api_resources::completions::{
    create_completion, CreateCompletionRequest
};
let req = CreateCompletionRequest {
model: "text-davinci-003".to_string(),
prompt: Some("Hello, world!".into()),
max_tokens: Some(50),
..Default::default()
};
let resp = create_completion(&client, &req).await?;
println!("Completion:\n{:?}", resp);
// Chat: multi-turn conversations built from system/user/assistant messages.
use chat_gpt_lib_rs::api_resources::chat::{
    create_chat_completion, CreateChatCompletionRequest, ChatMessage, ChatRole
};
let chat_req = CreateChatCompletionRequest {
model: "gpt-3.5-turbo".into(),
messages: vec![
ChatMessage {
role: ChatRole::System,
content: "You are a helpful assistant.".to_string(),
name: None,
},
ChatMessage {
role: ChatRole::User,
content: "Give me a fun fact about Rust.".to_string(),
name: None,
},
],
max_tokens: Some(50),
..Default::default()
};
let response = create_chat_completion(&client, &chat_req).await?;
println!("Chat reply:\n{:?}", response);
// Embeddings: turn text into a numeric vector for search and similarity.
use chat_gpt_lib_rs::api_resources::embeddings::{
    create_embeddings, CreateEmbeddingsRequest, EmbeddingsInput
};
let emb_req = CreateEmbeddingsRequest {
model: "text-embedding-ada-002".to_string(),
input: EmbeddingsInput::String("Hello world!".to_string()),
user: None,
};
let emb_res = create_embeddings(&client, &emb_req).await?;
println!("Embedding:\n{:?}", emb_res.data[0].embedding);
// Moderations: classify text for potentially harmful content.
use chat_gpt_lib_rs::api_resources::moderations::{
    create_moderation, CreateModerationRequest, ModerationsInput
};
let mod_req = CreateModerationRequest {
input: ModerationsInput::String("I hate you and want to harm you.".into()),
model: None,
};
let mod_res = create_moderation(&client, &mod_req).await?;
println!("Moderation result:\n{:?}", mod_res.results[0].flagged);
// Files: upload, list, download, and delete files (e.g. fine-tune training data).
use chat_gpt_lib_rs::api_resources::files::{
    upload_file, list_files, retrieve_file_content, delete_file,
    UploadFilePurpose
};
use std::path::PathBuf;
let file_path = PathBuf::from("training_data.jsonl");
let upload = upload_file(&client, &file_path, UploadFilePurpose::FineTune).await?;
println!("Uploaded file ID: {}", upload.id);
let all_files = list_files(&client).await?;
println!("All files: {:?}", all_files.data);
let content = retrieve_file_content(&client, &upload.id).await?;
println!("File content size: {}", content.len());
delete_file(&client, &upload.id).await?;
// Fine-tunes: create fine-tuning jobs and inspect their status.
use chat_gpt_lib_rs::api_resources::fine_tunes::{
    create_fine_tune, list_fine_tunes, CreateFineTuneRequest
};
let ft_req = CreateFineTuneRequest {
training_file: "file-abc123".into(),
model: Some("curie".to_string()),
..Default::default()
};
let job = create_fine_tune(&client, &ft_req).await?;
println!("Created fine-tune job: {}", job.id);
let all_jobs = list_fine_tunes(&client).await?;
println!("All fine-tune jobs: {:?}", all_jobs.data);
By default, the library reads your OpenAI API key from the `OPENAI_API_KEY` environment variable:
export OPENAI_API_KEY="sk-xxx"
Or use a `.env` file with dotenvy.
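For example, a minimal sketch that loads the key from a local `.env` file with dotenvy (added as a dependency) before constructing the client:

use chat_gpt_lib_rs::{OpenAIClient, OpenAIError};

#[tokio::main]
async fn main() -> Result<(), OpenAIError> {
    // Load variables from ./.env if the file exists; ignore the error when it is absent.
    dotenvy::dotenv().ok();

    // With no key passed explicitly, the client falls back to OPENAI_API_KEY.
    let _client = OpenAIClient::new(None)?;
    // ... then use the client as in the examples above.
    Ok(())
}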
Alternatively, provide a key directly:
let client = OpenAIClient::new(Some("sk-your-key".to_string()))?;
For real-time partial responses, pass `stream = true` to the Chat or Completions endpoints and process the resulting stream:
use futures_util::StreamExt;
use chat_gpt_lib_rs::api_resources::chat::{
create_chat_completion_stream, CreateChatCompletionRequest, ChatMessage, ChatRole
};
#[tokio::main]
async fn main() -> Result<(), chat_gpt_lib_rs::OpenAIError> {
let client = chat_gpt_lib_rs::OpenAIClient::new(None)?;
let request = CreateChatCompletionRequest {
model: "gpt-3.5-turbo".into(),
messages: vec![ChatMessage {
role: ChatRole::User,
content: "Tell me a joke.".to_string(),
name: None,
}],
stream: Some(true),
..Default::default()
};
let mut stream = create_chat_completion_stream(&client, &request).await?;
while let Some(chunk_result) = stream.next().await {
match chunk_result {
Ok(partial) => {
println!("Chunk: {:?}", partial);
}
Err(e) => eprintln!("Stream error: {:?}", e),
}
}
println!("Done streaming.");
Ok(())
}
Check the `examples/` folder for CLI chat demos and more.
Third-Party Usage:
- techlead uses this library for advanced AI-driven chat interactions.
We welcome contributions and feedback! To get started:
- Fork this repository and clone your fork locally.
- Create a branch for your changes or fixes.
- Make & test your changes.
- Submit a pull request describing the changes.
Because this release is a major refactor, please note that much of the code has changed. If you’re updating older code, see the new examples and docs for updated usage patterns.
Licensed under the Apache License 2.0—see LICENSE for full details.
Breaking Changes Note:
Due to the extensive updates, we do not provide a direct migration guide. You may need to adapt your existing code to updated function signatures and data structures. Consult the new documentation and examples to get started quickly.