A Rust crate that provides a simple interface for interacting with the OpenAI API and performing language-based tasks. This crate focuses on streaming responses from the API, so output can be processed in real time as it is generated rather than after the full response has arrived.
The `GptStream` struct provides a convenient interface for interacting with the OpenAI chat completions endpoint. To use this crate, create an instance of the `OpenAIStream` struct, passing your API key as a parameter. Then call the `gpt_stream` method with your desired input and await the response. The returned `GptStream` object lets you asynchronously iterate over the streaming API response.
Example:
```toml
[dependencies]
openai-api-stream-rs = "0.1.0"
tokio = { version = "1.12.0", features = ["full"] }
futures = "0.3.19"
```
```rust
use openai_api_stream_rs::OpenAIStream;
use futures::stream::StreamExt;

#[tokio::main]
async fn main() {
    let api_key = "your_api_key";
    let openai_stream = OpenAIStream::new(api_key.to_string());

    let input = r#"
    {
        "model": "gpt-3.5-turbo",
        "messages": [
            {
                "role": "user",
                "content": "Write a simple advanced usage of Rust in one sentence"
            }
        ]
    }
    "#;

    // Request a streaming completion and pin the stream so it can be polled.
    let gpt_stream = openai_stream.gpt_stream(input).await.unwrap();
    let mut gpt_stream = Box::pin(gpt_stream);

    // Print each streamed response as it arrives.
    while let Some(response) = gpt_stream.next().await {
        println!("{}", response);
    }
}
```
Note: Replace `"your_api_key"` with your actual OpenAI API key.
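Rather than hardcoding the key, you might prefer to load it from an environment variable. The sketch below assumes the key is stored in a variable named `OPENAI_API_KEY`; the variable name and the helper function are illustrative choices, not part of this crate's API.

```rust
use std::env;

use openai_api_stream_rs::OpenAIStream;

// Hypothetical helper: build an OpenAIStream from the OPENAI_API_KEY
// environment variable instead of a hardcoded string.
fn stream_from_env() -> Result<OpenAIStream, env::VarError> {
    let api_key = env::var("OPENAI_API_KEY")?;
    Ok(OpenAIStream::new(api_key))
}
```

Loading the key this way keeps it out of source control and lets the same binary run against different accounts.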
For more details and advanced configuration options, please refer to the crate documentation.
Note: This crate is still in development and may be subject to changes and updates.