Commit

fixed some issues caught at package deploy, minor changes
anasfik committed Nov 17, 2023
1 parent 0c6ab25 commit 29b124b
Showing 4 changed files with 7 additions and 10 deletions.
5 changes: 1 addition & 4 deletions CHANGELOG.md
@@ -1,11 +1,8 @@
# Changelog

## 4.1.5

- Removed the exposed `isWeb` field that configured the package to use fetch_client instead of http_client manually, in favor of using `dart.library.js` and `dart.library.io` conditional imports to automatically detect the platform and use the appropriate client for it.

## 4.1.4

- Removed the exposed `isWeb` field that configured the package to use fetch_client instead of http_client manually, in favor of using `dart.library.js` and `dart.library.io` conditional imports to automatically detect the platform and use the appropriate client for it (see the sketch below).
- Exposed field for configuring the package to use fetch_client instead of http_client for making requests in web apps (Flutter web, etc.).
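
For illustration, a minimal sketch of the conditional-import pattern this entry refers to; the file names and `createClient()` are hypothetical placeholders, not the package's actual internals:

```dart
// Hypothetical files: client_stub.dart (default), client_io.dart (native
// platforms, http-based), client_web.dart (web, fetch-based). Each would
// expose a createClient() returning the platform-appropriate HTTP client.
import 'client_stub.dart'
    if (dart.library.io) 'client_io.dart'
    if (dart.library.js) 'client_web.dart';

final client = createClient();
```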

## 4.1.3
7 changes: 3 additions & 4 deletions README.md
@@ -171,7 +171,7 @@ OpenAI.showLogs = true;

This will only log the request steps, such as when the request started and finished, or when decoding started...

But if you want to log raw responses that are returned from the API (JSON, RAW...), you can set the `showResponsesLogs` to `true`:
But if you want to log raw responses that are returned from the API (JSON, RAW...), you can set the `showResponsesLogs`:

```dart
OpenAI.showResponsesLogs = true;
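// When enabled, the raw responses returned by the API (JSON, RAW bodies)
// are logged as well, in addition to the request-step logs above.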
@@ -605,15 +605,14 @@ to get access to the translation API, and translate an audio file to english, you
OpenAIAudioModel translation = await OpenAI.instance.audio.createTranslation(
file: File(/* THE FILE PATH*/),
model: "whisper-1",
responseFormat: OpenAIAudioResponseFo rmat.text,
responseFormat: OpenAIAudioResponseFormat.text,
);
// print the translation.
print(translation.text);
```
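
For a fuller, self-contained sketch of the same call (the `audio.mp3` path is a hypothetical placeholder, and the API key is assumed to be available in the `OPENAI_API_KEY` environment variable):

```dart
import 'dart:io';

import 'package:dart_openai/dart_openai.dart';

Future<void> main() async {
  // Assumes the key is exported as OPENAI_API_KEY in the environment.
  OpenAI.apiKey = Platform.environment['OPENAI_API_KEY']!;

  // Hypothetical local audio file to translate to English.
  final OpenAIAudioModel translation =
      await OpenAI.instance.audio.createTranslation(
    file: File("audio.mp3"),
    model: "whisper-1",
    responseFormat: OpenAIAudioResponseFormat.text,
  );

  // Print the translated text.
  print(translation.text);
}
```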

Learn more from [here](C:\projects\Flutter_and_Dart\openai
).
Learn more from [here](https://platform.openai.com/docs/api-reference/audio/createTranslation).

</br>

3 changes: 2 additions & 1 deletion lib/src/core/models/edit/sub_models/usage.dart
@@ -18,7 +18,7 @@ final class OpenAIEditModelUsage {
int get hashCode =>
promptTokens.hashCode ^ completionTokens.hashCode ^ totalTokens.hashCode;

/// {@template openai_edit_model_usage}
/// {@macro openai_edit_model_usage}
const OpenAIEditModelUsage({
required this.promptTokens,
required this.completionTokens,
@@ -27,6 +27,7 @@ final class OpenAIEditModelUsage {

/// {@template openai_edit_model_usage}
/// This method is used to convert a [Map<String, dynamic>] object to a [OpenAIEditModelUsage] object.
/// {@endtemplate}
factory OpenAIEditModelUsage.fromMap(Map<String, dynamic> json) {
return OpenAIEditModelUsage(
promptTokens: json['prompt_tokens'],
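For illustration, a hypothetical usage sketch of the factory above; it assumes `OpenAIEditModelUsage` is reachable from the package's exports and that the `completion_tokens` and `total_tokens` keys follow the same snake_case pattern as `prompt_tokens`:

```dart
void main() {
  // Hypothetical usage map mirroring the API's response shape.
  final usage = OpenAIEditModelUsage.fromMap({
    'prompt_tokens': 12,
    'completion_tokens': 20,
    'total_tokens': 32,
  });

  print(usage.promptTokens); // 12
  print(usage.totalTokens); // 32
}
```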
2 changes: 1 addition & 1 deletion pubspec.yaml
@@ -1,6 +1,6 @@
name: dart_openai
description: Dart SDK for OpenAI APIs (GPT-3 & DALL-E), to easily integrate the power of OpenAI's state-of-the-art AI models into Dart applications.
version: 4.1.5
version: 4.1.4
homepage: https://github.com/anasfik/openai
repository: https://github.com/anasfik/openai
documentation: https://github.com/anasfik/openai/blob/main/README.md
