Adds api-server and model eval chat ENVs to native mode example
Signed-off-by: Brent Salisbury <[email protected]>
nerdalert committed Jan 25, 2025
1 parent 94ddd85 commit 0abf69b
Showing 2 changed files with 7 additions and 5 deletions.
2 changes: 2 additions & 0 deletions .env.native.example
@@ -12,3 +12,5 @@ NEXT_PUBLIC_TAXONOMY_ROOT_DIR=
NEXT_PUBLIC_EXPERIMENTAL_FEATURES=false

# IL_FILE_CONVERSION_SERVICE=http://localhost:8000 # Uncomment and fill in the http://host:port if the docling conversion service is running.
+# NEXT_PUBLIC_API_SERVER=http://localhost:8080 # Uncomment and point to the URL the api-server is running on. Native mode only; the api-server must run on the same host as the UI.
+# NEXT_PUBLIC_MODEL_SERVER_URL=http://x.x.x.x # Used for the model chat evaluation vLLM instances. Server-side rendering is currently not supported, so the client must have access to this address for model chat evaluation to function in the UI. Ports 8000 and 8001 are currently hardcoded, which is why there is no option to set them.
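
Since the ports are fixed, the UI only needs the host portion of the model server address. A minimal sketch (assuming a Next.js/Node context; the variable names below are illustrative, not taken from the UI source) of how the two evaluation endpoints fall out of this setting:

```typescript
// Sketch only: the hardcoded ports 8000 and 8001 are appended to
// NEXT_PUBLIC_MODEL_SERVER_URL on the client; nothing here is lifted
// from the actual UI code.
const modelServer = process.env.NEXT_PUBLIC_MODEL_SERVER_URL ?? "http://x.x.x.x";

// The two vLLM instances used for model chat evaluation. Which port maps to
// which model is decided by the api-server, not by this env file.
const evalEndpoints = [`${modelServer}:8000`, `${modelServer}:8001`];

console.log(evalEndpoints);
```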
10 changes: 5 additions & 5 deletions api-server/README.md
@@ -166,7 +166,7 @@ Starts a training job.
{
"modelName": "name-of-the-model",
"branchName": "name-of-the-branch",
-"epochs": 10 // Optional
+"epochs": 10
}
```
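
As a hedged illustration of how this body might be submitted from the UI host: the base URL comes from `NEXT_PUBLIC_API_SERVER` in `.env.native.example`, but the `/jobs/train` path below is a placeholder assumption, not something shown in this diff.

```typescript
// Hypothetical sketch: POST the training-job body above to the api-server.
// The "/jobs/train" path is an assumption; check the api-server routes.
const apiServer = process.env.NEXT_PUBLIC_API_SERVER ?? "http://localhost:8080";

const res = await fetch(`${apiServer}/jobs/train`, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    modelName: "name-of-the-model",
    branchName: "name-of-the-branch",
    epochs: 10, // optional per the pre-change comment; omit for the server default
  }),
});
console.log(res.status, await res.json());
```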

@@ -199,7 +199,7 @@ Combines data generation and training into a single pipeline job.
{
"modelName": "name-of-the-model",
"branchName": "name-of-the-branch",
-"epochs": 10 // Optional
+"epochs": 10
}
```
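
The pipeline request has the same shape; a minimal sketch, again with an assumed path and with the optional `epochs` field omitted:

```typescript
// Hypothetical sketch: the "/jobs/pipeline" path is an assumption.
const apiServer = process.env.NEXT_PUBLIC_API_SERVER ?? "http://localhost:8080";

await fetch(`${apiServer}/jobs/pipeline`, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    modelName: "name-of-the-model",
    branchName: "name-of-the-branch",
    // epochs omitted; per the pre-change comment it is optional
  }),
});
```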

@@ -230,7 +230,7 @@ Serves the latest model checkpoint on port `8001`.

```json
{
-"checkpoint": "samples_12345" // Optional
+"checkpoint": "samples_12345"
}
```
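
A sketch of requesting that a checkpoint be served; the path is a placeholder assumption, while the port (`8001`) and the body shape come from the README text above.

```typescript
// Hypothetical sketch: ask the api-server to serve a checkpoint on port 8001.
// The "/model/serve-latest" path is an assumption.
const apiServer = process.env.NEXT_PUBLIC_API_SERVER ?? "http://localhost:8080";

await fetch(`${apiServer}/model/serve-latest`, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ checkpoint: "samples_12345" }), // optional per the pre-change comment
});
```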

@@ -353,7 +353,7 @@ Unloads a specific VLLM container.

```json
{
-"model_name": "pre-train" // Must be either "pre-train" or "post-train" for meow
+"model_name": "pre-train"
}
```
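
For the unload call, the pre-change comment documents the two accepted values; a sketch with an assumed path:

```typescript
// Hypothetical sketch: unload one of the two vLLM containers. "model_name"
// must be "pre-train" or "post-train" (per the pre-change comment above);
// the "/vllm/unload" path is an assumption.
const apiServer = process.env.NEXT_PUBLIC_API_SERVER ?? "http://localhost:8080";

await fetch(`${apiServer}/vllm/unload`, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ model_name: "pre-train" }),
});
```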

@@ -387,7 +387,7 @@ Fetches the status of a specific VLLM model.

```json
{
-"status": "running" // Possible values: "running", "loading", "stopped"
+"status": "running"
}
```
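
The response shape above lends itself to simple polling; a sketch where the path and query parameter are assumptions and only the three status values come from the README:

```typescript
// Hypothetical sketch: poll the status endpoint until the model is running.
// Path and query parameter are assumptions; the status values are documented
// in the README ("running", "loading", "stopped").
const apiServer = process.env.NEXT_PUBLIC_API_SERVER ?? "http://localhost:8080";

const res = await fetch(`${apiServer}/vllm/status?model_name=post-train`);
const { status } = (await res.json()) as { status: "running" | "loading" | "stopped" };
if (status === "running") {
  console.log("model is ready for chat evaluation");
}
```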
