diff --git a/_redirects b/_redirects
index 4258c82a..c13a1fe9 100644
--- a/_redirects
+++ b/_redirects
@@ -15,7 +15,7 @@
/docs/troubleshooting/undefined-issue /docs/troubleshooting 302
/getting-started/troubleshooting /docs/troubleshooting 302
/docs/troubleshooting/gpu-not-used /docs/troubleshooting 302
-/guides/integrations/openrouter /docs/remote-inference/openrouter 302
+/guides/integrations/openrouter /docs/remote-models/openrouter 302
/guides/integrations/continue /integrations/coding/continue-dev 302
/docs/extension-capabilities /docs/extensions 302
/guides/using-extensions /docs/extensions 302
@@ -24,7 +24,7 @@
/integrations/tensorrt /docs/built-in/tensorrt-llm 302
/guides/using-models/integrate-with-remote-server /docs/remote-inference/generic-openai 302
/guides/using-models/customize-engine-settings /docs/built-in/llama-cpp 302
-/developers/plugins/azure-openai /docs/remote-inference/openai 302
+/developers/plugins/azure-openai /docs/remote-models/openai 302
/docs/api-reference/assistants /api-reference#tag/assistants 302
/docs/api-reference/models/list /api-reference#tag/models 302
/docs/api-reference/threads /api-reference#tag/chat 302
@@ -53,18 +53,18 @@
/guides/advanced/ /docs/settings 302
/guides/engines/llamacpp/ /docs/built-in/llama-cpp 302
/guides/engines/tensorrt-llm/ /docs/built-in/tensorrt-llm 302
-/guides/engines/lmstudio/ /docs/local-inference/lmstudio 302
+/guides/engines/lmstudio/ /docs/local-models/lmstudio 302
/guides/engines/ollama/ /docs/built-in/llama-cpp 302
-/guides/engines/groq/ /docs/remote-inference/groq 302
-/guides/engines/mistral/ /docs/remote-inference/mistralai 302
-/guides/engines/openai/ /docs/remote-inference/openai 302
+/guides/engines/groq/ /docs/remote-models/groq 302
+/guides/engines/mistral/ /docs/remote-models/mistralai 302
+/guides/engines/openai/ /docs/remote-models/openai 302
/guides/engines/remote-server/ /docs/remote-inference/generic-openai 302
/extensions/ /docs/extensions 302
/integrations/discord/ /integrations/messaging/llmcord 302
/integrations/interpreter/ /integrations/function-calling/interpreter 302
/integrations/raycast/ /integrations/workflow-automation/raycast 302
/docs/integrations/raycast /integrations/workflow-automation/raycast 302
-/integrations/openrouter/ /docs/remote-inference/openrouter 302
+/integrations/openrouter/ /docs/remote-models/openrouter 302
/integrations/continue/ /integrations/coding/continue-dev 302
/troubleshooting/ /docs/troubleshooting 302
/changelog/changelog-v0.4.9/ /changelog 302
@@ -105,35 +105,35 @@
/developer/build-assistant/ /docs/assistants 302
/guides/integrations/ /docs/integrations 302
/specs/hub/ /docs 302
-/install/windows/ /docs/desktop-installation/windows 302
-/install/linux/ /docs/desktop-installation/linux 302
-/install/nightly/ /docs/desktop-installation/windows 302
+/install/windows/ /docs/desktop/windows 302
+/install/linux/ /docs/desktop/linux 302
+/install/nightly/ /docs/desktop/windows 302
/docs/engineering/fine-tuning/ /docs 302
/developer/assistant/ /docs/assistants 302
/guides/common-error/broken-build/ /docs/troubleshooting#broken-build 302
/guides/using-server/using-server/ /docs/local-api 302
-/guides/integrations/azure-openai-service/ /docs/remote-inference/openai 302
+/guides/integrations/azure-openai-service/ /docs/remote-models/openai 302
/specs/messages/ /docs/threads 302
/docs/engineering/models/ /docs/models 302
/docs/specs/assistants/ /docs/assistants 302
/docs/engineering/chats/ /docs/threads 302
/guides/using-extensions/extension-settings/ /docs/extensions 302
/guides/models/customize-engine/ /docs/models 302
-/guides/integration/mistral/ /docs/remote-inference/mistralai 302
+/guides/integration/mistral/ /docs/remote-models/mistralai 302
/guides/common-error/ /docs/troubleshooting 302
-/guides/integrations/ollama/ /docs/local-inference/ollama 302
+/guides/integrations/ollama/ /docs/local-models/ollama 302
/server-suite/ /api-reference 302
-/guides/integrations/lmstudio/ /docs/local-inference/lmstudio 302
-/guides/integrations/mistral-ai/ /docs/remote-inference/mistralai 302
+/guides/integrations/lmstudio/ /docs/local-models/lmstudio 302
+/guides/integrations/mistral-ai/ /docs/remote-models/mistralai 302
/guides/start-server/ /docs/local-api 302
/guides/changelog/ /changelog 302
/guides/models-list/ /docs/models 302
/guides/thread/ /docs/threads 302
/docs/engineering/messages/ /docs/threads 302
/guides/faqs/ /about/faq 302
-/docs/integrations/openrouter/ /docs/remote-inference/openrouter 302
-/guides/windows /docs/desktop-installation/windows 302
-/docs/integrations/ollama/ /docs/local-inference/ollama 302
+/docs/integrations/openrouter/ /docs/remote-models/openrouter 302
+/guides/windows /docs/desktop/windows 302
+/docs/integrations/ollama/ /docs/local-models/ollama 302
/api/overview/ /api-reference 302
/docs/extension-guides/ /docs/extensions 302
/specs/settings/ /docs 302
@@ -144,7 +144,7 @@
/guides/using-models/import-manually/ /docs/models 302
/docs/team/contributor-program/ /about/team 302
/docs/modules/models /docs/models 302
-/getting-started/install/linux /docs/desktop-installation/linux 302
+/getting-started/install/linux /docs/desktop/linux 302
/guides/chatting/start-thread/ /docs/threads 302
/api/files/ /docs 302
/specs/threads/ /docs/threads 302
@@ -152,7 +152,7 @@
/guides/chatting/upload-images/ /docs/threads 302
/guides/using-models/customize-models/ /docs/models 302
/docs/modules/models/ /docs/models 302
-/getting-started/install/linux/ /docs/desktop-installation/linux 302
+/getting-started/install/linux/ /docs/desktop/linux 302
/specs/chats/ /docs/threads 302
/specs/engine/ /docs 302
/specs/data-structures /docs 302
@@ -160,15 +160,15 @@
/docs/get-started/use-local-server/ /docs/local-api 302
/guides/how-jan-works/ /about/how-we-work 302
/guides/install/cloud-native /docs/installation/server 302
-/guides/windows/ /docs/desktop-installation/windows 302
+/guides/windows/ /docs/desktop/windows 302
/specs/ /docs 302
/docs/get-started/build-extension/ /docs/extensions 302
/specs/files/ /docs 302
/guides/using-models/package-models/ /docs/models 302
-/install/overview/ /docs/desktop-installation/windows 302
+/install/overview/ /docs/desktop/windows 302
/docs/get-started/extension-anatomy/ /docs/extensions 302
/docs/get-started/ /docs 302
-/guides/mac/ /docs/desktop-installation/mac 302
+/guides/mac/ /docs/desktop/mac 302
/intro/ /about 302
/specs/fine-tuning/ /docs 302
/guides/server/ /docs/installation/server 302
@@ -180,13 +180,13 @@
/reference/store/ /api-reference 302
/tutorials/build-chat-app /docs/quickstart 302
/features/acceleration /docs/built-in/llama-cpp 302
-/getting-started/install/mac /docs/desktop-installation/mac 302
+/getting-started/install/mac /docs/desktop/mac 302
docs/guides/fine-tuning/what-models-can-be-fine-tuned /docs 302
/docs/specs/threads /docs/threads 302
/docs/api-reference/fine-tuning /api-reference 302
/docs/guides/speech-to-text/prompting /docs/quickstart 302
/docs/guides/legacy-fine-tuning/analyzing-your-fine-tuned-model /docs 302
-/getting-started/install/windows /docs/desktop-installation/windows 302
+/getting-started/install/windows /docs/desktop/windows 302
/docs/modules/assistants /docs/assistants 302
/docs/modules/chats /docs/threads 302
/docs/specs/chats /docs/threads 302
@@ -203,7 +203,7 @@ docs/guides/fine-tuning/what-models-can-be-fine-tuned /docs 302
/docs/guides/fine-tuning /docs 302
/docs/specs/files /docs 302
/docs/modules/threads /docs/threads 302
-/guides/linux /docs/desktop-installation/linux 302
+/guides/linux /docs/desktop/linux 302
/developer/build-engine/engine-anatomy/ /docs 302
/developer/engine/ /docs 302
/docs/product/system-monitor/ /docs 302
@@ -213,7 +213,7 @@ docs/guides/fine-tuning/what-models-can-be-fine-tuned /docs 302
/guides/troubleshooting/gpu-not-used/ /docs/troubleshooting#troubleshooting-nvidia-gpu 302
/docs/integrations/langchain/ /docs/integrations 302
/onboarding/ /docs/quickstart 302
-/installation/hardware/ /docs/desktop-installation/windows 302
+/installation/hardware/ /docs/desktop/windows 302
/docs/features/load-unload /docs 302
/guides/chatting/upload-docs/ /docs/threads 302
/developer/build-extension/package-your-assistant/ /docs 302
@@ -273,7 +273,7 @@ docs/guides/fine-tuning/what-models-can-be-fine-tuned /docs 302
/ai/been-you /about 302
/tokenizer?view=bpe /docs 302
/docs/engineering/ /docs 302
-/developer/install-and-prerequisites#system-requirements /docs/desktop-installation/windows 302
+/developer/install-and-prerequisites#system-requirements /docs/desktop/windows 302
/guides/quickstart /docs/quickstart 302
/guides/models /docs/models 302
/guides/threads /docs/threads 302
@@ -281,18 +281,18 @@ docs/guides/fine-tuning/what-models-can-be-fine-tuned /docs 302
/guides/advanced /docs/settings 302
/guides/engines/llamacpp /docs/built-in/llama-cpp 302
/guides/engines/tensorrt-llm /docs/built-in/tensorrt-llm 302
-/guides/engines/lmstudio /docs/local-inference/lmstudio 302
-/guides/engines/ollama /docs/local-inference/ollama 302
-/guides/engines/groq /docs/remote-inference/groq 302
-/guides/engines/mistral /docs/remote-inference/mistralai 302
-/guides/engines/openai /docs/remote-inference/openai 302
+/guides/engines/lmstudio /docs/local-models/lmstudio 302
+/guides/engines/ollama /docs/local-models/ollama 302
+/guides/engines/groq /docs/remote-models/groq 302
+/guides/engines/mistral /docs/remote-models/mistralai 302
+/guides/engines/openai /docs/remote-models/openai 302
/guides/engines/remote-server /docs/remote-inference/generic-openai 302
/extensions /docs/extensions 302
/integrations/discord /integrations/messaging/llmcord 302
/docs/integrations/discord /integrations/messaging/llmcord 302
/integrations/interpreter /integrations/function-calling/interpreter 302
/integrations/raycast /integrations/workflow-automation/raycast 302
-/integrations/openrouter /docs/remote-inference/openrouter 302
+/integrations/openrouter /docs/remote-models/openrouter 302
/integrations/continue /integrations/coding/continue-dev 302
/troubleshooting /docs/troubleshooting 302
/changelog/changelog-v0.4.9 /changelog 302
@@ -323,7 +323,7 @@ docs/guides/fine-tuning/what-models-can-be-fine-tuned /docs 302
/docs/troubleshooting/undefined-issue/ /docs/troubleshooting 302
/getting-started/troubleshooting/ /docs/troubleshooting 302
/docs/troubleshooting/gpu-not-used/ /docs/troubleshooting#troubleshooting-nvidia-gpu 302
-/guides/integrations/openrouter/ /docs/remote-inference/openrouter 302
+/guides/integrations/openrouter/ /docs/remote-models/openrouter 302
/guides/integrations/continue/ /integrations/coding/continue-dev 302
/guides/using-extensions/ /docs/extensions 302
/features/extensions/ /docs/extensions 302
@@ -331,7 +331,7 @@ docs/guides/fine-tuning/what-models-can-be-fine-tuned /docs 302
/integrations/tensorrt/ /docs/built-in/tensorrt-llm 302
/guides/using-models/integrate-with-remote-server/ /docs/remote-inference/generic-openai 302
/guides/using-models/customize-engine-settings/ /docs/built-in/llama-cpp 302
-/developers/plugins/azure-openai/ /docs/remote-inference/openai 302
+/developers/plugins/azure-openai/ /docs/remote-models/openai 302
/docs/api-reference/assistants/ /api-reference#tag/assistants 302
/docs/api-reference/models/list/ /api-reference#tag/models 302
/docs/api-reference/threads/ /api-reference#tag/chat 302
@@ -368,34 +368,34 @@ docs/guides/fine-tuning/what-models-can-be-fine-tuned /docs 302
/developer/build-assistant /docs/assistants 302
/guides/integrations /docs/integrations 302
/specs/hub /docs 302
-/install/windows /docs/desktop-installation/windows 302
-/install/linux /docs/desktop-installation/linux 302
-/install/nightly /docs/desktop-installation/windows 302
+/install/windows /docs/desktop/windows 302
+/install/linux /docs/desktop/linux 302
+/install/nightly /docs/desktop/windows 302
/docs/engineering/fine-tuning /docs 302
/developer/assistant /docs/assistants 302
/guides/common-error/broken-build /docs/troubleshooting#broken-build 302
/guides/using-server/using-server /docs/local-api 302
-/guides/integrations/azure-openai-service /docs/remote-inference/openai 302
+/guides/integrations/azure-openai-service /docs/remote-models/openai 302
/specs/messages /docs/threads 302
/docs/engineering/models /docs/models 302
/docs/specs/assistants /docs/assistants 302
/docs/engineering/chats /docs/threads 302
/guides/using-extensions/extension-settings /docs/extensions 302
/guides/models/customize-engine /docs/models 302
-/guides/integration/mistral /docs/remote-inference/mistralai 302
+/guides/integration/mistral /docs/remote-models/mistralai 302
/guides/common-error /docs/troubleshooting 302
-/guides/integrations/ollama /docs/local-inference/ollama 302
+/guides/integrations/ollama /docs/local-models/ollama 302
/server-suite /api-reference 302
-/guides/integrations/lmstudio /docs/local-inference/lmstudio 302
-/guides/integrations/mistral-ai /docs/remote-inference/mistralai 302
+/guides/integrations/lmstudio /docs/local-models/lmstudio 302
+/guides/integrations/mistral-ai /docs/remote-models/mistralai 302
/guides/start-server /docs/local-api 302
/guides/changelog /changelog 302
/guides/models-list /docs/models 302
/guides/thread /docs/threads 302
/docs/engineering/messages /docs/threads 302
/guides/faqs /about/faq 302
-/docs/integrations/openrouter /docs/remote-inference/openrouter 302
-/docs/integrations/ollama/ /docs/local-inference/ollama 302
+/docs/integrations/openrouter /docs/remote-models/openrouter 302
+/docs/integrations/ollama/ /docs/local-models/ollama 302
/api/overview /api-reference 302
/docs/extension-guides /docs/extensions 302
/specs/settings /docs 302
@@ -421,9 +421,9 @@ docs/guides/fine-tuning/what-models-can-be-fine-tuned /docs 302
/docs/get-started/build-extension /docs/extensions 302
/specs/files /docs 302
/guides/using-models/package-models /docs/models 302
-/install/overview /docs/desktop-installation/windows 302
+/install/overview /docs/desktop/windows 302
/docs/get-started/extension-anatomy /docs/extensions 302
-/guides/mac /docs/desktop-installation/mac 302
+/guides/mac /docs/desktop/mac 302
/intro /about 302
/specs/fine-tuning /docs 302
/specs/file-based /docs 302
@@ -434,13 +434,13 @@ docs/guides/fine-tuning/what-models-can-be-fine-tuned /docs 302
/reference/store /api-reference 302
/tutorials/build-chat-app/ /docs/quickstart 302
/features/acceleration/ /docs/built-in/llama-cpp 302
-/getting-started/install/mac/ /docs/desktop-installation/mac 302
+/getting-started/install/mac/ /docs/desktop/mac 302
docs/guides/fine-tuning/what-models-can-be-fine-tuned/ /docs 302
/docs/specs/threads/ /docs/threads 302
/docs/api-reference/fine-tuning/ /api-reference 302
/docs/guides/speech-to-text/prompting/ /docs/quickstart 302
/docs/guides/legacy-fine-tuning/analyzing-your-fine-tuned-model/ /docs 302
-/getting-started/install/windows/ /docs/desktop-installation/windows 302
+/getting-started/install/windows/ /docs/desktop/windows 302
/docs/modules/chats/ /docs/threads 302
/docs/specs/chats/ /docs/threads 302
/docs/modules/files/ /docs 302
@@ -455,7 +455,7 @@ docs/guides/fine-tuning/what-models-can-be-fine-tuned/ /docs 302
/docs/guides/fine-tuning/ /docs 302
/docs/specs/files/ /docs 302
/docs/modules/threads/ /docs/threads 302
-/guides/linux/ /docs/desktop-installation/linux 302
+/guides/linux/ /docs/desktop/linux 302
/developer/build-engine/engine-anatomy /docs 302
/developer/engine /docs 302
/docs/product/system-monitor /docs 302
@@ -464,7 +464,7 @@ docs/guides/fine-tuning/what-models-can-be-fine-tuned/ /docs 302
/engineering/research /docs 302
/docs/integrations/langchain /docs/integrations 302
/onboarding /docs/quickstart 302
-/installation/hardware /docs/desktop-installation/windows 302
+/installation/hardware /docs/desktop/windows 302
/docs/features/load-unload/ /docs 302
/guides/chatting/upload-docs /docs/threads 302
/developer/build-extension/package-your-assistant /docs 302
@@ -566,26 +566,26 @@ docs/guides/fine-tuning/what-models-can-be-fine-tuned/ /docs 302
/developer/framework/product/system-monitor/ /docs 302
/developer/user-interface /docs 302
/developer/user-interface/ /docs 302
-/docs/desktop /docs/desktop-installation/windows 302
-/docs/desktop/ /docs/desktop-installation/windows 302
-/docs/inferences/groq /docs/remote-inference/groq 302
-/docs/inferences/groq/ /docs/remote-inference/groq 302
+/docs/desktop /docs/desktop/windows 302
+/docs/desktop/ /docs/desktop/windows 302
+/docs/inferences/groq /docs/remote-models/groq 302
+/docs/inferences/groq/ /docs/remote-models/groq 302
/docs/inferences/llamacpp /docs/built-in/llama-cpp 302
/docs/inferences/llamacpp/ /docs/built-in/llama-cpp 302
-/docs/inferences/lmstudio /docs/local-inference/lmstudio 302
-/docs/inferences/lmstudio/ /docs/local-inference/lmstudio 302
-/docs/inferences/mistralai /docs/remote-inference/mistralai 302
-/docs/inferences/mistralai/ /docs/remote-inference/mistralai 302
-/docs/inferences/ollama /docs/local-inference/ollama 302
-/docs/inferences/ollama/ /docs/local-inference/ollama 302
-/docs/inferences/openai /docs/remote-inference/openai 302
-/docs/inferences/openai/ /docs/remote-inference/openai 302
+/docs/inferences/lmstudio /docs/local-models/lmstudio 302
+/docs/inferences/lmstudio/ /docs/local-models/lmstudio 302
+/docs/inferences/mistralai /docs/remote-models/mistralai 302
+/docs/inferences/mistralai/ /docs/remote-models/mistralai 302
+/docs/inferences/ollama /docs/local-models/ollama 302
+/docs/inferences/ollama/ /docs/local-models/ollama 302
+/docs/inferences/openai /docs/remote-models/openai 302
+/docs/inferences/openai/ /docs/remote-models/openai 302
/docs/inferences/remote-server-integration /docs/remote-inference/generic-openai 302
/docs/inferences/remote-server-integration/ /docs/remote-inference/generic-openai 302
/docs/inferences/tensorrtllm /docs/built-in/tensorrt-llm 302
/docs/inferences/tensorrtllm/ /docs/built-in/tensorrt-llm 302
-/docs/integrations/router /docs/remote-inference/openrouter 302
-/docs/integrations/router/ /docs/remote-inference/openrouter 302
+/docs/integrations/router /docs/remote-models/openrouter 302
+/docs/integrations/router/ /docs/remote-models/openrouter 302
/docs/server /docs/local-api 302
/docs/server/ /docs/local-api 302
/features/ /docs 302
@@ -612,4 +612,14 @@ docs/guides/fine-tuning/what-models-can-be-fine-tuned/ /docs 302
/guides/using-server/server/ /docs/local-api 302
/integrations/coding/vscode /integrations/coding/continue-dev 302
/docs/integrations/interpreter /integrations/function-calling/interpreter 302
-/cortex/built-in/llama-cpp /docs 302
\ No newline at end of file
+/cortex/built-in/llama-cpp /docs 302
+/docs/desktop-installation/linux /docs/desktop/linux 302
+/docs/desktop-installation/windows /docs/desktop/windows 302
+/docs/desktop-installation/mac /docs/desktop/mac 302
+/docs/local-inference/lmstudio /docs/local-models/lmstudio 302
+/docs/local-inference/ollama /docs/local-models/ollama 302
+/docs/remote-inference/openai /docs/remote-models/openai 302
+/docs/remote-inference/groq /docs/remote-models/groq 302
+/docs/remote-inference/mistralai /docs/remote-models/mistralai 302
+/docs/remote-inference/openrouter /docs/remote-models/openrouter 302
+/docs/remote-inference/generic-openai /docs/remote-models/generic-openai 302
\ No newline at end of file
diff --git a/src/pages/docs/_assets/amd.gif b/src/pages/docs/_assets/amd.gif
new file mode 100644
index 00000000..cc053b80
Binary files /dev/null and b/src/pages/docs/_assets/amd.gif differ
diff --git a/src/pages/docs/_assets/delete-data.png b/src/pages/docs/_assets/delete-data.png
new file mode 100644
index 00000000..178a0085
Binary files /dev/null and b/src/pages/docs/_assets/delete-data.png differ
diff --git a/src/pages/docs/_assets/instructions.gif b/src/pages/docs/_assets/instructions.gif
new file mode 100644
index 00000000..5ae101c2
Binary files /dev/null and b/src/pages/docs/_assets/instructions.gif differ
diff --git a/src/pages/docs/_assets/models.gif b/src/pages/docs/_assets/models.gif
new file mode 100644
index 00000000..8dc4d792
Binary files /dev/null and b/src/pages/docs/_assets/models.gif differ
diff --git a/src/pages/docs/_meta.json b/src/pages/docs/_meta.json
index 135562c6..a65dea1c 100644
--- a/src/pages/docs/_meta.json
+++ b/src/pages/docs/_meta.json
@@ -11,8 +11,9 @@
"quickstart": {
"title": "Quickstart"
},
- "desktop-installation": "Desktop Installation",
+ "desktop": "Desktop",
"server-installation": "Server Installation",
+ "data-folder": "Jan Data Folder",
"user-guides": {
"title": "BASIC USAGE",
"type": "separator"
@@ -25,12 +26,12 @@
"shortcuts": "Keyboard Shortcuts",
"local-api": "",
"inference-engines": {
- "title": "INFERENCE ENGINES",
+ "title": "MODEL PROVIDER",
"type": "separator"
},
- "built-in": "Built-in Engines",
- "local-inference": "Local Engines",
- "remote-inference": "Remote APIs",
+ "built-in": "Built-in Models",
+ "local-models": "Local Models",
+ "remote-models": "Remote APIs",
"extensions-separator": {
"title": "EXTENSIONS",
"type": "separator"
diff --git a/src/pages/docs/data-folder.mdx b/src/pages/docs/data-folder.mdx
new file mode 100644
index 00000000..da442137
--- /dev/null
+++ b/src/pages/docs/data-folder.mdx
@@ -0,0 +1,142 @@
+---
+title: Jan Data Folder
+description: Discover the structure of the Jan data folder.
+sidebar_position: 2
+keywords:
+ [
+ Jan,
+ Customizable Intelligence, LLM,
+ local AI,
+ privacy focus,
+ free and open source,
+ private and offline,
+ conversational AI,
+ no-subscription fee,
+ large language models,
+ quickstart,
+ getting started,
+ using AI model,
+ ]
+---
+
+import { Tabs } from 'nextra/components'
+import { Callout, Steps } from 'nextra/components'
+
+# Jan Data Folder
+Jan stores your data locally on your own filesystem in a universal file format (JSON). We build for privacy by default and do not collect or sell your data.
+
+This guide helps you understand where and how this data is stored. We'll also show you how to delete or move the data folder location.
+
+## Folder Structure
+The Jan data folder is stored in the root `~/jan` by default and has the following structure:
+
+```yaml
+/assistants
+ /jan
+ assistant.json
+ /shakespeare
+ assistant.json
+/extensions
+ extensions.json
+ /@janhq
+ /extension_A
+ package.json
+/logs
+ /app.txt
+/models
+ /model_A
+ model.json
+/settings
+ settings.json
+ /@janhq
+ /extension_A_Settings
+ settings.json
+/threads
+ /jan_thread_A
+ messages.jsonl
+ thread.json
+```
+### `jan/` (The Root Directory)
+
+This is the primary directory where all files related to Jan are stored. It typically resides in the user's home directory.
+
+### `assistants/`
+
+Stores configuration files for various AI assistants. Each assistant within this directory can have different settings.
+
+- **Default Assistant**: Located in `/assistants/jan/`, it includes an `assistant.json` that configures the default settings and capabilities. A default sample `assistant.json` looks like this:
+
+```json
+{
+ "avatar": "",
+ "id": "jan",
+ "object": "assistant",
+ "created_at": 1715132389207,
+ "name": "Jan",
+ "description": "A default assistant that can use all downloaded models",
+ "model": "*",
+ "instructions": ""
+}
+```
+
+Each parameter in the file is defined as follows:
+
+| Parameter | Description |
+| --- | --- |
+| avatar | Path to the assistant's avatar image, allowing visual customization. |
+| id | Unique identifier for the assistant. |
+| object | Indicates that this is an assistant configuration. |
+| created_at | Timestamp of creation, in milliseconds since the epoch. |
+| name | The assistant's name. |
+| description | Describes the assistant’s capabilities and intended role. |
+| model | Defines accessible models, with "*" representing access to all models. |
+| instructions | Custom instructions that tailor the assistant's responses to your queries and commands. |
+
+- **Custom Assistant Example**: The `/assistants/shakespeare/` directory shows a custom setup, also with its own `assistant.json`.
+
+### `extensions/`
+
+Extensions enhance Jan's functionality by adding new capabilities or integrating external services.
+
+- **Extension Configuration**: The `extensions.json` in the `/extensions/` directory provides settings for all installed extensions.
+- **Specific Extensions**: Subdirectories like `/@janhq/extension_A/` contain a `package.json` file that declares the modules each extension requires.
+
+### `logs/`
+
+Logs from the application are stored here. This is useful for troubleshooting and monitoring the application's behavior over time. The file `/logs/app.txt` captures general application activity.
+
+### `models/`
+
+Stores the AI models that the assistants use to process requests and generate responses.
+
+- **Model Configurations**: Each model directory, such as `/models/model_A/`, contains a `model.json` with settings specific to that model.
+
+### `settings/`
+
+General settings for the application are stored here, separate from individual assistant or engine configurations.
+
+- **General Settings**: The `settings.json` in the `/settings/` directory holds application-wide settings.
+- **Extension-specific Settings**: Additional settings for extensions are stored in respective subdirectories under `/settings/@janhq/`.
+
+### `threads/`
+
+Thread history is kept in this directory, making it easy to review past interactions. Each thread is stored in its own subdirectory, such as `/threads/jan_unixstamp/`, containing a `messages.jsonl` with the conversation and a `thread.json` with the thread settings.
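+
+For example, you can inspect a thread from a terminal (the folder name below is illustrative; actual names depend on when the thread was created):
+
+```sh
+# List a thread's files
+ls ~/jan/threads/jan_1715132389207
+# -> messages.jsonl  thread.json
+
+# Each line of messages.jsonl is one JSON message; show the most recent
+tail -n 1 ~/jan/threads/jan_1715132389207/messages.jsonl
+```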
+
+## Open the Data Folder
+
+To open the Jan data folder, follow the steps in the [Settings](/docs/settings#access-the-jan-data-folder) guide.
+
+## Delete Jan Data Folder
+
+If you have uninstalled the Jan app, you may also want to delete the Jan data folder. You can automatically remove this folder during uninstallation by selecting **OK** when prompted.
+
+![Delete Data Folder](./_assets/delete-data.png)
+
+If you missed this step and need to delete the folder manually, please follow these instructions:
+
+1. Go to the root data folder in your Users directory.
+2. Locate the Jan data folder.
+3. Delete the folder manually.
\ No newline at end of file
diff --git a/src/pages/docs/desktop-installation/_meta.json b/src/pages/docs/desktop-installation/_meta.json
deleted file mode 100644
index 0bd0fd9c..00000000
--- a/src/pages/docs/desktop-installation/_meta.json
+++ /dev/null
@@ -1,14 +0,0 @@
-{
- "mac": {
- "title": "Mac",
- "href": "/docs/desktop-installation/mac"
- },
- "windows": {
- "title": "Windows",
- "href": "/docs/desktop-installation/windows"
- },
- "linux": {
- "title": "Linux",
- "href": "/docs/desktop-installation/linux"
- }
-}
diff --git a/src/pages/docs/desktop-installation/linux.mdx b/src/pages/docs/desktop-installation/linux.mdx
deleted file mode 100644
index 90bc5020..00000000
--- a/src/pages/docs/desktop-installation/linux.mdx
+++ /dev/null
@@ -1,97 +0,0 @@
----
-title: Linux
-description: Get started quickly with Jan, a ChatGPT-alternative that runs on your own computer, with a local API server. Learn how to install Jan and select an AI model to start chatting.
-keywords:
- [
- Jan,
- Customizable Intelligence, LLM,
- local AI,
- privacy focus,
- free and open source,
- private and offline,
- conversational AI,
- no-subscription fee,
- large language models,
- quickstart,
- getting started,
- using AI model,
- installation,
- "desktop"
- ]
----
-
-import { Tabs } from 'nextra/components'
-import { Callout } from 'nextra/components'
-
-
-# Linux Installation
-To install Jan desktop on Linux, follow the steps below:
-## Pre-requisites
-Ensure that your system meets the following requirements:
- - glibc 2.27 or higher (check with `ldd --version`)
- - Ensure that `gcc-11`, `g++-11`, `cpp-11`, or higher is installed. To install, please see [here](https://gcc.gnu.org/projects/cxx-status.html#cxx17) for Ubuntu installation.
- - **Post-Installation Actions**: Add CUDA libraries to `LD_LIBRARY_PATH`. To install, follow the [Post-installation Actions](https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html#post-installation-actions) instructions.
-
-To enable GPU support, you will need:
- - NVIDIA GPU with CUDA Toolkit 11.7 or higher.
- - NVIDIA driver 470.63.01 or higher.
-
-## Stable Releases
-
-To download stable releases:
-1. Go to [Jan](https://jan.ai/).
-2. Click the **Download** tab.
-
-![Linux Installation](../_assets/download.png)
-
-3. Under the **Linux** section, click the preferred file you want to download.
-
-![Linux Installation](../_assets/linux.png)
-### Install Command
-You can also install Jan using the following command:
-
-
-
- ```sh
- # Install Jan using dpkg
- sudo dpkg -i jan-linux-amd64-{version}.deb
- ```
-
-
-
- ```sh
- # Install Jan using apt-get
- sudo apt-get install ./jan-linux-amd64-{version}.deb
- # where jan-linux-amd64-{version}.deb is the path to the Jan package
- ```
-
-
-
- ```sh
- # Install Jan using AppImage
- chmod +x jan-linux-x86_64-{version}.AppImage
- ./jan-linux-x86_64-{version}.AppImage
- # where jan-linux-x86_64-{version}.AppImage is the path to the Jan package
- ```
-
-
-
-
-The download should be available as a `.AppImage` file or a `.deb` file.
-
-## Nightly Releases
-
-We provide the Nightly Release so that you can test new features and see what might be coming in a future stable release. Please be aware that there might be bugs!
-
-You can download it from [Jan's Discord](https://discord.gg/FTk2MvZwJH) in the [`#nightly-builds`](https://discord.gg/q8szebnxZ7) channel.
-
-
- If stuck in a broken build, go to the [Broken Build](/docs/troubleshooting#broken-build) section.
-
-## Enable GPU
-Once Jan is installed and you have a GPU, you can configure your GPU to accelerate the model's performance. To enable the use of your GPU in the Jan app, follow the steps below:
-1. Open Jan application.
-2. Go to **Settings** -> **Advanced Settings** -> **Accelerator** -> Enable and choose the GPU you want.
-3. A success notification saying **Successfully turned on GPU acceleration** will appear when GPU acceleration is activated.
-
-![Enable GPU](../_assets/gpu.gif)
diff --git a/src/pages/docs/desktop-installation/mac.mdx b/src/pages/docs/desktop-installation/mac.mdx
deleted file mode 100644
index 3b5913e6..00000000
--- a/src/pages/docs/desktop-installation/mac.mdx
+++ /dev/null
@@ -1,79 +0,0 @@
----
-title: Mac
-description: Get started quickly with Jan, a ChatGPT-alternative that runs on your own computer, with a local API server. Learn how to install Jan and select an AI model to start chatting.
-keywords:
- [
- Jan,
- Customizable Intelligence, LLM,
- local AI,
- privacy focus,
- free and open source,
- private and offline,
- conversational AI,
- no-subscription fee,
- large language models,
- quickstart,
- getting started,
- using AI model,
- installation,
- "desktop"
- ]
----
-
-import { Tabs } from 'nextra/components'
-import { Callout } from 'nextra/components'
-
-
-# Mac Installation
-To install Jan desktop on Mac, follow the steps below:
-## Pre-requisites
-Before installing Jan, ensure :
-- Mac with an Apple Silicon Processor.
-- Mac with an Intel Processor.
-
-Users may experience slow performance with Jan on Macs that have Intel processors.
-
-- Homebrew and its dependencies are installed for installing Jan with the Homebrew package.
-- Your macOS Ventura or higher.
-
-## Stable Releases
-
-To download stable releases:
-1. Go to [Jan](https://jan.ai/).
-2. Click the **Download** tab.
-
-![Mac Installation](../_assets/download.png)
-
-3. Under the **MacOS** section, select the download file based on your **MacOS specification**.
-
-![Mac Installation](../_assets/mac2.png)
-
-
-The download should be available as a `.dmg`.
-
-## Nightly Releases
-
-We provide the Nightly Release so that you can test new features and see what might be coming in a future stable release. Please be aware that there might be bugs!
-
-You can download it from [Jan's Discord](https://discord.gg/FTk2MvZwJH) in the [`#nightly-builds`](https://discord.gg/q8szebnxZ7) channel.
-
-## Install with Homebrew
-Install Jan with the following Homebrew command:
-
-```bash
-brew install --cask jan
-```
-
-
- Homebrew package installation is currently limited to **Apple Silicon Macs**, with upcoming support for Windows and Linux.
-
-## Enable GPU
-Once Jan is installed and you have a GPU, you can configure your GPU to accelerate the model's performance. To enable the use of your GPU in the Jan app, follow the steps below:
-
-This feature is not supported on Mac Intel.
-
-1. Open Jan application.
-2. Go to **Settings** -> **Advanced Settings** -> **Accelerator** -> Enable and choose the GPU you want.
-3. A success notification saying **Successfully turned on GPU acceleration** will appear when GPU acceleration is activated.
-
-![Enable GPU](../_assets/gpu.gif)
diff --git a/src/pages/docs/desktop-installation/windows.mdx b/src/pages/docs/desktop-installation/windows.mdx
deleted file mode 100644
index 4c304ff8..00000000
--- a/src/pages/docs/desktop-installation/windows.mdx
+++ /dev/null
@@ -1,87 +0,0 @@
----
-title: Windows
-description: Get started quickly with Jan, a ChatGPT-alternative that runs on your own computer, with a local API server. Learn how to install Jan and select an AI model to start chatting.
-keywords:
- [
- Jan,
- Customizable Intelligence, LLM,
- local AI,
- privacy focus,
- free and open source,
- private and offline,
- conversational AI,
- no-subscription fee,
- large language models,
- quickstart,
- getting started,
- using AI model,
- installation,
- "desktop"
- ]
----
-
-import { Tabs, Callout, Steps } from 'nextra/components'
-
-
-# Windows Installation
-To install Jan desktop on Windows, follow the steps below:
-## Pre-requisites
-Ensure that your system meets the following requirements:
- - Windows 10 or higher is required to run Jan.
-
-To enable GPU support, you will need:
-- Install an [NVIDIA Driver](https://www.nvidia.com/Download/index.aspx) supporting CUDA 11.7 or higher.
- - Use the following command to verify the installation:
-
-```bash
-nvidia-smi
-```
-- Install a [CUDA toolkit](https://developer.nvidia.com/cuda-downloads) that is compatible with your NVIDIA driver.
- - Use the following command to verify the installation:
-
-```bash
-nvcc --version
-```
-
-## Stable Releases
-
-To download stable releases:
-1. Go to [Jan](https://jan.ai/).
-2. Click the **Download** tab.
-
-![Windows Installation](../_assets/download.png)
-
-3. Under the **Windows** section, click **Standard (64-bit)** to download the `.exe` file.
-
-![Windows Installation](../_assets/windows.png)
-
-
-Jan only supports the **64-bit** architecture.
-
-## Nightly Releases
-
-We provide the Nightly Release so that you can test new features and see what might be coming in a future stable release. Please be aware that there might be bugs!
-
-You can download it from [Jan's Discord](https://discord.gg/FTk2MvZwJH) in the [`#nightly-builds`](https://discord.gg/q8szebnxZ7) channel.
-
-
-## Enable GPU
-Once Jan is installed and you have a GPU, you can configure your GPU to accelerate the model's performance. To enable the use of your GPU in the Jan app, follow the steps below:
-1. Open Jan application.
-2. Go to **Settings** -> **Advanced Settings** -> **Accelerator** -> Enable and choose the GPU you want.
-3. A success notification saying **Successfully turned on GPU acceleration** will appear when GPU acceleration is activated.
-
-![Enable GPU](../_assets/gpu.gif)
-
-
-## Default Installation Directory
-
-By default, Jan is installed in the following directory:
-
-```sh
-# Default installation directory
-C:\Users\{username}\AppData\Local\Programs\Jan
-```
-
- If stuck in a broken build, go to the [Broken Build](/docs/troubleshooting#broken-build) section.
-
\ No newline at end of file
diff --git a/src/pages/docs/desktop-installation.mdx b/src/pages/docs/desktop.mdx
similarity index 91%
rename from src/pages/docs/desktop-installation.mdx
rename to src/pages/docs/desktop.mdx
index ea7b6a2f..c7f0de4c 100644
--- a/src/pages/docs/desktop-installation.mdx
+++ b/src/pages/docs/desktop.mdx
@@ -18,7 +18,7 @@ keywords:
---
import { Cards, Card } from 'nextra/components'
-import childPages from './desktop-installation/_meta.json';
+import childPages from './desktop/_meta.json';
# Desktop Installation
diff --git a/src/pages/docs/desktop/_meta.json b/src/pages/docs/desktop/_meta.json
new file mode 100644
index 00000000..5cc930af
--- /dev/null
+++ b/src/pages/docs/desktop/_meta.json
@@ -0,0 +1,14 @@
+{
+ "mac": {
+ "title": "Mac",
+ "href": "/docs/desktop/mac"
+ },
+ "windows": {
+ "title": "Windows",
+ "href": "/docs/desktop/windows"
+ },
+ "linux": {
+ "title": "Linux",
+ "href": "/docs/desktop/linux"
+ }
+}
diff --git a/src/pages/docs/desktop/linux.mdx b/src/pages/docs/desktop/linux.mdx
new file mode 100644
index 00000000..f2bc40ce
--- /dev/null
+++ b/src/pages/docs/desktop/linux.mdx
@@ -0,0 +1,295 @@
+---
+title: Linux
+description: Get started quickly with Jan, a ChatGPT-alternative that runs on your own computer, with a local API server. Learn how to install Jan and select an AI model to start chatting.
+keywords:
+ [
+ Jan,
+ Customizable Intelligence, LLM,
+ local AI,
+ privacy focus,
+ free and open source,
+ private and offline,
+ conversational AI,
+ no-subscription fee,
+ large language models,
+ quickstart,
+ getting started,
+ using AI model,
+ installation,
+ "desktop"
+ ]
+---
+
+import { Tabs } from 'nextra/components'
+import { Callout } from 'nextra/components'
+import FAQBox from '@/components/FaqBox'
+
+
+# Linux Installation
+To install Jan desktop on Linux, follow the steps below:
+## Compatibility
+
+Ensure that your system meets the following requirements to use Jan effectively:
+
+- **OS**:
+ - **Debian-based** (Supports `.deb` and `AppImage`)
+ - Ubuntu-based
+ - Ubuntu Desktop LTS / Ubuntu Server LTS
+ - Edubuntu
+ - Kubuntu
+ - Lubuntu
+ - Ubuntu Budgie
+ - Ubuntu Cinnamon
+ - Ubuntu Kylin
+ - Ubuntu MATE
+ - **Pacman-based** (Supports `AppImage`)
+ - Arch Linux based
+ - Arch Linux
+ - SteamOS
+ - **RPM-based** (Supports `.rpm` and `AppImage`)
+ - Fedora-based
+ - RHEL-based
+ - openSUSE
+
+
+ - Please check whether your Linux distribution supports desktop, server, or both environments.
+ - For server versions, please refer to the [server installation](https://jan.ai/docs/server-installation).
+
+
+- **Hardware**:
+ - **CPU**:
+ - **Intel**:
+ - Haswell processors (Q2 2013) and newer.
+ - Tiger Lake (Q3 2020) and newer for Celeron and Pentium processors.
+ - **AMD**:
+ - Excavator processors (Q2 2015) and newer.
+
+ Jan mainly supports [AVX2](https://en.wikipedia.org/wiki/Advanced_Vector_Extensions#CPUs_with_AVX2) processors. We also support older [AVX](https://en.wikipedia.org/wiki/Advanced_Vector_Extensions#CPUs_with_AVX) and [AVX-512](https://en.wikipedia.org/wiki/Advanced_Vector_Extensions#CPUs_with_AVX-512) processors, but we don't recommend them; you can verify your CPU's support with the check shown after this list.
+
+ - **RAM**:
+ - 8GB for running up to 3B models (int4).
+ - 16GB for running up to 7B models (int4).
+ - 32GB for running up to 13B models (int4).
+
+
+ We support DDR2 RAM as the minimum requirement but recommend using newer generations of RAM for improved performance.
+
+
+
+ - **GPU:**
+ - 6GB of VRAM can load the 3B model (int4) with `ngl` at 120, running at roughly full speed on CPU/GPU.
+ - 8GB of VRAM can load the 7B model (int4) with `ngl` at 120, running at roughly full speed on CPU/GPU.
+ - 12GB of VRAM can load the 13B model (int4) with `ngl` at 120, running at roughly full speed on CPU/GPU.
+
+
+ Having at least 6GB VRAM when using NVIDIA, AMD, or Intel Arc GPUs is recommended.
+
+
+
+ - **Disk:** At least 10GB for app storage and model download.
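+
+To run the check mentioned above, you can read the CPU flags that the kernel reports (standard on Linux; no extra tools needed):
+
+```sh
+# Prints "supported" if the CPU advertises AVX2; swap in avx or avx512 to test those
+grep -q avx2 /proc/cpuinfo && echo "AVX2 supported" || echo "AVX2 not found"
+```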
+
+## Prerequisites
+
+- **System Libraries**:
+ - glibc 2.27 or higher. You can verify this by running `ldd --version`.
+ - Install gcc-11, g++-11, cpp-11, or later versions. Refer to the [Ubuntu installation guide](https://gcc.gnu.org/projects/cxx-status.html#cxx17) for assistance.
+- **Post-Installation Actions**:
+ - Add CUDA libraries to the `LD_LIBRARY_PATH` per the instructions in the [Post-installation Actions](https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html#post-installation-actions).
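+
+As a minimal sketch of that step (assuming a default CUDA installation under `/usr/local/cuda`; adjust the paths to your system), you can append the following to your `~/.bashrc`:
+
+```sh
+# Make the CUDA compiler and runtime libraries visible to Jan's engines
+export PATH=/usr/local/cuda/bin${PATH:+:${PATH}}
+export LD_LIBRARY_PATH=/usr/local/cuda/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
+```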
+
+## Installing Jan
+
+To install Jan, follow the steps below:
+
+### Step 1: Download the Jan Application
+
+Jan provides two types of releases:
+
+
+#### Stable Releases
+
+Stable releases are the official production-ready builds of Jan. You can download the latest stable release via the following:
+
+- **Official Website**: [https://jan.ai](https://jan.ai/)
+- **Jan GitHub repository**: https://github.com/janhq/jan
+
+
+Make sure to verify the URL to ensure that it's the official Jan website and GitHub repository.
+
+
+
+#### Nightly Releases
+
+The nightly release allows you to test out new features and get a sneak peek at what might be included in future stable releases. You can download this version via:
+
+- **Jan GitHub repository**: https://github.com/janhq/jan
+
+
+Keep in mind that this build might crash frequently and may contain bugs!
+
+
+
+For Linux, Jan provides two types of downloads:
+
+1. **Ubuntu**: `.deb`
+2. **Fedora**: `.AppImage`
+
+### Step 2: Install the Jan Application
+
+Here are the steps to install Jan on Linux based on your Linux distribution:
+
+
+### Ubuntu
+Install Jan using the following command:
+
+
+
+```sh
+# Install Jan using dpkg
+sudo dpkg -i jan-linux-amd64-{version}.deb
+```
+
+
+
+```sh
+# Install Jan using apt-get
+sudo apt-get install ./jan-linux-amd64-{version}.deb
+# where jan-linux-amd64-{version}.deb is the path to the Jan package
+```
+
+
+
+
+### Fedora
+
+1. Make the AppImage executable using the following command:
+
+```sh
+chmod +x jan-linux-x86_64-{version}.AppImage
+```
+
+2. Run the AppImage file using the following command:
+
+```sh
+./jan-linux-x86_64-{version}.AppImage
+```
+
+
+## Data Folder
+
+By default, the Jan data folder is located at:
+
+```sh
+# Default data folder location
+~/jan
+```
+
+
+- You can move the Jan data folder to a specific folder by following the steps [here](/docs/settings#access-the-jan-data-folder).
+- Please see the [Jan Data Folder](/docs/data-folder) for more details about the data folder structure.
+
+
+## GPU Acceleration
+
+Once Jan is installed and you have a GPU, you can use your GPU to accelerate the model's performance.
+
+### NVIDIA GPU
+
+To enable the use of your NVIDIA GPU in the Jan app, follow the steps below:
+
+
+Ensure that you have installed the following to use NVIDIA GPU:
+- NVIDIA GPU with CUDA Toolkit 11.7 or higher.
+- NVIDIA driver 470.63.01 or higher.
+
+
+
+1. Open the Jan application.
+2. Go to **Settings** -> **Advanced Settings** -> **GPU Acceleration**.
+3. Enable and choose the NVIDIA GPU you want.
+4. A success notification saying **Successfully turned on GPU acceleration** will appear when GPU acceleration is activated.
+
+
+While **Vulkan** can enable Nvidia GPU acceleration in the Jan app, **CUDA** is recommended for faster performance.
+
+
+
+### AMD GPU
+
+To enable the use of your AMD GPU in the Jan app, you need to activate the Vulkan support first by following the steps below:
+
+1. Open the Jan application.
+2. Go to **Settings** -> **Advanced Settings** -> enable the **Experimental Mode**.
+3. Enable the **Vulkan Support** under the **GPU Acceleration**.
+4. Enable the **GPU Acceleration** and choose the GPU you want to use.
+5. A success notification saying **Successfully turned on GPU acceleration** will appear when GPU acceleration is activated.
+
+### Intel Arc GPU
+
+To enable the use of your Intel Arc GPU in the Jan app, you need to activate the Vulkan support first by following the steps below:
+
+1. Open the Jan application.
+2. Go to **Settings** -> **Advanced Settings** -> enable the **Experimental Mode**.
+3. Enable the **Vulkan Support** under the **GPU Acceleration**.
+4. Enable the **GPU Acceleration** and choose the GPU you want to use.
+5. A success notification saying **Successfully turned on GPU acceleration** will appear when GPU acceleration is activated.
+
+## Uninstalling Jan
+
+To uninstall Jan, follow the steps below:
+
+
+
+### Ubuntu
+
+1. Open a terminal.
+2. Run the following command to uninstall Jan:
+
+```sh
+sudo apt-get remove jan
+```
+
+
+### Fedora
+
+1. Open a terminal.
+2. Run the following command to uninstall Jan:
+
+```sh
+sudo dnf remove jan
+```
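+
+If you installed Jan as an `.AppImage`, nothing is registered with the package manager; simply delete the file from wherever you saved it:
+
+```sh
+# Remove the AppImage (adjust the path and version to your download)
+rm jan-linux-x86_64-{version}.AppImage
+```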
+
+
+
+
+The deleted Data Folder cannot be restored.
+
+
+
+## FAQs
+
+
+Nightly Releases allow you to test new features and previews of upcoming stable releases. You can download them from Jan's GitHub repository. However, remember that these builds might contain bugs and crash frequently.
+
+
+Yes, you can move the Jan data folder.
+
+
+Depending on your GPU type (NVIDIA, AMD, or Intel), follow the respective instructions in the [GPU Acceleration](/docs/desktop/linux#gpu-acceleration) section above.
+
+
+No, it cannot be restored once you delete the Jan data folder during uninstallation.
+
+
+Yes, `.AppImage` is designed to be distribution-agnostic, meaning it can run on various Linux distributions without requiring installation. You can use the Jan `.AppImage` on any Linux distribution that supports the `AppImage` format.
+
+
+No, `.deb` files are specifically intended for Debian-based distributions and may not be compatible with other Linux distributions.
+
+
+Warning: If you have any trouble during installation, please see our [Troubleshooting](/docs/troubleshooting) guide to resolve your problem.
+
+
\ No newline at end of file
diff --git a/src/pages/docs/desktop/mac.mdx b/src/pages/docs/desktop/mac.mdx
new file mode 100644
index 00000000..f738144b
--- /dev/null
+++ b/src/pages/docs/desktop/mac.mdx
@@ -0,0 +1,165 @@
+---
+title: Mac
+description: Get started quickly with Jan, a ChatGPT-alternative that runs on your own computer, with a local API server. Learn how to install Jan and select an AI model to start chatting.
+keywords:
+ [
+ Jan,
+ Customizable Intelligence, LLM,
+ local AI,
+ privacy focus,
+ free and open source,
+ private and offline,
+ conversational AI,
+ no-subscription fee,
+ large language models,
+ quickstart,
+ getting started,
+ using AI model,
+ installation,
+ "desktop"
+ ]
+---
+
+import { Tabs } from 'nextra/components'
+import { Callout } from 'nextra/components'
+import FAQBox from '@/components/FaqBox'
+
+
+# Mac Installation
+Jan has been developed as a Mac Universal application, allowing it to run natively on both Apple Silicon and Intel-based Macs.
+
+## Compatibility
+
+Ensure that your system meets the following requirements to use Jan effectively:
+
+- **OS**:
+ - macOS 13.6 or higher.
+- **Hardware**:
+ - **Mac Intel CPU**
+ - Memory:
+ - 8GB for running up to 3B models.
+ - 16GB for running up to 7B models.
+ - 32GB for running up to 13B models.
+ - **Mac Apple Silicon**
+ - Memory:
+ - 8GB for running up to 3B models.
+ - 16GB for running up to 7B models.
+ - 32GB for running up to 13B models.
+
+
+ Apple Silicon Macs leverage Metal for GPU acceleration, providing faster performance than Intel Macs, which rely solely on CPU processing.
+
+
+ - **Disk**: At least 10GB for model downloads.
+
+## Installing Jan
+
+To install Jan, follow the steps below:
+
+### Step 1: Download the Jan Application
+
+Jan provides two types of releases:
+
+
+#### Stable Releases
+
+Please download Jan from official distributions, or build it from source.
+
+- **Official Website**: [https://jan.ai](https://jan.ai/)
+- **Jan GitHub repository**: [Github](https://github.com/janhq/jan/releases)
+
+
+Make sure to verify the URL to ensure that it's the official Jan website and GitHub repository.
+
+
+
+#### Nightly Releases
+
+Nightly Releases let you test out new features, which may be buggy:
+
+- **Jan GitHub repository**: https://github.com/janhq/jan
+
+
+Keep in mind that this build might crash frequently and may contain bugs!
+
+
+
+### Step 2: Install the Jan Application
+
+1. Once you have downloaded the Jan app `.dmg` file, open the file.
+2. Drag the application icon to the Applications folder shortcut.
+3. Wait for the installation process.
+4. Once installed, you can access Jan on your machine.
+
+#### Install Jan with Homebrew
+
+You can also install Jan using the following Homebrew command:
+
+```bash
+brew install --cask jan
+```
+
+
+- Ensure that you have installed Homebrew and its dependencies.
+- Homebrew package installation is currently limited to **Apple Silicon Macs**, with upcoming support for Windows and Linux.
+
+
+## Data Folder
+
+By default, the Jan data folder is located at:
+
+```sh
+# Default data folder location
+~/jan
+```
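+
+From a terminal, you can open this folder in Finder:
+
+```sh
+# Open the Jan data folder in Finder
+open ~/jan
+```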
+
+
+- You can move the Jan data folder to a specific folder by following the steps [here](/docs/settings#access-the-jan-data-folder).
+- Please see the [Jan Data Folder](/docs/data-folder) for more details about the data folder structure.
+
+
+## Metal Acceleration
+
+Jan is designed to work well on Apple Silicon, using `llama.cpp` as its main engine for processing AI tasks efficiently. It **automatically uses [Metal](https://developer.apple.com/documentation/metal)**, Apple's GPU programming framework, for acceleration, so you don't need to turn on this feature manually.
+
+
+💡 Metal, used for GPU acceleration, is not supported on Intel-based Mac devices.
+
+
+
+## Uninstalling Jan
+
+To uninstall Jan, follow the steps below:
+
+1. If the app is currently open, exit the app before continuing.
+2. Open the **Finder** menu.
+3. Click the **Applications** option from the sidebar.
+4. Find the **Jan** app or type in the search bar.
+5. Use any of these ways to move the **Jan** app to the Trash:
+- Drag the app to the Trash.
+- Select the app and choose the Move to Trash option.
+- Select the app and press Command-Delete on your keyboard.
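+
+If you also want to remove your data, you can delete the Jan data folder from a terminal (see the warning below; this cannot be undone):
+
+```sh
+# Remove the Jan data folder (irreversible)
+rm -rf ~/jan
+```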
+
+
+The deleted Data Folder cannot be restored.
+
+
+## FAQs
+
+
+Nightly Releases allow you to test new features and previews of upcoming stable releases. You can download them from Jan's GitHub repository. However, remember that these builds might contain bugs and crash frequently.
+
+
+Yes, you can move the Jan data folder.
+
+
+GPU (Metal) acceleration is only available on Apple Silicon Macs; if your Mac has an Intel processor, Jan runs on the CPU only.
+
+
+No, it cannot be restored once you delete the Jan data folder during uninstallation.
+
+
+
+💡 Warning: If you have any trouble during installation, please see our [Troubleshooting](/docs/troubleshooting) guide to resolve your problem.
+
+
\ No newline at end of file
diff --git a/src/pages/docs/desktop/windows.mdx b/src/pages/docs/desktop/windows.mdx
new file mode 100644
index 00000000..53518787
--- /dev/null
+++ b/src/pages/docs/desktop/windows.mdx
@@ -0,0 +1,199 @@
+---
+title: Windows
+description: Get started quickly with Jan, a ChatGPT-alternative that runs on your own computer, with a local API server. Learn how to install Jan and select an AI model to start chatting.
+keywords:
+ [
+ Jan,
+ Customizable Intelligence, LLM,
+ local AI,
+ privacy focus,
+ free and open source,
+ private and offline,
+ conversational AI,
+ no-subscription fee,
+ large language models,
+ quickstart,
+ getting started,
+ using AI model,
+ installation,
+ "desktop"
+ ]
+---
+
+import { Tabs, Callout, Steps } from 'nextra/components'
+import FAQBox from '@/components/FaqBox'
+
+
+# Windows Installation
+To install Jan desktop on Windows, follow the steps below:
+## Compatibility
+
+Ensure that your system meets the following requirements to use Jan effectively:
+
+- **OS**:
+ - Windows 10 or higher.
+- **Hardware**:
+ - **CPU**:
+ - **Intel**:
+ - Haswell processors (Q2 2013) and newer.
+ - Tiger Lake (Q3 2020) and newer for Celeron and Pentium processors.
+ - **AMD**:
+ - Excavator processors (Q2 2015) and newer.
+
+ Jan mainly supports [AVX2](https://en.wikipedia.org/wiki/Advanced_Vector_Extensions#CPUs_with_AVX2) processors. We also support older [AVX](https://en.wikipedia.org/wiki/Advanced_Vector_Extensions#CPUs_with_AVX) and [AVX-512](https://en.wikipedia.org/wiki/Advanced_Vector_Extensions#CPUs_with_AVX-512) processors, but we don't recommend them.
+
+ - **RAM**:
+ - 8GB for running up to 3B models (int4).
+ - 16GB for running up to 7B models (int4).
+ - 32GB for running up to 13B models (int4).
+
+
+ We support DDR2 RAM as the minimum requirement but recommend using newer generations of RAM for improved performance.
+
+
+ - **GPU:**
+ - 6GB of VRAM can load the 3B model (int4) with `ngl` at 120, running at roughly full speed on CPU/GPU.
+ - 8GB of VRAM can load the 7B model (int4) with `ngl` at 120, running at roughly full speed on CPU/GPU.
+ - 12GB of VRAM can load the 13B model (int4) with `ngl` at 120, running at roughly full speed on CPU/GPU.
+
+
+ Having at least 6GB VRAM when using NVIDIA, AMD, or Intel Arc GPUs is recommended.
+
+ - **Disk:** At least 10GB for app storage and model download.
+
+## Installing Jan
+
+To install Jan, follow the steps below:
+
+### Step 1: Download the Jan Application
+
+Jan provides two types of releases:
+
+
+#### Stable Releases
+
+Stable releases are the official production-ready builds of Jan. You can download the latest stable release via the following:
+
+- **Official Website**: [https://jan.ai](https://jan.ai/)
+- **Jan GitHub repository**: https://github.com/janhq/jan
+
+
+Make sure to verify the URL to ensure that it's the official Jan website and GitHub repository.
+
+
+
+#### Nightly Releases
+
+The nightly release allows you to test out new features and get a sneak peek at what might be included in future stable releases. You can download this version via:
+
+- **Jan GitHub repository**: https://github.com/janhq/jan
+
+
+Keep in mind that this build might crash frequently and may contain bugs!
+
+
+
+### Step 2: Install the Jan Application
+
+1. Once you have downloaded the Jan app `.exe` file, open the file.
+2. Wait for Jan to be completely installed on your machine.
+3. Once installed, you can access Jan on your machine.
+
+## Data Folder
+
+By default, the Jan data folder is located at:
+
+```sh
+# Default data folder location
+C:\Users\{username}\Jan
+```
+
+
+- You can move the Jan data folder to a specific folder by following the steps [here](/docs/settings#access-the-jan-data-folder).
+- Please see the [Jan Data Folder](/docs/data-folder) for more details about the data folder structure.
+
+
+## GPU Acceleration
+
+Once Jan is installed and you have a GPU, you can use your GPU to accelerate the model's performance.
+
+### NVIDIA GPU
+
+To enable the use of your NVIDIA GPU in the Jan app, follow the steps below:
+
+
+Ensure that you have installed the following to use NVIDIA GPU:
+- NVIDIA GPU with CUDA Toolkit 11.7 or higher.
+- NVIDIA driver 470.63.01 or higher.
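+
+You can verify both from a command prompt:
+
+```sh
+# Verify the NVIDIA driver installation
+nvidia-smi
+# Verify the CUDA toolkit installation
+nvcc --version
+```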
+
+
+1. Open the Jan application.
+2. Go to **Settings** -> **Advanced Settings** -> **GPU Acceleration**.
+3. Enable and choose the NVIDIA GPU you want.
+4. A success notification saying **Successfully turned on GPU acceleration** will appear when GPU acceleration is activated.
+
+
+While **Vulkan** can enable Nvidia GPU acceleration in the Jan app, **CUDA** is recommended for faster performance.
+
+
+### AMD GPU
+
+To enable the use of your AMD GPU in the Jan app, you need to activate the Vulkan support first by following the steps below:
+
+1. Open the Jan application.
+2. Go to **Settings** -> **Advanced Settings** -> enable the **Experimental Mode**.
+3. Enable the **Vulkan Support** under the **GPU Acceleration**.
+4. Enable the **GPU Acceleration** and choose the AMD GPU you want to use.
+5. A success notification saying **Successfully turned on GPU acceleration** will appear when GPU acceleration is activated.
+
+### Intel Arc GPU
+
+To enable the use of your Intel Arc GPU in the Jan app, you need to activate the Vulkan support first by following the steps below:
+
+1. Open the Jan application.
+2. Go to **Settings** -> **Advanced Settings** -> enable the **Experimental Mode**.
+3. Enable the **Vulkan Support** under the **GPU Acceleration**.
+4. Enable the **GPU Acceleration** and choose the Intel Arc GPU you want to use.
+5. A success notification saying **Successfully turned on GPU acceleration** will appear when GPU acceleration is activated.
+
+## Uninstalling Jan
+
+To uninstall Jan, follow the steps below:
+
+### Step 1: Open the Control Panel
+
+1. Open the **Control Panel**.
+2. Click **Uninstall a program** under the **Programs** section.
+
+### Step 2: Uninstall Jan App
+
+1. Search for **Jan**.
+2. Click the **three dots icon** -> **Uninstall**.
+3. Click **Uninstall** once again to confirm the action.
+4. Click **OK**.
+5. A message will appear: **"Do you also want to delete the DEFAULT Jan data folder at C:\Users\{username}\Jan?"**.
+6. Click **OK** to delete the entire Jan data folder, or click **Cancel** to keep your Jan data folder for use with a future installation.
+
+
+A deleted data folder cannot be restored.
+
+
+## FAQs
+
+**What are nightly releases?**
+
+Nightly releases let you test new features and preview upcoming stable releases. You can download them from Jan's GitHub repository. However, remember that these builds might contain bugs and crash frequently.
+
+**Can I move the Jan data folder?**
+
+Yes, you can move the Jan data folder.
+
+**How do I enable GPU acceleration?**
+
+Depending on your GPU type (NVIDIA, AMD, or Intel), follow the respective instructions in the [GPU Acceleration](/docs/desktop/windows#gpu-acceleration) section above.
+
+**Can a deleted Jan data folder be restored?**
+
+No, once you delete the Jan data folder during uninstallation, it cannot be restored.
+
+
+
+If you have any trouble during installation, please see our [Troubleshooting](/docs/troubleshooting) guide to resolve your problem.
+
\ No newline at end of file
diff --git a/src/pages/docs/local-inference/_meta.json b/src/pages/docs/local-models/_meta.json
similarity index 51%
rename from src/pages/docs/local-inference/_meta.json
rename to src/pages/docs/local-models/_meta.json
index b628f0a0..82520275 100644
--- a/src/pages/docs/local-inference/_meta.json
+++ b/src/pages/docs/local-models/_meta.json
@@ -1,10 +1,10 @@
{
"lmstudio": {
"title": "LM Studio",
- "href": "/docs/local-inference/lmstudio"
+ "href": "/docs/local-models/lmstudio"
},
"ollama": {
"title": "Ollama",
- "href": "/docs/local-inference/ollama"
+ "href": "/docs/local-models/ollama"
}
}
diff --git a/src/pages/docs/local-inference/lmstudio.mdx b/src/pages/docs/local-models/lmstudio.mdx
similarity index 100%
rename from src/pages/docs/local-inference/lmstudio.mdx
rename to src/pages/docs/local-models/lmstudio.mdx
diff --git a/src/pages/docs/local-inference/ollama.mdx b/src/pages/docs/local-models/ollama.mdx
similarity index 100%
rename from src/pages/docs/local-inference/ollama.mdx
rename to src/pages/docs/local-models/ollama.mdx
diff --git a/src/pages/docs/models/manage-models.mdx b/src/pages/docs/models/manage-models.mdx
index 44b68039..e253519b 100644
--- a/src/pages/docs/models/manage-models.mdx
+++ b/src/pages/docs/models/manage-models.mdx
@@ -74,21 +74,6 @@ This is the easiest and most space-efficient way if you have already used other
Windows users should drag and drop the model file, as **Click to Upload** might not show the model files in Folder Preview.
-## Delete Models
-To delete a model:
-
-1. Go to **Settings**.
-
-![Settings](../_assets/settings.png)
-
-2. Go to **My Models**.
-
-![My Models](../_assets/mymodels.png)
-
-3. Select the three dots next and select `Delete model`.
-
-![Delete Model](../_assets/delete.png)
-
### Add a Model Manually
You can also add a specific model that is not available within the **Hub** section by following the steps below:
1. Open the Jan app.
@@ -156,4 +141,19 @@ To see the complete list of a model's parameters, please see [below](/docs/model
"frequency_penalty": 0,
"presence_penalty": 0
}
-```
\ No newline at end of file
+```
+
+## Delete Models
+To delete a model:
+
+1. Go to **Settings**.
+
+![Settings](../_assets/settings.png)
+
+2. Go to **My Models**.
+
+![My Models](../_assets/mymodels.png)
+
+3. Click the **three dots** icon next to the model and select `Delete model`.
+
+![Delete Model](../_assets/delete.png)
\ No newline at end of file
diff --git a/src/pages/docs/quickstart.mdx b/src/pages/docs/quickstart.mdx
index 88b3e0f1..9ae0b522 100644
--- a/src/pages/docs/quickstart.mdx
+++ b/src/pages/docs/quickstart.mdx
@@ -22,46 +22,56 @@ keywords:
import { Tabs } from 'nextra/components'
import { Callout, Steps } from 'nextra/components'
-import download from './_assets/download.gif'
-import gpt from './_assets/gpt.gif'
-import model from './_assets/model.gif'
# Quickstart
To quickly get started, follow the steps below:
-### Step 1: Install Jan Desktop
-Before using Jan, you need to install Jan. Jan can be installed through:
-- [Desktop](/docs/desktop.mdx)
-- [Server-side](/docs/server-installation)
+### Step 1: Install Jan
+You can run Jan either on your desktop using the Jan desktop app or on a server by installing the Jan server. To get started, check out these pages:
+- [Desktop](/docs/desktop)
+- [Server](/docs/server)
+Once installed, you should see the Jan application as shown below, without any local models installed yet:
+
+
+![Default State](./_assets/models.gif)
+
+
+
+### Step 2: Turn on the GPU Acceleration (Optional)
+If you have a graphics card, boost model performance by enabling GPU acceleration:
+1. Open the Jan application.
+2. Go to **Settings** -> **Advanced Settings** -> **GPU Acceleration**.
+3. Click the slider and choose your preferred GPU.
+4. A success notification saying **Successfully turned on GPU acceleration** will appear when GPU acceleration is activated.
+
+Ensure you have installed your GPU driver. Please see [Desktop](/docs/desktop) for more information on activating GPU acceleration.
+
-### Step 2: Download a Model
+### Step 3: Download a Model
-Jan provides a variety of local AI models tailored to different needs and is ready for download. These models are installed and run directly on the user's device.
+Jan offers various local AI models tailored to different needs, all ready for download directly to your device:
1. Go to the **Hub**.
2. Select the models that you would like to install. To see model details, click the **dropdown** button.
-3. Or you can also paste the Hugging Face model's **ID** or **URL** in the search bar.
+3. You can also paste a Hugging Face model's **ID** or **URL** in the search bar (see the example below the screenshot).
Ensure you select the appropriate model size by balancing performance, cost, and resource considerations in line with your task's specific requirements and hardware specifications.
-3. Click the **Download** button.
+4. Click the **Download** button.
![Download a Model](./_assets/download-model.gif)
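+
+For example, a Hugging Face model ID has the form `owner/repo`, such as `TheBloke/Mistral-7B-Instruct-v0.2-GGUF` (shown purely as an illustration), and the matching URL is `https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.2-GGUF`.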
-### Step 3: Configure the Model
-
-After downloading, configure the model to fit your requirements and hardware specs by following these steps:
-1. Go to the **Thread** tab.
-2. Click the **Model** dropdown button.
-3. Choose the downloaded model, either **Local** or **Remote**.
-4. Adjust the configurations as needed.
+5. Go to the **Thread** tab.
+6. Click the **Model** dropdown button.
+7. Choose the downloaded model, either **Local** or **Remote**.
+8. Adjust the configurations as needed.
- Please see [Using Models](/docs/models#model-parameters) for detailed model configuration.
+ Please see [Model Parameters](/docs/models#model-parameters) for detailed model configuration.
@@ -70,61 +80,43 @@ After downloading, configure the model to fit your requirements and hardware spe
-### Step 4: Start the Model
+### Step 4: Customize the Assistant Instruction
+Customize the assistant's behavior by providing queries, commands, or requests in the Assistant Instructions field so you get the most relevant responses. To customize it, follow the steps below; a sample instruction follows the screenshot.
+1. On the **Thread** section, navigate to the right panel.
+2. Select the **Assistant** dropdown menu.
+3. Provide a specific guideline under the **Instructions** field.
+
-After downloading and configuring your model, you can chat with the model.
+![Assistant Instruction](./_assets/instructions.gif)
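+
+For illustration, an instruction can be as simple as the following (any wording works; this example is hypothetical):
+
+```
+You are a concise technical assistant. Answer briefly and include code examples where helpful.
+```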
+### Step 5: Start a Thread
-![Chat with a Model](./_assets/model.gif)
+Once you have downloaded a model and customized the assistant instructions, you can start chatting with the model.
-### Step 5: Connect to ChatGPT (Optional)
+![Chat with a Model](./_assets/model.gif)
-Jan also provides access to remote models hosted on external servers, requiring an API key for connectivity. For example, to use the ChatGPT model with Jan, you must input your API key by following these steps:
+
-1. Go to the **Thread** tab.
-2. Under the Model dropdown menu, select the ChatGPT model.
-3. Fill in your ChatGPT API Key that you can get in your [OpenAI platform](https://platform.openai.com/account/api-keys).
+### Step 6: Connect to a Remote API
+Jan also offers access to remote models hosted on external servers. You can connect to any remote AI API that is compatible with the OpenAI API; Jan ships with extensions that facilitate connections to various remote providers. To explore and connect to remote APIs, follow these steps:
+1. Click the **Gear Icon (⚙️)** on the bottom left of your screen.
+2. On the **Settings** screen, click **Extensions**.
+3. Ensure that the extension you need is installed.
+4. Fill in the API **URL** and **Key** in the OpenAI Inference Engine section.
-![Download a Model](./_assets/gpt.gif)
+![Connect Remote API](./_assets/server-openai.gif)
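+
+For reference, any OpenAI-compatible API accepts requests of the following shape (a sketch using the standard OpenAI endpoint; substitute your provider's URL and key):
+
+```bash
+curl https://api.openai.com/v1/chat/completions \
+  -H "Content-Type: application/json" \
+  -H "Authorization: Bearer $OPENAI_API_KEY" \
+  -d '{
+    "model": "gpt-3.5-turbo",
+    "messages": [{"role": "user", "content": "Hello from Jan!"}]
+  }'
+```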
-## Best Practices
-
-Optimize your experience with Jan by implementing these best practices, designed for developers, analysts, and AI enthusiasts. These guidelines will enhance the performance of AI models on your local machine.
-
-### Quickstart Guide
-
-Refer to the quickstart guide to set up Jan efficiently. It provides straightforward instructions that help you start quickly, even if you're new to AI.
-
-### Selecting the Right Models
-
-Choose from Jan's pre-configured AI models to match your needs. Consider the model's capabilities, accuracy, and processing speed.
-
-Remember, your choice may also depend on your hardware specs (see the Hardware Requirements section).
-
-### Setting Up Jan
-
-Get familiar with Jan's application and its advanced settings to fine-tune AI performance. Refer to the [Settings](/docs/settings) for configuration details.
-
-### Integrations
-
-Understand the integration capabilities and limitations when connecting Jan with other systems or open-source LLM providers.
-
-### Mastering Prompt Engineering
-
-Enhancing AI responses involves skilled, prompt engineering. Here are key tips:
-- Adopt a specific persona for the model.
-- Provide detailed prompts for precise responses.
-- Start with context or examples.
-- Use clear, concise language and specific keywords.
-
-## Pre-configured Models
-
-For a complete list of available AI models, visit our [official GitHub](https://github.com/janhq/jan).
\ No newline at end of file
+## What's Next?
+Now that Jan is up and running, explore further:
+1. Learn how to download and manage your [models](/docs/models).
+2. Customize Jan's [application settings](/docs/settings) according to your preferences.
+3. Discover supported [integrations](/integrations) to expand Jan's capabilities.
+4. Find out how to use [Jan as a client and a server](/docs/local-api).
\ No newline at end of file
diff --git a/src/pages/docs/remote-inference/_meta.json b/src/pages/docs/remote-inference/_meta.json
deleted file mode 100644
index e2a9a213..00000000
--- a/src/pages/docs/remote-inference/_meta.json
+++ /dev/null
@@ -1,13 +0,0 @@
-{
- "openai": {
- "title": "OpenAI API",
- "href": "/docs/remote-inference/openai"
- },
- "groq": { "title": "Groq API", "href": "/docs/remote-inference/groq" },
- "mistralai": {
- "title": "Mistral AI API",
- "href": "/docs/remote-inference/mistralai"
- },
- "openrouter": { "title": "OpenRouter", "href": "/docs/remote-inference/openrouter" },
- "generic-openai": { "title": "Any OpenAI Compatible API", "href": "/docs/remote-inference/generic-openai" }
-}
diff --git a/src/pages/docs/remote-models/_meta.json b/src/pages/docs/remote-models/_meta.json
new file mode 100644
index 00000000..fd46b672
--- /dev/null
+++ b/src/pages/docs/remote-models/_meta.json
@@ -0,0 +1,13 @@
+{
+ "openai": {
+ "title": "OpenAI API",
+ "href": "/docs/remote-models/openai"
+ },
+ "groq": { "title": "Groq API", "href": "/docs/remote-models/groq" },
+ "mistralai": {
+ "title": "Mistral AI API",
+ "href": "/docs/remote-models/mistralai"
+ },
+ "openrouter": { "title": "OpenRouter", "href": "/docs/remote-models/openrouter" },
+ "generic-openai": { "title": "Any OpenAI Compatible API", "href": "/docs/remote-models/generic-openai" }
+}
diff --git a/src/pages/docs/remote-inference/generic-openai.mdx b/src/pages/docs/remote-models/generic-openai.mdx
similarity index 100%
rename from src/pages/docs/remote-inference/generic-openai.mdx
rename to src/pages/docs/remote-models/generic-openai.mdx
diff --git a/src/pages/docs/remote-inference/groq.mdx b/src/pages/docs/remote-models/groq.mdx
similarity index 100%
rename from src/pages/docs/remote-inference/groq.mdx
rename to src/pages/docs/remote-models/groq.mdx
diff --git a/src/pages/docs/remote-inference/mistralai.mdx b/src/pages/docs/remote-models/mistralai.mdx
similarity index 100%
rename from src/pages/docs/remote-inference/mistralai.mdx
rename to src/pages/docs/remote-models/mistralai.mdx
diff --git a/src/pages/docs/remote-inference/openai.mdx b/src/pages/docs/remote-models/openai.mdx
similarity index 100%
rename from src/pages/docs/remote-inference/openai.mdx
rename to src/pages/docs/remote-models/openai.mdx
diff --git a/src/pages/docs/remote-inference/openrouter.mdx b/src/pages/docs/remote-models/openrouter.mdx
similarity index 100%
rename from src/pages/docs/remote-inference/openrouter.mdx
rename to src/pages/docs/remote-models/openrouter.mdx
diff --git a/src/pages/docs/server-installation/aws.mdx b/src/pages/docs/server-installation/aws.mdx
index 0fc0d9cd..56fd05b0 100644
--- a/src/pages/docs/server-installation/aws.mdx
+++ b/src/pages/docs/server-installation/aws.mdx
@@ -30,7 +30,7 @@ To install Jan Server, follow the steps below:
1. Go to AWS console -> `EC2`.
2. Choose an instance with at least `c5.2xlarge` for CPU only or `g5.2xlarge` for NVIDIA GPU support.
3. Add EBS volume with at least **100GB**.
-4. Configure network security group rules to allow inbound traffic on port `1377`.
+4. Configure network security group rules to allow inbound traffic on port `1337`.
### Step 2: Get Jan Server
@@ -72,7 +72,7 @@ The available Docker Compose profile and the environment variables are listed be
| `AWS_SECRET_ACCESS_KEY` | AWS secret access key - leave blank for default file system |
| `AWS_ENDPOINT` | AWS endpoint URL - leave blank for default file system |
| `AWS_REGION` | AWS region - leave blank for default file system |
-| `API_BASE_URL` | Jan Server URL, please modify it as your public IP address or domain name default http://localhost:1377 |
+| `API_BASE_URL` | Jan Server URL; set it to your public IP address or domain name (default: http://localhost:1337) |
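+
+For example, if your instance's public IP were `203.0.113.10` (a placeholder documentation address), you might set:
+
+```bash
+# .env (illustrative values only)
+API_BASE_URL=http://203.0.113.10:1337
+```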
### Step 4: Run Jan Server
You can run the Jan server in two modes:
diff --git a/src/pages/docs/server-installation/azure.mdx b/src/pages/docs/server-installation/azure.mdx
index a4659241..0fa56061 100644
--- a/src/pages/docs/server-installation/azure.mdx
+++ b/src/pages/docs/server-installation/azure.mdx
@@ -30,7 +30,7 @@ To install Jan Server, follow the steps below:
1. Go to Azure console -> `Service` -> `Virtual machines`.
2. Choose an instance with at least `Standard_F8s_v2` for CPU only or `Standard_NC4as_T4_v3` for NVIDIA GPU support.
3. Add an Azure Disk or Azure Blob Storage volume with at least **100GB**.
-4. Configure network security group rules to allow inbound traffic on port `1377`.
+4. Configure network security group rules to allow inbound traffic on port `1337`.
### Step 2: Get Jan Server
@@ -72,7 +72,7 @@ The available Docker Compose profile and the environment variables are listed be
| `AZURE_SECRET_ACCESS_KEY` | AZURE secret access key - leave blank for default file system |
| `AZURE_ENDPOINT` | AZURE endpoint URL - leave blank for default file system |
| `AZURE_REGION` | AZURE region - leave blank for default file system |
-| `API_BASE_URL` | Jan Server URL, please modify it as your public IP address or domain name default http://localhost:1377 |
+| `API_BASE_URL` | Jan Server URL; set it to your public IP address or domain name (default: http://localhost:1337) |
### Step 4: Run Jan Server
You can run the Jan server in two modes:
diff --git a/src/pages/docs/server-installation/gcp.mdx b/src/pages/docs/server-installation/gcp.mdx
index 719ad08f..6a599966 100644
--- a/src/pages/docs/server-installation/gcp.mdx
+++ b/src/pages/docs/server-installation/gcp.mdx
@@ -30,7 +30,7 @@ To install Jan Server, follow the steps below:
1. Go to GCP console -> `Compute instance`.
2. Choose an instance with at least `c2-standard-8` for CPU only or `g2-standard-4` for NVIDIA GPU support.
3. Add a Persistent Disk volume with at least **100GB**.
-4. Configure network security group rules to allow inbound traffic on port `1377`.
+4. Configure network security group rules to allow inbound traffic on port `1337`.
### Step 2: Get Jan Server
@@ -72,7 +72,7 @@ The available Docker Compose profile and the environment variables are listed be
| `GCP_SECRET_ACCESS_KEY` | GCP secret access key - leave blank for default file system |
| `GCP_ENDPOINT` | GCP endpoint URL - leave blank for default file system |
| `GCP_REGION` | GCP region - leave blank for default file system |
-| `API_BASE_URL` | Jan Server URL, please modify it as your public IP address or domain name default http://localhost:1377 |
+| `API_BASE_URL` | Jan Server URL; set it to your public IP address or domain name (default: http://localhost:1337) |
### Step 4: Run Jan Server
You can run the Jan server in two modes:
diff --git a/src/pages/docs/server-installation/onprem.mdx b/src/pages/docs/server-installation/onprem.mdx
index 776bc04a..e63a0c26 100644
--- a/src/pages/docs/server-installation/onprem.mdx
+++ b/src/pages/docs/server-installation/onprem.mdx
@@ -70,7 +70,7 @@ The available Docker Compose profile and the environment variables are listed be
| `AWS_SECRET_ACCESS_KEY` | AWS secret access key - leave blank for default file system |
| `AWS_ENDPOINT` | AWS endpoint URL - leave blank for default file system |
| `AWS_REGION` | AWS region - leave blank for default file system |
-| `API_BASE_URL` | Jan Server URL, please modify it as your public IP address or domain name default http://localhost:1377 |
+| `API_BASE_URL` | Jan Server URL; set it to your public IP address or domain name (default: http://localhost:1337) |
### Step 4: Run Jan Server
You can run the Jan server in two modes:
@@ -182,7 +182,7 @@ The available Docker Compose profile and the environment variables are listed be
| `AWS_SECRET_ACCESS_KEY` | AWS secret access key - leave blank for default file system |
| `AWS_ENDPOINT` | AWS endpoint URL - leave blank for default file system |
| `AWS_REGION` | AWS region - leave blank for default file system |
-| `API_BASE_URL` | Jan Server URL, please modify it as your public IP address or domain name default http://localhost:1377 |
+| `API_BASE_URL` | Jan Server URL; set it to your public IP address or domain name (default: http://localhost:1337) |
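+
+Once the server is running (see Step 4 below), you can sanity-check it with an OpenAI-compatible request. This is a sketch that assumes the default port `1337`, the standard `/v1` routes, and a placeholder model ID:
+
+```bash
+curl http://localhost:1337/v1/chat/completions \
+  -H "Content-Type: application/json" \
+  -d '{
+    "model": "<your-model-id>",
+    "messages": [{"role": "user", "content": "Hello"}]
+  }'
+```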
### Step 4: Run Jan Server
You can run the Jan server in two modes: