AI Chat as Markdown lets GPT-4 Omni / Claude 3.5 talk directly into your Obsidian Markdown notes.
It relies on nested headings, so you can have multiple conversations, and even branching conversations, in the same note.
Please see the documented branching conversation example to understand how that works.
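As an illustrative sketch (the note content and heading titles below are invented; see the plugin's documented example for the actual conventions), a branching conversation could look like this:

```markdown
# Trip planning
Are trains or flights cheaper for this route?

## AI
Trains are usually cheaper if you book early.

## What about night trains?
(A sibling sub-heading can continue with the context of "Trip planning" above, forming a second branch.)

### AI
Night trains on this route run three times a week.
```

Because each thread is just headings and text, branches can be edited, moved, or deleted like any other markdown.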
The plugin supports images, so that you can talk about the images and diagrams embedded in your markdown with models that support this, such as GPT-4 Omni and Claude 3.5.
It can be configured via the Obsidian plugin settings to use any OpenAI-compatible API server.
Please see the dedicated screenshots page.
- Multiple branching chats as nested headings anywhere in your notes; see the nesting example
- Optionally configure different AI models for each markdown file via the frontmatter, in addition to the conventional plugin config
- Use markdown files as system prompts. Use this for example to build up a library of system prompts for different applications.
- Optionally configure a different system prompt file for each note via the frontmatter, e.g. `aicmd-system-prompt-file: ai-chats/system prompt productivity.md`
- Use Obsidian embeds to construct modular and dynamic system prompts, see screenshots
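For instance, a note's frontmatter could combine such overrides. The `aicmd-system-prompt-file` key is the documented one from above; the model key shown here is only a hypothetical placeholder, so check the plugin settings for the real key name:

```yaml
---
# Documented key: per-note system prompt file
aicmd-system-prompt-file: ai-chats/system prompt productivity.md
# Hypothetical key for a per-note model override -- verify the actual name
aicmd-model: gpt-4o
---
```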
- Install plugin via community plugins
- In the Obsidian settings, under *AI Chat as Markdown*, configure the API Host (e.g. `https://api.openai.com/`), API key (`sk-xxxxx`), and model name (e.g. `gpt-4o`).
- In your Obsidian note, add example text `# My heading\nAre you there?`, position the cursor inside it, then, in edit mode, invoke *AI Chat as Markdown: Send current thread to AI* via the command palette. The answer should appear under a new sub-heading titled `## AI`.
- You can also just select some text and then invoke *AI Chat as Markdown: Send selected text to AI and append the response*.
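Assuming the default settings, the example note from the steps above might end up looking like this (the reply text is invented):

```markdown
# My heading
Are you there?

## AI
Yes, I am here. How can I help?
```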
- Copy over `main.js` and `manifest.json` to your vault's `VaultFolder/.obsidian/plugins/your-plugin-id/`.
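The copy step can be sketched as a small shell snippet. To keep the sketch self-contained, a throwaway directory stands in for the real vault, the plugin id `ai-chat-as-markdown` is an assumption, and placeholder files stand in for the freshly built ones:

```shell
# Throwaway directory standing in for your real vault path.
vault="$(mktemp -d)"
# Plugin id is an assumption; use the id from the plugin's manifest.json.
plugin_dir="$vault/.obsidian/plugins/ai-chat-as-markdown"
mkdir -p "$plugin_dir"

# Stand-ins for the built files; in practice these come from the build output.
printf 'console.log("plugin");\n' > main.js
printf '{ "id": "ai-chat-as-markdown" }\n' > manifest.json

cp main.js manifest.json "$plugin_dir/"
ls "$plugin_dir"
```

In practice you would replace the `mktemp` directory with your actual vault folder and copy the real build artifacts.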
- gptel LLM client for Emacs, and especially its branching context feature
- ChatGPT-MD Obsidian plugin, but I preferred to use the official OpenAI nodejs library and to use the gptel-style nested heading approach
- Clone this repo.
- Make sure your NodeJS is at least v16 (`node --version`).
- Run `corepack enable`, then `yarn` to install dependencies (we use Yarn PnP).
- Run `yarn run dev` to start compilation in watch mode.
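The version requirement in the first step can be checked mechanically. The snippet below parses the major version out of `node --version`-style output; a fixed sample string is used so the sketch is self-contained, and in practice you would use `version="$(node --version)"`:

```shell
required=16
version="v18.19.0"     # sample; in practice: version="$(node --version)"
major="${version#v}"   # strip the leading "v"  -> "18.19.0"
major="${major%%.*}"   # keep only the major component -> "18"
if [ "$major" -ge "$required" ]; then
  echo "Node.js $version is new enough"
else
  echo "Node.js $version is too old; need v${required}+" >&2
fi
```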
- support Obsidian embeds (aka transclusion) so that system prompts can be enriched with additional notes (planned for 1.5.0)
- Send embedded images to the model
- settings for default model, key, etc.
- setup yarn PnP style packages
- Add a README section explaining the nesting-to-conversation algorithm
- Make debug mode configurable, then feature-flag logging behind that
- implement user-friendly file selector for the system prompt file setting
- enable per-document / yaml-header overrides of model, system prompt, etc.
- Optionally add used model to each AI heading
- ignore `%...%` comment blocks
- Update manifest.json and CHANGELOG.md.
- Run `yarn run build`.
- Create a new GitHub release and tag it with e.g. 1.1.5.
- Upload the freshly built `main.js` and the updated `manifest.json` as binary attachments.