An AI-powered note management system that enables natural language interactions with your documents.
- 🤖 AI-Powered Analysis: Advanced note analysis and intelligent question answering
- 📁 Multi-Format Support: Seamlessly handle PDF, TXT, and MD files
- 🔄 Real-Time Processing: Live status updates during file processing
- 📎 Easy File Management: Intuitive drag-and-drop file uploads
- 💬 Interactive Chat: Natural conversation interface with your documents
- 🔄 Flexible AI Models: Switch between Ollama and OpenAI models
- 📊 Vector Database: Efficient document embedding and retrieval
- 🔍 RAG Implementation: State-of-the-art retrieval augmented generation
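The retrieval step behind the RAG feature can be sketched as follows. This is a minimal NumPy-only illustration of the nearest-neighbor lookup that FAISS performs; the `embed` function is a deterministic placeholder (the app actually uses `nomic-embed-text` via Ollama or OpenAI embeddings), and the chunk texts are made up for the example:

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    # Placeholder embedding: a fake deterministic vector built from byte values.
    # The real app embeds text with nomic-embed-text or OpenAI embeddings.
    vec = np.zeros(64)
    for i, ch in enumerate(text.encode()):
        vec[i % 64] += ch
    return vec / (np.linalg.norm(vec) + 1e-9)

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    # Cosine-similarity search over unit vectors, the same idea as a
    # FAISS IndexFlatIP lookup over normalized embeddings.
    index = np.stack([embed(c) for c in chunks])
    scores = index @ embed(query)
    top = np.argsort(scores)[::-1][:k]
    return [chunks[i] for i in top]

chunks = [
    "FAISS stores document embeddings for fast similarity search.",
    "Tailwind CSS is used for styling the frontend.",
    "RAG feeds retrieved chunks to the LLM as context.",
]
print(retrieve("How are embeddings searched?", chunks, k=1))
```

The retrieved chunks are then passed to the LLM as context, which is what makes the answers document-aware.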
- Framework: Next.js 14.0
- UI Components: shadcn/ui
- Styling: Tailwind CSS
- Server: FastAPI
- AI Models:
- Ollama (llama3.1:8b)
- OpenAI (GPT-4)
- Vector Store: FAISS
- Node.js (v18 or higher)
- Python 3.9+
- Ollama (for local AI model support)
- Git
- Clone the repository:

```bash
git clone https://github.com/boiled-fish/RAG_from_scratch.git
cd RAG_from_scratch/note-helper-rag
```
- Create and activate the Python environment, then install server dependencies:

```bash
conda create -n note-helper python=3.9.19
conda activate note-helper
pip install -r server/requirements.txt
```
- Ollama setup: install Ollama from the official website, then pull the required models:

```bash
ollama pull llama3.1:8b
ollama pull nomic-embed-text:latest
```
- Environment variables: create a `.env` file in the root directory:

```env
# AI Model Configuration
OPENAI_API_KEY=your_openai_api_key
OLLAMA_BASE_URL=http://localhost:11434

# LangSmith Configuration
LANGCHAIN_TRACING_V2=true
LANGCHAIN_ENDPOINT=https://api.smith.langchain.com
LANGCHAIN_API_KEY=your_langsmith_api_key
LANGCHAIN_PROJECT=your_project_name
```
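For reference, the backend would read these variables roughly like this. This is a sketch that assumes the `.env` file has already been loaded into the process environment (e.g. via `python-dotenv`); `load_model_config` is a hypothetical helper name, not a function from the project:

```python
import os

def load_model_config() -> dict:
    """Collect AI model settings from environment variables.

    OPENAI_API_KEY has no safe default, so it stays None when unset;
    OLLAMA_BASE_URL falls back to the standard local Ollama address
    shown in the README.
    """
    return {
        "openai_api_key": os.getenv("OPENAI_API_KEY"),  # None if unset
        "ollama_base_url": os.getenv("OLLAMA_BASE_URL", "http://localhost:11434"),
        "langchain_project": os.getenv("LANGCHAIN_PROJECT"),
    }

config = load_model_config()
print(config["ollama_base_url"])
```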
- Install frontend dependencies:

```bash
npm install
```
- Launch the Ollama server:

```bash
ollama serve
```
- Start the frontend development server (in a new terminal):

```bash
npm run dev
```
- Start the backend server (in another terminal, from the directory containing `server.py`):

```bash
uvicorn server:app --reload --port 8000
```
- Visit http://localhost:3000 to access the application.
```text
note-helper-rag/
├── src/                      # Source code
│   ├── app/                  # Next.js app
│   │   ├── globals.css       # Global styles
│   │   ├── layout.tsx        # App layout
│   │   └── page.tsx          # Main page
│   ├── components/           # React components
│   │   ├── note_helper.tsx   # Main app component
│   │   └── ui/               # UI components
│   └── server/               # Backend
│       ├── server.py         # FastAPI server
│       ├── rag_langchain.py  # RAG implementation
│       └── requirements.txt  # Python dependencies
├── py_client/                # Python desktop client
│   └── note_helper.py        # PyQt5 client
└── package.json              # Node.js dependencies
```
- Select note path:
  - Use the default path (`./note-helper-rag/note`) or choose a custom directory
- Process notes:
  - The system automatically embeds each note and saves the embeddings to the vector database; when you add new notes, it processes them and updates the index.
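The incremental update described above can be sketched as follows. This is a simplified in-memory model: a real run would use FAISS and the Ollama/OpenAI embedders, and content hashing is one plausible way to detect new or changed notes, not necessarily what the project does internally:

```python
import hashlib

class NoteIndex:
    """Toy stand-in for the vector store: tracks which note contents have
    already been embedded and only re-processes new or changed ones."""

    def __init__(self):
        self.seen: dict[str, str] = {}               # filename -> content hash
        self.vectors: dict[str, list[float]] = {}    # filename -> embedding

    def _embed(self, text: str) -> list[float]:
        # Placeholder embedding; the app uses nomic-embed-text or OpenAI.
        return [len(text) / 100.0, text.count(" ") / 10.0]

    def sync(self, notes: dict[str, str]) -> list[str]:
        """Embed notes that are new or whose content changed; return their names."""
        updated = []
        for name, content in notes.items():
            digest = hashlib.sha256(content.encode()).hexdigest()
            if self.seen.get(name) != digest:
                self.vectors[name] = self._embed(content)
                self.seen[name] = digest
                updated.append(name)
        return updated

index = NoteIndex()
index.sync({"a.md": "first note"})
changed = index.sync({"a.md": "first note", "b.md": "second note"})
print(changed)  # ["b.md"], only the new note is embedded
```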
- Toggle between Ollama and OpenAI models via the dropdown in the top-right corner
- Default: Ollama (llama3.1:8b)
- Ask questions about your documents
- System uses RAG to provide context-aware responses
- Supports general chat functionality
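"Context-aware responses" in practice means stitching the retrieved note excerpts into the model's prompt. A minimal sketch, where the prompt wording and the `build_rag_prompt` name are illustrative rather than the project's actual template:

```python
def build_rag_prompt(question: str, retrieved_chunks: list[str]) -> str:
    """Combine retrieved note excerpts with the user's question so the
    LLM answers from the notes rather than from memory alone."""
    context = "\n\n".join(f"- {chunk}" for chunk in retrieved_chunks)
    return (
        "Answer the question using only the following notes.\n\n"
        f"Notes:\n{context}\n\n"
        f"Question: {question}\n"
    )

prompt = build_rag_prompt(
    "When is the meeting?",
    ["Team meeting scheduled for Friday at 10am."],
)
print(prompt)
```

The general-chat mode simply skips the retrieval step and sends the question without attached context.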
Contributions are welcome! Please feel free to submit a Pull Request.
- Fork the repository
- Create your feature branch (`git checkout -b feature/AmazingFeature`)
- Commit your changes (`git commit -m 'Add some AmazingFeature'`)
- Push to the branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request