# Mental Health Support App

This project is a mental health support application that leverages Google's Gemini LLM and FastAPI to provide empathetic responses, coping strategies, and resources for professional help. Users can discuss their mental health concerns anonymously and receive support in real time.
## Table of Contents

- Features
- Tech Stack
- Installation
- Configuration
- Usage
- API Documentation
- Frontend (Streamlit)
- Backend (FastAPI)
- Best Practices
- Future Enhancements
## Features

- Anonymous Mental Health Support: Users can submit concerns without revealing their identity.
- Empathetic Responses: Powered by Gemini LLM, the app provides empathetic and non-judgmental responses.
- Coping Strategies: Users receive personalized coping strategies based on their input.
- Professional Resources: The app suggests professional mental health resources if needed.
- Interactive Client Interface: Simple and intuitive user interface using Streamlit.
## Tech Stack

- Backend: FastAPI
- Frontend: Streamlit
- LLM: Google Gemini
- Other Libraries:
  - `requests` for API calls
  - `pydantic` for data validation in FastAPI
  - `uvicorn` to run the FastAPI server
## Installation

Clone the repository:

```bash
git clone https://github.com/mr-rakesh-ranjan/mental-health-support-app.git
cd mental-health-support-app
```
It’s recommended to use a virtual environment to avoid dependency conflicts.
```bash
python3 -m venv venv
source venv/bin/activate  # For Linux/macOS
venv\Scripts\activate     # For Windows
pip install -r requirements.txt
```
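The `requirements.txt` would, at minimum, list the libraries named in the Tech Stack section. A plausible sketch (package names such as `google-generativeai` and the inclusion of `python-dotenv` are assumptions; pin versions as appropriate):

```
fastapi
uvicorn
streamlit
pydantic
requests
python-dotenv
google-generativeai
```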
## Configuration

You need a Gemini API key to interact with the Gemini LLM.

- Create a `.env` file or set an environment variable for your API key:

```
GEMINI_API_KEY=your_gemini_api_key
```

- Alternatively, update the `config.py` file to hardcode your API key (not recommended for production).
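Reading the key at startup can be a one-liner, but failing fast with a clear error saves debugging later. A minimal sketch using only the standard library (the function name is illustrative; if you use a `.env` file, load it first with `python-dotenv`):

```python
import os

def get_gemini_api_key() -> str:
    """Read the Gemini API key from the environment, failing fast if it is missing."""
    key = os.environ.get("GEMINI_API_KEY")
    if not key:
        raise RuntimeError("GEMINI_API_KEY is not set; see the Configuration section.")
    return key
```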
## Usage

First, start the backend API service with FastAPI and Uvicorn (run this from the project root, since the app is referenced as `backend.main`):

```bash
uvicorn backend.main:app --reload
```

This starts the backend at http://localhost:8000.
Next, start the Streamlit client interface where users can submit their concerns (also from the project root):

```bash
streamlit run frontend/app.py
```

This launches the frontend in your browser at http://localhost:8501.
## API Documentation

The FastAPI backend exposes the following endpoints:
- Description: Analyzes user input to provide empathetic responses and coping strategies.
- Request body (JSON):

```json
{
  "text": "I feel very stressed and anxious about work."
}
```

- Response (JSON):

```json
{
  "llm_response": "It's completely normal to feel anxious during tough times at work. Try taking breaks and practicing mindfulness.",
  "coping_strategies": [
    "Take deep breaths and focus on the present moment.",
    "Engage in a hobby or activity that brings you joy.",
    "Reach out to a trusted friend or family member."
  ]
}
```
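Client code can unpack the response shown above with the standard library alone. A small sketch (field names follow the example response; the helper name is illustrative):

```python
import json

def parse_analysis(payload: str) -> tuple[str, list[str]]:
    """Extract the empathetic response and coping strategies from the JSON body."""
    data = json.loads(payload)
    return data["llm_response"], data.get("coping_strategies", [])
```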
- Description: Provides a list of professional mental health resources.
- Response (JSON):

```json
[
  {"name": "National Suicide Prevention Lifeline", "contact": "1-800-273-8255"},
  {"name": "BetterHelp", "website": "https://www.betterhelp.com"},
  {"name": "Talkspace", "website": "https://www.talkspace.com"}
]
```
## Frontend (Streamlit)

The Streamlit client provides an easy-to-use interface for users to input their mental health concerns.
- User Input: Users can submit their text, which is sent to the FastAPI backend for analysis.
- Real-Time Response: The app shows LLM-generated empathetic responses and coping strategies directly on the interface.
- Enter your mental health concerns in the provided text area.
- Click Submit to receive suggestions and feedback.
- View the Empathetic Response and a list of Coping Strategies.
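Under the hood, submitting the form amounts to a JSON POST to the backend. A minimal standard-library sketch of building that request (the `/analyze` path is an assumption; substitute the actual endpoint exposed by the backend):

```python
import json
import urllib.request

BACKEND_URL = "http://localhost:8000"  # from the Usage section

def build_analysis_request(text: str) -> urllib.request.Request:
    """Build the POST request the frontend would send to the backend.

    The "/analyze" path below is a hypothetical endpoint name.
    """
    body = json.dumps({"text": text}).encode("utf-8")
    return urllib.request.Request(
        f"{BACKEND_URL}/analyze",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
```

The returned `Request` can be sent with `urllib.request.urlopen(...)` once the backend is running.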
## Backend (FastAPI)

The FastAPI backend provides the main logic for processing input, calling the Gemini LLM API, and returning coping strategies.

- LLM Service: Handles communication with the Gemini API to generate responses based on user input.
- Coping Strategies Service: This module returns helpful strategies and professional resources.
- Data Validation: The input is validated using Pydantic models.
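As a rough sketch of how the coping-strategies service might map input to suggestions (the keyword matching below is purely illustrative; the actual service may delegate this to the LLM):

```python
# Illustrative keyword-to-strategy map; the real service's content may differ.
_STRATEGIES = {
    "anxious": "Take deep breaths and focus on the present moment.",
    "stressed": "Take short breaks and practice mindfulness.",
    "lonely": "Reach out to a trusted friend or family member.",
}

_DEFAULT = "Engage in a hobby or activity that brings you joy."

def suggest_strategies(text: str) -> list[str]:
    """Return coping strategies whose keywords appear in the user's text."""
    lowered = text.lower()
    matches = [tip for word, tip in _STRATEGIES.items() if word in lowered]
    return matches or [_DEFAULT]
```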
You can run the FastAPI server locally using `uvicorn`:

```bash
uvicorn backend.main:app --reload
```
## Best Practices

- API Key Management: Store your Gemini API key in environment variables, not in code.
- Input Sanitization: Ensure proper input sanitization to avoid malicious content being processed.
- Separation of Concerns: The project is divided into backend and frontend components for better modularity and maintainability.
- Service-Oriented: The backend services (LLM, coping strategies) are isolated into separate modules for future scalability.
- Containerization: Consider using Docker to containerize the FastAPI and Streamlit services for easy deployment.
- Caching: Implement caching for frequently used resources (e.g., coping strategies).
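The caching suggestion can be as simple as memoizing the rarely-changing resource list with the standard library (`get_resources` is a hypothetical name; a real implementation might read from a file or database):

```python
from functools import lru_cache

@lru_cache(maxsize=1)
def get_resources() -> tuple[dict, ...]:
    """Return the resource list, computed once per process and cached thereafter."""
    # A tuple (rather than a list) keeps the cached value immutable.
    return (
        {"name": "National Suicide Prevention Lifeline", "contact": "1-800-273-8255"},
        {"name": "BetterHelp", "website": "https://www.betterhelp.com"},
    )
```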
## Future Enhancements

- User Authentication: Implement user authentication for personalized experiences and mental health tracking over time.
- Analytics Dashboard: Use Streamlit or another frontend to provide users with data on their progress and insights.
- Advanced NLP Models: Incorporate more advanced NLP techniques or custom-trained models for better text understanding.
- Mental Health Assessment: Expand the service to offer assessments for various mental health conditions.
- Support for Multiple Languages: Implement support for users in different languages.
## License

This project is licensed under the MIT License. See the LICENSE file for more information.