Medi-Care uses LLaVA-Med base weights for visual question answering on medical images, paired with the LLaMA model for answer generation. This combination provides efficient and accurate diagnostic support.
- Clone the repository:

  ```shell
  git clone https://github.com/rahulsharmavishwakarma/medi-care.git
  cd medi-care
  ```
- Set up a virtual environment (optional but recommended):

  ```shell
  python -m venv env
  source env/bin/activate  # On Windows use `env\Scripts\activate`
  ```
- Upgrade pip:

  ```shell
  pip install --upgrade pip
  ```
- Install the required libraries. Ensure you have a `requirements.txt` file, then run:

  ```shell
  pip install -r requirements.txt
  ```
- Install Git Large File Storage (LFS) if required:

  ```shell
  git lfs install
  ```
To start inference, run the following commands, each in its own terminal (the controller, worker, and web server run concurrently):
- Start the controller:

  ```shell
  python -m llava.serve.controller --host 0.0.0.0 --port 10008
  ```
- Start the model worker:

  ```shell
  python -m llava.serve.model_worker --host 0.0.0.0 --controller http://localhost:10008 --port 40000 --worker http://localhost:40000 --model-path /teamspace/studios/this_studio/Medi-Care/LLaVA-Med-weights --multi-modal --num-gpus 1
  ```
- Run the Gradio web server:

  ```shell
  python -m llava.serve.gradio_web_server --controller http://localhost:10008 --share
  ```
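Because the three services must run concurrently, it can be convenient to wrap the commands above in a single launcher script. A minimal sketch, assuming a POSIX shell; the ports and model path are the ones used above, while the `sleep` durations are rough guesses that may need tuning for your hardware:

```shell
#!/usr/bin/env bash
# Launch controller, worker, and web UI. The controller must be listening
# before the worker registers, and the worker must be up before the UI
# can serve answers.
set -e

python -m llava.serve.controller --host 0.0.0.0 --port 10008 &
sleep 5   # assumed delay: give the controller time to start listening

python -m llava.serve.model_worker --host 0.0.0.0 \
    --controller http://localhost:10008 \
    --port 40000 --worker http://localhost:40000 \
    --model-path /teamspace/studios/this_studio/Medi-Care/LLaVA-Med-weights \
    --multi-modal --num-gpus 1 &
sleep 15  # assumed delay: give the worker time to load weights and register

python -m llava.serve.gradio_web_server --controller http://localhost:10008 --share
```

Running the first two services in the background with `&` keeps everything in one terminal; use `kill %1 %2` or close the terminal to stop them.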
We welcome contributions from the community. To contribute:
- Fork the repository.
- Create a new branch (`git checkout -b feature-branch`).
- Make your changes.
- Commit your changes (`git commit -m 'Add some feature'`).
- Push to the branch (`git push origin feature-branch`).
- Open a Pull Request.
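The branch-commit-push workflow above can be sketched end to end. This demonstration runs in a throwaway repository rather than a real fork, and the file name, author identity, and commit message are placeholders:

```shell
# Demonstrate the contribution workflow in a scratch repository.
# In practice you would fork medi-care on GitHub and clone your fork first.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email "you@example.com"   # placeholder identity
git config user.name  "Your Name"
git commit -q --allow-empty -m 'Initial commit'

git checkout -q -b feature-branch          # create a new branch
echo "demo change" > feature.txt           # make your changes
git add feature.txt
git commit -q -m 'Add some feature'        # commit your changes

git branch --show-current                  # prints: feature-branch
# Next steps (against a real fork): git push origin feature-branch,
# then open a Pull Request on GitHub.
```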