This repository was created as our final capstone project to finish the neuefische Data Science boot camp. You can take a look at the recording and the slides of our final presentation.
Position-based image matching is used in 3D scanning of real objects, normally under calibrated conditions. Using LoFTR, we match images regardless of size, lighting conditions, obstacles, and even photo filters, enabling the first step in the digital 3D preservation of monuments and landmarks from mixed public images.
The EDA notebook gives an overview of the dataset.
The LoFTR notebook demonstrates how to run LoFTR with PyTorch and how to plot matched images. The standalone Python script can be used to calculate matches for all possible image pairs in a given folder.
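The notebook and script are the authoritative reference; as a rough illustration, a minimal sketch of matching a single image pair with a pretrained LoFTR model might look like the following. The use of the kornia implementation, the file names, and the preprocessing are assumptions for the sketch, not necessarily what this repo does:

```python
# Minimal sketch: match one image pair with the pretrained LoFTR model from kornia.
# kornia, the file names, and the preprocessing here are illustrative assumptions.
import torch
import kornia as K
import kornia.feature as KF

matcher = KF.LoFTR(pretrained="outdoor").eval()

# Load both images as 1x1xHxW grayscale float tensors in [0, 1].
img0 = K.io.load_image("image_a.jpg", K.io.ImageLoadType.GRAY32)[None, ...]
img1 = K.io.load_image("image_b.jpg", K.io.ImageLoadType.GRAY32)[None, ...]

with torch.inference_mode():
    out = matcher({"image0": img0, "image1": img1})

# Matched keypoint coordinates (one row per match) and per-match confidences.
kpts0, kpts1, conf = out["keypoints0"], out["keypoints1"], out["confidence"]
print(f"{len(kpts0)} matches, mean confidence {conf.mean():.3f}")
```

Plotting the matches then amounts to drawing the correspondences between `kpts0` and `kpts1` on top of the two images, which is the kind of visualization the notebook demonstrates.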
A dashboard, created using Plotly Dash, makes it easy to navigate through the dataset, plot matches for all image pairs, and even allows matching of uploaded custom images.
To use it after setup (see below), activate the virtual environment and navigate to the dashboard folder:

```bash
source .venv/bin/activate
cd dashboard
```

and run the app from there:

```bash
python app.py
```

The dashboard can then be reached in a browser at http://127.0.0.1:8050.
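For orientation, a Plotly Dash app of this kind follows the pattern sketched below; the layout, component ids, and callback are purely illustrative and not the actual contents of dashboard/app.py:

```python
# Illustrative skeleton of a Dash app; the real dashboard/app.py has its own
# layout, callbacks, and matching logic.
from dash import Dash, dcc, html, Input, Output

app = Dash(__name__)

app.layout = html.Div(
    [
        html.H1("Image matching dashboard"),
        dcc.Dropdown(id="image-pair", options=[], placeholder="Select an image pair"),
        dcc.Graph(id="match-plot"),
    ]
)

@app.callback(Output("match-plot", "figure"), Input("image-pair", "value"))
def update_plot(pair):
    # In the real app this would load the two images, look up or compute the
    # LoFTR matches, and draw the correspondences; here we return an empty figure.
    return {}

if __name__ == "__main__":
    # Dash serves on http://127.0.0.1:8050 by default.
    app.run_server(debug=True)
```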
- pyenv with Python 3.9.8
- Data from the Image Matching Challenge 2022.
Use the requirements file in this repo to create a new environment:
```bash
make setup
```

or

```bash
pyenv local 3.9.8
python -m venv .venv
source .venv/bin/activate
pip install --upgrade pip
pip install -r requirements.txt
```
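A quick way to confirm the environment is ready is to check that the core libraries import, assuming torch (and kornia, if that is the LoFTR implementation used) are pinned in requirements.txt:

```python
# Quick sanity check of the environment; assumes torch and kornia are installed.
import torch
import kornia

print("torch", torch.__version__, "CUDA available:", torch.cuda.is_available())
print("kornia", kornia.__version__)
```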