Multi-stage semantic segmentation of land cover in the Peak District using high-resolution RGB aerial imagery
This is the code corresponding to the following publication. If you use our code, please cite:
van der Plas, T.L.; Geikie, S.T.; Alexander, D.G.; Simms, D.M. Multi-Stage Semantic Segmentation Quantifies Fragmentation of Small Habitats at a Landscape Scale. Remote Sensing 2023, 15, 5277. https://doi.org/10.3390/rs15225277
This repository is a fork of the original land cover segmentation model, with additional processing steps that extend the model's application. In this modified version, we incorporate additional labelling across image tiles and in larger grouped clusters, and we alter the soil classifications and post-processing to improve the segmentation outputs and processing speed.
The extended functionality includes:
- Running the original land cover segmentation model to predict initial land cover classes.
- Increasing the training and test data to 5,025 patches (512x512 px; 3.49% of the study area), with new sample selection across image tiles using a .vrt mosaic (see the sketch after this list).
- Inference and integration of soil labels, adding soil-specific classifications to the land cover outputs at the raster stage (a raster-merge sketch follows this list).
- A new OS_NGD dictionary to match the class schema required for the modelling work.
- Dissolving small patches below a specified threshold area to enhance spatial coherence in the final output (see the sieve-filter sketch below).
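The sample selection across image tiles relies on a virtual mosaic. Below is a minimal sketch, not the repository's exact sampling code, of how a .vrt can be built over a set of tiles with GDAL so that 512x512 px patches can be read from a single raster source; the tile file names are placeholders.

```python
from osgeo import gdal

# Hypothetical tile files; in practice, list all RGB tiles covering the study area
tile_paths = ["tiles/tile_001.tif", "tiles/tile_002.tif"]

# Build a virtual mosaic that references the tiles without copying any pixel data
vrt = gdal.BuildVRT("mosaic.vrt", tile_paths)
vrt.FlushCache()  # write mosaic.vrt to disk

# 512x512 px patches can then be read at arbitrary offsets within the mosaic
ds = gdal.Open("mosaic.vrt")
patch = ds.ReadAsArray(xoff=0, yoff=0, xsize=512, ysize=512)  # shape: (bands, 512, 512)
```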
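The soil labels are merged into the land cover raster after prediction. Below is a minimal sketch of one way to do this, assuming both rasters share the same grid; the class codes and file names are illustrative, not the repository's actual schema.

```python
import numpy as np
import rasterio

with rasterio.open("landcover_prediction.tif") as lc_src, \
        rasterio.open("soil_prediction.tif") as soil_src:
    lc = lc_src.read(1)
    soil = soil_src.read(1)
    profile = lc_src.profile

BARE_SOIL_CLASS = 7       # hypothetical land cover code for bare soil
SOIL_CLASS_OFFSET = 100   # hypothetical offset so soil sub-classes do not clash with LC codes

# Where the land cover model predicts bare soil, substitute the soil-specific class
merged = np.where(lc == BARE_SOIL_CLASS, SOIL_CLASS_OFFSET + soil, lc)

with rasterio.open("landcover_with_soil.tif", "w", **profile) as dst:
    dst.write(merged.astype(profile["dtype"]), 1)
```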
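Dissolving patches below the threshold area can be done directly on the classified raster; one option is `rasterio.features.sieve`, which merges connected regions smaller than a pixel-count threshold into their largest neighbour. This is a sketch of the idea rather than the repository's exact post-processing; the file names and threshold are illustrative.

```python
import rasterio
from rasterio.features import sieve

MIN_PIXELS = 25  # pixel-count threshold; convert your minimum area using the raster resolution

with rasterio.open("landcover_with_soil.tif") as src:
    classes = src.read(1)
    profile = src.profile

# Replace connected regions smaller than MIN_PIXELS with the value of their largest neighbour
cleaned = sieve(classes, size=MIN_PIXELS, connectivity=4)

with rasterio.open("landcover_cleaned.tif", "w", **profile) as dst:
    dst.write(cleaned, 1)
```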
Original README
- To ensure you've got all the necessary packages, follow the instructions in `envs/README_envs.md` to install a new conda environment with the correct set of packages.
- Set your user-specific file paths in `content/data_paths.json`. There is a "new-username" template that you can use to enter your paths (using your computer username). An explanation of what each path is for is given in `content/README_datapaths.md`. A minimal sketch of loading this file is given at the end of this README.
- If you want to train new models using the same data set as our paper, you can download the images from this data repository.
- If you want to use the CNN models from our paper for predicting new image tiles, you can download these here.
- TBA: Download extra files if possible, or run without.
- Example notebook: please see `notebooks/Getting started.ipynb` for an example of how to load the data set, models, etc.
- Training a LC segmentation model: there is a script provided in `scripts/train_segmentation_network.py`. See the function call under `if __name__ == '__main__':` for an example of how to call the function. It trains a network using a folder of RGB image patches and a folder of LC annotation mask patches. These can be downloaded from our data repository (see above).
- Predicting LC of new images using an existing model: there is a script provided in `scripts/prediction_of_trained_network.py`. See the function call under `if __name__ == '__main__':` for an example of how to call the function. It predicts entire RGB image tiles by splitting each tile into patches, predicting the LC of each patch, and then reconstructing and saving the tile. The CNN models from the paper can be downloaded here. A framework-agnostic sketch of this patch workflow is given at the end of this README.
- Figures for the paper are generated in Jupyter notebooks; see all notebooks in `notebooks/` with a file name starting with `Figure ...`.
- Additionally, we have created an interpretation key of all land cover classes at https://reports.peakdistrict.gov.uk/interpretation-key/docs/introduction.html
- For more details, please see our Remote Sensing publication.
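As a convenience, here is a minimal sketch of how the per-user configuration in `content/data_paths.json` can be loaded by matching your computer username; the structure of each user's entry is an assumption here, see `content/README_datapaths.md` for the actual keys.

```python
import getpass
import json

# Load the path configuration and select the block matching your computer username
with open("content/data_paths.json") as f:
    all_paths = json.load(f)

user_paths = all_paths[getpass.getuser()]  # copy the "new-username" template to your username first
print(user_paths)
```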
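Finally, a minimal, framework-agnostic sketch of the tile-to-patches-to-prediction workflow used when predicting new image tiles. `model_predict` is a stand-in for whichever model you load, not the repository's actual API, and edge handling is simplified.

```python
import numpy as np

def predict_tile(tile_rgb, model_predict, patch_size=512):
    """Split an RGB tile (H, W, 3) into patches, predict each, and reassemble a label map."""
    h, w, _ = tile_rgb.shape
    labels = np.zeros((h, w), dtype=np.uint8)
    for y in range(0, h, patch_size):
        for x in range(0, w, patch_size):
            patch = tile_rgb[y:y + patch_size, x:x + patch_size]
            # model_predict is assumed to return a (patch_h, patch_w) array of class indices
            labels[y:y + patch.shape[0], x:x + patch.shape[1]] = model_predict(patch)
    return labels
```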