diff --git a/README.md b/README.md
index 7710ea0..579efa1 100644
--- a/README.md
+++ b/README.md
@@ -1,7 +1,13 @@
 # HEST
 
+#### What does the hest library provide?
+- Missing file imputation and automatic alignment for Visium
+- Fast functions for pooling transcripts and tessellating ST/H&E pairs into patches (these functions are GPU-optimized with CUCIM if CUDA is available)
+- Functions for interacting with the HEST-1k dataset
+- Deep learning-based or Otsu-based tissue segmentation for both H&E and IHC stains
+- Compatibility with Scanpy and SpatialData
-Hest provides legacy readers for the different Spatial Transcriptomics data formats supporting H&E (Visium/Visium-HD, Xenium and ST) and for automatically aligning them with their associated histology image. Hest was used to assemble the HEST-1k dataset, processing challenging ST datasets from a wide variety of sources and converting them to formats commonly used in pathology (.tif, Scanpy AnnData). The framework also provides helper functions for pooling transcripts and tesselating slides into patches centered around ST spots.
+Hest was used to assemble the HEST-1k dataset, processing challenging ST datasets from a wide variety of sources and converting them to formats commonly used in pathology (.tif, Scanpy AnnData).
@@ -46,11 +52,11 @@ apt install libvips libvips-dev openslide-tools
 
 NOTE: hest was only tested on linux/macOS machines, please report any bugs in the GitHub issues.
 
-## Install CONCH/UNI (Optional, for HEST-bench only)
+### CONCH/UNI installation (Optional, for HEST-bench only)
 
 If you want to benchmark CONCH/UNI, additional steps are necesary
 
-### Install CONCH (model + weights)
+#### CONCH installation (model + weights)
 
 1. Request access to the model weights from the Huggingface model page [here](https://huggingface.co/MahmoodLab/CONCH).
 
@@ -64,7 +70,7 @@ cd CONCH
 pip install -e .
 ```
 
-### Install UNI (weights only)
+#### UNI installation (weights only)
 
 1. Request access to the model weights from the Huggingface model page [here](https://huggingface.co/MahmoodLab/UNI).
 
diff --git a/src/hest/segmentation/segmentation.py b/src/hest/segmentation/segmentation.py
index aa5b5df..c6b3e33 100644
--- a/src/hest/segmentation/segmentation.py
+++ b/src/hest/segmentation/segmentation.py
@@ -83,7 +83,11 @@ def segment_tissue_deep(img: Union[np.ndarray, openslide.OpenSlide, 'CuImage', W
         stride=1
     )
 
-    checkpoint = torch.load(weights_path)
+    if torch.cuda.is_available():
+        checkpoint = torch.load(weights_path)
+    else:
+        checkpoint = torch.load(weights_path, map_location=torch.device('cpu'))
+
     new_state_dict = {}
     for key in checkpoint['state_dict']:
         if 'aux' in key:
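For context, the segmentation.py change above remaps checkpoint tensors to the CPU when CUDA is absent, so the deep tissue segmentation weights can still be loaded on CPU-only machines. Below is a minimal, self-contained sketch of the same `map_location` pattern; the helper name `load_segmentation_state_dict` is hypothetical (the patch applies the logic inline inside `segment_tissue_deep`), and the assumption that the checkpoint stores a `'state_dict'` entry with auxiliary-head keys containing `'aux'` is taken from the hunk above.

```python
import torch


def load_segmentation_state_dict(weights_path: str) -> dict:
    """Hypothetical helper mirroring the patched loading logic.

    Loads a checkpoint on the GPU when CUDA is available and falls back to
    the CPU otherwise, so weights saved on a GPU machine remain loadable.
    """
    # torch.load restores tensors to the device they were saved on by default;
    # without remapping, a CUDA-saved checkpoint raises a RuntimeError on CPU-only hosts.
    map_location = None if torch.cuda.is_available() else torch.device('cpu')
    checkpoint = torch.load(weights_path, map_location=map_location)

    # Drop auxiliary-head weights, as done in the loop following the patched lines.
    return {k: v for k, v in checkpoint['state_dict'].items() if 'aux' not in k}
```

A close equivalent is the single call `torch.load(weights_path, map_location='cuda' if torch.cuda.is_available() else 'cpu')`, which collapses the two branches of the patch into one line at the cost of always remapping to the current device.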