ℹ️ This is the main repository where scPRINT is built and maintained.
scPRINT is a large transformer model built for the inference of gene networks (connections between genes explaining the cell's expression profile) from scRNAseq data.
It uses novel encoding and decoding of the cell expression profile and new pre-training methodologies to learn a cell model.
scPRINT can be used to perform the following analyses:
- expression denoising: increase the resolution of your scRNAseq data
- cell embedding: generate a low-dimensional representation of your dataset
- label prediction: predict the cell type, disease, sequencer, sex, and ethnicity of your cells
- gene network inference: generate a gene network from any cell or cell cluster in your scRNAseq dataset
Read the manuscript if you would like to know more about scPRINT, and have a look at some of my X-plainers.
- scPRINT: Large Cell Model for scRNAseq data
- Table of Contents
- Install scPRINT
- Usage
- FAQ
- I want to generate gene networks from scRNAseq data:
- I want to generate cell embeddings and cell label predictions from scRNAseq data:
- I want to denoise my scRNAseq dataset:
- I want to generate an atlas-level embedding
- I need to generate gene tokens using pLLMs
- I want to pre-train scPRINT from scratch on my own data
- How can I find out if scPRINT was trained on my data?
- Can I use scPRINT on organisms other than human?
- How long does scPRINT take? What kind of resources do I need? (or alternatively: can I run scPRINT locally?)
- I have different scRNASeq batches. Should I integrate my data before running scPRINT?
- Where to find the gene embeddings?
- Documentation
- Model Weights
- Docker
- Development
- Work in progress (PR welcomed):
For the moment, scPRINT has been tested on macOS and Linux (Ubuntu 20.04) with Python 3.10. Its installation takes 10 minutes on average.
If you want to use flashattention2, note that it only supports Triton 2.0's MLIR version and torch==2.0.0 for now.
To use scPRINT, you will need to use lamin.ai. This is needed to load biological information such as genes, cell types, organisms, etc.
To start you will need to do:
conda create -n <env-name> python==3.10 # scPRINT might work with Python >3.10, but it is not tested
# then install one of the following (quote the extras, e.g. 'scprint[dev]', if your shell requires it):
pip install scprint # OR
pip install scprint[dev] # for the dev dependencies (building etc.) OR
pip install scprint[flash] # to use flashattention2 with triton: only if you have a compatible GPU (e.g. not available for Apple GPUs for now, see https://github.com/triton-lang/triton?tab=readme-ov-file#compatibility)
# OR pip install scprint[dev,flash]
lamin init --storage ./testdb --name test --schema bionty
If you are just starting with lamin and had to run lamin init, you will also need to populate your ontologies. This is because scPRINT uses ontologies to define its cell types, diseases, sexes, ethnicities, etc. You can do it manually or with our function:
from scdataloader.utils import populate_my_ontology
populate_my_ontology() # to populate everything (recommended) (can take 2-10 min)
populate_my_ontology( # the minimum for scPRINT to run some inferences (denoising, GRN inference)
    organisms=["NCBITaxon:10090", "NCBITaxon:9606"],
    sex=["PATO:0000384", "PATO:0000383"],
    celltypes=None,
    ethnicities=None,
    assays=None,
    tissues=None,
    diseases=None,
    dev_stages=None,
)
We make use of some additional packages we developed alongside scPRINT.
Please refer to their documentation for more information:
- scDataLoader: a dataloader for training large cell models.
- GRnnData: a package to work with gene networks from single cell data.
- benGRN: a package to benchmark gene network inference methods from single cell data.
scPRINT can run on machines without GPUs, but it will be slow. It is highly recommended to use a GPU for inference.
Once you have a GPU and have installed the required drivers, you may need to install a specific version of PyTorch that is compatible with your drivers (e.g. the nvidia 550 drivers come with CUDA toolkit 11.7 or 11.8, which might mean you need to reinstall a different flavor of PyTorch for things to work). For example, in my case on Linux the command was:
pip install torch==2.2.0 torchvision==0.17.0 torchaudio==2.2.0 --index-url https://download.pytorch.org/whl/cu118
I was able to test it with CUDA toolkit versions 11.7, 11.8, and 12.2.
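The mapping from a CUDA toolkit version to the matching PyTorch wheel index can also be scripted; this is a minimal sketch (the cuda_wheel_index helper is hypothetical, not part of scPRINT, and only builds the URL pattern shown in the command above):

```python
def cuda_wheel_index(cuda_version: str) -> str:
    """Build the PyTorch pip index URL for a given CUDA toolkit version,
    e.g. "11.8" -> "https://download.pytorch.org/whl/cu118"."""
    major, minor = cuda_version.split(".")[:2]
    return f"https://download.pytorch.org/whl/cu{major}{minor}"

# Example: reproduce the install command above for CUDA 11.8
cmd = (
    "pip install torch==2.2.0 torchvision==0.17.0 torchaudio==2.2.0 "
    f"--index-url {cuda_wheel_index('11.8')}"
)
print(cmd)
```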
If you want to use the latest version of scPRINT and work on the code yourself, use git clone and pip install -e instead of pip install:
git clone https://github.com/cantinilab/scPRINT
git clone https://github.com/jkobject/scDataLoader
git clone https://github.com/cantinilab/GRnnData
git clone https://github.com/jkobject/benGRN
pip install -e scPRINT[dev]
pip install -e scDataLoader[dev]
pip install -e GRnnData[dev]
pip install -e benGRN[dev]
This is the most minimal example of how scPRINT works:
import scanpy as sc
from lightning.pytorch import Trainer
from scprint import scPrint
from scprint.tasks import Denoiser  # Embedder and GNInfer are imported the same way
from scdataloader import DataModule
datamodule = DataModule(...)
model = scPrint(...)
# to train / fit / test the model
trainer = Trainer(...)
trainer.fit(model, datamodule=datamodule)
# to run predictions, use Denoiser (or Embedder, GNInfer)
denoiser = Denoiser(...)
adata = sc.read_h5ad(...)
denoiser(model, adata=adata)
...
or, from a bash command line
$ scprint fit/train/predict/test/denoise/embed/gninfer --config config/[medium|large|vlarge] ...
Find out more about the commands by running scprint --help or scprint [command] --help.
More examples of using the command line are available in the docs.
If you do not have triton installed, you will not be able to take advantage of GPU acceleration, but you can still use the model on CPU. In that case, if loading from a checkpoint that was trained with flashattention, you will need to specify transformer="normal" in the load_from_checkpoint function like so:
model = scPrint.load_from_checkpoint(
'../data/temp/last.ckpt', precpt_gene_emb=None,
transformer="normal")
An installation of scPRINT and a simple test of the denoiser are performed on each commit to the main branch with a GitHub Action and pytest workflow. This also provides an expected runtime for the installation and for a run of scPRINT.
We now explore the different usages of scPRINT:
-> Refer to the gene network inference section in this notebook.
-> More examples in this notebook ./notebooks/assessments/bench_omni.ipynb.
-> Refer to the embeddings and cell annotations section in this notebook.
-> Refer to the Denoising of B-cell section in this notebook.
-> More examples in our benchmark notebook ./notebooks/assessments/bench_denoising.ipynb.
-> Refer to the notebook nice_umap.ipynb.
To run scPRINT, you can optionally define the gene tokens using protein language model embeddings of genes. This is done by providing scPRINT, via the "precpt_gene_emb" parameter, with the path to a parquet file containing the precomputed embedding for each gene name.
-> To generate this file please refer to the notebook generate_gene_embeddings.
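For illustration, the expected layout of such a file is one embedding vector per gene; below is a minimal sketch with made-up values (the gene IDs, embedding dimension, and column naming here are placeholders, and the exact schema scPRINT expects may differ — see the notebook above):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
genes = ["ENSG00000139618", "ENSG00000141510"]  # example Ensembl gene IDs
dim = 8  # real protein-LLM embeddings are much larger

# one row per gene, one column per embedding dimension
emb = pd.DataFrame(
    rng.normal(size=(len(genes), dim)),
    index=genes,
    columns=[str(i) for i in range(dim)],
)
# emb.to_parquet("gene_embeddings.parquet")  # the file then passed via precpt_gene_emb
print(emb.shape)
```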
-> Refer to the documentation page pretrain scprint
If your data is available in cellxgene, scPRINT was likely trained on it. However, some cells and datasets were dropped due to low-quality data, and some were randomly removed to be part of the validation / test sets.
scPRINT has been pretrained on both human and mouse data, and can be used on any organism with a similar gene set. If you want to use scPRINT on a very different organism, you will need to generate gene embeddings for that organism and re-train scPRINT.
How long does scPRINT take? What kind of resources do I need? (or alternatively: can I run scPRINT locally?)
Please look at the supplementary tables in our manuscript.
scPRINT takes raw counts as input, so please don't use integrated data. Just give the raw counts to scPRINT and it will take care of the rest.
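A quick way to sanity-check that a matrix still contains raw (non-normalized) counts is to verify the values are non-negative whole numbers; a minimal sketch in plain numpy (this heuristic and the looks_like_raw_counts helper are illustrative, not part of scPRINT):

```python
import numpy as np

def looks_like_raw_counts(x: np.ndarray) -> bool:
    """Heuristic: raw counts are non-negative whole numbers."""
    return bool((x >= 0).all() and np.allclose(x, np.round(x)))

raw = np.array([[0, 3, 12], [1, 0, 7]], dtype=float)
# typical log-normalization destroys the integer structure
normalized = np.log1p(raw / raw.sum(axis=1, keepdims=True) * 1e4)

print(looks_like_raw_counts(raw))         # True
print(looks_like_raw_counts(normalized))  # False
```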
If you think you need the gene embeddings file for loading the model from a checkpoint, you don't, as the embeddings are also stored in the model weights. You just need to load the weights like this:
model = scPrint.load_from_checkpoint(
'../../data/temp/last.ckpt',
precpt_gene_emb=None,
)
You can also recreate the gene embedding file through this notebook. Just call the functions and it should recreate the file itself. The file is also available on Hugging Face.
For more information on usage, please see the documentation at https://www.jkobject.com/scPRINT/.
Model weights are available on Hugging Face.
By using the scPRINT Docker image, you can bypass the complexities of manual package installation and ensure a consistent deployment environment. This repository includes a Dockerfile that lets you build a container for the project; you can either build the image yourself or conveniently pull it from Docker Hub.
Make sure that you have the docker command line interface installed on your system.
A recommended way to install docker with the correct nvidia drivers on Linux is to use this script.
To build the Docker image from the provided Dockerfile, run the following command from the root directory of this repository:
docker build -t scprint:latest -f Dockerfile .
If you don't want to build the image yourself, you can pull it directly from Docker Hub:
docker pull jkobject/scprint:1.1.3
docker tag jkobject/scprint:1.1.3 scprint:latest
Once you have the image (either by building it or pulling it), you can start a container with:
docker run --gpus all --rm -it scprint:latest bash
Please note: when running the Docker container, ensure you mount any necessary folders using the -v option to access them inside the container.
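For example, to make a local data/ folder (a placeholder name) visible inside the container as /data:

```shell
# mount ./data on the host as /data inside the container
docker run --gpus all --rm -it \
  -v "$(pwd)/data:/data" \
  scprint:latest bash
```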
Read the CONTRIBUTING.md file.
Read the training runs document to learn more about how pre-training was performed and about its behavior.
The code coverage figure is not accurate, as I am using the command line interface for now; >50% of the code is covered by my current unit tests.
Acknowledgements: python template, laminDB, lightning.
- remove the triton dependencies
- add version with additional labels (tissues, age) and organisms (mouse, zebrafish) and more datasets from cellxgene
- version with separate transformer blocks for the encoding part of the bottleneck learning and for the cell embeddings
- improve classifier to output uncertainties and topK predictions when unsure
- set up the latest lamindb version
Awesome Large Cell Model created by Jeremie Kalfon.