Turn 'noise' to signal: accurately rectify millions of erroneous short reads through graph learning on edit distances
noise2read originates from a computable rule translated from the PCR erring mechanism: a rare read is erroneous if it has a neighboring read of high abundance. It turns erroneous reads back into their original state without introducing any non-existent sequences into the short-read set (< 300 bp), covering DNA and RNA sequencing (DNA/RNA-seq), small RNA, unique molecular identifier (UMI) and amplicon sequencing data.
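The sketch below is a minimal, simplified illustration of that rule: a rare read that sits at edit distance 1 (substitutions only here) from a single high-abundance read is rectified to that neighbor, and only reads already present in the data are used as targets. The thresholds and the substitution-only neighborhood are assumptions chosen for the example; this is not noise2read's implementation, which builds a read graph over edit distances and trains classifiers for ambiguous cases.

from collections import Counter

# Illustrative thresholds (assumptions for this sketch, not noise2read's defaults):
# a read is "rare" if it occurs at most RARE_MAX times, and a neighbor is
# "abundant" if it occurs at least ABUNDANT_MIN times.
RARE_MAX = 1
ABUNDANT_MIN = 5
BASES = "ACGT"

def one_substitution_neighbors(read):
    """Yield every read at substitution edit distance 1 from `read`."""
    for i, base in enumerate(read):
        for b in BASES:
            if b != base:
                yield read[:i] + b + read[i + 1:]

def correct_reads(reads):
    """Rectify each rare read that has a unique abundant 1-edit neighbor.

    Only reads already present in the data are used as correction targets,
    so no non-existent sequences are introduced into the read set.
    """
    counts = Counter(reads)
    corrected = []
    for read in reads:
        target = read
        if counts[read] <= RARE_MAX:
            candidates = [n for n in one_substitution_neighbors(read)
                          if counts.get(n, 0) >= ABUNDANT_MIN]
            if len(candidates) == 1:  # only the unambiguous case in this sketch
                target = candidates[0]
        corrected.append(target)
    return corrected

if __name__ == "__main__":
    reads = ["ACGTACGT"] * 6 + ["ACGTACCT"]  # one rare read, likely a PCR/sequencing error
    print(correct_reads(reads))              # the rare read is rectified to the abundant one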
Click noise2read to jump to its documentation
Note: All experimental results in this study were obtained with noise2read version 0.2.7.
Quick-run example for testing noise2read, using only 1 Optuna trial and 10 XGBoost estimators; these are not the parameters used in our paper (a toy sketch of what these reduced settings mean is given after the quick-run command below).
- noise2read installation
Please refer to QuickStart or Installation.
- Clone the code and datasets from GitHub
git clone https://github.com/Jappy0/noise2read
cd noise2read/Examples/simulated_miRNAs
Quick-run test of noise2read on D14
- With correction of highly ambiguous errors, using the GPU for training (about 4 minutes with 26 cores and a GPU)
noise2read -m correction -c ../../config/quick_test.ini -a True -g gpu_hist
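For orientation only, the toy sketch below shows how the quick-run settings (a single Optuna trial and 10 XGBoost estimators) keep hyperparameter search cheap. It is not noise2read's training code; the dataset and the searched parameters are placeholders, and tree_method is set to "hist" here (with a GPU you would use "gpu_hist", mirroring the -g gpu_hist option above).

# Toy illustration of the quick-run settings; not noise2read's training code.
import optuna
import xgboost as xgb
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

def objective(trial):
    params = {
        "n_estimators": 10,  # quick-run value; not the setting used in the paper
        "max_depth": trial.suggest_int("max_depth", 3, 8),
        "learning_rate": trial.suggest_float("learning_rate", 0.01, 0.3, log=True),
        "tree_method": "hist",  # use "gpu_hist" when training on a GPU
    }
    model = xgb.XGBClassifier(**params)
    return cross_val_score(model, X, y, cv=3).mean()

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=1)  # quick-run value; more trials for the paper's results
print(study.best_params)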
Examples of using noise2read to correct simulated miRNA data with mimicked UMIs
Take data sets D14 and D16 as examples.
- noise2read installation
Please refer to QuickStart or Installation.
- Clone the code and datasets from GitHub
git clone https://github.com/Jappy0/noise2read
cd noise2read/Examples/simulated_miRNAs
- Reproduce the evaluation results for D14 and D16 from the raw, true and corrected datasets (a rough sketch of the underlying comparison follows the two commands below)
noise2read -m evaluation -i ./raw/D14_umi_miRNA_mix.fa -t ./true/D14_umi_miRNA_mix.fa -r ./correct/D14_umi_miRNA_mix.fasta -d ./D14
noise2read -m evaluation -i ./raw/D16_umi_miRNA_subs.fa -t ./true/D16_umi_miRNA_subs.fa -r ./correct/D16_umi_miRNA_subs.fasta -d ./D16
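As a rough illustration of the kind of comparison the evaluation mode performs, the sketch below counts how many reads differ from the ground truth before and after correction. It assumes the raw, true and corrected FASTA files list the same reads in the same order (as in these simulated data sets) and is not the tool's actual metric set.

# Rough sketch of comparing raw/true/corrected read sets (illustrative only;
# noise2read's evaluation mode reports its own metrics).
def read_fasta(path):
    """Return the sequences of a FASTA file in file order."""
    seqs, current = [], []
    with open(path) as handle:
        for line in handle:
            line = line.strip()
            if line.startswith(">"):
                if current:
                    seqs.append("".join(current))
                    current = []
            elif line:
                current.append(line)
    if current:
        seqs.append("".join(current))
    return seqs

def error_counts(raw_path, true_path, corrected_path):
    """Count reads mismatching the ground truth before and after correction.

    Assumes the three files contain the same reads in the same order.
    """
    raw = read_fasta(raw_path)
    true = read_fasta(true_path)
    corrected = read_fasta(corrected_path)
    before = sum(r != t for r, t in zip(raw, true))
    after = sum(c != t for c, t in zip(corrected, true))
    return before, after

# Example (paths as in the commands above):
# print(error_counts("./raw/D14_umi_miRNA_mix.fa",
#                    "./true/D14_umi_miRNA_mix.fa",
#                    "./correct/D14_umi_miRNA_mix.fasta"))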
Correcting D14
- With correction of highly ambiguous errors, using the GPU for training
noise2read -m correction -c ./configs/D14.ini
- Without correction of highly ambiguous errors, using the GPU for training
noise2read -m correction -c ./configs/D14_without_high.ini
Correcting D16
- With correction of highly ambiguous errors, using the GPU for training
noise2read -m correction -c ./configs/D16.ini
- Without correction of highly ambiguous errors, using the GPU for training
noise2read -m correction -c ./configs/D16_without_high.ini
Expected Results
Please find the expected log files and correction results in the folder noise2read of benchmark for data sets D14-D16. The results under noise2read and noise2read-1 correspond to correction with and without prediction of highly ambiguous errors, respectively.
Note: noise2read may produce corrected results that differ slightly from those under Examples/simulated_miRNAs/correct and correction. This is because the easy-to-use, automatic tuning of the classifiers' parameters explores a wide range of settings, so a different best model may be obtained in each training run; the final prediction results nevertheless remain stable within a certain range. We discuss this in the Discussion section of our paper.
Examples of using noise2read to correct the outcome sequences of ABEs and CBEs
- Clone the code
git clone https://github.com/Jappy0/noise2read
cd noise2read/CaseStudies
mkdir ABEs_CBEs
cd ABEs_CBEs
- Download the datasets under the data folder of D32_D33.
- Use noise2read to correct the datasets. Each experiment runs in about 13 minutes using 26 cores and a GPU for training.
noise2read -m correction -i ./data/D32_ABE_outcome_seqs.fasta -a False -d ./ABE/
noise2read -m correction -i ./data/D33_CBE_outcome_seqs.fasta -a False -d ./CBE/
- Expected Results
Please find the expected log files and correction results in the folder D32_D33. The results for correcting D32 and D33 are under the ABE and CBE folders, respectively.
Note: noise2read may produce corrected results that differ slightly from those under D32_ABE and D33_CBE of D32_D33. This is because the easy-to-use, automatic tuning of the classifiers' parameters explores a wide range of settings, so a different best model may be obtained in each training run; the final prediction results nevertheless remain stable within a certain range. We discuss this in the Discussion section of our paper.