I can confirm that this issue occurs after running LIONESS for roughly two hours on a sample data set with 108 samples and 3,000 expression levels (a configuration similar to the user's). The issue is not in LIONESS itself but in the pandaR package while computing the correlation matrix. This is likely a memory overflow: as currently implemented, LIONESS requires PANDA to take the union of the genes in the motif and the genes in the expression data set, which can be memory-intensive for large motifs. I am working on a suitable workaround.
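In the meantime, one way to shrink that gene union is to restrict both inputs to their shared genes before calling `lioness()`. This is only a hedged sketch, not a fix in the package: it assumes the expression matrix has genes as row names and that the motif data frame's second column holds target gene names (adjust the column index to your motif format).

```r
# Sketch of a memory-reducing workaround (assumptions noted in the lead-in):
# restrict the motif and expression data to their shared genes so PANDA's
# internal correlation matrix covers fewer genes.
shared    <- intersect(rownames(expr), MOTIF[, 2])   # assumes column 2 = target genes
motif_sub <- MOTIF[MOTIF[, 2] %in% shared, ]
expr_sub  <- expr[shared, ]

# Running with fewer cores also lowers peak memory, since each worker
# holds its own copy of the data.
result <- lioness(expr = expr_sub, motif = motif_sub, ppi = PPI,
                  network.inference.method = "panda", ncores = 1)
```

Whether this avoids the segfault depends on how much of the motif lies outside the expression data; it trades network coverage for a smaller correlation matrix.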
I tried a Slurm job for PANDA-LIONESS. I used 5 cores (80 GB of memory), but I encountered the following error:
```
*** caught segfault ***
address 0x2b9064976040, cause 'memory not mapped'

Traceback:
1: cor(t(expr), method = cor.method, use = "pairwise.complete.obs")
2: panda(motif, expr, ppi, ...)
3: lioness(expr = test2, motif = MOTIF, ppi = PPI, network.inference.method = "panda", ncores = 5)

An irrecoverable exception occurred. R is aborting now ...
/var/spool/slurmd/job1886295/slurm_script: line 13: 26930 Segmentation fault (core dumped) Rscript /udd/nhlna/NHS_ovca_proteomics/src/LIONESS.R
```