'relion - particle extracting' crashes with 1M particles #2058
Comments
Hi @JuhaHuiskonen. No, 200K (I guess you missed the K) is not the maximum limit; I've seen projects using more than that. It is true that above 500K things get very slow and it can become annoying, but this issue must be something else. Could you please post more log lines?
I used just 200 (not 200K) to check that the project itself and the inputs were fine. I can try with more to see where it fails. Here are more log lines from the failed run with 1M particles: 03433: srun
Sorry @JuhaHuiskonen, I now realize I did not read you correctly. I've seen sets of almost 8M elements, but they were clearly impractical. 1M particles should work, but you'll be waiting a long time for some steps to finish or for sets to be visualized. Our users here (I've just asked) say it works but takes "TOO LONG". I'd say that 1M, as things stand now, challenges Scipion and clearly degrades its usability. We have planned to invest time on this for the next release (we have always planned for this)... but I believe this time it has to happen.
@pconesa OK, we will wait for the update and in the meantime split the set into smaller chunks.
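In case it helps others, here is a minimal sketch of one way to split a large (pre-3.1, single-table) RELION STAR file into smaller chunks outside of Scipion; the file names and chunk size are assumptions for illustration only:

```python
#!/usr/bin/env python
# Minimal sketch: split the data rows of a simple (pre-3.1) RELION STAR file
# into fixed-size chunks, copying the header into every chunk.
# The file names and chunk size below are illustrative assumptions.

CHUNK_SIZE = 200000              # particles per chunk (assumption)
INPUT_STAR = "particles.star"    # hypothetical input file

with open(INPUT_STAR) as f:
    lines = f.readlines()

# The header is everything up to and including the last "_rlnXxx #N" label line.
header_end = max(i for i, l in enumerate(lines) if l.lstrip().startswith("_rln")) + 1
header = lines[:header_end]
rows = [l for l in lines[header_end:] if l.strip()]

for n, start in enumerate(range(0, len(rows), CHUNK_SIZE)):
    chunk = rows[start:start + CHUNK_SIZE]
    out_name = "particles_chunk%03d.star" % n
    with open(out_name, "w") as out:
        out.writelines(header)
        out.writelines(chunk)
    print("%s: %d rows" % (out_name, len(chunk)))
```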
From the error log it looks like a bug in the Xmipp metadata class, triggered when executing md.sort(sortByLabel).
I have created an issue in the scipion-em-relion repo; we might consider replacing the use of Xmipp's metadata (we will do it anyway for the new Relion 3.1 star file handling). @pconesa, I don't know if you want to close this one or keep it as a reminder of this problem.
Leave it... I'll address it with the others when improving performance.
I was wondering if there will be a quick fix for md.sort(sortByLabel), or should we wait for the Relion 3.1 protocols?
Hi @JuhaHuiskonen, I'm wondering whether this issue happened in streaming mode or not. Could you try re-launching this protocol with batchSize=20, for example? That way, I think the generated star files are parsed in smaller chunks instead of the whole set at once.
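For what it's worth, the batching idea looks roughly like the sketch below: rows are read and handed out in small batches instead of materializing the whole table in memory. This is only an illustration of the concept; the function name, file path, and parsing details are assumptions, not the actual Scipion/Relion plugin code.

```python
def iter_star_rows(path, batch_size=20):
    """Yield the data rows of a simple single-table STAR file in batches of
    `batch_size` instead of loading the whole table at once (sketch only)."""
    batch = []
    in_loop = False
    with open(path) as f:
        for line in f:
            s = line.strip()
            if s.startswith("loop_"):
                in_loop = True
                continue
            # Skip everything before loop_, blank lines, and label lines.
            if not in_loop or not s or s.startswith("_rln"):
                continue
            batch.append(s.split())
            if len(batch) == batch_size:
                yield batch
                batch = []
    if batch:
        yield batch

# Hypothetical usage:
# for rows in iter_star_rows("extract/particles.star", batch_size=20):
#     handle(rows)  # process 20 rows at a time
```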
This helped us with errors related to large projects and SQL operations: SQLITE_TMPDIR=/path/to/large/scratch/disk/ |
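For context, SQLITE_TMPDIR is a standard SQLite environment variable that redirects its temporary files (used for large sorts and INSERT ... SELECT operations) to a directory of your choice. A minimal sketch of setting it for the Scipion process from Python follows; the scratch path and the launch command are assumptions, and simply exporting the variable in the shell before starting Scipion works just as well:

```python
import os
import subprocess

# SQLITE_TMPDIR must be present in the environment of the Scipion process
# before SQLite creates its first temporary file, so set it at launch time.
# The scratch path and launch command below are illustrative assumptions.
env = dict(os.environ, SQLITE_TMPDIR="/path/to/large/scratch/disk")
subprocess.run(["scipion", "last"], env=env, check=True)
```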
Thanks Juha! I think we will keep this issue open as a reminder to check for more robust solutions.
When extracting over 1M particles using 'relion - particle extracting' I get the following error:
03487: Sqlite query: INSERT INTO MDTable_3( "rlnCoordinateX", "rlnCoordinateY", "rlnImageName", "rlnMicrographName", "rlnMagnification", "rlnVoltage", "rlnDefocusU", "rlnDefocusV", "rlnDefocusAngle", "rlnSphericalAberration", "rlnBfactor", "rlnCtfScalefactor", "rlnPhaseShift", "rlnAmplitudeContrast", "rlnOriginX", "rlnOriginY", "rlnDetectorPixelSize") SELECT "rlnCoordinateX", "rlnCoordinateY", "rlnImageName", "rlnMicrographName", "rlnMagnification", "rlnVoltage", "rlnDefocusU", "rlnDefocusV", "rlnDefocusAngle", "rlnSphericalAberration", "rlnBfactor", "rlnCtfScalefactor", "rlnPhaseShift", "rlnAmplitudeContrast", "rlnOriginX", "rlnOriginY", "rlnDetectorPixelSize" FROM MDTable_2
If I make a subset of just 200 coordinates, the protocol finishes fine. Is there a maximum number of particles that Scipion can handle?