When using `datasets` to download a large collection of genomes, many of the associated `.fna` files are silently truncated. I was fortunate in that: 1) a few of the files were truncated so that the final line was a FASTA header, and 2) one of the tools in my pipeline complained about this. Otherwise, the pipeline would have finished with no obvious failures.
I'm using datasets 16.3.0 on Ubuntu 20.04. Archives are extracted with unzip 6.0. My network connection is fine and I have plenty of disk space.
Running `datasets download genome taxon 473814 --include genome,seq-report` reliably reproduces the issue, albeit with varying numbers of truncations. I get about 80-100 truncated `.fna` files out of the 1790 available.
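For anyone wanting to spot the symptom I happened to catch, here is a minimal sketch that flags files whose final line is a FASTA header. It assumes the default `ncbi_dataset/data/<accession>/*.fna` layout, and note it only catches this one symptom: a file cut mid-sequence would pass.

```python
import glob

def last_line_is_header(path):
    """Return True if the final non-blank line of a FASTA file is a
    '>' header, i.e. a header with no sequence after it (one visible
    symptom of truncation)."""
    last = ""
    with open(path) as fh:
        for line in fh:
            if line.strip():
                last = line
    return last.startswith(">")

# Assumed layout of an extracted datasets archive.
for fna in glob.glob("ncbi_dataset/data/*/*.fna"):
    if last_line_is_header(fna):
        print("possibly truncated:", fna)
```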
Using the dehydrate/rehydrate workflow so far has avoided the issue.
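For reference, the dehydrated workflow I used looks roughly like this (flag names as I understand them from the `datasets` documentation; check `datasets download genome taxon --help` on your version):

```
datasets download genome taxon 473814 --include genome,seq-report --dehydrated
unzip ncbi_dataset.zip -d taxon_473814
datasets rehydrate --directory taxon_473814
```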
Dehydrate is clearly signposted in the documentation, but it reads as an 'optional, if you have issues with large downloads' style of approach. Further, the issues that come to mind are things like 'failed downloads, disk/network capacity' (i.e. things that would be immediately obvious), not 'seemingly correct downloads that may silently produce erroneous results'.
Right now, the 'obvious, simple' first approach that I, and probably many other users, try when testing out `datasets` is a footgun. Moreover, it's a footgun where users may not even know they've been shot.
If I could suggest a soft fix: the documentation should make it very clear that if you download any large set of genomes without dehydration (where 'large' is probably smaller than you think), you will get truncated FNAs that may very well make it through your pipeline. I'd also suggest that `datasets` should, at the very least, strongly warn the user when run without `--dehydrated` for anything above some conservative number of genomes.
I know from other issues (#302) that adding MD5 checksums as part of the process is being considered. I strongly support that notion.
In the meantime, I've written a Python tool that checks each FNA against the expected total sequence length and number of sequences listed in the assembly data report (which appears to contain valid data in all my tests). This works for both directly downloaded and rehydrated datasets.
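The core of that cross-check can be sketched as follows. This is a simplified illustration, not my actual tool, and it assumes the `ncbi_dataset/data/<accession>/*.fna` layout plus `accession` and `assemblyStats.totalSequenceLength` fields in `assembly_data_report.jsonl`; verify those field names against your own report before relying on it.

```python
import glob
import json
import os

def fasta_stats(path):
    """Return (number of sequences, total residues) for a FASTA file."""
    n_seqs = total = 0
    with open(path) as fh:
        for line in fh:
            if line.startswith(">"):
                n_seqs += 1
            else:
                total += len(line.strip())
    return n_seqs, total

def check_package(root="ncbi_dataset/data"):
    """Compare each assembly's summed FASTA length against the value
    reported in assembly_data_report.jsonl (field names assumed).
    Returns a list of (accession, expected_bp, found_bp) mismatches."""
    mismatches = []
    with open(os.path.join(root, "assembly_data_report.jsonl")) as fh:
        for line in fh:
            rec = json.loads(line)
            acc = rec["accession"]
            expected = int(rec["assemblyStats"]["totalSequenceLength"])
            found = sum(fasta_stats(f)[1]
                        for f in glob.glob(os.path.join(root, acc, "*.fna")))
            if found != expected:
                mismatches.append((acc, expected, found))
    return mismatches
```

Comparing the sequence count from the report (if present in your version of the report) would catch additional cases, such as a file truncated exactly at a sequence boundary.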
I see that datasets 16.8.1 is the most recent version and I am unsure if the bug persists (though it feels like it's at the server end). I would update and try to reproduce, but my machine is currently unavailable for that.
FWIW, I think datasets is otherwise a wonderful advance and a nice quality of life improvement.
Thanks for the suggestions, and sorry about the problem with truncated files and any reliability concerns it has created. We have plans in the coming weeks to investigate and address this problem, including the addition of MD5 checksums to ensure data integrity. We will keep this issue open and provide updates until it is resolved.
Nuala
Nuala A. O'Leary, PhD
Product Owner, NCBI Datasets
National Center for Biotechnology Information, NLM, NIH, DHHS