Issue 272 #282
Conversation
Addresses #272
@grst I checked the paths in the samplesheets after running bases2fastq, bcl2fastq, bclconvert, fqtk, sgdemux and mkfastq. I tested them specifying both a local and a remote outdir, and they now display the right paths. Note: we discussed adding the samplesheets to the nf-tests, but due to the way they are implemented, I don't see how this could be done. When we generate the snaps we do it locally, so the fastq paths contain whichever path we run the tests from; when the tests are run by GitHub, the paths are different, so the samplesheets can't match.
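To illustrate the mismatch with a hedged example (the paths and the test snippet below are hypothetical, not taken from the pipeline): a samplesheet row generated locally might read sample1,/home/alice/results/sample1_R1.fastq.gz,... while the same row produced on a GitHub Actions runner starts with /home/runner/work/..., so a plain nf-test snapshot of the file can never match:

then {
    // snapshotting the samplesheet pins its contents, including the absolute fastq paths,
    // which differ between the machine that created the snap and the CI runner
    assert snapshot(process.out.samplesheet).match()
}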
Changes from 6cc9b72 to 833edde.
I think that's fair. I could even think of some cases where you'd want to use relative paths. If someone requires an absolute path, they shall specify an absolute path as the outdir.
I was thinking of actually running the pipelines on the samplesheets that are generated. In that case the paths should match, but maybe it's too much overhead. What we actually want is to test whether the generated paths exist. Would something along the lines of

then {
    def csv_file = new File(process.out.samplesheet.get(0))
    csv_file.eachLine { line, index ->
        if (index == 1) return  // skip the header row
        def columns = line.split(",")
        assert path(columns[1]).exists()
        assert path(columns[2]).exists()
    }
}

work? This is untested, it's more like pseudo-code to convey my idea.
Ah yes, that would be a good way of testing them. I'll take care of it.
@grst I was able to add a function to the nf-tests that validates the downstream samplesheet, and in doing so I found that the samplesheet generator script would not add the right number of commas when some rows had empty values, causing the samplesheet validator to fail. So I updated the samplesheet builder module as well.
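A minimal sketch of what such a validation helper could look like, written as a standalone Groovy function; the name, the assumed header row, and the fastq column indices are illustrative rather than the actual code added in this PR:

def validateDownstreamSamplesheet(String samplesheet) {
    def lines = new File(samplesheet).readLines()
    def nColumns = lines.head().split(",", -1).size()
    lines.tail().each { line ->
        def columns = line.split(",", -1)       // -1 keeps trailing empty fields
        assert columns.size() == nColumns       // every row carries as many separators as the header
        [1, 2].each { i ->                      // assumed fastq_1 / fastq_2 columns
            if (columns[i]) {
                assert new File(columns[i]).exists()
            }
        }
    }
}

The column-count assertion is exactly where the missing-comma bug would surface: a builder that skips empty values produces short rows, so padding every row to the full header width is what the updated samplesheet builder has to guarantee.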
Very neat implementation! Just two minor things...
// Clone the first item in meta for output
meta_clone = meta.first().clone()
meta_clone.remove('publish_dir') // Removing the publish_dir just in case, although the output channel is not used by any other process
This doesn't seem to be used anywhere?
This process is the last one that uses the samplesheet; we output the samplesheet and the meta regardless, in case we want to use them for anything in the future. I added the publish_dir key to the meta map for easier handling of the paths, so I'm removing it here to revert the meta back to its original state.
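For context, a hedged sketch of the pattern being described; the line adding the key is hypothetical, and only the removal mirrors the diff above:

// upstream (hypothetical): each per-sample meta gets a helper key so later steps can build output paths
def meta_with_dir = sample_meta + [publish_dir: "${params.outdir}/${sample_meta.id}"]

// in this module the metas arrive grouped as a list; the first one is cloned so the original
// map is not mutated, and the helper key is dropped again before the meta is emitted
def meta_clone = meta.first().clone()
meta_clone.remove('publish_dir')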
Co-authored-by: Gregor Sturm <[email protected]>
PR checklist
- Make sure your code lints (nf-core lint).
- Ensure the test suite passes (nextflow run . -profile test,docker --outdir <OUTDIR>).
- Check for unexpected warnings in debug mode (nextflow run . -profile debug,test,docker --outdir <OUTDIR>).
- docs/usage.md is updated.
- docs/output.md is updated.
- CHANGELOG.md is updated.
- README.md is updated (including new tool citations and authors/contributors).