Clean up option fails when using AWS Batch (and likely other clouds) #3645

Open

pditommaso opened this issue Feb 14, 2023 · 5 comments · May be fixed by #3849
Comments

@pditommaso
Member

Bug report

Nextflow fails to perform the pipeline work directory cleanup when using AWS Batch and other cloud providers.
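For reference, the option in question is the top-level cleanup setting, which tells Nextflow to delete the work directory files after a successful run; a minimal example (using the S3 work dir from the log below):

    // nextflow.config
    cleanup = true
    workDir = 's3://nextflow-ci/work'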

The following error is reported:

Feb-14 18:55:04.981 [main] WARN  nextflow.file.FileHelper - Unable to start plugin 'nf-amazon' required by s3://nextflow-ci/work/79/17395af5457010376510e5808a2881
java.lang.NullPointerException: Cannot invoke "nextflow.plugin.CustomPluginManager.getPlugin(String)" because "this.manager" is null
	at nextflow.plugin.PluginsFacade.isStarted(PluginsFacade.groovy:338)
	at nextflow.plugin.PluginsFacade.startIfMissing(PluginsFacade.groovy:432)
	at nextflow.plugin.Plugins.startIfMissing(Plugins.groovy:91)
	at nextflow.file.FileHelper.autoStartMissingPlugin(FileHelper.groovy:288)
	at nextflow.file.FileHelper.asPath(FileHelper.groovy:268)

The complete log file is attached:

nextflow.log

@BrunoGrandePhD
Contributor

We're running into high S3 storage costs due to our inability to automatically delete files when a workflow completes successfully. It also introduces overhead for Tower admins, because we need to manually delete files when buckets get too large.

Could we see this feature bumped in priority, ideally with a timeline? Thanks!

@bentsherman
Member

The problem is pretty straightforward. In ScriptRunner:

    protected shutdown() {
        session.destroy()    // shuts down the publish dir executor and the plugin manager
        session.cleanup()    // deletes the work directory -- fails because the plugins are already stopped
        Global.cleanUp()
        log.debug "> Execution complete -- Goodbye"
    }

Session destroy() is called before cleanup(), but destroy() shuts down the plugin manager, which breaks the cleanup.

On the other hand, destroy() also shuts down the publish dir executor, which includes waiting for any in-flight publish tasks to finish. So we can't simply swap the two calls, because then the task directories might be deleted before they are done being published.

What I propose is to put the cleanup() code in destroy() so that it is called after the publishing but before the plugin manager shutdown. I will draft a PR.
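A minimal sketch of that ordering, using illustrative helper names rather than the actual Session API:

    // Sketch only: the proposed ordering inside Session.destroy()
    class Session {
        void destroy() {
            awaitPublishTasks()   // hypothetical: shut down the publish dir executor and wait for in-flight copies
            cleanup()             // remove the work directory while cloud plugins (e.g. nf-amazon) are still loaded
            shutdownPlugins()     // hypothetical: stop the plugin manager only after cleanup has run
        }

        private void awaitPublishTasks() { /* wait for ongoing publishDir tasks */ }
        private void cleanup()           { /* delete task work directories */ }
        private void shutdownPlugins()   { /* tear down the plugin manager */ }
    }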

@BrunoGrandePhD
Contributor

@bentsherman Thanks for tackling this!

@stale

stale bot commented Sep 17, 2023

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@stale stale bot added the stale label Sep 17, 2023
@bentsherman bentsherman removed the stale label Sep 17, 2023
@bentsherman bentsherman linked a pull request Oct 5, 2023 that will close this issue

stale bot commented Mar 17, 2024

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
