gifsave: ensure return code is propagated #4344

Open

lovell wants to merge 2 commits into base: 8.16

Conversation

@lovell (Member) commented Jan 8, 2025

Whilst investigating lovell/sharp#4257 I discovered that vips_sink_disc appears to always return zero during vips_foreign_save_cgif_build even when vips_foreign_save_cgif_sink_disc returns a non-zero return code. It feels like a bit of a race condition, and I think this is because we need to have the entire input in memory before the CPU-intensive quantisation takes place.

This PR adds a new aborted property to track any non-zero return code, which then allows for things like an eval callback. The call to vips_image_eval occurs for each frame just after the image is remapped to the new palette, as this is the most CPU-intensive section. I'd be happy to move the vips_image_eval call to a different location if required, or perhaps it might be better to add multiple calls given the CPU-intensive nature of quantisation as a whole.

- adds support for eval callbacks
- adds GIF timeout test (the caller-side pattern it exercises is sketched below)
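
To illustrate the second point: a timeout of this kind leans on the standard libvips progress machinery, where the caller enables progress reporting, connects to the "eval" signal and sets the kill flag once too much time has passed. A minimal caller-side sketch of that pattern, which the new vips_image_eval call makes effective during GIF encoding (the input file, the 5-second threshold and the choice of the C API are illustrative only, not part of this PR):

    #include <vips/vips.h>

    /* Ask libvips to stop once the operation has been running for more
     * than 5 seconds. The "eval" signal is only delivered while the save
     * emits progress updates, which is what the vips_image_eval call in
     * this PR provides for the quantise/remap stage of cgif.
     */
    static void
    eval_cb(VipsImage *image, VipsProgress *progress, void *data)
    {
        if (progress->run > 5)
            vips_image_set_kill(image, TRUE);
    }

    int
    main(int argc, char **argv)
    {
        VipsImage *in;

        if (VIPS_INIT(argv[0]))
            vips_error_exit(NULL);

        /* n=-1 loads every frame of an animated input. */
        if (!(in = vips_image_new_from_file("animation.webp",
                  "n", -1, NULL)))
            vips_error_exit(NULL);

        vips_image_set_progress(in, TRUE);
        g_signal_connect(in, "eval", G_CALLBACK(eval_cb), NULL);

        /* Once the kill flag is set mid-save, gifsave should fail with a
         * non-zero return code instead of silently returning zero.
         */
        if (vips_gifsave(in, "out.gif", NULL))
            vips_error_exit(NULL);

        g_object_unref(in);
        vips_shutdown();

        return 0;
    }
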
@jcupitt (Member) commented Jan 11, 2025

I did a little digging. sink_disc picks up error returns here:

https://github.com/libvips/libvips/blob/master/libvips/iofuncs/sinkdisc.c#L174

And handles them here:

https://github.com/libvips/libvips/blob/master/libvips/iofuncs/sinkdisc.c#L264

But it does this lazily, since the write is actually in a background thread. It can't detect a write error until the background thread that was doing the write terminates.

The wait for the exit of the final write happens here:

https://github.com/libvips/libvips/blob/master/libvips/iofuncs/sinkdisc.c#L532

... but we've already set the return code on line 518, so a failure in the final write is never returned.
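
For reference, the pre-fix tail of vips_sink_disc() behaves roughly like this (a simplified sketch reconstructed from the snippet below, not verbatim source):

    /* result is already fixed at this point (line 518) ...
     */
    if (!result)
        vips_semaphore_down(&write.buf->done);  /* wait for the final write */

    vips_image_posteval(im);

    /* ... and write.buf->write_errno is never consulted, so an error in
     * that final write is dropped.
     */
    write_free(&write);

    vips_image_minimise_all(im);

    return result;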

A simple fix might be to change the end of vips_sink_disc() to be:

    /* Just before allocate signalled stop, it set write.buf writing. We
     * need to wait for this write to finish.
     *
     * We can't just free the buffers (which will wait for the bg threads
     * to finish), since the bg thread might see the kill before it gets a
     * chance to write.
     *
     * If the pool exited with an error, write.buf might not have been
     * started (if the allocate failed), and in any case, we don't care if
     * the final write went through or not.
     */
    if (!result)
        vips_semaphore_down(&write.buf->done);

    vips_image_posteval(im);

    /* The final write might have failed, pick up any error code.
     */
    result |= write.buf->write_errno;

    write_free(&write);

    vips_image_minimise_all(im);

    return result;
}

i.e. OR in any error code from the final write thread. Though maybe we should also set an error message? Flush does this:

        /* Previous write succeeded?
         */ 
        if (write->buf_back->write_errno) {
            vips_error_system(write->buf_back->write_errno,
                "wbuffer_write", "%s", _("write failed"));
            return -1;
        }   

So we should do the same for the final write, or maybe remove the message from flush.
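
Mirroring that for the final write could look something like this (a sketch only; the error domain string is illustrative):

    /* The final write might have failed, pick up any error code and
     * report it, in the same way the flush path does.
     */
    if (write.buf->write_errno) {
        vips_error_system(write.buf->write_errno,
            "vips_sink_disc", "%s", _("write failed"));
        result = -1;
    }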

    @@ -589,6 +594,12 @@ vips_foreign_save_cgif_write_frame(VipsForeignSaveCgif *cgif)

        VIPS_FREEF(vips__quantise_image_destroy, image);

        /* Remapping is relatively slow, trigger eval callbacks.
         */
        vips_image_eval(cgif->in, VIPS_IMAGE_N_PELS(cgif->in));

A project member commented on this hunk:

I think you can write vips_image_eval(cgif->in, n_pels); here
