Issue #10513 - fix multipart lockup with HTTP/2 #10554

Merged
18 commits merged into jetty-12.0.x on Sep 28, 2023

Conversation

lachlan-roberts
Contributor

Issue #10513

  • add Http2MultiPartServletTest to replicate the issue
  • proposed fix inside HTTP2Session.StreamData.

The PR is not complete yet, but it is worth getting some early reviews.

todo:

  • further improve the testing, and also cover HTTP/3, etc.
  • investigate why chunks aren't released when the part is being written to a file
  • the form configuration from the ServletContextHandler is overriding the configuration from the MultipartConfigElement

@gregw gregw removed their request for review September 20, 2023 06:26
@gregw gregw self-assigned this Sep 20, 2023
@gregw gregw requested a review from lorban September 20, 2023 06:26
@lorban
Contributor

lorban commented Sep 20, 2023

I saw two failing tests in a PR build because of spurious leaks. Could you please disable leak tracking for them? You just have to annotate:

org.eclipse.jetty.test.client.transport.HttpClientStreamTest.testInputStreamContentProviderThrowingWhileReading with @Tag("DisableLeakTracking:server:H3")

and

org.eclipse.jetty.test.client.transport.HttpClientTimeoutTest.testVeryShortTimeout with

@Tag("DisableLeakTracking:client:HTTP")
@Tag("DisableLeakTracking:client:HTTPS")
@Tag("DisableLeakTracking:client:UNIX_DOMAIN")

}
return result;

// If all the data has been consumes and at least one release call is made, that indicates that the content

Typo: If all the data has been consumed

private void start(HttpServlet servlet, MultipartConfigElement config) throws Exception
{
config = config == null ? new MultipartConfigElement(tmpDirString, MAX_FILE_SIZE, -1, 0) : config;
server = new Server(null, null, null);

You should use a leak-tracking buffer pool. See HttpClientProxyProtocolTest as an example of how to use ArrayByteBufferPool.Tracking.
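
For illustration, a minimal sketch of that setup (assuming ArrayByteBufferPool.Tracking and its getLeaks()/dumpLeaks() accessors; the assertion shown is just one way to fail the test on leaks):

ArrayByteBufferPool.Tracking serverBufferPool = new ArrayByteBufferPool.Tracking();
server = new Server(null, null, serverBufferPool);
// ... run the test, stop the client and server, then assert that nothing leaked:
// assertThat("Leaks: " + serverBufferPool.dumpLeaks(), serverBufferPool.getLeaks().size(), is(0));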


The other test already uses that and I'm deleting this test

// Note that a release with a non-empty buffer may indicate the release of just a part of the buffer, and thus
// the data has not been consumed. An optimization might be to consume the data up to the current position on each
// release.
if (BufferUtil.isEmpty(data.frame().getByteBuffer()) && consumed.compareAndSet(false, true))

I'm not so sure about this one. How about the following oddity?

ByteBuffer duplicate = buffer.slice();
buffer.get(...); // read everything
duplicate.get(...); // try to read again

Wouldn't that be a nasty surprise?


After discussing this issue with @sbordet, we agreed that the right place to open the flow control window is in HTTP2Stream.readData(), i.e., right before handing the buffer to the application code, to decouple the window control from the re-pooling of the buffer. This would make the behavior consistent with how HTTP/1 works, where the TCP window is controlled by when you read from the socket.


@lorban I don't understand the surprise? If I make a duplicate, then of course I can read the data again from the buffer.

@sbordet @lorban My gut reaction to the idea of opening the window in readData is NNNNNNNNOOOOOOOoooooo.... but I can't think of a rational reason why not.
That would make reading be about consumption, and retain/release be about buffering. Good separation of concerns.

Let me try to implement....
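
For context, a purely hypothetical sketch of that idea (this is not the actual Jetty code; pollQueuedData() and notifyDataConsumed() are made-up names):

// Open the flow control window when the application reads the data,
// instead of when the buffer is released back to the pool.
public Stream.Data readData()
{
    Stream.Data data = pollQueuedData(); // hypothetical: dequeue a buffered DATA frame
    if (data != null)
    {
        int length = data.frame().getByteBuffer().remaining();
        // Hypothetical call: send session/stream WINDOW_UPDATEs now that the
        // application takes ownership of the bytes; buffer release stays separate.
        notifyDataConsumed(length);
    }
    return data;
}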

@Override
public boolean handle(Request request, org.eclipse.jetty.server.Response response, Callback callback)
{
Runnable retainer = new Runnable()

You may find o.e.j.util.CompletableTask better to use than a bare Runnable.
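
For example, a rough sketch of that suggestion (assuming CompletableTask is a CompletableFuture that also implements Runnable; the body is placeholder logic):

CompletableTask<Void> retainer = new CompletableTask<>()
{
    @Override
    public void run()
    {
        try
        {
            // ... retain the chunks here (placeholder) ...
            complete(null); // let callers await or chain on completion
        }
        catch (Throwable x)
        {
            completeExceptionally(x);
        }
    }
};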

ByteBufferRequestContent content = new ByteBufferRequestContent(ByteBuffer.wrap(data));
client.newRequest(newURI(transport))
.body(content)
.send(new BufferingResponseListener(data.length)

You may use CompletableResponseListener here, if it's easier.
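
For reference, a rough sketch with CompletableResponseListener (assumed here to expose the response as a CompletableFuture<ContentResponse>; the timeout and assertion are illustrative):

ByteBufferRequestContent content = new ByteBufferRequestContent(ByteBuffer.wrap(data));
ContentResponse response = new CompletableResponseListener(
        client.newRequest(newURI(transport)).body(content))
    .send()
    .get(15, TimeUnit.SECONDS);
assertEquals(HttpStatus.OK_200, response.getStatus());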

Comment on lines 1241 to 1245
if (Content.Chunk.isFailure(chunk))
{
callback.failed(chunk.getFailure());
return;
}

I'd move this just after the null check, as per usual idiom.
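
For reference, a sketch of that usual idiom (the demand callback shown, which re-invokes the handling logic, is illustrative):

while (true)
{
    Content.Chunk chunk = request.read();
    if (chunk == null)
    {
        // No content available yet: demand more and resume when it arrives.
        request.demand(() -> handle(request, response, callback));
        return;
    }
    if (Content.Chunk.isFailure(chunk))
    {
        callback.failed(chunk.getFailure());
        return;
    }
    // ... consume chunk.getByteBuffer() here ...
    boolean last = chunk.isLast();
    chunk.release();
    if (last)
    {
        callback.succeeded();
        return;
    }
}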

if (chunk.hasRemaining())
{
ByteBuffer byteBuffer = chunk.getByteBuffer();
if (chunk.canRetain())

It's a test, surely we can retain?


I don't know? Do all the transports provide retainable chunks?
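
For reference, a defensive sketch that works either way (retainedChunks is an illustrative list of chunks to release once the content has been used):

ByteBuffer byteBuffer = chunk.getByteBuffer();
if (chunk.canRetain())
{
    // The transport lets us keep the buffer: retain the chunk and release it later.
    chunk.retain();
    retainedChunks.add(chunk);
}
else
{
    // Otherwise copy the bytes, because the chunk's buffer may be recycled after release().
    byteBuffer = BufferUtil.copy(byteBuffer);
}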

import static org.hamcrest.Matchers.containsString;
import static org.hamcrest.Matchers.equalTo;

public class Http2MultiPartServletTest

Move this to module jetty-ee10-test-client-transport, so that you don't pollute ee10-servlet with http2 deps.


I think we can delete this test now, as we have a simpler reproducer that works for all transports in HttpClientStreamTest

@sbordet sbordet requested review from gregw, sbordet and lorban September 27, 2023 17:55
@lorban lorban left a comment


LGTM overall, but the tests could be slightly simplified.

Signed-off-by: Simone Bordet <[email protected]>
@sbordet sbordet requested a review from lorban September 28, 2023 08:37
@sbordet sbordet merged commit aa880cf into jetty-12.0.x Sep 28, 2023
@sbordet sbordet deleted the jetty-12.0.x-10513-multipart-http2 branch September 28, 2023 09:47
sbordet added a commit that referenced this pull request Oct 5, 2023
…onsumesDataOfInFlightStream()`, that became flaky after #10554.

Signed-off-by: Simone Bordet <[email protected]>
sbordet added a commit that referenced this pull request Oct 5, 2023
sbordet added a commit that referenced this pull request Oct 5, 2023
lorban added a commit that referenced this pull request Oct 12, 2023
sbordet pushed a commit that referenced this pull request Oct 13, 2023
* Fixed more tests that became flaky after #10554.

Signed-off-by: Ludovic Orban <[email protected]>

Successfully merging this pull request may close these issues.

Lockup processing POST request body with Jetty 12.0.1 using http/2
4 participants