
Fast HTTP/2 client available #3471

Closed · domsolutions opened this issue on Nov 26, 2023 · 5 comments

Labels: awaiting user (waiting for user to respond), feature

@domsolutions

Feature Description

It would be good to have an experimental module built on https://github.com/valyala/fasthttp, as it is so much quicker than net/http. I can see from a previous issue (#1664) that this was looked at but decided against because fasthttp doesn't support HTTP/2. Since then, an HTTP/2 client built on fasthttp has been created at https://github.com/dgrr/http2; I'm not sure whether this was known at the time.

Is this something you'd consider as an experimental module? I'd be happy to work on it if so.
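
For context, here is a minimal sketch of the fasthttp client API the speed claim is based on; it isn't tied to k6 at all, and the URL is just a placeholder. The main source of the speedup is that request and response objects are pooled and reused rather than allocated per request:

```go
package main

import (
	"fmt"
	"log"

	"github.com/valyala/fasthttp"
)

func main() {
	// Request/response objects come from pools and are returned after use,
	// which avoids the per-request allocations net/http makes.
	req := fasthttp.AcquireRequest()
	resp := fasthttp.AcquireResponse()
	defer fasthttp.ReleaseRequest(req)
	defer fasthttp.ReleaseResponse(resp)

	req.SetRequestURI("https://example.com/") // placeholder URL
	req.Header.SetMethod(fasthttp.MethodGet)

	client := &fasthttp.Client{}
	if err := client.Do(req, resp); err != nil {
		log.Fatal(err)
	}
	fmt.Println("status:", resp.StatusCode())
}
```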

Suggested Solution (optional)

No response

Already existing or connected issues / PRs (optional)

No response

@joanlopez
Contributor

Hi @domsolutions,

First of all, thanks for your feature request; we're happy to see people engaging with k6 in the form of feature requests.

However, as you can imagine, developing a new http package requires considerable effort, and we'd like to better understand the reasons for doing so before embarking on such a project.

So, could you please tell us what your specific needs are? From your message, I read that _fasthttp being much quicker than net/http_ is the only reason. But I'm wondering: is speed an actual bottleneck in your case, or is pure speed the most important aspect for you? For instance, taking a brief look at the repository mentioned for HTTP/2 support, it looks like it has serious issues (data races that cause panics, etc.) that have remained unsolved for a long time.

In addition, you can see that some of the other requests raised in the issue you referred to (#1664) are being discussed independently. Indeed, we're actually planning the development of a new http package. You can track its progress on the corresponding epic issue (#2461) and take a look at the details in the design document. You'll notice there that we tend to prioritize user experience and reliability over pure speed, unless speed becomes a true bottleneck.

That being said, and considering that you offered your help, I'd like to leave the door open for you to experiment on your own. If you're interested, you can write your own http package in the form of a k6 extension (a rough skeleton of what that could look like is sketched below) and see how it goes. If you come back with remarkable differences and significant improvements, we might take it into consideration.

Thanks!
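
To make the suggestion above more concrete, here is a rough, untested skeleton of what such an extension could look like, using the plain modules.Register style; the module name, the Client type, and its Get method are placeholders for illustration, not a proposed design:

```go
// Package fasthttpclient sketches a k6 extension that wraps fasthttp.
package fasthttpclient

import (
	"github.com/valyala/fasthttp"

	"go.k6.io/k6/js/modules"
)

func init() {
	// Extensions register under the k6/x/ namespace; scripts can then
	// `import client from "k6/x/fasthttpclient"`.
	modules.Register("k6/x/fasthttpclient", new(Client))
}

// Client is exposed to k6 scripts; its exported methods become callable from JS.
type Client struct{}

// Get performs a GET request with fasthttp and returns the status code.
func (c *Client) Get(url string) (int, error) {
	req := fasthttp.AcquireRequest()
	resp := fasthttp.AcquireResponse()
	defer fasthttp.ReleaseRequest(req)
	defer fasthttp.ReleaseResponse(resp)

	req.SetRequestURI(url)
	if err := fasthttp.Do(req, resp); err != nil {
		return 0, err
	}
	return resp.StatusCode(), nil
}
```

Built with xk6, a script could then call the client's methods from JavaScript, and its results could be compared against k6's built-in http module.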

joanlopez added the awaiting user (waiting for user to respond) label and removed the triage label on Dec 20, 2023
@domsolutions
Author


Hi @joanlopez

I did some testing of the HTTP/2 implementation at https://github.com/dgrr/http2 and there are indeed lots of bugs. I spent a long time checking its behaviour against the HTTP/2 RFC and a lot of things don't line up. I tried fixing a few bugs, but it looks like the whole stack needs rewriting.

The HTTP/1.1 client https://github.com/valyala/fasthttp, on the other hand, seems quite stable and has been around for a long time. I've created an extension, https://github.com/domsolutions/xk6-fasthttp, using it as the client. It only supports HTTP/1.1, but the results are pretty good.

I get around a 70% increase in RPS. I've also introduced file streaming from disk in the extension; as noted in the design doc you provided, one of the known issues is that k6 reads whole files into memory before sending, and this feature addresses that (see the sketch below). My extension doesn't have all the rich observability features, e.g. certain metrics such as bytes sent, due to current limitations of the fasthttp library; I've listed them at https://github.com/domsolutions/xk6-fasthttp?tab=readme-ov-file#not-supported. Still, I think the extension is useful for users who want to run stress tests at a higher RPS.
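
To show what I mean by streaming, here is a minimal sketch of the underlying fasthttp API the extension builds on (not the extension's actual code; the file name and URL are placeholders). SetBodyStream hands the request body to fasthttp as an io.Reader, so the file is read from disk while sending rather than loaded into memory up front:

```go
package main

import (
	"log"
	"os"

	"github.com/valyala/fasthttp"
)

func main() {
	f, err := os.Open("payload.bin") // placeholder file name
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	fi, err := f.Stat()
	if err != nil {
		log.Fatal(err)
	}

	req := fasthttp.AcquireRequest()
	resp := fasthttp.AcquireResponse()
	defer fasthttp.ReleaseRequest(req)
	defer fasthttp.ReleaseResponse(resp)

	req.SetRequestURI("https://example.com/upload") // placeholder URL
	req.Header.SetMethod(fasthttp.MethodPost)
	// Stream the body from the open file instead of reading it all into memory.
	req.SetBodyStream(f, int(fi.Size()))

	if err := fasthttp.Do(req, resp); err != nil {
		log.Fatal(err)
	}
	log.Println("status:", resp.StatusCode())
}
```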

Would you consider listing this on https://k6.io/docs/extensions/get-started/explore/ ?

@joanlopez
Contributor

> Would you consider listing this on https://k6.io/docs/extensions/get-started/explore/?

I might be wrong, but I think that's up to you, and it's as simple as opening a PR in https://github.com/grafana/k6-docs, similar to grafana/k6-docs#1423. Feel free to submit it; we'll be happy to review it!

If you already did, please point me to it, as I haven't seen one after a quick search.

Thanks!

@domsolutions
Author


@joanlopez Ah okay, sorry, I didn't realise. I've created one here: grafana/k6-docs#1466

@joanlopez
Contributor


Great @domsolutions! I'm glad to see your extension is already listed there.

As I said, I think this is a great way to expose that feature to k6 users and see how much engagement it gets. So, thanks for contributing to the ecosystem with your own extension.

For now, I'm going to close this issue, as there are no more pending actions. In the future, if a large number of people are using the extension and asking for this to be part of the core, we might reconsider it, but for now I see the extension as the best place for those who want to experiment with a fasthttp client.

Thanks!
