
[WIP] Use mio instead of tiny-http #144

Open
wants to merge 51 commits into master
Conversation

tomaka (Owner) commented Aug 8, 2017

cc #27

tomaka mentioned this pull request Aug 10, 2017
cardoe (Contributor) commented Oct 27, 2017

What's the status of this work? It would be good to combine it with #154 as a new major version (realistically, for a major version bump I'd drop rustc-serialize entirely).

tomaka (Owner, Author) commented Oct 27, 2017

The code here is working, but it isn't tested yet.
I was more or less hoping that Rust would support generators soon-ish (not async, just coroutines for parsing), but it looks like we're still a long way from them.
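Without generators, the kind of resumable parsing mentioned here has to be written as an explicit state machine that can be suspended and resumed whenever more bytes arrive from the socket. A minimal sketch, assuming a hypothetical request-line parser (the names and structure are illustrative, not taken from this PR's code):

```rust
// Sketch of an incremental HTTP request-line parser as a hand-written
// state machine. A generator/coroutine would let this be written as
// straight-line code; without one, the state must be stored explicitly.

#[derive(Debug, PartialEq)]
enum State {
    Method,
    Path,
    Version,
    Done,
}

struct RequestLineParser {
    state: State,
    method: String,
    path: String,
    version: String,
}

impl RequestLineParser {
    fn new() -> Self {
        RequestLineParser {
            state: State::Method,
            method: String::new(),
            path: String::new(),
            version: String::new(),
        }
    }

    /// Feed one chunk of bytes; returns true once the request line is complete.
    /// Can be called repeatedly as data trickles in from a non-blocking socket.
    fn feed(&mut self, chunk: &[u8]) -> bool {
        for &b in chunk {
            match self.state {
                State::Method => {
                    if b == b' ' { self.state = State::Path; }
                    else { self.method.push(b as char); }
                }
                State::Path => {
                    if b == b' ' { self.state = State::Version; }
                    else { self.path.push(b as char); }
                }
                State::Version => {
                    if b == b'\n' { self.state = State::Done; }
                    else if b != b'\r' { self.version.push(b as char); }
                }
                State::Done => break,
            }
        }
        self.state == State::Done
    }
}

fn main() {
    let mut p = RequestLineParser::new();
    // Bytes may arrive split across reads; the parser resumes where it left off.
    assert!(!p.feed(b"GET /ind"));
    assert!(p.feed(b"ex.html HTTP/1.1\r\n"));
    assert_eq!(p.method, "GET");
    assert_eq!(p.path, "/index.html");
    assert_eq!(p.version, "HTTP/1.1");
    println!("parsed: {} {} {}", p.method, p.path, p.version);
}
```

The pain point is that every partially-consumed token has to live in the struct rather than in local variables, which is exactly the boilerplate a coroutine would eliminate.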

mentalbrew commented

What would the process be to get this PR and the Serde PR merged? These two PRs would be a great boost for the project, though I'm sure other concerns come into play here.

cardoe (Contributor) commented Oct 31, 2017 via email

tomaka (Owner, Author) commented Oct 31, 2017

So the problem with this PR right now is that it breaks websockets.
Websocket support is tricky because it touches on the topic of asynchronous I/O in Rust.

Other than that, I think there are some minor problems, such as HTTP error codes not being returned in some situations (e.g. when the request line is too long), and performance could be improved.
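For the request-line-too-long case, HTTP/1.1 (RFC 7230, section 3.1.1) says a server that refuses an overlong request-target should respond with 414 (URI Too Long) rather than dropping the request silently. A minimal sketch of such a guard, with an illustrative limit value not taken from this PR:

```rust
// Hypothetical guard for the missing-error-code case: cap the request-line
// length and answer 414 instead of silently dropping the request.
// The 8 KiB limit is a common convention, not a value from this PR.
const MAX_REQUEST_LINE: usize = 8 * 1024;

/// Returns Ok(()) if the request line is acceptable, or the HTTP status
/// code the server should answer with.
fn check_request_line(line: &[u8]) -> Result<(), u16> {
    if line.len() > MAX_REQUEST_LINE {
        // RFC 7230 recommends 414 (URI Too Long) for this situation.
        Err(414)
    } else {
        Ok(())
    }
}

fn main() {
    assert!(check_request_line(b"GET /index.html HTTP/1.1").is_ok());
    let long_line = vec![b'a'; 10_000];
    assert_eq!(check_request_line(&long_line), Err(414));
    println!("ok");
}
```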

jaemk (Contributor) commented Nov 14, 2017

The performance gains from this PR are really impressive! Running the hello-world example goes from 19k req/s to 83k req/s (with wrk -t4 -c100 -d10s).

I did notice two issues though:

  • If a handler panics, the connection hangs. Once the client kills the connection, CPU usage explodes across all server threads.
  • Performance is great for the first wrk -t4 -c100 -d10s run, but each subsequent run gets worse: (req/s) 83k -> 41k -> 35k -> 32k -> 27k -> 23k -> 22k -> 19k ... eventually dropping to zero. The server has to be restarted to restore performance.
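The first issue above is the classic panic-in-handler problem: if the server invokes handlers directly, an unwinding panic can leave the connection in limbo. One common mitigation, sketched here under the assumption of a hypothetical `run_handler` wrapper (not this PR's actual API), is to run each handler behind `std::panic::catch_unwind`, turn a panic into a 500 response, and close the connection instead of reusing it:

```rust
use std::panic::{self, AssertUnwindSafe};

// Hypothetical wrapper: run a handler and convert a panic into a
// 500 response so the client is not left with a hanging connection.
// The (status, body) tuple stands in for a real response type.
fn run_handler<F>(handler: F) -> (u16, String)
where
    F: FnOnce() -> (u16, String),
{
    match panic::catch_unwind(AssertUnwindSafe(handler)) {
        Ok(response) => response,
        // On panic, answer 500; the server should also close (not
        // keep-alive) this connection, since its state is unknown.
        Err(_) => (500, "Internal Server Error".to_string()),
    }
}

fn main() {
    // A well-behaved handler passes through unchanged.
    let (status, body) = run_handler(|| (200, "hello".to_string()));
    assert_eq!(status, 200);
    assert_eq!(body, "hello");

    // A panicking handler becomes a 500 instead of hanging the connection.
    let (status, _) = run_handler(|| -> (u16, String) { panic!("boom") });
    assert_eq!(status, 500);
    println!("ok");
}
```

This doesn't explain the second issue (the run-over-run slowdown, which smells like connections or event-loop registrations leaking), but it would at least keep a panicking handler from taking the event loop down with it.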

tomaka (Owner, Author) commented Nov 14, 2017

Thanks for the investigation! That's unfortunate.
