Implement NodeJS based server replicating osrm-routed #6411
base: master
Conversation
@@ -52,11 +52,11 @@ Feature: Weight tests
| abc |

When I route I should get
| waypoints | route | distances | weights | times | a:distance | a:duration | a:weight | a:speed |
This seems to be caused by the fact that our JSON renderer truncates numbers to a maximum of 10 digits after the decimal point:

size_t decimalpos = std::find(buffer.begin(), buffer.end(), '.') - buffer.begin();
scripts/run_cucumber_tests.sh
# we do not run `osrm-routed-js` tests on v12 since fastify doesn't support it
NODE_VERSION=$(node --version)
NODE_MAJOR_VERSION=$(echo $NODE_VERSION | cut -d. -f1)
if [[ $NODE_MAJOR_VERSION != "v12" ]]; then
I decided not to run mmap tests here until we fully migrate to the JS-based osrm-routed, to save CI time.
@@ -0,0 +1,8 @@
"use strict";
Note for reviewers: all routed-js/*.js files are generated by the TypeScript compiler, so there is not much point in reviewing them.
@@ -40,7 +42,7 @@ module.exports = function () {

 this.OSRM_PORT = process.env.OSRM_PORT && parseInt(process.env.OSRM_PORT) || 5000;
 this.OSRM_IP = process.env.OSRM_IP || '127.0.0.1';
-this.OSRM_CONNECTION_RETRIES = process.env.OSRM_CONNECTION_RETRIES && parseInt(process.env.OSRM_CONNECTION_RETRIES) || 10;
+this.OSRM_CONNECTION_RETRIES = process.env.OSRM_CONNECTION_RETRIES && parseInt(process.env.OSRM_CONNECTION_RETRIES) || 100;
Had to change it since the new server has a higher startup time than our existing osrm-routed.
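For illustration, the kind of startup poll this retry budget feeds could look roughly like the following (a minimal sketch, not the PR's actual cucumber helper; names and timing are assumptions):

```typescript
// Sketch of a readiness poll governed by OSRM_CONNECTION_RETRIES.
// A slower-starting server simply needs a larger retry budget here.
import net from "node:net";

const host = process.env.OSRM_IP || "127.0.0.1";
const port = Number(process.env.OSRM_PORT) || 5000;
const retries = Number(process.env.OSRM_CONNECTION_RETRIES) || 100;

async function waitForServer(): Promise<void> {
  for (let attempt = 0; attempt < retries; attempt++) {
    const connected = await new Promise<boolean>((resolve) => {
      const socket = net.connect(port, host);
      socket.once("connect", () => { socket.end(); resolve(true); });
      socket.once("error", () => resolve(false));
    });
    if (connected) return;
    await new Promise((resolve) => setTimeout(resolve, 100)); // brief pause between attempts
  }
  throw new Error(`server did not accept connections after ${retries} attempts`);
}

waitForServer().catch((err) => { console.error(err); process.exit(1); });
```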
Would be good to also have a comparison of request round-trip times, especially for requests with large numbers of coordinates.
It turned out to be a really good idea. I used Apache Benchmark to do that.

Results for …

Results for …

So it is currently 1470.62 requests/sec for osrm-routed-js and 1913.53 requests/sec for osrm-routed, which is not so bad taking into account that the node bindings have a certain overhead too. But what I noticed is that the NodeJS server doesn't "scale" if we do requests concurrently, so I am probably using the bindings in some wrong way - will investigate.
i.e. 11697.33 req/sec vs 4491.96 req/sec.
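The figures above come from Apache Benchmark; purely as an illustration of what such a concurrent throughput check looks like, a comparable (much cruder) measurement can be scripted directly, assuming a local server and a placeholder route query:

```typescript
// Rough illustration only - not the tool used for the numbers above.
// Fires `total` requests with `concurrency` in flight at a time and
// reports requests per second. Requires Node 18+ for global fetch.
const url =
  "http://127.0.0.1:5000/route/v1/driving/13.388860,52.517037;13.397634,52.529407";

async function loadTest(concurrency: number, total: number): Promise<void> {
  let sent = 0;
  const started = Date.now();
  const worker = async (): Promise<void> => {
    while (sent < total) {
      sent++;
      const res = await fetch(url);
      await res.arrayBuffer(); // drain the body so sockets are reused
    }
  };
  await Promise.all(Array.from({ length: concurrency }, worker));
  const seconds = (Date.now() - started) / 1000;
  console.log(`${(total / seconds).toFixed(2)} req/s at concurrency ${concurrency}`);
}

loadTest(100, 10000).catch(console.error);
```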
I am seeing issues with running "large" requests with ab. But in general, @mjjbell do you think it is a problem if we have a certain slowdown here?
For large requests I used h2load (https://nghttp2.org/documentation/h2load-howto.html) with HTTP 1.1.

osrm-routed:

osrm-routed-js:

Benchmark:
It was working with …
That does seem like quite a significant slowdown in the multithreaded test, especially in the last few percentiles. Would be interesting to know where in the call stack it's happening. Was there a reason to switch between benchmarking tools? Do you get the same results using …
Yes, ab was just hanging on the large request - not sure why, but I found it to be quite a buggy tool… As for the other questions, I will return to them a bit later - I only had ~1h today and only did this :)
Not sure how Fastify works, but might want to double check it's a proper like-for-like comparison. By default (see osrm-backend/src/tools/routed.cpp, line 89 in 41dda32):
I applied the approach explained here and it seems to help (but I am still not sure that I did everything I could to improve things here).
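For context, a multi-worker setup along these lines is one way to let the JS server use all CPU cores, analogous to osrm-routed's thread pool (a minimal sketch assuming Node's built-in cluster module and Fastify, not necessarily the exact approach applied in the PR):

```typescript
// Sketch: fork one Fastify worker per CPU core so requests are served in
// parallel. The route handler is a placeholder, not the PR's actual code.
import cluster from "node:cluster";
import os from "node:os";
import Fastify from "fastify";

if (cluster.isPrimary) {
  for (let i = 0; i < os.cpus().length; i++) {
    cluster.fork(); // each worker loads its own OSRM instance
  }
} else {
  const app = Fastify();
  app.get("/route/v1/:profile/:coordinates", async () => {
    // ...call into the node-osrm bindings here...
    return {};
  });
  app.listen({ port: 5000, host: "0.0.0.0" });
}
```

With the cluster module the workers share the listening socket, so no external load balancer is needed for this kind of comparison.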
As you can see, for whatever reason ~50% of all requests to osrm-routed fail (most likely at the network level, since there are no HTTP codes for them), so it is hard to draw proper conclusions here. I'll try to understand why, but it is one more piece of evidence that the existing server does not hold up well under load...

Simple route request

Benchmark:

osrm-routed:

osrm-routed-js with single worker:

osrm-routed-js with multiple workers:

"Large" request

Benchmark:

osrm-routed:

osrm-routed-js with single worker:

osrm-routed-js with multiple workers:
Well, for whatever reason …

So, repeating the experiment with the simple route request, but with this benchmark:

osrm-routed:

osrm-routed-js with multiple workers:

The results are not different, but they are a bit more reliable, since we do not have failing requests on osrm-routed :)
I commented out the call to OSRM in the JS server, started returning an empty object instead, and got 73919 req/s with multiple workers.

A similar change to the existing osrm-routed (i.e. do not build the route, but return empty JSON instead) gives only 38807 req/s.

I.e. it looks like this performance drop comes from our node bindings. Not sure though if it is possible to do something about it.
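One way to sanity-check that conclusion is to time the bindings call on its own, outside any HTTP server (a sketch; the package name, dataset path, and coordinates are assumptions):

```typescript
// Sketch: measure osrm.route() in isolation to see how much of the per-request
// cost is the binding call (including the JSON -> V8 conversion) itself.
import OSRM from "@project-osrm/osrm"; // assumed package name for the node bindings

const osrm = new OSRM("./data.osrm"); // hypothetical prepared dataset
const coordinates: [number, number][] = [
  [13.38886, 52.517037],
  [13.397634, 52.529407],
];

async function timeRoutes(n: number): Promise<void> {
  const started = process.hrtime.bigint();
  for (let i = 0; i < n; i++) {
    await new Promise<object>((resolve, reject) =>
      osrm.route({ coordinates }, (err: Error | null, result: object) =>
        err ? reject(err) : resolve(result)));
  }
  const ms = Number(process.hrtime.bigint() - started) / 1e6;
  console.log(`${n} route() calls in ${ms.toFixed(1)} ms (~${Math.round((n / ms) * 1000)} calls/s)`);
}

timeRoutes(10000).catch(console.error);
```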
I commented out the code in the node bindings which converts our JSON representation to node objects and started returning an empty object instead (the rest of the code is the same), and the results of osrm-routed-js are now absolutely comparable with osrm-routed.
Just for fun I've tried to substitute the code which converts our JSONs to the V8 representation with a trick where we first render them to a string in C++ and then parse it back with Nan::JSON::Parse, and surprisingly it works faster :)
@mjjbell sorry for pinging, but have you seen the results above? WDYT? On one hand I see a ~25% performance drop, but on the other hand it seems to be caused only by the Node bindings overhead. In the future we can try to optimize it by doing the serialization on the C++ side (without conversion to JS objects).
That is suspiciously similar to the number of requests handled by a single keep-alive connection x 100 clients. Sounds like it's not playing well with the connection being closed.
25% still feels significant IMO. The tradeoff here is that they're getting a production-grade, robust HTTP server. But if you were to put an nginx or envoy proxy in front of … How much work is it to perform the optimization you are suggesting? And does it then defeat the purpose of using the Node API?
The idea is to just use … Now it is better (~10700 req/s vs 9500 req/s before).

Tbh I expected it to be a bit more efficient, so I will look a bit more.
Well, in the experiment above I wrongly used …
More than that, according to this article we can try to avoid the copy here and just use … @mjjbell WDYT?

Anyway, I want to emphasise that this is just a beginning and there are a lot of questions to be solved before we can completely remove the old osrm-routed.
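To illustrate the direction of that optimization from the server side (a sketch; the `format: "buffer"` option and the package name are assumptions, not necessarily the bindings' real API): if the bindings hand back the response already rendered to a JSON Buffer in C++, the server can send those bytes through Fastify untouched, skipping both the V8 object conversion and JSON.stringify.

```typescript
// Sketch only: assumes a hypothetical bindings option that returns the result
// pre-serialized as a Buffer instead of a tree of JS objects.
import Fastify from "fastify";
import OSRM from "@project-osrm/osrm"; // assumed package name

const osrm: any = new OSRM("./data.osrm"); // hypothetical dataset path
const app = Fastify();

app.get("/route/v1/driving/:coordinates", (request, reply) => {
  const coordinates = (request.params as { coordinates: string }).coordinates
    .split(";")
    .map((pair) => pair.split(",").map(Number) as [number, number]);
  // Hypothetical second argument selecting a pre-rendered JSON Buffer.
  osrm.route({ coordinates }, { format: "buffer" }, (err: Error | null, buf: Buffer) => {
    if (err) return reply.code(400).send({ message: err.message });
    reply.type("application/json").send(buf); // no stringify on the JS side
  });
});

app.listen({ port: 5000, host: "127.0.0.1" });
```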
And the final results for the "large" request (50000 requests, to mitigate the osrm-routed problems, with the osrm-routed-js optimizations applied):

osrm-routed:

osrm-routed-js:

I.e. the results are ~equal. Benchmark:
@mjjbell @DennisOSRM can we maybe do a second round of review here?
The last set of numbers looks good, so 👍 to have it on master and iterate on it. Maybe target a v6 release?
This PR seems to be stale. Is it still relevant?
Issue
#3832
Some notes:
- … osrm-routed yet; osrm-routed-js is a separate package which depends on node-osrm.
- … .js files to .gitignore, but I'd propose to do it later, because too many questions arise on how to properly do that.

Tasklist
Requirements / Relations