
Performance Enhancement: Change deserialization method to not block event loop #369

Open · wants to merge 6 commits into master

Conversation

@stweedie commented Dec 3, 2015

Currently, the deserialization in collection.find operates synchronously. Large data sets and/or models with many properties will block the event loop.

This proposed change leverages Node's timing API. Instead of completing all deserialization in a single tick, it deserializes a fixed number of documents per tick and invokes the callback once every document has been deserialized.

Here is the library I am using to accomplish this: https://github.com/stweedie/node-background-compute
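
For reference, the core idea reduces to something like the following sketch (this is not the actual node-background-compute implementation; mapInChunks, iteratee, and chunkSize are illustrative names):

function mapInChunks(items, iteratee, done, chunkSize) {
  chunkSize = chunkSize || 16;
  var results = new Array(items.length);
  var index = 0;

  function processChunk() {
    // process one fixed-size chunk synchronously
    var end = Math.min(index + chunkSize, items.length);
    for (; index < end; index++) {
      results[index] = iteratee(items[index]);
    }
    if (index < items.length) {
      setImmediate(processChunk); // yield to the event loop between chunks
    } else {
      done(null, results); // invoke the callback once all items are processed
    }
  }

  processChunk();
}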

@luislobo (Contributor)

@particlebanana can you check this? We are getting a huge performance gain using this method.

@particlebanana (Contributor)

Async is already a dependency. Can we use async.map here instead of adding another dependency?

@stweedie (Author)

Per @luislobo's suggestion, I have added a benchmark to the library that shows the difference between the native array.map, async's map, and the background-compute module's map.

Essentially, the async library does not yield to the event loop until the entire task is complete, so deserializing a large query blocks the application until it finishes. The added dependency yields to the event loop periodically, so the application can keep serving other work while the deserialization progresses.

The added example shows that both the native array.map and async.map are blocking. I demonstrated this with a simple setInterval that should fire every 100 ms; if the event loop is blocked, it cannot run until the loop is free again.

compute/example$ node map.js
--------------------------------------------------------------------------------
native array.map operation starting
native array.map operation ended after 858ms, yielded 0 times.
EVENT LOOP BLOCKED! timer elapsed in 1034ms, expected 100ms
--------------------------------------------------------------------------------
async.map starting
async.map operation ended after 1048ms, yielded 0 times.
EVENT LOOP BLOCKED! timer elapsed in 1048ms, expected 100ms
--------------------------------------------------------------------------------
parallel.map promise operation starting
parallel.map promise operation ended after 844ms, yielded 8 times.
--------------------------------------------------------------------------------
parallel.map callback operation starting
parallel.map callback operation ended after 878ms, yielded 9 times.
--------------------------------------------------------------------------------
parallel.map event operation starting
parallel.map event operation ended after 986ms, yielded 9 times.
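
The blocked-loop detector in that example boils down to something like this sketch (assuming a 100 ms interval and Date.now() timing; not the exact benchmark code):

var INTERVAL = 100;
var last = Date.now();

setInterval(function () {
  var elapsed = Date.now() - last;
  // if the timer fired well past its schedule, something held the event loop
  if (elapsed > INTERVAL * 2) {
    console.log('EVENT LOOP BLOCKED! timer elapsed in ' + elapsed +
                'ms, expected ' + INTERVAL + 'ms');
  }
  last = Date.now();
}, INTERVAL);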

@luislobo (Contributor)

This looks great. For some of our queries that return large data sets, this keeps our app servers from stalling.

@particlebanana (Contributor)

Yeah, the function inside the loop needs to be rewritten to be async. Using async.each over the keys instead of Object.keys(_model).forEach will yield.
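
Sketched out, that suggestion might look roughly like the following (normalizeDoc and deserializeKey are hypothetical names, not code from this PR; _model is the name used in the discussion):

var async = require('async');

function normalizeDoc(doc, _model, done) {
  // iterate the model's keys with async.each instead of a forEach loop
  async.each(Object.keys(_model), function (key, next) {
    deserializeKey(doc, key); // hypothetical per-property deserialization
    next();
  }, done);
}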

@luislobo (Contributor)

Well, that will not suffice. Async does not work in chunks, so if you pull 10,000 records, normalizeResults does not finish until all of them are processed. The new module works with configurable chunks, so it yields after x items have been processed, letting the event loop work on other things while normalizeResults finishes.

@stweedie (Author)

Unfortunately, async.forEachOf (which is the async library's function to iterate over the properties of an object) also operates without yielding.

I ran a test that created an object with a large number of properties, and it did not yield until it had finished iterating over all of them.

An object would need a very large number of properties before this blocking becomes observable, and I imagine there aren't many use cases for objects with thousands of properties.

The main issue I'm trying to fix is large data sets. Given a data set of m documents with n properties each, using async.each over the documents and async.forEachOf over each document's properties still runs all m * n operations before yielding, as the sketch below shows.
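
Concretely, the nested pattern looks like this sketch (deserialize is a hypothetical stand-in for the per-property work); because every callback is invoked synchronously, all m * n iterations run in a single tick:

async.each(results, function (doc, nextDoc) {
  async.forEachOf(doc, function (value, key, nextKey) {
    deserialize(value); // synchronous work on each property
    nextKey();          // called synchronously, so no yield happens here
  }, nextDoc);
}, done);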

@akocmark

When will this get into the main branch?

@stweedie (Author)

I've created a proof-of-concept test application that demonstrates the main issue I'm describing. You can check it out at https://github.com/stweedie/sails-mongo-test

I will soon be submitting a similar PR to Waterline, as it performs similar steps in its deserialization stage. I just wanted to get this (much smaller, in terms of code changes) PR in before attempting that.

@luislobo (Contributor)

Hey @mikermcneil, what do you think about this enhancement? In our use case it is critical for our normal day-to-day App Event queries, which can return a big set of data.
Can you please take a look and advise whether this can be merged as is, or whether we need to do some rework? /cc @stweedie

@luislobo (Contributor) commented Mar 1, 2016

Hey @particlebanana, can you please take another look at this? It makes projects using MongoDB much faster and more responsive in cases where queries return many objects. We are running a big production system with these fixes, and we would definitely love to use the npm versions instead of having to update our own version each time. I hope you can help with this.

@stweedie changed the title from "Change deserialization method to not block event loop" to "Performance Enhancement: Change deserialization method to not block event loop" on Mar 1, 2016
@Josebaseba (Contributor)

@luislobo I'm with you. In some cases I'm working with large amounts of data, and this will be a huge improvement. Thanks @stweedie!

@particlebanana (Contributor)

Hey everyone, I would really prefer to solve this without a non-standard approach that relies on a brand-new module I don't control. Can we work out a way to do it using the dependencies we already have?

@Salakar commented Apr 23, 2016

@particlebanana the logic of not using this because you have no control over the module is a little flawed; this project already relies on async, bluebird, lodash, etc., which you don't control either. Ideally this could be built into Waterline itself as a util that adapters can make use of, though, so from that perspective I see where you're coming from. 👍

This works great for large queries, especially if you're a big data consumer. However, I do think it should only kick in above a certain threshold, e.g. length > 500; anything smaller is probably not worth chunking in most cases and could just be standard async with no chunking, rather than synchronous as it is now (see the sketch below).
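
Sketched out, that threshold idea could look like this (THRESHOLD and normalize are assumed names; mapInChunks is the chunked helper sketched earlier in the thread):

var THRESHOLD = 500; // assumed cut-off, per the suggestion above

function normalize(results, iteratee, done) {
  if (results.length > THRESHOLD) {
    // large result sets: chunked mapping that yields between chunks
    mapInChunks(results, iteratee, done);
  } else {
    // small result sets: defer once, then map synchronously
    setImmediate(function () {
      done(null, results.map(iteratee));
    });
  }
}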

@jgeurts commented Apr 26, 2016

Avoiding the async library is something to consider for smaller result sets. I did some relatively informal performance tests of iterators with async.js, lodash, and native loops (https://github.com/jgeurts/node-loop-perf#results-100k-iterations). async.js can be orders of magnitude slower than the alternatives when the iteratee is wrapped with async.setImmediate to avoid call stack overflows.

@stweedie (Author)

@jgeurts It looks to me like the waterfall tests in your benchmark use setImmediate multiple times per iteration. In effect, with n operations on an array of size m:

array.forEach((item) => {
  // one setImmediate per operation per element
  setImmediate(operation1, (err, res1) => {
    setImmediate(operation2, (err, res2) => {
      // ...
    });
  });
});

which results in m * n setImmediate calls, one per operation per array element.

The change in this pull request is different. It mirrors the following logic:

setImmediate(() => {
  // one chunk of 16 items is processed synchronously, then control
  // returns to the event loop before the next chunk is scheduled
  array.slice(0, 16).forEach((item) => {
    operation1(item);
    operation2(item);
  });
});

which results in only m / (chunk size) setImmediate calls. Iterating over the items within a chunk is fully synchronous rather than scheduled on the event loop, which also helps avoid call stack overflows.

The end goal is the same: yield to other work on the event loop while still making progress on deserialization.

It is definitely a bit of overkill for smaller result sets, but the overhead is nowhere near as drastic as what the waterfall performance tests show.
