Optimize! #5
Open · ryan-endacott opened this issue Feb 14, 2013 · 2 comments

@ryan-endacott (Owner)

If we want to handle lots of interests, optimizing the process definitely wouldn't hurt.

I imagine our bottleneck right now is the calls to Google Location services, which should drop off as our cache builds up. It may be worth separating the concerns a bit: build a queue of the raw locations we still need and have that resolve separately, in parallel. Just a thought, and it may not even be worth our time for now.
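
A minimal sketch of the queue-plus-cache idea in Node-style JavaScript, for reference. The `geocode()` helper, the cache shape, and the concurrency limit are placeholders for illustration, not the project's actual code.

```js
// A rough sketch: an in-memory cache in front of the geocoder, plus a queue of
// raw locations drained by a fixed number of parallel workers.
// geocode(), the cache shape, and CONCURRENCY are placeholders, not a real API.

const cache = new Map();   // raw location string -> resolved result
const queue = [];          // raw location strings still waiting to be resolved
const CONCURRENCY = 5;     // how many lookups to allow at once

// Placeholder standing in for the real Google Location services call.
async function geocode(rawLocation) {
  return { query: rawLocation, lat: 0, lng: 0 };
}

// Resolve one raw location, hitting the cache first.
async function resolve(rawLocation) {
  if (cache.has(rawLocation)) return cache.get(rawLocation);
  const result = await geocode(rawLocation);
  cache.set(rawLocation, result);
  return result;
}

// Drain the queue with CONCURRENCY workers running in parallel.
async function drainQueue() {
  const workers = Array.from({ length: CONCURRENCY }, async () => {
    while (queue.length > 0) {
      const raw = queue.shift();
      try {
        await resolve(raw);
      } catch (err) {
        console.error('geocode failed for', raw, err);
      }
    }
  });
  await Promise.all(workers);
}

// Example usage with a placeholder location:
// queue.push('Columbia, MO');
// drainQueue().then(() => console.log(cache));
```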

@dan-silver (Collaborator)

Very interesting idea, and I completely agree that we should do this; I just don't know when. I think it's important that we start getting baseline results and tracking how long the script takes to run. Building another app to collect the stat objects so we can browse through them may be a good next step. If we can include timestamps for each function (or section of the process), that would tell us where the bottleneck actually is and give us something to compare any future improvements against.
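
A minimal sketch of what those stat objects could look like, assuming the script can be broken into named async steps; the step names and the stats shape below are made up for illustration.

```js
// A rough sketch of per-section timing collected into a stat object.
// The section names and stats shape are illustrative, not the project's schema.

function createStats() {
  return { startedAt: new Date().toISOString(), sections: {} };
}

// Run an async step and record how long it took (in milliseconds) under `name`.
async function timeSection(stats, name, fn) {
  const start = process.hrtime();
  const result = await fn();
  const [sec, nsec] = process.hrtime(start);
  stats.sections[name] = sec * 1000 + nsec / 1e6;
  return result;
}

// Example usage with placeholder step functions:
// const stats = createStats();
// await timeSection(stats, 'fetchInterests', fetchInterests);
// await timeSection(stats, 'geocodeLocations', drainQueue);
// console.log(JSON.stringify(stats, null, 2));
```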

@ryan-endacott (Owner, Author)

Agreed. I'll look into Node.js profiling.
