memory consumption #109
I don't see how that will work, since typically the processes will be running on different machines. It would only work if there was a shared/common filesystem, which won't be available on every system.
Can you give details on how you are determining the memory usage? Also, are you sure these requests are being serviced? E.g., they aren't just getting stuck in RabbitMQ without any workers to process them? In that case, I'd expect the memory to increase in the way you observed. I did a quick pass over the code and no obvious memory leaks jumped out at me.
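(For reference: one way to check the Go runtime's own view of memory, as opposed to the OS-level RSS that `top` reports, is `runtime.ReadMemStats`. A minimal sketch:)

```go
package main

import (
	"fmt"
	"runtime"
)

// printMemStats reports the Go runtime's own view of memory, which can
// differ a lot from the RSS that `top` shows: HeapAlloc is live heap,
// HeapSys is heap memory obtained from the OS, and HeapReleased is
// memory the runtime has already handed back to the OS.
func printMemStats() {
	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	fmt.Printf("HeapAlloc=%dMB HeapSys=%dMB HeapReleased=%dMB NumGC=%d\n",
		m.HeapAlloc>>20, m.HeapSys>>20, m.HeapReleased>>20, m.NumGC)
}

func main() {
	printMemStats()
}
```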
Hm, you're right, I forgot to take this into account... I need to rethink the idea; there must be a solution for this. Can you help me find where (in the case above) the additional 50 MB comes from? This alone would drop memory consumption by two thirds. I investigated further and found out that the memory is returned to the OS 8-10 minutes after the OCR text is delivered to the client.
Sorry, I accidentally closed the issue; it was not intended. My testing procedure:
All requests got a valid reply from open-ocr, and the memory usage went down to just 40 MB and 9 MB about 10 minutes after delivery.
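(That roughly-10-minute delay matches how the Go runtime returns unused heap to the OS lazily via its background scavenger. If the spikes themselves are acceptable but the slow hand-back is not, `runtime/debug.FreeOSMemory` forces an immediate GC and release. A minimal sketch; the 30-second interval is an arbitrary choice:)

```go
package main

import (
	"runtime/debug"
	"time"
)

// reclaimLoop forces a GC and asks the runtime to return unused memory
// to the OS right away, instead of waiting for the background scavenger
// (which can take several minutes, matching the delay observed above).
func reclaimLoop(interval time.Duration) {
	for range time.Tick(interval) {
		debug.FreeOSMemory()
	}
}

func main() {
	go reclaimLoop(30 * time.Second) // interval is an arbitrary choice
	select {}                        // placeholder: the real daemon would run here
}
```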
So it sounds like you are saying the memory is eventually being reclaimed by the OS? Are you just trying to avoid the memory spikes from happening at all? Or are you seeing what appears to be a leak where it just keeps increasing? You mentioned you saw it grow to 2 GB.
There are two problems with the memory footprint.
You can see how the memory footprint looks over time (thanks for the gctrace link!). Thanks!
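(For reference, running either binary with `GODEBUG=gctrace=1`, which is what that link describes, makes the runtime print one line per collection. Per the Go runtime documentation, the line has roughly this shape:)

```
gc # @#s #%: #+#+# ms clock, #+#/#/#+# ms cpu, #->#-># MB, # MB goal, # P
```

The `#->#-># MB` fields are the heap size at GC start, at GC end, and the live heap; comparing the live-heap numbers over time is usually the easiest way to tell a genuine leak from a transient spike.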
I think the problem is here: Line 215 in 4fd4a2a
If it times out waiting for a worker response, then nothing will be listening on that channel. Do you see "send result to rpcResponseChan" in the logs? Alternatively, do you see extra goroutines stuck there? (The pprof sketch below shows one way to dump the goroutine stacks.)
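(A minimal sketch of the suspected pattern, for illustration only: `ocrResult` and `deliver` are made-up names, not the actual open-ocr code, though `rpcResponseChan` mirrors the channel referenced above. With an unbuffered channel, a timed-out request leaves the sending goroutine blocked forever, pinning the result in memory; a buffer of size 1 is one possible fix:)

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

type ocrResult struct {
	Text string // in the real code this can be a large OCR result
}

// callLeaky shows the suspected leak: rpcResponseChan is unbuffered, so
// if the select times out first, the goroutine delivering the result
// blocks forever on the send, and the result can never be collected.
func callLeaky(deliver func() ocrResult, timeout time.Duration) (ocrResult, error) {
	rpcResponseChan := make(chan ocrResult) // unbuffered: send needs a receiver
	go func() {
		rpcResponseChan <- deliver() // blocks forever once the caller gives up
	}()
	select {
	case res := <-rpcResponseChan:
		return res, nil
	case <-time.After(timeout):
		return ocrResult{}, errors.New("timeout waiting for worker")
	}
}

// callFixed is one possible fix: a buffer of size 1 lets a late sender
// complete the send and exit, so the abandoned result becomes
// unreachable and the garbage collector can reclaim it.
func callFixed(deliver func() ocrResult, timeout time.Duration) (ocrResult, error) {
	rpcResponseChan := make(chan ocrResult, 1) // buffered: send never blocks
	go func() {
		rpcResponseChan <- deliver()
	}()
	select {
	case res := <-rpcResponseChan:
		return res, nil
	case <-time.After(timeout):
		return ocrResult{}, errors.New("timeout waiting for worker")
	}
}

func main() {
	slow := func() ocrResult {
		time.Sleep(2 * time.Second)
		return ocrResult{Text: "done"}
	}
	if _, err := callFixed(slow, time.Second); err != nil {
		fmt.Println(err) // times out, but the sender goroutine can still exit
	}
}
```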
This is the output: the last successful OCR request, followed by the ones that didn't make it in time:
There is no "send result to rpcResponseChan" at all. |
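(One way to confirm whether goroutines are stuck on that send is to expose the standard pprof endpoints and dump full goroutine stacks; a sketch, where port 6060 is just a conventional choice:)

```go
package main

import (
	"net/http"
	_ "net/http/pprof" // registers the /debug/pprof/* handlers
)

func main() {
	// Expose pprof on a side port. Full goroutine stacks can then be
	// fetched with:
	//   curl http://localhost:6060/debug/pprof/goroutine?debug=2
	// Goroutines stuck on the channel send show up in a "chan send" state.
	go http.ListenAndServe("localhost:6060", nil)

	select {} // placeholder: the real httpd/worker would run here
}
```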
I wonder if the memory leak would be reproducible in a simpler case -- what if there is no worker running? Do you see the memory just keep increasing as you add jobs that are never processed?
I have a workaround for the memory leak caused by cli_worker, but I think we should address the actual issue, since the workaround would not help if a request that is being processed times out.
Ok, thanks for the explanations and the steps to repro.
Hey tleyden!
I'm looking into memory consumption by both the worker and the http daemon, and it can be very high. Starting with a low memory profile of just under 1 MB, it grows with every request. In my test, memory consumption grew to over 2 GB for 10 requests with a TIFF image of about 30 MB.
I tried to debug a little bit, but lacking experience, I only found out that most of the memory is allocated on the heap by makeSlice. This example showed the consumption after one request with a 30 MB payload in total.
I'm trying to understand where the additional 50 MB comes from, but I can't figure it out.
I assume the memory load comes from storing the payload as []byte in the function url2bytes.
Is it possible to rewrite the function to store the payload on the filesystem and pass the path to the file over RabbitMQ instead of the whole payload? (A sketch of this idea follows.)
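(For illustration, a sketch of that idea with a hypothetical `url2file` in place of `url2bytes`: it streams the download straight to a temp file and returns the path, so the image never sits in memory as a single `[]byte`. As noted above, this only works if httpd and workers share a filesystem:)

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
)

// url2file (hypothetical) streams the payload straight to a temp file
// and returns its path; the path, not the bytes, would then be
// published over RabbitMQ.
func url2file(url string) (string, error) {
	resp, err := http.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()

	f, err := os.CreateTemp("", "ocr-payload-*")
	if err != nil {
		return "", err
	}
	defer f.Close()

	// io.Copy moves data in fixed-size chunks, so memory use stays
	// flat regardless of payload size.
	if _, err := io.Copy(f, resp.Body); err != nil {
		os.Remove(f.Name())
		return "", err
	}
	return f.Name(), nil
}

func main() {
	path, err := url2file("http://example.com/image.tiff") // placeholder URL
	if err != nil {
		fmt.Println("download failed:", err)
		return
	}
	fmt.Println("payload stored at", path)
}
```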
The memory consumption of the worker is even bigger, I think for the same reason, but I can't prove it because I was not able to get pprof data for this part of open-ocr. (See the sketch below for one way to do that.)
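(If the worker has no HTTP listener for pprof, one alternative is writing a heap profile to a file from inside the process; a minimal sketch, where the output path and the trigger are placeholders:)

```go
package main

import (
	"os"
	"runtime"
	"runtime/pprof"
)

// dumpHeapProfile writes a heap profile to the given path; the file can
// be inspected offline with `go tool pprof <binary> <path>`. Useful for
// a worker process that does not expose an HTTP endpoint.
func dumpHeapProfile(path string) error {
	f, err := os.Create(path)
	if err != nil {
		return err
	}
	defer f.Close()
	runtime.GC() // run a GC first so the profile reflects live objects
	return pprof.WriteHeapProfile(f)
}

func main() {
	_ = dumpHeapProfile("/tmp/worker-heap.pprof") // placeholder path
}
```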
I would try to rewrite the code; I just need to know which parts are involved (I don't understand whether the RabbitMQ part will be involved).
Are you interested in working on this problem together and giving me some hints?
Regards!