What is the correct way to increase JVM memory resource #5370

Open
marblerun opened this issue Jun 22, 2022 · 5 comments
@marblerun

Hi,

Hopefully a simple question. I'd like to at least double the existing values for jni.initial.memory and jni.max.memory, and can see that the expected update method is to edit ConfigOptions.asciidoc.

That part seems fairly clear; what isn't clear is what to do next. I'm hoping I just edit the "Default Value" for both, but is there a process required to generate the corresponding JSON file, or does that happen automatically? Just restarting Tomcat isn't enough, so I assume I'm missing something here.

Kind regards,

Mike

@brianhatchl
Contributor

I think changes to ConfigOptions.asciidoc only get picked up at build time. There is a set of conf files under core/conf that can be updated to override default parameter values for specific conflation/command types. So if you are running Reference Conflation you could add those two jni params to core/conf/ReferenceConflation.conf like so:

{
  "#": "Only add entries here that have different default values than the options in ConfigOptions.asciidoc. Reference",
  "#": "conflation is the default configuration, so there shouldn't be too many.",
  "hootapi.db.writer.preserve.version.on.insert": "true",
  "jni.initial.memory": "512m",
  "jni.max.memory": "4g",
  "#": "end"
}
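
For reference, these conf files are picked up when they are passed to a hoot command with -C (as in the conflate example later in this thread). A hypothetical invocation, with placeholder input/output file names, would look like:

hoot conflate -C ReferenceConflation.conf -C UnifyingAlgorithm.conf input1.osm input2.osm output.osm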

@marblerun
Author

Thanks Brian,

I had a feeling that a build might be involved. We are currently running off an AMI you suggested a couple of months back, and it's now going quite well. I wasn't sure of the format, and had seen a suggestion that some years ago the option to do something similar in conf/hoot.json had been removed, so hadn't realised there might be another way.

The bad news is that this appears to have had no effect. I've restarted tomcat, but the -Xmx value remains unchanged at 2048M.

Mike

@brianhatchl
Contributor

brianhatchl commented Jun 22, 2022

I believe that when the hoot command process accesses JNI, that is a separate JVM from the one used by Tomcat. The conf file is passed as an argument to specific hoot commands called from the web services, where the params will be read to override the defaults defined in ConfigOptions.asciidoc.

What hoot command are you running? I don't have first-hand experience with the JOSM validation integration, but digging around it seems like only the hoot validate command may use it. (I was thinking it might be invoked during conflation output.)

I don't believe the validate command is hooked up to run after any conflate calls from the Web UI. You'd have to invoke it directly from the command line and if that's the case you can specify values for those params:

hoot validate -D jni.initial.memory=512m -D jni.max.memory=4g input.osm --output output.osm

p.s. Are you actually seeing java out-of-memory errors?

@marblerun
Author

Hi Brian,

I'll get to the errors seen in a mo. My colleague Callum is using commands like:

hoot conflate -C ReferenceConflation.conf -C UnifyingAlgorithm.conf -D add.review.tags.to.features=true -D search.radius.highway=1 input1.osm input2.osm output.osm

which I hope means he is picking up the changes I made to ReferenceConflation.conf.

His next comment was " Seems to work when results are exported which is good" so we are making progress.

He passed me a screen grab which I will try and include
[screenshot: MemoryUsageChecker warning showing memory usage at or above the 95% threshold]

This suggests to me that a program called MemoryUsageChecker.cpp has reached or exceeded a 95% threshold, even though there is plenty of real memory available. This happened yesterday, and one of my normal perf tools wasn't installed, so I can't be certain. Just trying to make the required adjustments to stop this happening again.

Mike

@bmarchant
Contributor

@marblerun This is simply a warning that you are using more than the 95% threshold of total physical memory available. It states that Hootenanny is using 22% of it and everything else (i.e. the OS and other running programs) is using the remaining 73%+. It doesn't have anything to do with the JNI memory.

The checker looks at the amount of memory left in the environment (after what the OS uses, plus all running hoot jobs, etc.). If it's less than the threshold, it logs a warning and also lets the user know what percentage of the total memory their job is taking up. It only logs once so the logs don't get cluttered up by jobs whose memory consumption goes repeatedly above and below the threshold. We may end up really only needing to check physical memory, but it checks both physical and virtual memory consumption for now.

That warning doesn't affect the output or Hootenanny's ability to perform correctly; it just indicates that the physical machine that Hootenanny is running on is close to running out of memory. It is possible that, since you are in a Docker container, there are limits you've put on the amount of memory the container has access to, though probably not, because that isn't something the "default nightly docker container" does by default.
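
If you want to rule that out, one way to check whether a memory limit was set on the container (the container name below is a placeholder) is:

docker inspect --format '{{.HostConfig.Memory}}' hootenanny

A value of 0 means no limit was set; anything else is the limit in bytes.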

There is a utility that we've included in Hootenanny that helps us view the memory usage of Hootenanny as it is running. It is found at $HOOT_HOME/scripts/util/graph_memory.sh. From the base Hootenanny directory use the following command:

./scripts/util/graph_memory.sh hoot.bin

Then run your Hootenanny command and it will monitor the memory usage of the executable and will show a graph of memory usage after execution is complete.
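
For example, assuming you start the monitor in one terminal and run the conflate command from earlier (placeholder file names) in another:

./scripts/util/graph_memory.sh hoot.bin
hoot conflate -C ReferenceConflation.conf -C UnifyingAlgorithm.conf input1.osm input2.osm output.osm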
