Performance bottleneck in TypeFactory._fromClass #435
Could you add a bit more information on the location of the bottleneck? It sounds like it should be related to caching (meant to avoid unneeded re-resolution). Multi-core use cases are important to cater for, so I'd be happy to resolve the issue you are facing, but a bit more info would help.
My test case has 16 cores, each calling the following during processing:
These trigger TypeFactory._fromClass(..) to resolve the type. A YourKit profile is telling me that the method is getting blocked on the synchronized(..) within it. If it helps, I have attached a partial YourKit screenshot of one of the threads running. Is there anything else I can get? Many thanks! Sean
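The snippet attached above did not survive extraction, but the contended pattern being described -- an LRU map guarded by a single monitor -- can be sketched in plain Java. This is a hypothetical stand-in (class and method names invented), not Jackson's actual TypeFactory code:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical stand-in for a type-resolution cache guarded by a single
// monitor. Every lookup, hit or miss, must acquire the same lock, so
// 16 cores end up serialized on it.
class SynchronizedLruCache<K, V> {
    private final Map<K, V> map;

    SynchronizedLruCache(final int maxEntries) {
        // accessOrder=true gives LRU iteration order; the eldest
        // (least recently used) entry is dropped when the bound is hit.
        this.map = new LinkedHashMap<K, V>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > maxEntries;
            }
        };
    }

    synchronized V get(K key) { return map.get(key); }

    synchronized void put(K key, V value) { map.put(key, value); }

    synchronized int size() { return map.size(); }
}
```

With a single lock like this, even cache hits from many threads serialize on the monitor, which matches the blocking that YourKit reports.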
I'll have a look. Beyond avoiding locking, it is quite possible there are ways to avoid resolution itself; i.e. this could also be a side-effect of some other performance issue (not reusing some objects that should be reused, or such). But I'll let you know what I find.
Ok. With version 2.3, the only places I see are short sync blocks around the LRU type cache. There are a couple of work-arounds that could help: basically, the goal would be to avoid repeated type resolution by reusing already-resolved objects -- and these are actually preferable anyway.
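The inline code spans in the comment above were lost during extraction, but the gist of the suggested work-arounds (reuse already-resolved objects so the factory is only consulted once per type) can be sketched caller-side. Everything here is hypothetical: `resolveType` merely stands in for an expensive factory call such as TypeFactory._fromClass(..):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical caller-side memoization: resolve each Class once, then
// reuse the resolved value on every subsequent call, bypassing the
// factory's synchronized path entirely.
class CallerSideTypeCache {
    static final AtomicInteger resolutions = new AtomicInteger();

    private static final ConcurrentMap<Class<?>, String> RESOLVED =
            new ConcurrentHashMap<>();

    // Stand-in for the expensive, lock-guarded factory call.
    private static String resolveType(Class<?> raw) {
        resolutions.incrementAndGet();
        return "type:" + raw.getName();
    }

    static String typeFor(Class<?> raw) {
        // computeIfAbsent is atomic per key and lock-free on hits.
        return RESOLVED.computeIfAbsent(raw, CallerSideTypeCache::resolveType);
    }
}
```

The trade-off Sean raises below still applies: this duplicates a cache that TypeFactory already maintains internally.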
Thanks for looking into it. The problem is that the code that calls readValue(..) doesn't use concrete types -- it is also generic; it is part of our application's serialization framework. Implementing either of the workarounds would mean creating an additional cache, which seems unnecessary given that the map already exists within TypeFactory. In our use-case the lookup on LRUMap does occur each time -- use of a concurrent variant could be worth it. Sean
Ok. I suspect that use of a read/write lock could help here, as presumably most accesses are reads. The challenge for me would be verification of the improvement; I would not want to make a change without having a way to confirm that it helps with the reported problem.
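The read/write-lock idea can be sketched as follows; this is a minimal illustration under assumed names, not the patch that was eventually committed. One subtlety worth noting: a LinkedHashMap constructed with accessOrder=true rewires its internal links on every get(), so true LRU reads are not safe under a shared read lock. The sketch below therefore keeps insertion order (evicting the oldest entry), so that get() is genuinely read-only:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Hypothetical read/write-locked bounded cache. Concurrent readers
// proceed in parallel; only writers (cache misses) exclude everyone.
class ReadWriteLockedCache<K, V> {
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    private final Map<K, V> map;

    ReadWriteLockedCache(final int maxEntries) {
        // Insertion order (not access order), so get() never mutates
        // the map and is safe under the shared read lock.
        this.map = new LinkedHashMap<K, V>() {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > maxEntries;
            }
        };
    }

    V get(K key) {
        lock.readLock().lock();
        try {
            return map.get(key);
        } finally {
            lock.readLock().unlock();
        }
    }

    void put(K key, V value) {
        lock.writeLock().lock();
        try {
            map.put(key, value);
        } finally {
            lock.writeLock().unlock();
        }
    }
}
```

Once the cache is warm, lookups take only the read lock, so threads no longer serialize on hits.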
We would happily compile up and test any code changes in our environment and forward the profiled results. I would need 1-2 days to do it, though. Sean
Sounds like a deal -- I hope to find time tonight. If it could be tested on 2.4.0-SNAPSHOTs (for jackson-core), that'd be easiest; I could push snapshots. If not, I could speculatively check it into the 2.3 branch, given that it should be a small change all around.
Please don't work of an evening on my account! It's not that important. We're currently using 2.3. Using 2.4 could be problematic, as we haven't tried it yet -- seems like some potential to encounter API differences. Maybe not.
:) That's ok -- I tend to work on Jackson during off-hours. So it's not extra work, just depends on when I happen to have time and work on a related area. On 2.3 vs 2.4: I do not expect you would be running with 2.4, and if this worked well, I would backport it for 2.3.3. Official 2.4 is not yet out, and may take a while anyway. I'll update this issue when I have the patch, either way.
Ok: I checked in a change that now uses a read/write lock for the LRU cache, which should perform better.
Acknowledged. Will test and post results in the next few days. Many thanks!
Perfect. Looking forward to the test results.
I assume this works; closing.
Ouch. As per #503, it looks like the change cannot be used, and I will need to consider an alternative approach.
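One possible alternative take -- purely a sketch, not necessarily what was adopted -- is to drop locking altogether: back the cache with a ConcurrentHashMap and simply flush it when it reaches its bound. Lookups then never block each other, at the cost of occasionally re-resolving everything after a flush:

```java
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical lock-free bounded cache: reads are plain ConcurrentHashMap
// gets (no shared lock), and instead of LRU eviction the whole map is
// cleared once it grows to its bound. Crude, but contention-free.
class ClearOnFullCache<K, V> {
    private final int maxEntries;
    private final ConcurrentHashMap<K, V> map = new ConcurrentHashMap<>();

    ClearOnFullCache(int maxEntries) { this.maxEntries = maxEntries; }

    V get(K key) { return map.get(key); }

    void put(K key, V value) {
        if (map.size() >= maxEntries) {
            // Occasional full re-resolution beats constant blocking;
            // entries are cheap to rebuild on the next miss.
            map.clear();
        }
        map.put(key, value);
    }

    int size() { return map.size(); }
}
```

This avoids the read-lock pitfalls of an access-ordered map entirely, since no lookup ever mutates shared state.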
When using 16 cores to process JSON, we are seeing a performance bottleneck in TypeFactory._fromClass(..). The method contains a synchronized(..) block. Could you please consider an alternative locking strategy? The simplest fix might be a reader/writer lock.
Many thanks!
Sean