We are using Grails 4.0.2 and GORM 7.0.3 with the Mongo engine set to "codec".
We have an entity with an attribute marked as `unique` in its constraints.
If we save several of these entities with the same attribute value through repeated API calls (one write per call), and the interval between calls is more than about 400-500 ms, GORM correctly responds that an entity with the same value already exists.
However, if we make the API calls rapidly, with an interval of about 50-150 ms, this often results in multiple entities with the same attribute value being saved to the database.
We can also reproduce this by invoking the API through asynchronous calls from a client-side JavaScript for loop on the same network.
I think this is very easy to reproduce, but we are available if you need more information.
If this turns out to be a real bug and not a mistake on our side, it could be a very serious one.
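For reference, a minimal domain class of the kind described (the class and attribute names are illustrative, not taken from the report). The timing-dependent behavior is consistent with how GORM enforces `unique`: it is a validator that runs a query before the insert (check-then-act), so two requests arriving close together can both see no existing record and both save.

```groovy
// Hypothetical domain class matching the description: one attribute
// declared unique in the constraints block. GORM validates this with
// a pre-insert lookup, which is not atomic with the insert itself.
class User {
    String username

    static constraints = {
        username unique: true
    }
}
```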
I have the same issue: I'm able to create multiple users with the same username if I call the API many times rapidly. Can anyone tell me how to fix this?
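The usual mitigation is to enforce uniqueness in MongoDB itself with a unique index, so that the database rejects the duplicate even when two requests pass GORM's validation concurrently. A sketch using the MongoDB Java driver's `createIndex` API, run once at startup — the bean name, database name, and collection name below are assumptions for illustration:

```groovy
// BootStrap.groovy – hypothetical sketch: create a unique index at startup
// so MongoDB rejects duplicate usernames even under concurrent writes.
import com.mongodb.client.MongoClient
import com.mongodb.client.model.IndexOptions
import com.mongodb.client.model.Indexes

class BootStrap {
    MongoClient mongoClient   // assumed to be injected by the GORM MongoDB plugin

    def init = { servletContext ->
        mongoClient.getDatabase('mydb')          // your database name
                   .getCollection('user')        // your collection name
                   .createIndex(Indexes.ascending('username'),
                                new IndexOptions().unique(true))
    }
}
```

With the index in place, the losing concurrent save fails with a duplicate-key write error (E11000) instead of silently succeeding, so the application still needs to catch that error and report it to the caller.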