Duplicate records for a single saveInBackground call because Internet provider blocks response #4006
Comments
I am experiencing this issue too. Do you have a workaround for this?
We want the Parse Server team to solve this issue. You can work around it by generating a random number when you create the object, then querying for that number in Cloud Code before saving and returning the existing object if it has already been saved. This is a big issue for me because I always end up with duplicated values.
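For illustration, a minimal sketch of this workaround with the Parse JS SDK; the class name `Item`, the field `clientToken`, and the helper `saveOnce` are made up for this example:

```javascript
const Parse = require('parse/node');

// Sketch: `token` is generated once per logical object and reused across
// retries, so a retried save can detect that an earlier attempt succeeded.
// 'Item' and 'clientToken' are hypothetical names.
async function saveOnce(attributes, token) {
  const existing = await new Parse.Query('Item')
    .equalTo('clientToken', token)
    .first();
  if (existing) {
    return existing; // an earlier, retried request already saved it
  }
  const item = new Parse.Object('Item', { ...attributes, clientToken: token });
  return item.save();
}
```

Note that this check-then-save is not atomic; two concurrent retries can still race, which is why the server-side guards discussed later in this thread (a hook or a unique index) are the safer complement.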
This is a deal-breaking issue for me. I'm switching from Parse to firecloud; let's see how it goes.
You'll have the same issue anywhere you try to save data on a server and no confirmation that the server saved the data actually reaches the client. And if you're better off with firecloud, good for you!
Yeah, you're right. But this issue has been becoming more and more frequent. Don't get me wrong: is there any way I can assist you? Because this issue seems new to me. I have been using Parse Server for a few months and it never showed up before.
Does it show up for you? And if it does, we need a bit more detail than "sometimes" or "under peculiar network conditions", etc.
OK, I'll try to go as deep as possible. In the last week and a half, it happened to me four times. I have been having issues with my internet connection, and whenever the connection was bad and I called saveInBackground(), LogCat kept reporting a proxy status of NODATA (which, I assume, means that Android couldn't connect to the internet). Then, after the connection was re-established, Parse notified me that the object was saved successfully, and this is where the data is duplicated. I think it is relevant to say that the notification came only once. If I can be of any more help, just ask :).
I do not know how the network blocks sites. I remember testing my app on several Android versions, and one of the versions always failed. I tried again and again, and it took some time to realize the site was blocked for that version, while all other versions worked fine. When I ran a VPN app on the version that always failed, the app worked fine. I do not know why it was blocked for that version only; maybe some random function blocks certain sites.
This has definitely been an issue for us. We have solved it now, but it took some time.
While it did work sometimes, we found that we would still occasionally get duplicate objects when a client had a bad internet connection.
There is probably a better general solution than our afterSave hook, like a periodic timer that checks each class for dupes, or something, but it would require our client SDKs to make a GUID for each object before sending it to the server. Something kinda like what you get from an object persistence service in Java like Hibernate. I think that a general solution baked into the SDKs (and ideally documented with a solution for the REST API) would be a valuable addition to Parse. Thoughts and/or questions most welcome!
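For illustration, a minimal Cloud Code sketch of this kind of afterSave dedupe, assuming a recent parse-server with promise-based hooks and a client-generated `uuid` field; the class name `Upload` is made up:

```javascript
// Sketch: after each save, check whether an older object already carries
// the same client-generated uuid; if so, this save was a retry, so drop it.
Parse.Cloud.afterSave('Upload', async (request) => {
  const saved = request.object;
  const uuid = saved.get('uuid');
  if (!uuid) return;

  const older = await new Parse.Query('Upload')
    .equalTo('uuid', uuid)
    .notEqualTo('objectId', saved.id)
    .lessThan('createdAt', saved.createdAt)
    .first({ useMasterKey: true });

  if (older) {
    await saved.destroy({ useMasterKey: true }); // keep the oldest copy
  }
});
```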
I think the best way to solve this is to make the device request a pending id from the server before saving the object. When the id reaches the device for the first time, it starts saving the object. When a second id arrives, check the object: if it already has an id, do not change the id or save the object again. This will also work even if you have the Local Datastore enabled. This is a simple solution, but I know that if the team works on this problem they will come up with something better than mine.
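One hypothetical way to hand out server-generated ids with plain Cloud Code; the function name `reserveId` and the `ReservedId` class are invented, this is not an existing Parse API:

```javascript
// Sketch: the client calls this function first, stores the returned id,
// and attaches it to the object it later saves. Retries reuse the same id,
// so a hook or unique index can recognize and reject duplicates.
Parse.Cloud.define('reserveId', async () => {
  const reservation = new Parse.Object('ReservedId'); // hypothetical class
  await reservation.save(null, { useMasterKey: true });
  return reservation.id; // unique, server-generated
});
```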
@acinader I do like the idea of the clients setting a unique id before submitting it to the server. The same object coming in more than once could then be screened out by the hooks, the server itself, or even another SDK.
@montymxb But if you enable the Local Datastore, you will not know which of the duplicate objects must be removed. Also, you cannot be sure the id is unique even if you query the last 10 minutes; you can only say it is probably unique if you query the last 10 minutes.
@oazya500 You would have to respect the uuid in a local data store as well, so ideally you wouldn't have duplicates there either. This is up to the implementation to guard against. As for uniqueness, on the client end, you would have to check the server and anything else (like a local datastore) to ensure it's not duplicating an existing entry. Essentially, ACID compliance, with individual requests completing (and being unique) or being rejected outright.
That would work in theory; in practice this moves ID generation to the client and hopes there are no conflicts :/
It's ideal in theory but tough in practice :/ , not impossible however.
Not really, but that means the server is just doing upserts, which can be costly.
Well it helps to guarantee the validity of a transaction, but it is exceptionally expensive. Too expensive for anything more than small apps probably.
And I'm not sure we'll need it in that case. @acinader, you mentioned you had the issue; perhaps the exponential backoff in the apps is not good enough. Perhaps also the connection gets closed without a timeout, so we could infer that the save actually happened.
@flovilmart In my case, on the first and second attempts I got a timeout and the object was saved. We agreed the object must have an id before saving, but that id must be generated at the server to ensure there are no conflicts with other ids.
This is already how it's done; the client has no idea that the save failed from a network error. Perhaps we should cancel/abort the save when the request is cancelled server side instead; not sure how to handle that.
Can you provide network traces, stack traces, call traces? What SDK are you using? I'm all for improving the situation; at the moment, if the response fails, we don't destroy the created object. Even if we could, that has large implications and I'm not sure any system supports that correctly; moreover, your average MEAN stack app would not support that.
@oazya500 For starters, knowing which SDK(s) are involved would be helpful; that would narrow this down to how you set anything up on the client. As for a filtering solution, I do like the hook-based approach suggested above. For generating a 100% unique ID you can't really get around querying the server beforehand. However, and this is just a thought, you could use the current timestamp in combination with a randomly generated id (see the sketch below). Even though a duplicate is not too likely, it may still happen from time to time. Cloud Code hooks or another SDK could run as often as needed to account for possible collisions, whether that's each request or every 10 minutes or so, whatever time frame works. In the event of a collision you would have to have some sort of conflict resolution to follow through on (assuming you didn't catch it before the duplicate saved). A simple resolution mechanism would be to just toss the newer object(s) and keep the older/oldest one. That's a bit overly simplistic, but from there you should be able to put together a mechanism that works best for you. ::EDIT:: If you want to make your
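The code snippet from the original comment was lost; what follows is a hypothetical illustration of the timestamp-plus-random idea, with an invented helper name:

```javascript
// Sketch: combine the current time with a random suffix, so a collision
// requires two clients to draw the same random value in the same millisecond.
function makeClientId() {
  const timestamp = Date.now().toString(36);
  const random = Math.random().toString(36).slice(2, 10);
  return `${timestamp}-${random}`;
}

console.log(makeClientId()); // e.g. "j6ujoq2g-x1b4k9f0"
```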
Make the client request a unique id from the server before saving the object, without saving any object; just request a unique id.
I will provide them when I am on the network that causes this issue. Right now this network does not produce the error.
You should probably investigate that SSL exception, because it’s not coming from parse-server.
Ignore this error; it happens when the wifi switches off and it is not the error we are talking about. The network I am on now works fine and does not show the duplication error.
From what I can see, the objects are created correctly, and the server responds properly. I’m not sure the issue is coming from the SDK at all. How is your object saved?
On this network everything is fine and there is no error. I uploaded the pic just so you can see all of my configuration values. I will upload a pic of the error when I am on the network that causes it.
@flovilmart I was able to configure a Mikrotik router to act as the problematic network. I created a firewall filter that blocks incoming packets. When the rule matches 100% of packets, the device tries to save the object 3 times and then stops requesting any future saves; that duplicates the object 3 times on the server. When the rule matches about 80% of packets at random, the device retries the save until it succeeds; if it fails 3 times, it stops requesting for some time and then tries again. Every failed save creates a duplicate object on the server.
@montymxb If you are sure the id is unique, you can add a unique index in MongoDB, and then you will not need to check for duplicated objects in afterSave.
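For example, in the mongo shell; the collection name `Upload` and field `uuid` are hypothetical (parse-server stores each class in a collection of the same name):

```javascript
// Sketch: let MongoDB reject any second insert carrying the same uuid.
// Existing duplicates must be removed first or index creation fails;
// add sparse: true if some existing rows have no uuid field.
db.Upload.createIndex({ uuid: 1 }, { unique: true });
```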
If your app is controlling a nuke plant, then I think we would want a uuid from the server. However, for just about every other application, we can generate uuids on the client and just be OK with the fact that every couple hundred years we'll have a collision. Here's a good writeup: https://stackoverflow.com/questions/105034/create-guid-uuid-in-javascript In our case, we're generating the uuids on iOS, and while I didn't write that bit of code (I can look into it), I'm pretty sure that Apple exposes some bits to make uuids closer to 100% than the current JavaScript alternatives in the post above. If we wanted to get closer to nuke reliability, we could only look for dupes in a short window, say 30 min - 1 hour (a day?). The first time I encountered this idea of "practically unique" vs. "provably unique" it gave me a lot of pause... I've since just learned to accept it ;).
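As a side note, recent Node versions (14.17+) ship a cryptographically seeded UUID v4 generator, which sidesteps the Math.random pitfalls discussed in that writeup:

```javascript
const { randomUUID } = require('crypto');

// Sketch: generate the uuid once per logical object, before the first save
// attempt, and reuse it on every retry of that same object.
const uuid = randomUUID(); // e.g. "3b241101-e2bb-4255-8caf-4136c566a962"
```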
You'd also want to put more thought into network reliability and redundancy to make sure everything reaches the server. I don't believe the solution is client side, and you can implement it through a uuid key and validate in Cloud Code that it doesn't already exist.
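A minimal sketch of that Cloud Code validation, assuming a recent parse-server with promise-based hooks and a client-supplied `uuid` field; the class name `Upload` is made up:

```javascript
// Sketch: refuse to create a second object with a uuid that already exists,
// so a retried request cannot produce a duplicate.
Parse.Cloud.beforeSave('Upload', async (request) => {
  if (!request.object.isNew()) return; // only guard creates

  const uuid = request.object.get('uuid');
  if (!uuid) {
    throw new Parse.Error(Parse.Error.VALIDATION_ERROR, 'uuid is required');
  }

  const existing = await new Parse.Query('Upload')
    .equalTo('uuid', uuid)
    .first({ useMasterKey: true });
  if (existing) {
    throw new Parse.Error(Parse.Error.DUPLICATE_VALUE, 'object already saved');
  }
});
```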
@acinader The solution must be on both the server and the client side. The solution is to make the client obtain an objectId from the server before saving the object; when the objectId comes back to the client, it starts saving all objects. This solution will also work with the Local Datastore and saveEventually. @acinader Your approach saves a uuid with every object, and that will add size and cost to your database.
And what if the subsequent save doesn't return? I mean, this is getting out of whack, really; the solutions are NOT viable, not counting the impact on everyone who doesn't care about that mechanism. @oazya500 If you need a transactional system, you're not looking at the right technology.
@flovilmart I'm not advocating anything, just throwing out how we solved the problem. FYI, we had already been generating a uuid on the client for the relevant object anyway, so it was a pretty simple fix to just use it in the afterSave to mark dupes. FYI, the only class where this has been an issue is one with a (potentially large) file associated with it. I never really dug into why it was happening exactly, as I just assumed that it was a limitation and worked around it. It would be a lot of wasted queries to do on all classes after all inserts. I do think that our solution is a very solid one for anyone facing a similar problem with a particular class. If there were enough demand/complaints, it might be worth generalizing the solution so it could be "turned on" on a class-by-class basis. I am neither advocating that at this point nor volunteering ;) Edit: OK, I went back and read my comment and I was actually advocating, so mea culpa. I'll blame jet lag.
@acinader If you have this issue for just one specific class, you should look for the reason it happens for that class.
Overall we have to remember that the client is 'smart' and will retry by itself if a call fails. Perhaps this is not a desired behavior. On object creation we could probably 'undo' the save if the response times out, because that's a simple save and the deletion is obvious (delete by id). In the case of a PUT, this is harmless, as it will just retry putting the same data. I can have a look at soft rollback on res.timeout.
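A rough sketch of what such a soft rollback might look like at the Express layer; this is not actual parse-server internals, and `res.locals.createdObjectId` plus the 15-second window are invented for illustration:

```javascript
const express = require('express');
const Parse = require('parse/node');

const app = express();

// Sketch: if the creation response cannot be delivered within the window,
// delete the object the create handler recorded, so a client retry
// cannot leave a duplicate behind.
app.post('/classes/:className', (req, res, next) => {
  res.setTimeout(15000, async () => {
    const id = res.locals.createdObjectId; // assumed set by the create handler
    if (id) {
      const Klass = Parse.Object.extend(req.params.className);
      await Klass.createWithoutData(id)
        .destroy({ useMasterKey: true })
        .catch(() => {}); // best effort; the object may already be gone
    }
  });
  next();
});
```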
What if the response does not time out? Sometimes the response is bad JSON because the network rewrites the response to a frame containing ACCESS TO THIS WEBSITE IS DENIED, but the object is still created. I will upload pics when I am on that network.
Then probably parse-server is not the right technology for your use case. You want transactional consistency and this can't be guaranteed by parse-server.
Are you sure it's not your hosting provider that's throttling you????
Any news on this issue?
It is unlikely we'll address it.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Happening to me on every save that has more than 3 objects.
There are two ways I can think of to mitigate that:
|
My internet provider sometimes blocks the response to the device, and the device then requests to save the object again because the response looks like an error. All of the requests succeed and save the object, but the device receives an error response because the internet provider blocks the site's response to the device and shows this page instead:
http://82.114.160.94/webadmin/deny/
Sometimes the response succeeds and the device stops retrying, but by then the object has been saved up to 4 times.
This is a big issue.