Error: The process cannot access the file because another process has locked a portion of the file. #309
The current implementation is like this: if a file can't be opened at all, it will be skipped; however, if a file can be opened but can't be read, then Duplicacy will bail out with an error. It can't simply skip this file because some bytes from this file may have already been put into a chunk, which would now be in an inconsistent state. However, in the case where no bytes have been read when a read error occurs, it may be safe to skip the file. In your case, if you enable the -vss option, that particular file may be readable.
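To illustrate the distinction being drawn here, below is a minimal sketch in Go (Duplicacy's implementation language) of a packer that skips a file only when the read error happens before any of its bytes have entered a chunk. The function and callback names are hypothetical; this is not Duplicacy's actual packing code.

```go
package main

import (
	"fmt"
	"io"
	"os"
)

// packFile streams a file into the chunk maker via emit. If the file cannot
// be opened, or the very first read fails (so none of its bytes are in any
// chunk yet), it is skipped. Once bytes have been emitted, a later read
// error is fatal because the in-progress chunk is now inconsistent.
func packFile(path string, emit func([]byte)) error {
	file, err := os.Open(path)
	if err != nil {
		fmt.Printf("skipping %s: cannot open: %v\n", path, err)
		return nil
	}
	defer file.Close()

	buffer := make([]byte, 64*1024)
	var emitted int64
	for {
		n, err := file.Read(buffer)
		if n > 0 {
			emit(buffer[:n])
			emitted += int64(n)
		}
		if err == io.EOF {
			return nil
		}
		if err != nil {
			if emitted == 0 {
				// Nothing from this file has entered a chunk: safe to skip.
				fmt.Printf("skipping %s: read failed before any data was packed: %v\n", path, err)
				return nil
			}
			// Partial data is already in the chunk stream: cannot skip silently.
			return fmt.Errorf("read %s after %d bytes: %w", path, emitted, err)
		}
	}
}

func main() {
	if err := packFile("example.txt", func([]byte) {}); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(100)
	}
}
```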
VSS didn't help!
In this case, may I suggest Duplicacy handle these files the same way as files it can't open at all? Somehow I feel that a backup program should try to back up as much as possible: tell me what couldn't be backed up (give me error codes, etc., so I can pass them to the developer), and then carry on.
Again this problem, for a different file now:
This problem seems to continue even after a restart, and therefore I added the whole
I've bumped into this same problem, and it's darn annoying. Anything else is a nightmare! Backup runs an hour, then hangs. You find it later, start it again, it hangs again, etc.
Shouldn't the process then be like this: at the time of the read error we are collecting data for a certain chunk, so there must be a list of which files have gone into the current chunk so far. Simply discard the current chunk, add the file that failed to read to an ignore list, rebuild the chunk from scratch with the files that we were able to read, skip the bad file, and continue as usual. This is a major issue: I constantly run into these files, have to put them on my filter list and restart the whole process, which takes ages with 3 TB to back up because all files need to be hashed again. (I guess that's another ticket: don't rehash if the backup failed, but skip the files we already uploaded.)
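A rough sketch of that idea in Go, under the assumption that the packer can remember which bytes each file contributed to the not-yet-uploaded chunk. All names here are hypothetical, and Duplicacy's real chunker (which is content-defined) is more involved than this fixed-size illustration.

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"os"
)

// buildChunks packs files into fixed-size chunks, streaming each file into
// the open chunk as it is read. When a read fails mid-file, the open chunk
// (which may now hold partial bytes of the bad file) is discarded, the file
// goes onto a skip list, and the chunk is rebuilt from the files that were
// read completely.
func buildChunks(paths []string, chunkSize int) (chunks [][]byte, skipped []string) {
	var chunk bytes.Buffer          // chunk currently being filled
	contents := map[string][]byte{} // file -> bytes of it inside `chunk`
	var order []string              // files in the order they entered `chunk`

	resetChunk := func() {
		chunk.Reset()
		contents = map[string][]byte{}
		order = nil
	}
	flushChunk := func() {
		if chunk.Len() > 0 {
			chunks = append(chunks, append([]byte(nil), chunk.Bytes()...))
		}
		resetChunk()
	}
	rebuildFromGoodFiles := func() {
		good, goodOrder := contents, order
		resetChunk()
		for _, p := range goodOrder {
			chunk.Write(good[p])
			contents[p] = good[p]
			order = append(order, p)
		}
	}

	for _, path := range paths {
		file, err := os.Open(path)
		if err != nil {
			skipped = append(skipped, path) // nothing of this file entered the chunk
			continue
		}
		var fileBytes []byte
		buffer := make([]byte, 64*1024)
		readFailed := false
		for {
			n, rerr := file.Read(buffer)
			if n > 0 {
				chunk.Write(buffer[:n]) // streamed straight into the open chunk
				fileBytes = append(fileBytes, buffer[:n]...)
			}
			if rerr == io.EOF {
				break
			}
			if rerr != nil {
				fmt.Printf("skipping %s: %v\n", path, rerr)
				readFailed = true
				break
			}
		}
		file.Close()
		if readFailed {
			skipped = append(skipped, path)
			rebuildFromGoodFiles() // drop the partial bytes of the bad file
			continue
		}
		contents[path] = fileBytes
		order = append(order, path)
		if chunk.Len() >= chunkSize {
			flushChunk()
		}
	}
	flushChunk()
	return chunks, skipped
}

func main() {
	chunks, skipped := buildChunks(os.Args[1:], 4*1024*1024)
	fmt.Printf("built %d chunks; skipped %d files: %v\n", len(chunks), len(skipped), skipped)
}
```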
Duplicacy is also failing for me due to some locked files. This is happening with the -vss option (though I'd prefer not to use it at all so a non-admin user can run the backup jobs). I know which files are locked - they're always some process-specific memory-mapped files that I don't particularly care about backing up. I do have the option of messing around with include/exclude patterns to ignore these, but I would prefer a way to tell Duplicacy to just ignore any file-read or locked-file errors and move on. Currently it's terminating with exit code 100:
The problem with include/exclude filters is that these filenames change and may appear in different locations (locations that may also contain files we do want backed up). The other option for us is to do some dev work on the components that create these files in the first place, so they all go into the same folder and/or use a consistent filename.
I'm also having this problem, and I found the dev at FFS was able to solve it, stating only:
Another hit of this problem:
I was testing 2.1.2 before purchase and ran into this same issue: "Failed to read 0 bytes: read (file in question here): The process cannot access the file because another process has locked a portion of the file." Of all the software I've been testing so far, this is the only one to have this issue...?
Also hitting this error. In my case the file is in my user directory: AppData/Local/Google/Chrome/User Data/Default/LOCK (not surprising that it's locked, I guess! ;) |
I was very happy with Duplicacy and it has ALL the features I want, but this is a deal breaker for me. After a few days, when I checked the logs, I saw that Duplicacy simply bails out mid-backup, with only an error line in the logs. The process doesn't even exit with a non-zero exit code. If I hadn't checked the logs, I would have had no idea that 80% of my backup was skipped.
One more occurrence of this on the forum: https://forum.duplicacy.com/t/unexpected-network-error-occured/2027
I'm pinging this again as it keeps happening on all my machines now for Telegram:
I got an update on this bug. Absolute-path symlinks inside a VSS snapshot still point to the non-VSS folder, and thus to the live, locked files! Junctions can only be absolute-pathed, so they have the same problem, and would cause major problems if looped. Don't even try using them! Solutions: Nevertheless, Duplicacy should never return SUCCESS if a file failed to back up and the whole backup was interrupted...
@gilbertchen and @TheBestPessimist, I had an idea for a solution to this problem, for when symlink mode and VSS are used. Using the paths from the last example by @TheBestPessimist: Currently, Duplicacy does this: Instead, it should do: This would make the root symlinks nothing more than configuration, a list of paths to folders to be backed up by Duplicacy, not using the FS symlink interface at all, but in principle it would solve all the lock problems.
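A sketch of that remapping in Go, assuming the repository contains a first-level symlink such as c_link pointing at C:\ (as in the original report) and that a VSS snapshot of that volume is already mounted. The function name, the D:\repo location, the shadow-copy device path, and the error handling are all assumptions for illustration, not Duplicacy's actual code.

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// mapRootToSnapshot resolves a first-level symlink in the repository to its
// target on the live volume, then rewrites that target so it points inside
// the VSS shadow copy. Walking the returned path never touches the live,
// possibly locked files, so the symlink acts purely as configuration.
func mapRootToSnapshot(rootLink, liveVolume, snapshotRoot string) (string, error) {
	target, err := os.Readlink(rootLink)
	if err != nil {
		return "", fmt.Errorf("resolving %s: %w", rootLink, err)
	}
	if !strings.HasPrefix(strings.ToLower(target), strings.ToLower(liveVolume)) {
		return "", fmt.Errorf("%s points outside %s", rootLink, liveVolume)
	}
	// e.g. C:\Users\me -> \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy1\Users\me
	return snapshotRoot + `\` + target[len(liveVolume):], nil
}

func main() {
	mapped, err := mapRootToSnapshot(`D:\repo\c_link`, `C:\`,
		`\\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy1`)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("walk this instead of the symlink target:", mapped)
}
```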
This is brilliant, @NiJoao - thank you for all your investigations! It's ironic that those of us who thought we were being clever by using symlinks intentionally were, in fact, shooting ourselves in the foot. My favorite solution is your "(a) don't use symlink mode" combined with a fix to do things the "right way," i.e. as a command line argument, for example -f or -root or even -repo. (As I understand it, the main reason for the odd symlink behavior was to avoid another command line arg.) If I were doing it: -f pointing at a file would treat it as a list of root folders, and -f pointing at a folder would treat that folder as the root. If -f is specified, symbolic links are never followed. For backward compatibility, keep the buggy behavior if -f is not specified (and therefore "-f ." disables symlink following in the default root).
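To make the proposal concrete, here is a small Go sketch of how such a roots file might be read. The -f flag and this one-folder-per-line format are purely hypothetical; no such option exists in Duplicacy today.

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// loadRoots reads a hypothetical "-f" roots file: one folder per line, with
// blank lines and '#' comments ignored. If such a file were given, these
// paths would be backed up directly and symlinks would never be followed.
func loadRoots(path string) ([]string, error) {
	file, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer file.Close()

	var roots []string
	scanner := bufio.NewScanner(file)
	for scanner.Scan() {
		line := strings.TrimSpace(scanner.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		roots = append(roots, line)
	}
	return roots, scanner.Err()
}

func main() {
	roots, err := loadRoots("roots.txt") // e.g. what "-f roots.txt" would load
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("folders to back up:", roots)
}
```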
@NiJoao, thanks for the findings and suggestions (which I'm unsure how to verify), but my issue is that I get the same error even without using -vss. Re.
I honestly vote for this implementation since, in my mind, a backup is used as "worst-case scenario recovery". I would like to be able to restore anything, even if it is incomplete, instead of having my backups continuously fail simply because one file (which may not even be needed in the restore!) could not be read properly. A backup should save my ass no matter what, and that, for me, automatically implies the backup process should complete no matter what. (After all, without a new revision, there's nothing to restore -- there are only useless chunks from failed revisions.)
@TheBestPessimist, your issue of Duplicacy not being able to access locked files even without -vss is expected. Expected:
Not expected (totally agreeing with you):
Duplicacy should skip only the unreadable file and perform the remaining backup as usual. Anyway, I'm taking the obvious path that fixes everything, which is migrating my backups (and filters) to Root/Repo mode.
I've been affected by this issue and I'm willing to make a pull request to ignore all read errors (if this feature isn't already planned!). From reading this thread, the consensus seems to be that we should stop reading a file on a read error, discard it, and move on. But would it be desirable to keep the partial file and flag it as corrupted if the read error happens later than offset 0?
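If partial files were kept, one way to record that would be to flag the file's entry so the truncation is visible at restore time. This is only a sketch; the fields below are hypothetical and not Duplicacy's actual snapshot format.

```go
package main

import "fmt"

// Entry is a hypothetical per-file record in the snapshot metadata.
type Entry struct {
	Path      string
	Size      int64 // size the file reported when the backup started
	BytesRead int64 // how many bytes were actually packed into chunks
	Corrupted bool  // true if a read error truncated the file
}

// markPartial records how much of the file made it into chunks and flags the
// entry as corrupted if a read error cut it short before its full size.
func markPartial(entry *Entry, bytesRead int64, readErr error) {
	entry.BytesRead = bytesRead
	if readErr != nil && bytesRead < entry.Size {
		entry.Corrupted = true
	}
}

func main() {
	entry := Entry{Path: "AppData/Local/Google/Chrome/User Data/Default/LOCK", Size: 4096}
	markPartial(&entry, 1024, fmt.Errorf("locked portion"))
	fmt.Printf("%+v\n", entry)
}
```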
Has there been any work done on this yet?
For the past 5 days I have again been getting the error
Any plans to update this so that a backup will continue on a file read error? Example below: "The network is busy" and the backup stops; it would be ideal to skip this file and continue. Failed to read 0 bytes: read \?\UNC....filename: The network is busy.
I forked Duplicacy and just ignored this error in my version. I've been using this version for a while now: dluxhu@e77f665
Is it okay that Duplicacy just dies when it cannot read a specific file? I thought it would skip it and then carry on.
(Yes, I am backing up my whole C drive. I am doing this because I am trying to make a good filter file, which I intend to give to you, if you want to add it -- if not in code, at least in the wiki -- as a start.)
c_link is a symlink to C:\