
Error: The process cannot access the file because another process has locked a portion of the file. #309

Open
TheBestPessimist opened this issue Dec 20, 2017 · 26 comments

Comments

@TheBestPessimist
Contributor

TheBestPessimist commented Dec 20, 2017

2017-12-20 12:45:50.604 WARN OPEN_FAILURE Failed to open file for reading: open D:\backup_software_test\duplicacy repo/c_link/Users/crist/AppData/Local/Microsoft/Windows/SettingSync/metastore/meta.jfm: The process cannot access the file because it is being used by another process.
2017-12-20 12:45:50.604 WARN OPEN_FAILURE Failed to open file for reading: open D:\backup_software_test\duplicacy repo/c_link/Users/crist/AppData/Local/Microsoft/Windows/SettingSync/remotemetastore/v1/edb.log: The process cannot access the file because it is being used by another process.
2017-12-20 12:45:50.646 WARN OPEN_FAILURE Failed to open file for reading: open D:\backup_software_test\duplicacy repo/c_link/Users/crist/AppData/Local/Microsoft/Windows/SettingSync/remotemetastore/v1/meta.edb: The process cannot access the file because it is being used by another process.
2017-12-20 12:45:50.646 WARN OPEN_FAILURE Failed to open file for reading: open D:\backup_software_test\duplicacy repo/c_link/Users/crist/AppData/Local/Microsoft/Windows/SettingSync/remotemetastore/v1/meta.jfm: The process cannot access the file because it is being used by another process.
2017-12-20 12:45:50.654 WARN OPEN_FAILURE Failed to open file for reading: open D:\backup_software_test\duplicacy repo/c_link/Users/crist/AppData/Local/Microsoft/Windows/UPPS/UPPS.bin: Access is denied.
2017-12-20 12:45:50.654 WARN OPEN_FAILURE Failed to open file for reading: open D:\backup_software_test\duplicacy repo/c_link/Users/crist/AppData/Local/Microsoft/Windows/WebCache/V01.log: The process cannot access the file because it is being used by another process.
2017-12-20 12:45:50.675 WARN OPEN_FAILURE Failed to open file for reading: open D:\backup_software_test\duplicacy repo/c_link/Users/crist/AppData/Local/Microsoft/Windows/WebCache/V01tmp.log: The process cannot access the file because it is being used by another process.
2017-12-20 12:45:50.675 WARN OPEN_FAILURE Failed to open file for reading: open D:\backup_software_test\duplicacy repo/c_link/Users/crist/AppData/Local/Microsoft/Windows/WebCache/WebCacheV01.dat: The process cannot access the file because it is being used by another process.
2017-12-20 12:45:50.675 WARN OPEN_FAILURE Failed to open file for reading: open D:\backup_software_test\duplicacy repo/c_link/Users/crist/AppData/Local/Microsoft/Windows/WebCache/WebCacheV01.jfm: The process cannot access the file because it is being used by another process.
2017-12-20 12:45:50.703 DEBUG CHUNK_UPLOAD Uploading was skipped for chunk d36bb82e5c5781ad5ed0ff7c88025ae1885d1380d52a6c047126532b3a036109
2017-12-20 12:45:50.703 INFO UPLOAD_PROGRESS Uploaded chunk 1678 size 8503908, 18.56MB/s 01:32:41 7.5%
2017-12-20 12:45:50.704 DEBUG CHUNK_UPLOAD Uploading was skipped for chunk bb2e5056e24561c6a8ff68ca9d0f92c220208b0e3e1a0d03296dee45203f7b07
2017-12-20 12:45:50.704 INFO UPLOAD_PROGRESS Uploaded chunk 1679 size 1092829, 18.56MB/s 01:32:40 7.5%
2017-12-20 12:45:50.773 DEBUG CHUNK_UPLOAD Uploading was skipped for chunk 394e20d9c257e2b2cb594673cd9b3595989d1ebb1431652351f105bb5ea147dc
2017-12-20 12:45:50.773 INFO UPLOAD_PROGRESS Uploaded chunk 1680 size 7449196, 18.58MB/s 01:32:35 7.5%
2017-12-20 12:45:50.837 DEBUG CHUNK_UPLOAD Uploading was skipped for chunk 8ffc8728ac9b007a57f203292e1a69d2ccdcda438704226edd7a34d42208fb80
2017-12-20 12:45:50.837 INFO UPLOAD_PROGRESS Uploaded chunk 1681 size 7339147, 18.59MB/s 01:32:30 7.5%
2017-12-20 12:45:50.914 DEBUG CHUNK_UPLOAD Uploading was skipped for chunk 53cc639910a9e92fd959404674b4a521a89cc46310be19d724d3117d2d0a3915
2017-12-20 12:45:50.914 INFO UPLOAD_PROGRESS Uploaded chunk 1682 size 3153920, 18.60MB/s 01:32:28 7.5%
2017-12-20 12:45:50.980 DEBUG CHUNK_UPLOAD Uploading was skipped for chunk 6e1d8565050fc03d04509ed093be2c427d5887add5a520c530747a72c34c9d5c
2017-12-20 12:45:50.980 INFO UPLOAD_PROGRESS Uploaded chunk 1683 size 3690215, 18.61MB/s 01:32:26 7.5%
2017-12-20 12:45:51.625 DEBUG CHUNK_UPLOAD Uploading was skipped for chunk 0382ffe4037fdee43dbec5b17dbd358fa323d2fd5f8de6824f930e3092dde663
2017-12-20 12:45:51.625 INFO UPLOAD_PROGRESS Uploaded chunk 1685 size 5196331, 18.58MB/s 01:32:34 7.5%
2017-12-20 12:45:51.654 DEBUG CHUNK_UPLOAD Uploading was skipped for chunk 520e9d6539a4f8d1645058948ed26784391fbd8d3bf71261842cb56926a07b93
2017-12-20 12:45:51.654 INFO UPLOAD_PROGRESS Uploaded chunk 1665 size 1546293, 18.58MB/s 01:32:33 7.5%
2017-12-20 12:45:51.750 DEBUG CHUNK_UPLOAD Uploading was skipped for chunk 791039ab5248ef293752d6233da8fdfe1265332a675fe196b36d30785cfd8502
2017-12-20 12:45:51.750 INFO UPLOAD_PROGRESS Uploaded chunk 1686 size 5256058, 18.59MB/s 01:32:30 7.5%
2017-12-20 12:45:51.828 DEBUG CHUNK_UPLOAD Uploading was skipped for chunk fdb754593eea29cb7e3cef2ef4dd179e14fad521122b2457d73406d586dc7f59
2017-12-20 12:45:51.828 INFO UPLOAD_PROGRESS Uploaded chunk 1687 size 5974861, 18.60MB/s 01:32:26 7.5%
2017-12-20 12:45:52.367 DEBUG CHUNK_UPLOAD Uploading was skipped for chunk b01524e43707d91f3f64a772f7028432ba0978a86f05e5bcb3b28affe80ca6f5
2017-12-20 12:45:52.367 INFO UPLOAD_PROGRESS Uploaded chunk 1684 size 5056151, 18.57MB/s 01:32:34 7.6%
2017-12-20 12:45:53.824 ERROR CHUNK_MAKER Failed to read 0 bytes: read D:\backup_software_test\duplicacy repo/c_link/Users/crist/AppData/Local/Packages/Microsoft.SkypeApp_kzf8qxf38zg5c/LocalState/DataRv/offline-storage.data-shm: The process cannot access the file because another process has locked a portion of the file.

Is it okay that Duplicacy just dies when it cannot read a specific file? I thought it skipped the file and carried on.

(Yes, I am backing up my whole C: drive. I am doing this because I am trying to build a good filter file, which I intend to give to you in case you want to add it, if not in the code then at least in the wiki, as a start.)

  • Note: c_link is a symlink to C:\
  • Second note: I am not using Skype at all, neither the modern app nor the Win32 app, so I don't know why Windows runs it.
  • Note 3: as you can see, some files are skipped because of read problems, but this one wasn't.
@gilbertchen
Owner

The current implementation is like this: if a file can't be opened at all, it will be skipped; however, if a file can be opened but can't be read, then Duplicacy will bail out with an error. It can't simply skip this file because some bytes from this file may have already been put into a chunk, which is now in an inconsistent state.

However, in the case where no bytes have been read when a read error occurs, it may be safe to skip the file.

In your case, if you enable the -vss option, that particular file may be readable.
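
To illustrate the zero-bytes idea, here is a minimal sketch (not Duplicacy's actual code; the helper name and chunk buffer are made up) of a chunk maker that skips a file only when the read error occurs before any of its bytes have been added to the current chunk:

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"log"
	"os"
)

// readFileIntoChunk is a hypothetical helper: it appends a file's contents to
// the current chunk buffer. Open failures are skipped; read failures are only
// skipped if nothing from the file has reached the chunk yet.
func readFileIntoChunk(path string, chunk *bytes.Buffer) error {
	file, err := os.Open(path)
	if err != nil {
		log.Printf("OPEN_FAILURE skipping %s: %v", path, err)
		return nil // can't open at all: skip, as Duplicacy already does
	}
	defer file.Close()

	read := 0
	buf := make([]byte, 64*1024)
	for {
		n, err := file.Read(buf)
		chunk.Write(buf[:n])
		read += n
		if err == io.EOF {
			return nil
		}
		if err != nil {
			if read == 0 {
				// No bytes from this file are in the chunk yet: safe to skip.
				log.Printf("READ_FAILURE skipping %s: %v", path, err)
				return nil
			}
			// Part of the file is already in the chunk: bail out,
			// which is the current behavior.
			return fmt.Errorf("failed to read %s after %d bytes: %w", path, read, err)
		}
	}
}

func main() {
	var chunk bytes.Buffer
	if err := readFileIntoChunk(`C:\some\locked\file.bin`, &chunk); err != nil {
		log.Fatal(err)
	}
	fmt.Println("chunk size:", chunk.Len())
}
```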

@TheBestPessimist
Contributor Author

VSS didn't help!

@gilbertchen
Owner

This suggests that the error is caused by the use of the LockFile function. I couldn't find any documentation on how VSS interacts with LockFile, but if it didn't help in your test then the two probably aren't compatible with each other.
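
For reference, the condition can be reproduced outside of Duplicacy with a rough, Windows-only sketch like the one below (using golang.org/x/sys/windows; the test path is made up). One handle locks a byte range with LockFileEx, and a read of that range through a second handle then fails with ERROR_LOCK_VIOLATION, the same message that appears in the logs above:

```go
//go:build windows

package main

import (
	"fmt"
	"os"

	"golang.org/x/sys/windows"
)

// Win32 LOCKFILE_* flag values, defined locally for clarity.
const (
	lockfileFailImmediately = 0x00000001
	lockfileExclusiveLock   = 0x00000002
)

func main() {
	path := `C:\temp\lock-test.bin` // made-up test path
	if err := os.WriteFile(path, make([]byte, 4096), 0644); err != nil {
		panic(err)
	}

	// Handle 1: take an exclusive lock on the first 4096 bytes.
	locker, err := os.OpenFile(path, os.O_RDWR, 0)
	if err != nil {
		panic(err)
	}
	defer locker.Close()
	ol := &windows.Overlapped{Offset: 0}
	if err := windows.LockFileEx(windows.Handle(locker.Fd()),
		lockfileExclusiveLock|lockfileFailImmediately, 0, 4096, 0, ol); err != nil {
		panic(err)
	}

	// Handle 2: reading the locked range fails with ERROR_LOCK_VIOLATION:
	// "The process cannot access the file because another process has
	// locked a portion of the file."
	reader, err := os.Open(path)
	if err != nil {
		panic(err)
	}
	defer reader.Close()
	_, err = reader.Read(make([]byte, 4096))
	fmt.Println("read error:", err)
}
```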

@TheBestPessimist
Contributor Author

In this case, may I suggest that Duplicacy handle these files the same way it handles any other file it fails to open, and simply skip them?

Somehow I feel that backup software should try to back up as much as possible: tell me what couldn't be backed up (give me error codes, etc., so I can pass them to the developer), and then carry on.
This way I will have a backup of at least most of my data, instead of none. (Think of someone who sets up a scheduled backup task and doesn't check the stats at the end; for example, I would be doing this for my grandma's computer.)

@TheBestPessimist
Contributor Author

This problem again, now for a different file:

2018-03-18 11:58:50.344 ERROR CHUNK_MAKER Failed to read 0 bytes: read C:\duplicacy repo/C__Users_link/crist/AppData/Local/NVIDIA/GLCache/bb8124ee13ed6bd9595b872889a79514/f2245e201e5b7842/8aa002a52e1fef93.bin: The process cannot access the file because another process has locked a portion of the file.

@TheBestPessimist
Contributor Author

This problem seems to persist even after a restart, so I added the whole AppData/Local/NVIDIA/ folder to my ignore list.

@ghost

ghost commented Apr 15, 2018

I've bumped into this same problem, and it's darn annoying.
I am in agreement with TheBestPessimist:
"backup software should try to back up as much as possible: tell me what couldn't be backed up (give me error codes, etc., so I can pass them to the developer), and then carry on."

Anything else is a nightmare! The backup runs for an hour, then hangs. You find it later, start it again, it hangs again, and so on.

@dreamflasher

It can't simply skip this file because some bytes from this file may have already been put into a chunk, which is now in an inconsistent state.

Shouldn't the process then be like this: at the time of the read error we are collecting data for a certain chunk, so there must be a list of which files have gone into the current chunk so far. Simply discard the current chunk, add the file that failed to read to an ignore list, rebuild the chunk from scratch with the files that we were able to read, skip the bad file, and continue as usual.
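
A rough sketch of that idea (hypothetical names, not Duplicacy's actual chunking code; it reads whole files instead of fixed-size chunks just to keep the illustration short):

```go
package main

import (
	"bytes"
	"log"
	"os"
)

// buildChunk illustrates the proposal above: if any file feeding the current
// chunk fails to read, discard the partial chunk, remember the bad file, and
// rebuild the chunk from the files that are still readable.
func buildChunk(files []string, skipped map[string]bool) []byte {
	for {
		var chunk bytes.Buffer
		failed := ""
		for _, path := range files {
			if skipped[path] {
				continue
			}
			data, err := os.ReadFile(path)
			if err != nil {
				failed = path
				break
			}
			chunk.Write(data)
		}
		if failed == "" {
			return chunk.Bytes()
		}
		// Discard the partial chunk and retry without the unreadable file.
		log.Printf("skipping unreadable file %s: rebuilding chunk", failed)
		skipped[failed] = true
	}
}

func main() {
	skipped := map[string]bool{}
	chunk := buildChunk([]string{`C:\a.txt`, `C:\b.txt`}, skipped)
	log.Printf("chunk of %d bytes, %d files skipped", len(chunk), len(skipped))
}
```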

This is a major issue: I constantly run into such files, have to put them on my filter list, and restart the whole process, which takes ages with 3 TB to back up because all files need to be hashed again. (I guess that's another ticket: don't rehash everything when a backup failed, but skip the files we already uploaded.)

@mojimba

mojimba commented Aug 16, 2018

Duplicacy is also failing for me due to some locked files. This is happening with the -vss option enabled (though I'd prefer not to use it at all, so that a non-admin user can run the backup jobs). I know which files are locked: they're always some process-specific memory-mapped files that I don't necessarily care about backing up.

I do have the option of messing around with include/exclude patterns to ignore these, but I would prefer a way to tell Duplicacy to just ignore any file-read or locked-file error and move on. Currently it terminates with exit code 100:

Storage set to \\srv07bck01\backup$\SRV12UTIL01\duplicacy
No previous backup found
Creating a shadow copy for C:\
Shadow copy {EF0B671F-3CEF-4AED-826E-EC046AE529AB} created
Indexing C:\Backups\BackupSources
Incomplete snapshot loaded from C:\Backups\BackupSources/.duplicacy/incomplete
Listing all chunks
Skipped 38127 files from previous incomplete backup
Skipped chunk 2 size 3507432, 3.34MB/s 02:02:11 0.0%
...
Skipped chunk 27 size 1861202, 124.47MB/s 00:03:17 0.5%
Failed to read 0 bytes: read C:\Backups\BackupSources/.duplicacy\shadow\\Backups\BackupSources/repo-home/var/cache/Configs/repo/data077991.bin: The process cannot access the file because another process has locked a portion of the file.

The problem with include/exclude filters is that these filenames change and may appear in different locations (which may also contain files that we do want backed up). The other option for us is to do some dev work on the components that create these files in the first place, so they all go into the same folder and/or use a consistent filename.

@NiJoao

NiJoao commented Sep 21, 2018

I'm also having this problem, and I found that the developer of FFS was able to solve it, stating only:
"I've fixed the issue which was related to locked byte ranges not being considered by FFS, but now they are".
Does that ring a bell for a possible solution, @gilbertchen?

@TheBestPessimist
Contributor Author

Another hit of this problem:
2018-12-12 20:00:28.118 ERROR CHUNK_MAKER Failed to read 0 bytes: read C:\duplicacy repositories\tbp-nuc/C__Users_link/crist/AppData/Local/Google/Chrome/User Data/Default/LOCK: The process cannot access the file because another process has locked a portion of the file.

@Xipherisroot

I was testing 2.1.2 before purchase and ran into this same issue

"Failed to read 0 bytes: read (file in question here): The process cannot access the file because another process has locked a portion of the file."

Of all the software I've been testing so far, this is the only one to have this issue...?

@praxiq

praxiq commented Apr 4, 2019

Also hitting this error. In my case the file is in my user directory: AppData/Local/Google/Chrome/User Data/Default/LOCK (not surprising that it's locked, I guess! ;)

@vladgh

vladgh commented Apr 30, 2019

I was very happy with Duplicacy and it has ALL the features I want, but this is a deal breaker for me. After a few days, when I checked the logs, I saw that Duplicacy simply bails out mid-backup, with only an error line in the logs. The process doesn't even exit with a non-zero exit code. If I hadn't checked the logs I would have had no idea that 80% of my backup was skipped.
For me the files that fail are the .nst or .ost files in the Outlook folder.
I tried excluding them, or the entire path, by adding it to the filters, but it doesn't work. The error is still there.
This is great software, but it's a pity, because basically, unless I close Outlook, I cannot back up my computer (and wouldn't even know it).

2019-04-30 10:02:57.225 INFO UPLOAD_PROGRESS Skipped chunk 53 size 16777216, 34.36MB/s 00:00:27 20.7%
2019-04-30 10:02:57.284 INFO UPLOAD_PROGRESS Skipped chunk 54 size 5657007, 35.13MB/s 00:00:26 21.2%
2019-04-30 10:02:57.321 INFO UPLOAD_PROGRESS Skipped chunk 55 size 4015717, 35.67MB/s 00:00:26 21.5%
2019-04-30 10:02:57.425 INFO UPLOAD_PROGRESS Skipped chunk 56 size 8110700, 36.78MB/s 00:00:25 22.2%
2019-04-30 09:42:55.181 ERROR CHUNK_MAKER Failed to read 0 bytes: read C:\duplicacy/.duplicacy\shadow\\duplicacy/vlad/AppData/Local/Microsoft/Outlook/[email protected] - zzz.nst: The process cannot access the file because another process has locked a portion of the file.
2019-04-30 09:42:55.197 INFO VSS_DELETE The shadow copy has been successfully deleted

@TheBestPessimist
Contributor Author

One more occurrence of this on the forum: https://forum.duplicacy.com/t/unexpected-network-error-occured/2027

@TheBestPessimist
Contributor Author

I'm pinging this again as it keeps happening on all my machines now for Telegram:

2019-08-03 21:02:48.729 ERROR CHUNK_MAKER Failed to read 0 bytes: read C:\duplicacy repositories\tbp-v/all_link/Telegram/tdata/user_data/media_cache/1/binlog: The process cannot access the file because another process has locked a portion of the file.

@NiJoao

NiJoao commented Aug 5, 2019

I have an update on this bug.
After many trials with manually locked files and different configurations, I found the culprit!!

Absolute-path symlinks inside a VSS snapshot still point to the non-VSS folder, and thus to the live, locked files!
I found this post from 2014 reporting the same finding.
Also, Duplicacy doesn't seem to follow relative-path symlinks, which would otherwise work fine with VSS.
It seems VSS is useless with Duplicacy in symlink mode.

Junctions can only use absolute paths, have the same problem, and would cause major problems if looped. Don't even try using them!

Solutions:
a) Don't use symlink mode.
b) Update Duplicacy to support relative-path symlinks. This doesn't work between drives and is very cumbersome, since C:\Users\username\Downloads\all_link -> C:\ would become C:\Users\username\Downloads\all_link -> ..\..\..\..\.
c) Update Duplicacy to test-read files and skip them on failure (as per the solution of #559, which I consider bad practice).
d) Look at my next comment!!

Nevertheless, Duplicacy should never return SUCCESS if a file failed to back up and the whole backup was interrupted...

@NiJoao

NiJoao commented Aug 5, 2019

@gilbertchen and @TheBestPessimist, I had an idea for a solution to this problem:

If symlink mode and VSS are used:
For each symlink in the root, get its target and replace the drive letter (C:\) with the path of the shadow copy.

Using the paths from the last example by @TheBestPessimist :
C:\duplicacy repositories\tbp-v\all_link -> C:\
VSS created at C:\duplicacy repositories\tbp-v\shadow

Currently, Duplicacy does this:
C:\duplicacy repositories\tbp-v\* -> C:\duplicacy repositories\tbp-v\shadow\duplicacy repositories\tbp-v\* -> C:\duplicacy repositories\tbp-v\shadow\duplicacy repositories\tbp-v\all_link -> C:\
I.e. it replaces the C:\ of the working folder with the VSS path,
and thus tries to back up C:\duplicacy repositories\tbp-v\shadow\duplicacy repositories\tbp-v\all_link,
which is still a symlink pointing to C:\ and thus may have open locks.

Instead, it should do:
C:\duplicacy repositories\tbp-v\* -> C:\duplicacy repositories\tbp-v\all_link -> C:\ -> C:\duplicacy repositories\tbp-v\shadow
I.e. it should list the root symlinks first, replacing the C:\ of each one with the VSS path.

This would make the root symlinks nothing more than configuration, a list of paths to folders to be backed up by Duplicacy, not using the filesystem symlink interface at all, but in principle it would solve all the lock problems.
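
A rough sketch of just the path rewrite (function and variable names are made up, and the shadow copy path is assumed to be known; this is not a full implementation):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// remapToShadow resolves a root symlink and rewrites its absolute target so
// that it points into the VSS shadow copy instead of the live volume, e.g.
// C:\Users\crist -> <shadowPath>\Users\crist.
func remapToShadow(link, driveLetter, shadowPath string) (string, error) {
	target, err := os.Readlink(link)
	if err != nil {
		return "", err
	}
	if !strings.HasPrefix(strings.ToUpper(target), strings.ToUpper(driveLetter)) {
		return "", fmt.Errorf("target %s is not on %s", target, driveLetter)
	}
	return filepath.Join(shadowPath, target[len(driveLetter):]), nil
}

func main() {
	// Values matching the example paths above.
	remapped, err := remapToShadow(
		`C:\duplicacy repositories\tbp-v\all_link`, // root symlink -> C:\
		`C:\`,
		`C:\duplicacy repositories\tbp-v\shadow`)
	fmt.Println(remapped, err)
}
```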

@arikorn
Contributor

arikorn commented Aug 7, 2019

This is brilliant @NiJoao - thank you for all your investigations! It's ironic that those of us who thought we were being clever by using symlinks intentionally were, in fact, shooting ourselves in the foot.

My favorite solution is your "(a) don't use symlink mode" combined with a fix to do things the "right way," i.e. as a command line argument. For example -f or -root or even -repo. (As I understand it, the main reason for the odd symlink behavior was to avoid another command line arg.)

If I were doing it: -f would either point to a file listing root folders, or treat the given folder as the root. If -f is specified, symbolic links are never followed. For backward compatibility, use the buggy behavior if -f is not specified (and therefore "-f ." disables symlink following in the default root).

@TheBestPessimist
Contributor Author

TheBestPessimist commented Sep 21, 2019

@NiJoao thanks for the findings and suggestions (which I'm unsure how to verify), but my issue is that I get the same error even without using the -vss flag.

Re.

c) Update duplicacy to test-read files and skip if fail (as per solution of #559, which I consider bad practice)

I honestly vote for this implementation, since in my mind a backup is meant for worst-case-scenario recovery. I would like to be able to restore anything, even if it is incomplete, instead of having my backups continuously fail simply because one file (which may not even be needed in the restore!) could not be read properly.

A backup should save my ass no matter what, and for me that automatically implies the backup process should complete no matter what. (After all, without a new revision there's nothing to restore -- there are only useless chunks from failed revisions.)

@NiJoao

NiJoao commented Sep 23, 2019

@TheBestPessimist your issue, of Duplicacy not being able to access locked files even without -vss, is expected.

Expected:

  • Duplicacy is NOT able to read locked files without -vss
  • Duplicacy is NOT able to read locked files with -vss in symlink mode
  • Duplicacy IS able to read locked files with -vss in root/repo mode.

Not expected (Totally agreeing with you):

  • Duplicacy crashing, skipping other files, or leaving the revision unfinished, while returning an exit code of 0 (OK), when a locked file exists!!

Duplicacy should skip only the unreadable file and perform the rest of the backup as usual.
The exit code to return in such situations is debatable.

Anyway, I'm taking the obvious path that fixes everything, which is migrating my backups (and filters) to root/repo mode.
And I suggest that @gilbertchen at least state in the documentation the incompatibility between symlink mode and -vss, if he does not want to implement my suggested fix. I don't feel confident enough to do it myself and make a PR.

@ducalex

ducalex commented Oct 7, 2020

I've been affected by this issue and I'm willing to make a pull request to ignore all read errors (if this feature isn't already planned!).

From reading this thread the consensus seems to be that we should stop reading a file on read error, discard it and move on.

But would it be desirable to keep the partial file and flag it as corrupted if the read error happens later than offset 0?

@dluxhu

dluxhu commented Aug 26, 2022

Has there been any work done on this yet?

@dreamflasher

For the past 5 days I have again been getting the error "The process cannot access the file because another process has locked a portion of the file", despite -vss. I googled it and found this issue… where I posted 5 years ago 😂 So even 5 years later I am still very much interested in a solution!

@gkwok604

Any plans to update this so that a backup will continue on a file read error?

Example below: the backup stops with "The network is busy"; it would be ideal to skip this file and continue.

Failed to read 0 bytes: read \?\UNC....filename: The network is busy.

@dluxhu

dluxhu commented May 20, 2024

I forked Duplicacy and simply ignore this error in my version. I've been using this version for a while now: dluxhu@e77f665
