
What a coincidence #39

Open
henfri opened this issue Sep 9, 2020 · 27 comments
Labels: bug (Something isn't working)

henfri commented Sep 9, 2020

Hello,

thanks for dduper!
I ran it recursively over a directory:

dduper --device /dev/sda1 --dir /srv/dev-disk-by-label-DataPool1/Video/  -r --dry-run
Perfect match :  /srv/dev-disk-by-label-DataPool1/Video/plugin.video.vdr.recordings_0.2.4.zip /srv/dev-disk-by-label-DataPool1/Video/VDR/unsortiert/Topspione_der_Geschichte/2016-11-04.20.13.23-0.rec/00055.ts
Summary
blk_size : 4KB  chunksize : 128KB
/srv/dev-disk-by-label-DataPool1/Video/plugin.video.vdr.recordings_0.2.4.zip has 0 chunks
/srv/dev-disk-by-label-DataPool1/Video/VDR/unsortiert/Topspione_der_Geschichte/2016-11-04.20.13.23-0.rec/00055.ts has 0 chunks
Matched chunks: 0
Unmatched chunks: 0
Total size(KB) available for dedupe: 0
Perfect match :  /srv/dev-disk-by-label-DataPool1/Video/plugin.video.vdr.recordings_0.2.4.zip /srv/dev-disk-by-label-DataPool1/Video/VDR/unsortiert/Topspione_der_Geschichte/2016-11-04.20.13.23-0.rec/00039.ts
Summary
blk_size : 4KB  chunksize : 128KB
/srv/dev-disk-by-label-DataPool1/Video/plugin.video.vdr.recordings_0.2.4.zip has 0 chunks
/srv/dev-disk-by-label-DataPool1/Video/VDR/unsortiert/Topspione_der_Geschichte/2016-11-04.20.13.23-0.rec/00039.ts has 0 chunks

What I find odd is that plugin.video.vdr.recordings_0.2.4.zip seems to match every single .ts file (https://fileinfo.com/extension/ts).
I can imagine that every .ts file must contain a certain bit pattern... but for that pattern to be in a zip file as well?

Greetings,
Hendrik

Lakshmipathi (Owner) commented Sep 9, 2020

Hi @henfri

Total size(KB) available for dedupe: 0

The above says no matching data was found.

And interestingly

/srv/dev-disk-by-label-DataPool1/Video/plugin.video.vdr.recordings_0.2.4.zip has 0 chunks

says 0 chunks (meaning the file size is 0).

Are you using new checksum types? Can you share some details on your BTRFS setup?

@Lakshmipathi Lakshmipathi self-assigned this Sep 9, 2020
Lakshmipathi (Owner) commented Sep 9, 2020

My initial impression : you are running into some dduper bug!

@Lakshmipathi Lakshmipathi added the bug Something isn't working label Sep 9, 2020
henfri (Author) commented Sep 9, 2020

Hello,

Total size(KB) available for dedupe: 0

The above says no matching data was found.

And interestingly

/srv/dev-disk-by-label-DataPool1/Video/plugin.video.vdr.recordings_0.2.4.zip has 0 chunks

says 0 chunks (meaning the file size is 0).

Hm, I have deleted the zip file now to check whether another file would take its place...

But I found another case now:

-rwxrwxrwx 1 henfri users 121658 Feb  7  2017 /srv/dev-disk-by-label-DataPool1/Video/xbmc.repo.elmerohueso-1.0.zip

Total size(KB) available for dedupe: 0
Perfect match :  /srv/dev-disk-by-label-DataPool1/Video/xbmc.repo.elmerohueso-1.0.zip /srv/dev-disk-by-label-DataPool1/Video/Series/Tatort/Tatort/s2013/s2013e29 - Kalter Engel (2013).mkv
Summary
blk_size : 4KB  chunksize : 128KB
/srv/dev-disk-by-label-DataPool1/Video/xbmc.repo.elmerohueso-1.0.zip has 0 chunks
/srv/dev-disk-by-label-DataPool1/Video/Series/Tatort/Tatort/s2013/s2013e29 - Kalter Engel (2013).mkv has 0 chunks
Matched chunks: 0
Unmatched chunks: 0
Total size(KB) available for dedupe: 0

Are you using new checksum types? Can you share some details on your BTRFS setup?

No, no new checksum types. I have installed the static binary as per your instructions.

 btrfs fi show /dev/sda1
Label: 'DataPool1'  uuid: c4a6a2c9-5cf0-49b8-812a-0784953f9ba3
        Total devices 2 FS bytes used 6.65TiB
        devid    1 size 7.28TiB used 7.01TiB path /dev/sda1
        devid    2 size 7.28TiB used 7.01TiB path /dev/sdj1

uname -r
5.6.12
btrfs-progs v4.20.2

Regards,
Hendrik
What other information would be useful?

Lakshmipathi (Owner) commented:

Thanks for the details. A couple of things would be helpful:

  1. Can you share the exact command used to create your btrfs setup with label DataPool1, and how it's mounted?
  2. What is the original size of /srv/dev-disk-by-label-DataPool1/Video/xbmc.repo.elmerohueso-1.0.zip and the .mkv file? Is it greater than 128KB?
  3. Also, please share the output of the following dduper commands:
mkdir -p /srv/dev-disk-by-label-DataPool1/Video/testfiles

dd if=/dev/urandom of=/tmp/f1 bs=1M count=50
cp -v /tmp/f1 /srv/dev-disk-by-label-DataPool1/Video/testfiles/f1
cp -v /tmp/f1 /srv/dev-disk-by-label-DataPool1/Video/testfiles/f2
sync
dduper --device /dev/sda1 --dir /srv/dev-disk-by-label-DataPool1/Video/testfiles -r --dry-run

henfri (Author) commented Sep 10, 2020

Hello,

  1. Can you share the exact command used to create your btrfs setup with label DataPool1, and how it's mounted?

How it was created: sorry, that is too long ago. I think it was a single FS first, and then I changed it to RAID1 after adding a second drive.
It is mounted like this:
/dev/sda1 on /srv/dev-disk-by-label-DataPool1 type btrfs (rw,noatime,space_cache,subvolid=5,subvol=/)

  2. What is the original size of /srv/dev-disk-by-label-DataPool1/Video/xbmc.repo.elmerohueso-1.0.zip and the .mkv file? Is it greater than 128KB?
-rwxrwxrwx 1 henfri users 121658 Feb  7  2017 /srv/dev-disk-by-label-DataPool1/Video/xbmc.repo.elmerohueso-1.0.zip
-rw------- 1 root users 7396165045 Jul 18  2016 '/srv/dev-disk-by-label-DataPool1/Video/Series/Tatort/Tatort/s2013/s2013e29 - Kalter Engel (2013).mkv'
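(Editorial side note, not from the thread: the zip file, at 121658 bytes, is already smaller than the reported 128KB chunk size, so integer division yields zero full chunks for it regardless; whether this fully explains dduper's "0 chunks" report is an assumption. Quick arithmetic:)

```shell
# Full 128KB chunks in the 121658-byte zip file.
zip_bytes=121658
chunk_bytes=$((128 * 1024))        # 131072 bytes
echo $(( zip_bytes / chunk_bytes )) # prints 0
```

Note this argument cannot explain the 7GB .mkv also reporting 0 chunks.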

  3. Also, please share the output of the following dduper commands:
root@homeserver:/usr/bin# dduper --device /dev/sda1 --dir /srv/dev-disk-by-label-DataPool1/Video/testfiles -r --dry-run
Perfect match :  /srv/dev-disk-by-label-DataPool1/Video/testfiles/f1 /srv/dev-disk-by-label-DataPool1/Video/testfiles/f2
Summary
blk_size : 4KB  chunksize : 16384KB
/srv/dev-disk-by-label-DataPool1/Video/testfiles/f1 has 0 chunks
/srv/dev-disk-by-label-DataPool1/Video/testfiles/f2 has 0 chunks
Matched chunks: 0
Unmatched chunks: 0
Total size(KB) available for dedupe: 0
dduper took 41.705245460849255 seconds

Greetings,
Hendrik

Lakshmipathi (Owner) commented:

Thanks for the details. Strangely, the basic check fails for the 50MB files.

/srv/dev-disk-by-label-DataPool1/Video/testfiles/f1 has 0 chunks
/srv/dev-disk-by-label-DataPool1/Video/testfiles/f2 has 0 chunks

It should report at least 3 chunks (since the chunk size is 16MB). I did a quick check with RAID1 and it produced the expected results.

$ sudo btrfs fi show /mnt
Label: none  uuid: bafaf984-a7bc-40c8-8b5b-74efb40b1fdc
	Total devices 2 FS bytes used 128.00KiB
	devid    1 size 512.00MiB used 123.19MiB path /dev/loop13
	devid    2 size 512.00MiB used 123.19MiB path /dev/loop14



$  btrfs fi du /mnt/f1 /mnt/f2
     Total   Exclusive  Set shared  Filename
  50.00MiB    50.00MiB       0.00B  /mnt/f1
  50.00MiB    50.00MiB       0.00B  /mnt/f2


$ sudo ./dduper --device /dev/loop13 --dir /mnt/ --dry-run
Perfect match :  /mnt/f1 /mnt/f2
Summary
blk_size : 4KB  chunksize : 16384KB
/mnt/f1 has 4 chunks
/mnt/f2 has 4 chunks
Matched chunks: 4
Unmatched chunks: 0
Total size(KB) available for dedupe: 65536 
dduper took 1.095728604001124 seconds


//after dedupe 50mb shared

$ btrfs fi du /mnt/f1 /mnt/f2
     Total   Exclusive  Set shared  Filename
  50.00MiB       0.00B    50.00MiB  /mnt/f1
  50.00MiB       0.00B    50.00MiB  /mnt/f2
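(Editorial side note: the expected chunk count for a 50MB file at the 16384KB chunk size shown in the summary follows from a ceiling division; that dduper rounds up this way is an assumption, but it matches the 4 chunks reported above.)

```shell
# Expected chunk count: ceil(50MB / 16MB) using integer arithmetic.
file_kb=$((50 * 1024))   # 51200 KB
chunk_kb=16384           # 16MB chunk size from the summary
chunks=$(( (file_kb + chunk_kb - 1) / chunk_kb ))
echo "$chunks"           # prints 4
```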

I think it was a single FS first and then I changed it to RAID1 after adding a second drive.

Maybe this is causing the trouble. I'll create a single FS, convert it into RAID1, and verify the results.

henfri (Author) commented Sep 12, 2020

Hello,

that's really odd. Please let me know if I can do more to help you find the issue.
Do you see any risk of data loss from my first run of dduper (which was not a dry run)?

Regards,
Hendrik

Lakshmipathi (Owner) commented Sep 12, 2020

Sure, let me check a few things to re-create your issue.

Do you see any risk of data loss from my first run of dduper (which was not a dry run)?

No, I don't think there was any data loss from your run: dduper always showed 0 chunks instead of the actual file size, and you were also running in default mode. So there should be no data loss from dduper on your system.

Lakshmipathi (Owner) commented:

Hi @henfri,

Can you provide extent details output for these files?

filefrag -e /srv/dev-disk-by-label-DataPool1/Video/testfiles/f1
filefrag -e /srv/dev-disk-by-label-DataPool1/Video/testfiles/f2
filefrag -e /srv/dev-disk-by-label-DataPool1/Video/xbmc.repo.elmerohueso-1.0.zip

Lakshmipathi (Owner) commented Sep 13, 2020

Wait, are you running dduper on a btrfs subvolume? Can you try it on the root volume? Maybe this issue is related to #35 (comment)

henfri (Author) commented Sep 13, 2020

Hello,

it is a subvolume, but it is not mounted as a subvolume.

mount |grep sda
/dev/sda1 on /srv/dev-disk-by-label-DataPool1 type btrfs (rw,noatime,space_cache,subvolid=5,subvol=/)

I ran it in its root folder now. That does not look good:

dduper --device /dev/sda1 --dir /srv/dev-disk-by-label-DataPool1/dduper_test/testfiles -r --dry-run
parent transid verify failed on 9332119748608 wanted 204976 found 204978
parent transid verify failed on 9332119748608 wanted 204976 found 204978
parent transid verify failed on 9332119748608 wanted 204976 found 204978
Ignoring transid failure
ERROR: child eb corrupted: parent bytenr=9332109934592 item=6 parent level=2 child level=0
ERROR: failed to read block groups: Input/output error
unable to open /dev/sda1
parent transid verify failed on 9332147879936 wanted 204979 found 204981
parent transid verify failed on 9332147879936 wanted 204979 found 204981
parent transid verify failed on 9332147879936 wanted 204979 found 204981
Ignoring transid failure
ERROR: failed to read block groups: Operation not permitted
unable to open /dev/sda1
Perfect match :  /srv/dev-disk-by-label-DataPool1/dduper_test/testfiles/f1 /srv/dev-disk-by-label-DataPool1/dduper_test/testfiles/f2
Summary
blk_size : 4KB  chunksize : 16384KB
/srv/dev-disk-by-label-DataPool1/dduper_test/testfiles/f1 has 0 chunks
/srv/dev-disk-by-label-DataPool1/dduper_test/testfiles/f2 has 0 chunks
Matched chunks: 0
Unmatched chunks: 0
Total size(KB) available for dedupe: 0
dduper took 10.663485337048769 seconds

I am currently running a balance. Not sure if that could be related.

root@homeserver:~# filefrag -e /srv/dev-disk-by-label-DataPool1/Video/testfiles/f1
Filesystem type is: 9123683e
File size of /srv/dev-disk-by-label-DataPool1/Video/testfiles/f1 is 52428800 (12800 blocks of 4096 bytes)
 ext:     logical_offset:        physical_offset: length:   expected: flags:
   0:        0..    6431: 3447932207..3447938638:   6432:
   1:     6432..   12799: 3448851294..3448857661:   6368: 3447938639: last,eof
/srv/dev-disk-by-label-DataPool1/Video/testfiles/f1: 2 extents found
root@homeserver:~# filefrag -e /srv/dev-disk-by-label-DataPool1/Video/testfiles/f2
Filesystem type is: 9123683e
File size of /srv/dev-disk-by-label-DataPool1/Video/testfiles/f2 is 52428800 (12800 blocks of 4096 bytes)
 ext:     logical_offset:        physical_offset: length:   expected: flags:
   0:        0..   12799: 3448857662..3448870461:  12800:             last,eof
/srv/dev-disk-by-label-DataPool1/Video/testfiles/f2: 1 extent found
root@homeserver:~# filefrag -e /srv/dev-disk-by-label-DataPool1/Video/xbmc.repo.elmerohueso-1.0.zip
Filesystem type is: 9123683e
File size of /srv/dev-disk-by-label-DataPool1/Video/xbmc.repo.elmerohueso-1.0.zip is 121658 (30 blocks of 4096 bytes)
 ext:     logical_offset:        physical_offset: length:   expected: flags:
   0:        0..      29: 3194406235..3194406264:     30:             last,eof
/srv/dev-disk-by-label-DataPool1/Video/xbmc.repo.elmerohueso-1.0.zip: 1 extent found

Lakshmipathi (Owner) commented Sep 13, 2020

Thanks, the filefrag output shows proper extents.

I am currently running a balance.. Not sure if that could be related.

Okay, let me know the status of --dry-run on /srv/dev-disk-by-label-DataPool1/dduper_test/testfiles/f1 after the rebalance completes. It should show a few chunks instead of '0'.

We have two other subvolume-related bugs which show '0' chunks. I hope to fix them soon.

Lakshmipathi (Owner) commented:

Quick update on the subvolume issue: I spent 3 or 4 days trying to figure it out. I can dump the csum of a subvolume from a different code path, but it still needs some work to explore the btrfs disk layout.

henfri (Author) commented Oct 15, 2020

Ok, thanks for the status update and sorry for my late reply.

Here is the output after the balance. (The balance did not fix the transid failures, but this did:
mount -t btrfs -o nospace_cache,clear_cache /dev/sda1 /mnt/test)

dduper --device /dev/sda1 --dir /srv/dev-disk-by-label-DataPool1/dduper_test/testfiles -r --dry-run
Perfect match :  /srv/dev-disk-by-label-DataPool1/dduper_test/testfiles/f1 /srv/dev-disk-by-label-DataPool1/dduper_test/testfiles/f2
Summary
blk_size : 4KB  chunksize : 16384KB
/srv/dev-disk-by-label-DataPool1/dduper_test/testfiles/f1 has 4 chunks
/srv/dev-disk-by-label-DataPool1/dduper_test/testfiles/f2 has 4 chunks
Matched chunks: 4
Unmatched chunks: 0
Total size(KB) available for dedupe: 65536
dduper took 145.25861906504724 seconds

Regards,
Hendrik

Lakshmipathi (Owner) commented Oct 19, 2020

Thanks for the output.

/srv/dev-disk-by-label-DataPool1/dduper_test/testfiles/f1 has 4 chunks
/srv/dev-disk-by-label-DataPool1/dduper_test/testfiles/f2 has 4 chunks

Now it seems to report correct values. Can you try running a dedupe on the two files?

$  btrfs fi du  /srv/dev-disk-by-label-DataPool1/dduper_test/testfiles/f1 /srv/dev-disk-by-label-DataPool1/dduper_test/testfiles/f2

$ dduper --device /dev/sda1 --files /srv/dev-disk-by-label-DataPool1/dduper_test/testfiles/f1 /srv/dev-disk-by-label-DataPool1/dduper_test/testfiles/f2

$  btrfs fi du  /srv/dev-disk-by-label-DataPool1/dduper_test/testfiles/f1 /srv/dev-disk-by-label-DataPool1/dduper_test/testfiles/f2

After the dedupe, do you see values under 'Set shared' in the btrfs fi du output?

henfri (Author) commented Oct 19, 2020

Hm...

btrfs fi du  /srv/dev-disk-by-label-DataPool1/dduper_test/testfiles/f1 /srv/dev-disk-by-label-DataPool1/dduper_test/testfiles/f2
     Total   Exclusive  Set shared  Filename
  50.00MiB    50.00MiB       0.00B  /srv/dev-disk-by-label-DataPool1/dduper_test/testfiles/f1
  50.00MiB    50.00MiB       0.00B  /srv/dev-disk-by-label-DataPool1/dduper_test/testfiles/f2
root@homeserver:/home/henfri# dduper --device /dev/sda1 --files /srv/dev-disk-by-label-DataPool1/dduper_test/testfiles/f1 /srv/dev-disk-by-label-DataPool1/dduper_test/testfiles/f2

 btrfs fi du  /srv/dev-disk-by-label-DataPool1/dduper_test/testfiles/f1 /srv/dev-disk-by-label-DataPool1/dduper_test/testfiles/f2


parent transid verify failed on 16465691033600 wanted 352083 found 352085
parent transid verify failed on 16465691033600 wanted 352083 found 352085
parent transid verify failed on 16465691033600 wanted 352083 found 352085
Ignoring transid failure
ERROR: child eb corrupted: parent bytenr=16465689034752 item=113 parent level=2 child level=0
ERROR: failed to read block groups: Input/output error
unable to open /dev/sda1
************************
Dedupe completed for /srv/dev-disk-by-label-DataPool1/dduper_test/testfiles/f1:/srv/dev-disk-by-label-DataPool1/dduper_test/testfiles/f2
Summary
blk_size : 4KB  chunksize : 128KB
/srv/dev-disk-by-label-DataPool1/dduper_test/testfiles/f1 has 400 chunks
/srv/dev-disk-by-label-DataPool1/dduper_test/testfiles/f2 has 0 chunks
Matched chunks: 0
Unmatched chunks: 0
Total size(KB) deduped: 0
dduper took 155.268902996002 seconds

root@homeserver:/home/henfri#  btrfs fi du  /srv/dev-disk-by-label-DataPool1/dduper_test/testfiles/f1 /srv/dev-disk-by-label-DataPool1/dduper_test/testfiles/f2
     Total   Exclusive  Set shared  Filename
  50.00MiB    50.00MiB       0.00B  /srv/dev-disk-by-label-DataPool1/dduper_test/testfiles/f1
  50.00MiB    50.00MiB       0.00B  /srv/dev-disk-by-label-DataPool1/dduper_test/testfiles/f2

I have no idea why I got the "parent transid verify failed on 16465691033600 wanted 352083 found 352085" again...

I do not see any transid verify failed messages in /var/log/*

Regards,
Hendrik

Lakshmipathi (Owner) commented Oct 20, 2020

parent transid verify failed on

This typically points to filesystem errors, and you can see that dduper failed to open the device:

unable to open /dev/sda1

So it didn't perform any deduplication on your device.

If I'm not wrong, something is going (or has gone) wrong with your filesystem. Please post the errors on the btrfs mailing list and resolve the issue before running dduper. Similar issue: https://stackoverflow.com/a/46472522/246365

henfri (Author) commented Nov 6, 2020

Hello,

I did post the error on the btrfs mailing list and it is suspected to be a bug in dduper:
https://lore.kernel.org/linux-btrfs/em2ffec6ef-fe64-4239-b238-ae962d1826f6@ryzen/T/

From the dduper source:

def btrfs_dump_csum(filename):
    global device_name

    btrfs_bin = "/usr/sbin/btrfs.static"
    if os.path.exists(btrfs_bin) is False:
        btrfs_bin = "btrfs"

    out = subprocess.Popen(
        [btrfs_bin, 'inspect-internal', 'dump-csum', filename, device_name],
        stdout=subprocess.PIPE,
        close_fds=True).stdout.readlines()
    return out

OK there's the problem: it's dumping csums from a mounted filesystem by
reading the block device instead of using the TREE_SEARCH_V2 ioctl.
Don't do that, because it won't work. ;)

The "parent transid verify failed" errors are harmless. They are due
to the fact that a process reading the raw block device can never build
an accurate model of a live filesystem that is changing underneath it.

If you manage to get some dedupe to happen, then that's a bonus.

I suggest continuing the discussion (if needed) on the btrfs mailing list.

Regards,
Hendrik

Lakshmipathi (Owner) commented:

@henfri,

First of all, sorry for the pretty late response.

Thanks for updating the issue. I had a quick look at the mailing list thread. I agree that it looks like a bug with dduper. I'll try to reproduce this issue and hope to fix it. Thank you again!

adam-devel commented:

Should I be worried?

adam-devel commented Dec 23, 2020

I think, if the dry run indicates no errors, I shouldn't have to worry about deduping my btrfs partition?

Lakshmipathi (Owner) commented:

In this issue, @henfri received messages like #39 (comment)

parent transid verify failed on 9332119748608 wanted 204976 found 204978
unable to open /dev/sda1

As per the ML post, this seems to happen when the filesystem is active and files are being updated.

I think, if the dry run indicates no errors, I shouldn't have to worry about deduping my btrfs partition?

Yes. In default mode, dduper asks the kernel to perform validation and then performs the dedupe if required: `dduper --device /dev/sda1 --files /mnt/f1 /mnt/f2`. Please first run dduper on sample files.

If you plan to run it on critical data, I would recommend:

  1. Back up your data.
  2. Run sha256 on the btrfs files and store the results in a file.
  3. Run dduper in default mode.
  4. Re-run sha256 and verify that it matches the results from step 2.
  5. If everything looks fine, delete the backup data from step 1.
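The checksum steps above can be sketched like this (an editorial illustration, not from the thread: the paths are placeholders and the dduper line is commented out since it would only apply on a real btrfs device):

```shell
# Step 2: snapshot checksums of every file before the dedupe.
DATA_DIR=$(mktemp -d)              # placeholder: use your real data directory
echo "sample data" > "$DATA_DIR/file1"

cd "$DATA_DIR"
find . -type f -print0 | xargs -0 sha256sum > /tmp/checksums.before

# Step 3 would go here, e.g. (illustrative only):
# dduper --device /dev/sda1 --dir "$DATA_DIR" -r

# Step 4: verify nothing changed at the data level.
sha256sum -c --quiet /tmp/checksums.before && echo "checksums OK"  # prints "checksums OK"
```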

Please remember that dduper currently works only with the top-level subvolume (id=5); other subvolumes with id >= 256 won't work, as of now.

adam-devel commented:

Wow, thanks for the very fast reply.
Would it be a problem that I previously deduped with duperemove?
Maybe dduper expects a filesystem that's not already deduped, or one deduped a certain way. I would think this is managed by the filesystem and therefore abstracted away, so I shouldn't worry, but these tools seem to be doing a low-level job, maybe past that abstraction layer.

adam-devel commented:

Does the partition have to be unmounted, or doesn't that matter?

If dduper is able to dedupe unmounted partitions, that would be safer: there would be no files being updated.

Lakshmipathi (Owner) commented:

Would it be a problem that I previously deduped with duperemove?
Maybe dduper expects a filesystem that's not already deduped, or one deduped a certain way.

I haven't tried using dduper after running duperemove, but I guess it won't make any difference, since dduper relies on low-level csum data. To be 100% sure, can you try it on a sample directory: first run duperemove, then check with dduper, and verify the results.

If dduper is able to dedupe unmounted partitions, that would be safer: there would be no files being updated.

Currently dduper expects the partition to be mounted. Unmounting the partition and then running the dedupe is a good idea, and should resolve the ML-reported issues. Let me try to add a new --unmount option to the tool :)

RlndVt commented Jan 13, 2021
As mentioned in the mailing list, are you considering switching to using the TREE_SEARCH_V2 ioctl?

The kernel provides TREE_SEARCH_V2, an ioctl which can provide a map
from file to extent address, and also retrieve the csum tree items by
extent address from a mounted filesystem. Accessing the csums through the
filesystem's ioctl interface instead of the block device will eliminate
the race condition leading to parent transid verify failure. Switching to
this ioctl in dduper would also remove the dependency on non-upstream
btrfs-progs and probably make it run considerably faster, especially on
filesystems with many small files. I'd recommend using python-btrfs for
that--no need to invent the wheel twice in the same programming language.

Thanks,

Lakshmipathi (Owner) commented:

Hi @RlndVt ,

Yes, I'm planning to add the TREE_SEARCH_V2 ioctl as an option. Initially users can pass a CLI option (--use-ioctl) to use it; later dduper can switch to the ioctl completely. I'm kind of stuck fixing dduper on subvolumes (getting closer, but still no luck), and that has delayed work on other things like this.
