Releases: pkolaczk/fclones
v0.17.1
Release 0.17.0
This release introduces a new command, `fclones dedupe`,
contributed by Thomas Otto (@th1000s).
Dedupe does not remove duplicate files; instead, it uses the copy-on-write capability
available on some file systems, such as Btrfs or XFS, to deduplicate
file data transparently.
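Conceptually, dedupe relies on reflink copies. The sketch below illustrates the underlying mechanism with GNU `cp` (the `--reflink` option is GNU-specific; the `fclones` pipeline in the comment follows the subcommand model described later in these notes and is illustrative only):

```shell
# On a CoW filesystem (Btrfs, or XFS with reflink support) this shares data
# blocks between the two names; with --reflink=auto, GNU cp silently falls
# back to a regular copy elsewhere, so the commands run anywhere.
echo "payload" > a.bin
cp --reflink=auto a.bin b.bin
cmp a.bin b.bin && echo "identical content"

# With fclones itself, deduplication is typically a pipeline (illustrative):
#   fclones group . | fclones dedupe
```

Either way, both files keep their own metadata; only the data extents are shared when the filesystem supports it.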
What's Changed
- Extract linting and formatting to separate CircleCI jobs by @pkolaczk in #78
- Implement `fclones dedupe` command by @pkolaczk in #79
- Implement `fclones dedupe` command by @th1000s in #80
- Reflink bugfix, examples in `--help` by @th1000s in #81
- Documentation improvements by @pkolaczk in #82
Full Changelog: v0.16.1...v0.17.0
Match duplicate files only between different paths
In this release, a new flag `-I`, `--isolate` has been added. If this flag is given, duplicates found within a directory tree given as a single argument are all counted as one. Hence, in the following example, only duplicates that exist both inside `dir1` and `dir2` will be reported:
$ echo "foo" > test/dir1/foo1
$ echo "foo" > test/dir1/foo2
$ fclones group --isolate test/dir1 test/dir2
# Report by fclones 0.16.0
# Timestamp: 2021-09-26 16:11:21.905 +0200
# Command: fclones group --isolate test/dir1 test/dir2
# Found 0 file groups
# 0 B (0 B) in 0 redundant files can be removed
$ echo "foo" > test/dir2/foo3
$ fclones group --isolate test/dir1 test/dir2
# Report by fclones 0.16.0
# Timestamp: 2021-09-26 16:12:03.224 +0200
# Command: fclones group --isolate test/dir1 test/dir2
# Found 1 file groups
# 8 B (8 B) in 2 redundant files can be removed
6109f093b3fd5eb1060989c990d1226f, 4 B (4 B) * 3:
/home/pkolaczk/Projekty/fclones/test/dir1/foo1
/home/pkolaczk/Projekty/fclones/test/dir1/foo2
/home/pkolaczk/Projekty/fclones/test/dir2/foo3
Additionally, the output has been made deterministic when the option `--hardlinks` is set (patch by @th1000s).
Move duplicates to a different directory
New feature: `fclones move <target_dir>` moves duplicates to a given directory.
The directory structure is preserved, so files with the same names don't conflict with each other, and it is easy to undo the move manually.
If files are moved within the same mount point, only file metadata is modified, which is very fast.
If files are moved to a different mount point, the data is copied first and the source files are removed afterwards.
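The same-mount-point fast path is an ordinary rename: only directory entries change and the file's inode stays the same. A small self-contained illustration with plain `mv` (GNU `stat` assumed):

```shell
mkdir -p demo/dupes
echo "dup" > demo/file
inode_before=$(stat -c %i demo/file)
mv demo/file demo/dupes/file           # same mount point: rename only, no data copied
inode_after=$(stat -c %i demo/dupes/file)
[ "$inode_before" = "$inode_after" ] && echo "metadata-only move"
```

Across mount points, `rename(2)` fails with `EXDEV`, which is why a copy followed by removal of the source is needed instead.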
Improved initialization speed
fclones no longer scans sysfs on startup on Linux systems.
That saves an additional 0.5-0.8 s (on my computer).
Removing duplicates based on JSON report
JSON-formatted reports are now accepted as input to `fclones remove` and `fclones link`.
The report format is detected automatically.
This new feature allows for easier programmatic processing of the list of duplicate files by standard JSON processing tools such as `jq`.
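For instance, a saved report could be filtered before being fed back to `fclones remove`. The JSON below is a hypothetical, heavily simplified stand-in for a real report (the actual schema and the flag for producing JSON output may differ); the `jq` pipeline is shown as a comment, while the runnable part uses only `grep`:

```shell
# Miniature stand-in for a JSON report; the real schema may differ.
cat > report.json <<'EOF'
{"groups":[{"file_len":4,"files":["/data/foo1","/data/foo2"]}]}
EOF

# With jq installed, one could e.g. keep only groups of small files,
# then hand the report back to fclones (illustrative):
#   jq '.groups | map(select(.file_len < 1024))' report.json
#   fclones remove < report.json

# Cheap path extraction without jq:
grep -o '"/[^"]*"' report.json
```

The `grep` call prints each quoted absolute path on its own line, which is often all a quick script needs.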
Fix `--stdin` option
This is a bugfix release to fix #60.
Fix deduplication on Windows
This is a bugfix release.
It fixes incorrect handling of line terminators (CRLF) on Windows
and additionally improves error reporting accuracy.
Increase default size of thread pools used for SSD
This is a minor release that introduces better thread pool defaults for SSDs.
The SSD benchmarks in README.md have been updated and extended with more programs.
Automatic file deduplication
This release introduces subcommands. The old functionality of duplicate file search has been moved to the `group` subcommand. New subcommands have been introduced: `link` and `remove`. Both commands work on a list of files previously generated with `group`, passed on the standard input. `fclones remove` removes redundant files; `fclones link` replaces redundant files with links.
Additionally, a set of options was added to control the selection of files that should be removed. It is possible to select files by their creation time, modification time, last access time, or nesting level, or to match them by path / name patterns.
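What `fclones link` does to a redundant copy can be mimicked with plain `ln`: after linking, both paths name the same inode. A small illustration (GNU `stat` assumed; the end-to-end fclones pipeline in the comment is illustrative):

```shell
mkdir -p demo3
echo "content" > demo3/original
cp demo3/original demo3/duplicate      # a byte-identical redundant copy
ln -f demo3/original demo3/duplicate   # replace the copy with a hard link
stat -c %h demo3/original              # link count of the shared inode is now 2

# Typical end-to-end use (illustrative):
#   fclones group . | fclones link
```

After this, the data is stored once, and editing the file through either path affects both names.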