
Support Hardlinks (Any number) #48

Open
wants to merge 7 commits into master
Conversation

@undaunt commented Nov 6, 2024

I took some inspiration from #42 and #46 to work on a way to support any number of hardlinks.

The general workflow: all files in a given dataset target are listed, their inodes determined, and the files grouped per inode. When the original file is deleted during the delete/move step, its hardlinks are deleted as well and then immediately recreated from the copy. This happens per inode group, so there is no downtime for a given file.
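The per-inode-group flow described above can be sketched roughly like this (a simplified illustration, not the PR's actual code; the function name and the space-separated grouping are assumptions, and paths containing spaces are not handled):

```shell
#!/usr/bin/env bash
# Simplified sketch of the hardlink-aware flow: for each inode group,
# copy one file, delete every link, move the copy back, then relink.
rebalance_hardlinks() {
  target_dir="$1"
  find "$target_dir" -type f -printf '%i %p\n' \
    | sort -n \
    | awk '{ group[$1] = group[$1] " " $2 }
           END { for (i in group) print substr(group[i], 2) }' \
    | while read -r line; do
        read -r -a paths <<< "$line"
        main_file="${paths[0]}"
        tmp_file="${main_file}.balance"
        cp --reflink=never -ax "$main_file" "$tmp_file"  # force a real copy
        rm "${paths[@]}"                                 # drop all hardlinks
        mv "$tmp_file" "$main_file"                      # restore from the copy
        for ((i = 1; i < ${#paths[@]}; i++)); do
          ln "$main_file" "${paths[$i]}"                 # recreate each hardlink
        done
      done
}
```

Because `sort` must consume all of `find`'s output before emitting anything, the temporary `.balance` files created while processing are never picked up by the same run.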

I also added a debug mode to export a lot more output on the move/copy/hardlink process.

I removed the skip-hardlinks function since hardlinks are now natively supported. The README has also been updated to reflect all of this.

The output for a given line when it is skipped will reflect whether a given file is or is not part of a hardlink inode group.

- Adds debug functionality with extended details
- Supports detecting inode groups for hardlink processing
- Pulls files, sorts them by inode, then groups them by inode with awk
- Checks the counts of all files in an inode group when calculating skip counts
- Removes the existing skip-hardlinks flag
- Removes hardlinks and recreates them directly after the balance copy/delete/move operation, per inode group, to minimize 'downtime'
- Adds README details around the debug flag, hardlink support, the removed --skip-hardlinks functionality, and the temporary files used during script processing
- Removes the 'recreating hardlinks' echo for inode groups of one file
@markusressel (Owner) left a comment


Thx for the PR!

I have left some comments to further improve it 🤓

Also: Tests seem to be failing, so we need to fix them.

if [[ "${OSTYPE,,}" == "linux-gnu"* ]]; then
# Linux

# --reflink=never -- force standard copy (see ZFS Block Cloning)

Please don't remove this documentation.
It was added specifically to more easily understand the platform differences at a glance, especially since -a does not do the same thing on both platforms.
Yes this information can technically be outdated, but we can always update it if that ever happens (which I highly doubt).

cp -ax "${file_path}" "${tmp_file_path}"
# Mac OS and FreeBSD
cmd=(cp -ax "${main_file}" "${tmp_file_path}")
if [ "$debug_flag" = true ]; then

Can we add a "echo_debug" function for this?
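A minimal version of such a helper might look like the following (the `echo_debug` name comes from the comment above; the body is an assumption):

```shell
# Hypothetical echo_debug helper, gated on the PR's existing debug flag.
debug_flag=true

echo_debug() {
  if [ "$debug_flag" = true ]; then
    echo "[DEBUG] $*"
  fi
}

# Would replace the repeated `if [ "$debug_flag" = true ]; then echo ...; fi` blocks:
echo_debug "cp --reflink=never -ax source target"
```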

copy_md5=$(lsattr "${tmp_file_path}" | awk '{print $1}')
# file permissions, owner, group
# shellcheck disable=SC2012
copy_md5="${copy_md5} $(ls -lha "${tmp_file_path}" | awk '{print $1 " " $3 " " $4}')"

I am not sure what the intent was here 🤔
These checks were added to ensure that not only the file contents match but also all other attributes (at least all that make sense). Removing those checks might cause problems for people who require the same attributes but don't get them; the script wouldn't validate it, so they wouldn't even know unless they check manually. Things like SMB shares can be problematic for retaining attributes, to give an example.


# Only recreate hardlinks if there are multiple paths
if [ "${num_paths}" -gt 1 ]; then
echo "Recreating hardlinks..."

The original hardlinks are deleted together with the original file, so we don't need to worry about the target file already existing at this point, right?

fi
# Update rebalance "database" for all files
for path in "${paths[@]}"; do
line_nr=$(grep -xF -n -e "${path}" "./${rebalance_db_file_name}" | head -n 1 | cut -d: -f1)

Why add "-e" here?
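For context, `-e` explicitly marks the next argument as the pattern, which keeps a stored path that begins with a dash from being parsed as grep options. Whether that was the motivation for this change is an assumption; a small illustration:

```shell
# A database line starting with "-" is found with -e, but without it
# the pattern argument would be misparsed as grep options.
db=$(mktemp)
printf '%s\n' "-weird-file.txt" > "$db"

grep -xF -n -e "-weird-file.txt" "$db"   # finds the line
rm "$db"
```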

echo "Linking ${main_file} to ${paths[$i]}"
fi
ln "${main_file}" "${paths[$i]}"
done

Since this script tries to retain attributes, do we need to perform additional checks on the created hardlinks? Do hardlinks even have additional properties?

cp --reflink=never -ax "${file_path}" "${tmp_file_path}"
cmd=(cp --reflink=never -ax "${main_file}" "${tmp_file_path}")
if [ "$debug_flag" = true ]; then
echo "${cmd[@]}"

There is also a shell flag to print all commands that are executed, which might be easier than print each command ourselves. set -x , see: https://www.gnu.org/software/bash/manual/html_node/The-Set-Builtin.html#index-set
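The `set -x` suggestion in minimal form; Bash traces each command (prefixed with `+`) to stderr before running it:

```shell
# Turn on command tracing only when debugging, instead of
# echoing each command manually before running it.
debug_flag=true

if [ "$debug_flag" = true ]; then
  set -x
fi

cmd=(echo cp --reflink=never -ax source target)  # placeholder command for illustration
"${cmd[@]}"   # traced on stderr as: + echo cp --reflink=never -ax source target

set +x  # tracing off again
```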

@@ -28,6 +30,10 @@ Since file attributes are fully retained, it is not possible to verify if an ind
1
```

All files in a given inode group will be added to the database when processed. The highest count in a given inode group of files will be used to determine if the group should be skipped when processing against the number of passes in a given script execution.

The hardlink support process creates temporary files in the script location alongside `rebalance_db.txt` which are removed upon the end of each run. `files_list.txt` lists all files found in the given target location. `sorted_files_list.txt` lists all files sorted by inode number. `grouped_inodes.txt` lists all files by inode, but with all files from a given inode space separated on one line.
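The three temporary files described above suggest a pipeline along these lines (a reconstruction under assumptions; the file names match the README, but the exact commands are not taken from the PR):

```shell
# Hypothetical reconstruction of the temp-file stages:
#   files_list.txt        -> inode + path for every file found
#   sorted_files_list.txt -> the same list, sorted by inode number
#   grouped_inodes.txt    -> one line per inode, all its paths space separated
build_inode_groups() {
  target_dir="$1"
  find "$target_dir" -type f -printf '%i %p\n' > files_list.txt
  sort -n files_list.txt > sorted_files_list.txt
  awk '{
         if ($1 == prev) { printf " %s", $2 }
         else { if (NR > 1) print ""; printf "%s", $2; prev = $1 }
       }
       END { if (NR > 0) print "" }' sorted_files_list.txt > grouped_inodes.txt
}
```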

I know saving data in bash is hard, but do we really need all these additional files just to support hardlinks?
I am asking because I worry about the "cancel" or "error" case while the script is running: it must be able to continue fine afterwards. Adding additional files to the script introduces possible failure points if any of these files is not written correctly. Is there a way to reduce this? Or do you think it's not an issue?


line_nr=$(grep -xF -n "${file_path}" "./${rebalance_db_file_name}" | head -n 1 | cut -d: -f1)
line_nr=$(grep -xF -n -e "${file_path}" "./${rebalance_db_file_name}" | head -n 1 | cut -d: -f1)

Same thing, why add -e here?
