
Allow Promotion of Device From Partition to Whole Disk #16800

Open
Haravikk opened this issue Nov 22, 2024 · 4 comments
Labels
Type: Feature (Feature request or new feature)

Comments

@Haravikk

Haravikk commented Nov 22, 2024

Describe the feature you would like to see added to OpenZFS

When zpool replace is used to replace a partition with its own parent disk, ZFS should attempt to "promote" the device to whole_disk=1 if possible; otherwise the command should fail as it does now ("disk is busy").

For example:

zpool replace pool /dev/disk5s1 /dev/disk5

This would change the vdev from a non-whole-disk reference to disk5s1 into a whole-disk reference to disk5 (though disk5s1 may still be used internally), without having to offline, erase, and then replace the disk, with a full resilver, before it is fully usable again.
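For contrast, today's workaround looks roughly like the following (a sketch only: device names follow the example above, the pool is assumed to be redundant, and sgdisk stands in for whatever GPT tool the platform provides):

```sh
# Current workaround: take the partition offline, destroy the old
# layout, and hand ZFS the whole disk, paying for a full resilver.
zpool offline pool /dev/disk5s1
sgdisk --zap-all /dev/disk5                 # wipe the partition table
zpool replace pool /dev/disk5s1 /dev/disk5  # triggers a complete resilver
```

The proposal collapses this into the single zpool replace invocation above, skipping the resilver entirely when the on-disk layout already matches.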

This behaviour will only be possible if the disk is formatted in a way that either already matches the layout ZFS would create when given a whole disk, or can be non-destructively modified to match (i.e. expand the main partition into any free space, and create partition 9 if it's missing).

If the disk has additional partitions beyond the ZFS data partition (other than partition 9 and the GPT structures), the operation will fail unless the -f flag is given, in which case ZFS expands the data partition to eliminate the extras and use all available space, creating a new partition 9 at the end; a manual sketch of the equivalent follows.
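For illustration only (this is not part of the proposed interface, just a manual approximation on Linux; the device name, partition numbers, and start sector are assumptions), the -f fixup amounts to something like:

```sh
# Sketch: absorb a trailing extra partition into the ZFS data partition
# and add the Solaris-reserved partition 9, without touching pool data.
# Assumes the data lives in partition 1 starting at sector 2048 and the
# unwanted partition is number 2; /dev/sdb is a placeholder.
sgdisk --delete=2 /dev/sdb                          # drop the extra partition
sgdisk --delete=1 /dev/sdb                          # remove the entry only; data is untouched
sgdisk --new=1:2048:-8M --typecode=1:BF01 /dev/sdb  # recreate partition 1 at the same
                                                    # start, grown to all but the last 8 MiB
sgdisk --new=9:0:0 --typecode=9:BF07 /dev/sdb       # partition 9 in the remaining 8 MiB
partprobe /dev/sdb                                  # ask the kernel to re-read the table
```

Because partition 1 is recreated with the same start sector, the pool data never moves; only the GPT entries change.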

How will this feature improve OpenZFS?

It will make it a lot easier to migrate away from setups using any form of custom partitioning, which can arise when disks are temporarily mismatched. For example, with two 4 TB and two 8 TB disks in a raidz2, half of each 8 TB disk (8 TB in total) would normally go unused, so partitioning lets you use the excess space for something else; but if you later swap the 4 TB disks for 8 TB ones, you might want to discard the old partitions and expand the pool into the whole disks.

It will also make it easier to correct mistakes in which (hypothetically, and not inspired by anything that may or may not have happened to the author) a partition is added instead of the whole disk. If that partition is already compatible with the layout ZFS would create (or the disk was even used by ZFS previously), it just needs whole_disk flipped from 0 to 1.
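For reference, the flag in question lives in the pool configuration that zdb can dump; a quick way to inspect it (the output shown is illustrative):

```sh
# whole_disk is recorded per leaf vdev in the pool configuration.
zdb -C pool | grep -E "path|whole_disk"
#             path: '/dev/disk5s1'
#             whole_disk: 0
```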

Haravikk added the Type: Feature label on Nov 22, 2024
@amotin
Member

amotin commented Nov 23, 2024

My personal opinion is that it is not ZFS's (or any other file system's) business to mess with partition tables. I don't like the fact that ZFS does it on some platforms. On FreeBSD it does not.

@Haravikk
Author

Haravikk commented Nov 23, 2024

My personal opinion is that it is not ZFS's (or any other file system's) business to mess with partition tables.

Why not? ZFS manages disks, why shouldn't it ensure they're formatted correctly for its use? Expecting users to do all the partitioning themselves just to add a disk seems bizarre to me.

But the basic request of this issue (ignoring the -f flag) is to check that the partitioning is what ZFS expects, so it can switch whole_disk to 1 – though I think it would be more useful to also allow correcting the partitioning (in the specific case where later partitions can be combined), because that covers other cases where a disk was added as a single partition rather than the whole disk.

@amotin
Member

amotin commented Nov 24, 2024

ZFS manages disks, why shouldn't it ensure they're formatted correctly for its use?

Because ZFS does not really care. It can happily use raw disks without any partitions. It can run on top of an MBR or who knows what partition scheme. There can be other partitions on the disk, some weird loader at unexpected offsets, or some specific partition alignment. ZFS cares about none of that.

@Haravikk
Author

Haravikk commented Nov 24, 2024

Fair enough, though personally I see the benefits of partitioning – I know macOS doesn't particularly like unpartitioned disks, a partition table provides filesystem information that a "raw" device can't, and partition 9 helps avoid issues with replacement disks that don't have exactly the same number of bytes. Partitioning is also helpful in some use-cases where ZFS runs on virtual block devices (disk images, virtualisation, cloud platforms, etc.), and disks need to be partitioned to support bootable ZFS on systems that support that. So overall it makes sense to me to keep partitioning as the default for best compatibility; using the entire "raw" device is better treated as an exception to opt into, since you really need to know that's what you want.

But that all feels more like a separate issue – ZFS could, for example, have an option on attach/create/replace that determines whether it partitions or not, but that should be its own request; it doesn't seem like it should apply to this one.

For this specific issue the key thing is allowing a partition to be promoted to whole_disk=1 if it's in the format ZFS expects – if partitioning by ZFS were disabled, this would simply fail (since a partitioned disk can never match the expected layout when that layout has no partitioning at all). The important part is to match what ZFS would create, or to match closely enough to be non-destructively modified to match (where possible).
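Concretely, the "format ZFS expects" on Linux is the two-partition GPT it writes for a whole disk: one large data partition (type BF01, named zfs-<guid>) plus the 8 MiB Solaris-reserved partition 9 (type BF07) at the end. A promotion check would amount to comparing against something like this (sector numbers and the partition name are illustrative, for an 8 TB disk):

```sh
sgdisk --print /dev/sdb
# Number  Start (sector)    End (sector)  Size     Code  Name
#    1            2048     15611428830    7.3 TiB  BF01  zfs-0123456789abcdef
#    9     15611428831     15611445214    8.0 MiB  BF07
```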

Also, as a side note – who the heck is @scineram? They just seem to downvote everything without giving any kind of feedback; why are they even watching the GitHub repo if they've nothing to say?
