
OSD disk reuse #30

Open
ghost opened this issue Jul 31, 2013 · 3 comments

Comments

@ghost

ghost commented Jul 31, 2013

The first install of an OSD works. Subsequent installs run, but the OSD fails to see the disk as usable. Possibly some remnant data? I'm still researching the issue. It may be a bug in Ceph itself, or perhaps we need to zero the drive with /dev/zero?

@fcharlier
Member

@dontalton did you try removing the data from the filesystem, then deleting the partition, deleting the partition table, and finally running dd if=/dev/zero … over the first sectors of the disk?
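That sequence of steps can be sketched roughly as below. Since dd against a real disk is destructive, the sketch uses a scratch file as a stand-in for the OSD device; all names here are placeholders, not taken from the thread:

```shell
#!/bin/sh
# Scratch file standing in for the OSD device, so this is safe to run.
DEV=osd-standin.img
dd if=/dev/urandom of="$DEV" bs=1M count=2 2>/dev/null    # simulate leftover OSD data

# Zero the first sectors; on an MBR disk the partition table lives in the
# first 512 bytes, so this wipes it along with any early metadata.
dd if=/dev/zero of="$DEV" bs=512 count=2048 conv=notrunc 2>/dev/null

# Count non-zero bytes in the zeroed region; prints 0 when fully wiped.
head -c $((512 * 2048)) "$DEV" | tr -d '\0' | wc -c
rm -f "$DEV"
```

On a real device you would replace the scratch file with the block device path and drop the `dd if=/dev/urandom` line; `conv=notrunc` is only needed for the file stand-in, since a block device cannot be truncated.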

@ghost
Author

ghost commented Aug 1, 2013

I did a lot of additional testing on this. Writing /dev/zero over the ENTIRE drive is enough to make the OSD reinstall correctly; the other steps don't seem necessary.

@ghost
Author

ghost commented Aug 2, 2013

It turns out only a count of 100 is needed for this to work:

dd if=/dev/zero of=/dev/DEVICE bs=1M count=100

OSD reloads work smoothly now. Before, the residual disk data would cause all kinds of erratic behavior during the puppet run.
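A quick read-only check that the 100 MiB wipe actually took, demonstrated here on a scratch file standing in for /dev/DEVICE (the file name is a placeholder):

```shell
#!/bin/sh
# Scratch file standing in for /dev/DEVICE from the comment above.
DEV=wipe-check.img
dd if=/dev/zero of="$DEV" bs=1M count=100 2>/dev/null    # the wipe from the comment

# Read back the first 100 MiB and count any non-zero bytes;
# prints 0 when the region is clean.
head -c $((100 * 1024 * 1024)) "$DEV" | tr -d '\0' | wc -c
rm -f "$DEV"
```

Against a real device, only the `head … | tr … | wc -c` pipeline is needed, and it never writes to the disk.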
