
add helper to zap all non-root/non-mounted disks #259

Open
ddiss opened this issue May 16, 2017 · 6 comments
@ddiss
Contributor

ddiss commented May 16, 2017

Feel free to reject this as too risky...

Currently users are advised to zap all OSD disks prior to stage.0 invocation.
If done manually, disk zapping can be a very time-consuming process. As a significant time-saver, I'd like some way of telling DeepSea to zap any disk that isn't currently used by the operating system, i.e. not the root, home, etc. device.

ddiss added the feature label May 16, 2017
@ddiss
Contributor Author

ddiss commented May 16, 2017

It's really ugly, but I'm currently using:

salt '*' cmd.run cmd='root_dev=$(cat /proc/mounts |grep " / "|awk "{print substr(\$1, 0, 8)}"); for i in $(ls -l /dev/sd[abcdefghijklmnopqr]|grep -v "$root_dev"|awk "{print \$10}"); do echo "zapping $i on $HOSTNAME"; sgdisk --zap $i; done'
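A more readable equivalent of the one-liner above (my own sketch, not DeepSea code: the function names and the ZAP variable are invented) could resolve the root disk with findmnt/lsblk instead of parsing `ls -l`, and default to a dry run so nothing is destroyed by accident:

```shell
#!/bin/sh
# Sketch of the same idea as the one-liner above, hedged behind a dry run.
# Assumes lsblk, findmnt and sgdisk are available on the minion.

# Emit every disk name read from stdin except the root disk given as $1.
non_root_disks() {
    grep -vx "$1"
}

# List all whole disks, drop the one backing /, and zap the rest.
# Only prints what it would do unless ZAP=1 is set in the environment.
zap_all_non_root() {
    # findmnt resolves the device mounted at /; lsblk maps it to its disk.
    root_disk=$(lsblk -no pkname "$(findmnt -no SOURCE /)")
    lsblk -dno name | non_root_disks "$root_disk" | while read -r disk; do
        if [ "${ZAP:-0}" = "1" ]; then
            echo "zapping /dev/$disk on $(hostname)"
            sgdisk --zap-all "/dev/$disk"
        else
            echo "would zap /dev/$disk (set ZAP=1 to really zap)"
        fi
    done
}
```

This could then be pushed to the minions with something like `salt '*' cmd.script salt://zap_non_root.sh`, keeping the quoting sane.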

@ImTheKai
Contributor

I would like such functionality/an option as well. If I have to install a really large cluster, this would save a huge amount of time.

@jan--f
Contributor

jan--f commented May 17, 2017

So far the policy was not to do this, as a safety measure. We don't really want to wipe any data.
In #191 I was wondering if that might be part of the purge feature however.

@swiftgist
Contributor

The fear I have had is doing this automatically as part of Stage 0 and getting it wrong. If this is a separate utility that displays what it would do without doing it by default, and has a filter or two (e.g. all but OS, only Ceph disks), that might work.

The current purge is selective in that the zapped disks were OSDs. I think it could go either way whether to extend it or make another runner.

What are your thoughts?
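The filter idea above could be sketched roughly like this (entirely hypothetical: the filter names, the `classify` helper, and the use of the partition type name to spot Ceph disks are my own assumptions, not an existing DeepSea runner):

```shell
#!/bin/sh
# Dry-run sketch: classify each disk and let a filter pick which ones
# would be reported for zapping. Nothing is destroyed here.

# classify <disk> <root_disk>: prints "os", "ceph", or "other".
# The Ceph check assumes GPT partition type names containing "ceph",
# as created by ceph-disk; adjust for other layouts.
classify() {
    disk=$1
    root_disk=$2
    if [ "$disk" = "$root_disk" ]; then
        echo os
    elif lsblk -no parttypename "/dev/$disk" 2>/dev/null | grep -qi ceph; then
        echo ceph
    else
        echo other
    fi
}

# filter_disks <filter> <root_disk>: reads disk names on stdin and keeps
# those matching the filter: "non-os" (all but OS) or "ceph" (only Ceph).
filter_disks() {
    filter=$1
    root_disk=$2
    while read -r disk; do
        class=$(classify "$disk" "$root_disk")
        case $filter in
            non-os) if [ "$class" != os ]; then echo "$disk"; fi ;;
            ceph)   if [ "$class" = ceph ]; then echo "$disk"; fi ;;
        esac
    done
}
```

A runner would then print this candidate list by default and only zap when given an explicit confirmation flag.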

@ddiss
Contributor Author

ddiss commented May 17, 2017

If this is a separate utility that displays what it would do without doing it by default, and has a filter or two (e.g. all but OS, only Ceph disks), that might work.

Sounds good to me.

The current purge is selective in that the zapped disks were OSDs. I think it could go either way of whether to extend it or make another runner.

Does purge require role assignment before running, and does it do anything aside from zapping the OSD disks? If so, I think it'd be good to have something separate, or otherwise some sort of flag to instruct purge to zap all non-root/home disks.

@Martin-Weiss

IMO we should also add the already existing devices with partitions to the proposals, and allow the admin to specify there whether they should be left as-is or zapped/reinitialized.

Maybe we can add all used disks/partitions during the discovery phase to the profiles in an "in use" section, and in case the admin adjusts the profile (moves the device to an OSD), he can add an "init: yes" parameter that will cause the re-init.

As part of the deployment stage we can then move it back to the "in use" section, so that during further deployments these disks are ignored again.

With this process we would also have an option to "fix" broken OSDs, as this would allow an admin to adjust the proposal to "re-init an existing OSD"...

And keep in mind that all the zap processes I have found did not care about partitions in use! So in case there is an LVM or an MD RAID on the disks, the classic zap does not work: either the LVM/MD RAID needs to be deactivated before zapping, or a reboot is required after zapping so that the kernel picks up the change.
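For illustration, the teardown described above could look roughly like this (a sketch only, assuming the standard lvm2/mdadm/util-linux tools; the `release_disk` helper is invented, and real code would need much more care about which arrays and volume groups actually sit on the disk):

```shell
#!/bin/sh
# Hypothetical teardown before zapping: stop MD arrays and deactivate
# LVM volume groups that use the disk, so the kernel is not left holding
# stale metadata. Must run as root; destructive by design.
release_disk() {
    disk=$1    # e.g. /dev/sdb

    # Stop any MD array whose members include a partition of this disk.
    for md in /dev/md?*; do
        [ -e "$md" ] || continue
        if mdadm --detail "$md" 2>/dev/null | grep -q "$disk"; then
            mdadm --stop "$md"
        fi
    done

    # Deactivate any volume group with a PV on this disk or its partitions.
    pvs --noheadings -o vg_name "$disk"* 2>/dev/null | while read -r vg; do
        [ -n "$vg" ] && vgchange -an "$vg"
    done

    # Wipe filesystem/LVM/MD signatures, then destroy the partition tables.
    wipefs --all "$disk"
    sgdisk --zap-all "$disk"

    # Ask the kernel to re-read the now-empty partition table.
    partprobe "$disk"
}
```

Without the wipefs/partprobe steps the kernel can keep using the old partitions, which is exactly the "reboot required" situation described above.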
