Support migration with more sophisticated disk changes #408
Comments
We are likely missing the comparison of grains to the pillar configuration. Will is_incorrect behave correctly if we tolerate a KeyError? If an OSD is no longer listed in the pillar, its configuration is incorrect and the OSD should be removed; the redeploy would take care of it if we got that far. We likely need to study the different paths and make sure not to impact the normal creation process.
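The comparison described above could look roughly like the following. This is a minimal sketch, not the actual DeepSea code: the names `pillar_osds` and `grains_osds` and the dict/list shapes are assumptions for illustration.

```python
# Hypothetical sketch: compare OSD devices recorded in the grains against
# the devices still configured in the pillar. An OSD missing from the
# pillar has an incorrect configuration and should be removed before
# redeploying.

def osds_to_remove(pillar_osds, grains_osds):
    """Return devices present in grains but no longer listed in the pillar."""
    removals = []
    for device in grains_osds:
        try:
            pillar_osds[device]  # raises KeyError if no longer configured
        except KeyError:
            removals.append(device)
    return removals
```

For example, with a pillar that only knows `/dev/sdb` but grains that still list `/dev/sdb` and `/dev/sdc`, the helper would flag `/dev/sdc` for removal.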
I tried to sketch the path we take but was interrupted by other tasks. I'll try again soon.
@swiftgist I'm thinking of disk sizes that need to be checked: whether the resulting xxx_size is bigger than the accumulated current partitions plus the 'about to be created' one. That means rather substantial changes in the OSDConfig class, messing with the reformatting of grains and pillar, and I bet much more once we dive into it. In the 'per node' branch we can first purge all disks before redeploying them, which makes things much easier. Most cases will be covered by the 'per node' option anyway, I assume. TL;DR: although probably doable, only support more sophisticated cluster layout changes with the 'per node' approach.
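The size check mentioned above could be sketched as follows. This is an illustrative helper under assumed semantics, not the OSDConfig implementation; the function and parameter names are hypothetical.

```python
# Illustrative sketch: check whether a shared wal/db device still has room
# for one more partition of `new_size` bytes, given the partitions that
# already exist on it. All sizes are in bytes.

def fits_on_device(device_size, existing_partitions, new_size):
    """True if the accumulated current partitions plus the 'about to be
    created' one still fit on the device."""
    used = sum(existing_partitions)
    return used + new_size <= device_size
```

If the check fails, the migration would have to refuse the new layout instead of creating a partition that cannot fit.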
Targeted with #433.
Initial setup:
and you want to convert to bluestore with a dedicated wal/db
This means not only a change in format but also in layout. Consequently the pillar data changes to have only one entry instead of the initial two. When parsing from the grains, which represent the old structure, we will most likely end up with a `KeyError`, as the corresponding disk now only serves as a db/wal device. These options come to my mind: