
Limit a qsfs instance to a single active mount #9

Open
LeeSmet opened this issue Apr 19, 2021 · 3 comments

Comments


LeeSmet commented Apr 19, 2021

Because qsfs keeps a local cache, multiple active write mounts will cause data corruption. We should try to limit the active (write) mounts to a single instance, or at least clearly warn the user if we detect that the same instance is mounted twice at once.


maxux commented Apr 19, 2021

It's easy in 0-db-fs to flag an instance as running and avoid multiple instances running on the same namespaces.
But inconsistency can easily occur if the process crashes and doesn't release the lock, or if the connection is lost for some reason and the lock is never released.

I can provide a first update with a lock in place, plus a way to force the mount even if a lock is present (in case of a crash).
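The lock-with-force-override flow described above could be sketched roughly like this. This is only an illustration, not the actual 0-db-fs implementation; the lock path and function names are hypothetical:

```python
import os

# Hypothetical lock location; the real implementation stores this
# per-namespace inside 0-db-fs, not as a file in /tmp.
LOCK_PATH = "/tmp/qsfs-namespace.lock"

def acquire_lock(force=False):
    """Refuse a second mount while a lock exists, unless forced (crash recovery)."""
    if os.path.exists(LOCK_PATH) and not force:
        raise RuntimeError("namespace already mounted; force the mount after a crash")
    with open(LOCK_PATH, "w") as f:
        f.write(str(os.getpid()))  # record the owner for debugging

def release_lock():
    """Clean up the lock on graceful shutdown."""
    if os.path.exists(LOCK_PATH):
        os.remove(LOCK_PATH)
```

The weakness maxux points out is visible here: if the process dies between `acquire_lock` and `release_lock`, the lock file stays behind, which is exactly why a force option is needed.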


LeeSmet commented Apr 22, 2021

I wonder: can multiple 0-db-fs processes talk to the same local 0-db without inconsistency, or is there a data race?

At any rate, the main issue arises when multiple nodes use the same metadata cluster with different local 0-dbs. I was thinking of having a special lock key in the metastore, which can be set to an arbitrary value. If a key is passed to the write command, we first check whether it matches the one in the metadata store. Writers would then generate a random value at startup, as a kind of session token.

This is rather naive, but I think it would already eliminate a lot of issues. On a graceful shutdown, a writer removes its session token from the metadata, and of course we could force a new key to be written in case of a crash. The main issue I have here is that 0-db-fs would need to generate the key (if mounted in write mode), but it's actually 0-db which triggers 0-stor.
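The session-token scheme proposed above could look something like the following sketch. The `MetaStore` class is a stand-in for the real metadata cluster, and `LOCK_KEY` is a hypothetical key name; nothing here reflects actual qsfs code:

```python
import secrets

class MetaStore:
    """In-memory stand-in for the shared metadata cluster."""
    def __init__(self):
        self.data = {}
    def get(self, key): return self.data.get(key)
    def set(self, key, value): self.data[key] = value
    def delete(self, key): self.data.pop(key, None)

LOCK_KEY = "writer-lock"  # hypothetical reserved key in the metastore

def start_writer(store, force=False):
    """Claim the writer lock with a fresh random session token."""
    if store.get(LOCK_KEY) is not None and not force:
        raise RuntimeError("another writer holds the lock")
    token = secrets.token_hex(16)  # per-session random value
    store.set(LOCK_KEY, token)
    return token

def stop_writer(store, token):
    """Graceful shutdown: only remove the lock if our token still matches."""
    if store.get(LOCK_KEY) == token:
        store.delete(LOCK_KEY)
```

Checking the token before deleting means a stale writer that was force-replaced cannot accidentally clear a newer writer's lock.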


maxux commented Apr 22, 2021

I have implemented a lock in 0-db-fs (not finished yet) which prevents a 0-db-fs from reusing an in-use namespace, and which cleans up the lock on shutdown. The lock is not yet removed on crash and cannot be forced yet. This implementation actually adds a real header to each namespace, with flags and info/settings stored in it.
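A per-namespace header with a "mounted" flag, as described above, could be laid out along these lines. The magic value, field order, and flag bits here are invented for illustration and do not match the real 0-db-fs header:

```python
import struct

# Hypothetical layout: 4-byte magic, 1-byte version, 1-byte flags.
HEADER_FMT = "<4sBB"
FLAG_MOUNTED = 0x01  # set while a 0-db-fs holds the namespace

def pack_header(mounted):
    """Serialize a namespace header with the mounted flag set or cleared."""
    flags = FLAG_MOUNTED if mounted else 0
    return struct.pack(HEADER_FMT, b"NSHD", 1, flags)

def is_mounted(raw):
    """Check the mounted flag in a serialized header."""
    _magic, _version, flags = struct.unpack(HEADER_FMT, raw)
    return bool(flags & FLAG_MOUNTED)
```

Because the flag lives in the namespace itself, a second 0-db-fs sees it before mounting; the downside, as noted, is that a crash leaves the flag set until it is forcibly cleared.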

It's not a problem for multiple 0-db-fs instances to use the same local 0-db, as long as they use different namespaces, of course.

You are talking about 0-stor using zdb as the metadata backend, right? Yes, indeed, there is no link between the filesystem and the backend itself, and that won't really be possible. You could use the instance-id which 0-db generates on boot, but that is a single id shared by all namespaces :/

I'm thinking about a solution.

@sasha-astiadi sasha-astiadi added this to the later milestone Jun 14, 2021
@robvanmieghem robvanmieghem removed this from the later milestone Feb 22, 2023