Support installing as Docker plugin #1

It would be awesome if this volume driver could be installed as a Docker Plugin.

BTW I'm really liking the looks of MooseFS so far. I'm trying to get it running on/for my Docker Swarm cluster. 🙂
We plan to release an easy-to-install version (without MooseFS on the host system); we are still working on it. Can you share your experience with Docker Swarm?
Very cool, but after looking into it a bit more I found out that the GPL version of MooseFS doesn't support HA clusters. That is a requirement for my team, along with the need for it to be free. I am going to pursue Dockerizing LizardFS instead.
Hi @zicklag, see this reply - I mean the latest reply in that thread, the one @acid-maker wrote :) I'm also quoting it here:

Best regards,
Well that's cool, too. 🙂 I need to get a cluster up ASAP, but I'll definitely look back into MooseFS when 4.x comes out. Thanks. 👍
@karolmajek Karol, I will be more than happy to share my experience with Docker Swarm. I have several clusters running and am currently investigating, among others, MooseFS and how well they play with Docker Swarm. So far, I've found that Dockerizing the services, deploying and scaling them works pretty well. The main hurdle I'm still trying to solve is making the services available externally (outside the Docker Swarm).

The test setup I'm using is a PicoCluster (5E) running Docker Swarm. The chunkserver is deployed in "global" mode, so that each node in the cluster gets exactly one instance of the service. The master is running on a random node. For the sake of simplicity and debugging, I have removed any metaloggers and clients. Using this setup, you will have a fully working cluster (similar to the one you provide in your docker-compose file).

The main issue with this setup is that the chunkservers register with the master using their internal IP (because the master performs a reverse DNS lookup), meaning the chunkservers can't be accessed by "external" (non-Docker) clients. The alternative is using "host" networking, which is, I guess, acceptable in a way, considering you will only ever want to run one chunkserver per node anyway.

A potential and easy fix that I can come up with, from a Docker perspective, would be to introduce a new field in mfschunkserver.cfg allowing one to set the name of that chunkserver. That way, you could set its value to '{{ .Node.Hostname }}', which evaluates to the hostname of the node on which the service is running. A rough sketch of this setup is below.

If you would like more information - or would like to discuss - feel free to contact me directly. FWIW: I do realize this comment doesn't exactly cover the subject of this issue, but considering your request for feedback/experience with Swarm, I figured this place would be as good as any other.
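For reference, a minimal sketch of that global-mode layout using plain `docker service create`; the image names and the `CHUNKSERVER_NAME` variable are assumptions for illustration only, since the proposed config field doesn't exist in MooseFS today:

```sh
# Assumed image names; substitute the images from this repo's docker-compose file.
docker network create --driver overlay mfs

# Master: a single replica, placed on an arbitrary node.
docker service create --name mfsmaster --replicas 1 --network mfs moosefs/master

# Chunkserver: "global" mode gives exactly one task per node. Swarm resolves the
# Go template {{.Node.Hostname}} per node; CHUNKSERVER_NAME is a hypothetical
# variable standing in for the proposed chunkserver-name field in mfschunkserver.cfg.
docker service create --name mfschunkserver --mode global --network mfs \
  --env CHUNKSERVER_NAME="{{.Node.Hostname}}" \
  moosefs/chunkserver
```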
Just an update on my own usage: I ended up creating a Docker plugin for LizardFS. It can be installed as a Docker managed plugin and it can also be deployed as a Swarm service. You can use the LizardFS container I wrote in combination with the plugin to create an instant distributed storage solution for Docker Swarm. Everything is documented in my GitHub issue, and the source for the plugin will be mirrored to GitHub soon. I wouldn't doubt that it could very easily be adapted for MooseFS as well. Installing and using it looks roughly like the sketch below.

@darkl0rd That is an interesting issue. If you remember, tell me if you figure it out. Right now I'm only worried about making the filesystem accessible inside the swarm, but I might need an external solution later. 🙂
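Purely as an illustration of the managed-plugin workflow; the plugin reference below is a placeholder, not the actual published name of the LizardFS plugin:

```sh
# Placeholder plugin reference; replace with the real plugin name once it is published.
docker plugin install --grant-all-permissions <user>/lizardfs-volume-plugin:latest

# Create a volume backed by the plugin and use it from a Swarm service.
docker volume create -d <user>/lizardfs-volume-plugin:latest my-shared-data
docker service create --name web \
  --mount type=volume,source=my-shared-data,target=/data,volume-driver=<user>/lizardfs-volume-plugin:latest \
  nginx
```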
Another issue with the "plugin", which is pretty much a showstopper: when you create a volume (docker volume create -d moosefs --name myvolume -o mountpoint=/mnt/moosefs), the mountpoint itself is used as the volume, rather than a sub-directory being created. The mountpoint should be the base for volumes; the volumes themselves should be sub-directories (have a look at the BeeGFS docker volume plugin). The sketch below illustrates the difference.
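To make the distinction concrete, a hypothetical comparison using the command above; the directory layout shown is how the BeeGFS-style behaviour would look, not how the current plugin behaves:

```sh
docker volume create -d moosefs --name vol-a -o mountpoint=/mnt/moosefs
docker volume create -d moosefs --name vol-b -o mountpoint=/mnt/moosefs

# Behaviour described above: both volumes point at the mountpoint itself,
#   vol-a -> /mnt/moosefs
#   vol-b -> /mnt/moosefs
#
# Expected (BeeGFS-style) behaviour: each volume gets its own sub-directory,
#   vol-a -> /mnt/moosefs/vol-a
#   vol-b -> /mnt/moosefs/vol-b
```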
Hi guys, any news on the plugin?
@karolmajek, could you please give an update on the status of this issue? Thanks,
Good news @darkl0rd @eleaner |
I am not sure it does (at least the way I see it). I believe that's the current Docker standard for handling volumes, but I am no expert.
My LizardFS plugin would be really easy to fork for MooseFS if somebody wanted to do that. The source is now on GitHub: kadimasolutions/docker-plugin_lizardfs. |
I see you can install the LizardFS plugin this way; I will try to port this.
@karolmajek I am not an expert, so please correct me if I am wrong, but I noticed that the MooseFS containers run systemd inside of Docker, which seems completely unnecessary. Also, kadimasolutions has an ingenious way of doing configuration from environment variables.
I tried to port your plugin to MooseFS. Do you have any idea what could cause this issue? The only thing I did was to replace "lizard" with "moose" in every file (roughly the search-and-replace below).
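For anyone trying the same port, the blanket rename described above amounts to something like this, run from the root of a clone of the plugin repo (the exact string list is approximate, and this is the brute-force substitution, not a vetted port):

```sh
# Replace every occurrence of "lizard"/"Lizard" with "moose"/"Moose" in the working tree.
grep -rl --exclude-dir=.git 'lizard' . | xargs sed -i 's/lizard/moose/g'
grep -rl --exclude-dir=.git 'Lizard' . | xargs sed -i 's/Lizard/Moose/g'
```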
@eleaner I don't really know. Were you able to test using the filesystem for anything else other than Minio? A good test might be to compile the plugin as a normal Docker image, run that image, and interactively try to mount your MooseFS system on the command line inside of that container. All the plugin does internally is call
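That interactive test might look roughly like this; the image name is a placeholder for whatever the plugin's rootfs builds into, and the master address is an example:

```sh
# Run the plugin's rootfs as a plain container; FUSE mounts need SYS_ADMIN and /dev/fuse.
docker run -it --rm --cap-add SYS_ADMIN --device /dev/fuse \
  my-moosefs-plugin-rootfs /bin/sh

# Inside the container, try mounting by hand and poke at the filesystem:
mkdir -p /mnt/test
mfsmount -H mfsmaster.example.com /mnt/test
ls /mnt/test
echo hello > /mnt/test/probe && cat /mnt/test/probe
```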
@zicklag

@zicklag
Have you ever managed to run Minio on MooseFS before? It sounds like it might be unrelated to the plugin. You can set the mount options through the
No luck here. I can start Minio directly on any MooseFS folder. I am out of ideas.
@karolmajek I am not sure what would be the best way to share the MooseFS plugin version I have.