Support installing as Docker plugin #1

Open
zicklag opened this issue Apr 25, 2018 · 21 comments

zicklag commented Apr 25, 2018

It would be awesome if this volume driver could be installed as a Docker Plugin.
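
To be concrete, what I have in mind is the managed-plugin workflow, roughly like the sketch below (the plugin name and setting are made up, just to illustrate):

```sh
# Hypothetical plugin name/setting - only to show the managed-plugin workflow
docker plugin install --alias moosefs moosefs/docker-volume-plugin MASTER_HOST=mfsmaster
docker volume create -d moosefs my-volume
docker run --rm -it -v my-volume:/data alpine ls /data
```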

BTW I'm really liking the looks of MooseFS so far. I'm trying to get it running on/for my Docker Swarm cluster. 🙂

karolmajek (Member) commented

We plan to release an easy-to-install version (one that does not require MooseFS on the host system); we are still working on it.

Can you share your experience with Docker Swarm?
You can email me if you have further questions.

zicklag (Author) commented Apr 27, 2018

Very cool, but after looking into it a bit more I found out that the GPL version of MooseFS doesn't support HA clusters. That is a requirement for my team, along with the need for it to be free. I am going to pursue Dockerizing LizardFS instead.

pkonopelko (Member) commented Apr 27, 2018

Hi @zicklag,

see this reply - I mean the latest reply in that thread, which @acid-maker wrote :)

I'm also quoting it here:

Soon we will publish version 4.x on GPL and we decided to move our HA to GPL, so be patient.

Best regards,
Peter / MooseFS Team

zicklag (Author) commented Apr 27, 2018

Well, that's cool, too. 🙂 I need to get a cluster up ASAP, but I'll definitely look back into MooseFS when 4.x comes out. Thanks. 👍

darkl0rd commented Jun 20, 2018

@karolmajek Karol, I will be more than happy to share my experience with Docker Swarm. I have several clusters running and am currently investigating, among others, MooseFS and how well it plays with Docker Swarm.

So far, I've found that Dockerizing the services, deploying and scaling them works pretty well. The main hurdle I'm still trying to solve is making the services available externally (outside the docker swarm).

The test setup I'm using is a PicoCluster (5E) running Docker Swarm. The ChunkServer is deployed in "global" mode, so that each node in the cluster runs exactly one instance of it. The master is running on a random node. For the sake of simplicity and debugging, I have removed any MetaLoggers and Clients.
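
Roughly, the chunkserver service is deployed like this (the image and network names are illustrative, not my exact stack file):

```sh
# Sketch: one chunkserver per Swarm node via global mode
docker service create \
  --name mfs-chunkserver \
  --mode global \
  --network mfs-net \
  moosefs/chunkserver
```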

Using this setup, you will have a fully working cluster (similar to the one you provide in your docker-compose file). The main issue with this setup is that the chunkservers register with the master using their internal IP (because the master performs a reverse DNS lookup) - meaning the chunkservers can't be accessed by "external" (non-Docker) clients.

The alternative to this setup is using "host" networking, which I guess is acceptable in a way, considering you will only ever want to run one chunkserver on each node anyway.
This setup, however, introduces a new problem: you cannot run both a ChunkServer and the Master on the same node, because in host networking mode Docker Swarm still detects that the services are on the same host and prefers the internal network connection instead - in turn making the chunkserver on that host unavailable to the outside world.

A potential and easy fix that I can come up with, from a Docker perspective, would be to introduce a new field in mfschunkserver.cfg that lets you set the name of that chunkserver. That way, you could set its value to '{{ .Node.Hostname }}', which evaluates to the hostname of the node on which the service is running.
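
Something along these lines - note that the config option below is hypothetical, it does not exist in mfschunkserver.cfg today:

```sh
# Hypothetical: mfschunkserver.cfg gains a "chunkserver name" override, e.g.
#   CHUNKSERVER_NAME = <name to register with the master>
# which a Swarm service could then template per node:
docker service create \
  --name mfs-chunkserver \
  --mode global \
  --env CHUNKSERVER_NAME='{{ .Node.Hostname }}' \
  moosefs/chunkserver
```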

If you would like more information - or would like to discuss, feel free to contact me directly.

FWIW: I realize this comment doesn't exactly cover the subject of this issue, but considering your request for feedback/experience with Swarm, I figured this place would be as good as any other.

zicklag (Author) commented Jun 20, 2018

Just an update on my own usage: I ended up creating a Docker plugin for LizardFS. It can be installed as a Docker managed plugin, and it can also be deployed as a Swarm service. You can use the LizardFS container I wrote in combination with the plugin to create an instant distributed storage solution for Docker Swarm. Everything is documented on my GitHub issue, and the source for the plugin will be mirrored to GitHub soon. I have no doubt it could easily be adapted for MooseFS as well.

@darkl0rd That is an interesting issue. If you remember, tell me if you figure it out. Right now I'm only worried about making the filesystem accessible inside the swarm, but I might need an external solution later. 🙂

darkl0rd commented Jul 4, 2018

Another issue with the "plugin", which is pretty much a showstopper:

When you create a volume (docker volume create -d moosefs --name myvolume -o mountpoint=/mnt/moosefs), the mountpoint itself is used as the volume, rather than a sub-directory being created for it.

The mountpoint should be the base for volumes; the volumes themselves should be sub-directories (have a look at the BeeGFS Docker volume plugin).
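
In other words, with a single base mountpoint I would expect something like this (a sketch of the desired behaviour, not how the driver currently works):

```sh
docker volume create -d moosefs --name vol1 -o mountpoint=/mnt/moosefs
docker volume create -d moosefs --name vol2 -o mountpoint=/mnt/moosefs
# /mnt/moosefs/vol1  -> backs the "vol1" volume
# /mnt/moosefs/vol2  -> backs the "vol2" volume
```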

eleaner commented Feb 17, 2019

hi guys, any news on the plugin?

pkonopelko (Member) commented

@karolmajek, could you please give us an update on the status of this issue?

Thanks,
Piotr

karolmajek (Member) commented

Good news, @darkl0rd @eleaner!
We just released a new version: https://github.com/moosefs/docker-volume-moosefs/releases/tag/v0.2.0
The driver now creates a subdirectory (named after the volume) in the mountpoint for each volume.
Does that match your needs?

eleaner commented Feb 18, 2019

I am not sure it does (at least the way I see it).
I am looking for a volume plugin pretty much like the one mentioned by @zicklag -
one that you install with "docker plugin install".

I believe that's the current Docker standard for handling volumes, but I am no expert.

zicklag (Author) commented Feb 18, 2019

My LizardFS plugin would be really easy to fork for MooseFS if somebody wanted to do that. The source is now on GitHub: kadimasolutions/docker-plugin_lizardfs.

karolmajek (Member) commented

I see you can install the LizardFS plugin this way; I will try to port this.
Thanks @zicklag for the link!

eleaner commented Feb 19, 2019

@karolmajek
It's not really related, but please have a look at kadimasolutions/docker_lizardfs.

I am not an expert, so please correct me if I am wrong, but I noticed that the MooseFS containers run systemd inside Docker, which is completely unnecessary,
and that the master container runs the master server and the web frontend in the same container - again, completely unnecessary.

Also, kadimasolutions has an ingenious way of handling configuration through environment variables.

eleaner commented Mar 4, 2019

@zicklag

I tried to port your plugin to MooseFS.
It sort of works, but...
you know there is always a but.
I tried to use a MooseFS volume to serve data through the official Minio image:
docker run -p 9000:9000 --name minio1 -v test:/data --rm -it minio/minio minio server /data
As long as the volume "test" is LizardFS, all is fine,
but as soon as it is MooseFS I get this weird
ERROR Unable to initialize backend: value too large for defined data type.

Do you have any idea what could cause this issue?

The only thing I did was replace "lizard" with "moose" in every file
and change the installation instructions in the Dockerfile.

zicklag (Author) commented Mar 4, 2019

@eleaner I don't really know. Were you able to test using the filesystem for anything other than Minio? A good test might be to compile the plugin as a normal Docker image, run that image, and interactively try to mount your MooseFS system on the command line inside of that container. All the plugin does internally is call mfsmount to mount the filesystem, so if everything installed properly inside the container, you should be able to mount it manually.
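
Something along these lines should do it (the image tag, paths, and master hostname are placeholders):

```sh
# Build the plugin's rootfs as a regular image and get a shell inside it
docker build -t moosefs-plugin-debug .
docker run --rm -it --cap-add SYS_ADMIN --device /dev/fuse moosefs-plugin-debug /bin/sh

# Inside the container, try mounting MooseFS by hand
mkdir -p /mnt/test
mfsmount /mnt/test -H mfsmaster   # replace mfsmaster with your master's hostname
ls /mnt/test
```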

eleaner commented Mar 4, 2019

@zicklag
I am not sure how to compile this plugin as a normal Docker image. Can you help?

eleaner commented Mar 4, 2019

@zicklag
In fact, the plugin seems to work. I can map volumes to containers and they work.
The only problem I have noticed so far is that Minio does not want to start, while it starts with no problem on LizardFS.
I have no idea how to find out what's wrong.

zicklag (Author) commented Mar 4, 2019

Have you ever managed to run Minio on MooseFS before? It sounds like it might be unrelated to the plugin. You can set the mount options through the MOUNT_OPTIONS environment variable. Maybe check out the MooseFS documentation.
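
Setting that looks roughly like this (assuming your ported plugin is installed under the alias "moosefs"; the option string is only an example, and the exact format depends on how the plugin passes it to mfsmount):

```sh
# Plugins must be disabled before their settings can be changed
docker plugin disable moosefs
docker plugin set moosefs MOUNT_OPTIONS="big_writes,nosuid,nodev"
docker plugin enable moosefs
```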

eleaner commented Mar 4, 2019

No luck here.
I made both MooseFS mounts (outside Docker and inside Docker via the plugin) use exactly the same options:
(rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)

I can start Minio directly on any MooseFS folder.
I can start Minio in Docker on any MooseFS folder bind-mounted into the container.
I can start Minio in Docker on a MooseFS folder that is a volume, bind-mounted into the container.
But when I start Minio in Docker on a plugin volume, it fails as above.

I am out of ideas

eleaner commented Mar 4, 2019

@karolmajek
Hi. Would you be able to lend a hand here?
I checked with the Minio guys and it does not look like a problem on their end, especially since everything works just fine outside of the plugin. I am not able to find the problem on my own.

I am not sure what would be the best way to share the MooseFS plugin version I have.
A pull request does not sound like a good idea, since it is a completely different architecture from what you have.
Also, I only adjusted the "workfiles" and simply deleted everything related to the documentation.
I could email it to @OXide94 if that's OK.
