chore: update README.md and CONTRIBUTING.md
jedel1043 authored and NucciTheBoss committed Jan 14, 2025
1 parent 2a2ab1b commit 8dff979
Showing 2 changed files with 127 additions and 37 deletions.
CONTRIBUTING.md (22 changes: 11 additions & 11 deletions)
# Contributing to the filesystem-charms repository

Do you want to contribute to the repository? You've come to
the right place then! __Here is how you can get involved.__
* `Status: Help wanted` - Issues where we need help from the greater open source community to solve.

For a complete look at this repository's labels, see the
[project labels page](https://github.com/charmed-hpc/filesystem-charms/labels).

## Bug Reports


```bash
# Clone your fork of the repo into the current directory
git clone https://github.com/<your-username>/filesystem-charms.git

# Navigate to the newly cloned directory
cd filesystem-charms

# Assign the original repo to a remote called "upstream"
git remote add upstream https://github.com/charmed-hpc/filesystem-charms.git
```
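To sanity-check the result of step 1, `git remote -v` should list both remotes. A self-contained sketch using a throwaway repository (the `your-username` fork URL is a placeholder):

```shell
# Throwaway repo that mirrors the remote layout from step 1.
tmp="$(mktemp -d)"
git -C "$tmp" init -q
git -C "$tmp" remote add origin https://github.com/your-username/filesystem-charms.git
git -C "$tmp" remote add upstream https://github.com/charmed-hpc/filesystem-charms.git
# Lists each remote twice: once for fetch, once for push.
remotes="$(git -C "$tmp" remote -v)"
echo "$remotes"
rm -rf "$tmp"
```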

2. If you cloned a while ago, pull the latest changes from the upstream filesystem-charms repository:

```bash
git checkout main
```

```bash
# Apply formatting standards to code.
just repo fmt

# Check code against coding style standards.
just repo lint

# Run type checking.
just repo typecheck

# Run unit tests.
just repo unit

# Run integration tests.
just repo integration
```

5. Commit your changes in logical chunks to your topic branch.
README.md (142 changes: 116 additions & 26 deletions)
# Filesystem charms

[![Charmhub Badge](https://charmhub.io/filesystem-client/badge.svg)](https://charmhub.io/filesystem-client)
[![CI](https://github.com/charmed-hpc/filesystem-charms/actions/workflows/ci.yaml/badge.svg)](https://github.com/charmed-hpc/filesystem-charms/actions/workflows/ci.yaml)
[![Publish](https://github.com/charmed-hpc/filesystem-charms/actions/workflows/publish.yaml/badge.svg)](https://github.com/charmed-hpc/filesystem-charms/actions/workflows/publish.yaml)
[![Matrix](https://img.shields.io/matrix/ubuntu-hpc%3Amatrix.org?logo=matrix&label=ubuntu-hpc)](https://matrix.to/#/#ubuntu-hpc:matrix.org)

[Juju](https://juju.is) charms to manage shared filesystems.

The `filesystem-charms` repository is a collection of charmed operators that enables you to provide,
request, and mount shared filesystems. We currently have:

* [`filesystem-operator`](./charms/filesystem-operator/): requests and mounts exported filesystems on virtual machines.
* [`nfs-server-proxy-operator`](./charms/nfs-server-proxy/): exports NFS shares from NFS servers not managed by Juju.
* [`cephfs-server-proxy-operator`](./charms/cephfs-server-proxy/): exports Ceph filesystems from Ceph clusters not managed by Juju.

## ✨ Getting started

#### With a minimal NFS kernel server

First, launch a virtual machine using [LXD](https://ubuntu.com/lxd):

```shell
$ snap install lxd
$ lxd init --auto
$ lxc launch ubuntu:24.04 nfs-server --vm
$ lxc shell nfs-server
```

Inside the LXD virtual machine, set up an NFS kernel server that exports
a _/data_ directory:

```shell
apt update && apt upgrade
apt install nfs-kernel-server
mkdir -p /data
cat << 'EOF' > /etc/exports
/srv *(ro,sync,subtree_check)
/data *(rw,sync,no_subtree_check,no_root_squash)
EOF
exportfs -a
systemctl restart nfs-kernel-server
```
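For reference, each `/etc/exports` line follows the format `<directory> <allowed-clients>(<options>)`. The options used above, annotated (a commented summary, not additional exports):

```shell
# /etc/exports option reference for the entries above:
#   *                   accept any client host
#   rw / ro             read-write vs. read-only access
#   sync                reply only after writes reach stable storage
#   (no_)subtree_check  whether the server verifies paths stay inside the export
#   no_root_squash      keep client root as root instead of mapping it to nobody
```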

> You can verify that the NFS server is exporting the desired directories
> by using the command `showmount -e localhost` while inside the LXD virtual machine.

Grab the network address of the LXD virtual machine and then exit the current shell session:

```shell
hostname -I
exit
```
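`hostname -I` may print several addresses (IPv6 included), separated by spaces, while the proxy charm's `hostname` option needs just one. A minimal sketch of picking the first address, shown on a captured sample rather than live `hostname -I` output:

```shell
# Sample of what `hostname -I` can print inside the VM (addresses are illustrative).
sample="10.227.60.10 fd42:beef::1"
# awk prints the first whitespace-separated field, i.e. the first address.
addr="$(printf '%s\n' "$sample" | awk '{print $1}')"
echo "$addr"
```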

Now deploy the NFS server proxy operator with the filesystem client operator and the principal charm:

```shell
$ juju deploy nfs-server-proxy --channel latest/edge \
--config hostname=<IPv4 address of LXD virtual machine> \
--config path=/data
$ juju deploy filesystem-client data --config mountpoint=/data
$ juju deploy ubuntu --base ubuntu@24.04
$ juju integrate data:juju-info ubuntu:juju-info
$ juju integrate data:filesystem nfs-server-proxy:filesystem
```

#### With MicroCeph

First, launch a virtual machine using [LXD](https://ubuntu.com/lxd):

```shell
$ snap install lxd
$ lxd init --auto
$ lxc launch ubuntu:22.04 cephfs-server --vm
$ lxc shell cephfs-server
```

Inside the LXD virtual machine, set up [MicroCeph](https://github.com/canonical/microceph) to export a Ceph filesystem:

```shell
ln -s /bin/true /usr/local/bin/udevadm
apt-get -y update
apt-get -y install ceph-common jq
snap install microceph
microceph cluster bootstrap
microceph disk add loop,2G,3
microceph.ceph osd pool create cephfs_data
microceph.ceph osd pool create cephfs_metadata
microceph.ceph fs new cephfs cephfs_metadata cephfs_data
microceph.ceph fs authorize cephfs client.fs-client / rw # Creates a new `fs-client` user.
```
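The final `fs authorize` command creates a cephx user named `fs-client` scoped to the `cephfs` filesystem. Conceptually, the resulting keyring entry looks like the following (illustrative sketch; the key and exact capability strings come from your cluster):

```ini
[client.fs-client]
    key = AQD...example-key...==
    caps mds = "allow rw fsname=cephfs"
    caps mon = "allow r fsname=cephfs"
    caps osd = "allow rw tag cephfs data=cephfs"
```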

> You can verify that the CephFS server is working correctly by using the command
> `microceph.ceph fs status cephfs` while inside the LXD virtual machine.

To mount the Ceph filesystem, you'll need some information that you can get with a couple of commands:

```shell
export HOST=$(hostname -I | tr -d '[:space:]'):6789
export FSID=$(microceph.ceph -s -f json | jq -r '.fsid')
export CLIENT_KEY=$(microceph.ceph auth print-key client.fs-client)
```
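Note how `HOST` is built: `tr -d '[:space:]'` strips the trailing space and newline from `hostname -I` so the `:6789` monitor port concatenates cleanly. A self-contained sketch on sample output (with multiple addresses this would mash them together, so it assumes a single address):

```shell
# `hostname -I` output ends with a space and a newline; simulate it with printf.
host="$(printf '10.227.60.15 \n' | tr -d '[:space:]'):6789"
echo "$host"
```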

Print the required information for reference and then exit the current shell session:

```shell
echo $HOST
echo $FSID
echo $CLIENT_KEY
exit
```

Now deploy the CephFS server proxy operator with the filesystem client operator and the principal charm:

```shell
juju add-model ceph
juju deploy cephfs-server-proxy --channel latest/edge \
--config fsid=<FSID> \
--config sharepoint=cephfs:/ \
--config monitor-hosts=<HOST> \
--config auth-info=fs-client:<CLIENT_KEY>
juju deploy ubuntu --base ubuntu@24.04 --constraints virt-type=virtual-machine
juju deploy filesystem-client data --channel latest/edge --config mountpoint=/data
juju integrate data:juju-info ubuntu:juju-info
juju integrate data:filesystem cephfs-server-proxy:filesystem
```

## 🤝 Project and community

The filesystem charms are a project of the [Ubuntu High-Performance Computing community](https://ubuntu.com/community/governance/teams/hpc).
It is an open source project that welcomes community involvement, contributions, suggestions, fixes, and
constructive feedback. Interested in being involved with the development of the filesystem charms? Check out the links below:

* [Join our online chat](https://matrix.to/#/#ubuntu-hpc:matrix.org)
* [Contributing guidelines](./CONTRIBUTING.md)
* [Code of conduct](https://ubuntu.com/community/ethos/code-of-conduct)
* [File a bug report](https://github.com/charmed-hpc/filesystem-charms/issues)
* [Juju SDK docs](https://juju.is/docs/sdk)

## 📋 License

The filesystem charms are free software, distributed under the
Apache Software License, version 2.0. See the [LICENSE](./LICENSE) file for more information.
