docs: Add information about OpenNebula integration
- Exclude doc build output from git
- Fix missing doc build dependency
- Also includes some involuntary automatically persistent linting by vscode

Co-authored-by: Ilya Dryomov <[email protected]>
Co-authored-by: Anthony D'Atri <[email protected]>
Co-authored-by: Zac Dover <[email protected]>
Signed-off-by: Daniel Clavijo <[email protected]>
4 people committed Dec 17, 2023
1 parent 0e36db9 commit ee2ee31
Showing 8 changed files with 337 additions and 315 deletions.
14 changes: 14 additions & 0 deletions .gitignore
@@ -83,3 +83,17 @@ GTAGS
# Python building things where it shouldn't
/src/python-common/build/
.cache

# Doc build output
src/pybind/cephfs/build/
src/pybind/cephfs/cephfs.c
src/pybind/cephfs/cephfs.egg-info/
src/pybind/rados/build/
src/pybind/rados/rados.c
src/pybind/rados/rados.egg-info/
src/pybind/rbd/build/
src/pybind/rbd/rbd.c
src/pybind/rbd/rbd.egg-info/
src/pybind/rgw/build/
src/pybind/rgw/rgw.c
src/pybind/rgw/rgw.egg-info/
229 changes: 115 additions & 114 deletions doc/architecture.rst

Large diffs are not rendered by default.

6 changes: 4 additions & 2 deletions doc/install/index.rst
@@ -4,13 +4,13 @@
Installing Ceph
===============

There are multiple ways to install Ceph.

Recommended methods
~~~~~~~~~~~~~~~~~~~

:ref:`Cephadm <cephadm_deploying_new_cluster>` is a tool that can be used to
install and manage a Ceph cluster.

* cephadm supports only Octopus and newer releases.
* cephadm is fully integrated with the orchestration API and fully supports the
@@ -59,6 +59,8 @@ tool that can be used to quickly deploy clusters. It is deprecated.

`github.com/openstack/puppet-ceph <https://github.com/openstack/puppet-ceph>`_ installs Ceph via Puppet.

`OpenNebula HCI clusters <https://docs.opennebula.io/stable/provision_clusters/hci_clusters/overview.html>`_ deploy Ceph on various cloud platforms.

Ceph can also be :ref:`installed manually <install-manual>`.


7 changes: 4 additions & 3 deletions doc/rbd/index.rst
@@ -32,9 +32,9 @@ the ``librbd`` library.

Ceph's block devices deliver high performance with vast scalability to
`kernel modules`_, or to :abbr:`KVMs (kernel virtual machines)` such as `QEMU`_, and
cloud-based computing systems like `OpenStack`_, `OpenNebula`_ and `CloudStack`_
that rely on libvirt and QEMU to integrate with Ceph block devices. You can use
the same cluster to operate the :ref:`Ceph RADOS Gateway <object-gateway>`, the
:ref:`Ceph File System <ceph-file-system>`, and Ceph block devices simultaneously.

.. important:: To use Ceph Block Devices, you must have access to a running
@@ -69,4 +69,5 @@ to operate the :ref:`Ceph RADOS Gateway <object-gateway>`, the
.. _kernel modules: ./rbd-ko/
.. _QEMU: ./qemu-rbd/
.. _OpenStack: ./rbd-openstack
.. _OpenNebula: https://docs.opennebula.io/stable/open_cluster_deployment/storage_setup/ceph_ds.html
.. _CloudStack: ./rbd-cloudstack
122 changes: 62 additions & 60 deletions doc/rbd/libvirt.rst
@@ -4,11 +4,11 @@

.. index:: Ceph Block Device; libvirt

The ``libvirt`` library creates a virtual machine abstraction layer between
hypervisor interfaces and the software applications that use them. With
``libvirt``, developers and system administrators can focus on a common
management framework, common API, and common shell interface (i.e., ``virsh``)
to many different hypervisors, including:

- QEMU/KVM
- XEN
@@ -18,7 +18,7 @@ to many different hypervisors, including:

Ceph block devices support QEMU/KVM. You can use Ceph block devices with
software that interfaces with ``libvirt``. The following stack diagram
illustrates how ``libvirt`` and QEMU use Ceph block devices via ``librbd``.


.. ditaa::
@@ -41,10 +41,11 @@ illustrates how ``libvirt`` and QEMU use Ceph block devices via ``librbd``.


The most common ``libvirt`` use case involves providing Ceph block devices to
cloud solutions like OpenStack, OpenNebula or CloudStack. The cloud solution uses
``libvirt`` to interact with QEMU/KVM, and QEMU/KVM interacts with Ceph block
devices via ``librbd``. See `Block Devices and OpenStack`_,
`Block Devices and OpenNebula`_ and `Block Devices and CloudStack`_ for details.
See `Installation`_ for installation details.

You can also use Ceph block devices with ``libvirt``, ``virsh`` and the
``libvirt`` API. See `libvirt Virtualization API`_ for details.
@@ -62,12 +63,12 @@ Configuring Ceph

To configure Ceph for use with ``libvirt``, perform the following steps:

#. `Create a pool`_. The following example uses the
pool name ``libvirt-pool``.::

ceph osd pool create libvirt-pool

Verify the pool exists. ::

ceph osd lspools

@@ -80,23 +81,23 @@ To configure Ceph for use with ``libvirt``, perform the following steps:
and references ``libvirt-pool``. ::

ceph auth get-or-create client.libvirt mon 'profile rbd' osd 'profile rbd pool=libvirt-pool'

Verify the name exists. ::

ceph auth ls

**NOTE**: ``libvirt`` will access Ceph using the ID ``libvirt``,
not the Ceph name ``client.libvirt``. See `User Management - User`_ and
`User Management - CLI`_ for a detailed explanation of the difference
between ID and name.
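
For example (a sketch only, assuming the ``client.libvirt`` key is available
in a keyring on the host where you run the commands), client tools take the
ID, while ``ceph auth`` commands take the full name. ::

	rbd --id libvirt -p libvirt-pool ls
	ceph auth get client.libvirt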

#. Use QEMU to `create an image`_ in your RBD pool.
The following example uses the image name ``new-libvirt-image``
and references ``libvirt-pool``. ::

qemu-img create -f rbd rbd:libvirt-pool/new-libvirt-image 2G

Verify the image exists. ::

rbd -p libvirt-pool ls
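
Optionally (a supplementary check, not part of the original procedure),
``rbd info`` shows the size, format, and features of the new image. ::

	rbd info libvirt-pool/new-libvirt-image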

@@ -111,7 +112,7 @@ To configure Ceph for use with ``libvirt``, perform the following steps:
admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok

The ``client.libvirt`` section name should match the cephx user you created
above.
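
Put together, the section might look like the following sketch (the
``log file`` line is optional and its path is only an example; both paths
must be writable by QEMU). ::

	[client.libvirt]
	admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok
	log file = /var/log/ceph/qemu-guest-$pid.log
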
If SELinux or AppArmor is enabled, note that this could prevent the client
process (QEMU via ``libvirt``) from performing some operations, such as
writing logs or operating on the images or admin socket at the destination
locations (``/var/
@@ -123,15 +124,15 @@ Preparing the VM Manager
========================

You may use ``libvirt`` without a VM manager, but you may find it simpler to
create your first domain with ``virt-manager``.

#. Install a virtual machine manager. See `KVM/VirtManager`_ for details. ::

sudo apt-get install virt-manager

#. Download an OS image (if necessary).
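
For example (a hypothetical source; substitute whatever image suits your
distribution and hypervisor)::

	wget https://cloud-images.ubuntu.com/jammy/current/jammy-server-cloudimg-amd64.img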

#. Launch the virtual machine manager. ::

sudo virt-manager

@@ -142,22 +143,22 @@ Creating a VM

To create a VM with ``virt-manager``, perform the following steps:

#. Press the **Create New Virtual Machine** button.

#. Name the new virtual machine domain. In this example, we
use the name ``libvirt-virtual-machine``. You may use any name you wish,
but ensure you replace ``libvirt-virtual-machine`` with the name you
choose in subsequent command-line and configuration examples. ::

libvirt-virtual-machine

#. Import the image. ::

/path/to/image/recent-linux.img

**NOTE:** Import a recent image. Some older images may not rescan for
virtual devices properly.

#. Configure and start the VM.

#. You may use ``virsh list`` to verify the VM domain exists. ::
@@ -179,11 +180,11 @@ you that root privileges are required. For a reference of ``virsh``
commands, refer to `Virsh Command Reference`_.


#. Open the configuration file with ``virsh edit``. ::

sudo virsh edit {vm-domain-name}

Under ``<devices>`` there should be a ``<disk>`` entry. ::

<devices>
<emulator>/usr/bin/kvm</emulator>
@@ -196,18 +197,18 @@


Replace ``/path/to/image/recent-linux.img`` with the path to the OS image.
The minimum kernel for using the faster ``virtio`` bus is 2.6.25. See
`Virtio`_ for details.

**IMPORTANT:** Use ``sudo virsh edit`` instead of a text editor. If you edit
the configuration file under ``/etc/libvirt/qemu`` with a text editor,
``libvirt`` may not recognize the change. If there is a discrepancy between
the contents of the XML file under ``/etc/libvirt/qemu`` and the result of
``sudo virsh dumpxml {vm-domain-name}``, then your VM may not work
properly.


#. Add the Ceph RBD image you created as a ``<disk>`` entry. ::

<disk type='network' device='disk'>
<source protocol='rbd' name='libvirt-pool/new-libvirt-image'>
@@ -216,21 +217,21 @@ commands, refer to `Virsh Command Reference`_.
<target dev='vdb' bus='virtio'/>
</disk>

Replace ``{monitor-host}`` with the name of your host, and replace the
pool and/or image name as necessary. You may add multiple ``<host>``
entries for your Ceph monitors. The ``dev`` attribute is the logical
device name that will appear under the ``/dev`` directory of your
VM. The optional ``bus`` attribute indicates the type of disk device to
emulate. The valid settings are driver specific (e.g., "ide", "scsi",
"virtio", "xen", "usb" or "sata").

See `Disks`_ for details of the ``<disk>`` element, and its child elements
and attributes.
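
For example, a ``<source>`` element that lists several monitors might look
like the following sketch (the monitor hostnames are placeholders; use your
own monitor addresses). ::

	<source protocol='rbd' name='libvirt-pool/new-libvirt-image'>
	<host name='mon1.example.com' port='6789'/>
	<host name='mon2.example.com' port='6789'/>
	<host name='mon3.example.com' port='6789'/>
	</source>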

#. Save the file.

#. If your Ceph Storage Cluster has `Ceph Authentication`_ enabled (it does by
default), you must generate a secret. ::

cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
@@ -249,11 +250,11 @@ commands, refer to `Virsh Command Reference`_.

ceph auth get-key client.libvirt | sudo tee client.libvirt.key

#. Set the UUID of the secret. ::

sudo virsh secret-set-value --secret {uuid of secret} --base64 $(cat client.libvirt.key) && rm client.libvirt.key secret.xml

You must also set the secret manually by adding the following ``<auth>``
entry to the ``<disk>`` element you entered earlier (replacing the
``uuid`` value with the result from the command line example above). ::

@@ -266,14 +267,14 @@ commands, refer to `Virsh Command Reference`_.
<auth username='libvirt'>
<secret type='ceph' uuid='{uuid of secret}'/>
</auth>
<target ...


**NOTE:** The ID used in this example is ``libvirt``, not the Ceph name
``client.libvirt`` as generated at step 2 of `Configuring Ceph`_. Ensure
you use the ID component of the Ceph name you generated. If for some reason
you need to regenerate the secret, you will have to execute
``sudo virsh secret-undefine {uuid}`` before executing
``sudo virsh secret-set-value`` again.


@@ -285,30 +286,31 @@ To verify that the VM and Ceph are communicating, you may perform the
following procedures.


#. Check to see if Ceph is running::

ceph health

#. Check to see if the VM is running. ::

sudo virsh list

#. Check to see if the VM is communicating with Ceph. Replace
``{vm-domain-name}`` with the name of your VM domain::

sudo virsh qemu-monitor-command --hmp {vm-domain-name} 'info block'

#. Check to see if the device from ``<target dev='vdb' bus='virtio'/>`` exists::

virsh domblklist {vm-domain-name} --details

If everything looks okay, you may begin using the Ceph block device
within your VM.
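
For example, a minimal sketch of putting the device to use inside the guest
(this assumes the ``vdb`` target configured above and an empty, unformatted
disk; adjust the device name and filesystem to your environment). ::

	lsblk
	sudo mkfs.ext4 /dev/vdb
	sudo mount /dev/vdb /mnt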


.. _Installation: ../../install
.. _libvirt Virtualization API: http://www.libvirt.org
.. _Block Devices and OpenStack: ../rbd-openstack
.. _Block Devices and OpenNebula: https://docs.opennebula.io/stable/open_cluster_deployment/storage_setup/ceph_ds.html#datastore-internals
.. _Block Devices and CloudStack: ../rbd-cloudstack
.. _Create a pool: ../../rados/operations/pools#create-a-pool
.. _Create a Ceph User: ../../rados/operations/user-management#add-a-user