Update content to Antelope and misc changes
seunghun1ee committed Sep 12, 2024
1 parent ba4f72f commit 34f41b6
Showing 4 changed files with 33 additions and 68 deletions.
36 changes: 18 additions & 18 deletions doc/source/configuration/wazuh.rst
@@ -34,14 +34,14 @@ Provisioning an infra VM for Wazuh Manager.
Kayobe supports :kayobe-doc:`provisioning infra VMs <deployment.html#infrastructure-vms>`.
The following configuration may be used as a guide. Config for infra VMs is documented :kayobe-doc:`here <configuration/reference/infra-vms>`.

-Add a Wazuh Manager host to the ``wazuh-manager`` group in ``etc/kayobe/inventory/hosts``.
+Add a Wazuh Manager host to the ``wazuh-manager`` group in ``$KAYOBE_CONFIG_PATH/inventory/hosts``.

.. code-block:: ini
[wazuh-manager]
os-wazuh
-Add the ``wazuh-manager`` group to the ``infra-vms`` group in ``etc/kayobe/inventory/groups``.
+Add the ``wazuh-manager`` group to the ``infra-vms`` group in ``$KAYOBE_CONFIG_PATH/inventory/groups``.

.. code-block:: ini
@@ -50,7 +50,7 @@ Add the ``wazuh-manager`` group to the ``infra-vms`` group in ``etc/kayobe/inven
[infra-vms:children]
wazuh-manager
-Define VM sizing in ``etc/kayobe/inventory/group_vars/wazuh-manager/infra-vms``:
+Define VM sizing in ``$KAYOBE_CONFIG_PATH/inventory/group_vars/wazuh-manager/infra-vms``:

.. code-block:: yaml
@@ -64,7 +64,7 @@ Define VM sizing in ``etc/kayobe/inventory/group_vars/wazuh-manager/infra-vms``:
# Capacity of the infra VM data volume.
infra_vm_data_capacity: "200G"
-Optional: define LVM volumes in ``etc/kayobe/inventory/group_vars/wazuh-manager/lvm``.
+Optional: define LVM volumes in ``$KAYOBE_CONFIG_PATH/inventory/group_vars/wazuh-manager/lvm``.
``/var/ossec`` often requires more storage space, and a dedicated volume for
``/var/lib/wazuh-indexer`` may also be beneficial.

@@ -86,7 +86,7 @@ may be beneficial too.
create: true
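
The full LVM example is collapsed in this view. As a rough sketch only (the volume group name, backing device and sizes are illustrative assumptions rather than the documented values), such a definition might look like:

.. code-block:: yaml

   infra_vm_lvm_groups:
     - vgname: data
       disks:
         - /dev/vdb
       create: true
       lvnames:
         # Dedicated volume for Wazuh manager state.
         - lvname: ossec
           size: 80%VG
           create: true
           filesystem: ext4
           mount: true
           mntp: /var/ossec
           mkfs_opts: -F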
-Define network interfaces in ``etc/kayobe/inventory/group_vars/wazuh-manager/network-interfaces``:
+Define network interfaces in ``$KAYOBE_CONFIG_PATH/inventory/group_vars/wazuh-manager/network-interfaces``:

(The following is an example - the names will depend on your particular network configuration.)

@@ -98,7 +98,7 @@ Define network interfaces ``etc/kayobe/inventory/group_vars/wazuh-manager/networ
The Wazuh manager may need to be exposed externally, in which case it may require another interface.
-This can be done as follows in ``etc/kayobe/inventory/group_vars/wazuh-manager/network-interfaces``,
+This can be done as follows in ``$KAYOBE_CONFIG_PATH/inventory/group_vars/wazuh-manager/network-interfaces``,
with the network defined in ``networks.yml`` as usual.

.. code-block:: yaml
@@ -190,7 +190,7 @@ Deploying Wazuh Manager services
Setup
-----

-To install a specific version, modify the ``wazuh-ansible`` entry in ``etc/kayobe/ansible/requirements.yml``:
+To install a specific version, modify the ``wazuh-ansible`` entry in ``$KAYOBE_CONFIG_PATH/ansible/requirements.yml``:

.. code-block:: yaml
@@ -211,7 +211,7 @@ Edit the playbook and variables to your needs:
Wazuh manager configuration
---------------------------

-The Wazuh manager playbook is located in ``etc/kayobe/ansible/wazuh-manager.yml``.
+The Wazuh manager playbook is located in ``$KAYOBE_CONFIG_PATH/ansible/wazuh-manager.yml``.
Running this playbook will:

* generate certificates for wazuh-manager
@@ -221,7 +221,7 @@ Running this playbook will:
* set up and deploy wazuh-dashboard on the wazuh-manager VM
* copy certificates over to the wazuh-manager VM
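
The playbook can be run as a custom Kayobe playbook (after reviewing the variables described below), for example:

.. code-block:: console

   kayobe# kayobe playbook run $KAYOBE_CONFIG_PATH/ansible/wazuh-manager.yml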

-The Wazuh manager variables file is located in ``etc/kayobe/inventory/group_vars/wazuh-manager/wazuh-manager``.
+The Wazuh manager variables file is located in ``$KAYOBE_CONFIG_PATH/inventory/group_vars/wazuh-manager/wazuh-manager``.

You may need to modify some of the variables, including:

@@ -232,27 +232,27 @@ You may need to modify some of the variables, including:

If you are using multiple environments and need to customise Wazuh in
each environment, create override files in an appropriate directory,
-for example ``etc/kayobe/environments/production/inventory/group_vars/``.
+for example ``$KAYOBE_CONFIG_PATH/environments/production/inventory/group_vars/``.

Files whose values can be overridden (in the context of Wazuh):

-- etc/kayobe/inventory/group_vars/wazuh/wazuh-manager/wazuh-manager
-- etc/kayobe/wazuh-manager.yml
-- etc/kayobe/inventory/group_vars/wazuh/wazuh-agent/wazuh-agent
+- $KAYOBE_CONFIG_PATH/inventory/group_vars/wazuh/wazuh-manager/wazuh-manager
+- $KAYOBE_CONFIG_PATH/wazuh-manager.yml
+- $KAYOBE_CONFIG_PATH/inventory/group_vars/wazuh/wazuh-agent/wazuh-agent

You'll need to run the ``wazuh-manager.yml`` playbook again to apply the customisation.
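
For example, to re-apply the configuration for a single environment (assuming an environment named ``production`` and the usual ``kayobe-env`` helper in the kayobe configuration checkout):

.. code-block:: console

   kayobe# source kayobe-env --environment production
   kayobe# kayobe playbook run $KAYOBE_CONFIG_PATH/ansible/wazuh-manager.yml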

Secrets
-------

Wazuh requires that secrets or passwords are set for itself and the services with which it communicates.
-The Wazuh secrets playbook is located in ``etc/kayobe/ansible/wazuh-secrets.yml``.
+The Wazuh secrets playbook is located in ``$KAYOBE_CONFIG_PATH/ansible/wazuh-secrets.yml``.
Running this playbook will generate the pertinent security items and put them into a
secrets vault file, which will be placed in ``$KAYOBE_CONFIG_PATH/wazuh-secrets.yml``.
If using environments, it ends up in ``$KAYOBE_CONFIG_PATH/environments/<env_name>/wazuh-secrets.yml``.
Remember to encrypt this file!
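
A typical invocation might look like the following (the vault password file path is purely illustrative):

.. code-block:: console

   kayobe# kayobe playbook run $KAYOBE_CONFIG_PATH/ansible/wazuh-secrets.yml
   kayobe# ansible-vault encrypt --vault-password-file ~/vault.password $KAYOBE_CONFIG_PATH/wazuh-secrets.yml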

-The Wazuh secrets template is located in ``etc/kayobe/ansible/templates/wazuh-secrets.yml.j2``.
+The Wazuh secrets template is located in ``$KAYOBE_CONFIG_PATH/ansible/templates/wazuh-secrets.yml.j2``.
It is used by the Wazuh secrets playbook to generate the Wazuh secrets vault file.


@@ -380,7 +380,7 @@ Verification
------------

The Wazuh portal should be accessible on port 443 of the Wazuh
-manager’s IPs (using HTTPS, with the root CA cert in ``etc/kayobe/ansible/wazuh/certificates/wazuh-certificates/root-ca.pem``).
+manager’s IPs (using HTTPS, with the root CA cert in ``$KAYOBE_CONFIG_PATH/ansible/wazuh/certificates/wazuh-certificates/root-ca.pem``).
The first login should be as the admin user,
with the ``opendistro_admin_password`` password in ``$KAYOBE_CONFIG_PATH/wazuh-secrets.yml``.
This will create the necessary indices.
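
As a quick connectivity check from the Ansible control host (a sketch; substitute the manager's address, which must be covered by the generated certificate):

.. code-block:: console

   kayobe# curl --cacert $KAYOBE_CONFIG_PATH/ansible/wazuh/certificates/wazuh-certificates/root-ca.pem https://<wazuh-manager-ip>/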
@@ -392,9 +392,9 @@ Logs are in ``/var/log/wazuh-indexer/wazuh.log``. There are also logs in the jou
Wazuh agents
============

-The Wazuh agent playbook is located in ``etc/kayobe/ansible/wazuh-agent.yml``.
+The Wazuh agent playbook is located in ``$KAYOBE_CONFIG_PATH/ansible/wazuh-agent.yml``.
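
As with the manager, it can be run as a custom Kayobe playbook, optionally limited to a subset of hosts, for example:

.. code-block:: console

   kayobe# kayobe playbook run $KAYOBE_CONFIG_PATH/ansible/wazuh-agent.yml --limit <hostname>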

-The Wazuh agent variables file is located in ``etc/kayobe/inventory/group_vars/wazuh-agent/wazuh-agent``.
+The Wazuh agent variables file is located in ``$KAYOBE_CONFIG_PATH/inventory/group_vars/wazuh-agent/wazuh-agent``.

You may need to modify some variables, including:

12 changes: 6 additions & 6 deletions doc/source/operations/ceph-management.rst
@@ -8,14 +8,14 @@ Working with Cephadm
This documentation provides a guide for Ceph operations. For deploying Ceph,
please refer to the :ref:`cephadm-kayobe` documentation.

-cephadm configuration location
+Cephadm configuration location
------------------------------

In the kayobe-config repository, the configuration is kept in ``etc/kayobe/cephadm.yml``
(or, when using multiple Kayobe environments, in a specific environment, e.g.
``etc/kayobe/environments/<Environment Name>/cephadm.yml``).

-StackHPC's cephadm Ansible collection relies on multiple inventory groups:
+StackHPC's Cephadm Ansible collection relies on multiple inventory groups:

- ``mons``
- ``mgrs``
@@ -24,11 +24,11 @@ StackHPC's cephadm Ansible collection relies on multiple inventory groups:

Those groups are usually defined in ``etc/kayobe/inventory/groups``.
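
The mapping of these groups to hosts is site specific; a common arrangement (purely illustrative, assuming monitors and managers run on the controllers and OSDs on dedicated storage hosts) might be:

.. code-block:: ini

   [mons:children]
   controllers

   [mgrs:children]
   controllers

   [osds:children]
   storage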

-Running cephadm playbooks
+Running Cephadm playbooks
-------------------------

In the kayobe-config repository, under ``etc/kayobe/ansible`` there is a set of
-cephadm-based playbooks utilising the stackhpc.cephadm Ansible Galaxy collection.
+Cephadm-based playbooks utilising the stackhpc.cephadm Ansible Galaxy collection.

- ``cephadm.yml`` - runs the end-to-end process, starting with deployment and
  defining EC profiles/crush rules/pools and users
@@ -176,11 +176,11 @@ Remove the OSD using Ceph orchestrator command:
ceph orch osd rm <ID> --replace
After removing OSDs, if the drives the OSDs were deployed on once again become
-available, cephadm may automatically try to deploy more OSDs on these drives if
+available, Cephadm may automatically try to deploy more OSDs on these drives if
they match an existing drivegroup spec.
If this is not the desired behaviour, it's best to modify the drivegroup
spec beforehand (the ``cephadm_osd_spec`` variable in ``etc/kayobe/cephadm.yml``).
-Either set ``unmanaged: true`` to stop cephadm from picking up new disks or
+Either set ``unmanaged: true`` to stop Cephadm from picking up new disks or
modify it so that it no longer matches the drives you want to remove.
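
For illustration, a spec along the following lines (the service ID, placement and device filter are assumptions; adapt them to your existing drivegroup definition) stops Cephadm from creating OSDs automatically:

.. code-block:: yaml

   cephadm_osd_spec:
     service_type: osd
     service_id: osd_spec_default
     placement:
       host_pattern: "*"
     data_devices:
       all: true
     # Prevent automatic creation of OSDs on matching drives.
     unmanaged: true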

Host maintenance
47 changes: 6 additions & 41 deletions doc/source/operations/control-plane-operation.rst
@@ -26,7 +26,7 @@ Monitoring
----------

* `Back up InfluxDB <https://docs.influxdata.com/influxdb/v1.8/administration/backup_and_restore/>`__
-* `Back up ElasticSearch <https://www.elastic.co/guide/en/elasticsearch/reference/current/backup-cluster-data.html>`__
+* `Back up OpenSearch <https://opensearch.org/docs/latest/tuning-your-cluster/availability-and-recovery/snapshots/snapshot-restore/>`__
* `Back up Prometheus <https://prometheus.io/docs/prometheus/latest/querying/api/#snapshot>`__

Seed
@@ -42,8 +42,8 @@ Ansible control host
Control Plane Monitoring
========================

-The control plane has been configured to collect logs centrally using the EFK
-stack (Elasticsearch, Fluentd and Kibana).
+The control plane has been configured to collect logs centrally using Fluentd,
+OpenSearch and OpenSearch Dashboards.

Telemetry monitoring of the control plane is performed by Prometheus. Metrics
are collected by Prometheus exporters, which are either running on all hosts
@@ -227,7 +227,7 @@ Overview
* Remove the node from maintenance mode in bifrost
* Bifrost should automatically power on the node via IPMI
* Check that all docker containers are running
-* Check Kibana for any messages with log level ERROR or equivalent
+* Check OpenSearch Dashboards for any messages with log level ERROR or equivalent
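
For the container check, something along these lines (the prompt is illustrative) lists any containers that are not running:

.. code-block:: console

   compute0# docker ps -a --filter status=exited --format '{{.Names}}\t{{.Status}}'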

Controllers
-----------
@@ -277,7 +277,7 @@ Stop all Docker containers:

.. code-block:: console
-monitoring0# for i in `docker ps -q`; do docker stop $i; done
+monitoring0# for i in `docker ps -a --format '{{.Names}}'`; do systemctl stop kolla-$i-container; done
Shut down the node:

@@ -342,21 +342,6 @@ Host packages can be updated with:
See https://docs.openstack.org/kayobe/latest/administration/overcloud.html#updating-packages
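
For example (a sketch; check the linked documentation for the exact options available in your Kayobe release):

.. code-block:: console

   kayobe# kayobe overcloud host package update --packages "*" --limit <hostname>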

-Upgrading OpenStack Services
-----------------------------
-
-* Update tags for the images in ``etc/kayobe/kolla-image-tags.yml``
-* Pull container images to overcloud hosts with ``kayobe overcloud container image pull``
-* Run ``kayobe overcloud service upgrade``
-
-You can update the subset of containers or hosts by
-
-.. code-block:: console
-   kayobe# kayobe overcloud service upgrade --kolla-tags <service> --limit <hostname> --kolla-limit <hostname>
-For more information, see: https://docs.openstack.org/kayobe/latest/upgrading.html

Troubleshooting
===============

@@ -378,27 +363,7 @@ To boot an instance on a specific hypervisor

.. code-block:: console
-openstack server create --flavor <flavour name>--network <network name> --key-name <key> --image <Image name> --os-compute-api-version 2.74 --host <hypervisor hostname> <vm name>
-Cleanup Procedures
-==================
-
-OpenStack services can sometimes fail to remove all resources correctly. This
-is the case with Magnum, which fails to clean up users in its domain after
-clusters are deleted. `A patch has been submitted to stable branches
-<https://review.opendev.org/#/q/Ibadd5b57fe175bb0b100266e2dbcc2e1ea4efcf9>`__.
-Until this fix becomes available, if Magnum is in use, administrators can
-perform the following cleanup procedure regularly:
-
-.. code-block:: console
-for user in $(openstack user list --domain magnum -f value -c Name | grep -v magnum_trustee_domain_admin); do
-if openstack coe cluster list -c uuid -f value | grep -q $(echo $user | sed 's/_[0-9a-f]*$//'); then
-echo "$user still in use, not deleting"
-else
-openstack user delete --domain magnum $user
-fi
-done
+openstack server create --flavor <flavour name> --network <network name> --key-name <key name> --image <image name> --os-compute-api-version 2.74 --host <hypervisor hostname> <vm name>
OpenSearch indexes retention
=============================
6 changes: 3 additions & 3 deletions doc/source/operations/customising-horizon.rst
@@ -113,6 +113,6 @@ If the ``horizon`` container is restarting with the following error:
/var/lib/kolla/venv/bin/python /var/lib/kolla/venv/bin/manage.py compress --force
CommandError: An error occurred during rendering /var/lib/kolla/venv/lib/python3.6/site-packages/openstack_dashboard/templates/horizon/_scripts.html: Couldn't find any precompiler in COMPRESS_PRECOMPILERS setting for mimetype '\'text/javascript\''.
-It can be resolved by dropping cached content with ``docker restart
-memcached``. Note this will log out users from Horizon, as Django sessions are
-stored in Memcached.
+It can be resolved by dropping cached content with ``systemctl restart
+kolla-memcached-container``. Note this will log out users from Horizon, as Django
+sessions are stored in Memcached.
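
In practice the fix looks something like the following (the controller prompt is illustrative); the ``horizon`` container should settle after Memcached restarts:

.. code-block:: console

   controller0# systemctl restart kolla-memcached-container
   controller0# docker ps --filter name=horizon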
