Add an alternative standalone, container-based dev environment
The current development environment requires four separate VMs
(three from tinystage and one for Bodhi itself), plus some
containers running inside the Bodhi VM. It's pretty heavy!

This provides an alternative development environment that is
standalone and entirely based on containers: a postgres
container for waiverdb to use, a waiverdb container, a
greenwave container, and a bodhi container that is as similar
as possible to the bodhi VM from the existing environment,
deployed using the same ansible plays (with a few conditional
changes). Instead of using the tinystage environment to provide
a 'real' Ipsilon instance for auth, this uses the permissive
auth policy used by the unit tests, which means you're always
logged in as 'ralph', an admin. If you need to test more
sophisticated auth stuff, you'll need to use the full-fat
VM-based environment.

This unfortunately overlaps to some extent with
devel/docker/compose-services.yml , the docker-compose definition
used to deploy similar wdb/waiverdb/greenwave containers inside
the bodhi VM in the VM-based environment. I couldn't think of
a clean way to de-duplicate these: we can't 'nest' containers,
so the bodhi container can't run those other containers.

I suspect there are actually some issues in the current dev
environment which I fixed in passing for this one: I don't think
the way the docker-compose definition launches the waiverdb and
greenwave containers is valid any more, so they probably don't
work.

I have tested this environment from a Fedora 39 host using
podman-docker to have podman imitate docker; I have not tested
it with native docker. The most convenient way to use it is to
run `export VAGRANT_VAGRANTFILE=./Vagrantfile.container` and
then use Vagrant commands as normal.
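The startup ordering between containers relies on the bash polling loop used in the triggers below (`until podman container exists bodhi || (( count++ >= 600 ))`). Factored out as a generic helper, the pattern looks like this; the `wait_for` function name and wrapper are illustrative, not part of the commit:

```shell
#!/usr/bin/env bash
# wait_for TIMEOUT CMD [ARGS...]
# Poll CMD once per second until it succeeds or roughly TIMEOUT seconds pass.
# Returns 0 if CMD succeeded, 1 if the timeout was hit first.
wait_for() {
    local timeout=$1
    shift
    local count=0
    # the || short-circuit means count is only incremented on failed polls
    until "$@" || (( count++ >= timeout )); do
        sleep 1
    done
    # if we bailed out via the counter, count ended up at timeout+1
    (( count <= timeout ))
}

# In the Vagrantfile triggers the real condition is:
#   wait_for 600 podman container exists bodhi
```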

Signed-off-by: Adam Williamson <[email protected]>
AdamWill committed Dec 15, 2023
1 parent 3bacc58 commit e469a70
Showing 21 changed files with 638 additions and 325 deletions.
10 changes: 10 additions & 0 deletions Dockerfile.dev-container
@@ -0,0 +1,10 @@
FROM fedora:latest
LABEL maintainer="test"
RUN ["dnf", "-y", "install", "openssh-server", "openssh-clients", "iputils", "systemd", "sssd-client"]
RUN mkdir -p /root/.ssh
RUN curl -o /root/.ssh/authorized_keys https://raw.githubusercontent.com/hashicorp/vagrant/master/keys/vagrant.pub
RUN mkdir -p /home/vagrant
RUN systemctl enable sshd.service
EXPOSE 22
EXPOSE 6543
CMD [ "/usr/sbin/init" ]
6 changes: 5 additions & 1 deletion Vagrantfile
@@ -62,7 +62,11 @@ Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
config.vm.provision "ansible" do |ansible|
ansible.playbook = "devel/ansible/playbook.yml"
ansible.extra_vars = {
fas_username: fas_username
fas_username: fas_username,
in_container: false,
use_freeipa: true,
use_httpd: true,
vagrant_user: "vagrant"
}
end

217 changes: 217 additions & 0 deletions Vagrantfile.container
@@ -0,0 +1,217 @@
# -*- mode: ruby -*-
# vi: set ft=ruby :

# On your host:
# git clone https://github.com/fedora-infra/bodhi.git
# cd bodhi
# export VAGRANT_VAGRANTFILE=./Vagrantfile.container
# vagrant up

# The networking setup: all other containers are set to use the 'bodhi' container's
# namespace (it has to be 'bodhi' because vagrant needs to ssh into 'bodhi', if we
# have bodhi use another container's namespace that does not work). Other containers
# wait for bodhi to be running (via `podman container exists bodhi`) before running,
# using a trigger. bodhi waits for other containers to be up before provisioning.

# This means if you're destroying containers, bodhi must be the last one destroyed.
# Containers are destroyed in reverse order from the command line, so this is the
# safe way to destroy all containers:
# vagrant destroy bodhi wdb waiverdb greenwave rabbitmq
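
# For reference, the namespace sharing Vagrant sets up below is roughly what
# you would get by hand with podman (names as used in this file; the bodhi
# image placeholder is illustrative):
#   podman run -d --name bodhi <image built from Dockerfile.dev-container>
#   podman run -d --name wdb --network=container:bodhi docker.io/library/postgres:latest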

require 'etc'

VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|

# To cache update packages (which is helpful if frequently doing `vagrant destroy && vagrant up`)
# you can create a local directory and share it to the guest's DNF cache. Uncomment the line below
# to create a dnf cache directory, and a line in the Bodhi container config below to use it
#
# Dir.mkdir('.dnf-cache') unless File.exist?('.dnf-cache')

# waiverdb database container
config.vm.define "wdb" do |wdb|
wdb.vm.host_name = "wdb.example.com"
# set up container volumes in a dedicated temp space (so we can relabel them safely)
wdb.trigger.before :up do |trigger1|
trigger1.run = {inline: "rm -rf /tmp/bodhi-dev-wdb"}
end
wdb.trigger.before :up do |trigger2|
trigger2.run = {inline: "mkdir -p /tmp/bodhi-dev-wdb"}
end
wdb.trigger.before :up do |trigger3|
trigger3.run = {inline: "curl -o /tmp/bodhi-dev-wdb/waiverdb.dump.xz https://infrastructure.fedoraproject.org/infra/db-dumps/waiverdb.dump.xz"}
end
wdb.trigger.before :up do |trigger4|
trigger4.run = {inline: "unxz --keep --force /tmp/bodhi-dev-wdb/waiverdb.dump.xz"}
end
wdb.trigger.before :up do |trigger5|
trigger5.run = {inline: "cp ./devel/docker/settings/restore_waiverdb.sh /tmp/bodhi-dev-wdb/restore_waiverdb.sh"}
end
# wait for bodhi to be up, as we're joining its namespace
wdb.trigger.before :up do |trigger6|
trigger6.run = {inline: "bash -c 'until podman container exists bodhi || (( count++ >= 600 )); do sleep 1; done'"}
end
wdb.vm.provider "docker" do |domain|
domain.image = "docker.io/library/postgres:latest"
# this allows all access, but it's kinda pointless to set a
# password since this file is checked into a public repo...
# just be aware this setup leaves an unsecured postgresql
# accessible on your dev box
domain.env = {POSTGRES_HOST_AUTH_METHOD: "trust"}
domain.create_args = ["--health-cmd=pg_isready --host=localhost --username=waiverdb --dbname=waiverdb", "--health-interval=5s", "--health-timeout=30s", "--health-retries=3", "--network=container:bodhi"]
domain.has_ssh = false
domain.volumes = ["/tmp/bodhi-dev-wdb/restore_waiverdb.sh:/docker-entrypoint-initdb.d/restore_db.sh:Z", "/tmp/bodhi-dev-wdb/waiverdb.dump:/docker-entrypoint-initdb.d/wdb_pgdata:Z"]
domain.name = "wdb"
end
end

# waiverdb service container
config.vm.define "waiverdb" do |waiverdb|
waiverdb.vm.host_name = "waiverdb.example.com"
# Forward traffic on the host to the development waiverDB on the guest
waiverdb.vm.network "forwarded_port", guest: 6544, host: 6544
# set up container volumes in a dedicated temp space (so we can relabel them safely)
waiverdb.trigger.before :up do |trigger1|
trigger1.run = {inline: "rm -rf /tmp/bodhi-dev-waiverdb"}
end
waiverdb.trigger.before :up do |trigger2|
trigger2.run = {inline: "mkdir -p /tmp/bodhi-dev-waiverdb"}
end
waiverdb.trigger.before :up do |trigger3|
trigger3.run = {inline: "cp ./devel/docker/settings/waiverdb-settings.py /tmp/bodhi-dev-waiverdb/waiverdb-settings.py"}
end
# access postgres via the host since we can't easily do container<->container
# networking
waiverdb.trigger.before :up do |trigger4|
trigger4.run = {inline: "sed -i -e 's,wdb:5432,localhost:5432,g' /tmp/bodhi-dev-waiverdb/waiverdb-settings.py"}
end
waiverdb.trigger.before :up do |trigger5|
trigger5.run = {inline: "cp ./devel/docker/settings/run_waiverdb.sh /tmp/bodhi-dev-waiverdb/run_waiverdb.sh"}
end
# wait for bodhi to be up, as we're joining its namespace
waiverdb.trigger.before :up do |trigger6|
trigger6.run = {inline: "bash -c 'until podman container exists bodhi || (( count++ >= 600 )); do sleep 1; done'"}
end
waiverdb.vm.provider "docker" do |domain|
domain.image = "quay.io/factory2/waiverdb:latest"
domain.create_args = ["-i", "--entrypoint=/usr/libexec/run_waiverdb.sh", "--network=container:bodhi"]
domain.has_ssh = false
domain.volumes = ["/tmp/bodhi-dev-waiverdb/waiverdb-settings.py:/etc/waiverdb/settings.py:Z", "/tmp/bodhi-dev-waiverdb/run_waiverdb.sh:/usr/libexec/run_waiverdb.sh:Z"]
domain.name = "waiverdb"
end
end

# greenwave container
config.vm.define "greenwave" do |greenwave|
# Forward traffic on the host to the development greenwave on the guest
greenwave.vm.network "forwarded_port", guest: 6545, host: 6545
# set up container volumes in a dedicated temp space (so we can relabel them safely)
greenwave.trigger.before :up do |trigger1|
trigger1.run = {inline: "rm -rf /tmp/bodhi-dev-greenwave /tmp/bodhi-dev-policies"}
end
greenwave.trigger.before :up do |trigger2|
trigger2.run = {inline: "mkdir -p /tmp/bodhi-dev-greenwave /tmp/bodhi-dev-policies"}
end
greenwave.trigger.before :up do |trigger3|
trigger3.run = {inline: "curl -o /tmp/bodhi-dev-policies/fedora_tmpl.yaml https://pagure.io/fedora-infra/ansible/raw/main/f/roles/openshift-apps/greenwave/templates/fedora.yaml"}
end
greenwave.trigger.before :up do |trigger4|
trigger4.run = {inline: "jinja2 --format=yaml -o /tmp/bodhi-dev-policies/fedora.yaml /tmp/bodhi-dev-policies/fedora_tmpl.yaml"}
end
greenwave.trigger.before :up do |trigger5|
trigger5.run = {inline: "rm -f /tmp/bodhi-dev-greenwave/fedora_tmpl.yaml"}
end
greenwave.trigger.before :up do |trigger6|
trigger6.run = {inline: "cp ./devel/docker/settings/greenwave-settings.py /tmp/bodhi-dev-greenwave/greenwave-settings.py"}
end
# access waiverdb via the host since we can't easily do container<->container
# networking
greenwave.trigger.before :up do |trigger7|
trigger7.run = {inline: "sed -i -e 's,waiverdb:6544,localhost:6544,g' /tmp/bodhi-dev-greenwave/greenwave-settings.py"}
end
# wait for bodhi to be up, as we're joining its namespace
greenwave.trigger.before :up do |trigger8|
trigger8.run = {inline: "bash -c 'until podman container exists bodhi || (( count++ >= 600 )); do sleep 1; done'"}
end
greenwave.vm.provider "docker" do |domain|
domain.image = "quay.io/factory2/greenwave:latest"
# this is setting args for the container's own entrypoint, which is
# a wrapper script that runs this command inside the venv
domain.cmd = ["gunicorn", "--bind", "0.0.0.0:6545", "--access-logfile", "-", "--error-logfile", "-", "--enable-stdio-inheritance", "greenwave.wsgi:app"]
domain.create_args = ["-i", "--network=container:bodhi"]
domain.has_ssh = false
domain.volumes = ["/tmp/bodhi-dev-greenwave/greenwave-settings.py:/etc/greenwave/settings.py:Z", "/tmp/bodhi-dev-policies:/etc/greenwave/policies:Z"]
domain.name = "greenwave"
end
end

# rabbitmq container
config.vm.define "rabbitmq" do |rabbitmq|
rabbitmq.vm.host_name = "rabbitmq.example.com"
# Forward traffic on the host to the RabbitMQ management UI on the guest.
# This allows developers to view message queues at http://localhost:15672/
rabbitmq.vm.network "forwarded_port", guest: 15672, host: 15672

# wait for bodhi to be up, as we're joining its namespace
rabbitmq.trigger.before :up do |trigger1|
trigger1.run = {inline: "bash -c 'until podman container exists bodhi || (( count++ >= 600 )); do sleep 1; done'"}
end

rabbitmq.vm.provider "docker" do |domain|
domain.image = "docker.io/library/rabbitmq:3-management"
domain.create_args = ["-i", "--network=container:bodhi"]
domain.has_ssh = false
domain.name = "rabbitmq"
end
end


# bodhi container
config.vm.define "bodhi", primary: true do |bodhi|
bodhi.vm.host_name = "bodhi-dev.example.com"
# we need ssh on this container so ansible provisioning can run
bodhi.ssh.insert_key = true
bodhi.ssh.username = "root"

# bootstrap and run with ansible
bodhi.vm.provision "ansible" do |ansible|
ansible.playbook = "devel/ansible/playbook.yml"
ansible.extra_vars = {
in_container: true,
use_freeipa: false,
use_httpd: false,
vagrant_user: "root"
}
end

# Forward traffic on the host to the development server on the guest,
# so you can access bodhi at http://localhost:6543
bodhi.vm.network "forwarded_port", guest: 6543, host: 6543

# wait for other containers to be up before proceeding with provisioning
bodhi.trigger.before :provision do |trigger1|
trigger1.run = {inline: "bash -c 'until (podman container exists wdb && podman container exists waiverdb && podman container exists greenwave && podman container exists rabbitmq) || (( count++ >= 600 )); do echo Waiting for other containers...; sleep 1; done'"}
end

bodhi.vm.provider "docker" do |domain|
# we build the container image on the fly from Dockerfile.dev-container
domain.build_dir = "."
domain.dockerfile = "Dockerfile.dev-container"
# we have to disable label separation for this container as we want to
# map this entire working directory into the container. we can't copy
# it to a temp dir as we do for the other containers as then live
# changes wouldn't work, and we don't want to relabel the checkout in
# place as that would cause other problems
# for AUDIT_WRITE, see https://bugzilla.redhat.com/show_bug.cgi?id=1923728
# it's needed for sshd to work inside the container, on F38 host at least
domain.create_args = ["-i", "--security-opt=label=disable", "--cap-add=AUDIT_WRITE"]
domain.has_ssh = true
domain.name = "bodhi"
domain.volumes = ["./:/home/vagrant/bodhi"]
# uncomment this line to use the DNF cache directory described above
# domain.volumes = ["./:/home/vagrant/bodhi", ".dnf-cache:/var/cache/dnf"]
end
end
end
1 change: 1 addition & 0 deletions bodhi-server/bodhi/server/__init__.py
@@ -249,6 +249,7 @@ def main(global_config, testing=None, session=None, **settings):
config.add_translation_dirs('bodhi.server:locale/')

# Authentication & Authorization
testing = testing or bodhi_config.get('auth.completely-insecure-testing')
if testing:
# use a permissive security policy while running unit tests
fake_identity = munchify(
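The new `auth.completely-insecure-testing` setting gates the permissive policy; in a PasteDeploy-style ini file it would be enabled with something like the fragment below (the filename and section name are illustrative, only the setting key comes from the diff above):

```ini
# development.ini (illustrative) -- never enable this outside a dev environment
[app:main]
auth.completely-insecure-testing = true
```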
8 changes: 5 additions & 3 deletions devel/ansible/playbook.yml
@@ -9,11 +9,13 @@
shell: "ping -c 5 ipsilon.tinystage.test"
ignore_errors: yes
register: ping_response
when: use_freeipa

- name: Give reason for failure
fail:
msg: Provisioning bodhi requires the base tinystage setup to be running.
when: ping_response.rc != 0
when: "use_freeipa and ping_response.rc != 0"
roles:
- rabbitmq
- bodhi
- role: bodhi
- role: rabbitmq
when: not in_container
79 changes: 79 additions & 0 deletions devel/ansible/roles/bodhi/tasks/freeipa.yml
@@ -0,0 +1,79 @@
- name: Enroll system as IPA client
shell:
cmd: ipa-client-install --hostname {{ ansible_fqdn }} --domain tinystage.test --realm {{ krb_realm }} --server ipa.tinystage.test -p {{ ipa_admin_user }} -w {{ ipa_admin_password }} -U -N --force-join
creates: /etc/ipa/default.conf

- name: pip install oidc-register
pip:
name: oidc-register
executable: pip3

- name: Get the content of the CA cert
slurp:
src: /etc/ipa/ca.crt
register: ca_crt

- name: Put tinystage root CA in the list of CA's
blockinfile:
block: "{{ ca_crt.content | b64decode }}"
path: "{{ item }}"
loop:
- /etc/pki/tls/certs/ca-bundle.crt
- /usr/local/lib/python{{ python3_version.stdout }}/site-packages/httplib2/cacerts.txt
- /srv/venv/lib/python{{ python3_version.stdout }}/site-packages/certifi/cacert.pem

- name: Register with Ipsilon
command: python3 /home/vagrant/bodhi/devel/register-with-ipsilon.py
register: _ipsilon_registration

- name: Generate and get SSL cert
shell:
cmd: ipa-getcert request -f /etc/pki/tls/certs/server.pem -k /etc/pki/tls/private/server.key -K HTTP/{{ ansible_fqdn }} -N {{ ansible_fqdn }}
creates: /etc/pki/tls/certs/server.pem
when: "use_httpd and not in_container"

- name: Check the cert is there
wait_for:
path: /etc/pki/tls/certs/server.pem
state: present
when: "use_httpd and not in_container"

- name: Check the key is there
wait_for:
path: /etc/pki/tls/private/server.key
state: present
when: "use_httpd and not in_container"

- name: Setup mod_ssl
lineinfile:
path: /etc/httpd/conf.d/ssl.conf
regexp: "^SSLCertificateFile "
line: SSLCertificateFile /etc/pki/tls/certs/server.pem
when: "use_httpd and not in_container"
- name: Setup mod_ssl
lineinfile:
path: /etc/httpd/conf.d/ssl.conf
regexp: "^SSLCertificateKeyFile "
line: SSLCertificateKeyFile /etc/pki/tls/private/server.key
when: "use_httpd and not in_container"
- name: Setup mod_ssl
lineinfile:
path: /etc/httpd/conf.d/ssl.conf
insertbefore: "</VirtualHost>"
regexp: "^RequestHeader set X-Forwarded-Proto https$"
line: RequestHeader set X-Forwarded-Proto https
when: "use_httpd and not in_container"

- name: Copy the create users and groups script
template:
src: create-freeipa-users-grps.py
dest: /home/vagrant/create-freeipa-users-grps.py
mode: 0644
owner: "{{ vagrant_user }}"
group: "{{ vagrant_user }}"

- name: Add development users to tinystage
shell: python3 create-freeipa-users-grps.py > users-creation.log
args:
chdir: /home/vagrant/
creates: users-creation.log
