openQA developer guide

Introduction

openQA is an automated test tool that makes it possible to test the whole installation process of an operating system. It’s free software released under the GPLv2 license. The source code and documentation are hosted in the os-autoinst organization on GitHub.

This document provides the information needed to start contributing to openQA development: improving the tool, fixing bugs and implementing new features. For information about writing or improving openQA tests, refer to the Tests Developer Guide. Both documents assume that the reader is already familiar with openQA and has already read the Starter Guide. All those documents are available at the official repository.

Development guidelines

As mentioned, the central point of development is the os-autoinst organization on GitHub, which hosts several repositories.

As in most projects hosted on GitHub, pull requests are always welcome and are the right way to contribute improvements and fixes.

Rules for commits

  • Every commit is checked by Travis CI as soon as you create a pull request, but you should also run the tidy script locally before every commit, i.e. call:

./script/tidy

to ensure your Perl code changes are consistent with the style rules.

  • You may also run local tests on your machine or in your own development environment to verify everything works as expected. Call:

make test

for style checks, unit and integration tests.

To execute a single test, one can tweak the test execution with the variables in the Makefile or use prove after pointing the environment variable TEST_PG to a local test database. Also, if you set a custom base directory, be sure to unset it when running tests.

Example:

TEST_PG='DBI:Pg:dbname=openqa_test;host=/dev/shm/tpg' OPENQA_BASEDIR= prove -v t/14-grutasks.t

When tweaking the tests as shown above, you can speed up the test initialization by starting PostgreSQL using t/test_postgresql instead of the system service, e.g.

t/test_postgresql /dev/shm/tpg

To easily check the coverage of individual test files, call e.g.

env CHECKSTYLE=0 PROVE_ARGS=t/24-worker-engine.t make coverage

and take a look into the generated coverage HTML report in cover_db/coverage.html.

  • For git commit messages use the rules stated on How to Write a Git Commit Message as a reference

  • Every pull request is reviewed in a peer review to give feedback on possible implications and how we can help each other to improve

If this is too much hassle for you, feel free to provide incomplete pull requests for consideration or create an issue with a code change proposal.

Getting involved into development

Developers willing to get really involved in the development of openQA, or people interested in following the always-changing roadmap, should take a look at the openQAv3 project in openSUSE's project management tool. This Redmine instance is used to coordinate the main development effort, organizing the existing issues (bugs and desired features) into 'target versions'.

Currently, developers meet in the IRC channel #opensuse-factory and in a weekly Jangouts call of the core developer team.

In addition to the versions representing development sprints, there is one version that is always open: Future improvements. It groups features that are on the developers' and users' wish list but have little chance of being addressed in the short term, either because the return on investment is not worth it or because they are out of the current scope of development. Developers looking for a place to start contributing are encouraged to simply go to that list and assign any open issue to themselves.

The openQA and os-autoinst repositories also include test suites aimed at preventing bugs and regressions in the software. codecov is configured in the repositories to encourage contributors to raise the test coverage with every commit and pull request. New features and bug fixes are expected to be backed by the corresponding tests.

Technologies

Everything in openQA, from os-autoinst to the web frontend and from the tests to the support scripts is written in Perl. So having some basic knowledge about that language is really desirable in order to understand and develop openQA. Of course, in addition to bare Perl, several libraries and additional tools are required. The easiest way to install all needed dependencies is using the available os-autoinst and openQA packages, as described in the Installation Guide.

In the case of os-autoinst, only a few CPAN modules are required: basically Carp::Always, Data::Dump, JSON and YAML. On the other hand, several external tools are needed, including QEMU, Tesseract and OptiPNG. Last but not least, the OpenCV library is the core of the openQA image matching mechanism, so it must be available on the system.

The openQA package is built on top of Mojolicious, an excellent Perl framework for web development that will be extremely familiar to developers coming from other modern web frameworks like Sinatra, and that has nice and comprehensive documentation available on its home page.

In addition to Mojolicious and its dependencies, several other CPAN modules are required by the openQA package. For a full list of hard dependencies, see the file cpanfile at the root of the openQA repository.

openQA relies on PostgreSQL to store the information. It used to support SQLite, but that is no longer possible.

As stated in the previous section, every feature implemented in both packages should be backed by proper tests. Test::More is used to implement those tests. As usual, tests are located under the /t/ directory. In the openQA package, one of the tests consists of a call to Perltidy to ensure that the contributed code follows the most common Perl style conventions.

Starting the webserver from local Git checkout

  • To start the webserver for development, use script/openqa daemon.

  • The other daemons (mentioned in the architecture diagram) are started in the same way, e.g. script/openqa-scheduler daemon.

  • openQA will pull the required assets on the first run.

  • openQA uses SASS. Under openSUSE, installing rubygem(sass) should be sufficient.

  • It is also useful to start openQA with morbo which allows applying changes without restarting the server: morbo -m development -w assets -w lib -w templates -l http://localhost:9526 script/openqa daemon

  • In case of problems with broken rendering of the web page, it can help to delete the asset cache and let the webserver regenerate it on the next startup. For this, delete .sass-cache/, assets/cache/ and assets/assetpack.db, as sketched below. Make sure to look for error messages on startup of the webserver and to force a refresh of the web page in your browser.
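A minimal sketch for clearing the asset cache from the Git checkout (paths as listed above):

# remove the generated asset cache; the webserver recreates it on the next startup
rm -rf .sass-cache/ assets/cache/ assets/assetpack.db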

Handling of dependencies

  • Add 3rd party JavaScript and CSS files to assets/assetpack.def. When restarting the web server, the new/updated files are pulled automatically. Also take care to update the asset cache for the openSUSE RPM package.

  • Other dependencies need to be added to openQA.spec or os-autoinst.spec.

  • Perl dependencies need to be added additionally to cpanfile.

  • To easily get all necessary dependencies on openSUSE, you can install the package openQA-devel; see the example below. On other systems, one can rely on the cpanfile and read the remaining dependencies from the spec files.
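For example, under openSUSE the development dependencies can be pulled in with zypper; on other systems the Perl part can be taken from the cpanfile, here sketched with cpanminus (an assumption, any cpanfile-aware installer works):

# openSUSE: install all development dependencies at once
sudo zypper install openQA-devel

# other systems: install the Perl dependencies declared in the cpanfile
# (non-Perl dependencies still need to be read from the spec files)
cpanm --installdeps .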

Remarks

  • New dependencies only become available in the Docker container used to run CI tests after the PR adding them has been merged. Besides, the build of that container must not be broken (see the build results on OBS).

  • The os-autoinst repository uses the same container as the openQA repository which is made using docker/travis_test/Dockerfile within the openQA repository.

Update asset cache for openSUSE RPM package

  1. Clone the repository (or a branch of it if you do not have the rights to push directly) locally, e.g. osc co devel:openQA/openQA. The whole procedure is also sketched as a single session after these steps.

  2. Run bash update-cache.sh inside the repository folder. Follow the log to check that no download errors occurred.

  3. Do a sanity check on the generated cache.txz. It should usually not be smaller than before, should contain the newly added sources and must not contain any empty files.

  4. Add an entry to the changes file using osc vc openQA.changes.

  5. osc ci -m 'Update asset cache'
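The whole procedure as a single shell session (the name of the checkout directory may differ depending on your osc configuration):

osc co devel:openQA/openQA
cd devel:openQA/openQA            # directory created by the checkout above
bash update-cache.sh              # watch the log for download errors
osc vc openQA.changes
osc ci -m 'Update asset cache'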

Managing the database

During development there are cases in which the database schema needs to be changed. Some steps have to be followed so that new database instances and upgrades include those changes.

When is it required to update the database schema?

After modifying files in lib/OpenQA/Schema/Result. However, not all changes require a schema update. Adding another method or altering/adding functions like has_many does not require an update. Adding new columns or modifying or removing existing ones does, and requires following the steps described below.

How to update the database schema

  1. First, you need to increase the database version number in the $VERSION variable in the lib/OpenQA/Schema.pm file. Note that it’s recommended to notify the other developers before doing so, to synchronize in case there are more developers wanting to increase the version number at the same time.

  2. Then generate the deployment files for new installations by running ./script/initdb --prepare_init.

  3. Afterwards, generate the deployment files for existing installations by running ./script/upgradedb --prepare_upgrade. After doing so, the directories dbicdh/$ENGINE/deploy/<new version> and dbicdh/$ENGINE/upgrade/<prev version>-<new version> for PostgreSQL should have been created with some SQL files inside, containing the statements to initialize the schema and to upgrade from one version to the next in the corresponding database engine.

  4. Migration scripts to upgrade from previous versions can be added under dbicdh/_common/upgrade. Create a <prev_version>-<new_version> directory and put some files there with DBIx commands for the migration. For examples just have a look at the migrations which are already there.

The above steps only prepare the required SQL statements; they do not actually alter the database. Before altering it, it is recommended to back up your database to be able to downgrade again if something goes wrong or if you just need to continue working on another branch. The following command can be used to create a copy:

createdb -O ownername -T originaldb newdb

To actually create or update the database (after creating a backup as described), run either ./script/initdb --init_database or ./script/upgradedb --upgrade_database. This is also required when the changes are installed on a production server.
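For example, assuming the database is called openqa and is owned by your user as described in the PostgreSQL setup section below (names are illustrative):

# keep a copy to be able to go back if something goes wrong
createdb -O your_username -T openqa openqa_backup

# apply the prepared SQL statements to the existing database
./script/upgradedb --upgrade_database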

How to add fixtures to the database

Note: This section is not about the fixtures for the testsuite. Those are located under t/fixtures.

Note: This section might not be relevant anymore. At least there are currently none of the mentioned directories with files containing SQL statements present.

Fixtures (initial data stored in tables at installation time) are stored in files in the dbicdh/_common/deploy/_any/<version> and dbicdh/_common/upgrade/<prev_version>-<next_version> directories.

You can create as many files as you want in each directory. These files contain SQL statements that will be executed when initializing or upgrading a database. Note that those files (and directories) have to be created manually.

Executed SQL statements can be traced by setting the DBIC_TRACE environment variable.

export DBIC_TRACE=1

How to set up PostgreSQL to test locally with production data

  1. Install PostgreSQL - under openSUSE the following packages are required: postgresql-server postgresql-init. The whole setup is also sketched as a single session after these steps.

  2. Start the server: systemctl start postgresql

  3. The following steps need to be done by the user postgres: su - postgres

  4. Create user: createuser your_username where your_username must be the same as the UNIX user you start your local openQA instance with.

  5. Create database: createdb -O your_username openqa

  6. The next steps must be done by the user you start your local openQA instance with.

  7. Import dump: pg_restore -c -d openqa path/to/dump

  8. Configure openQA to use PostgreSQL as described in the section Database of the installation guide. User name and password are not required.
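The steps above as a single session, using sudo to switch to the postgres user (an assumption; su - postgres as described above works just as well):

sudo systemctl start postgresql
sudo -u postgres createuser your_username      # same name as the UNIX user running openQA
sudo -u postgres createdb -O your_username openqa
pg_restore -c -d openqa path/to/dump           # run as your_username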

How to overwrite config files

It can be necessary during development to change the config files in etc/. For example, you may have to edit etc/openqa/database.ini to use another database, or set the loglevel to debug in etc/openqa/openqa.ini to increase the log level.

To avoid these changes getting in the way of your git workflow, copy them to a new directory and set OPENQA_CONFIG in your shell setup files.

cp -ar etc/openqa etc/mine
export OPENQA_CONFIG=$PWD/etc/mine

Note that OPENQA_CONFIG points to the directory containing openqa.ini, database.ini, client.conf and workers.ini.

Adding new authentication module

openQA comes with three authentication modules providing authentication methods: OpenID, iChain and Fake (see User authentication).

All authentication modules reside in the lib/OpenQA/Auth directory. During openQA startup, the method setting in the [auth] section of /etc/openqa/openqa.ini is read and, according to its value (OpenID by default), openQA tries to require OpenQA::WebAPI::Auth::$method. If successful, the module for the given method is imported; otherwise openQA ends with an error.

Each authentication module is expected to export the auth_login and auth_logout functions. In case of a request-response mechanism (as in OpenID), auth_response is imported on demand.

Currently there is no login page because all implemented methods use either a 3rd party page or none at all.

The authentication module is expected to return a hash:

%res = (
    # error = 1 signals auth error
    error => 0|1,
    # where to redirect the user
    redirect => '',
);

The authentication module is expected to create or update the user entry in the openQA database after user validation. See the included modules for inspiration.

Customize base directory

It is possible to customize the openQA base directory (which is for instance used to store test results) by setting the environment variable OPENQA_BASEDIR. The default value is /var/lib. Be sure to clear that variable when running unit tests locally (see next section).

Running tests of openQA itself

Besides simply running the testsuite locally, it is also possible to use containers. Using containers, tests are executed in the same environment as on Travis CI, which allows reproducing issues specific to that environment.

Run tests without Docker

Be sure to install all required dependencies. Those can be found in the file openQA.spec in the openQA repository.

To run UI tests the package perl-Selenium-Remote-Driver is required. The version provided by Leap 42.2 is too old. The version from the repository devel-languages-perl can be used instead. You also need to install chromedriver and either chrome or chromium for the UI tests.

Run t/test_postgresql /dev/shm/tpg to initialize a temporary PostgreSQL database. Export the environment variable as instructed by that script.

To execute the testsuite use make test. It is also possible to run a particular test, for example prove t/api/01-workers.t.

To watch the execution of the UI tests, set the environment variable NOT_HEADLESS.
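For example, to watch a single UI test in the browser (test file borrowed from the Docker examples below):

NOT_HEADLESS=1 prove -v t/ui/14-dashboard.t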

Run tests with Docker

To run tests in Docker, please be sure that Docker is installed and the Docker daemon is running. To launch the test suite, it is first required to pull the Docker image:

docker pull registry.opensuse.org/devel/openqa/containers/openqa_dev:latest

This Docker image is provided by the OBS repository https://build.opensuse.org/package/show/devel:openQA/openqa_dev and is based on the Dockerfile within the docker/travis_tests subdirectory of the openQA repository.

Build the image using the Makefile target:

make docker-test-build

Note that the image created by that target is called openqa:latest while the raw container pulled from OBS is called openqa_dev:latest.

Launch the tests using the Makefile target:

make launch-docker-to-run-tests-within

Run tests by invoking Docker manually, e.g.:

docker run -v OPENQA_LOCAL_CODE:/opt/openqa -e VAR1=1 -e VAR2=1 openqa:latest make run-tests-within-container

Replace OPENQA_LOCAL_CODE with the location where you have the openQA code.

The command line to run tests manually reveals that the Makefile target run-tests-within-container is used to run the tests inside the container. It does some preparations to be able to run the full stack test within Docker and considers a few environment variables defining our test matrix:

  • FULLSTACK=0 UITESTS=0

  • FULLSTACK=0 UITESTS=1

  • FULLSTACK=1

  • SCHEDULER_FULLSTACK=1

  • DEVELOPER_FULLSTACK=1

  • GH_PUBLISH=true

So by replacing VAR1 and VAR2 with those values one can trigger the different tests of the matrix.
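For example, to run the full stack test inside the container (replace OPENQA_LOCAL_CODE as before; the variable combination is taken from the matrix above):

docker run -v OPENQA_LOCAL_CODE:/opt/openqa -e FULLSTACK=1 openqa:latest make run-tests-within-container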

Of course it is also possible to run (specific) tests directly via prove instead of using the Makefile targets.

Tips

Commands passed to docker run will be executed after the initialization script (which does database creation and so on). So if you need to run an interactive session after it, just do:

docker run -it -v OPENQA_LOCAL_CODE:/opt/openqa openqa:latest bash

Of course you can also use make run-tests-within-container \; bash to run the tests first and then open a shell for further investigation.

There is also the possibility to change the initialization script with the --entrypoint switch. This allows us to go into an interactive session without running any initialization script:

docker run -it --entrypoint /bin/bash -v OPENQA_LOCAL_CODE:/opt/openqa registry.opensuse.org/devel/openqa/containers/openqa_dev

In case there is the need to follow what is happening in the currently running container (keep in mind that this session is terminated when the container's execution ends):

docker exec -ti $(docker ps | awk '!/CONTAINER/{print $1}') /bin/bash

Running UI tests in non-headless mode is also possible, e.g.:

xhost +local:root
docker run --rm -ti --name openqa-testsuite -v /tmp/.X11-unix:/tmp/.X11-unix:rw -e DISPLAY="$DISPLAY" -e NOT_HEADLESS=1 openqa:latest prove -v t/ui/14-dashboard.t
xhost -local:root

It is also possible to use a custom os-autoinst checkout using the following arguments:

docker run … -e CUSTOM_OS_AUTOINST=1 -v /path/to/your/os-autoinst:/opt/os-autoinst make run-tests-within-container

By default, configure and make are still executed (so a clean checkout is expected). If your checkout is already prepared for use, set CUSTOM_OS_AUTOINST_SKIP_BUILD to prevent this. Be aware that a build produced outside of the container might not work inside the container if both environments provide different, incompatible library versions (e.g. OpenCV).

It is also important to mention that your local repositories will be copied into the container. This can take very long if they are big, e.g. when the openQA repo contains a lot of profiling data because you enabled Mojolicious::Plugin::NYTProf.

In general, if starting the tests via Docker seems to hang, it is a good idea to inspect the process tree to see which command is currently executed.

Logging behavior

Logs are redirected to a logfile when running tests within Travis. The output can therefore not be asserted using Test::Output. This can be worked around by temporarily assigning a different Mojo::Log object to the application. To test locally under the same conditions, set the environment variable OPENQA_LOGFILE.
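For example, assuming OPENQA_LOGFILE takes the path of the logfile to write (an assumption, check OpenQA::Setup::setup_log for the exact semantics):

OPENQA_LOGFILE=/tmp/openqa-test.log prove -v t/24-worker-engine.t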

Note that redirecting the logs to a logfile only works for tests which run OpenQA::Setup::setup_log. In other tests the log is just printed to standard output. This makes using Test::Output simple, but take care that the test output is not cluttered by log messages, which can be quite irritating.

Building Plugins

Not all code needs to be included in openQA itself. openQA also supports the use of 3rd party plugins that follow the standards for plugins used by the Mojolicious web framework. These can be distributed as normal CPAN modules and installed as such alongside openQA.

Plugins are a good choice especially for extensions to the UI and HTTP API, but also for notification systems listening to various events inside the web server.

If your plugin was named OpenQA::WebAPI::Plugin::Hello, you would install it in one of the include directories of the Perl used to run openQA, and then configure it in openqa.ini. The plugins setting in the global section will tell openQA what plugins to load.

# Tell openQA to load the plugin
[global]
plugins = Hello

# Plugin specific configuration (optional)
[hello_plugin]
some = value

The plugin specific configuration is optional, but if defined it would be available in $app->config->{hello_plugin}.

To extend the UI or HTTP API there are various named routes already defined that will take care of authentication for your plugin. You just attach the plugin routes to them and only authenticated requests will get through.

package OpenQA::WebAPI::Plugin::Hello;
use Mojo::Base 'Mojolicious::Plugin';

sub register {
    my ($self, $app, $config) = @_;

    # Only operators may use our plugin
    my $ensure_operator = $app->routes->find('ensure_operator');
    my $plugin_prefix = $ensure_operator->any('/hello_plugin');

    # Plain text response (under "/admin/hello_plugin/")
    $plugin_prefix->get('/' => sub {
      my $c = shift;
      $c->render(text => 'Hello openQA!');
    })->name('hello_plugin_index');

    # Add a link to the UI menu
    $app->config->{plugin_links}{operator}{'Hello'} = 'hello_plugin_index';
}

1;

The plugin_links configuration setting can be modified by plugins to add links to the operator and admin sections of the openQA UI menu. Route names or fully qualified URLs can be used as link targets. If your plugin uses templates, you should reuse the bootstrap layout provided by openQA. This will ensure a consistent look, and make the UI menu available everywhere.

% layout 'bootstrap';
% title 'Hello openQA!';
<div>
  <h2>Hello openQA!</h2>
</div>

For UI plugins there are two named authentication routes defined:

  1. ensure_operator: under /admin/, only allows logged in users with operator privileges

  2. ensure_admin: under /admin/, only allows logged in users with admin privileges

And for HTTP API plugins there are four named authentication routes defined:

  1. api_public: under /api/v1/, allows access to everyone

  2. api_ensure_user: under /api/v1/, only allows authenticated users

  3. api_ensure_operator: under /api/v1/, only allows authenticated users with operator privileges

  4. api_ensure_admin: under /api/v1/, only allows authenticated users with admin privileges

To generate a minimal installable plugin with a CPAN distribution directory structure you can use the Mojolicious tools. It can be packaged just like any other Perl module from CPAN.

$ mojo generate plugin -f OpenQA::WebAPI::Plugin::Hello
...
$ cd OpenQA-WebAPI-Plugin-Hello/
$ perl Makefile.PL
...
$ make test
...

And if you need code examples, there are some plugins included with openQA: https://github.com/os-autoinst/openQA/tree/master/lib/OpenQA/WebAPI/Plugin