Erebus

Erebus is a powerful, open-source test harness designed to bring flexibility and robustness to your software testing workflow. Named after the primordial deity of darkness in Greek mythology, Erebus sheds light on the hidden corners of your software systems, uncovering performance bottlenecks and functional issues.

View Demo · Report Bug · Request Feature

Table of Contents
  1. Introduction
  2. Quickstart Guide
  3. Installation Guide
  4. Deployment
  5. Usage

Introduction

What is Erebus?

At its core, Erebus is an extensible testing framework that empowers developers and QA engineers to conduct comprehensive performance and functional tests on diverse software systems. Whether you're optimising a high-traffic web service or ensuring the reliability of a complex event-driven application, Erebus provides the tools you need.

Key Features

  • 🔧 Extensible Architecture: Tailor Erebus to fit your unique testing requirements.
  • 🚀 Performance Testing: Identify bottlenecks and optimise your system's speed.
  • ✅ Functional Testing: Ensure every component works as intended.
  • 🌐 Multi-Protocol Support: Send test files via HTTP, Kafka, and more.
  • 🛠️ Easy Test Setup: Streamlined base functionality for quick test configuration.

Erebus isn't just a tool; it's a test harness that adapts to your needs, ensuring your software not only works but excels under real-world conditions.


Quickstart Guide

  1. Clone the repo:
git clone https://github.com/xtuml/erebus.git
cd erebus
  2. Run with docker compose:
docker compose up --build

This assumes you have Docker installed on your machine (https://www.docker.com/products/docker-desktop/).


Installation Guide

Prerequisites

Before installing Erebus, ensure you have the following:

  • Python (v3.11 or later)
  • Docker (latest stable version)

Python 3.11 introduces performance improvements and new features that Erebus leverages for efficient test execution. Docker is used for containerisation, ensuring consistent environments across different setups.

Using Docker Compose (Recommended)

We provide a Docker Compose file that sets up Erebus with the correct volumes, ports, and environment variables. This is the easiest and most consistent way to run Erebus:

  1. Clone the repo:
git clone https://github.com/xtuml/erebus.git
cd erebus
  2. (Optional) Customise default settings:
  • If you are setting up the test harness for deployment, see the Deployment section below.

  • To override default settings within Erebus, first create a directory called config within the project's root directory, if it doesn't already exist.

  • Copy the default config file from ./test_harness/config/default_config.config to the newly created config folder in the root directory.

  • Rename the config file from default_config.config to config.config

  • Override default values by copying the property under [non-default] and setting a new value. For example:

[DEFAULT]
requests_max_retries = 5
requests_timeout = 10

[non-default]
requests_max_retries = 10 # This will override the default setting of 5
  • NOTE: If you do not provide a custom config file, you may receive the message WARNING:root:Given harness config path does not exist: /config/config.config when running the test harness. This is fine; the default_config.config file will be used instead.
  3. Build and run using Docker Compose:
docker compose up --build # Ensure that you are in the project's root directory

After running the command and once the build process has finished, the following output should be visible in the terminal:

[+] Running 1/0
✔ Container test-harness-dev-test-harness-1 Created 0.0s
Attaching to test-harness-1
test-harness-1 | INFO:root:Test Harness Listener started
test-harness-1 | * Serving Flask app 'test_harness'
test-harness-1 | * Debug mode: off
test-harness-1 | INFO:werkzeug:WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
test-harness-1 | * Running on all addresses (0.0.0.0)
test-harness-1 | * Running on http://127.0.0.1:8800
test-harness-1 | * Running on http://172.21.0.2:8800
test-harness-1 | INFO:werkzeug:Press CTRL+C to quit
  4. Troubleshooting:
  • If docker compose up --build does not run, try running it as administrator: sudo docker compose up --build

Manual Installation (For Development)

If you're contributing to Erebus or need a custom setup:

  1. Reopen IDE in dev container:
# Clone and navigate
git clone https://github.com/xtuml/erebus.git
cd erebus

To ensure a consistent working environment, it is recommended to use the dev container provided in .devcontainer/devcontainer.json.

Using the dev container will automatically:

  • Install Python 3.11
  • Run scripts/install_repositories.sh
  2. Set up a virtual environment and install packages:

# Create and activate a virtual environment
python3.11 -m venv venv
source venv/bin/activate

# Install dependencies
pip install -r requirements.txt

  3. Run the tests:
pytest

Deployment

It is recommended to deploy the test harness in the same VPC (or private network) as the machine containing the system to be tested to avoid exposure to the public internet.

Building Docker Image and Running Container

  1. Navigate to the deployment folder:
cd deployment
  2. Run the application:

This command will pull the latest image of the test harness from the Erebus repo.

docker compose up
  3. Stop the application:

There are two ways to stop the running container (ensure you are in the deployment directory):

  • Ctrl+C

  • docker compose stop

To destroy the container (ensure you are in the deployment directory):

docker compose down

Configuration

Default config file: test_harness/config/default_config.config (relative to the project root directory).

Custom config file: place it in deployment/config and name it config.config.

To override defaults, copy the parameter under the [non-default] heading and set a new value. The available parameters are:

General Config

  • requests_max_retries: int - Max retries for synchronous requests. Default: 5
  • requests_timeout: int - Timeout in seconds for synchronous requests. Default: 10

Log Reception and Finishing Time Parameters

  • log_calc_interval_time: int (deprecated, set in test config under test_finish > metrics_get_interval) - Interval between log file requests. Default: 5

Metrics Collection Config

  • Kafka Metrics Collection:
    • metrics_from_kafka: bool - Collect metrics from a Kafka topic. Default: False
    • kafka_metrics_host: str - Kafka host for metrics. Default: host.docker.internal:9092
    • kafka_metrics_topic: str - Kafka topic for metrics. Default: default.BenchmarkingProbe_service0
    • kafka_metrics_collection_interval: int - Interval in seconds to collect metrics. Default: 1

Sending Files Config

  • message_bus_protocol: str - Protocol for sending data (falls back to HTTP for invalid values):

    • HTTP: Use HTTP protocol
    • KAFKA: Use Kafka
    • KAFKA3: Use Kafka3 module (more performant)
  • Kafka Config (if message_bus_protocol is "KAFKA" | "KAFKA3"):

    • kafka_message_bus_host: str - Kafka host for messages. Default: host.docker.internal:9092
    • kafka_message_bus_topic: str - Kafka topic for messages. Default: default.AEReception_service0
  • HTTP Server Config (if message_bus_protocol is "HTTP"):

    • pv_send_url: str - Endpoint URL for uploading events. Default: http://host.docker.internal:9000/upload/events
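
For illustration, a custom deployment/config/config.config that overrides a handful of these parameters might look like the following. Only a subset of the [DEFAULT] section is shown, and the Kafka host is a hypothetical example rather than a recommended value:

[DEFAULT]
requests_max_retries = 5
requests_timeout = 10

[non-default]
requests_timeout = 30
message_bus_protocol = KAFKA3
kafka_message_bus_host = my-kafka-broker:9092
kafka_message_bus_topic = default.AEReception_service0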

Usage

Currently there are two main ways to use the Test Harness:

  • Flask Service - A Flask service that serves HTTP requests to run the test harness
  • Command Line Interface (CLI) Tool - not currently working

Test Configuration

For each method, a custom test configuration can be passed at runtime in the form of JSON (Flask) or YAML (CLI).

Fields in JSON and YAML Files

  • type: "Functional" | "Performance" : str

    • "Functional": Tests the system's functionality
    • "Performance": Tests the system's performance
  • performance_options: dict (for "Performance" type only)

    • num_files_per_sec: int >= 0 - Number of test events per second
  • num_workers: int

    • Number of worker processes for sending files
    • 0 or less: Runs in serial (default)
    • Non-integer: Program fails
  • aggregate_during: bool

    • True: Aggregate metrics during the test
    • False: Do not aggregate (default)
    • If low_memory is True, this value is ignored and metrics are aggregated
  • sample_rate: int

    • Approximate number of events to sample per second
    • 0 or less: No sampling (default)
  • low_memory: bool

    • True: Save results to memory/disk, aggregate metrics
    • False: Do not save results to memory/disk (default)
    • If True, aggregate_during is ignored
  • test_finish: dict

    • Options for stopping a test
    • metric_get_interval: int >= 0, default 5 - Interval to grab metrics
    • finish_interval: int >= 0, default 30 - Interval to check for metric changes (multiple of metric_get_interval)
    • timeout: int >= 0, default 120 - Time to wait after all data is sent

Example JSON Test Config

{
    "type":"Performance",
    "performance_options": {
        "num_files_per_sec": 10,
    },
    "num_workers": 0,
    "aggregate_during": false,
    "sample_rate": 0,
    "low_memory": false
}

Example YAML Test Config

type: "Functional"
performance_options:
  num_files_per_sec: 10
num_workers: 0
aggregate_during: False
sample_rate: 0
low_memory: False
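
The test_finish options can be combined with the fields above in the same file. For example, a Performance test configuration in YAML with purely illustrative values (not recommendations):

type: "Performance"
performance_options:
  num_files_per_sec: 50
num_workers: 4
aggregate_during: True
sample_rate: 10
low_memory: False
test_finish:
  metric_get_interval: 5
  finish_interval: 30
  timeout: 120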

Flask Service

The Flask service can be run in two ways:

Run with Docker:

  • Follow the instructions in Deployment: Building Docker Image and Running Container above. The following should then appear in stdout:
    [+] Running 1/0
    ✔ Container test-harness-dev-test-harness-1 Created 0.0s
    Attaching to test-harness-1
    test-harness-1 | INFO:root:Test Harness Listener started
    test-harness-1 | * Serving Flask app 'test_harness'
    test-harness-1 | * Debug mode: off
    test-harness-1 | INFO:werkzeug:WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
    test-harness-1 | * Running on all addresses (0.0.0.0)
    test-harness-1 | * Running on http://127.0.0.1:8800
    test-harness-1 | * Running on http://172.20.0.2:8800
    test-harness-1 | INFO:werkzeug:Press CTRL+C to quit
    

Run as a Python Script:

  • Follow the instructions in Installation: Manual Installation (For Development), then run the following command from the project root (with a custom harness config file):

    python -m test_harness.run_app --harness-config-path <path to harness config file>

    Once the command is running, the following should appear in stdout:

    INFO:root:Test Harness Listener started
     * Serving Flask app 'test_harness'
     * Debug mode: off
    INFO:werkzeug:WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
     * Running on all addresses (0.0.0.0)
     * Running on http://127.0.0.1:8800
     * Running on http://172.17.0.3:8800
    INFO:werkzeug:Press CTRL+C to quit

Serving the SwaggerUI

Once the server is running locally, the SwaggerUI can be accessed from http://127.0.0.1:8800/apidocs in any browser. This is a simple UI page, built with Swagger/OpenAPI 3, for executing Test Harness commands without needing the terminal or the curl commands detailed below.

Running a Test

Preparation Stages Before /startTest

  1. (Optional) Upload a Performance Profile:

    • Upload a CSV file that specifies, for points in simulation time, the number of test files to send per second (an illustrative profile is shown after this list).
    • CSV headers: "Time", "Number".
    • Endpoint: /upload/profile, MIME type: multipart/form-data.
    • Example:

    MacOS/Linux:

    curl --location --request POST 'http://127.0.0.1:8800/upload/profile' --form 'file1=@"test_profile.csv"'

    Windows:

     curl --location --request POST http://127.0.0.1:8800/upload/profile --form "file1=@test_profile.csv"
  2. (Optional) Upload Test Job Files:

    • Upload multiple test files suitable for the system.
    • Endpoint: /upload/test-files, MIME type: multipart/form-data.
    • Example:

    MacOS/Linux:

    curl --location --request POST 'http://127.0.0.1:8800/upload/test-files' --form 'file1=@"test_file"'

    Windows:

    curl --location --request POST http://127.0.0.1:8800/upload/test-files --form "file1=@test_file"
  3. (Recommended) Upload Test Case Zip Files:

    • Include all necessary test data in a zip file.
    • Zip structure:
      TCASE
      ├── profile_store (optional)
      │   └── test_profile.csv (optional)
      ├── test_config.yaml (optional)
      └── test_file_store (optional)
          ├── test_data_1 (optional)
          └── test_data_2 (optional)
    • Endpoint: /upload/named-zip-files, MIME type: multipart/form-data.
    • The form name given to the zip file becomes the TestName used in the JSON body of /startTest.
    • Example:

    MacOS/Linux:

    curl --location --request POST 'http://127.0.0.1:8800/upload/named-zip-files' --form '<TestName>=@"<Test zip file path>"'

    Windows:

     curl --location --request POST http://127.0.0.1:8800/upload/named-zip-files --form "<TestName>=@<Test zip file path>"
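
For reference, a performance profile CSV (step 1 above) simply pairs points in simulation time with the number of test files to send per second; the rows below are purely illustrative:

Time,Number
0,10
30,50
60,100
90,0

These uploads can also be scripted. A minimal sketch using Python's requests library, assuming the harness is listening on 127.0.0.1:8800 and that test_profile.csv exists in the working directory:

import requests

HARNESS_URL = "http://127.0.0.1:8800"

# Upload a performance profile as a multipart form file,
# mirroring the curl example in step 1 above
with open("test_profile.csv", "rb") as profile:
    response = requests.post(
        f"{HARNESS_URL}/upload/profile",
        files={"file1": ("test_profile.csv", profile)},
    )
response.raise_for_status()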

Start Test

  • Send a POST request with JSON test data to start the test.
  • Endpoint: /startTest, Header: 'Content-Type: application/json'.
  • JSON fields:
    • "TestName": str - Name of the test (random UUID if not provided). Matches the zip file form name if using /upload/named-zip-files.
    • "TestConfig": dict - Configuration for the test.
  • Example:
    curl -X POST -d '{"TestName": "A_performance_test", "TestConfig":{"type":"Performance", "performance_options": {"num_files_per_sec":10}}}' -H 'Content-Type: application/json' 'http://127.0.0.1:8800/startTest'
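
    The same request can be made programmatically. A minimal sketch using Python's requests library, assuming the harness is running locally on port 8800 (the test name and configuration are arbitrary examples):

import requests

# Start a performance test; requests sets the
# Content-Type: application/json header automatically
response = requests.post(
    "http://127.0.0.1:8800/startTest",
    json={
        "TestName": "A_performance_test",
        "TestConfig": {
            "type": "Performance",
            "performance_options": {"num_files_per_sec": 10},
        },
    },
)
response.raise_for_status()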
    

Check If A Test Is Running

To check if a test is running:

curl 'http://127.0.0.1:8800/isTestRunning'

Stopping a Test

To stop a test gracefully, send a POST request with an empty JSON body to the /stopTest endpoint. Use the header 'Content-Type: application/json'. A successful response returns 200 OK, and a failure returns 400.

Example:

curl -X POST -d '{}' -H 'Content-Type: application/json' 'http://127.0.0.1:8800/stopTest'

Retrieving Output Data

To retrieve output data from a finished test, send a POST request with a JSON body to the /getTestOutputFolder endpoint. Use the header 'Content-Type: application/json'. Specify the TestName from the /startTest request.

JSON body format:

{
    "TestName": "<name of test>"
}

A successful request returns a zip file (MIME type: application/zip) containing all test output data.

Example:

curl -X POST -d '{"TestName": "test_1"}' -H 'Content-Type: application/json' 'http://127.0.0.1:8800/getTestOutputFolder' --output <file_name>.zip
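
The same retrieval can be scripted. A minimal sketch using Python's requests library, assuming the harness runs locally and the test was started with the name test_1:

import requests

HARNESS_URL = "http://127.0.0.1:8800"
TEST_NAME = "test_1"  # must match the TestName sent to /startTest

# Request the zipped output for a finished test and write it to disk
response = requests.post(
    f"{HARNESS_URL}/getTestOutputFolder",
    json={"TestName": TEST_NAME},
)
response.raise_for_status()
with open(f"{TEST_NAME}_output.zip", "wb") as output_zip:
    output_zip.write(response.content)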

Command Line Interface (CLI)

Work In Progress


Test Reports

Test reports are stored in directories named after the "TestName" field sent in the POST request to the /startTest endpoint. These directories reside in the report_output directory.

  • For deployments using Docker, the report output folder is located at deployment/report_output relative to the project root directory.
  • For deployments using the command python -m test_harness.run_app --harness-config-path <path to harness config file>, the default report output folder is test_harness/report_output relative to the project root directory. Users can customise this location by editing the report_file_store field in the test_harness/config/store_config.config file.
  • For CLI tool usage, the default report output folder is test_harness/report_output. If the --outdir option is specified, the report files will be saved accordingly.

Functional

Arbitrary functional results

Performance

Arbitrary performance results