Erebus is a powerful, open-source test harness designed to bring flexibility and robustness to your software testing workflow. Named after the primordial deity of darkness in Greek mythology, Erebus sheds light on the hidden corners of your software systems, uncovering performance bottlenecks and functional issues.
At its core, Erebus is an extensible testing framework that empowers developers and QA engineers to conduct comprehensive performance and functional tests on diverse software systems. Whether you're optimising a high-traffic web service or ensuring the reliability of a complex event-driven application, Erebus provides the tools you need.
- 🔧 Extensible Architecture: Tailor Erebus to fit your unique testing requirements.
- 🚀 Performance Testing: Identify bottlenecks and optimise your system's speed.
- ✅ Functional Testing: Ensure every component works as intended.
- 🌐 Multi-Protocol Support: Send test files via HTTP, Kafka, and more.
- 🛠️ Easy Test Setup: Streamlined base functionality for quick test configuration.
Erebus isn't just a tool; it's a test harness that adapts to your needs, ensuring your software not only works but excels under real-world conditions.
- Clone the repo:

  ```sh
  git clone https://github.com/xtuml/erebus.git
  cd erebus
  ```

- Run with Docker Compose:

  ```sh
  docker compose up --build
  ```

This assumes you have Docker installed on your machine (https://www.docker.com/products/docker-desktop/).
Before installing Erebus, ensure you have the following:
- Python (v3.11 or later)
- Docker (latest stable version)
Python 3.11 introduces performance improvements and new features that Erebus leverages for efficient test execution. Docker is used for containerisation, ensuring consistent environments across different setups.
We provide a Docker Compose file that sets up Erebus with the correct volumes, ports, and environment variables. This is the easiest and most consistent way to run Erebus:
- Clone the repo:

  ```sh
  git clone https://github.com/xtuml/erebus.git
  cd erebus
  ```

- (Optional) Customise default settings:

  - If you are setting up the test harness for deployment, please go to Deployment.
  - To override default settings within Erebus, first create a directory called `config` within the project's root directory, if it doesn't already exist.
  - Copy the default config file from `./test_harness/config/default_config.config` to the newly created `config` folder in the root directory.
  - Rename the config file from `default_config.config` to `config.config`.
  - Override default values by copying the property under `[non-default]`, e.g.:

    ```ini
    [DEFAULT]
    requests_max_retries = 5
    requests_timeout = 10

    [non-default]
    requests_max_retries = 10 # This will override the default setting of 5
    ```

  - NOTE: If you do not provide a custom config file, you may receive the message `WARNING:root:Given harness config path does not exist: /config/config.config` when running the test harness. This is fine; the `default_config.config` file will be used instead.

- Build and run using Docker Compose:

  ```sh
  docker compose up --build # Ensure that you are in the project's root directory
  ```
After running the command, and once the build process has finished, the following output should be visible in the terminal:
```
[+] Running 1/0
 ✔ Container test-harness-dev-test-harness-1  Created  0.0s
Attaching to test-harness-1
test-harness-1  | INFO:root:Test Harness Listener started
test-harness-1  |  * Serving Flask app 'test_harness'
test-harness-1  |  * Debug mode: off
test-harness-1  | INFO:werkzeug:WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
test-harness-1  |  * Running on all addresses (0.0.0.0)
test-harness-1  |  * Running on http://127.0.0.1:8800
test-harness-1  |  * Running on http://172.21.0.2:8800
test-harness-1  | INFO:werkzeug:Press CTRL+C to quit
```
- Troubleshooting:
  - If `docker compose up --build` does not run, try running it as administrator with `sudo docker compose up --build`.
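Once the container is up, you can confirm the listener is actually responding by querying the `/isTestRunning` endpoint described under Usage below. A minimal Python sketch, assuming the `requests` package is installed and the default 8800 port mapping:

```python
# Minimal liveness check against a locally running harness.
# Assumes the `requests` package is installed and the harness is on port 8800.
import requests

response = requests.get("http://127.0.0.1:8800/isTestRunning", timeout=10)
print(response.status_code)  # expect 200 when the listener is up
print(response.text)         # body indicates whether a test is currently running
```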
If you're contributing to Erebus or need a custom setup:
- Reopen the IDE in the dev container:

  ```sh
  # Clone and navigate
  git clone https://github.com/xtuml/erebus.git
  cd erebus
  ```
To ensure consistency in the working environment, it is recommended that the dev container provided in `.devcontainer/devcontainer.json` is used. Using the dev container will automatically:

- Install Python 3.11
- Run `scripts/install_repositories.sh` (installs the required repositories)
- Set up the virtual environment and install packages:

  ```sh
  # Create and activate a virtual environment
  python3.11 -m venv venv
  source venv/bin/activate

  # Install dependencies
  pip install -r requirements.txt
  ```
- Run tests:

  ```sh
  pytest
  ```
It is recommended to deploy the test harness in the same VPC (or private network) as the machine containing the system to be tested to avoid exposure to the public internet.
- Navigate to the deployment folder:

  ```sh
  cd deployment
  ```

- Run the application. This command will pull the latest image of the test harness from the Erebus repo:

  ```sh
  docker compose up
  ```

- Stop the application. There are two ways to stop the running container (ensure you are in `/deployment`):

  - `Ctrl+C`
  - `docker compose stop`

  To destroy the container (ensure you are in `/deployment`):

  ```sh
  docker compose down
  ```
Default config file: `test_harness/config/default_config.config` (from the project root directory).

Custom config file: place it in `deployment/config`, named `config.config`.

To override defaults, copy the parameter under the `[non-default]` heading and set a new value. Parameters:

- `requests_max_retries`: int - Max retries for synchronous requests. Default: `5`
- `requests_timeout`: int - Timeout in seconds for synchronous requests. Default: `10`
- `log_calc_interval_time`: int (deprecated, set in the test config under `test_finish > metrics_get_interval`) - Interval between log file requests. Default: `5`
- Kafka Metrics Collection:
  - `metrics_from_kafka`: bool - Collect metrics from a Kafka topic. Default: `False`
  - `kafka_metrics_host`: str - Kafka host for metrics. Default: `host.docker.internal:9092`
  - `kafka_metrics_topic`: str - Kafka topic for metrics. Default: `default.BenchmarkingProbe_service0`
  - `kafka_metrics_collection_interval`: int - Interval in seconds to collect metrics. Default: `1`
- `message_bus_protocol`: str - Protocol for sending data (defaults to `HTTP` for incorrect configs):
  - `HTTP`: Use the HTTP protocol
  - `KAFKA`: Use Kafka
  - `KAFKA3`: Use the Kafka3 module (more performant)
- Kafka Config (if `message_bus_protocol` is `"KAFKA"` | `"KAFKA3"`):
  - `kafka_message_bus_host`: str - Kafka host for messages. Default: `host.docker.internal:9092`
  - `kafka_message_bus_topic`: str - Kafka topic for messages. Default: `default.AEReception_service0`
- HTTP Server Config (if `message_bus_protocol` is `"HTTP"`):
  - `pv_send_url`: str - Endpoint URL for uploading events. Default: `http://host.docker.internal:9000/upload/events`
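The parameters above live in an INI-style file where values under `[non-default]` shadow those under `[DEFAULT]`. As a rough illustration of that override behaviour (not the harness's own loader; the path and fallback values below are assumptions), such a file can be read with Python's standard `configparser`:

```python
# Sketch of reading an INI-style [DEFAULT]/[non-default] override file.
# NOT the harness's own loading code; the path below is an assumption.
import configparser

parser = configparser.ConfigParser()
parser.read("config/config.config")  # silently skips the file if it is absent

# Keys under [non-default] shadow those under [DEFAULT]; anything not
# overridden remains visible through configparser's DEFAULT fallback.
section = parser["non-default"] if parser.has_section("non-default") else parser["DEFAULT"]
max_retries = section.getint("requests_max_retries", fallback=5)
timeout = section.getint("requests_timeout", fallback=10)
protocol = section.get("message_bus_protocol", fallback="HTTP")
print(max_retries, timeout, protocol)
```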
Currently there are two main ways to use the Test Harness:

- Flask Service - A Flask service that serves HTTP requests to run the test harness
- Command Line Interface (CLI) Tool - not currently working

For each method, a custom test configuration can be passed at runtime in the form of JSON (Flask) or YAML (CLI).
- `type`: `"Functional"` | `"Performance"`: str
  - `"Functional"`: Tests the system's functionality
  - `"Performance"`: Tests the system's performance
- `performance_options`: dict (for `"Performance"` type only)
  - `num_files_per_sec`: int >= 0 - Number of test events per second
- `num_workers`: int - Number of worker processes for sending files
  - `0` or less: Runs in serial (default)
  - Non-integer: Program fails
- `aggregate_during`: bool
  - `True`: Aggregate metrics during the test
  - `False`: Do not aggregate (default)
  - If `low_memory` is `True`, this value is ignored and metrics are aggregated
- `sample_rate`: int - Approximate number of events to sample per second
  - `0` or less: No sampling (default)
- `low_memory`: bool
  - `True`: Save results to memory/disk, aggregate metrics
  - `False`: Do not save results to memory/disk (default)
  - If `True`, `aggregate_during` is ignored
- `test_finish`: dict - Options for stopping a test
  - `metric_get_interval`: int >= 0, default 5 - Interval to grab metrics
  - `finish_interval`: int >= 0, default 30 - Interval to check for metric changes (multiple of `metric_get_interval`)
  - `timeout`: int >= 0, default 120 - Time to wait after all data is sent
```json
{
    "type": "Performance",
    "performance_options": {
        "num_files_per_sec": 10
    },
    "num_workers": 0,
    "aggregate_during": false,
    "sample_rate": 0,
    "low_memory": false
}
```
```yaml
type: "Functional"
performance_options:
  num_files_per_sec: 10
num_workers: 0
aggregate_during: False
sample_rate: 0
low_memory: False
```
The Flask service can be run in two ways:

- Following the instructions in Deployment: Building Docker Image and Running Container above. The following should then appear in stdout:

  ```
  [+] Running 1/0
   ✔ Container test-harness-dev-test-harness-1  Created  0.0s
  Attaching to test-harness-1
  test-harness-1  | INFO:root:Test Harness Listener started
  test-harness-1  |  * Serving Flask app 'test_harness'
  test-harness-1  |  * Debug mode: off
  test-harness-1  | INFO:werkzeug:WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
  test-harness-1  |  * Running on all addresses (0.0.0.0)
  test-harness-1  |  * Running on http://127.0.0.1:8800
  test-harness-1  |  * Running on http://172.20.0.2:8800
  test-harness-1  | INFO:werkzeug:Press CTRL+C to quit
  ```
- Following the instructions in Installation: Manual Installation (For Development) and then running the following command from the project root (with a custom harness config file):

  ```sh
  python -m test_harness.run_app --harness-config-path <path to harness config file>
  ```

Once one of these has been followed, the following should appear in stdout of the terminal:

```
INFO:root:Test Harness Listener started
 * Serving Flask app 'test_harness'
 * Debug mode: off
INFO:werkzeug:WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
 * Running on all addresses (0.0.0.0)
 * Running on http://127.0.0.1:8800
 * Running on http://172.17.0.3:8800
INFO:werkzeug:Press CTRL+C to quit
```
Once the server is running locally, the SwaggerUI can be accessed from http://127.0.0.1:8800/apidocs in any browser. This is a simple UI page designed using Swagger/OpenAPI3 to execute Test-Harness commands without needing the terminal or curl commands as detailed below.
- (Optional) Upload a Performance Profile:
  - Upload a CSV file specifying, at points in simulation time, the number of test files to send per second.
  - CSV headers: "Time", "Number".
  - Endpoint: `/upload/profile`, MIME type: `multipart/form`.
  - Example (a Python sketch for generating and uploading a profile appears after this list):

    MacOS/Linux:

    ```sh
    curl --location --request POST 'http://127.0.0.1:8800/upload/profile' --form 'file1=@"test_profile.csv"'
    ```

    Windows:

    ```sh
    curl --location --request POST http://127.0.0.1:8800/upload/profile --form "file1=@test_profile.csv"
    ```
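As a rough illustration (not part of the harness itself), the sketch below builds a small profile CSV with the documented "Time" and "Number" headers and uploads it under the same `file1` form name used in the curl examples; the ramp values are arbitrary and the `requests` package is assumed to be installed:

```python
# Build a simple linear ramp profile and upload it to a locally running harness.
import csv
import requests

# Write a profile with the documented "Time", "Number" headers.
with open("test_profile.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Time", "Number"])
    for second in range(60):
        writer.writerow([second, 10 + second])  # ramp from 10 to 69 files/sec

# Upload it under the form name "file1", as in the curl examples above.
with open("test_profile.csv", "rb") as f:
    response = requests.post(
        "http://127.0.0.1:8800/upload/profile",
        files={"file1": ("test_profile.csv", f, "text/csv")},
    )
print(response.status_code, response.text)
```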
- (Optional) Upload Test Job Files:
  - Upload multiple test files suitable for the system.
  - Endpoint: `/upload/test-files`, MIME type: `multipart/form`.
  - Example:

    MacOS/Linux:

    ```sh
    curl --location --request POST 'http://127.0.0.1:8800/upload/test-files' --form 'file1=@"test_file"'
    ```

    Windows:

    ```sh
    curl --location --request POST http://127.0.0.1:8800/upload/test-files --form "file1=@test_file"
    ```
- (Recommended) Upload Test Case Zip Files:
  - Include all necessary test data in a zip file.
  - Zip structure:

    ```
    TCASE
    ├── profile_store (optional)
    │   └── test_profile.csv (optional)
    ├── test_config.yaml (optional)
    └── test_file_store (optional)
        ├── test_data_1 (optional)
        └── test_data_2 (optional)
    ```

  - Endpoint: `/upload/named-zip-files`, MIME type: `multipart/form`.
  - The zip file's form name creates the `TestName` for the JSON body in `/startTest`.
  - Example (a Python sketch for zipping and uploading a test case appears after this list):

    MacOS/Linux:

    ```sh
    curl --location --request POST 'http://127.0.0.1:8800/upload/named-zip-files' --form '<TestName>=@"<Test zip file path>"'
    ```

    Windows:

    ```sh
    curl --location --request POST http://127.0.0.1:8800/upload/named-zip-files --form "<TestName>=@<Test zip file path>"
    ```
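A rough Python sketch of the same upload (not harness tooling): it archives a local `TCASE/` directory and posts it so the multipart form name becomes the `TestName` used by `/startTest`. The test name, local paths, and the choice to keep `TCASE` as the top-level folder in the archive are assumptions; the `requests` package is assumed to be installed.

```python
# Zip a local TCASE/ directory and upload it as a named zip file.
import shutil
import requests

test_name = "A_performance_test"  # the form name; reused as "TestName" in /startTest

# Archive the TCASE directory itself (so the zip's top level matches the tree above).
archive_path = shutil.make_archive("TCASE", "zip", root_dir=".", base_dir="TCASE")

with open(archive_path, "rb") as f:
    response = requests.post(
        "http://127.0.0.1:8800/upload/named-zip-files",
        files={test_name: ("TCASE.zip", f, "application/zip")},
    )
print(response.status_code, response.text)
```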
- Send a POST request with JSON test data to start the test.
  - Endpoint: `/startTest`, Header: `'Content-Type: application/json'`.
  - JSON fields:
    - `"TestName"`: str - Name of the test (random UUID if not provided). Matches the zip file form name if using `/upload/named-zip-files`.
    - `"TestConfig"`: dict - Configuration for the test.
  - Example:

    ```sh
    curl -X POST -d '{"TestName": "A_performance_test", "TestConfig":{"type":"Performance", "performance_options": {"num_files_per_sec":10}}}' -H 'Content-Type: application/json' 'http://127.0.0.1:8800/startTest'
    ```
To check if a test is running:

```sh
curl 'http://127.0.0.1:8800/isTestRunning'
```
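Tying the two endpoints above together, here is a rough Python sketch. It assumes the `requests` package and a harness on port 8800; the exact shape of the `/isTestRunning` response body is not documented here, so the sketch simply prints it and uses a crude substring check.

```python
# Start a performance test, then poll /isTestRunning until it stops reporting "true".
import time
import requests

BASE = "http://127.0.0.1:8800"

start_body = {
    "TestName": "A_performance_test",
    "TestConfig": {
        "type": "Performance",
        "performance_options": {"num_files_per_sec": 10},
    },
}
resp = requests.post(f"{BASE}/startTest", json=start_body)  # sends Content-Type: application/json
resp.raise_for_status()

for _ in range(360):  # poll for up to an hour
    status = requests.get(f"{BASE}/isTestRunning")
    print(status.text)
    if "false" in status.text.lower():  # heuristic; adjust to the actual response schema
        break
    time.sleep(10)
```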
To stop a test gracefully, send a POST request with an empty JSON body to the `/stopTest` endpoint. Use the header `'Content-Type: application/json'`. A successful response returns `200 OK`, and a failure returns `400`.

Example:

```sh
curl -X POST -d '{}' -H 'Content-Type: application/json' 'http://127.0.0.1:8800/stopTest'
```
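The same request expressed in Python (a small sketch under the same assumptions as the snippets above):

```python
# Gracefully stop the currently running test; expect 200 on success, 400 on failure.
import requests

resp = requests.post("http://127.0.0.1:8800/stopTest", json={})
print(resp.status_code)
```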
To retrieve output data from a finished test, send a POST request with a JSON body to the `/getTestOutputFolder` endpoint. Use the header `'Content-Type: application/json'`. Specify the `TestName` from the `/startTest` request.

JSON body format:

```json
{
    "TestName": "<name of test>"
}
```

A successful request returns a zip file (MIME type: `application/zip`) containing all test output data.

Example:

```sh
curl -X POST -d '{"TestName": "test_1"}' -H 'Content-Type: application/json' 'http://127.0.0.1:8800/getTestOutputFolder' --output <file_name>.zip
```
Work In Progress
Test reports are stored in directories named after the `"TestName"` field sent in the POST request to the `/startTest` endpoint. These directories reside in the `report_output` directory.

- For deployments using Docker, the report output folder is located at `deployment/report_output` relative to the project root directory.
- For deployments using the command `python -m test_harness.run_app --harness-config-path <path to harness config file>`, the default report output folder is `test_harness/report_output` relative to the project root directory. Users can customise this location by editing the `report_file_store` field in the `test_harness/config/store_config.config` file.
- For CLI tool usage, the default report output folder is `test_harness/report_output`. If the `--outdir` option is specified, the report files will be saved accordingly.
Arbitrary functional results
Arbitrary performance results