
SCT - Scylla Cluster Tests

SCT tests are designed to test the Scylla database on physical/virtual servers under high read/write load. Currently, the tests are run using the built-in unittest framework. These tests automatically create:

  • Scylla clusters - run the Scylla database
  • Loader machines - run load generators like cassandra-stress
  • Monitoring server - uses the official Scylla Monitoring repo to monitor the Scylla clusters and loaders

Quickstart

Option 1 - Configure AWS using OKTA (preferred option)

https://www.notion.so/AWS-864b26157112426f8e74bab61001425d

Option 2 - Configure AWS using AWS credentials

# install aws cli
sudo apt install awscli # Debian/Ubuntu
sudo dnf install awscli # RHEL/Fedora
# or follow amazon instructions to get it: https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html

# Ask your AWS account admin to create a user and an access key, and then configure AWS

> aws configure
AWS Access Key ID [****************7S5A]:
AWS Secret Access Key [****************5NcH]:
Default region name [us-east-1]:
Default output format [None]:

# if using OKTA, use any of the tools to create the AWS profile, and export it
# in any shell where you are going to run the hydra command (replace DeveloperAccessRole with the name of your profile):
export AWS_PROFILE=DeveloperAccessRole

# Install hydra (a Docker image holding all the requirements for running SCT)
sudo ./install-hydra.sh

# if using podman, disable enforcement of short-name usage; without this,
# the monitoring stack won't run from within hydra
mkdir -p ~/.config/containers
echo 'unqualified-search-registries = ["registry.fedoraproject.org", "registry.access.redhat.com", "docker.io", "quay.io"]
short-name-mode="permissive"
' > ~/.config/containers/registries.conf

Run a test

Example of running a test with Hydra, using the test-cases/PR-provision-test.yaml configuration file.

Run test locally with AWS backend:

export SCT_SCYLLA_VERSION=5.2.1
# The test fails to report to Argus, so we need to disable it
export SCT_ENABLE_ARGUS=false
# this configuration is needed when running from a local development machine (by default, communication is via private addresses)
hydra run-test longevity_test.LongevityTest.test_custom_time --backend aws --config test-cases/PR-provision-test.yaml --config configurations/network_config/test_communication_public.yaml

# Run with IPv6 configuration
hydra run-test longevity_test.LongevityTest.test_custom_time --backend aws --config test-cases/PR-provision-test.yaml --config configurations/network_config/all_addresses_ipv6_public.yaml

Run test using SCT Runner with AWS backend:

hydra create-runner-instance --cloud-provider <cloud_name> -r <region_name> -z <az> -t <test-id> -d <run_duration>

export SCT_SCYLLA_VERSION=5.2.1
# To choose the correct network configuration, check the test's Jenkins pipeline.
# All predefined configurations are located under `configurations/network_config`
hydra --execute-on-runner <runner-ip|`cat sct_runner_ip`> "run-test longevity_test.LongevityTest.test_custom_time --backend aws --config test-cases/PR-provision-test.yaml"

Run test locally with GCE backend:

export SCT_SCYLLA_VERSION=5.2.1
export SCT_IP_SSH_CONNECTIONS="public"
hydra run-test longevity_test.LongevityTest.test_custom_time --backend gce --config test-cases/PR-provision-test.yaml

Run test locally with Azure backend:

export SCT_SCYLLA_VERSION=5.2.1
hydra run-test longevity_test.LongevityTest.test_custom_time --backend azure --config test-cases/PR-provision-test.yaml

Run test locally with docker backend:

# **NOTE:** the user should be part of the sudo group and set up with passwordless access,
# see https://unix.stackexchange.com/a/468417 for an example of how to set this up

# example of running a specific Scylla version on the docker backend
export SCT_SCYLLA_VERSION=5.2.1
hydra run-test longevity_test.LongevityTest.test_custom_time --backend docker --config test-cases/PR-provision-test-docker.yaml

You can also enter the containerized SCT environment using:

hydra bash

List resources being used by user:

# NOTE: only use `whoami` if your local username is the same as your okta/email username
hydra list-resources --user `whoami`

Reuse already running cluster:

export SCT_REUSE_CLUSTER=$(cat ~/sct-results/latest/test_id)
hydra run-test longevity_test.LongevityTest.test_custom_time --backend aws --config test-cases/PR-provision-test.yaml --config configurations/network_config/test_communication_public.yaml

More details on reusing a cluster can be found in reuse_cluster
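The reuse flow above relies on SCT writing the id of the most recent run to `~/sct-results/latest/test_id`. A minimal sketch of the mechanics, using a temporary directory and a made-up test id in place of the real results directory:

```shell
# Simulate the results layout SCT creates (path and test id are illustrative)
mkdir -p /tmp/sct-results/latest
echo "8f14e45f-ceea-467f-a9f3-7a5d9022a1b2" > /tmp/sct-results/latest/test_id

# Exporting the id tells the next hydra run to reuse the existing cluster
export SCT_REUSE_CLUSTER=$(cat /tmp/sct-results/latest/test_id)
echo "reusing cluster from test: ${SCT_REUSE_CLUSTER}"
```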

Clear resources:

hydra clean-resources --user `whoami`
# by default, it only cleans AWS resources;
# to clean other backends, specify the backend explicitly
hydra clean-resources --user `whoami` -b gce

Clear resources being used by the last test run:

SCT_CLUSTER_BACKEND= hydra clean-resources --test-id `cat ~/sct-results/latest/test_id`

Supported backends

  • aws - the most commonly used backend; most longevity tests run on top of it

  • gce - most of the artifact and rolling-upgrade tests run on top of this backend

  • azure -

  • docker - should be used for local development

  • baremetal - can be used to run against an already set-up cluster

  • k8s-eks -

  • k8s-gke -

  • k8s-local-kind - used to run k8s functional tests locally

  • k8s-local-kind-gce - used to run k8s functional tests locally on GCE

  • k8s-local-kind-aws - used to run k8s functional tests locally on AWS

Configuring test run configuration YAML

Take a look at the test-cases/PR-provision-test.yaml file. It contains a number of configurable test parameters, such as DB cluster instance types and AMI IDs. In this example, we're assuming that you have copied test-cases/PR-provision-test.yaml to test-cases/your_config.yaml.

All the test run configurations are stored in test-cases directory.

Important: Some tests use custom hardcoded operations due to their nature, so those tests won't honor what is set in test-cases/your_config.yaml.

The available configuration options are listed in configuration_options.
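As a rough sketch of what such a file contains, a minimal test case might look like the following; the key names and values shown here are illustrative and must be checked against configuration_options before use:

```yaml
# Illustrative test-case sketch; verify every key against configuration_options
test_duration: 60               # minutes
n_db_nodes: 3                   # size of the Scylla cluster
n_loaders: 1                    # machines running load generators such as cassandra-stress
n_monitor_nodes: 1              # Scylla Monitoring server
user_prefix: 'my-provision-test'
instance_type_db: 'i4i.large'   # example AWS instance type for DB nodes
```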

Types of Tests

Longevity Tests (TODO: write explanation for them)

Upgrade Tests (TODO: write explanation for them)

Performance Tests (TODO: write explanation for them)

Features Tests (TODO: write explanation for them)

Manager Tests (TODO: write explanation for them)
