- 1. Introduction
- 2. Mailing List
- 3. Building ComplianceAsCode
- 3.1. Installing build dependencies
- 3.2. Downloading the source code
- 3.3. Building
- 3.4. Build outputs
- 3.5. Testing
- 3.6. Installation
- 3.7. (optional) Building a tarball
- 3.8. (optional) Building a package
- 3.9. (optional) Building a ZIP file
- 3.10. Build the Docker container image
- 3.11. Build the content using the container image
- 4. Creating Content
- 5. Updating Reference and Overlay Content
- 6. Tools and Utilities
- 7. Contributing with XCCDFs, OVALs and remediations
- 8. Legacy Notice
This document provides information useful to ComplianceAsCode/content project contributors. We will guide you through the structure of the project and explain the directory layout, the formats used, and the build system.
Join the mailing list at https://fedorahosted.org/mailman/listinfo/scap-security-guide.
On Red Hat Enterprise Linux 6/7, make sure the packages cmake, openscap-utils, PyYAML, python-jinja2 and their dependencies are installed:
yum install cmake make openscap-utils PyYAML python-jinja2
On Red Hat Enterprise Linux 8 and Fedora the package list is the same, except that the Python 2 packages are replaced with their Python 3 counterparts:
yum install cmake make openscap-utils python3-pyyaml python3-jinja2
On Ubuntu and Debian, make sure the packages libopenscap8, libxml2-utils, python3-jinja2, python3-yaml, xsltproc and their dependencies are installed:
apt-get install cmake make expat libopenscap8 libxml2-utils ninja-build python3-jinja2 python3-yaml xsltproc
Important: Version 1.0.8 or later of openscap-utils is required to build the content.
(optional) Install git if you want to clone the GitHub repository to get the source code:
# Fedora/RHEL
yum install git
# Ubuntu/Debian
apt-get install git
(optional) Install the ShellCheck package to perform static analysis of the fix scripts:
# Fedora/RHEL
yum install ShellCheck
# Ubuntu/Debian
apt-get install shellcheck
(optional) Install the yamllint and ansible-lint packages to perform checks on Ansible playbooks. These checks are not enabled by default in CTest; to enable them, add the -DANSIBLE_CHECKS=ON option to cmake.
# Fedora/RHEL
yum install yamllint ansible-lint
# Ubuntu/Debian (to install ansible-lint on Debian you will probably need to
# enable Debian Backports repository)
apt-get install yamllint ansible-lint
(optional) Install the ninja build system if you want to use it instead of make for faster builds:
# Fedora/RHEL
yum install ninja-build
# Ubuntu/Debian
apt-get install ninja-build
(optional) Install the json2html package if you want to generate HTML report statistics:
pip install json2html
If you are using Python 3:
pip3 install json2html
Download and extract a tarball from the list of releases:
# change X.Y.Z for desired version
ssg_version="X.Y.Z"
wget "https://github.com/ComplianceAsCode/content/releases/download/v$ssg_version/scap-security-guide-$ssg_version.tar.bz2"
tar -xvjf ./scap-security-guide-$ssg_version.tar.bz2
cd ./scap-security-guide-$ssg_version/
Or clone the GitHub repository:
git clone https://github.com/ComplianceAsCode/content.git
cd content/
# (optional) select release version - change X.Y.Z for desired version
git checkout vX.Y.Z
# (optional) select latest development version
git checkout master
To build all the security content:
cd build/
cmake ../
# To build all security content
make -j4
# To build security content for one specific product, for example for *Red Hat Enterprise Linux 7*
make -j4 rhel7
Or use the build_product script from the base directory; it removes whatever is in the build directory and builds a specific product:
./build_product rhel7
(optional) To build only specific content for one specific product:
cd build/
cmake ../
make -j4 rhel7-content # SCAP XML files for RHEL7
make -j4 rhel7-guides # HTML guides for RHEL7
make -j4 rhel7-tables # HTML tables for RHEL7
make -j4 rhel7-profile-bash-scripts # remediation Bash scripts for all RHEL7 profiles
make -j4 rhel7-profile-playbooks # Ansible Playbooks for all RHEL7 profiles
make -j4 rhel7 # everything above for RHEL7
(optional) Configure options before building using a GUI tool:
cd build/
cmake-gui ../
make -j4
(optional) Use the ninja build system (requires the ninja-build package):
cd build/
cmake -G Ninja ../
ninja-build # depending on the distribution just "ninja" may also work
(optional) Generate statistics for products and profiles. The statistics include, for example, which rules have OVAL checks, Bash or Ansible remediations implemented, which rules are missing a CCE, etc.:
cd build/
cmake ../
make -j4 stats # create statistics for all products
make -j4 profile-stats # create statistics for all profiles in all products
You can also create statistics per product; to do that, just prepend the product name to the make target (e.g.: rhel7-stats).
It is possible to generate HTML output with similar commands:
cd build/
cmake ../
make -j4 html-stats # create statistics for all products, as a result <product>/stats.html file is created.
make -j4 html-profile-stats # create statistics for all profiles in all products, as a result <product>/profile-stats.html file is created
If you want to go deeper into statistics, refer to the Profile Statistics and Utilities section.
By default, the build system builds SCAP content with OVAL 5.11. The SCAP 1.3 datastream therefore conforms to SCAP standard version 1.3, but the SCAP 1.2 datastream is not fully conformant with SCAP standard version 1.2: SCAP 1.3 allows OVAL up to version 5.11, whereas SCAP 1.2 allows OVAL only up to version 5.10.
To build fully compliant SCAP 1.2 content:
If you use the build_product script, pass the --oval510 option:
./build_product --oval510 <product-name>
If you use the cmake command, pass -DSSG_TARGET_OVAL_MINOR_VERSION:STRING=10:
cd build/
cmake -DSSG_TARGET_OVAL_MINOR_VERSION:STRING=10 ../
make
And use the datastream with the -1.2.xml suffix.
When the build has completed, the output will be in the build folder. That can be any folder you choose, but if you followed the examples above it will be the content/build folder.
The SCAP XML files will be called ssg-${PRODUCT}-${TYPE}.xml. For example, ssg-rhel7-ds.xml is the SCAP 1.3 Red Hat Enterprise Linux 7 source datastream, and ssg-rhel7-ds-1.2.xml is the SCAP 1.2 source datastream.
We recommend using the source datastream if you have a choice. The build system also generates separate XCCDF, OVAL, OCIL and CPE files:
$ ls -1 ssg-rhel7-*.xml
ssg-rhel7-cpe-dictionary.xml
ssg-rhel7-cpe-oval.xml
ssg-rhel7-ds.xml
ssg-rhel7-ds-1.2.xml
ssg-rhel7-ocil.xml
ssg-rhel7-oval.xml
ssg-rhel7-pcidss-xccdf-1.2.xml
ssg-rhel7-xccdf-1.2.xml
ssg-rhel7-xccdf.xml
These can be ingested by any SCAP-compatible scanning tool to enable automated checking.
The human-readable HTML guide index files will be called ssg-${PRODUCT}-guide-index.html, for example ssg-rhel7-guide-index.html. This file lets the user browse all profiles available for that product.
The prose guide HTML contains practical, actionable information for auditors and administrators. The guides are placed in the guides folder.
$ ls -1 guides/ssg-rhel7-*.html
guides/ssg-rhel7-guide-ospp42.html
guides/ssg-rhel7-guide-ospp.html
guides/ssg-rhel7-guide-pci-dss.html
...
Spreadsheet HTML tables - potentially useful as the basis for a Security Requirements Traceability Matrix (SRTM) document:
$ ls -1 tables/table-rhel7-*.html
...
tables/table-rhel7-nistrefs-ospp.html
tables/table-rhel7-nistrefs-stig.html
tables/table-rhel7-pcidssrefs.html
tables/table-rhel7-srgmap-flat.html
tables/table-rhel7-srgmap.html
tables/table-rhel7-stig.html
...
These Playbooks contain the remediations for a profile.
$ ls -1 ansible/rhel7-playbook-*.yml
ansible/rhel7-playbook-C2S.yml
ansible/rhel7-playbook-ospp.yml
ansible/rhel7-playbook-pci-dss.yml
...
These Playbooks contain just the remediation for a rule, in the context of a profile.
$ ls -1 rhel7/playbooks/pci-dss/*.yml
rhel7/playbooks/pci-dss/account_disable_post_pw_expiration.yml
rhel7/playbooks/pci-dss/accounts_maximum_age_login_defs.yml
rhel7/playbooks/pci-dss/accounts_password_pam_dcredit.yml
rhel7/playbooks/pci-dss/accounts_password_pam_lcredit.yml
...
To ensure validity of built artifacts prior to installation, we recommend
running our test suite against the build output. This is done with CTest.
It is a good idea to execute the quick tests first, using the -L quick option passed to ctest.
cd content/
./build_product
cd build
ctest -L quick
ctest -LE quick -j4
Note: CTest does not run the SSG Test Suite, which provides a simple system of test scenarios for testing profiles and rule remediations.
System-wide installation:
cd content/
cd build/
cmake ../
make -j4
sudo make install
(optional) Custom install location:
cd content/
cd build/
cmake ../
make -j4
sudo make DESTDIR=/opt/absolute/path/to/ssg/ install
(optional) System-wide installation using ninja:
cd content/
cd build/
cmake -G Ninja ../
ninja-build
ninja-build install
To build a tarball with all the source code:
cd build/
make package_source
To build a package for testing purposes:
cd build/
# disable any product you would not like to bundle in the package. For example:
cmake -DSSG_PRODUCT_FEDORA:BOOL=OFF ../
# build the package.
make package
Currently, RPM and DEB packages are supported by this mechanism. We recommend only using it for testing. Please follow downstream workflows for production packages.
To build a zip file with all generated source data streams and kickstarts:
cd build/
make zipfile
There is also a target to build zip files containing content specific to a vendor's products.
cd build/
# To build content zipfiles of all vendors:
make vendor-zipfile
# To build Red Hat zipfiles:
make redhat-zipfile
Find a suitable Dockerfile present in the Dockerfiles directory and build the image. This will take care of the build environment and all necessary setup.
docker build --no-cache --file Dockerfiles/ubuntu --tag oscap:$(date -u +%Y%m%d%H%M) --tag oscap:latest .
To build all the content, run a container without any flags.
docker run --cap-drop=all --name oscap-content oscap:latest
Use docker cp to copy all the generated content to your host:
docker cp oscap-content:/home/oscap/content/build $(pwd)/container_build
Under the top level directory, there are directories and/or files for different products, shared content, documentation, READMEs, Licenses, build files/configuration, etc.
Directory | Description
---|---
linux_os | Contains security content for Linux operating systems. Contains rules, OVAL checks, Ansible tasks, Bash remediations, etc.
applications | Contains security content for applications such as OpenShift or OpenStack. Contains rules, OVAL checks, Ansible tasks, Bash remediations, etc.
shared | Contains templates which can generate checks and remediations, Jinja macros, and Bash remediation functions.
tests | Contains the test suite for content validation and testing; also contains unit tests.
build | Can be used to build the content using CMake.
build-scripts | Scripts used by the build system.
cmake | Contains the CMake build configuration files.
Dockerfiles | Contains Dockerfiles to build content test suite container backends.
docs | Contains the User Guide and Developer Guide, manual page template, etc.
ssg | Contains the Python ssg library used by the build scripts.
utils | Miscellaneous scripts used for development but not used by the build system.
The remaining directories such as fedora, rhel7, etc. are product directories.
File | Description
---|---
CMakeLists.txt | Top-level CMake build configuration file
Contributors.md | DO NOT MANUALLY EDIT script-generated file
Contributors.xml | DO NOT MANUALLY EDIT script-generated file
DISCLAIMER | Disclaimer for usage of content
Dockerfile | CentOS7 Docker build file
LICENSE | Content license
README.md | Project README file
Benchmarks are directories that contain a benchmark.yml file.
We have multiple benchmarks in our project:
Name | Location
---|---
Linux OS | /linux_os/guide
Applications | /applications
Java Runtime Environment | /jre/guide
Fuse 6 | /fuse6/guide
Firefox | /firefox/guide
Chromium | /chromium/guide
The Linux OS benchmark describes the Linux operating system in general. This benchmark is used by multiple ComplianceAsCode products, e.g. rhel7, fedora, ubuntu1604, sle15, etc. The benchmark is located in /linux_os/guide.
The products specify which benchmark they use as a source of content in their product.yml file using the benchmark_root key. For example, the rhel7 product specifies that it uses the Linux OS benchmark:
$ cat rhel7/product.yml
product: rhel7
full_name: Red Hat Enterprise Linux 7
type: platform
benchmark_root: "../linux_os/guide"
.....
Rules from multiple locations can be used for a single benchmark. There is an optional key additional_content_directories for a list of paths to arbitrary groups of rules to be included in the benchmark. Note that the additional directories cannot contain a benchmark file (benchmark.yml), otherwise the content fails to build. Of all the rules collected, only the following become part of the benchmark:
- rules that have the prodtype specified in correspondence with the benchmark;
- rules that have no prodtype metadata.
.....
benchmark_root: "../applications"
additional_content_directories:
    - "../linux_os/guide/services"
    - "../linux_os/guide/system"
.....
The benchmarks are organized into a directory structure. The directories represent either groups or rules: group directories contain a group.yml file and rule directories a rule.yml file. The name of the group directory is the group ID, without the prefix. Similarly, the name of the rule directory is the rule ID, without the prefix.
For example, the Linux OS Benchmark is structured in this way:
.
├── benchmark.yml
├── intro
│   ├── general-principles
│   ├── group.yml
│   └── how-to-use
├── services
│   ├── apt
│   ├── avahi
│   ├── cron_and_at
│   ├── deprecated
│   ├── dhcp
│   ├── dns
│   ├── ftp
│   ├── group.yml
│   ├── http
│   ├── imap
│   ├── ldap
│   ├── mail
│   ├── nfs_and_rpc
│   .......
│   .......
└── system
    ├── accounts
    ├── auditing
    ├── bootloader-grub2
    ├── bootloader-grub-legacy
    ├── entropy
    ├── group.yml
    ├── logging
    ......
When creating a new product, use the guidelines below for the directory layout:
- Do not use capital letters.
- If product versions are required, use major versions only. For example, rhel7, ubuntu16, etc.
- If the content to be produced does not depend on the product version, do not add version numbers. For example: fedora, firefox, etc.
- In addition, use a maximum depth of 3 directories.
- See the README for more information about the changes needed.
Following these guidelines helps with the usability and navigability of the content.
For example:
$ tree -d rhel7
rhel7
├── cpe
├── kickstart
├── overlays
├── profiles
└── transforms
7 directories
Directory | Description
---|---
cpe | Product-specific CPE dictionary sources
kickstart | Kickstart files for the product
overlays | Overlay files, such as stig_overlay.xml
profiles | Profile files for the product
transforms | XSLT transforms used when building the product
Profiles define the set of rules and variables aligned to a compliance standard.
Structurally, a profile is a YAML file that represents a dictionary. A profile YAML file has one implied attribute:
- id: The primary identifier for the profile, to be referenced during evaluations. This is inferred from the file name.
A profile YAML file can, optionally, include metadata about the implemented policy and experts in the field, called Subject Matter Experts (SMEs). The SMEs usually are people familiar with the policy requirements or how it is applied.
- metadata: Dictionary for profile metadata:
  - reference: URL pointing to a page or organization that publishes the policy.
  - version: Version of the policy implemented by the profile.
  - SMEs: List of people experienced with the profile or how it is applied. The preferred method is the GitHub handle, but email is also accepted.
A profile should define these attributes:
- title: Human-readable title of the profile.
- description: Human-readable HTML description, which provides broader context for non-experts than the rationale.
- extends: The id of a profile to be extended. A profile can make incremental changes based on another profile via the extends attribute. The extendee can then, via the selections attribute, select/unselect rules and change XCCDF Value selectors.
- selections: List composed of items of these types:
  - ids of rules to be included in the profile, e.g. accounts_tmout, or
  - ids of rules to be excluded from the profile, prefixed by an exclamation mark, e.g. !accounts_tmout, or
  - changes to XCCDF Value selectors, e.g. var_accounts_tmout=10_min, or
  - rule refinements, e.g. accounts_tmout.severity=high.
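Putting it together, a minimal profile file might look like the following sketch (the title, description and the excluded rule id are illustrative, not taken from a shipped profile):

title: 'Example Hardening Profile'
description: |-
    An illustrative profile demonstrating the structure of a profile file.
selections:
    - accounts_tmout                   # select a rule
    - '!rule_to_exclude'               # exclude a rule (hypothetical id) with a leading "!"
    - var_accounts_tmout=10_min        # change an XCCDF Value selector
    - accounts_tmout.severity=high     # refine the rule's severity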
stig_overlay.xml maps an official product/version STIG release to an SSG product/version STIG release. stig_overlay.xml should never be manually created or updated; it should always be generated using create-stig-overlay.py.
To create stig_overlay.xml, two things are required: an official non-draft STIG release from DISA containing an XCCDF file (e.g. U_Red_Hat_Enterprise_Linux_7_STIG_V1R1_Manual-xccdf.xml) and an XCCDF file built by the project (e.g. ssg-rhel7-xccdf.xml).
Example using create-stig-overlay.py:
$ PYTHONPATH=`./.pyenv.sh` utils/create-stig-overlay.py --disa-xccdf=disa-stig-rhel7-v1r12-xccdf-manual.xml --ssg-xccdf=ssg-rhel7-xccdf.xml -o rhel7/overlays/stig_overlay.xml
To update stig_overlay.xml, use the create-stig-overlay.py script as mentioned above. Then, submit a pull request replacing the stig_overlay.xml file that needs to be updated. Please note that as part of this update, rules that have been removed from the official STIG will be removed here as well.
To run the Python utilities (those ending in .py), you will need to have the PYTHONPATH environment variable set. This can be accomplished one of two ways: by prefixing all commands with a local variable (PYTHONPATH=/path/to/scap-security-guide), or by exporting PYTHONPATH in your shell environment. We provide a script for making this easier: .pyenv.sh. To set PYTHONPATH correctly for the current shell, simply call source .pyenv.sh. For more information on how to use this script, please see the comments at the top of the file.
Located in the utils directory, the testoval.py script allows easy testing of OVAL definitions. It wraps the definition and makes up an OVAL file ready for scanning, which is very useful for testing new OVAL content or modifying existing content.
Example usage:
$ PYTHONPATH=`./.pyenv.sh` ./utils/testoval.py install_hid.xml
Create an alias for the script so that you don't have to type out the full path every time you would like to use the testoval.py script.
$ alias testoval='/home/_username_/scap-security-guide/utils/testoval.py'
An alternative is adding the directory where testoval.py
resides to your PATH.
$ export PATH=$PATH:/home/_username_/scap-security-guide/utils/
The profile_tool.py tool displays XCCDF profile statistics. It can show the number of rules in the profile, how many of these rules have an OVAL check implemented, how many have a remediation available, the IDs of rules which are missing them, and other useful information.
To use the script, first build the content, then pass the built XCCDF (not DataStream) to the script.
For example, to check which rules in RHEL8 OSPP profile are missing remediations, run this command:
$ ./build_product rhel8
$ ./build-scripts/profile_tool.py stats --missing-fixes --profile ospp --benchmark build/ssg-rhel8-xccdf.xml
Note: There is an automated job which provides the latest statistics from all products and all profiles; you can view it here: Statistics
The tool can also subtract rules between YAML profiles.
For example, to subtract selected rules from a given profile based on rules selected by another profile, run this command:
$ ./build-scripts/profile_tool.py sub --profile1 rhel7/profiles/ospp.profile --profile2 rhel7/profiles/pci-dss.profile
This will result in a new YAML profile containing only the rules that are exclusive to the profile passed via the --profile1 option.
Rules like banner_etc_issue and dconf_gnome_login_banner_text check for the configuration of login banners and remediate them. Both rules source the banner text from the same variable, login_banner_text, and the banner texts need to be in the form of a regular expression.
There are a few utilities you can use to transform your text into the appropriate regular expression:
- When adding a new banner directly to login_banner_text, use the custom Jinja filter banner_regexify.
- If customizing content via SCAP Workbench, or directly writing your tailoring XML, use utils/regexify_banner.py to generate the appropriate regular expression.
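For example, assuming the script accepts the banner text as a command-line argument (check its --help output for the exact interface, as this is an assumption), an invocation could look like:

$ PYTHONPATH=`./.pyenv.sh` ./utils/regexify_banner.py "You are accessing a system that must only be used by authorized users."

The printed output would then be the regular-expression form of the banner, ready to paste into a tailoring file.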
There are three main types of content in the project: rules, defined using the XCCDF standard; checks, usually written in OVAL format; and remediations, which can be executed via Ansible, Bash, the Anaconda installer, Puppet, Ignition and Kubernetes. ComplianceAsCode also has its own templating mechanism, allowing content writers to create models and use them to generate a number of checks and remediations.
Contributions can be made to rules, checks, remediations or even utilities. There are different sets of guidelines for each type; for this reason there is a separate topic for each of them.
Rules are input described in YAML, which mirrors the XCCDF format (an XML container). Rules are translated to become members of a Group in an XML file. All existing rules for Linux products can be found in the linux_os/guide directory. For non-Linux products (e.g., jre), this content can be found in <product>/guide.
The exact location depends on the group (or category) that a rule belongs to. For an example of a rule group, see linux_os/guide/system/software/disk_partitioning/partition_for_tmp/rule.yml. The id of this rule is partition_for_tmp; this rule belongs to the disk_partitioning group, which in turn belongs to the software group (which in turn belongs to the system group). Because this rule is in linux_os/guide, it can be shared by all Linux products.
Rules describe the desired state of the system and may contain references if they are parts of higher-level standards. All rules should reflect only a single configuration change for compliance purposes.
Structurally, a rule is a YAML file (which can contain Jinja macros) that represents a dictionary.
A rule YAML file has one implied attribute:
- id: The primary identifier for the rule, to be referenced from profiles. This is inferred from the file name and links the rule to checks and fixes with the same file name.
A rule itself contains these attributes:
- title: Human-readable title of the rule.
- rationale: Human-readable HTML description of the reason why the rule exists and why it is important from the technical point of view. For example, the rationale of the partition_for_tmp rule states that: The <tt>/tmp</tt> partition is used as temporary storage by many programs. Placing <tt>/tmp</tt> in its own partition enables the setting of more restrictive mount options, which can help protect programs which use it.
- description: Human-readable HTML description, which provides broader context for non-experts than the rationale. For example, the description of the partition_for_var_tmp rule states that: The <tt>/var/tmp</tt> directory is a world-writable directory used for temporary file storage. Ensure it has its own partition or logical volume at installation time, or migrate it using LVM.
- requires: The id of another rule or group that must be selected and enabled in a profile.
- conflicts: The id of another rule or group that must not be selected and must be disabled in a profile.
- severity: Used for metrics and tracking. It can have one of the following values: unknown, info, low, medium, or high.

Level | Description
---|---
unknown | Severity not defined (default)
info | Rule is informational only. Failing the rule doesn't imply failure to conform to the security guidance of the benchmark.
low | Not a serious problem
medium | Fairly serious problem
high | Grave or critical problem
When deciding on severity levels, it is best to follow the guidelines in the table below (Vulnerability Severity Category Code Definitions):

Severity | DISA Category | Category Code Guidelines
---|---|---
high | CAT I | Any vulnerability, the exploitation of which will directly and immediately result in loss of Confidentiality, Availability, or Integrity.
medium | CAT II | Any vulnerability, the exploitation of which has a potential to result in loss of Confidentiality, Availability, or Integrity.
low | CAT III | Any vulnerability, the existence of which degrades measures to protect against loss of Confidentiality, Availability, or Integrity.

The severity of the rule can be overridden by a profile with the refine-rule selector.
platform
: Defines applicability of a rule. For example, if a rule is not applicable to containers, this should be set tomachine
, which means it will be evaluated only if the targeted scan environment is either bare-metal or virtual machine. Also, it can restrict applicability on higher software layers. By setting toshadow-utils
, the rule will have its applicability restricted to only environments which haveshadow-utils
package installed. The available options can be found in the file <product>/cpe/<product>-cpe-dictionary.xml (e.g.: rhel8/cpe/rhel8-cpe-dictionary.xml). In order to support a new value, an OVAL check (ofinventory
class) must be created undershared/checks/oval/
and referenced in the dictionary file. -
ocil
: Defines asserting statements to check whether or not the rule is valid. -
ocil_clause
: This attribute contains the statement which describes how to determine whether the statement is true or false. Check outrule.yml
inlinux_os/guide/system/software/disk_partitioning/encrypt_partitions/
: this contains apartitions do not have a type of crypto_LUKS
value forocil_clause
. This clause is prefixed with the phrase "It is the case that".
A rule may contain these reference-type attributes:
- identifiers: This is related to products that the rule applies to; this is a dictionary. Currently, only the Common Configuration Enumeration or CCE identifier is supported. Other identifiers can be added as well; contributions to add these other identifiers are welcomed. The table below shows a list of common identifiers and their current support in a rule:

URI | Supported | Identifier Value Description
---|---|---
http://cce.mitre.org | Yes | Common Configuration Enumeration (CCE) – the identifier value MUST be a CCE version 5 number
http://cpe.mitre.org | No | CPE – the identifier value MUST be a CPE version 2.0 or 2.3 name
http://cve.mitre.org | No | CVE – the identifier value MUST be a CVE number
http://www.cert.org | No | CERT Coordination Center – the identifier value SHOULD be a CERT advisory identifier (e.g., "CA-2004-02")
http://www.kb.cert.org | No | US-CERT vulnerability notes database – the identifier value SHOULD be a vulnerability note number (e.g., "709220")
http://www.us-cert.gov/cas/techalerts | No | US-CERT technical cyber security alerts – the identifier value SHOULD be a technical cyber security alert ID (e.g., "TA05-189A")
When the rule is related to RHEL, it should have a CCE. A CCE (e.g. cce@rhel7: CCE-80328-8) is used as a global identifier that maps the rule to the product over the lifetime of the rule. There should only be one CCE mapped to a rule as a global identifier; any other usage of CCE is no longer considered a best practice. CCEs are also product dependent, which means that a different CCE must be used for each different product and product version. For example, if cce@rhel7: 80328-8 exists in a rule, that CCE cannot be used for another product or version (e.g. rhel6), and the CCE MUST be retired with the rule. Available CCEs that can be assigned to new rules are listed in the shared/references/cce-rhel-avail.txt file.
references
: This is related to the compliance document line items that the rule applies to. These can be attributes such asstigid
,srg
,nist
, etc., whose keys may be modified with a product (e.g.,stigid@rhel6
) to restrict what products a reference identifier applies to. Depending on the type of reference (e.g. catalog, rulei, etc.) will depend on how many can be added to a single rule. In addition, certain references in a rule such asstigid
only apply to a certain product and product version; they cannot be used for multiple products and versionsKey Reference Type Mapping to Rule Example Format cis
Center for Internet Security (catalog identifier)
0-to-many, 0-to-1 is preferred
5.2.5
cjis
Criminal Justice Information System (catalog identifier)
0-to-1
5.4.1.1
cui
Controlled Unclassified Information (catalog identifier)
0-to-many, 0-to-1 is preferred
3.1.7
disa
DISA Control Correlation Identifiers (catalog identifier)
0-to-many
CCI-000018,CCI-000172,CCI-001403
srg, vmmsrg, etc.
DISA Security Requirements Guide (catalog identifier)
0-to-many
SRG-OS-000003-GPOS-00004
stigid@<product><product_version>
DISA STIG identifier (rule identifier)
0-to-1
RHEL-07-030874
hipaa
Health Insurance Portability and Accountability Act of 1996 (HIPAA) (catalog identifier)
0-to-many
164.308(a)(1)(ii)(D),164.308(a)(3)(ii)(A)
nist
National Institute for Standards and Technology 800-53 (catalog identifier)
0-to-many
AC-2(4),AC-17(7),AU-1(b)
nist-csf
National Institute for Standards and Technology Cybersecurity Framework (catalog identifier)
0-to-many
DE.AE-3,DE.AE-5,DE.CM-1
ospp
National Information Assurance Partnership (selected control identifier)
0-to-many
FMT_MOF_EXT.1
pcidss
Payment Card Industry Data Security Standard
0-to-many, 0-to-1 is preferred
Req-8.7.c
See linux_os/guide/system/software/disk_partitioning/encrypt_partitions/rule.yml for an example of reference-type attributes, as there are others that are not referenced above.
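For illustration, identifiers and references appear in rule.yml as dictionaries; a sketch assembled from the example values in the tables above (not copied from a real rule) could be:

identifiers:
    cce@rhel7: CCE-80328-8

references:
    srg: SRG-OS-000003-GPOS-00004
    stigid@rhel7: RHEL-07-030874
    nist: AC-2(4),AC-17(7)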
Some of the existing rule definitions contain attributes that use macros. There are two implementations of macros:
- Jinja macros, which are defined in shared/macros.jinja and shared/macros-highlevel.jinja.
- Legacy XSLT macros, which are defined in shared/transforms/*.xslt.
For example, the ocil
attribute of service_ntpd_enabled
uses the ocil_service_enabled
jinja macro.
Due to the need to support Ansible output, which also uses Jinja, we had to modify the control sequences, so macro operations require one more curly brace. For example, an invocation of the partition macro looks like {{{ complete_ocil_entry_separate_partition(part="/tmp") }}} - there are three opening and closing curly braces instead of the two documented in the Jinja guide.
shared/macros.jinja contains specific low-level macros such as systemd_ocil_service_enabled, whereas shared/macros-highlevel.jinja contains general macros such as ocil_service_enabled, which decide which one of the specialized macros to call based on the actual product being used. The macros that are likely to be used in descriptions begin with describe_, whereas macros likely to be used in OCIL entries begin with ocil_.
Sometimes, a rule requires ocil
and ocil_clause
to be specified, and they depend on each other.
Macros that begin with complete_ocil_entry_
were designed for exactly this purpose, as they make sure that OCIL and OCIL clauses are defined and consistent.
Macros that begin with underscores are not meant to be used in descriptions.
To parametrize rules and remediations, as well as Jinja macros, you can use product-specific variables defined in product.yml in the product root directory. Moreover, you can define implied properties, which are variables inferred from them. For example, you can define a condition that checks if the system uses yum or dnf as a package manager and, based on that, populate a variable containing the correct path to the configuration file. The inferring logic is implemented in _get_implied_properties in ssg/yaml.py. Constants and mappings used in implied properties should be defined in ssg/constants.py.
Rules are unselected by default - even if the scanner reads rule definitions, they are effectively ignored during the scan or remediation.
A rule may be selected by any number of profiles, so when the scanner is scanning using a profile the rule is included in, the rule is taken into account.
For example, the rule identified by partition_for_tmp
defined in shared/xccdf/system/software/disk_partitioning.xml
is included in the RHEL7 C2S
profile in rhel7/profiles/C2S.xml
.
Checks are connected to rules by the oval element and the filename in which the check is found. Remediations (i.e. fixes) are assigned to rules based on their base name. Therefore, the rule sshd_print_last_log has a bash fix associated, as there is a bash script shared/fixes/bash/sshd_print_last_log.sh. As there is an Ansible playbook shared/fixes/ansible/sshd_print_last_log.yml, the rule also has an Ansible fix associated.
The rule directory simplifies the structure of a rule and all of its associated content by placing it all under a common directory. The structure of a rule directory looks like the following example:
linux_os/guide/system/group/rule_id/rule.yml
linux_os/guide/system/group/rule_id/bash/ol7.sh
linux_os/guide/system/group/rule_id/bash/shared.sh
linux_os/guide/system/group/rule_id/oval/rhel7.xml
linux_os/guide/system/group/rule_id/oval/shared.xml
To be considered a rule directory, it must be a directory contained in a
benchmark pointed to by some product. The directory must have a name that
is the id of the rule, and must contain a file called rule.yml
which
is a YAML Rule description as described above. This directory can then
contain the following subdirectories:
- anaconda — for Anaconda remediation content, ending in .anaconda
- ansible — for Ansible remediation content, ending in .yml
- bash — for Bash remediation content, ending in .sh
- oval — for OVAL check content, ending in .xml
- puppet — for Puppet remediation content, ending in .pp
- ignition — for Ignition remediation content, ending in .yml
- kubernetes — for Kubernetes remediation content, ending in .yml
In each of these subdirectories, a file named shared.ext
will apply to all
products and be included in all builds, but {{{ product }}}.ext
will
only get included in the build for {{{ product }}}
(e.g., rhel7.xml
above
will only be included in the build of the rhel7
guide content and not in the
ol7
content). Note that .ext
must be substituted for the correct
extension for content of that type (e.g., .sh
for bash
content). Further,
all of these directories are optional and will only be searched for content if
present. Lastly, the product naming of content will not override the contents
of platform
or prodtype
fields in the content itself (e.g., if rhel7
is
not present in the rhel7.xml
OVAL check platform specifier, it will be
included in the build artifacts but later removed because it doesn’t match
the platform).
Currently the build system supports both rule files (discussed above) and rule directories. For example content in this format, please see the rules in linux_os/guide.
To interact with rule directories, the ssg.rules and ssg.rule_dir_stats modules have been created, as well as three utilities:
- utils/rule_dir_json.py — to generate a JSON tree describing the current content of all guides
- utils/rule_dir_stats.py — for analyzing the JSON tree and finding information about specific rules, products, or summary statistics
- utils/rule_dir_diff.py — for diffing two JSON trees (e.g., before and after a major change), using the same interface as rule_dir_stats.py
For more information about these utilities, please see their help text.
To interact with rule.yml files and the OVALs inside a rule directory, the following utilities are provided:
This utility modifies the prodtype field of rules. It supports several commands:
- mod_prodtype.py <rule_id> list - list the computed and actual prodtype of the rule specified by rule_id.
- mod_prodtype.py <rule_id> add <product> [<product> …] - add additional products to the prodtype of the rule specified by rule_id.
- mod_prodtype.py <rule_id> remove <product> [<product> …] - remove products from the prodtype of the rule specified by rule_id.
- mod_prodtype.py <rule_id> replace <replacement> [<replacement> …] - do the specified replacement transformations. A replacement transformation is of the form match~replace, where match and replace are comma-separated lists of products. If all of the products in match exist in the original prodtype of the rule, they are removed and the products in replace are added.
This utility requires an up-to-date JSON tree created by rule_dir_json.py.
This utility modifies the <affected> element of an OVAL check. It supports several commands on a given rule:
- mod_checks.py <rule_id> list - list all OVALs, their computed products, and their actual platforms.
- mod_checks.py <rule_id> delete <product> - delete the OVAL for the specified product.
- mod_checks.py <rule_id> make_shared <product> - move the product OVAL to the shared OVAL (e.g., rhel7.xml to shared.xml).
- mod_checks.py <rule_id> diff <product> <product> - perform a diff between two OVALs (product can be shared to diff against the shared OVAL).
In addition, the mod_checks.py utility supports modifying the shared OVAL with the following commands:
- mod_checks.py <rule_id> add <platform> [<platform> …] - adds the specified platforms to the shared OVAL for the rule specified by rule_id.
- mod_checks.py <rule_id> remove <platform> [<platform> …] - removes the specified platforms from the shared OVAL.
- mod_checks.py <rule_id> replace <replacement> [<replacement> …] - do the specified replacement against the platforms in the shared OVAL. See the description of replace under mod_prodtype.py for more information about the format of a replacement.
This utility requires an up-to-date JSON tree created by rule_dir_json.py.
This utility modifies the <affected> element of a remediation. It supports several commands on a given rule and for the specified remediation language:
- mod_fixes.py <rule_id> <lang> list - list all fixes, their computed products, and their actual platforms.
- mod_fixes.py <rule_id> <lang> delete <product> - delete the fix for the specified product.
- mod_fixes.py <rule_id> <lang> make_shared <product> - move the product fix to the shared fix (e.g., rhel7.sh to shared.sh).
- mod_fixes.py <rule_id> <lang> diff <product> <product> - perform a diff between two fixes (product can be shared to diff against the shared fix).
In addition, the mod_fixes.py utility supports modifying the shared fixes with the following commands:
- mod_fixes.py <rule_id> <lang> add <platform> [<platform> …] - adds the specified platforms to the shared fix for the rule specified by rule_id.
- mod_fixes.py <rule_id> <lang> remove <platform> [<platform> …] - removes the specified platforms from the shared fix.
- mod_fixes.py <rule_id> <lang> replace <replacement> [<replacement> …] - do the specified replacement against the platforms in the shared fix. See the description of replace under mod_prodtype.py for more information about the format of a replacement.
This utility requires an up-to-date JSON tree created by rule_dir_json.py.
This utility can be used to bootstrap and test Kubernetes/OpenShift application checks. See the help output for more detailed usage examples of each of the supported subcommands:
- utils/add_platform_rule.py create --rule=<rule_name> <options> - creates files for a new rule.
- utils/add_platform_rule.py test --rule=<rule_name> <options> - tests a rule against local files using an oscap container.
- utils/add_platform_rule.py cluster-test --rule=<rule_name> <options> - tests a rule against a running OCP4 cluster using compliance-operator.
This utility requires the following:
- The KUBECONFIG environment variable set to a kubeconfig file for a running OCP4 cluster.
- oc and podman in PATH.
Tips:
- The --yamlpath option requires a specialized format to specify the resource element to check. See https://github.com/OpenSCAP/yaml-filter/wiki/YAML-Path-Definition for documentation.
- To use the local test subcommand, first create a YAML file under a directory structure under /tmp that mirrors the API path. For example, if the resource's full path is /api/v1/foo, save the YAML to /tmp/api/v1/foo. Running test will then check the rule against the local file by launching an openscap-1.3.3 container using podman.
Checks are used to evaluate a rule. They are written using a custom OVAL syntax and are stored as XML files inside the checks/oval directory for the desired platform. During the build process, the system transforms the checks into OVAL-compliant checks.
In order to create a new check, you must create a file in the appropriate directory and name it the same as the rule id. This id will also be used as the OVAL id attribute. The content of the file should follow the OVAL specification with these exceptions:
- The root tag must be <def-group>.
- If the OVAL check has to be a certain OVAL version, you can add oval_version="oval_version_number" as an attribute to the root tag. Otherwise, if oval_version does not exist in <def-group>, it is assumed that the OVAL file applies to any OVAL version.
- Don't use the tags <definitions> <tests> <objects> <states>; instead, put the tags <definition> <*_test> <*_object> <*_state> directly inside the <def-group> tag.
- TODO Namespaces
This is an example of a check, written using the custom OVAL syntax, that verifies that the group owning the file /etc/cron.allow is root:
<def-group oval_version="5.11">
<definition class="compliance" id="file_groupowner_cron_allow" version="1">
<metadata>
<title>Verify group who owns 'cron.allow' file</title>
<affected family="unix">
<platform>Red Hat Enterprise Linux 7</platform>
</affected>
<description>The /etc/cron.allow file should be owned by the appropriate
group.</description>
</metadata>
<criteria>
<criterion test_ref="test_groupowner_etc_cron_allow" />
</criteria>
</definition>
<unix:file_test check="all" check_existence="any_exist"
comment="Testing group ownership /etc/cron.allow" id="test_groupowner_etc_cron_allow"
version="1">
<unix:object object_ref="object_groupowner_cron_allow_file" />
<unix:state state_ref="state_groupowner_cron_allow_file" />
</unix:file_test>
<unix:file_state id="state_groupowner_cron_allow_file" version="1">
<unix:group_id datatype="int">0</unix:group_id>
</unix:file_state>
<unix:file_object comment="/etc/cron.allow"
id="object_groupowner_cron_allow_file" version="1">
<unix:filepath>/etc/cron.allow</unix:filepath>
</unix:file_object>
</def-group>
Jinja macros for OVAL checks are located in macros-oval.jinja. These currently include the following high-level macros:
- oval_sshd_config — check a parameter and value in the sshd configuration file
- oval_grub_config — check a parameter and value in the grub configuration file
- oval_check_config_file — check a parameter and value in a given configuration file
- oval_check_ini_file — check a parameter and value in a given section of a given configuration file in "INI" format
Always consider reusing oval_check_config_file when creating new macros; it has some logic that will save you some time (e.g.: platform applicability).
They also include several low-level macros which are used to build the high level macros:
- a set of low-level macros to build the OVAL checks for a line in a file: oval_line_in_file_criterion, oval_line_in_file_test, oval_line_in_file_object, oval_line_in_file_state
- a set of low-level macros to build the OVAL checks to test if a file exists: oval_config_file_exists_criterion, oval_config_file_exists_test, oval_config_file_exists_object
Platform applicability is given by the prodtype property in the rule.yml file. If you are using the oval_check_config_file macro, directly or indirectly, it should be enough to define prodtype; the default is all platforms. If you intend to define your own OVAL check, please consider using the oval_affected macro from macros.jinja.
Whenever possible, please reuse the macros and form high-level simplifications. This ensures consistent, high quality OVAL checks that we can edit in one place and reuse in many places. For more details on which parameters are accepted by the macros, please refer to the inline documentation in the macros-oval.jinja file.
Remediations, also called fixes, are used to change the state of the machine so that previously non-passing rules can pass. There can be multiple versions of the same remediation meant to be executed by different applications, more specifically Ansible, Bash, Anaconda, Puppet, Ignition and Kubernetes. By default all remediation languages are built and included in the datastream.
But each product can specify its own set of remediation languages to include in the datastream via a CMake variable in the product's CMakeLists.txt. See the example below, from the OCP4 product, ocp4/CMakeLists.txt:
set(PRODUCT_REMEDIATION_LANGUAGES "ignition;kubernetes")
Remediations also have to be idempotent: they must be able to be executed multiple times without causing the fixes to accumulate. Ansible's language works in such a way that this behavior is built in; however, for the other languages, the remediations must implement it explicitly. Remediations also carry metadata that should be present at the beginning of the files. This metadata will be converted into XCCDF tags during the build process. This is how it looks and what it means:
# platform = multi_platform_all
# reboot = false
# strategy = restrict
# complexity = low
# disruption = low
Field | Description | Accepted values
---|---|---
platform | CPE name, CPE applicability language expression or even wildcards declaring which platforms the fix can be applied to | The default CPE dictionary is packaged along with openscap. Custom CPE dictionaries can be used. Wildcards are multi_platform_[all, oval, fedora, debian, ubuntu, linux, rhel, openstack, opensuse, rhev, sle].
reboot | Whether or not a reboot is necessary after the fix | true, false
strategy | The method or approach for making the described fix. Only informative for now | unknown, configure, disable, enable, patch, policy, restrict, update
complexity | The estimated complexity or difficulty of applying the fix to the target. Only informative for now | unknown, low, medium, high
disruption | An estimate of the potential for disruption or operational degradation that the application of this fix will impose on the target. Only informative for now | unknown, low, medium, high
Important: The minimum version of Ansible must be at the latest supported version. See https://access.redhat.com/support/policy/updates/ansible-engine for information on the supported Ansible versions.
Ansible remediations are either:
- stored as .yml files in the ansible directory in the rule directory,
- generated from templates, or
- generated using Jinja2 macros.
They are meant to be executed by Ansible itself when requested by openscap, so they are written using Ansible's own language, with the following exceptions:
- The remediation content must be only the tasks section of what would be a playbook.
- Tasks can include blocks for grouping related tasks.
- The when clause will get augmented in certain scenarios.
- Notifications and handlers are not supported.
- Tags are not necessary, because they are automatically generated during the build of the content.
Here is an example of an Ansible remediation that ensures SELinux is not disabled in grub:
# platform = multi_platform_rhel,multi_platform_fedora
# reboot = false
# strategy = restrict
# complexity = low
# disruption = low
- name: Ensure SELinux Not Disabled in /etc/default/grub
  replace:
    dest: /etc/default/grub
    regexp: selinux=0
The Ansible remediation will get included by our build system into the SCAP datastream in the fix element of the respective rule.
The build system generates an Ansible Playbook from the remediation for all profiles. The generated Playbook is located in /build/<product>/playbooks/<profile_id>/<rule_id>.yml.
For each rule in the given product we also generate an Ansible Playbook regardless of the presence of the rule in any profile. The generated Playbook is located in /build/<product>/playbooks/all/<rule_id>.yml. The /build/<product>/playbooks/all/ directory represents the virtual (all) profile, which consists of all rules in the product. Due to undefined XCCDF Value selectors in this pseudo-profile, these Playbooks use the defaults of XCCDF Values when applicable.
We also build a profile Playbook that contains tasks for all rules in the profile. The Playbook is generated in /build/ansible/<product>-playbook-<profile_id>.yml.
Jinja macros for Ansible content are located in /shared/macros-ansible.jinja. These currently include the following high-level macros:
- ansible_sshd_set — set a parameter in the sshd configuration
- ansible_etc_profile_set — ensure a command gets executed or a variable gets set in /etc/profile or /etc/profile.d
- ansible_tmux_set — set a command in the tmux configuration
- ansible_deregexify_banner_etc_issue — formats a banner regex for use in /etc/issue
- ansible_deregexify_banner_dconf_gnome — formats a banner regex for use in dconf
They also include several low-level macros:
- ansible_lineinfile — ensure a line is in a given file
- ansible_stat — check the status of a path on the file system
- ansible_find — find all files with matched content
- ansible_only_lineinfile — ensure that no lines matching the regex are present and add the given line
- ansible_set_config_file — for configuration files; set the given configuration value and ensure no conflicting values
- ansible_set_config_file_dir — for configuration files and files in configuration directories; set the given configuration value and ensure no conflicting values
Low-level macros to make login banner regular expressions usable in Ansible remediations:
- ansible_deregexify_multiple_banners — strips the multibanner regex and keeps only the first banner
- ansible_deregexify_banner_space — strips the whitespace-or-newline regex
- ansible_deregexify_banner_newline — strips the newline-or-newline-escape-sequence regex
- ansible_deregexify_banner_newline_token — strips the newline token for a newline escape sequence regex
- ansible_deregexify_banner_backslash — strips the backslash regex
When msg is absent from any of the above macros, the rule title will be substituted instead.
Whenever possible, please reuse the macros and form high-level simplifications. This ensures consistent, high quality remediations that we can edit in one place and reuse in many places.
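For instance, an Ansible remediation file could consist of a single high-level macro invocation, as in the sketch below (the parameter names here are assumptions; check the macro definition in /shared/macros-ansible.jinja for the real signature):

# platform = multi_platform_all
# reboot = false
# strategy = configure
# complexity = low
# disruption = low
{{{ ansible_sshd_set(parameter='PrintLastLog', value='yes') }}}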
Bash remediations are stored as shell script files in the bash directory in the rule directory. You can make use of any available command, but beware of too specific or complex solutions, as they may lead to a narrow range of supported platforms. There are a number of already-written Bash remediation functions available in the shared/bash_remediation_functions/ directory; it is possible one of them is exactly what you are looking for.
Below is an example of a Bash remediation that sets the maximum number of days a password may be used:
# platform = Red Hat Enterprise Linux 7
. /usr/share/scap-security-guide/remediation_functions
populate var_accounts_maximum_age_login_defs
grep -q ^PASS_MAX_DAYS /etc/login.defs && \
    sed -i "s/PASS_MAX_DAYS.*/PASS_MAX_DAYS $var_accounts_maximum_age_login_defs/g" /etc/login.defs
if [ $? -ne 0 ]; then
    echo "PASS_MAX_DAYS $var_accounts_maximum_age_login_defs" >> /etc/login.defs
fi
When writing new Bash remediation content, please follow these guidelines:
- Use four spaces for indentation rather than tabs.
- You can use macros from shared/macros-bash.jinja in the remediation content. If the macro is used from a nested block, use the indent Jinja2 filter, assuming the 4-space indentation. Typically, you want to call the macro with the intended indentation, and as indent doesn't indent the first line by default, you just pass the number of spaces as the only argument. See the remediation for the rule ensure_fedora_gpgkey_installed for reference.
- Prefer to use sed rather than awk.
- Try to keep expressions simple; avoid double negations. Use compound lists in moderation and only if you understand them.
- Test your script in "strict mode" with set -e -o pipefail specified at the top of it. Make sure that the script doesn't end prematurely in strict mode.
- Beware of constructs such as [ $x = 1 ] && echo "$x is one", as they violate the previous point. [ $x != 1 ] || echo "$x is one" is OK.
- Use the die function defined in remediation_functions to handle exceptions, such as [ -f "$config_file" ] || die "Couldn't find the configuration file '$config_file'".
- Run shellcheck over your remediation script. Make sure that you fix all warnings that are applicable. If you are not sure, mention those warnings in the pull request description.
- Use POSIX syntax in regular expressions, so prefer grep '^[[:space:]]*something' over grep '^\s*something'.
Jinja macros that generate Bash remediations can be found in shared/macros-bash.jinja.
Available high-level Jinja macros to generate Bash remediations:
- bash_sshd_config_set - Set SSH Daemon configuration option in /etc/ssh/sshd_config.
- bash_auditd_config_set - Set Audit Daemon option in /etc/audit/auditd.conf.
- bash_coredump_config_set - Set Coredump configuration in /etc/systemd/coredump.conf
- bash_package_install - Install a package
- bash_package_remove - Remove a package
- bash_disable_prelink - Disables prelinking
- bash_dconf_settings - Configure DConf settings for RHEL and Fedora systems
- bash_dconf_lock - Configure DConf locks for RHEL and Fedora systems
- bash_service_command - Enable or disable a service (either with systemctl or xinet.d)
- bash_firefox_js_setting - Configure a setting in a Mozilla Firefox JavaScript configuration file.
- bash_firefox_cfg_setting - Configure a setting in a Mozilla Firefox configuration file.
Available low-level Jinja macros that can be used in Bash remediations:
- die - Function to terminate the remediation
- set_config_file - Add an entry to a text configuration file
Low-level macros to make login banner regular expressions usable in Bash remediations:
- bash_deregexify_multiple_banners - Strips multibanner regex and keeps only the first banner
- bash_deregexify_banner_space - Strips whitespace or newline regex
- bash_deregexify_banner_newline - Strips newline or newline escape sequence regex
- bash_deregexify_banner_newline_token - Strips newline token for a newline escape sequence regex
- bash_deregexify_banner_backslash - Strips backslash regex
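For instance, a complete Bash remediation built from a high-level macro can be as short as the following sketch (the single-argument form of bash_package_install is an assumption; check shared/macros-bash.jinja for the real signature):

# platform = multi_platform_all
# reboot = false
# strategy = enable
# complexity = low
# disruption = low
{{{ bash_package_install("gcc") }}}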
Jinja macros for Kubernetes content are located in /shared/macros-kubernetes.jinja. These currently include the following high-level macros:
- kubernetes_sshd_set - Set SSH Daemon configuration in /etc/ssh/sshd_config.
Available low-level Jinja macros that can be used in Kubernetes remediations:
- kubernetes_machine_config_file - Set a configuration file at a given path
Writing OVAL checks, Bash, or any other content can be tedious work. For certain types of rules we provide templates. If there is a template that can be used for the new rule, you only need to specify the template name and its parameters in rule.yml and the content will be generated during the build.
The templating system currently supports generating OVAL checks and Ansible, Bash, Anaconda, Puppet, Ignition and Kubernetes remediations. All templates can be found in the shared/templates directory.
The files are named template_<TYPE>_<NAME>, where <TYPE> is one of OVAL, ANSIBLE, BASH, ANACONDA, PUPPET, IGNITION or KUBERNETES, and <NAME> is the template name.
To use a template in rule.yml, add the template: key there and fill it in accordingly. The general form is the following:
template:
    name: template_name
    vars:
        param_name: value            # these parameters are individual for each template
        param_name@rhel7: value1
        param_name@rhel8: value2
    backends:                        # optional
        ansible: "off"
        bash: "on"                   # "on" is the implicit value
The vars: key contains template parameters and their values, which will be substituted into the template. Each template has specific parameters. To use different values of a parameter based on the product, append @ followed by the product ID to the parameter name.
The backends: key is optional. By default, all languages supported by a given template will be generated. This key can be used to explicitly opt out from generating a certain type of content for the rule.
For example, to generate all templated content except the Bash remediation for the rule "Package GCC is Installed" using the package_installed template, add the following to rule.yml:
template:
    name: package_installed
    vars:
        pkgname: gcc
    backends:
        bash: "off"
Important: The build system does not support implicit conversion of bool strings when Python 2 is used, so a bash: True argument in the example above would cause a build error. One should always use quoted strings as arguments until Python 2 is completely removed from the list of supported interpreters.
Available templates:
- accounts_password
  - Checks if PAM enforces password quality requirements. Checks the configuration in /etc/pam.d/system-auth (for RHEL 6 systems) or /etc/security/pwquality.conf (on other systems).
  - Parameters:
    - variable - PAM pam_cracklib (on RHEL 6) or pam_pwquality (on other systems) module name, e.g. ucredit, ocredit
    - operation - OVAL operation, e.g. less than or equal
  - Languages: OVAL
- auditd_lineinfile
  - Checks configuration options of the Audit Daemon in /etc/audit/auditd.conf.
  - Parameters:
    - parameter - auditd configuration item
    - value - the value of the configuration item specified by parameter
    - missing_parameter_pass - effective only in OVAL checks; if set to "false" and the parameter is not present in the configuration file, the OVAL check will return false.
  - Languages: Ansible, Bash, OVAL
- audit_rules_dac_modification
  - Checks Audit Discretionary Access Control rules.
  - Parameters:
    - attr - value of the -S argument in the Audit rule, e.g. chmod
  - Languages: Ansible, Bash, OVAL, Kubernetes
- audit_rules_file_deletion_events
  - Ensures auditd collects file deletion events.
  - Parameters:
    - name - value of the -S argument in the Audit rule, e.g. unlink
  - Languages: Ansible, Bash, OVAL
- audit_rules_login_events
  - Checks if there are Audit rules that record attempts to alter logon and logout events.
  - Parameters:
    - path - value of the -w argument in the Audit rule, e.g. /var/run/faillock
  - Languages: Ansible, Bash, OVAL, Kubernetes
- audit_rules_path_syscall
  - Checks if there are Audit rules to record events that modify user/group information via a syscall on a specific file.
  - Parameters:
    - path - path of the protected file, e.g. /etc/shadow
    - pos - position of the argument, e.g. a2
    - syscall - name of the system call, e.g. openat
  - Languages: Ansible, Bash, OVAL
- audit_rules_privileged_commands
  - Ensures auditd collects information on the use of a specified privileged command.
  - Parameters:
    - path - the path of the privileged command, e.g. /usr/bin/mount
  - Languages: Ansible, Bash, OVAL, Kubernetes
- audit_file_contents
  - Ensures that the audit .rules file specified by the filepath parameter contains the contents specified in the contents parameter.
  - Parameters:
    - filepath - path to the audit rules file, e.g. /etc/audit/rules.d/10-base-config.rules
    - contents - expected contents of the file
  - Languages: Ansible, Bash, OVAL
- audit_rules_unsuccessful_file_modification
  - Ensures there is an Audit rule to record unsuccessful attempts to access files.
  - Parameters:
    - name - name of the unsuccessful system call, e.g. creat
  - Languages: Ansible, Bash, OVAL
- audit_rules_unsuccessful_file_modification_o_creat
  - Ensures there is an Audit rule to record unsuccessful attempts to access files when the O_CREAT flag is specified.
  - Parameters:
    - syscall - name of the unsuccessful system call, e.g. openat
    - pos - position of the O_CREAT argument in the syscall, as specified by the -F audit rule argument, e.g. a2
  - Languages: OVAL
- audit_rules_unsuccessful_file_modification_o_trunc_write
  - Ensures there is an Audit rule to record unsuccessful attempts to access files when the O_TRUNC_WRITE flag is specified.
  - Parameters:
    - syscall - name of the unsuccessful system call, e.g. openat
    - pos - position of the O_TRUNC_WRITE argument in the syscall, as specified by the -F audit rule argument, e.g. a2
  - Languages: OVAL
- audit_rules_unsuccessful_file_modification_rule_order
  - Ensures that Audit rules for unauthorized attempts to use a specific system call are ordered correctly.
  - Parameters:
    - syscall - name of the unsuccessful system call, e.g. openat
    - pos - position of the flag parameter in the syscall, as specified by the -F audit rule argument, e.g. a2
  - Languages: OVAL
- audit_rules_usergroup_modification
  - Checks if Audit is configured to record events that modify account changes.
  - Parameters:
    - path - path that should be part of the Audit rule as the value of the -w argument, e.g. /etc/group.
  - Languages: Ansible, Bash, OVAL
- argument_value_in_line
  - Checks that an argument=value pair is present in a line (optionally starting with line_prefix and, optionally, ending with line_suffix) in the file(s) defined by filepath.
  - Parameters:
    - filepath - file(s) to be checked. The value is treated as a regular expression pattern.
    - arg_name - argument name, e.g. audit
    - arg_value - argument value, e.g. '1'
    - line_prefix - the prefix of the line in which the argument-value pair should be present, optional.
    - line_suffix - the suffix of the line in which the argument-value pair should be present, optional.
  - Languages: OVAL
- file_groupowner
  - Checks the group that owns the given file.
  - Parameters:
    - filepath - file path to be checked. If the file path ends with /, it describes a directory.
    - filepath_is_regex - if set to "true", the OVAL will consider the value of filepath as a regular expression.
    - missing_file_pass - if set to "true", the OVAL check will pass when the file is absent. The default value is "false".
    - file_regex - regular expression that matches file names in a directory specified by filepath. Can be set only if the filepath parameter specifies a directory. Note: applies to the base name of files, so if a file /foo/bar/file.txt is processed, only file.txt is tested against file_regex.
    - filegid - group ID (GID)
  - Languages: Ansible, Bash, OVAL
- file_owner
  - Checks the user that owns the given file.
  - Parameters:
    - filepath - file path to be checked. If the file path ends with /, it describes a directory.
    - filepath_is_regex - if set to "true", the OVAL will consider the value of filepath as a regular expression.
    - missing_file_pass - if set to "true", the OVAL check will pass when the file is absent. The default value is "false".
    - file_regex - regular expression that matches file names in a directory specified by filepath. Can be set only if the filepath parameter specifies a directory. Note: applies to the base name of files, so if a file /foo/bar/file.txt is processed, only file.txt is tested against file_regex.
    - fileuid - user ID (UID)
  - Languages: Ansible, Bash, OVAL
- file_permissions
  - Checks permissions (mode) on a given file.
  - Parameters:
    - filepath - file path to be checked. If the file path ends with /, it describes a directory.
    - filepath_is_regex - if set to "true", the OVAL will consider the value of filepath as a regular expression.
    - missing_file_pass - if set to "true", the OVAL check will pass when the file is absent. The default value is "false".
    - file_regex - regular expression that matches file names in a directory specified by filepath. Can be set only if the filepath parameter specifies a directory. Note: applies to the base name of files, so if a file /foo/bar/file.txt is processed, only file.txt is tested against file_regex.
    - filemode - file permissions in octal format, e.g. '0640'.
  - Languages: Ansible, Bash, OVAL
- grub2_bootloader_argument
  - Checks kernel command line arguments in the GRUB 2 configuration.
  - Parameters:
    - arg_name - argument name, e.g. audit
    - arg_value - argument value, e.g. '1'
  - Languages: Ansible, Bash, OVAL
- kernel_module_disabled
  - Checks if the given Linux kernel module is disabled.
  - Parameters:
    - kernmodule - name of the Linux kernel module, e.g. cramfs
  - Languages: Ansible, Bash, OVAL
- mount
  - Checks that a given mount point is located on a separate partition.
  - Parameters:
    - mountpoint - path to the mount point, e.g. /var/tmp
  - Languages: Anaconda, OVAL
- mount_option
  - Checks if a given partition is mounted with a specific option such as "nosuid".
  - Parameters:
    - mountpoint - mount point on the filesystem, e.g. /dev/shm
    - mountoption - mount option, e.g. nosuid
    - filesystem - filesystem in /etc/fstab, e.g. tmpfs. Used only in Bash remediation.
    - type - filesystem type. Used only in Bash remediation.
    - mount_has_to_exist - specifies if the mountpoint entry has to exist in /etc/fstab before the remediation is executed. If set to yes and the mountpoint entry is not present in /etc/fstab, the Bash remediation terminates. If set to no, the mountpoint entry will be created in /etc/fstab.
  - Languages: Anaconda, Ansible, Bash, OVAL
- mount_option_remote_filesystems
  - Checks if all remote filesystems (NFS mounts in /etc/fstab) are mounted with a specific option.
  - Parameters:
    - mountpoint - always set to remote_filesystems
    - mountoption - mount option, e.g. nodev
    - filesystem - filesystem of the new mount point (used when adding a new entry in /etc/fstab), e.g. tmpfs. Used only in Bash remediation.
    - mount_has_to_exist - used only in Bash remediation. Specifies if the mountpoint entry has to exist in /etc/fstab before the remediation is executed. If set to yes and the mountpoint entry is not present in /etc/fstab, the Bash remediation terminates. If set to no, the mountpoint entry will be created in /etc/fstab.
  - Languages: Ansible, Bash, OVAL
- mount_option_removable_partitions
  - Checks if all removable media mounts are mounted with a specific option. Unlike other mount option templates, this template doesn’t use the mount point, but the block device. The block device path (e.g. /dev/cdrom) is always set to var_removable_partition. This is an XCCDF Value, defined in var_removable_partition.var.
  - Parameters:
    - mountoption - mount option, e.g. nodev
  - Languages: Anaconda, Ansible, Bash, OVAL
- package_installed
  - Checks if a given package is installed. Optionally, it can also check whether a specific version or newer is installed.
  - Parameters:
    - pkgname - name of the RPM or DEB package, e.g. tmux
    - evr - optional parameter; it can be used to check if the package is of a specific version or newer. Provide the epoch, version and release in epoch:version-release format, e.g. 0:2.17-55.0.4.el7_0.3. Used only in OVAL checks. The OVAL state uses the operation "greater than or equal" to compare the collected package version with the version in the OVAL state.
  - Languages: Anaconda, Ansible, Bash, OVAL, Puppet
- package_removed
  - Checks if the given package is not installed.
  - Parameters:
    - pkgname - name of the RPM or DEB package, e.g. tmux
  - Languages: Anaconda, Ansible, Bash, OVAL, Puppet
- sebool
  - Checks values of SELinux booleans.
  - Parameters:
    - seboolid - name of the SELinux boolean, e.g. cron_userdomain_transition
    - sebool_bool - the value of the SELinux boolean. Can be either "true" or "false". If this parameter is not specified, the rule will use the XCCDF Value var_<seboolid>. These XCCDF Values are usually defined in the same directory where the rule.yml that describes the rule is located. The seboolid will be replaced by a SELinux boolean, for example selinuxuser_execheap, and in the profile you can use var_selinuxuser_execheap to turn the SELinux boolean on or off.
  - Languages: Ansible, Bash, OVAL
- service_disabled
  - Checks if a service is disabled. Uses either systemd or SysV init based on the product configuration in product.yml.
  - Parameters:
    - servicename - name of the service.
    - packagename - name of the package that provides this service. This argument is optional; if packagename is not specified, the name of the package is assumed to be the same as the name of the service.
    - daemonname - name of the daemon. This argument is optional; if daemonname is not specified, the name of the daemon is assumed to be the same as the name of the service.
  - Languages: Ansible, Bash, OVAL, Puppet, Ignition, Kubernetes
- service_enabled
  - Checks if a system service is enabled. Uses either systemd or SysV init based on the product configuration in product.yml.
  - Parameters:
    - servicename - name of the service.
    - packagename - name of the package that provides this service. This argument is optional; if packagename is not specified, the name of the package is assumed to be the same as the name of the service.
    - daemonname - name of the daemon. This argument is optional; if daemonname is not specified, the name of the daemon is assumed to be the same as the name of the service.
  - Languages: Ansible, Bash, OVAL, Puppet
- shell_lineinfile
  - Checks shell variable assignments in files. Remediations will paste assignments with single shell quotes unless there is a dollar sign in the value string, in which case double quotes are used. The OVAL checks for a match with either no quotes, a single-quoted string, or a double-quoted string.
  - Parameters:
    - path - what file to check.
    - parameter - name of the shell variable, e.g. SHELL.
    - value - value of the shell variable specified by parameter, e.g. "/bin/bash". Don’t pass extra shell quoting - that will be handled on the lower level.
    - no_quotes - if set to "true", the assigned value has to be without quotes during the check, and the remediation doesn’t quote assignments either.
    - missing_parameter_pass - if set to "true", the OVAL check will pass if the parameter is not present in the target file.
  - Languages: Ansible, Bash, OVAL
  - Example: A template invocation specifying that parameter HISTSIZE should be set to value 500 in /etc/profile will produce a check that passes if any of the following lines is present in /etc/profile:
    - HISTSIZE=500
    - HISTSIZE="500"
    - HISTSIZE='500'
    The remediation would insert one of the quoted forms if the line was not present. If no_quotes were set in the template, only the first form would be checked for, and the unquoted assignment would be inserted into the file by the remediation if not present. A rule.yml sketch of this exact invocation appears right after this template list.
- sshd_lineinfile
  - Checks SSH server configuration items in /etc/ssh/sshd_config.
  - Parameters:
    - parameter - name of the SSH configuration option, e.g. KerberosAuthentication
    - value - value of the SSH configuration option specified by parameter, e.g. "no".
    - missing_parameter_pass - if set to "true", the OVAL check will pass if the parameter is not present in /etc/ssh/sshd_config.
  - Languages: Ansible, Bash, OVAL, Kubernetes
- sysctl
  - Checks sysctl parameters. The OVAL definition checks both configuration and runtime settings and requires both of them to be set to the desired value to return true.
  - Parameters:
    - sysctlvar - name of the sysctl variable, e.g. net.ipv4.conf.all.secure_redirects.
    - datatype - data type of the sysctl value, e.g. int.
    - sysctlval - value of the sysctl variable, e.g. '1'. If this parameter is not specified, an XCCDF Value is used instead.
  - Languages: Ansible, Bash, OVAL
- timer_enabled
  - Checks if a SystemD timer unit is enabled.
  - Parameters:
    - timername - name of the SystemD timer unit, without the timer suffix, e.g. dnf-automatic.
    - packagename - name of the RPM package which provides the SystemD timer unit. This parameter is optional; if it is not provided, it is assumed that the name of the RPM package is the same as the name of the SystemD timer unit.
  - Languages: Ansible, Bash, OVAL
- yamlfile_value
  - Checks if value(s) of a certain type is (are) present in a YAML (or JSON) file at a given path.
  - Parameters:
    - ocp_data - if set to "true", the filepath is treated as a part of the dump of the OCP configuration with the ocp_data_root prefix; optional.
    - filepath - full path to the file to check
    - yamlpath - OVAL’s YAML Path expression.
    - entity_check (CheckEnumeration) - entity_check value for the state’s value, optional. If omitted, the entity_check attribute is not set and is treated by OVAL as 'all'. Possible options are all, at least one, none satisfy and only one.
    - check_existence (ExistenceEnumeration) - check_existence value for the yamlfilecontent_test, optional. If omitted, the check_existence attribute defaults to 'only_one_exists'. Possible options are all_exist, any_exist, at_least_one_exists, none_exist, only_one_exists.
    - values - a list of dictionaries with values to check, where:
      - key - the YAML key to check, optional. Used when the yamlpath expression yields a map.
      - value - the value to check.
      - type (SimpleDatatypeEnumeration) - datatype for the state’s field (child of value), optional. If omitted, the datatype is treated as OVAL’s default 'string'. The most common datatypes are string and int. For the complete list, check the reference link.
      - operation (OperationEnumeration) - operation value for the state’s field (child of value), optional. If omitted, the operation attribute is not set; OVAL’s default operation is 'equals'. The most common operations are equals, not equal, pattern match, greater than or equal and less than or equal. For the complete list of operations, check the reference link.
      - entity_check (CheckEnumeration) - entity_check value for the state’s field (child of value), optional. If omitted, the entity_check attribute is not set and is treated by OVAL as 'all'. Possible options are all, at least one, none satisfy and only one.
  - Languages: OVAL
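To make the shell_lineinfile example above concrete, here is a sketch of the rule.yml invocation for the HISTSIZE scenario described in that entry; the parameter names follow the template documentation above, and the rule itself is hypothetical:

template:
    name: shell_lineinfile
    vars:
        path: /etc/profile
        parameter: HISTSIZE
        value: "500"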
The set of currently available templates can be extended by developing a new template.
1) Create the template files, one for each type of content. Each one should be named template_<TYPE>_<NAME>, where <TYPE> is one of OVAL, ANSIBLE, BASH, ANACONDA, PUPPET, IGNITION or KUBERNETES and <NAME> is what we will call the template name. Create these files in the shared/templates directory.
Use the Jinja syntax we use elsewhere in the project; refer to the earlier section on Jinja macros for more information. The parameters should be named using uppercase letters, because the keys from rule.yml are converted to uppercase by the code that substitutes the parameters into the template.
Notice that OVAL should be written in shorthand format. This is an example of an OVAL template file called template_OVAL_package_installed:
<def-group>
<definition class="compliance" id="package_{{{ PKGNAME }}}_installed"
version="1">
<metadata>
<title>Package {{{ PKGNAME }}} Installed</title>
<affected family="unix">
<platform>multi_platform_all</platform>
</affected>
<description>The {{{ pkg_system|upper }}} package {{{ PKGNAME }}} should be installed.</description>
</metadata>
<criteria>
<criterion comment="package {{{ PKGNAME }}} is installed"
test_ref="test_package_{{{ PKGNAME }}}_installed" />
</criteria>
</definition>
{{{ oval_test_package_installed(package=PKGNAME, evr=EVR, test_id="test_package_"+PKGNAME+"_installed") }}}
</def-group>
And here is the Ansible template file called template_ANSIBLE_package_installed:
# platform = multi_platform_all
# reboot = false
# strategy = enable
# complexity = low
# disruption = low

- name: Ensure {{{ PKGNAME }}} is installed
  package:
    name: "{{{ PKGNAME }}}"
    state: present
2) Implement a callback function which will process the parameters before passing them to the Jinja engine. For example, this callback can provide default values, escape characters, check if parameters are correct, or any other processing of the parameters specific for the template.
The callback functions are located in ssg/templates.py.
The callback function must have the same name as the template name. This is the name that is used in the name: key in rule.yml, for example package_installed.
The callback must have 2 parameters:
- data - dictionary which contains the contents of the vars: dictionary from rule.yml
- lang - string describing the language; can be one of: "anaconda", "ansible", "bash", "oval", "puppet", "ignition", "kubernetes"
The callback function is executed for every supported language, so it can process the data differently for each language. The function must always return the (modified) data dictionary. The function must always be defined even if no processing of data is needed; in that situation, the function just returns the data parameter, as the minimal sketch below shows.
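A minimal sketch of such a pass-through callback (the template name package_installed is used only as an illustration; the @template decorator from the next step is omitted here):

def package_installed(data, lang):
    # No language-specific processing is needed; return the parameters unchanged.
    return data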
3) Decorate the callback function with the @template decorator. The decorator will register the template in the templating engine. The decorator has a single parameter, which is a list of the languages that the template provides. The list can contain the following values: "anaconda", "ansible", "bash", "oval", "puppet", "ignition", "kubernetes". The decorator parameter is mandatory.
Insert the @template decorator on the line before the callback function definition.
For example, if the template name is package_installed and it provides an Ansible template in shared/templates/template_ANSIBLE_package_installed and an OVAL template in shared/templates/template_OVAL_package_installed, then there must be a callback function package_installed in ssg/templates.py and the callback must be decorated with @template(["ansible", "oval"]). In this example, decorating the callback function lets the templating engine know that Ansible and OVAL content should be generated if any rule uses package_installed in rule.yml.
The following example shows the callback function for the mount_option template, including the @template decorator. The example function declares that there is a template named mount_option that provides Ansible, Bash and OVAL content. The code takes the data argument, which is a dictionary with template parameters from rule.yml, and based on lang it modifies the template parameters and returns the modified dictionary.
@template(["ansible", "bash", "oval"])
def mount_option(data, lang):
if lang == "oval":
data["pointid"] = re.sub(r"[-\./]", "_", data["mountpoint"]).lstrip("_")
else:
data["mountoption"] = re.sub(" ", ",", data["mountoption"])
return data
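For context, a rule.yml invocation that would feed this callback might look like the following sketch; the values are illustrative, and the parameter names follow the mount_option template documentation above:

template:
    name: mount_option
    vars:
        mountpoint: /dev/shm
        mountoption: nosuid
        filesystem: tmpfs
        type: tmpfs

For the OVAL content, the callback above would then derive pointid as dev_shm from the mountpoint value.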
You can use Jinja macros and Jinja filters in the template code. ComplianceAsCode supports all built-in Jinja filters. There are also some custom filters useful for content authoring defined in the project:
- escape_id
  - Replaces all non-word (regex \W) characters with underscore. Useful for sanitizing ID strings, as it is compatible with OVAL IDs oval:[A-Za-z0-9_\-\.]+:ste:[1-9][0-9]*.
- escape_regex
  - Escapes characters in the string so that it can be used as a part of some regular expression; behaves similarly to Python 3’s re.escape.
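For illustration, filters are applied with the usual Jinja pipe syntax; FILEPATH here stands for a hypothetical template parameter, and the sample outputs follow the behavior described above:

{{{ FILEPATH | escape_id }}}     # /etc/passwd becomes _etc_passwd
{{{ FILEPATH | escape_regex }}}  # /etc/rc.d becomes /etc/rc\.d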
ComplianceAsCode uses ctest to orchestrate testing upstream. To run the test suite, go to the build folder and execute ctest:
cd build/
ctest -j 4
Check out the various ctest options to perform specific testing; you can rerun just one test or skip all tests that match a regex (see -R, -E and other options in the ctest man page).
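For example (the test-name patterns here are illustrative; list the real test names with ctest -N):

# List available tests without running them
ctest -N
# Run only the tests whose names match a regex
ctest -R rhel7
# Skip all tests whose names match a regex
ctest -E oscap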
Tests are added using the add_test cmake call. Each test should finish with a 0 exit code if everything went well and a non-zero one if something failed. Output (both stdout and stderr) is collected by ctest and stored in logs or displayed. Make sure you never hard-code a path to any tool when doing testing (or anything, really) in the cmake code. Always use configuration to find all the paths and then use the respective variable.
See some of the existing testing code in cmake/SSGCommon.cmake.
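As a minimal sketch of that practice (the test name and command here are illustrative, not taken from the project; see cmake/SSGCommon.cmake for real uses):

# Locate the tool through configuration instead of hard-coding its path.
find_program(OSCAP_EXECUTABLE NAMES oscap)

# Register a test; a 0 exit code means the test passed.
add_test(
    NAME "validate-example-datastream"
    COMMAND "${OSCAP_EXECUTABLE}" ds sds-validate "${CMAKE_BINARY_DIR}/ssg-example-ds.xml"
)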
The ComplianceAsCode/content repo runs some end-to-end tests for the ocp4 content. These tests run over the OpenShift infrastructure, spawn an ephemeral cluster and run tests targeted at a specific profile.
The current workflow is as follows:
- Install needed prerequisites (e.g. the compliance-operator and other resources it might need)
- Run a scan using the specific profile (for the specific product)
- Run manual remediations
- Run automated remediations
- Wait for remediations to converge
- Run a second scan
The test will pass if:
- There are no errors in the scan runs
- There are fewer rule failures after the remediations have been applied
- The cluster status is not inconsistent
Rules may have extra verifications on them. For instance, one is able to verify if:
- The rule’s expected result is obtained on a clean run.
- The rule’s result changes after a remediation has been applied.
If an automated remediation is not possible, one is also able to create a "manual" remediation that will be run as a bash script. The end-to-end tests have a 15-minute timeout for the manual remediation scripts to be executed.
In order to test that a rule is yielding expected results in the e2e tests, one must create a file called e2e.yml in a tests/ocp4/ directory inside the rule’s directory itself.
The format looks as follows:
---
default_result: [PASS|FAIL]
result_after_remediation: [PASS|FAIL]
Where:
- default_result will look at the result when the first scan is run.
- result_after_remediation will look at the result when the second scan is run. The second scan takes place after remediations are applied.
Let’s look at an example: for the controller_use_service_account rule, which exists in the applications/openshift/controller/ directory, the directory tree will contain the rule definition and the test file:
.
├── rule.yml
└── tests
    └── ocp4
        └── e2e.yml
In this case, we just want to verify that the default value returns a passing result, so e2e.yml has the following content:
---
default_result: PASS
Let’s look at another example:
For the api_server_encryption_provider_config rule we want to apply a remediation which cannot be applied via the compliance-operator, so we’ll need a manual remediation for this.
The directory structure looks as follows:
.
├── rule.yml
└── tests
    └── ocp4
        ├── e2e-remediation.sh
        └── e2e.yml
Where our test contains information for both the first default result and the expected result after the remediation has been applied:
---
default_result: FAIL
result_after_remediation: PASS
The test infrastructure expects the remediation script to be named e2e-remediation.sh. The script should:
- Apply the remediation.
- Verify that the status has converged.
In the aforementioned case, the remediation script is as follows:
#!/bin/bash
oc patch apiservers cluster -p '{"spec":{"encryption":{"type":"aescbc"}}}' --type=merge
while true; do
    status=$(oc get openshiftapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type=="Encrypted")]}{.reason}')
    echo "Current Encryption Status:"
    oc get openshiftapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type=="Encrypted")]}{.reason}{"\n"}{.message}{"\n"}'
    if [ "$status" == "EncryptionCompleted" ]; then
        exit 0
    fi
    sleep 5
done
Here, we apply the remediation (through the patch command) and probe the cluster for status. Once the cluster converges, we exit the script with 0, which is a successful status.
The e2e test run will time out after 15 minutes if a script doesn’t converge.
Note that the scripts will be run in parallel, but the test run will wait for all of them to be done.
Note that it’s possible to run the e2e tests on a cluster of your choice. To do so, ensure that you have a KUBECONFIG with appropriate credentials that points to the cluster where you’ll run the tests.
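For instance (the file path is illustrative):

export KUBECONFIG="$HOME/.kube/e2e-cluster-config"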
From the root of the ComplianceAsCode/content repository, run:
$ make -f tests/ocp4e2e/Makefile e2e PROFILE=<profile> PRODUCT=<product>
Where profile is the name of the profile you want to test, and product is a product relevant to OCP4, such as ocp4 or rhcos4.
For instance, to run the tests for the cis benchmark for ocp4, do:
$ make -f tests/ocp4e2e/Makefile e2e PROFILE=cis PRODUCT=ocp4
For more information on the available options, do:
$ make -f tests/ocp4e2e/Makefile help
It is important to note that the tests will make changes to your cluster and there currently isn’t an option to clean them up, so take that into account before running these tests.
The ComplianceAsCode build and templating system is mostly written in Python.
- The common pattern is to dynamically add shared/modules to the import path. The ssgcommon module has many useful utility functions and predefined constants. See the scripts at ./build-scripts as an example of this practice; a sketch of the pattern follows this list.
- Follow the PEP8 standard.
- Try to keep the length of most of your lines under 80 characters. Although the 99-character limit is within PEP8 requirements, there is no reason for most lines to be that long.
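As a sketch of the import-path pattern mentioned in the first point (the relative path is an assumption and depends on where the script lives in the tree):

import os
import sys

# Make the shared Python modules importable before importing ssgcommon.
# The path below assumes the script lives one level under the repository root.
sys.path.insert(0, os.path.join(os.path.dirname(__file__), "..", "shared", "modules"))

import ssgcommon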
This project was created by renaming the SCAP Security Guide Project (SSG), a project that provided security policies in SCAP format. The project outgrew the former name SCAP Security Guide and changed it to imply a broader scope than just SCAP. Therefore, SCAP Security Guide has been transformed into ComplianceAsCode/content, which better describes the goal of the project.
This git repository was created by simply renaming and moving the SCAP Security Guide (SSG) repository to a different GitHub organization.
Due to this history, the repository contains mentions of SCAP Security Guide or ssg. Some of them are kept for backwards compatibility. For example, the output files produced by our build system still start with the ssg- prefix, and various Linux distributions still ship our files in the scap-security-guide package.