diff --git a/avd_docs/aws/accessanalyzer/AVD-AWS-0175/docs.md b/avd_docs/aws/accessanalyzer/AVD-AWS-0175/docs.md index d5316ab1..5de9dde9 100644 --- a/avd_docs/aws/accessanalyzer/AVD-AWS-0175/docs.md +++ b/avd_docs/aws/accessanalyzer/AVD-AWS-0175/docs.md @@ -1,5 +1,4 @@ - AWS IAM Access Analyzer helps you identify the resources in your organization and accounts, such as Amazon S3 buckets or IAM roles, that are shared with an external entity. This lets you identify unintended access to your resources and data. Access Analyzer @@ -10,7 +9,7 @@ keys, AWS Lambda functions, and Amazon SQS(Simple Queue Service) queues. ### Impact -Reduced visibility of externally shared resources. + {{ remediationActions }} diff --git a/avd_docs/aws/athena/AVD-AWS-0006/docs.md b/avd_docs/aws/athena/AVD-AWS-0006/docs.md index 50b0475c..b18e1ba5 100644 --- a/avd_docs/aws/athena/AVD-AWS-0006/docs.md +++ b/avd_docs/aws/athena/AVD-AWS-0006/docs.md @@ -1,8 +1,9 @@ -Athena databases and workspace result sets should be encrypted at rests. These databases and query sets are generally derived from data in S3 buckets and should have the same level of at rest protection. +Data can be read if the Athena Database is compromised. Athena databases and workspace result sets should be encrypted at rests. These databases and query sets are generally derived from data in S3 buckets and should have the same level of at rest protection. + ### Impact -Data can be read if the Athena Database is compromised + {{ remediationActions }} diff --git a/avd_docs/aws/athena/AVD-AWS-0007/docs.md b/avd_docs/aws/athena/AVD-AWS-0007/docs.md index 17753ac2..868ece26 100644 --- a/avd_docs/aws/athena/AVD-AWS-0007/docs.md +++ b/avd_docs/aws/athena/AVD-AWS-0007/docs.md @@ -1,8 +1,9 @@ -Athena workgroup configuration should be enforced to prevent client side changes to disable encryption settings. +Clients can ignore encryption requirements without enforced configuration. Athena workgroup configuration should be enforced to prevent client side changes to disable encryption settings. + ### Impact -Clients can ignore encryption requirements + {{ remediationActions }} diff --git a/avd_docs/aws/cloudtrail/AVD-AWS-0014/docs.md b/avd_docs/aws/cloudtrail/AVD-AWS-0014/docs.md index c7964a24..3371c5d0 100644 --- a/avd_docs/aws/cloudtrail/AVD-AWS-0014/docs.md +++ b/avd_docs/aws/cloudtrail/AVD-AWS-0014/docs.md @@ -1,8 +1,9 @@ -When creating Cloudtrail in the AWS Management Console the trail is configured by default to be multi-region, this isn't the case with the Terraform resource. Cloudtrail should cover the full AWS account to ensure you can track changes in regions you are not actively operting in. +Activity could be happening in your account in a different region. When creating Cloudtrail in the AWS Management Console the trail is configured by default to be multi-region, this isn't the case with the Terraform resource. Cloudtrail should cover the full AWS account to ensure you can track changes in regions you are not actively operting in. 
+ ### Impact -Activity could be happening in your account in a different region + {{ remediationActions }} diff --git a/avd_docs/aws/cloudtrail/AVD-AWS-0015/docs.md b/avd_docs/aws/cloudtrail/AVD-AWS-0015/docs.md index 88770c40..2575687b 100644 --- a/avd_docs/aws/cloudtrail/AVD-AWS-0015/docs.md +++ b/avd_docs/aws/cloudtrail/AVD-AWS-0015/docs.md @@ -1,8 +1,9 @@ -Using Customer managed keys provides comprehensive control over cryptographic keys, enabling management of policies, permissions, and rotation, thus enhancing security and compliance measures for sensitive data and systems. +Using AWS managed keys does not allow for fine grained control. Using Customer managed keys provides comprehensive control over cryptographic keys, enabling management of policies, permissions, and rotation, thus enhancing security and compliance measures for sensitive data and systems. + ### Impact -Using AWS managed keys does not allow for fine grained control + {{ remediationActions }} diff --git a/avd_docs/aws/cloudtrail/AVD-AWS-0016/docs.md b/avd_docs/aws/cloudtrail/AVD-AWS-0016/docs.md index b33a20ae..d07fc2c9 100644 --- a/avd_docs/aws/cloudtrail/AVD-AWS-0016/docs.md +++ b/avd_docs/aws/cloudtrail/AVD-AWS-0016/docs.md @@ -1,8 +1,9 @@ -Log validation should be activated on Cloudtrail logs to prevent the tampering of the underlying data in the S3 bucket. It is feasible that a rogue actor compromising an AWS account might want to modify the log data to remove trace of their actions. +Illicit activity could be removed from the logs. Log validation should be activated on Cloudtrail logs to prevent the tampering of the underlying data in the S3 bucket. It is feasible that a rogue actor compromising an AWS account might want to modify the log data to remove trace of their actions. + ### Impact -Illicit activity could be removed from the logs + {{ remediationActions }} diff --git a/avd_docs/aws/cloudtrail/AVD-AWS-0161/docs.md b/avd_docs/aws/cloudtrail/AVD-AWS-0161/docs.md index 6285a1b3..44368079 100644 --- a/avd_docs/aws/cloudtrail/AVD-AWS-0161/docs.md +++ b/avd_docs/aws/cloudtrail/AVD-AWS-0161/docs.md @@ -1,10 +1,9 @@ - -CloudTrail logs a record of every API call made in your account. These log files are stored in an S3 bucket. CIS recommends that the S3 bucket policy, or access control list (ACL), applied to the S3 bucket that CloudTrail logs to prevents public access to the CloudTrail logs. Allowing public access to CloudTrail log content might aid an adversary in identifying weaknesses in the affected account's use or configuration. +CloudTrail logs will be publicly exposed, potentially containing sensitive information. CloudTrail logs a record of every API call made in your account. These log files are stored in an S3 bucket. CIS recommends that the S3 bucket policy, or access control list (ACL), applied to the S3 bucket that CloudTrail logs to prevents public access to the CloudTrail logs. Allowing public access to CloudTrail log content might aid an adversary in identifying weaknesses in the affected account's use or configuration. ### Impact -CloudTrail logs will be publicly exposed, potentially containing sensitive information + {{ remediationActions }} diff --git a/avd_docs/aws/cloudtrail/AVD-AWS-0162/docs.md b/avd_docs/aws/cloudtrail/AVD-AWS-0162/docs.md index f525622c..4e5907c5 100644 --- a/avd_docs/aws/cloudtrail/AVD-AWS-0162/docs.md +++ b/avd_docs/aws/cloudtrail/AVD-AWS-0162/docs.md @@ -1,4 +1,5 @@ +Realtime log analysis is not available without enabling CloudWatch logging. 
CloudTrail is a web service that records AWS API calls made in a given account. The recorded information includes the identity of the API caller, the time of the API call, the source IP address of the API caller, the request parameters, and the response elements returned by the AWS service. @@ -8,7 +9,7 @@ For a trail that is enabled in all Regions in an account, CloudTrail sends log f ### Impact -Realtime log analysis is not available without enabling CloudWatch logging + {{ remediationActions }} diff --git a/avd_docs/aws/cloudtrail/AVD-AWS-0163/docs.md b/avd_docs/aws/cloudtrail/AVD-AWS-0163/docs.md index 78adcba9..f8cb1ac5 100644 --- a/avd_docs/aws/cloudtrail/AVD-AWS-0163/docs.md +++ b/avd_docs/aws/cloudtrail/AVD-AWS-0163/docs.md @@ -1,13 +1,11 @@ Amazon S3 bucket access logging generates a log that contains access records for each request made to your S3 bucket. An access log record contains details about the request, such as the request type, the resources specified in the request worked, and the time and date the request was processed. - CIS recommends that you enable bucket access logging on the CloudTrail S3 bucket. - By enabling S3 bucket logging on target S3 buckets, you can capture all events that might affect objects in a target bucket. Configuring logs to be placed in a separate bucket enables access to log information, which can be useful in security and incident response workflows. ### Impact -There is no way to determine the access to this bucket + {{ remediationActions }} diff --git a/avd_docs/aws/codebuild/AVD-AWS-0018/docs.md b/avd_docs/aws/codebuild/AVD-AWS-0018/docs.md index 61295520..8c9e4a20 100644 --- a/avd_docs/aws/codebuild/AVD-AWS-0018/docs.md +++ b/avd_docs/aws/codebuild/AVD-AWS-0018/docs.md @@ -1,8 +1,9 @@ All artifacts produced by your CodeBuild project pipeline should always be encrypted + ### Impact -CodeBuild project artifacts are unencrypted + {{ remediationActions }} diff --git a/avd_docs/aws/config/AVD-AWS-0019/docs.md b/avd_docs/aws/config/AVD-AWS-0019/docs.md index 4a4ce16a..eb3fa783 100644 --- a/avd_docs/aws/config/AVD-AWS-0019/docs.md +++ b/avd_docs/aws/config/AVD-AWS-0019/docs.md @@ -1,10 +1,10 @@ -The configuration aggregator should be configured with all_regions for the source. - +Sources that aren't covered by the aggregator are not include in the configuration. The configuration aggregator should be configured with all_regions for the source. This will help limit the risk of any unmonitored configuration in regions that are thought to be unused. + ### Impact -Sources that aren't covered by the aggregator are not include in the configuration + {{ remediationActions }} diff --git a/avd_docs/aws/documentdb/AVD-AWS-0020/docs.md b/avd_docs/aws/documentdb/AVD-AWS-0020/docs.md index f6f10534..4eeac450 100644 --- a/avd_docs/aws/documentdb/AVD-AWS-0020/docs.md +++ b/avd_docs/aws/documentdb/AVD-AWS-0020/docs.md @@ -1,8 +1,9 @@ Document DB does not have auditing by default. To ensure that you are able to accurately audit the usage of your DocumentDB cluster you should enable export logs. 
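As an aside, the DocumentDB audit-logging check described above is not migrated to Rego in this diff. Purely as an illustrative sketch of how the same deny pattern used elsewhere in this change would apply, assuming the cloud schema exposes an `enabledlogexports` list on `input.aws.documentdb.clusters` (an assumption, not confirmed here):

```rego
# Hypothetical sketch only -- the package name and the enabledlogexports field are assumed.
package builtin.aws.documentdb.audit_sketch

import rego.v1

deny contains res if {
	some cluster in input.aws.documentdb.clusters
	not exports_audit_logs(cluster)
	res := result.new("Cluster does not export audit logs.", cluster)
}

# A cluster passes when "audit" appears among its enabled log exports.
exports_audit_logs(cluster) if {
	some export in cluster.enabledlogexports
	export.value == "audit"
}
```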
+ ### Impact -Limited visibility of audit trail for changes to the DocumentDB + {{ remediationActions }} diff --git a/avd_docs/aws/documentdb/AVD-AWS-0021/docs.md b/avd_docs/aws/documentdb/AVD-AWS-0021/docs.md index 28798f39..24f0c5d1 100644 --- a/avd_docs/aws/documentdb/AVD-AWS-0021/docs.md +++ b/avd_docs/aws/documentdb/AVD-AWS-0021/docs.md @@ -1,8 +1,9 @@ -Encryption of the underlying storage used by DocumentDB ensures that if their is compromise of the disks, the data is still protected. +Unencrypted sensitive data is vulnerable to compromise. Encryption of the underlying storage used by DocumentDB ensures that if their is compromise of the disks, the data is still protected. + ### Impact -Unencrypted sensitive data is vulnerable to compromise. + {{ remediationActions }} diff --git a/avd_docs/aws/documentdb/AVD-AWS-0022/docs.md b/avd_docs/aws/documentdb/AVD-AWS-0022/docs.md index c013e4db..a2e329b7 100644 --- a/avd_docs/aws/documentdb/AVD-AWS-0022/docs.md +++ b/avd_docs/aws/documentdb/AVD-AWS-0022/docs.md @@ -1,8 +1,9 @@ -Encryption using AWS keys provides protection for your DocumentDB underlying storage. To increase control of the encryption and manage factors like rotation use customer managed keys. +Using AWS managed keys does not allow for fine grained control. Encryption using AWS keys provides protection for your DocumentDB underlying storage. To increase control of the encryption and manage factors like rotation use customer managed keys. + ### Impact -Using AWS managed keys does not allow for fine grained control + {{ remediationActions }} diff --git a/avd_docs/aws/dynamodb/AVD-AWS-0023/docs.md b/avd_docs/aws/dynamodb/AVD-AWS-0023/docs.md index 72d8cbf3..1b2a57ae 100644 --- a/avd_docs/aws/dynamodb/AVD-AWS-0023/docs.md +++ b/avd_docs/aws/dynamodb/AVD-AWS-0023/docs.md @@ -1,8 +1,9 @@ -Amazon DynamoDB Accelerator (DAX) encryption at rest provides an additional layer of data protection by helping secure your data from unauthorized access to the underlying storage. +Data can be freely read if compromised. Amazon DynamoDB Accelerator (DAX) encryption at rest provides an additional layer of data protection by helping secure your data from unauthorized access to the underlying storage. + ### Impact -Data can be freely read if compromised + {{ remediationActions }} diff --git a/avd_docs/aws/dynamodb/AVD-AWS-0024/docs.md b/avd_docs/aws/dynamodb/AVD-AWS-0024/docs.md index 0623a53c..c4251db4 100644 --- a/avd_docs/aws/dynamodb/AVD-AWS-0024/docs.md +++ b/avd_docs/aws/dynamodb/AVD-AWS-0024/docs.md @@ -1,10 +1,10 @@ DynamoDB tables should be protected against accidentally or malicious write/delete actions by ensuring that there is adequate protection. - By enabling point-in-time-recovery you can restore to a known point in the event of loss of data. + ### Impact -Accidental or malicious writes and deletes can't be rolled back + {{ remediationActions }} diff --git a/avd_docs/aws/dynamodb/AVD-AWS-0025/docs.md b/avd_docs/aws/dynamodb/AVD-AWS-0025/docs.md index d9bde7a8..8397b845 100644 --- a/avd_docs/aws/dynamodb/AVD-AWS-0025/docs.md +++ b/avd_docs/aws/dynamodb/AVD-AWS-0025/docs.md @@ -1,8 +1,9 @@ -DynamoDB tables are encrypted by default using AWS managed encryption keys. To increase control of the encryption and control the management of factors like key rotation, use a Customer Managed Key. +Using AWS managed keys does not allow for fine grained control. DynamoDB tables are encrypted by default using AWS managed encryption keys. 
To increase control of the encryption and control the management of factors like key rotation, use a Customer Managed Key. + ### Impact -Using AWS managed keys does not allow for fine grained control + {{ remediationActions }} diff --git a/checks/cloud/aws/accessanalyzer/enable_access_analyzer.go b/checks/cloud/aws/accessanalyzer/enable_access_analyzer.go index 77f5afdf..4453db45 100755 --- a/checks/cloud/aws/accessanalyzer/enable_access_analyzer.go +++ b/checks/cloud/aws/accessanalyzer/enable_access_analyzer.go @@ -34,7 +34,8 @@ keys, AWS Lambda functions, and Amazon SQS(Simple Queue Service) queues. Links: []string{ "https://docs.aws.amazon.com/IAM/latest/UserGuide/what-is-access-analyzer.html", }, - Severity: severity.Low, + Severity: severity.Low, + Deprecated: true, }, func(s *state.State) (results scan.Results) { var enabled bool diff --git a/checks/cloud/aws/accessanalyzer/enable_access_analyzer.rego b/checks/cloud/aws/accessanalyzer/enable_access_analyzer.rego new file mode 100644 index 00000000..eb467998 --- /dev/null +++ b/checks/cloud/aws/accessanalyzer/enable_access_analyzer.rego @@ -0,0 +1,45 @@ +# METADATA +# title: Enable IAM Access analyzer for IAM policies about all resources in each region. +# description: | +# AWS IAM Access Analyzer helps you identify the resources in your organization and +# accounts, such as Amazon S3 buckets or IAM roles, that are shared with an external entity. +# This lets you identify unintended access to your resources and data. Access Analyzer +# identifies resources that are shared with external principals by using logic-based reasoning +# to analyze the resource-based policies in your AWS environment. IAM Access Analyzer +# continuously monitors all policies for S3 bucket, IAM roles, KMS(Key Management Service) +# keys, AWS Lambda functions, and Amazon SQS(Simple Queue Service) queues. +# scope: package +# schemas: +# - input: schema["cloud"] +# related_resources: +# - https://docs.aws.amazon.com/IAM/latest/UserGuide/what-is-access-analyzer.html +# custom: +# id: AVD-AWS-0175 +# avd_id: AVD-AWS-0175 +# provider: aws +# service: accessanalyzer +# severity: LOW +# short_code: enable-access-analyzer +# recommended_action: Enable IAM Access analyzer across all regions. 
+# frameworks: +# cis-aws-1.4: +# - "1.20" +# input: +# selector: +# - type: cloud +# subtypes: +# - service: accessanalyzer +# provider: aws +package builtin.aws.accessanalyzer.aws0175 + +import rego.v1 + +deny contains res if { + not has_active_analyzer + res := result.new("Access Analyzer is not enabled.", {}) +} + +has_active_analyzer if { + some analyzer in input.aws.accessanalyzer.analyzers + analyzer.active.value +} diff --git a/checks/cloud/aws/accessanalyzer/enable_access_analyzer_test.go b/checks/cloud/aws/accessanalyzer/enable_access_analyzer_test.go deleted file mode 100644 index ecfedd49..00000000 --- a/checks/cloud/aws/accessanalyzer/enable_access_analyzer_test.go +++ /dev/null @@ -1,75 +0,0 @@ -package accessanalyzer - -import ( - "testing" - - trivyTypes "github.com/aquasecurity/trivy/pkg/iac/types" - - "github.com/aquasecurity/trivy/pkg/iac/providers/aws/accessanalyzer" - - "github.com/aquasecurity/trivy/pkg/iac/state" - - "github.com/aquasecurity/trivy/pkg/iac/scan" - - "github.com/stretchr/testify/assert" -) - -func TestASCheckNoSecretsInUserData(t *testing.T) { - tests := []struct { - name string - input accessanalyzer.AccessAnalyzer - expected bool - }{ - { - name: "No analyzers enabled", - input: accessanalyzer.AccessAnalyzer{}, - expected: true, - }, - { - name: "Analyzer disabled", - input: accessanalyzer.AccessAnalyzer{ - Analyzers: []accessanalyzer.Analyzer{ - { - Metadata: trivyTypes.NewTestMetadata(), - ARN: trivyTypes.String("arn:aws:accessanalyzer:us-east-1:123456789012:analyzer/test", trivyTypes.NewTestMetadata()), - Name: trivyTypes.String("test", trivyTypes.NewTestMetadata()), - Active: trivyTypes.Bool(false, trivyTypes.NewTestMetadata()), - }, - }, - }, - expected: true, - }, - { - name: "Analyzer enabled", - input: accessanalyzer.AccessAnalyzer{ - Analyzers: []accessanalyzer.Analyzer{ - { - Metadata: trivyTypes.NewTestMetadata(), - ARN: trivyTypes.String("arn:aws:accessanalyzer:us-east-1:123456789012:analyzer/test", trivyTypes.NewTestMetadata()), - Name: trivyTypes.String("test", trivyTypes.NewTestMetadata()), - Active: trivyTypes.Bool(true, trivyTypes.NewTestMetadata()), - }, - }, - }, - expected: false, - }, - } - for _, test := range tests { - t.Run(test.name, func(t *testing.T) { - var testState state.State - testState.AWS.AccessAnalyzer = test.input - results := CheckEnableAccessAnalyzer.Evaluate(&testState) - var found bool - for _, result := range results { - if result.Status() == scan.StatusFailed && result.Rule().LongID() == CheckEnableAccessAnalyzer.LongID() { - found = true - } - } - if test.expected { - assert.True(t, found, "Rule should have been found") - } else { - assert.False(t, found, "Rule should not have been found") - } - }) - } -} diff --git a/checks/cloud/aws/accessanalyzer/enable_access_analyzer_test.rego b/checks/cloud/aws/accessanalyzer/enable_access_analyzer_test.rego new file mode 100644 index 00000000..1e37b2b2 --- /dev/null +++ b/checks/cloud/aws/accessanalyzer/enable_access_analyzer_test.rego @@ -0,0 +1,26 @@ +package builtin.aws.accessanalyzer.aws0175_test + +import rego.v1 + +import data.builtin.aws.accessanalyzer.aws0175 as check +import data.lib.test + +test_disallow_no_analyzers if { + r := check.deny with input as {"aws": {"accessanalyzer": {"analyzers": []}}} + test.assert_equal_message("Access Analyzer is not enabled.", r) +} + +test_disallow_analyzer_disabled if { + r := check.deny with input as {"aws": {"accessanalyzer": {"analyzers": [{"active": {"value": false}}]}}} + test.assert_equal_message("Access Analyzer is 
not enabled.", r) +} + +test_allow_one_of_analyzer_disabled if { + r := check.deny with input as {"aws": {"accessanalyzer": {"analyzers": [{"active": {"value": false}}, {"active": {"value": true}}]}}} + test.assert_empty(r) +} + +test_allow_analyzer_enabled if { + r := check.deny with input as {"aws": {"accessanalyzer": {"analyzers": [{"active": {"value": true}}]}}} + test.assert_empty(r) +} diff --git a/checks/cloud/aws/athena/enable_at_rest_encryption.go b/checks/cloud/aws/athena/enable_at_rest_encryption.go index 940db308..32d5d367 100755 --- a/checks/cloud/aws/athena/enable_at_rest_encryption.go +++ b/checks/cloud/aws/athena/enable_at_rest_encryption.go @@ -34,7 +34,8 @@ var CheckEnableAtRestEncryption = rules.Register( Links: cloudFormationEnableAtRestEncryptionLinks, RemediationMarkdown: cloudFormationEnableAtRestEncryptionRemediationMarkdown, }, - Severity: severity.High, + Severity: severity.High, + Deprecated: true, }, func(s *state.State) (results scan.Results) { for _, workgroup := range s.AWS.Athena.Workgroups { diff --git a/checks/cloud/aws/athena/enable_at_rest_encryption.rego b/checks/cloud/aws/athena/enable_at_rest_encryption.rego new file mode 100644 index 00000000..15a90b64 --- /dev/null +++ b/checks/cloud/aws/athena/enable_at_rest_encryption.rego @@ -0,0 +1,53 @@ +# METADATA +# title: Athena databases and workgroup configurations are created unencrypted at rest by default, they should be encrypted +# description: | +# Data can be read if the Athena Database is compromised. Athena databases and workspace result sets should be encrypted at rests. These databases and query sets are generally derived from data in S3 buckets and should have the same level of at rest protection. +# scope: package +# schemas: +# - input: schema["cloud"] +# related_resources: +# - https://docs.aws.amazon.com/athena/latest/ug/encryption.html +# custom: +# id: AVD-AWS-0006 +# avd_id: AVD-AWS-0006 +# provider: aws +# service: athena +# severity: HIGH +# short_code: enable-at-rest-encryption +# recommended_action: Enable encryption at rest for Athena databases and workgroup configurations +# input: +# selector: +# - type: cloud +# subtypes: +# - service: athena +# provider: aws +# terraform: +# links: +# - https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/athena_workgroup#encryption_configuration +# - https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/athena_database#encryption_configuration +# good_examples: checks/cloud/aws/athena/enable_at_rest_encryption.tf.go +# bad_examples: checks/cloud/aws/athena/enable_at_rest_encryption.tf.go +# cloudformation: +# good_examples: checks/cloud/aws/athena/enable_at_rest_encryption.cf.go +# bad_examples: checks/cloud/aws/athena/enable_at_rest_encryption.cf.go +package builtin.aws.athena.aws0006 + +import rego.v1 + +encryption_type_none := "" + +deny contains res if { + some workgroup in input.aws.athena.workgroups + is_encryption_type_none(workgroup.encryption) + res := result.new("Workgroup does not have encryption configured.", workgroup) +} + +deny contains res if { + some database in input.aws.athena.databases + is_encryption_type_none(database.encryption) + res := result.new("Database does not have encryption configured.", database) +} + +is_encryption_type_none(encryption) if { + encryption.type.value == encryption_type_none +} diff --git a/checks/cloud/aws/athena/enable_at_rest_encryption_test.go b/checks/cloud/aws/athena/enable_at_rest_encryption_test.go deleted file mode 100644 index 
02127836..00000000 --- a/checks/cloud/aws/athena/enable_at_rest_encryption_test.go +++ /dev/null @@ -1,95 +0,0 @@ -package athena - -import ( - "testing" - - trivyTypes "github.com/aquasecurity/trivy/pkg/iac/types" - - "github.com/aquasecurity/trivy/pkg/iac/state" - - "github.com/aquasecurity/trivy/pkg/iac/providers/aws/athena" - "github.com/aquasecurity/trivy/pkg/iac/scan" - - "github.com/stretchr/testify/assert" -) - -func TestCheckEnableAtRestEncryption(t *testing.T) { - tests := []struct { - name string - input athena.Athena - expected bool - }{ - { - name: "AWS Athena database unencrypted", - input: athena.Athena{ - Databases: []athena.Database{ - { - Metadata: trivyTypes.NewTestMetadata(), - Encryption: athena.EncryptionConfiguration{ - Metadata: trivyTypes.NewTestMetadata(), - Type: trivyTypes.String(athena.EncryptionTypeNone, trivyTypes.NewTestMetadata()), - }, - }, - }, - }, - expected: true, - }, - { - name: "AWS Athena workgroup unencrypted", - input: athena.Athena{ - Workgroups: []athena.Workgroup{ - { - Metadata: trivyTypes.NewTestMetadata(), - Encryption: athena.EncryptionConfiguration{ - Metadata: trivyTypes.NewTestMetadata(), - Type: trivyTypes.String(athena.EncryptionTypeNone, trivyTypes.NewTestMetadata()), - }, - }, - }, - }, - expected: true, - }, - { - name: "AWS Athena database and workgroup encrypted", - input: athena.Athena{ - Databases: []athena.Database{ - { - Metadata: trivyTypes.NewTestMetadata(), - Encryption: athena.EncryptionConfiguration{ - Metadata: trivyTypes.NewTestMetadata(), - Type: trivyTypes.String(athena.EncryptionTypeSSEKMS, trivyTypes.NewTestMetadata()), - }, - }, - }, - Workgroups: []athena.Workgroup{ - { - Metadata: trivyTypes.NewTestMetadata(), - Encryption: athena.EncryptionConfiguration{ - Metadata: trivyTypes.NewTestMetadata(), - Type: trivyTypes.String(athena.EncryptionTypeSSEKMS, trivyTypes.NewTestMetadata()), - }, - }, - }, - }, - expected: false, - }, - } - for _, test := range tests { - t.Run(test.name, func(t *testing.T) { - var testState state.State - testState.AWS.Athena = test.input - results := CheckEnableAtRestEncryption.Evaluate(&testState) - var found bool - for _, result := range results { - if result.Status() == scan.StatusFailed && result.Rule().LongID() == CheckEnableAtRestEncryption.LongID() { - found = true - } - } - if test.expected { - assert.True(t, found, "Rule should have been found") - } else { - assert.False(t, found, "Rule should not have been found") - } - }) - } -} diff --git a/checks/cloud/aws/athena/enable_at_rest_encryption_test.rego b/checks/cloud/aws/athena/enable_at_rest_encryption_test.rego new file mode 100644 index 00000000..4272ac39 --- /dev/null +++ b/checks/cloud/aws/athena/enable_at_rest_encryption_test.rego @@ -0,0 +1,26 @@ +package builtin.aws.athena.aws0006_test + +import rego.v1 + +import data.builtin.aws.athena.aws0006 as check +import data.lib.test + +test_disallow_database_unencrypted if { + inp := {"aws": {"athena": {"databases": [{"encryption": {"type": {"value": ""}}}]}}} + test.assert_equal_message("Database does not have encryption configured.", check.deny) with input as inp +} + +test_disallow_workgroup_unencrypted if { + inp := {"aws": {"athena": {"workgroups": [{"encryption": {"type": {"value": ""}}}]}}} + test.assert_equal_message("Workgroup does not have encryption configured.", check.deny) with input as inp +} + +test_allow_database_encrypted if { + inp := {"aws": {"athena": {"databases": [{"encryption": {"type": {"value": "SSE_S3"}}}]}}} + test.assert_empty(check.deny) with input as 
inp +} + +test_allow_workgroup_encrypted if { + inp := {"aws": {"athena": {"workgroups": [{"encryption": {"type": {"value": "SSE_S3"}}}]}}} + test.assert_empty(check.deny) with input as inp +} diff --git a/checks/cloud/aws/athena/no_encryption_override.go b/checks/cloud/aws/athena/no_encryption_override.go index 54d94d01..ba40c161 100755 --- a/checks/cloud/aws/athena/no_encryption_override.go +++ b/checks/cloud/aws/athena/no_encryption_override.go @@ -33,7 +33,8 @@ var CheckNoEncryptionOverride = rules.Register( Links: cloudFormationNoEncryptionOverrideLinks, RemediationMarkdown: cloudFormationNoEncryptionOverrideRemediationMarkdown, }, - Severity: severity.High, + Severity: severity.High, + Deprecated: true, }, func(s *state.State) (results scan.Results) { for _, workgroup := range s.AWS.Athena.Workgroups { diff --git a/checks/cloud/aws/athena/no_encryption_override.rego b/checks/cloud/aws/athena/no_encryption_override.rego new file mode 100644 index 00000000..c64ab962 --- /dev/null +++ b/checks/cloud/aws/athena/no_encryption_override.rego @@ -0,0 +1,40 @@ +# METADATA +# title: Athena workgroups should enforce configuration to prevent client disabling encryption +# description: | +# Clients can ignore encryption requirements without enforced configuration. Athena workgroup configuration should be enforced to prevent client side changes to disable encryption settings. +# scope: package +# schemas: +# - input: schema["cloud"] +# related_resources: +# - https://docs.aws.amazon.com/athena/latest/ug/manage-queries-control-costs-with-workgroups.html +# custom: +# id: AVD-AWS-0007 +# avd_id: AVD-AWS-0007 +# provider: aws +# service: athena +# severity: HIGH +# short_code: no-encryption-override +# recommended_action: Enforce the configuration to prevent client overrides +# input: +# selector: +# - type: cloud +# subtypes: +# - service: athena +# provider: aws +# terraform: +# links: +# - https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/athena_workgroup#configuration +# good_examples: checks/cloud/aws/athena/no_encryption_override.tf.go +# bad_examples: checks/cloud/aws/athena/no_encryption_override.tf.go +# cloudformation: +# good_examples: checks/cloud/aws/athena/no_encryption_override.cf.go +# bad_examples: checks/cloud/aws/athena/no_encryption_override.cf.go +package builtin.aws.athena.aws0007 + +import rego.v1 + +deny contains res if { + some workgroup in input.aws.athena.workgroups + not workgroup.enforceconfiguration.value + res := result.new("The workgroup configuration is not enforced.", workgroup.enforceconfiguration) +} diff --git a/checks/cloud/aws/athena/no_encryption_override_test.go b/checks/cloud/aws/athena/no_encryption_override_test.go deleted file mode 100644 index 55ec5241..00000000 --- a/checks/cloud/aws/athena/no_encryption_override_test.go +++ /dev/null @@ -1,65 +0,0 @@ -package athena - -import ( - "testing" - - trivyTypes "github.com/aquasecurity/trivy/pkg/iac/types" - - "github.com/aquasecurity/trivy/pkg/iac/state" - - "github.com/aquasecurity/trivy/pkg/iac/providers/aws/athena" - "github.com/aquasecurity/trivy/pkg/iac/scan" - - "github.com/stretchr/testify/assert" -) - -func TestCheckNoEncryptionOverride(t *testing.T) { - tests := []struct { - name string - input athena.Athena - expected bool - }{ - { - name: "AWS Athena workgroup doesn't enforce configuration", - input: athena.Athena{ - Workgroups: []athena.Workgroup{ - { - Metadata: trivyTypes.NewTestMetadata(), - EnforceConfiguration: trivyTypes.Bool(false, trivyTypes.NewTestMetadata()), - 
}, - }, - }, - expected: true, - }, - { - name: "AWS Athena workgroup enforces configuration", - input: athena.Athena{ - Workgroups: []athena.Workgroup{ - { - Metadata: trivyTypes.NewTestMetadata(), - EnforceConfiguration: trivyTypes.Bool(true, trivyTypes.NewTestMetadata()), - }, - }, - }, - expected: false, - }, - } - for _, test := range tests { - t.Run(test.name, func(t *testing.T) { - var testState state.State - testState.AWS.Athena = test.input - results := CheckNoEncryptionOverride.Evaluate(&testState) - var found bool - for _, result := range results { - if result.Status() == scan.StatusFailed && result.Rule().LongID() == CheckNoEncryptionOverride.LongID() { - found = true - } - } - if test.expected { - assert.True(t, found, "Rule should have been found") - } else { - assert.False(t, found, "Rule should not have been found") - } - }) - } -} diff --git a/checks/cloud/aws/athena/no_encryption_override_test.rego b/checks/cloud/aws/athena/no_encryption_override_test.rego new file mode 100644 index 00000000..55c8140d --- /dev/null +++ b/checks/cloud/aws/athena/no_encryption_override_test.rego @@ -0,0 +1,16 @@ +package builtin.aws.athena.aws0007_test + +import rego.v1 + +import data.builtin.aws.athena.aws0007 as check +import data.lib.test + +test_allow_workgroup_enforce_configuration if { + inp := {"aws": {"athena": {"workgroups": [{"enforceconfiguration": {"value": true}}]}}} + test.assert_empty(check.deny) with input as inp +} + +test_disallow_workgroup_no_enforce_configuration if { + inp := {"aws": {"athena": {"workgroups": [{"enforceconfiguration": {"value": false}}]}}} + test.assert_equal_message("The workgroup configuration is not enforced.", check.deny) with input as inp +} diff --git a/checks/cloud/aws/cloudfront/enable_waf.go b/checks/cloud/aws/cloudfront/enable_waf.go index 38e94b0e..e28ec9e0 100755 --- a/checks/cloud/aws/cloudfront/enable_waf.go +++ b/checks/cloud/aws/cloudfront/enable_waf.go @@ -33,7 +33,8 @@ var CheckEnableWaf = rules.Register( Links: cloudFormationEnableWafLinks, RemediationMarkdown: cloudFormationEnableWafRemediationMarkdown, }, - Severity: severity.High, + Severity: severity.High, + Deprecated: true, }, func(s *state.State) (results scan.Results) { for _, dist := range s.AWS.Cloudfront.Distributions { diff --git a/checks/cloud/aws/cloudtrail/enable_all_regions.go b/checks/cloud/aws/cloudtrail/enable_all_regions.go index 35cf183b..153ca0cf 100755 --- a/checks/cloud/aws/cloudtrail/enable_all_regions.go +++ b/checks/cloud/aws/cloudtrail/enable_all_regions.go @@ -38,7 +38,8 @@ var CheckEnableAllRegions = rules.Register( Links: cloudFormationEnableAllRegionsLinks, RemediationMarkdown: cloudFormationEnableAllRegionsRemediationMarkdown, }, - Severity: severity.Medium, + Severity: severity.Medium, + Deprecated: true, }, func(s *state.State) (results scan.Results) { for _, trail := range s.AWS.CloudTrail.Trails { diff --git a/checks/cloud/aws/cloudtrail/enable_all_regions.rego b/checks/cloud/aws/cloudtrail/enable_all_regions.rego new file mode 100644 index 00000000..c11c8080 --- /dev/null +++ b/checks/cloud/aws/cloudtrail/enable_all_regions.rego @@ -0,0 +1,43 @@ +# METADATA +# title: Cloudtrail should be enabled in all regions regardless of where your AWS resources are generally homed +# description: | +# Activity could be happening in your account in a different region. When creating Cloudtrail in the AWS Management Console the trail is configured by default to be multi-region, this isn't the case with the Terraform resource. 
Cloudtrail should cover the full AWS account to ensure you can track changes in regions you are not actively operting in. +# scope: package +# schemas: +# - input: schema["cloud"] +# related_resources: +# - https://docs.aws.amazon.com/awscloudtrail/latest/userguide/receive-cloudtrail-log-files-from-multiple-regions.html +# custom: +# id: AVD-AWS-0014 +# avd_id: AVD-AWS-0014 +# provider: aws +# service: cloudtrail +# severity: MEDIUM +# short_code: enable-all-regions +# recommended_action: Enable Cloudtrail in all regions +# frameworks: +# cis-aws-1.2: +# - "2.5" +# input: +# selector: +# - type: cloud +# subtypes: +# - service: cloudtrail +# provider: aws +# terraform: +# links: +# - https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/cloudtrail#is_multi_region_trail +# good_examples: checks/cloud/aws/cloudtrail/enable_all_regions.tf.go +# bad_examples: checks/cloud/aws/cloudtrail/enable_all_regions.tf.go +# cloudformation: +# good_examples: checks/cloud/aws/cloudtrail/enable_all_regions.cf.go +# bad_examples: checks/cloud/aws/cloudtrail/enable_all_regions.cf.go +package builtin.aws.cloudtrail.aws0014 + +import rego.v1 + +deny contains res if { + some trail in input.aws.cloudtrail.trails + not trail.ismultiregion.value + res := result.new("Trail is not enabled across all regions.", trail.ismultiregion) +} diff --git a/checks/cloud/aws/cloudtrail/enable_all_regions_test.go b/checks/cloud/aws/cloudtrail/enable_all_regions_test.go deleted file mode 100644 index 4ca1c625..00000000 --- a/checks/cloud/aws/cloudtrail/enable_all_regions_test.go +++ /dev/null @@ -1,65 +0,0 @@ -package cloudtrail - -import ( - "testing" - - trivyTypes "github.com/aquasecurity/trivy/pkg/iac/types" - - "github.com/aquasecurity/trivy/pkg/iac/state" - - "github.com/aquasecurity/trivy/pkg/iac/providers/aws/cloudtrail" - "github.com/aquasecurity/trivy/pkg/iac/scan" - - "github.com/stretchr/testify/assert" -) - -func TestCheckEnableAllRegions(t *testing.T) { - tests := []struct { - name string - input cloudtrail.CloudTrail - expected bool - }{ - { - name: "AWS CloudTrail not enabled across all regions", - input: cloudtrail.CloudTrail{ - Trails: []cloudtrail.Trail{ - { - Metadata: trivyTypes.NewTestMetadata(), - IsMultiRegion: trivyTypes.Bool(false, trivyTypes.NewTestMetadata()), - }, - }, - }, - expected: true, - }, - { - name: "AWS CloudTrail enabled across all regions", - input: cloudtrail.CloudTrail{ - Trails: []cloudtrail.Trail{ - { - Metadata: trivyTypes.NewTestMetadata(), - IsMultiRegion: trivyTypes.Bool(true, trivyTypes.NewTestMetadata()), - }, - }, - }, - expected: false, - }, - } - for _, test := range tests { - t.Run(test.name, func(t *testing.T) { - var testState state.State - testState.AWS.CloudTrail = test.input - results := CheckEnableAllRegions.Evaluate(&testState) - var found bool - for _, result := range results { - if result.Status() == scan.StatusFailed && result.Rule().LongID() == CheckEnableAllRegions.LongID() { - found = true - } - } - if test.expected { - assert.True(t, found, "Rule should have been found") - } else { - assert.False(t, found, "Rule should not have been found") - } - }) - } -} diff --git a/checks/cloud/aws/cloudtrail/enable_all_regions_test.rego b/checks/cloud/aws/cloudtrail/enable_all_regions_test.rego new file mode 100644 index 00000000..c004db30 --- /dev/null +++ b/checks/cloud/aws/cloudtrail/enable_all_regions_test.rego @@ -0,0 +1,16 @@ +package builtin.aws.cloudtrail.aws0014_test + +import rego.v1 + +import data.builtin.aws.cloudtrail.aws0014 as check 
+import data.lib.test + +test_disallow_cloudtrail_without_all_regions if { + r := check.deny with input as {"aws": {"cloudtrail": {"trails": [{"ismultiregion": {"value": false}}]}}} + test.assert_equal_message("CloudTrail is not enabled across all regions.", r) +} + +test_allow_cloudtrail_with_all_regions if { + r := check.deny with input as {"aws": {"cloudtrail": {"trails": [{"ismultiregion": {"value": true}}]}}} + test.assert_empty(r) +} diff --git a/checks/cloud/aws/cloudtrail/enable_log_validation.go b/checks/cloud/aws/cloudtrail/enable_log_validation.go index 39ae7313..1afa5ecb 100755 --- a/checks/cloud/aws/cloudtrail/enable_log_validation.go +++ b/checks/cloud/aws/cloudtrail/enable_log_validation.go @@ -33,7 +33,8 @@ var CheckEnableLogValidation = rules.Register( Links: cloudFormationEnableLogValidationLinks, RemediationMarkdown: cloudFormationEnableLogValidationRemediationMarkdown, }, - Severity: severity.High, + Severity: severity.High, + Deprecated: true, }, func(s *state.State) (results scan.Results) { for _, trail := range s.AWS.CloudTrail.Trails { diff --git a/checks/cloud/aws/cloudtrail/enable_log_validation.rego b/checks/cloud/aws/cloudtrail/enable_log_validation.rego new file mode 100644 index 00000000..f101c52d --- /dev/null +++ b/checks/cloud/aws/cloudtrail/enable_log_validation.rego @@ -0,0 +1,40 @@ +# METADATA +# title: Cloudtrail log validation should be enabled to prevent tampering of log data +# description: | +# Illicit activity could be removed from the logs. Log validation should be activated on Cloudtrail logs to prevent the tampering of the underlying data in the S3 bucket. It is feasible that a rogue actor compromising an AWS account might want to modify the log data to remove trace of their actions. +# scope: package +# schemas: +# - input: schema["cloud"] +# related_resources: +# - https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-log-file-validation-intro.html +# custom: +# id: AVD-AWS-0016 +# avd_id: AVD-AWS-0016 +# provider: aws +# service: cloudtrail +# severity: HIGH +# short_code: enable-log-validation +# recommended_action: Turn on log validation for Cloudtrail +# input: +# selector: +# - type: cloud +# subtypes: +# - service: cloudtrail +# provider: aws +# terraform: +# links: +# - https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/cloudtrail#enable_log_file_validation +# good_examples: checks/cloud/aws/cloudtrail/enable_log_validation.tf.go +# bad_examples: checks/cloud/aws/cloudtrail/enable_log_validation.tf.go +# cloudformation: +# good_examples: checks/cloud/aws/cloudtrail/enable_log_validation.cf.go +# bad_examples: checks/cloud/aws/cloudtrail/enable_log_validation.cf.go +package builtin.aws.cloudtrail.aws0016 + +import rego.v1 + +deny contains res if { + some trail in input.aws.cloudtrail.trails + not trail.enablelogfilevalidation.value + res := result.new("Trail does not have log validation enabled.", trail.enablelogfilevalidation) +} diff --git a/checks/cloud/aws/cloudtrail/enable_log_validation_test.go b/checks/cloud/aws/cloudtrail/enable_log_validation_test.go deleted file mode 100644 index bfe1d465..00000000 --- a/checks/cloud/aws/cloudtrail/enable_log_validation_test.go +++ /dev/null @@ -1,65 +0,0 @@ -package cloudtrail - -import ( - "testing" - - trivyTypes "github.com/aquasecurity/trivy/pkg/iac/types" - - "github.com/aquasecurity/trivy/pkg/iac/state" - - "github.com/aquasecurity/trivy/pkg/iac/providers/aws/cloudtrail" - "github.com/aquasecurity/trivy/pkg/iac/scan" - - 
"github.com/stretchr/testify/assert" -) - -func TestCheckEnableLogValidation(t *testing.T) { - tests := []struct { - name string - input cloudtrail.CloudTrail - expected bool - }{ - { - name: "AWS CloudTrail without logfile validation", - input: cloudtrail.CloudTrail{ - Trails: []cloudtrail.Trail{ - { - Metadata: trivyTypes.NewTestMetadata(), - EnableLogFileValidation: trivyTypes.Bool(false, trivyTypes.NewTestMetadata()), - }, - }, - }, - expected: true, - }, - { - name: "AWS CloudTrail with logfile validation enabled", - input: cloudtrail.CloudTrail{ - Trails: []cloudtrail.Trail{ - { - Metadata: trivyTypes.NewTestMetadata(), - EnableLogFileValidation: trivyTypes.Bool(true, trivyTypes.NewTestMetadata()), - }, - }, - }, - expected: false, - }, - } - for _, test := range tests { - t.Run(test.name, func(t *testing.T) { - var testState state.State - testState.AWS.CloudTrail = test.input - results := CheckEnableLogValidation.Evaluate(&testState) - var found bool - for _, result := range results { - if result.Status() == scan.StatusFailed && result.Rule().LongID() == CheckEnableLogValidation.LongID() { - found = true - } - } - if test.expected { - assert.True(t, found, "Rule should have been found") - } else { - assert.False(t, found, "Rule should not have been found") - } - }) - } -} diff --git a/checks/cloud/aws/cloudtrail/enable_log_validation_test.rego b/checks/cloud/aws/cloudtrail/enable_log_validation_test.rego new file mode 100644 index 00000000..7436046e --- /dev/null +++ b/checks/cloud/aws/cloudtrail/enable_log_validation_test.rego @@ -0,0 +1,16 @@ +package builtin.aws.cloudtrail.aws0016_test + +import rego.v1 + +import data.builtin.aws.cloudtrail.aws0016 as check +import data.lib.test + +test_allow_trail_with_log_validation if { + inp := {"aws": {"cloudtrail": {"trails": [{"enablelogfilevalidation": {"value": true}}]}}} + test.assert_empty(check.deny) with input as inp +} + +test_disallow_trail_without_log_validation if { + inp := {"aws": {"cloudtrail": {"trails": [{"enablelogfilevalidation": {"value": false}}]}}} + test.assert_equal_message("Trail does not have log validation enabled.", check.deny) with input as inp +} diff --git a/checks/cloud/aws/cloudtrail/encryption_customer_key.rego b/checks/cloud/aws/cloudtrail/encryption_customer_key.rego new file mode 100644 index 00000000..16cbc173 --- /dev/null +++ b/checks/cloud/aws/cloudtrail/encryption_customer_key.rego @@ -0,0 +1,43 @@ +# METADATA +# title: CloudTrail should use Customer managed keys to encrypt the logs +# description: | +# Using AWS managed keys does not allow for fine grained control. Using Customer managed keys provides comprehensive control over cryptographic keys, enabling management of policies, permissions, and rotation, thus enhancing security and compliance measures for sensitive data and systems. 
+# scope: package +# schemas: +# - input: schema["cloud"] +# related_resources: +# - https://docs.aws.amazon.com/awscloudtrail/latest/userguide/encrypting-cloudtrail-log-files-with-aws-kms.html +# - https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#key-mgmt +# custom: +# id: AVD-AWS-0015 +# avd_id: AVD-AWS-0015 +# provider: aws +# service: cloudtrail +# severity: HIGH +# short_code: encryption-customer-managed-key +# recommended_action: Use Customer managed key +# input: +# selector: +# - type: cloud +# subtypes: +# - service: cloudtrail +# provider: aws +# terraform: +# links: +# - https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/cloudtrail#kms_key_id +# good_examples: checks/cloud/aws/cloudtrail/encryption_customer_key.tf.go +# bad_examples: checks/cloud/aws/cloudtrail/encryption_customer_key.tf.go +# cloudformation: +# links: +# - https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-cloudtrail-trail.html#cfn-cloudtrail-trail-kmskeyid +# good_examples: checks/cloud/aws/cloudtrail/encryption_customer_key.cf.go +# bad_examples: checks/cloud/aws/cloudtrail/encryption_customer_key.cf.go +package builtin.aws.cloudtrail.aws0015 + +import rego.v1 + +deny contains res if { + some trail in input.aws.cloudtrail.trails + trail.kmskeyid.value == "" + res := result.new("CloudTrail does not use a customer managed key to encrypt the logs.", trail.kmskeyid) +} diff --git a/checks/cloud/aws/cloudtrail/encryption_customer_key_test.go b/checks/cloud/aws/cloudtrail/encryption_customer_key_test.go deleted file mode 100644 index b0d3f61b..00000000 --- a/checks/cloud/aws/cloudtrail/encryption_customer_key_test.go +++ /dev/null @@ -1,63 +0,0 @@ -package cloudtrail - -import ( - "testing" - - "github.com/stretchr/testify/assert" - - "github.com/aquasecurity/trivy/pkg/iac/providers/aws/cloudtrail" - "github.com/aquasecurity/trivy/pkg/iac/scan" - "github.com/aquasecurity/trivy/pkg/iac/state" - trivyTypes "github.com/aquasecurity/trivy/pkg/iac/types" -) - -func TestEncryptionCustomerManagedKey(t *testing.T) { - tests := []struct { - name string - input cloudtrail.CloudTrail - expected bool - }{ - { - name: "AWS CloudTrail without CMK", - input: cloudtrail.CloudTrail{ - Trails: []cloudtrail.Trail{ - { - Metadata: trivyTypes.NewTestMetadata(), - KMSKeyID: trivyTypes.String("", trivyTypes.NewTestMetadata()), - }, - }, - }, - expected: true, - }, - { - name: "AWS CloudTrail with CMK", - input: cloudtrail.CloudTrail{ - Trails: []cloudtrail.Trail{ - { - Metadata: trivyTypes.NewTestMetadata(), - KMSKeyID: trivyTypes.String("some-kms-key", trivyTypes.NewTestMetadata()), - }, - }, - }, - expected: false, - }, - } - for _, test := range tests { - t.Run(test.name, func(t *testing.T) { - var testState state.State - testState.AWS.CloudTrail = test.input - results := EncryptionCustomerManagedKey.Evaluate(&testState) - var found bool - for _, result := range results { - if result.Status() == scan.StatusFailed && result.Rule().LongID() == EncryptionCustomerManagedKey.LongID() { - found = true - } - } - if test.expected { - assert.True(t, found, "Rule should have been found") - } else { - assert.False(t, found, "Rule should not have been found") - } - }) - } -} diff --git a/checks/cloud/aws/cloudtrail/encryption_customer_key_test.rego b/checks/cloud/aws/cloudtrail/encryption_customer_key_test.rego new file mode 100644 index 00000000..3005c9ba --- /dev/null +++ b/checks/cloud/aws/cloudtrail/encryption_customer_key_test.rego @@ -0,0 +1,16 @@ +package 
builtin.aws.cloudtrail.aws0015_test + +import rego.v1 + +import data.builtin.aws.cloudtrail.aws0015 as check +import data.lib.test + +test_allow_trail_with_cmk if { + inp := {"aws": {"cloudtrail": {"trails": [{"kmskeyid": {"value": "key-id"}}]}}} + test.assert_empty(check.deny) with input as inp +} + +test_disallow_trail_without_cmk if { + inp := {"aws": {"cloudtrail": {"trails": [{"kmskeyid": {"value": ""}}]}}} + test.assert_equal_message("CloudTrail does not use a customer managed key to encrypt the logs.", check.deny) with input as inp +} diff --git a/checks/cloud/aws/cloudtrail/ensure_cloudwatch_integration.go b/checks/cloud/aws/cloudtrail/ensure_cloudwatch_integration.go index 4f151796..969baa67 100755 --- a/checks/cloud/aws/cloudtrail/ensure_cloudwatch_integration.go +++ b/checks/cloud/aws/cloudtrail/ensure_cloudwatch_integration.go @@ -9,7 +9,7 @@ import ( "github.com/aquasecurity/trivy/pkg/iac/state" ) -var checkEnsureCloudwatchIntegration = rules.Register( +var CheckEnsureCloudwatchIntegration = rules.Register( scan.Rule{ AVDID: "AVD-AWS-0162", Provider: providers.AWSProvider, @@ -45,7 +45,8 @@ For a trail that is enabled in all Regions in an account, CloudTrail sends log f Links: cloudFormationEnsureCloudwatchIntegrationLinks, RemediationMarkdown: cloudFormationEnsureCloudwatchIntegrationRemediationMarkdown, }, - Severity: severity.Low, + Severity: severity.Low, + Deprecated: true, }, func(s *state.State) (results scan.Results) { for _, trail := range s.AWS.CloudTrail.Trails { diff --git a/checks/cloud/aws/cloudtrail/ensure_cloudwatch_integration.rego b/checks/cloud/aws/cloudtrail/ensure_cloudwatch_integration.rego new file mode 100644 index 00000000..ca4964df --- /dev/null +++ b/checks/cloud/aws/cloudtrail/ensure_cloudwatch_integration.rego @@ -0,0 +1,51 @@ +# METADATA +# title: CloudTrail logs should be stored in S3 and also sent to CloudWatch Logs +# description: | +# Realtime log analysis is not available without enabling CloudWatch logging. +# +# CloudTrail is a web service that records AWS API calls made in a given account. The recorded information includes the identity of the API caller, the time of the API call, the source IP address of the API caller, the request parameters, and the response elements returned by the AWS service. +# +# CloudTrail uses Amazon S3 for log file storage and delivery, so log files are stored durably. In addition to capturing CloudTrail logs in a specified Amazon S3 bucket for long-term analysis, you can perform real-time analysis by configuring CloudTrail to send logs to CloudWatch Logs. +# +# For a trail that is enabled in all Regions in an account, CloudTrail sends log files from all those Regions to a CloudWatch Logs log group. 
+# scope: package +# schemas: +# - input: schema["cloud"] +# related_resources: +# - https://docs.aws.amazon.com/awscloudtrail/latest/userguide/send-cloudtrail-events-to-cloudwatch-logs.html#send-cloudtrail-events-to-cloudwatch-logs-console +# custom: +# id: AVD-AWS-0162 +# avd_id: AVD-AWS-0162 +# provider: aws +# service: cloudtrail +# severity: LOW +# short_code: ensure-cloudwatch-integration +# recommended_action: Enable logging to CloudWatch +# frameworks: +# cis-aws-1.2: +# - "2.4" +# cis-aws-1.4: +# - "3.4" +# input: +# selector: +# - type: cloud +# subtypes: +# - service: cloudtrail +# provider: aws +# terraform: +# links: +# - https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/cloudtrail +# good_examples: checks/cloud/aws/cloudtrail/ensure_cloudwatch_integration.tf.go +# bad_examples: checks/cloud/aws/cloudtrail/ensure_cloudwatch_integration.tf.go +# cloudformation: +# good_examples: checks/cloud/aws/cloudtrail/ensure_cloudwatch_integration.cf.go +# bad_examples: checks/cloud/aws/cloudtrail/ensure_cloudwatch_integration.cf.go +package builtin.aws.cloudtrail.aws0162 + +import rego.v1 + +deny contains res if { + some trail in input.aws.cloudtrail.trails + trail.cloudwatchlogsloggrouparn.value == "" + res := result.new("Trail does not have CloudWatch logging configured", trail) +} diff --git a/checks/cloud/aws/cloudtrail/ensure_cloudwatch_integration_test.go b/checks/cloud/aws/cloudtrail/ensure_cloudwatch_integration_test.go deleted file mode 100644 index 3700afcb..00000000 --- a/checks/cloud/aws/cloudtrail/ensure_cloudwatch_integration_test.go +++ /dev/null @@ -1,64 +0,0 @@ -package cloudtrail - -import ( - "testing" - - trivyTypes "github.com/aquasecurity/trivy/pkg/iac/types" - - "github.com/aquasecurity/trivy/pkg/iac/scan" - "github.com/aquasecurity/trivy/pkg/iac/state" - - "github.com/aquasecurity/trivy/pkg/iac/providers/aws/cloudtrail" - "github.com/stretchr/testify/assert" -) - -func TestCheckEnsureCloudwatchIntegration(t *testing.T) { - tests := []struct { - name string - input cloudtrail.CloudTrail - expected bool - }{ - { - name: "Trail has cloudwatch configured", - input: cloudtrail.CloudTrail{ - Trails: []cloudtrail.Trail{ - { - Metadata: trivyTypes.NewTestMetadata(), - CloudWatchLogsLogGroupArn: trivyTypes.String("arn:aws:logs:us-east-1:123456789012:log-group:my-log-group", trivyTypes.NewTestMetadata()), - }, - }, - }, - expected: false, - }, - { - name: "Trail does not have cloudwatch configured", - input: cloudtrail.CloudTrail{ - Trails: []cloudtrail.Trail{ - { - Metadata: trivyTypes.NewTestMetadata(), - CloudWatchLogsLogGroupArn: trivyTypes.String("", trivyTypes.NewTestMetadata()), - }, - }, - }, - expected: true, - }, - } - for _, test := range tests { - t.Run(test.name, func(t *testing.T) { - var testState state.State - testState.AWS.CloudTrail = test.input - results := checkEnsureCloudwatchIntegration.Evaluate(&testState) - var found bool - for _, result := range results { - if result.Status() == scan.StatusFailed && result.Rule().LongID() == checkEnsureCloudwatchIntegration.LongID() { - found = true - } - } - if test.expected { - assert.True(t, found, "Rule should have been found") - } else { - assert.False(t, found, "Rule should not have been found") - } - }) - } -} diff --git a/checks/cloud/aws/cloudtrail/ensure_cloudwatch_integration_test.rego b/checks/cloud/aws/cloudtrail/ensure_cloudwatch_integration_test.rego new file mode 100644 index 00000000..c04d79ed --- /dev/null +++ 
b/checks/cloud/aws/cloudtrail/ensure_cloudwatch_integration_test.rego @@ -0,0 +1,16 @@ +package builtin.aws.cloudtrail.aws0162_test + +import rego.v1 + +import data.builtin.aws.cloudtrail.aws0162 as check +import data.lib.test + +test_allow_cloudwatch_integration if { + inp := {"aws": {"cloudtrail": {"trails": [{"cloudwatchlogsloggrouparn": {"value": "log-group-arn"}}]}}} + test.assert_empty(check.deny) with input as inp +} + +test_disallow_without_cloudwatch_integration if { + inp := {"aws": {"cloudtrail": {"trails": [{"cloudwatchlogsloggrouparn": {"value": ""}}]}}} + test.assert_equal_message("CloudWatch integration is not configured.", check.deny) with input as inp +} diff --git a/checks/cloud/aws/cloudtrail/no_public_log_access.go b/checks/cloud/aws/cloudtrail/no_public_log_access.go index 0b19e1f4..ac6cedd9 100755 --- a/checks/cloud/aws/cloudtrail/no_public_log_access.go +++ b/checks/cloud/aws/cloudtrail/no_public_log_access.go @@ -9,7 +9,7 @@ import ( "github.com/aquasecurity/trivy/pkg/iac/state" ) -var checkNoPublicLogAccess = rules.Register( +var CheckNoPublicLogAccess = rules.Register( scan.Rule{ AVDID: "AVD-AWS-0161", Provider: providers.AWSProvider, @@ -41,7 +41,8 @@ CloudTrail logs a record of every API call made in your account. These log files Links: cloudFormationNoPublicLogAccessLinks, RemediationMarkdown: cloudFormationNoPublicLogAccessRemediationMarkdown, }, - Severity: severity.Critical, + Severity: severity.Critical, + Deprecated: true, }, func(s *state.State) (results scan.Results) { for _, trail := range s.AWS.CloudTrail.Trails { diff --git a/checks/cloud/aws/cloudtrail/no_public_log_access.rego b/checks/cloud/aws/cloudtrail/no_public_log_access.rego new file mode 100644 index 00000000..57a4352d --- /dev/null +++ b/checks/cloud/aws/cloudtrail/no_public_log_access.rego @@ -0,0 +1,52 @@ +# METADATA +# title: The S3 Bucket backing Cloudtrail should be private +# description: | +# CloudTrail logs will be publicly exposed, potentially containing sensitive information. CloudTrail logs a record of every API call made in your account. These log files are stored in an S3 bucket. CIS recommends that the S3 bucket policy, or access control list (ACL), applied to the S3 bucket that CloudTrail logs to prevents public access to the CloudTrail logs. Allowing public access to CloudTrail log content might aid an adversary in identifying weaknesses in the affected account's use or configuration. 
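The deny rule for this check (below) imports `data.lib.s3` and calls `bucket_has_public_access`, a library helper that is not part of this diff. A rough, hypothetical sketch of such a helper, assuming detection is based only on canned ACLs (the real library may also consider public access block settings), might look like:

```rego
package lib.s3

import rego.v1

# Hypothetical sketch: treat a bucket as public when its canned ACL grants
# access to everyone; field names follow the lowercased schema used elsewhere.
bucket_has_public_access(bucket) if {
	bucket.acl.value in {"public-read", "public-read-write"}
}
```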
+# scope: package +# schemas: +# - input: schema["cloud"] +# related_resources: +# - https://docs.aws.amazon.com/AmazonS3/latest/userguide/configuring-block-public-access-bucket.html +# custom: +# id: AVD-AWS-0161 +# avd_id: AVD-AWS-0161 +# provider: aws +# service: cloudtrail +# severity: CRITICAL +# short_code: no-public-log-access +# recommended_action: Restrict public access to the S3 bucket +# frameworks: +# cis-aws-1.2: +# - "2.3" +# cis-aws-1.4: +# - "3.3" +# input: +# selector: +# - type: cloud +# subtypes: +# - service: cloudtrail +# provider: aws +# terraform: +# links: +# - https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/cloudtrail#is_multi_region_trail +# good_examples: checks/cloud/aws/cloudtrail/no_public_log_access.tf.go +# bad_examples: checks/cloud/aws/cloudtrail/no_public_log_access.tf.go +# cloudformation: +# good_examples: checks/cloud/aws/cloudtrail/no_public_log_access.cf.go +# bad_examples: checks/cloud/aws/cloudtrail/no_public_log_access.cf.go +package builtin.aws.cloudtrail.aws0161 + +import rego.v1 + +import data.lib.s3 + +deny contains res if { + some trail in input.aws.cloudtrail.trails + trail.bucketname.value != "" + + some bucket in input.aws.s3.buckets + bucket.name.value == trail.bucketname.value + + s3.bucket_has_public_access(bucket) + res := result.new("Trail S3 bucket is publicly exposed", bucket) +} diff --git a/checks/cloud/aws/cloudtrail/no_public_log_access_test.go b/checks/cloud/aws/cloudtrail/no_public_log_access_test.go deleted file mode 100644 index f5db0160..00000000 --- a/checks/cloud/aws/cloudtrail/no_public_log_access_test.go +++ /dev/null @@ -1,86 +0,0 @@ -package cloudtrail - -import ( - "testing" - - trivyTypes "github.com/aquasecurity/trivy/pkg/iac/types" - - "github.com/aquasecurity/trivy/pkg/iac/state" - - "github.com/aquasecurity/trivy/pkg/iac/providers/aws/cloudtrail" - "github.com/aquasecurity/trivy/pkg/iac/providers/aws/s3" - "github.com/aquasecurity/trivy/pkg/iac/scan" - - "github.com/stretchr/testify/assert" -) - -func TestCheckNoPublicLogAccess(t *testing.T) { - tests := []struct { - name string - inputCT cloudtrail.CloudTrail - inputS3 s3.S3 - expected bool - }{ - { - name: "Trail has bucket with no public access", - inputCT: cloudtrail.CloudTrail{ - Trails: []cloudtrail.Trail{ - { - Metadata: trivyTypes.NewTestMetadata(), - BucketName: trivyTypes.String("my-bucket", trivyTypes.NewTestMetadata()), - }, - }, - }, - inputS3: s3.S3{ - Buckets: []s3.Bucket{ - { - Metadata: trivyTypes.NewTestMetadata(), - Name: trivyTypes.String("my-bucket", trivyTypes.NewTestMetadata()), - ACL: trivyTypes.String("private", trivyTypes.NewTestMetadata()), - }, - }, - }, - expected: false, - }, - { - name: "Trail has bucket with public access", - inputCT: cloudtrail.CloudTrail{ - Trails: []cloudtrail.Trail{ - { - Metadata: trivyTypes.NewTestMetadata(), - BucketName: trivyTypes.String("my-bucket", trivyTypes.NewTestMetadata()), - }, - }, - }, - inputS3: s3.S3{ - Buckets: []s3.Bucket{ - { - Metadata: trivyTypes.NewTestMetadata(), - Name: trivyTypes.String("my-bucket", trivyTypes.NewTestMetadata()), - ACL: trivyTypes.String("public-read", trivyTypes.NewTestMetadata()), - }, - }, - }, - expected: true, - }, - } - for _, test := range tests { - t.Run(test.name, func(t *testing.T) { - var testState state.State - testState.AWS.CloudTrail = test.inputCT - testState.AWS.S3 = test.inputS3 - results := checkNoPublicLogAccess.Evaluate(&testState) - var found bool - for _, result := range results { - if result.Status() == 
scan.StatusFailed && result.Rule().LongID() == checkNoPublicLogAccess.LongID() { - found = true - } - } - if test.expected { - assert.True(t, found, "Rule should have been found") - } else { - assert.False(t, found, "Rule should not have been found") - } - }) - } -} diff --git a/checks/cloud/aws/cloudtrail/no_public_log_access_test.rego b/checks/cloud/aws/cloudtrail/no_public_log_access_test.rego new file mode 100644 index 00000000..4e76f6d3 --- /dev/null +++ b/checks/cloud/aws/cloudtrail/no_public_log_access_test.rego @@ -0,0 +1,24 @@ +package builtin.aws.cloudtrail.aws0161_test + +import rego.v1 + +import data.builtin.aws.cloudtrail.aws0161 as check +import data.lib.test + +test_allow_bucket_without_public_access if { + inp := {"aws": { + "cloudtrail": {"trails": [{"bucketname": {"value": "bucket_name"}}]}, + "s3": {"buckets": [{"name": {"value": "bucket_name"}, "acl": {"value": "private"}}]}, + }} + test.assert_empty(check.deny) with input as inp +} + +# TODO: count should be 2 +test_disallow_bucket_with_public_access if { + inp := {"aws": { + "cloudtrail": {"trails": [{"bucketname": {"value": "bucket_name"}}]}, + "s3": {"buckets": [{"name": {"value": "bucket_name"}, "acl": {"value": "public-read"}}]}, + }} + + test.assert_equal_message("Bucket has public access", check.deny) with input as inp +} diff --git a/checks/cloud/aws/cloudtrail/require_bucket_access_logging.go b/checks/cloud/aws/cloudtrail/require_bucket_access_logging.go index be4e6b04..e181f7b7 100755 --- a/checks/cloud/aws/cloudtrail/require_bucket_access_logging.go +++ b/checks/cloud/aws/cloudtrail/require_bucket_access_logging.go @@ -9,7 +9,7 @@ import ( "github.com/aquasecurity/trivy/pkg/iac/state" ) -var checkBucketAccessLoggingRequired = rules.Register( +var CheckBucketAccessLoggingRequired = rules.Register( scan.Rule{ AVDID: "AVD-AWS-0163", Provider: providers.AWSProvider, @@ -44,7 +44,8 @@ By enabling S3 bucket logging on target S3 buckets, you can capture all events t Links: cloudFormationBucketAccessLoggingRequiredLinks, RemediationMarkdown: cloudFormationBucketAccessLoggingRequiredRemediationMarkdown, }, - Severity: severity.Low, + Severity: severity.Low, + Deprecated: true, }, func(s *state.State) (results scan.Results) { for _, trail := range s.AWS.CloudTrail.Trails { diff --git a/checks/cloud/aws/cloudtrail/require_bucket_access_logging.rego b/checks/cloud/aws/cloudtrail/require_bucket_access_logging.rego new file mode 100644 index 00000000..512cbef2 --- /dev/null +++ b/checks/cloud/aws/cloudtrail/require_bucket_access_logging.rego @@ -0,0 +1,52 @@ +# METADATA +# title: You should enable bucket access logging on the CloudTrail S3 bucket. +# description: | +# Amazon S3 bucket access logging generates a log that contains access records for each request made to your S3 bucket. An access log record contains details about the request, such as the request type, the resources specified in the request worked, and the time and date the request was processed. +# CIS recommends that you enable bucket access logging on the CloudTrail S3 bucket. +# By enabling S3 bucket logging on target S3 buckets, you can capture all events that might affect objects in a target bucket. Configuring logs to be placed in a separate bucket enables access to log information, which can be useful in security and incident response workflows. 
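The new *_test.rego files in this change import `data.lib.test` for `assert_empty` and `assert_equal_message`, which are also defined outside this diff. A rough sketch of the semantics they appear to provide, assuming results expose their message in a `msg` field (the real helpers may read the message differently):

```rego
package lib.test

import rego.v1

# Succeeds only when the check produced no results at all.
assert_empty(results) if {
	count(results) == 0
}

# Succeeds when at least one produced result carries the expected message.
# The msg field name is an assumption for this sketch.
assert_equal_message(expected, results) if {
	some res in results
	res.msg == expected
}
```

Tests in this style are normally run with OPA's built-in test runner, for example `opa test checks/ lib/ -v`, although the exact directory layout is assumed here.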
+# scope: package +# schemas: +# - input: schema["cloud"] +# related_resources: +# - https://docs.aws.amazon.com/AmazonS3/latest/userguide/ServerLogs.html +# custom: +# id: AVD-AWS-0163 +# avd_id: AVD-AWS-0163 +# provider: aws +# service: cloudtrail +# severity: LOW +# short_code: require-bucket-access-logging +# recommended_action: Enable access logging on the bucket +# frameworks: +# cis-aws-1.2: +# - "2.6" +# cis-aws-1.4: +# - "3.6" +# input: +# selector: +# - type: cloud +# subtypes: +# - service: cloudtrail +# provider: aws +# terraform: +# links: +# - https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/cloudtrail#is_multi_region_trail +# good_examples: checks/cloud/aws/cloudtrail/require_bucket_access_logging.tf.go +# bad_examples: checks/cloud/aws/cloudtrail/require_bucket_access_logging.tf.go +# cloudformation: +# good_examples: checks/cloud/aws/cloudtrail/require_bucket_access_logging.cf.go +# bad_examples: checks/cloud/aws/cloudtrail/require_bucket_access_logging.cf.go +package builtin.aws.cloudtrail.aws0163 + +import rego.v1 + +deny contains res if { + some trail in input.aws.cloudtrail.trails + trail.bucketname.value != "" + + some bucket in input.aws.s3.buckets + bucket.name.value == trail.bucketname.value + not bucket.logging.enabled.value + + res := result.new("Trail S3 bucket does not have logging enabled", bucket) +} diff --git a/checks/cloud/aws/cloudtrail/require_bucket_access_logging_test.go b/checks/cloud/aws/cloudtrail/require_bucket_access_logging_test.go deleted file mode 100644 index 60b89080..00000000 --- a/checks/cloud/aws/cloudtrail/require_bucket_access_logging_test.go +++ /dev/null @@ -1,92 +0,0 @@ -package cloudtrail - -import ( - "testing" - - trivyTypes "github.com/aquasecurity/trivy/pkg/iac/types" - - "github.com/aquasecurity/trivy/pkg/iac/state" - - "github.com/aquasecurity/trivy/pkg/iac/providers/aws/cloudtrail" - "github.com/aquasecurity/trivy/pkg/iac/providers/aws/s3" - "github.com/aquasecurity/trivy/pkg/iac/scan" - - "github.com/stretchr/testify/assert" -) - -func TestCheckBucketAccessLoggingRequired(t *testing.T) { - tests := []struct { - name string - inputCT cloudtrail.CloudTrail - inputS3 s3.S3 - expected bool - }{ - { - name: "Trail has bucket with logging enabled", - inputCT: cloudtrail.CloudTrail{ - Trails: []cloudtrail.Trail{ - { - Metadata: trivyTypes.NewTestMetadata(), - BucketName: trivyTypes.String("my-bucket", trivyTypes.NewTestMetadata()), - }, - }, - }, - inputS3: s3.S3{ - Buckets: []s3.Bucket{ - { - Metadata: trivyTypes.NewTestMetadata(), - Name: trivyTypes.String("my-bucket", trivyTypes.NewTestMetadata()), - Logging: s3.Logging{ - Metadata: trivyTypes.NewTestMetadata(), - Enabled: trivyTypes.Bool(true, trivyTypes.NewTestMetadata()), - }, - }, - }, - }, - expected: false, - }, - { - name: "Trail has bucket without logging enabled", - inputCT: cloudtrail.CloudTrail{ - Trails: []cloudtrail.Trail{ - { - Metadata: trivyTypes.NewTestMetadata(), - BucketName: trivyTypes.String("my-bucket", trivyTypes.NewTestMetadata()), - }, - }, - }, - inputS3: s3.S3{ - Buckets: []s3.Bucket{ - { - Metadata: trivyTypes.NewTestMetadata(), - Name: trivyTypes.String("my-bucket", trivyTypes.NewTestMetadata()), - Logging: s3.Logging{ - Metadata: trivyTypes.NewTestMetadata(), - Enabled: trivyTypes.Bool(false, trivyTypes.NewTestMetadata()), - }, - }, - }, - }, - expected: true, - }, - } - for _, test := range tests { - t.Run(test.name, func(t *testing.T) { - var testState state.State - testState.AWS.CloudTrail = test.inputCT - 
testState.AWS.S3 = test.inputS3 - results := checkBucketAccessLoggingRequired.Evaluate(&testState) - var found bool - for _, result := range results { - if result.Status() == scan.StatusFailed && result.Rule().LongID() == checkBucketAccessLoggingRequired.LongID() { - found = true - } - } - if test.expected { - assert.True(t, found, "Rule should have been found") - } else { - assert.False(t, found, "Rule should not have been found") - } - }) - } -} diff --git a/checks/cloud/aws/cloudtrail/require_bucket_access_logging_test.rego b/checks/cloud/aws/cloudtrail/require_bucket_access_logging_test.rego new file mode 100644 index 00000000..5b09f1ef --- /dev/null +++ b/checks/cloud/aws/cloudtrail/require_bucket_access_logging_test.rego @@ -0,0 +1,33 @@ +package builtin.aws.cloudtrail.aws0163_test + +import rego.v1 + +import data.builtin.aws.cloudtrail.aws0163 as check +import data.lib.test + +test_allow_bucket_with_logging_enabled if { + inp := {"aws": { + "cloudtrail": {"trails": [{"bucketname": {"value": "bucket1"}}]}, + "s3": {"buckets": [{ + "name": {"value": "bucket1"}, + "logging": {"enabled": {"value": true}}, + }]}, + }} + + test.assert_empty(check.deny) with input as inp +} + +test_disallow_bucket_with_logging_disabled if { + inp := {"aws": { + "cloudtrail": {"trails": [{"bucketname": {"value": "bucket1"}}]}, + "s3": {"buckets": [{ + "name": {"value": "bucket1"}, + "logging": {"enabled": {"value": false}}, + }]}, + }} + + test.assert_equal_message( + "Trail S3 bucket does not have logging enabled", + check.deny, + ) with input as inp +} diff --git a/checks/cloud/aws/codebuild/enable_encryption.go b/checks/cloud/aws/codebuild/enable_encryption.go index bb0fca17..921ce2d4 100755 --- a/checks/cloud/aws/codebuild/enable_encryption.go +++ b/checks/cloud/aws/codebuild/enable_encryption.go @@ -34,7 +34,8 @@ var CheckEnableEncryption = rules.Register( Links: cloudFormationEnableEncryptionLinks, RemediationMarkdown: cloudFormationEnableEncryptionRemediationMarkdown, }, - Severity: severity.High, + Severity: severity.High, + Deprecated: true, }, func(s *state.State) (results scan.Results) { for _, project := range s.AWS.CodeBuild.Projects { diff --git a/checks/cloud/aws/codebuild/enable_encryption.rego b/checks/cloud/aws/codebuild/enable_encryption.rego new file mode 100644 index 00000000..fd37bab4 --- /dev/null +++ b/checks/cloud/aws/codebuild/enable_encryption.rego @@ -0,0 +1,49 @@ +# METADATA +# title: CodeBuild Project artifacts encryption should not be disabled +# description: | +# All artifacts produced by your CodeBuild project pipeline should always be encrypted +# scope: package +# schemas: +# - input: schema["cloud"] +# related_resources: +# - https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-codebuild-project-artifacts.html +# - https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-codebuild-project.html +# custom: +# id: AVD-AWS-0018 +# avd_id: AVD-AWS-0018 +# provider: aws +# service: codebuild +# severity: HIGH +# short_code: enable-encryption +# recommended_action: Enable encryption for CodeBuild project artifacts +# input: +# selector: +# - type: cloud +# subtypes: +# - service: codebuild +# provider: aws +# terraform: +# links: +# - https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/codebuild_project#encryption_disabled +# good_examples: checks/cloud/aws/codebuild/enable_encryption.tf.go +# bad_examples: checks/cloud/aws/codebuild/enable_encryption.tf.go +# cloudformation: +# good_examples: 
checks/cloud/aws/codebuild/enable_encryption.cf.go +# bad_examples: checks/cloud/aws/codebuild/enable_encryption.cf.go +package builtin.aws.codebuild.aws0018 + +import rego.v1 + +deny contains res if { + some project in input.aws.codebuild.projects + encryptionenabled := project.artifactsettings.encryptionenabled + not encryptionenabled.value + res := result.new("Encryption is not enabled for project artifacts.", encryptionenabled) +} + +deny contains res if { + some project in input.aws.codebuild.projects + some setting in project.secondaryartifactsettings + not setting.encryptionenabled.value + res := result.new("Encryption is not enabled for secondary project artifacts.", setting.encryptionenabled) +} diff --git a/checks/cloud/aws/codebuild/enable_encryption_test.go b/checks/cloud/aws/codebuild/enable_encryption_test.go deleted file mode 100644 index 15493589..00000000 --- a/checks/cloud/aws/codebuild/enable_encryption_test.go +++ /dev/null @@ -1,98 +0,0 @@ -package codebuild - -import ( - "testing" - - trivyTypes "github.com/aquasecurity/trivy/pkg/iac/types" - - "github.com/aquasecurity/trivy/pkg/iac/state" - - "github.com/aquasecurity/trivy/pkg/iac/providers/aws/codebuild" - "github.com/aquasecurity/trivy/pkg/iac/scan" - - "github.com/stretchr/testify/assert" -) - -func TestCheckEnableEncryption(t *testing.T) { - tests := []struct { - name string - input codebuild.CodeBuild - expected bool - }{ - { - name: "AWS Codebuild project with unencrypted artifact", - input: codebuild.CodeBuild{ - Projects: []codebuild.Project{ - { - Metadata: trivyTypes.NewTestMetadata(), - ArtifactSettings: codebuild.ArtifactSettings{ - Metadata: trivyTypes.NewTestMetadata(), - EncryptionEnabled: trivyTypes.Bool(false, trivyTypes.NewTestMetadata()), - }, - }, - }, - }, - expected: true, - }, - { - name: "AWS Codebuild project with unencrypted secondary artifact", - input: codebuild.CodeBuild{ - Projects: []codebuild.Project{ - { - Metadata: trivyTypes.NewTestMetadata(), - ArtifactSettings: codebuild.ArtifactSettings{ - Metadata: trivyTypes.NewTestMetadata(), - EncryptionEnabled: trivyTypes.Bool(true, trivyTypes.NewTestMetadata()), - }, - SecondaryArtifactSettings: []codebuild.ArtifactSettings{ - { - Metadata: trivyTypes.NewTestMetadata(), - EncryptionEnabled: trivyTypes.Bool(false, trivyTypes.NewTestMetadata()), - }, - }, - }, - }, - }, - expected: true, - }, - { - name: "AWS Codebuild with encrypted artifacts", - input: codebuild.CodeBuild{ - Projects: []codebuild.Project{ - { - Metadata: trivyTypes.NewTestMetadata(), - ArtifactSettings: codebuild.ArtifactSettings{ - Metadata: trivyTypes.NewTestMetadata(), - EncryptionEnabled: trivyTypes.Bool(true, trivyTypes.NewTestMetadata()), - }, - SecondaryArtifactSettings: []codebuild.ArtifactSettings{ - { - Metadata: trivyTypes.NewTestMetadata(), - EncryptionEnabled: trivyTypes.Bool(true, trivyTypes.NewTestMetadata()), - }, - }, - }, - }, - }, - expected: false, - }, - } - for _, test := range tests { - t.Run(test.name, func(t *testing.T) { - var testState state.State - testState.AWS.CodeBuild = test.input - results := CheckEnableEncryption.Evaluate(&testState) - var found bool - for _, result := range results { - if result.Status() == scan.StatusFailed && result.Rule().LongID() == CheckEnableEncryption.LongID() { - found = true - } - } - if test.expected { - assert.True(t, found, "Rule should have been found") - } else { - assert.False(t, found, "Rule should not have been found") - } - }) - } -} diff --git a/checks/cloud/aws/codebuild/enable_encryption_test.rego 
b/checks/cloud/aws/codebuild/enable_encryption_test.rego new file mode 100644 index 00000000..a5ea58b1 --- /dev/null +++ b/checks/cloud/aws/codebuild/enable_encryption_test.rego @@ -0,0 +1,24 @@ +package builtin.aws.codebuild.aws0018_test + +import rego.v1 + +import data.builtin.aws.codebuild.aws0018 as check +import data.lib.test + +test_allow_artifact_settings_with_encryption if { + test.assert_empty(check.deny) with input as build_input({"artifactsettings": {"encryptionenabled": {"value": true}}}) +} + +test_allow_secondary_artifact_settings_with_encryption if { + test.assert_empty(check.deny) with input as build_input({"secondaryartifactsettings": [{"encryptionenabled": {"value": true}}]}) +} + +test_disallow_artifact_settings_without_encryption if { + test.assert_equal_message("Encryption is not enabled for project artifacts.", check.deny) with input as build_input({"artifactsettings": {"encryptionenabled": {"value": false}}}) +} + +test_disallow_secondary_artifact_settings_without_encryption if { + test.assert_equal_message("Encryption is not enabled for secondary project artifacts.", check.deny) with input as build_input({"secondaryartifactsettings": [{"encryptionenabled": {"value": false}}]}) +} + +build_input(project) := {"aws": {"codebuild": {"projects": [project]}}} diff --git a/checks/cloud/aws/config/aggregate_all_regions.go b/checks/cloud/aws/config/aggregate_all_regions.go index c534b942..1a9c987e 100755 --- a/checks/cloud/aws/config/aggregate_all_regions.go +++ b/checks/cloud/aws/config/aggregate_all_regions.go @@ -35,7 +35,8 @@ This will help limit the risk of any unmonitored configuration in regions that a Links: cloudFormationAggregateAllRegionsLinks, RemediationMarkdown: cloudFormationAggregateAllRegionsRemediationMarkdown, }, - Severity: severity.High, + Severity: severity.High, + Deprecated: true, }, func(s *state.State) (results scan.Results) { if s.AWS.Config.ConfigurationAggregrator.Metadata.IsUnmanaged() { diff --git a/checks/cloud/aws/config/aggregate_all_regions.rego b/checks/cloud/aws/config/aggregate_all_regions.rego new file mode 100644 index 00000000..5c5ec7c9 --- /dev/null +++ b/checks/cloud/aws/config/aggregate_all_regions.rego @@ -0,0 +1,42 @@ +# METADATA +# title: Config configuration aggregator should be using all regions for source +# description: | +# Sources that aren't covered by the aggregator are not included in the configuration. The configuration aggregator should be configured with all_regions for the source. +# This will help limit the risk of any unmonitored configuration in regions that are thought to be unused.
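The deny rule further down only evaluates a configuration aggregator whose metadata marks it as managed, so a state in which the aggregator is unmanaged yields no findings. An illustrative extra test for that case, written in the same style as the tests in this change (the package name is chosen purely for the example):

```rego
package builtin.aws.config.aws0019_extra_test

import rego.v1

import data.builtin.aws.config.aws0019 as check
import data.lib.test

# An unmanaged aggregator is skipped, because the rule requires
# __defsec_metadata.managed to be true before looking at sourceallregions.
test_allow_unmanaged_aggregator if {
	test.assert_empty(check.deny) with input as {"aws": {"config": {"configurationaggregrator": {
		"__defsec_metadata": {"managed": false},
		"sourceallregions": {"value": false},
	}}}}
}
```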
+# scope: package +# schemas: +# - input: schema["cloud"] +# related_resources: +# - https://docs.aws.amazon.com/config/latest/developerguide/aggregate-data.html +# custom: +# id: AVD-AWS-0019 +# avd_id: AVD-AWS-0019 +# provider: aws +# service: config +# severity: HIGH +# short_code: aggregate-all-regions +# recommended_action: Set the aggregator to cover all regions +# input: +# selector: +# - type: cloud +# subtypes: +# - service: config +# provider: aws +# terraform: +# links: +# - https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/config_configuration_aggregator#all_regions +# good_examples: checks/cloud/aws/config/aggregate_all_regions.tf.go +# bad_examples: checks/cloud/aws/config/aggregate_all_regions.tf.go +# cloudformation: +# good_examples: checks/cloud/aws/config/aggregate_all_regions.cf.go +# bad_examples: checks/cloud/aws/config/aggregate_all_regions.cf.go +package builtin.aws.config.aws0019 + +import rego.v1 + +deny contains res if { + cfg_aggregator := input.aws.config.configurationaggregrator + cfg_aggregator.__defsec_metadata.managed + not cfg_aggregator.sourceallregions.value + res := result.new("Configuration aggregation is not set to source from all regions.", cfg_aggregator.sourceallregions) +} diff --git a/checks/cloud/aws/config/aggregate_all_regions_test.go b/checks/cloud/aws/config/aggregate_all_regions_test.go deleted file mode 100644 index af2b6d0e..00000000 --- a/checks/cloud/aws/config/aggregate_all_regions_test.go +++ /dev/null @@ -1,61 +0,0 @@ -package config - -import ( - "testing" - - trivyTypes "github.com/aquasecurity/trivy/pkg/iac/types" - - "github.com/aquasecurity/trivy/pkg/iac/state" - - "github.com/aquasecurity/trivy/pkg/iac/providers/aws/config" - "github.com/aquasecurity/trivy/pkg/iac/scan" - - "github.com/stretchr/testify/assert" -) - -func TestCheckAggregateAllRegions(t *testing.T) { - tests := []struct { - name string - input config.Config - expected bool - }{ - { - name: "AWS Config aggregator source with all regions set to false", - input: config.Config{ - ConfigurationAggregrator: config.ConfigurationAggregrator{ - Metadata: trivyTypes.NewTestMetadata(), - SourceAllRegions: trivyTypes.Bool(false, trivyTypes.NewTestMetadata()), - }, - }, - expected: true, - }, - { - name: "AWS Config aggregator source with all regions set to true", - input: config.Config{ - ConfigurationAggregrator: config.ConfigurationAggregrator{ - Metadata: trivyTypes.NewTestMetadata(), - SourceAllRegions: trivyTypes.Bool(true, trivyTypes.NewTestMetadata()), - }, - }, - expected: false, - }, - } - for _, test := range tests { - t.Run(test.name, func(t *testing.T) { - var testState state.State - testState.AWS.Config = test.input - results := CheckAggregateAllRegions.Evaluate(&testState) - var found bool - for _, result := range results { - if result.Status() == scan.StatusFailed && result.Rule().LongID() == CheckAggregateAllRegions.LongID() { - found = true - } - } - if test.expected { - assert.True(t, found, "Rule should have been found") - } else { - assert.False(t, found, "Rule should not have been found") - } - }) - } -} diff --git a/checks/cloud/aws/config/aggregate_all_regions_test.rego b/checks/cloud/aws/config/aggregate_all_regions_test.rego new file mode 100644 index 00000000..854ec618 --- /dev/null +++ b/checks/cloud/aws/config/aggregate_all_regions_test.rego @@ -0,0 +1,20 @@ +package builtin.aws.config.aws0019_test + +import rego.v1 + +import data.builtin.aws.config.aws0019 as check +import data.lib.test + +test_allow_all_regions if { + 
test.assert_empty(check.deny) with input as {"aws": {"config": {"configurationaggregrator": { + "__defsec_metadata": {"managed": true}, + "sourceallregions": {"value": true}, + }}}} +} + +test_disallow_all_regions if { + test.assert_equal_message("Configuration aggregation is not set to source from all regions.", check.deny) with input as {"aws": {"config": {"configurationaggregrator": { + "__defsec_metadata": {"managed": true}, + "sourceallregions": {"value": false}, + }}}} +} diff --git a/checks/cloud/aws/documentdb/enable_log_export.go b/checks/cloud/aws/documentdb/enable_log_export.go index 47d41f38..889d33b9 100755 --- a/checks/cloud/aws/documentdb/enable_log_export.go +++ b/checks/cloud/aws/documentdb/enable_log_export.go @@ -34,7 +34,8 @@ var CheckEnableLogExport = rules.Register( Links: cloudFormationEnableLogExportLinks, RemediationMarkdown: cloudFormationEnableLogExportRemediationMarkdown, }, - Severity: severity.Medium, + Severity: severity.Medium, + Deprecated: true, }, func(s *state.State) (results scan.Results) { for _, cluster := range s.AWS.DocumentDB.Clusters { diff --git a/checks/cloud/aws/documentdb/enable_log_export.rego b/checks/cloud/aws/documentdb/enable_log_export.rego new file mode 100644 index 00000000..e0cbd385 --- /dev/null +++ b/checks/cloud/aws/documentdb/enable_log_export.rego @@ -0,0 +1,49 @@ +# METADATA +# title: DocumentDB logs export should be enabled +# description: | +# Document DB does not have auditing by default. To ensure that you are able to accurately audit the usage of your DocumentDB cluster you should enable export logs. +# scope: package +# schemas: +# - input: schema["cloud"] +# related_resources: +# - https://docs.aws.amazon.com/documentdb/latest/developerguide/event-auditing.html +# custom: +# id: AVD-AWS-0020 +# avd_id: AVD-AWS-0020 +# provider: aws +# service: documentdb +# severity: MEDIUM +# short_code: enable-log-export +# recommended_action: Enable export logs +# input: +# selector: +# - type: cloud +# subtypes: +# - service: documentdb +# provider: aws +# terraform: +# links: +# - https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/docdb_cluster#enabled_cloudwatch_logs_exports +# good_examples: checks/cloud/aws/documentdb/enable_log_export.tf.go +# bad_examples: checks/cloud/aws/documentdb/enable_log_export.tf.go +# cloudformation: +# good_examples: checks/cloud/aws/documentdb/enable_log_export.cf.go +# bad_examples: checks/cloud/aws/documentdb/enable_log_export.cf.go +package builtin.aws.documentdb.aws0020 + +import rego.v1 + +log_export_audit := "audit" + +log_export_profiler := "profiler" + +deny contains res if { + some cluster in input.aws.documentdb.clusters + not export_audit_or_profiler(cluster) + res := result.new("Neither CloudWatch audit nor profiler log exports are enabled.", cluster) +} + +export_audit_or_profiler(cluster) if { + some log in cluster.enabledlogexports + log.value in [log_export_audit, log_export_profiler] +} diff --git a/checks/cloud/aws/documentdb/enable_log_export_test.go b/checks/cloud/aws/documentdb/enable_log_export_test.go deleted file mode 100644 index 9fd21b5a..00000000 --- a/checks/cloud/aws/documentdb/enable_log_export_test.go +++ /dev/null @@ -1,83 +0,0 @@ -package documentdb - -import ( - "testing" - - trivyTypes "github.com/aquasecurity/trivy/pkg/iac/types" - - "github.com/aquasecurity/trivy/pkg/iac/state" - - "github.com/aquasecurity/trivy/pkg/iac/providers/aws/documentdb" - "github.com/aquasecurity/trivy/pkg/iac/scan" - - "github.com/stretchr/testify/assert" -) - 
-func TestCheckEnableLogExport(t *testing.T) { - tests := []struct { - name string - input documentdb.DocumentDB - expected bool - }{ - { - name: "DocDB Cluster not exporting logs", - input: documentdb.DocumentDB{ - Clusters: []documentdb.Cluster{ - { - Metadata: trivyTypes.NewTestMetadata(), - EnabledLogExports: []trivyTypes.StringValue{ - trivyTypes.String("", trivyTypes.NewTestMetadata()), - }, - }, - }, - }, - expected: true, - }, - { - name: "DocDB Cluster exporting audit logs", - input: documentdb.DocumentDB{ - Clusters: []documentdb.Cluster{ - { - Metadata: trivyTypes.NewTestMetadata(), - EnabledLogExports: []trivyTypes.StringValue{ - trivyTypes.String(documentdb.LogExportAudit, trivyTypes.NewTestMetadata()), - }, - }, - }, - }, - expected: false, - }, - { - name: "DocDB Cluster exporting profiler logs", - input: documentdb.DocumentDB{ - Clusters: []documentdb.Cluster{ - { - Metadata: trivyTypes.NewTestMetadata(), - EnabledLogExports: []trivyTypes.StringValue{ - trivyTypes.String(documentdb.LogExportProfiler, trivyTypes.NewTestMetadata()), - }, - }, - }, - }, - expected: false, - }, - } - for _, test := range tests { - t.Run(test.name, func(t *testing.T) { - var testState state.State - testState.AWS.DocumentDB = test.input - results := CheckEnableLogExport.Evaluate(&testState) - var found bool - for _, result := range results { - if result.Status() == scan.StatusFailed && result.Rule().LongID() == CheckEnableLogExport.LongID() { - found = true - } - } - if test.expected { - assert.True(t, found, "Rule should have been found") - } else { - assert.False(t, found, "Rule should not have been found") - } - }) - } -} diff --git a/checks/cloud/aws/documentdb/enable_log_export_test.rego b/checks/cloud/aws/documentdb/enable_log_export_test.rego new file mode 100644 index 00000000..95cb64a1 --- /dev/null +++ b/checks/cloud/aws/documentdb/enable_log_export_test.rego @@ -0,0 +1,26 @@ +package builtin.aws.documentdb.aws0020_test + +import rego.v1 + +import data.builtin.aws.documentdb.aws0020 as check +import data.lib.test + +test_disallow_no_export_log if { + inp := {"aws": {"documentdb": {"clusters": [{"enabledlogexports": []}]}}} + test.assert_equal_message("Neither CloudWatch audit nor profiler log exports are enabled.", check.deny) with input as inp +} + +test_allow_export_audit if { + inp := {"aws": {"documentdb": {"clusters": [{"enabledlogexports": [{"value": "audit"}]}]}}} + test.assert_empty(check.deny) with input as inp +} + +test_allow_export_profiler if { + inp := {"aws": {"documentdb": {"clusters": [{"enabledlogexports": [{"value": "profiler"}]}]}}} + test.assert_empty(check.deny) with input as inp +} + +test_allow_export_mixed if { + inp := {"aws": {"documentdb": {"clusters": [{"enabledlogexports": [{"value": "audit"}, {"value": "profiler"}]}]}}} + test.assert_empty(check.deny) with input as inp +} diff --git a/checks/cloud/aws/documentdb/enable_storage_encryption.go b/checks/cloud/aws/documentdb/enable_storage_encryption.go index ba8eb653..c34c98d5 100755 --- a/checks/cloud/aws/documentdb/enable_storage_encryption.go +++ b/checks/cloud/aws/documentdb/enable_storage_encryption.go @@ -31,7 +31,8 @@ var CheckEnableStorageEncryption = rules.Register( Links: cloudFormationEnableStorageEncryptionLinks, RemediationMarkdown: cloudFormationEnableStorageEncryptionRemediationMarkdown, }, - Severity: severity.High, + Severity: severity.High, + Deprecated: true, }, func(s *state.State) (results scan.Results) { for _, cluster := range s.AWS.DocumentDB.Clusters { diff --git 
a/checks/cloud/aws/documentdb/enable_storage_encryption.rego b/checks/cloud/aws/documentdb/enable_storage_encryption.rego new file mode 100644 index 00000000..a7810613 --- /dev/null +++ b/checks/cloud/aws/documentdb/enable_storage_encryption.rego @@ -0,0 +1,40 @@ +# METADATA +# title: DocumentDB storage must be encrypted +# description: | +# Unencrypted sensitive data is vulnerable to compromise. Encryption of the underlying storage used by DocumentDB ensures that if there is a compromise of the disks, the data is still protected. +# scope: package +# schemas: +# - input: schema["cloud"] +# related_resources: +# - https://docs.aws.amazon.com/documentdb/latest/developerguide/encryption-at-rest.html +# custom: +# id: AVD-AWS-0021 +# avd_id: AVD-AWS-0021 +# provider: aws +# service: documentdb +# severity: HIGH +# short_code: enable-storage-encryption +# recommended_action: Enable storage encryption +# input: +# selector: +# - type: cloud +# subtypes: +# - service: documentdb +# provider: aws +# terraform: +# links: +# - https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/docdb_cluster#storage_encrypted +# good_examples: checks/cloud/aws/documentdb/enable_storage_encryption.tf.go +# bad_examples: checks/cloud/aws/documentdb/enable_storage_encryption.tf.go +# cloudformation: +# good_examples: checks/cloud/aws/documentdb/enable_storage_encryption.cf.go +# bad_examples: checks/cloud/aws/documentdb/enable_storage_encryption.cf.go +package builtin.aws.documentdb.aws0021 + +import rego.v1 + +deny contains res if { + some cluster in input.aws.documentdb.clusters + not cluster.storageencrypted.value + res := result.new("Cluster storage does not have encryption enabled.", cluster.storageencrypted) +} diff --git a/checks/cloud/aws/documentdb/enable_storage_encryption_test.go b/checks/cloud/aws/documentdb/enable_storage_encryption_test.go deleted file mode 100644 index 7b289cd7..00000000 --- a/checks/cloud/aws/documentdb/enable_storage_encryption_test.go +++ /dev/null @@ -1,65 +0,0 @@ -package documentdb - -import ( - "testing" - - trivyTypes "github.com/aquasecurity/trivy/pkg/iac/types" - - "github.com/aquasecurity/trivy/pkg/iac/state" - - "github.com/aquasecurity/trivy/pkg/iac/providers/aws/documentdb" - "github.com/aquasecurity/trivy/pkg/iac/scan" - - "github.com/stretchr/testify/assert" -) - -func TestCheckEnableStorageEncryption(t *testing.T) { - tests := []struct { - name string - input documentdb.DocumentDB - expected bool - }{ - { - name: "DocDB unencrypted storage", - input: documentdb.DocumentDB{ - Clusters: []documentdb.Cluster{ - { - Metadata: trivyTypes.NewTestMetadata(), - StorageEncrypted: trivyTypes.Bool(false, trivyTypes.NewTestMetadata()), - }, - }, - }, - expected: true, - }, - { - name: "DocDB encrypted storage", - input: documentdb.DocumentDB{ - Clusters: []documentdb.Cluster{ - { - Metadata: trivyTypes.NewTestMetadata(), - StorageEncrypted: trivyTypes.Bool(true, trivyTypes.NewTestMetadata()), - }, - }, - }, - expected: false, - }, - } - for _, test := range tests { - t.Run(test.name, func(t *testing.T) { - var testState state.State - testState.AWS.DocumentDB = test.input - results := CheckEnableStorageEncryption.Evaluate(&testState) - var found bool - for _, result := range results { - if result.Status() == scan.StatusFailed && result.Rule().LongID() == CheckEnableStorageEncryption.LongID() { - found = true - } - } - if test.expected { - assert.True(t, found, "Rule should have been found") - } else { - assert.False(t, found, "Rule should not have been found")
- } - }) - } -} diff --git a/checks/cloud/aws/documentdb/enable_storage_encryption_test.rego b/checks/cloud/aws/documentdb/enable_storage_encryption_test.rego new file mode 100644 index 00000000..399a2796 --- /dev/null +++ b/checks/cloud/aws/documentdb/enable_storage_encryption_test.rego @@ -0,0 +1,17 @@ +package builtin.aws.documentdb.aws0021_test + +import rego.v1 + +import data.builtin.aws.documentdb.aws0021 as check +import data.lib.test + +test_allow_with_encryption if { + inp := {"aws": {"documentdb": {"clusters": [{"storageencrypted": {"value": true}}]}}} + test.assert_empty(check.deny) with input as inp +} + +test_disallow_without_encryption if { + inp := {"aws": {"documentdb": {"clusters": [{"storageencrypted": {"value": false}}]}}} + + test.assert_equal_message("Cluster storage does not have encryption enabled.", check.deny) with input as inp +} diff --git a/checks/cloud/aws/documentdb/encryption_customer_key.go b/checks/cloud/aws/documentdb/encryption_customer_key.go index 4ba0ebd5..c23f6376 100755 --- a/checks/cloud/aws/documentdb/encryption_customer_key.go +++ b/checks/cloud/aws/documentdb/encryption_customer_key.go @@ -31,7 +31,8 @@ var CheckEncryptionCustomerKey = rules.Register( Links: cloudFormationEncryptionCustomerKeyLinks, RemediationMarkdown: cloudFormationEncryptionCustomerKeyRemediationMarkdown, }, - Severity: severity.Low, + Severity: severity.Low, + Deprecated: true, }, func(s *state.State) (results scan.Results) { for _, cluster := range s.AWS.DocumentDB.Clusters { diff --git a/checks/cloud/aws/documentdb/encryption_customer_key.rego b/checks/cloud/aws/documentdb/encryption_customer_key.rego new file mode 100644 index 00000000..4f95d554 --- /dev/null +++ b/checks/cloud/aws/documentdb/encryption_customer_key.rego @@ -0,0 +1,49 @@ +# METADATA +# title: DocumentDB encryption should use Customer Managed Keys +# description: | +# Using AWS managed keys does not allow for fine grained control. Encryption using AWS keys provides protection for your DocumentDB underlying storage. To increase control of the encryption and manage factors like rotation, use customer managed keys.
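The two deny rules further down flag an empty `kmskeyid` at both the cluster and the instance level. An illustrative extra test for the mixed case, where the cluster uses a customer managed key but one of its instances does not, written in the same style as the tests later in this diff:

```rego
package builtin.aws.documentdb.aws0022_extra_test

import rego.v1

import data.builtin.aws.documentdb.aws0022 as check
import data.lib.test

# Only the instance-level rule should fire here: the cluster key is set,
# but the instance falls back to an empty (default) key.
test_disallow_instance_key_missing_on_keyed_cluster if {
	inp := {"aws": {"documentdb": {"clusters": [{
		"kmskeyid": {"value": "kms-key"},
		"instances": [{"kmskeyid": {"value": ""}}],
	}]}}}

	test.assert_equal_message("Instance encryption does not use a customer-managed KMS key.", check.deny) with input as inp
}
```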
+# scope: package +# schemas: +# - input: schema["cloud"] +# related_resources: +# - https://docs.aws.amazon.com/documentdb/latest/developerguide/security.encryption.ssl.public-key.html +# custom: +# id: AVD-AWS-0022 +# avd_id: AVD-AWS-0022 +# provider: aws +# service: documentdb +# severity: LOW +# short_code: encryption-customer-key +# recommended_action: Enable encryption using customer managed keys +# input: +# selector: +# - type: cloud +# subtypes: +# - service: documentdb +# provider: aws +# terraform: +# links: +# - https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/docdb_cluster#kms_key_id +# good_examples: checks/cloud/aws/documentdb/encryption_customer_key.tf.go +# bad_examples: checks/cloud/aws/documentdb/encryption_customer_key.tf.go +# cloudformation: +# good_examples: checks/cloud/aws/documentdb/encryption_customer_key.cf.go +# bad_examples: checks/cloud/aws/documentdb/encryption_customer_key.cf.go +package builtin.aws.documentdb.aws0022 + +import rego.v1 + +deny contains res if { + some cluster in input.aws.documentdb.clusters + cluster.kmskeyid.value == "" + + res := result.new("Cluster encryption does not use a customer-managed KMS key.", cluster) +} + +deny contains res if { + some cluster in input.aws.documentdb.clusters + some instance in cluster.instances + instance.kmskeyid.value == "" + + res := result.new("Instance encryption does not use a customer-managed KMS key.", cluster) +} diff --git a/checks/cloud/aws/documentdb/encryption_customer_key_test.go b/checks/cloud/aws/documentdb/encryption_customer_key_test.go deleted file mode 100644 index 86f1c1f2..00000000 --- a/checks/cloud/aws/documentdb/encryption_customer_key_test.go +++ /dev/null @@ -1,89 +0,0 @@ -package documentdb - -import ( - "testing" - - trivyTypes "github.com/aquasecurity/trivy/pkg/iac/types" - - "github.com/aquasecurity/trivy/pkg/iac/state" - - "github.com/aquasecurity/trivy/pkg/iac/providers/aws/documentdb" - "github.com/aquasecurity/trivy/pkg/iac/scan" - - "github.com/stretchr/testify/assert" -) - -func TestCheckEncryptionCustomerKey(t *testing.T) { - tests := []struct { - name string - input documentdb.DocumentDB - expected bool - }{ - { - name: "DocDB Cluster encryption missing KMS key", - input: documentdb.DocumentDB{ - Clusters: []documentdb.Cluster{ - { - Metadata: trivyTypes.NewTestMetadata(), - KMSKeyID: trivyTypes.String("", trivyTypes.NewTestMetadata()), - }, - }, - }, - expected: true, - }, - { - name: "DocDB Instance encryption missing KMS key", - input: documentdb.DocumentDB{ - Clusters: []documentdb.Cluster{ - { - Metadata: trivyTypes.NewTestMetadata(), - KMSKeyID: trivyTypes.String("kms-key", trivyTypes.NewTestMetadata()), - Instances: []documentdb.Instance{ - { - Metadata: trivyTypes.NewTestMetadata(), - KMSKeyID: trivyTypes.String("", trivyTypes.NewTestMetadata()), - }, - }, - }, - }, - }, - expected: true, - }, - { - name: "DocDB Cluster and Instance encrypted with proper KMS keys", - input: documentdb.DocumentDB{ - Clusters: []documentdb.Cluster{ - { - Metadata: trivyTypes.NewTestMetadata(), - KMSKeyID: trivyTypes.String("kms-key", trivyTypes.NewTestMetadata()), - Instances: []documentdb.Instance{ - { - Metadata: trivyTypes.NewTestMetadata(), - KMSKeyID: trivyTypes.String("kms-key", trivyTypes.NewTestMetadata()), - }, - }, - }, - }, - }, - expected: false, - }, - } - for _, test := range tests { - t.Run(test.name, func(t *testing.T) { - var testState state.State - testState.AWS.DocumentDB = test.input - results := 
CheckEncryptionCustomerKey.Evaluate(&testState) - var found bool - for _, result := range results { - if result.Status() == scan.StatusFailed && result.Rule().LongID() == CheckEncryptionCustomerKey.LongID() { - found = true - } - } - if test.expected { - assert.True(t, found, "Rule should have been found") - } else { - assert.False(t, found, "Rule should not have been found") - } - }) - } -} diff --git a/checks/cloud/aws/documentdb/encryption_customer_key_test.rego b/checks/cloud/aws/documentdb/encryption_customer_key_test.rego new file mode 100644 index 00000000..107a4113 --- /dev/null +++ b/checks/cloud/aws/documentdb/encryption_customer_key_test.rego @@ -0,0 +1,30 @@ +package builtin.aws.documentdb.aws0022_test + +import rego.v1 + +import data.builtin.aws.documentdb.aws0022 as check +import data.lib.test + +test_allow_cluster_with_kms_key if { + inp := {"aws": {"documentdb": {"clusters": [{"kmskeyid": {"value": "test"}}]}}} + + test.assert_empty(check.deny) with input as inp +} + +test_allow_instance_with_kms_key if { + inp := {"aws": {"documentdb": {"clusters": [{"instances": [{"kmskeyid": {"value": "test"}}]}]}}} + + test.assert_empty(check.deny) with input as inp +} + +test_disallow_cluster_without_kms_key if { + inp := {"aws": {"documentdb": {"clusters": [{"kmskeyid": {"value": ""}}]}}} + + test.assert_equal_message("Cluster encryption does not use a customer-managed KMS key.", check.deny) with input as inp +} + +test_disallow_instance_without_kms_key if { + inp := {"aws": {"documentdb": {"clusters": [{"instances": [{"kmskeyid": {"value": ""}}]}]}}} + + test.assert_equal_message("Instance encryption does not use a customer-managed KMS key.", check.deny) with input as inp +} diff --git a/checks/cloud/aws/dynamodb/enable_at_rest_encryption.go b/checks/cloud/aws/dynamodb/enable_at_rest_encryption.go index b0cac3be..444b0d47 100755 --- a/checks/cloud/aws/dynamodb/enable_at_rest_encryption.go +++ b/checks/cloud/aws/dynamodb/enable_at_rest_encryption.go @@ -34,7 +34,8 @@ var CheckEnableAtRestEncryption = rules.Register( Links: cloudFormationEnableAtRestEncryptionLinks, RemediationMarkdown: cloudFormationEnableAtRestEncryptionRemediationMarkdown, }, - Severity: severity.High, + Severity: severity.High, + Deprecated: true, }, func(s *state.State) (results scan.Results) { for _, cluster := range s.AWS.DynamoDB.DAXClusters { diff --git a/checks/cloud/aws/dynamodb/enable_at_rest_encryption.rego b/checks/cloud/aws/dynamodb/enable_at_rest_encryption.rego new file mode 100644 index 00000000..2ce4ea7e --- /dev/null +++ b/checks/cloud/aws/dynamodb/enable_at_rest_encryption.rego @@ -0,0 +1,41 @@ +# METADATA +# title: DAX Cluster should always encrypt data at rest +# description: | +# Data can be freely read if compromised. Amazon DynamoDB Accelerator (DAX) encryption at rest provides an additional layer of data protection by helping secure your data from unauthorized access to the underlying storage. 
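Note that the deny rule further down compares `serversideencryption.enabled` to `false` explicitly rather than using negation, so a DAX cluster that omits the server-side encryption block entirely produces no finding. A small illustrative test documenting that behaviour of the rule as written:

```rego
package builtin.aws.dynamodb.aws0023_extra_test

import rego.v1

import data.builtin.aws.dynamodb.aws0023 as check
import data.lib.test

# With no serversideencryption block at all, the equality check in the rule is
# undefined rather than true, so the rule as written reports nothing.
test_allow_cluster_without_sse_block if {
	inp := {"aws": {"dynamodb": {"daxclusters": [{}]}}}
	test.assert_empty(check.deny) with input as inp
}
```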
+# scope: package +# schemas: +# - input: schema["cloud"] +# related_resources: +# - https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DAXEncryptionAtRest.html +# - https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dax-cluster.html +# custom: +# id: AVD-AWS-0023 +# avd_id: AVD-AWS-0023 +# provider: aws +# service: dynamodb +# severity: HIGH +# short_code: enable-at-rest-encryption +# recommended_action: Enable encryption at rest for DAX Cluster +# input: +# selector: +# - type: cloud +# subtypes: +# - service: dynamodb +# provider: aws +# terraform: +# links: +# - https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/dax_cluster#server_side_encryption +# good_examples: checks/cloud/aws/dynamodb/enable_at_rest_encryption.tf.go +# bad_examples: checks/cloud/aws/dynamodb/enable_at_rest_encryption.tf.go +# cloudformation: +# good_examples: checks/cloud/aws/dynamodb/enable_at_rest_encryption.cf.go +# bad_examples: checks/cloud/aws/dynamodb/enable_at_rest_encryption.cf.go +package builtin.aws.dynamodb.aws0023 + +import rego.v1 + +deny contains res if { + some cluster in input.aws.dynamodb.daxclusters + cluster.serversideencryption.enabled.value == false + res := result.new("DAX encryption is not enabled.", cluster.serversideencryption.enabled) +} diff --git a/checks/cloud/aws/dynamodb/enable_at_rest_encryption_test.go b/checks/cloud/aws/dynamodb/enable_at_rest_encryption_test.go deleted file mode 100644 index 66c02a1b..00000000 --- a/checks/cloud/aws/dynamodb/enable_at_rest_encryption_test.go +++ /dev/null @@ -1,71 +0,0 @@ -package dynamodb - -import ( - "testing" - - trivyTypes "github.com/aquasecurity/trivy/pkg/iac/types" - - "github.com/aquasecurity/trivy/pkg/iac/state" - - "github.com/aquasecurity/trivy/pkg/iac/providers/aws/dynamodb" - "github.com/aquasecurity/trivy/pkg/iac/scan" - - "github.com/stretchr/testify/assert" -) - -func TestCheckEnableAtRestEncryption(t *testing.T) { - tests := []struct { - name string - input dynamodb.DynamoDB - expected bool - }{ - { - name: "Cluster with SSE disabled", - input: dynamodb.DynamoDB{ - DAXClusters: []dynamodb.DAXCluster{ - { - Metadata: trivyTypes.NewTestMetadata(), - ServerSideEncryption: dynamodb.ServerSideEncryption{ - Metadata: trivyTypes.NewTestMetadata(), - Enabled: trivyTypes.Bool(false, trivyTypes.NewTestMetadata()), - }, - }, - }, - }, - expected: true, - }, - { - name: "Cluster with SSE enabled", - input: dynamodb.DynamoDB{ - DAXClusters: []dynamodb.DAXCluster{ - { - Metadata: trivyTypes.NewTestMetadata(), - ServerSideEncryption: dynamodb.ServerSideEncryption{ - Metadata: trivyTypes.NewTestMetadata(), - Enabled: trivyTypes.Bool(true, trivyTypes.NewTestMetadata()), - }, - }, - }, - }, - expected: false, - }, - } - for _, test := range tests { - t.Run(test.name, func(t *testing.T) { - var testState state.State - testState.AWS.DynamoDB = test.input - results := CheckEnableAtRestEncryption.Evaluate(&testState) - var found bool - for _, result := range results { - if result.Status() == scan.StatusFailed && result.Rule().LongID() == CheckEnableAtRestEncryption.LongID() { - found = true - } - } - if test.expected { - assert.True(t, found, "Rule should have been found") - } else { - assert.False(t, found, "Rule should not have been found") - } - }) - } -} diff --git a/checks/cloud/aws/dynamodb/enable_at_rest_encryption_test.rego b/checks/cloud/aws/dynamodb/enable_at_rest_encryption_test.rego new file mode 100644 index 00000000..237e2f20 --- /dev/null +++ 
b/checks/cloud/aws/dynamodb/enable_at_rest_encryption_test.rego @@ -0,0 +1,18 @@ +package builtin.aws.dynamodb.aws0023_test + +import rego.v1 + +import data.builtin.aws.dynamodb.aws0023 as check +import data.lib.test + +test_allow_with_encryption if { + inp := {"aws": {"dynamodb": {"daxclusters": [{"serversideencryption": {"enabled": {"value": true}}}]}}} + + test.assert_empty(check.deny) with input as inp +} + +test_disallow_without_encryption if { + inp := {"aws": {"dynamodb": {"daxclusters": [{"serversideencryption": {"enabled": {"value": false}}}]}}} + + test.assert_equal_message("DAX encryption is not enabled.", check.deny) with input as inp +} diff --git a/checks/cloud/aws/dynamodb/enable_recovery.go b/checks/cloud/aws/dynamodb/enable_recovery.go index 8fa5e687..0cf2b6dc 100755 --- a/checks/cloud/aws/dynamodb/enable_recovery.go +++ b/checks/cloud/aws/dynamodb/enable_recovery.go @@ -29,7 +29,8 @@ By enabling point-in-time-recovery you can restore to a known point in the event Links: terraformEnableRecoveryLinks, RemediationMarkdown: terraformEnableRecoveryRemediationMarkdown, }, - Severity: severity.Medium, + Severity: severity.Medium, + Deprecated: true, }, func(s *state.State) (results scan.Results) { for _, cluster := range s.AWS.DynamoDB.DAXClusters { diff --git a/checks/cloud/aws/dynamodb/enable_recovery.rego b/checks/cloud/aws/dynamodb/enable_recovery.rego new file mode 100644 index 00000000..3e19dca2 --- /dev/null +++ b/checks/cloud/aws/dynamodb/enable_recovery.rego @@ -0,0 +1,46 @@ +# METADATA +# title: Point in time recovery should be enabled to protect DynamoDB table +# description: | +# DynamoDB tables should be protected against accidentally or malicious write/delete actions by ensuring that there is adequate protection. +# By enabling point-in-time-recovery you can restore to a known point in the event of loss of data. 
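The check below covers both DAX clusters and tables, while the tests that follow later in this diff all use `tables` inputs. An illustrative test for the `daxclusters` branch, assuming the same state shape:

```rego
package builtin.aws.dynamodb.aws0024_extra_test

import rego.v1

import data.builtin.aws.dynamodb.aws0024 as check
import data.lib.test

# A DAX cluster without point-in-time recovery should be reported by the
# daxclusters branch of the check.
test_deny_dax_cluster_without_recovery if {
	inp := {"aws": {"dynamodb": {"daxclusters": [{"pointintimerecovery": {"value": false}}]}}}
	test.assert_equal_message("Point-in-time recovery is not enabled.", check.deny) with input as inp
}
```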
+# scope: package +# schemas: +# - input: schema["cloud"] +# related_resources: +# - https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/PointInTimeRecovery.html +# custom: +# id: AVD-AWS-0024 +# avd_id: AVD-AWS-0024 +# provider: aws +# service: dynamodb +# severity: MEDIUM +# short_code: enable-recovery +# recommended_action: Enable point in time recovery +# input: +# selector: +# - type: cloud +# subtypes: +# - service: dynamodb +# provider: aws +# terraform: +# links: +# - https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/dynamodb_table#point_in_time_recovery +# good_examples: checks/cloud/aws/dynamodb/enable_recovery.tf.go +# bad_examples: checks/cloud/aws/dynamodb/enable_recovery.tf.go +package builtin.aws.dynamodb.aws0024 + +import rego.v1 + +deny contains res if { + some cluster in input.aws.dynamodb.daxclusters + cluster.pointintimerecovery.value == false + + res := result.new("Point-in-time recovery is not enabled.", cluster.pointintimerecovery) +} + +deny contains res if { + some table in input.aws.dynamodb.tables + table.pointintimerecovery.value == false + + res := result.new("Point-in-time recovery is not enabled.", table.pointintimerecovery) +} diff --git a/checks/cloud/aws/dynamodb/enable_recovery_test.go b/checks/cloud/aws/dynamodb/enable_recovery_test.go deleted file mode 100644 index 9df6d104..00000000 --- a/checks/cloud/aws/dynamodb/enable_recovery_test.go +++ /dev/null @@ -1,65 +0,0 @@ -package dynamodb - -import ( - "testing" - - trivyTypes "github.com/aquasecurity/trivy/pkg/iac/types" - - "github.com/aquasecurity/trivy/pkg/iac/state" - - "github.com/aquasecurity/trivy/pkg/iac/providers/aws/dynamodb" - "github.com/aquasecurity/trivy/pkg/iac/scan" - - "github.com/stretchr/testify/assert" -) - -func TestCheckEnableRecovery(t *testing.T) { - tests := []struct { - name string - input dynamodb.DynamoDB - expected bool - }{ - { - name: "Cluster with point in time recovery disabled", - input: dynamodb.DynamoDB{ - DAXClusters: []dynamodb.DAXCluster{ - { - Metadata: trivyTypes.NewTestMetadata(), - PointInTimeRecovery: trivyTypes.Bool(false, trivyTypes.NewTestMetadata()), - }, - }, - }, - expected: true, - }, - { - name: "Cluster with point in time recovery enabled", - input: dynamodb.DynamoDB{ - DAXClusters: []dynamodb.DAXCluster{ - { - Metadata: trivyTypes.NewTestMetadata(), - PointInTimeRecovery: trivyTypes.Bool(true, trivyTypes.NewTestMetadata()), - }, - }, - }, - expected: false, - }, - } - for _, test := range tests { - t.Run(test.name, func(t *testing.T) { - var testState state.State - testState.AWS.DynamoDB = test.input - results := CheckEnableRecovery.Evaluate(&testState) - var found bool - for _, result := range results { - if result.Status() == scan.StatusFailed && result.Rule().LongID() == CheckEnableRecovery.LongID() { - found = true - } - } - if test.expected { - assert.True(t, found, "Rule should have been found") - } else { - assert.False(t, found, "Rule should not have been found") - } - }) - } -} diff --git a/checks/cloud/aws/dynamodb/enable_recovery_test.rego b/checks/cloud/aws/dynamodb/enable_recovery_test.rego new file mode 100644 index 00000000..e73a8d09 --- /dev/null +++ b/checks/cloud/aws/dynamodb/enable_recovery_test.rego @@ -0,0 +1,30 @@ +package builtin.aws.dynamodb.aws0024_test + +import rego.v1 + +import data.builtin.aws.dynamodb.aws0024 as check +import data.lib.test + +test_allow_cluster_with_recovery if { + inp := {"aws": {"dynamodb": {"tables": [{"pointintimerecovery": {"value": true}}]}}} + + 
test.assert_empty(check.deny) with input as inp +} + +test_deny_cluster_without_recovery if { + inp := {"aws": {"dynamodb": {"tables": [{"pointintimerecovery": {"value": false}}]}}} + + test.assert_equal_message("Point-in-time recovery is not enabled.", check.deny) with input as inp +} + +test_allow_table_with_recovery if { + inp := {"aws": {"dynamodb": {"tables": [{"pointintimerecovery": {"value": true}}]}}} + + test.assert_empty(check.deny) with input as inp +} + +test_deny_table_without_recovery if { + inp := {"aws": {"dynamodb": {"tables": [{"pointintimerecovery": {"value": false}}]}}} + + test.assert_equal_message("Point-in-time recovery is not enabled.", check.deny) with input as inp +} diff --git a/checks/cloud/aws/dynamodb/table_customer_key.go b/checks/cloud/aws/dynamodb/table_customer_key.go index 643e3bdd..e0dbca69 100755 --- a/checks/cloud/aws/dynamodb/table_customer_key.go +++ b/checks/cloud/aws/dynamodb/table_customer_key.go @@ -28,7 +28,8 @@ var CheckTableCustomerKey = rules.Register( Links: terraformTableCustomerKeyLinks, RemediationMarkdown: terraformTableCustomerKeyRemediationMarkdown, }, - Severity: severity.Low, + Severity: severity.Low, + Deprecated: true, }, func(s *state.State) (results scan.Results) { for _, table := range s.AWS.DynamoDB.Tables { diff --git a/checks/cloud/aws/dynamodb/table_customer_key.rego b/checks/cloud/aws/dynamodb/table_customer_key.rego new file mode 100644 index 00000000..efd1f542 --- /dev/null +++ b/checks/cloud/aws/dynamodb/table_customer_key.rego @@ -0,0 +1,44 @@ +# METADATA +# title: DynamoDB tables should use at rest encryption with a Customer Managed Key +# description: | +# Using AWS managed keys does not allow for fine grained control. DynamoDB tables are encrypted by default using AWS managed encryption keys. To increase control of the encryption and control the management of factors like key rotation, use a Customer Managed Key. 
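The Go test removed further down also covered a table encrypted with the AWS managed default key, whereas the Rego rules below only match an empty `kmskeyid`, so an explicitly configured default alias is not flagged as things stand. An illustrative test documenting that behaviour, assuming the default alias is `alias/aws/dynamodb`:

```rego
package builtin.aws.dynamodb.aws0025_extra_test

import rego.v1

import data.builtin.aws.dynamodb.aws0025 as check
import data.lib.test

# "alias/aws/dynamodb" is assumed to be the AWS managed default key alias.
# The rules as written only flag an empty kmskeyid, so this input passes.
test_allow_table_with_aws_managed_default_key if {
	inp := {"aws": {"dynamodb": {"tables": [{"serversideencryption": {
		"enabled": {"value": true},
		"kmskeyid": {"value": "alias/aws/dynamodb"},
	}}]}}}

	test.assert_empty(check.deny) with input as inp
}
```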
+# scope: package +# schemas: +# - input: schema["cloud"] +# related_resources: +# - https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/EncryptionAtRest.html +# custom: +# id: AVD-AWS-0025 +# avd_id: AVD-AWS-0025 +# provider: aws +# service: dynamodb +# severity: LOW +# short_code: table-customer-key +# recommended_action: Enable server side encryption with a customer managed key +# input: +# selector: +# - type: cloud +# subtypes: +# - service: dynamodb +# provider: aws +# terraform: +# links: +# - https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/dynamodb_table#server_side_encryption +# good_examples: checks/cloud/aws/dynamodb/table_customer_key.tf.go +# bad_examples: checks/cloud/aws/dynamodb/table_customer_key.tf.go +package builtin.aws.dynamodb.aws0025 + +import rego.v1 + +deny contains res if { + some table in input.aws.dynamodb.tables + table.serversideencryption.enabled.value == false + res := result.new("Table encryption does not use a customer-managed KMS key.", table.serversideencryption.enabled) +} + +deny contains res if { + some table in input.aws.dynamodb.tables + table.serversideencryption.enabled.value + table.serversideencryption.kmskeyid.value == "" + res := result.new("Table encryption explicitly uses the default KMS key.", table.serversideencryption.kmskeyid) +} diff --git a/checks/cloud/aws/dynamodb/table_customer_key_test.go b/checks/cloud/aws/dynamodb/table_customer_key_test.go deleted file mode 100644 index 56daa731..00000000 --- a/checks/cloud/aws/dynamodb/table_customer_key_test.go +++ /dev/null @@ -1,102 +0,0 @@ -package dynamodb - -import ( - "testing" - - "github.com/aquasecurity/trivy/pkg/iac/providers/aws/dynamodb" - "github.com/aquasecurity/trivy/pkg/iac/scan" - "github.com/aquasecurity/trivy/pkg/iac/state" - trivyTypes "github.com/aquasecurity/trivy/pkg/iac/types" - "github.com/stretchr/testify/assert" -) - -func TestCheckTableCustomerKey(t *testing.T) { - tests := []struct { - name string - input dynamodb.DynamoDB - expected bool - }{ - { - name: "Cluster encryption missing KMS key", - input: dynamodb.DynamoDB{ - Tables: []dynamodb.Table{ - { - Metadata: trivyTypes.NewTestMetadata(), - ServerSideEncryption: dynamodb.ServerSideEncryption{ - Enabled: trivyTypes.Bool(true, trivyTypes.NewTestMetadata()), - Metadata: trivyTypes.NewTestMetadata(), - KMSKeyID: trivyTypes.String("", trivyTypes.NewTestMetadata()), - }, - }, - }, - }, - expected: true, - }, - { - name: "Cluster encryption using default KMS key", - input: dynamodb.DynamoDB{ - Tables: []dynamodb.Table{ - { - Metadata: trivyTypes.NewTestMetadata(), - ServerSideEncryption: dynamodb.ServerSideEncryption{ - Enabled: trivyTypes.Bool(true, trivyTypes.NewTestMetadata()), - Metadata: trivyTypes.NewTestMetadata(), - KMSKeyID: trivyTypes.String(dynamodb.DefaultKMSKeyID, trivyTypes.NewTestMetadata()), - }, - }, - }, - }, - expected: true, - }, - { - name: "Cluster encryption using proper KMS key", - input: dynamodb.DynamoDB{ - Tables: []dynamodb.Table{ - { - Metadata: trivyTypes.NewTestMetadata(), - ServerSideEncryption: dynamodb.ServerSideEncryption{ - Enabled: trivyTypes.Bool(true, trivyTypes.NewTestMetadata()), - Metadata: trivyTypes.NewTestMetadata(), - KMSKeyID: trivyTypes.String("some-ok-key", trivyTypes.NewTestMetadata()), - }, - }, - }, - }, - expected: false, - }, - { - name: "KMS key exist, but SSE is not enabled", - input: dynamodb.DynamoDB{ - Tables: []dynamodb.Table{ - { - Metadata: trivyTypes.NewTestMetadata(), - ServerSideEncryption: 
dynamodb.ServerSideEncryption{ - Enabled: trivyTypes.BoolDefault(false, trivyTypes.NewTestMetadata()), - Metadata: trivyTypes.NewTestMetadata(), - KMSKeyID: trivyTypes.String("some-ok-key", trivyTypes.NewTestMetadata()), - }, - }, - }, - }, - expected: true, - }, - } - for _, test := range tests { - t.Run(test.name, func(t *testing.T) { - var testState state.State - testState.AWS.DynamoDB = test.input - results := CheckTableCustomerKey.Evaluate(&testState) - var found bool - for _, result := range results { - if result.Status() == scan.StatusFailed && result.Rule().LongID() == CheckTableCustomerKey.LongID() { - found = true - } - } - if test.expected { - assert.True(t, found, "Rule should have been found") - } else { - assert.False(t, found, "Rule should not have been found") - } - }) - } -} diff --git a/checks/cloud/aws/dynamodb/table_customer_key_test.rego b/checks/cloud/aws/dynamodb/table_customer_key_test.rego new file mode 100644 index 00000000..74b24f7c --- /dev/null +++ b/checks/cloud/aws/dynamodb/table_customer_key_test.rego @@ -0,0 +1,42 @@ +package builtin.aws.dynamodb.aws0025_test + +import rego.v1 + +import data.builtin.aws.dynamodb.aws0025 as check +import data.lib.test + +test_allow_table_with_cmk if { + inp := {"aws": {"dynamodb": {"tables": [{ + "name": "test", + "serversideencryption": { + "enabled": {"value": true}, + "kmskeyid": {"value": "alias/test"}, + }, + }]}}} + + test.assert_empty(check.deny) with input as inp +} + +test_deny_table_without_cmk if { + inp := {"aws": {"dynamodb": {"tables": [{ + "name": "test", + "serversideencryption": { + "enabled": {"value": true}, + "kmskeyid": {"value": ""}, + }, + }]}}} + + test.assert_equal_message("Table encryption explicitly uses the default KMS key.", check.deny) with input as inp +} + +test_deny_table_sse_disabled if { + inp := {"aws": {"dynamodb": {"tables": [{ + "name": "test", + "serversideencryption": { + "enabled": {"value": false}, + "kmskeyid": {"value": ""}, + }, + }]}}} + + test.assert_equal_message("Table encryption explicitly uses the default KMS key.", check.deny) with input as inp +} diff --git a/cmd/command_id/main.go b/cmd/command_id/main.go index 395d2f34..802614e2 100644 --- a/cmd/command_id/main.go +++ b/cmd/command_id/main.go @@ -7,7 +7,7 @@ import ( "strings" trivy_checks "github.com/aquasecurity/trivy-checks" - "gopkg.in/yaml.v2" + "gopkg.in/yaml.v3" ) const ( diff --git a/go.mod b/go.mod index 9e4bd63c..47c26376 100644 --- a/go.mod +++ b/go.mod @@ -4,6 +4,8 @@ go 1.22.0 toolchain go1.22.2 +replace github.com/aquasecurity/trivy => /Users/nikita/projects/trivy + require ( github.com/aquasecurity/trivy v0.51.2-0.20240527214045-349caf96bc3d github.com/docker/docker v26.1.3+incompatible @@ -19,8 +21,7 @@ require ( require ( cloud.google.com/go v0.112.1 // indirect - cloud.google.com/go/compute v1.25.1 // indirect - cloud.google.com/go/compute/metadata v0.2.3 // indirect + cloud.google.com/go/compute/metadata v0.3.0 // indirect cloud.google.com/go/iam v1.1.6 // indirect cloud.google.com/go/storage v1.39.1 // indirect dario.cat/mergo v1.0.0 // indirect @@ -35,15 +36,15 @@ require ( github.com/alecthomas/chroma v0.10.0 // indirect github.com/apparentlymart/go-cidr v1.1.0 // indirect github.com/apparentlymart/go-textseg/v15 v15.0.0 // indirect - github.com/aquasecurity/go-version v0.0.0-20210121072130-637058cfe492 // indirect + github.com/aquasecurity/go-version v0.0.0-20240603093900-cf8a8d29271d // indirect github.com/aws/aws-sdk-go v1.53.0 // indirect - github.com/aws/aws-sdk-go-v2/service/s3 v1.54.2 // 
indirect + github.com/aws/aws-sdk-go-v2/service/s3 v1.55.1 // indirect github.com/aws/smithy-go v1.20.2 // indirect github.com/beorn7/perks v1.0.1 // indirect github.com/bgentry/go-netrc v0.0.0-20140422174119-9fd32a8b3d3d // indirect github.com/bmatcuk/doublestar/v4 v4.6.1 // indirect github.com/bytecodealliance/wasmtime-go/v3 v3.0.2 // indirect - github.com/cenkalti/backoff/v4 v4.2.1 // indirect + github.com/cenkalti/backoff/v4 v4.3.0 // indirect github.com/cespare/xxhash v1.1.0 // indirect github.com/cespare/xxhash/v2 v2.2.0 // indirect github.com/cloudflare/circl v1.3.7 // indirect @@ -83,12 +84,12 @@ require ( github.com/googleapis/enterprise-certificate-proxy v0.3.2 // indirect github.com/googleapis/gax-go/v2 v2.12.3 // indirect github.com/gorilla/mux v1.8.1 // indirect - github.com/grpc-ecosystem/grpc-gateway/v2 v2.19.0 // indirect + github.com/grpc-ecosystem/grpc-gateway/v2 v2.20.0 // indirect github.com/hashicorp/go-cleanhttp v0.5.2 // indirect github.com/hashicorp/go-getter v1.7.4 // indirect github.com/hashicorp/go-safetemp v1.0.0 // indirect github.com/hashicorp/go-uuid v1.0.3 // indirect - github.com/hashicorp/go-version v1.6.0 // indirect + github.com/hashicorp/go-version v1.7.0 // indirect github.com/hashicorp/hcl v1.0.0 // indirect github.com/hashicorp/hcl/v2 v2.20.1 // indirect github.com/inconshreveable/mousetrap v1.1.0 // indirect @@ -117,7 +118,7 @@ require ( github.com/olekukonko/tablewriter v0.0.5 // indirect github.com/opencontainers/go-digest v1.0.0 // indirect github.com/opencontainers/image-spec v1.1.0 // indirect - github.com/pelletier/go-toml/v2 v2.1.0 // indirect + github.com/pelletier/go-toml/v2 v2.2.2 // indirect github.com/peterh/liner v1.2.2 // indirect github.com/pjbgf/sha1cd v0.3.0 // indirect github.com/pkg/errors v0.9.1 // indirect @@ -142,7 +143,7 @@ require ( github.com/spf13/cast v1.6.0 // indirect github.com/spf13/cobra v1.8.0 // indirect github.com/spf13/pflag v1.0.5 // indirect - github.com/spf13/viper v1.18.2 // indirect + github.com/spf13/viper v1.19.0 // indirect github.com/subosito/gotenv v1.6.0 // indirect github.com/tchap/go-patricia/v2 v2.3.1 // indirect github.com/tklauser/go-sysconf v0.3.13 // indirect @@ -157,32 +158,31 @@ require ( github.com/zclconf/go-cty-yaml v1.0.3 // indirect go.opencensus.io v0.24.0 // indirect go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.49.0 // indirect - go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.49.0 // indirect - go.opentelemetry.io/otel v1.24.0 // indirect - go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.21.0 // indirect + go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.52.0 // indirect + go.opentelemetry.io/otel v1.27.0 // indirect + go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.27.0 // indirect go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.21.0 // indirect - go.opentelemetry.io/otel/metric v1.24.0 // indirect - go.opentelemetry.io/otel/sdk v1.24.0 // indirect - go.opentelemetry.io/otel/trace v1.24.0 // indirect - go.opentelemetry.io/proto/otlp v1.1.0 // indirect + go.opentelemetry.io/otel/metric v1.27.0 // indirect + go.opentelemetry.io/otel/sdk v1.27.0 // indirect + go.opentelemetry.io/otel/trace v1.27.0 // indirect + go.opentelemetry.io/proto/otlp v1.2.0 // indirect go.uber.org/automaxprocs v1.5.3 // indirect go.uber.org/multierr v1.11.0 // indirect - golang.org/x/crypto v0.23.0 // indirect + golang.org/x/crypto v0.24.0 // indirect golang.org/x/exp v0.0.0-20231110203233-9a3e6036ecaa // indirect 
golang.org/x/mod v0.17.0 // indirect - golang.org/x/net v0.25.0 // indirect - golang.org/x/oauth2 v0.18.0 // indirect + golang.org/x/net v0.26.0 // indirect + golang.org/x/oauth2 v0.20.0 // indirect golang.org/x/sync v0.7.0 // indirect - golang.org/x/sys v0.20.0 // indirect - golang.org/x/text v0.15.0 // indirect + golang.org/x/sys v0.21.0 // indirect + golang.org/x/text v0.16.0 // indirect golang.org/x/time v0.5.0 // indirect - golang.org/x/tools v0.19.0 // indirect + golang.org/x/tools v0.21.1-0.20240508182429-e35e4ccd0d2d // indirect golang.org/x/xerrors v0.0.0-20231012003039-104605ab7028 // indirect google.golang.org/api v0.172.0 // indirect - google.golang.org/appengine v1.6.8 // indirect google.golang.org/genproto v0.0.0-20240311173647-c811ad7063a7 // indirect - google.golang.org/genproto/googleapis/api v0.0.0-20240318140521-94a12d6c2237 // indirect - google.golang.org/genproto/googleapis/rpc v0.0.0-20240318140521-94a12d6c2237 // indirect + google.golang.org/genproto/googleapis/api v0.0.0-20240520151616-dc85e6b867a5 // indirect + google.golang.org/genproto/googleapis/rpc v0.0.0-20240515191416-fc5f0ca64291 // indirect google.golang.org/grpc v1.64.0 // indirect google.golang.org/protobuf v1.34.1 // indirect gopkg.in/ini.v1 v1.67.0 // indirect diff --git a/go.sum b/go.sum index e180a7df..b01b5f11 100644 --- a/go.sum +++ b/go.sum @@ -68,10 +68,8 @@ cloud.google.com/go/compute v1.6.0/go.mod h1:T29tfhtVbq1wvAPo0E3+7vhgmkOYeXjhFvz cloud.google.com/go/compute v1.6.1/go.mod h1:g85FgpzFvNULZ+S8AYq87axRKuf2Kh7deLqV/jJ3thU= cloud.google.com/go/compute v1.7.0/go.mod h1:435lt8av5oL9P3fv1OEzSbSUe+ybHXGMPQHHZWZxy9U= cloud.google.com/go/compute v1.10.0/go.mod h1:ER5CLbMxl90o2jtNbGSbtfOpQKR0t15FOtRsugnLrlU= -cloud.google.com/go/compute v1.25.1 h1:ZRpHJedLtTpKgr3RV1Fx23NuaAEN1Zfx9hw1u4aJdjU= -cloud.google.com/go/compute v1.25.1/go.mod h1:oopOIR53ly6viBYxaDhBfJwzUAxf1zE//uf3IB011ls= -cloud.google.com/go/compute/metadata v0.2.3 h1:mg4jlk7mCAj6xXp9UJ4fjI9VUI5rubuGBW5aJ7UnBMY= -cloud.google.com/go/compute/metadata v0.2.3/go.mod h1:VAV5nSsACxMJvgaAuX6Pk2AawlZn8kiOGuCv6gTkwuA= +cloud.google.com/go/compute/metadata v0.3.0 h1:Tz+eQXMEqDIKRsmY3cHTL6FVaynIjX2QxYC4trgAKZc= +cloud.google.com/go/compute/metadata v0.3.0/go.mod h1:zFmK7XCadkQkj6TtorcaGlCW1hT1fIilQDwofLpJ20k= cloud.google.com/go/containeranalysis v0.5.1/go.mod h1:1D92jd8gRR/c0fGMlymRgxWD3Qw9C1ff6/T7mLgVL8I= cloud.google.com/go/containeranalysis v0.6.0/go.mod h1:HEJoiEIu+lEXM+k7+qLCci0h33lX3ZqoYFdmPcoO7s4= cloud.google.com/go/datacatalog v1.3.0/go.mod h1:g9svFY6tuR+j+hrTw3J2dNcmI0dzmSiyOzm8kpLq0a0= @@ -216,10 +214,8 @@ github.com/apparentlymart/go-cidr v1.1.0 h1:2mAhrMoF+nhXqxTzSZMUzDHkLjmIHC+Zzn4t github.com/apparentlymart/go-cidr v1.1.0/go.mod h1:EBcsNrHc3zQeuaeCeCtQruQm+n9/YjEn/vI25Lg7Gwc= github.com/apparentlymart/go-textseg/v15 v15.0.0 h1:uYvfpb3DyLSCGWnctWKGj857c6ew1u1fNQOlOtuGxQY= github.com/apparentlymart/go-textseg/v15 v15.0.0/go.mod h1:K8XmNZdhEBkdlyDdvbmmsvpAG721bKi0joRfFdHIWJ4= -github.com/aquasecurity/go-version v0.0.0-20210121072130-637058cfe492 h1:rcEG5HI490FF0a7zuvxOxen52ddygCfNVjP0XOCMl+M= -github.com/aquasecurity/go-version v0.0.0-20210121072130-637058cfe492/go.mod h1:9Beu8XsUNNfzml7WBf3QmyPToP1wm1Gj/Vc5UJKqTzU= -github.com/aquasecurity/trivy v0.51.2-0.20240527214045-349caf96bc3d h1:bZ0GSDma9vT0wwfU5XZoauM9ma1jvYkoeDllnCsQ1PU= -github.com/aquasecurity/trivy v0.51.2-0.20240527214045-349caf96bc3d/go.mod h1:IdxA0M9/6MpT+GvwEtGDnU1O7hZ+M1kWbDoNJQL0BCw= +github.com/aquasecurity/go-version v0.0.0-20240603093900-cf8a8d29271d 
h1:4zour5Sh9chOg+IqIinIcJ3qtr3cIf8FdFY6aArlXBw= +github.com/aquasecurity/go-version v0.0.0-20240603093900-cf8a8d29271d/go.mod h1:1cPOp4BaQZ1G2F5fnw4dFz6pkOyXJI9KTuak8ghIl3U= github.com/arbovm/levenshtein v0.0.0-20160628152529-48b4e1c0c4d0 h1:jfIu9sQUG6Ig+0+Ap1h4unLjW6YQJpKZVmUzxsD4E/Q= github.com/arbovm/levenshtein v0.0.0-20160628152529-48b4e1c0c4d0/go.mod h1:t2tdKJDJF9BV14lnkjHmOQgcvEKgtqs5a1N3LNdJhGE= github.com/armon/consul-api v0.0.0-20180202201655-eb2c6b5be1b6/go.mod h1:grANhF5doyWs3UAsr3K4I6qtAmlQcZDesFNEHPZAzj8= @@ -228,8 +224,8 @@ github.com/armon/go-socks5 v0.0.0-20160902184237-e75332964ef5/go.mod h1:wHh0iHkY github.com/aws/aws-sdk-go v1.44.122/go.mod h1:y4AeaBuwd2Lk+GepC1E9v0qOiTws0MIWAX4oIKwKHZo= github.com/aws/aws-sdk-go v1.53.0 h1:MMo1x1ggPPxDfHMXJnQudTbGXYlD4UigUAud1DJxPVo= github.com/aws/aws-sdk-go v1.53.0/go.mod h1:LF8svs817+Nz+DmiMQKTO3ubZ/6IaTpq3TjupRn3Eqk= -github.com/aws/aws-sdk-go-v2/service/s3 v1.54.2 h1:gYSJhNiOF6J9xaYxu2NFNstoiNELwt0T9w29FxSfN+Y= -github.com/aws/aws-sdk-go-v2/service/s3 v1.54.2/go.mod h1:739CllldowZiPPsDFcJHNF4FXrVxaSGVnZ9Ez9Iz9hc= +github.com/aws/aws-sdk-go-v2/service/s3 v1.55.1 h1:UAxBuh0/8sFJk1qOkvOKewP5sWeWaTPDknbQz0ZkDm0= +github.com/aws/aws-sdk-go-v2/service/s3 v1.55.1/go.mod h1:hWjsYGjVuqCgfoveVcVFPXIWgz0aByzwaxKlN1StKcM= github.com/aws/smithy-go v1.20.2 h1:tbp628ireGtzcHDDmLT/6ADHidqnwgF57XOXZe6tp4Q= github.com/aws/smithy-go v1.20.2/go.mod h1:krry+ya/rV9RDcV/Q16kpu6ypI4K2czasz0NC3qS14E= github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM= @@ -240,8 +236,8 @@ github.com/bmatcuk/doublestar/v4 v4.6.1 h1:FH9SifrbvJhnlQpztAx++wlkk70QBf0iBWDwN github.com/bmatcuk/doublestar/v4 v4.6.1/go.mod h1:xBQ8jztBU6kakFMg+8WGxn0c6z1fTSPVIjEY1Wr7jzc= github.com/bytecodealliance/wasmtime-go/v3 v3.0.2 h1:3uZCA/BLTIu+DqCfguByNMJa2HVHpXvjfy0Dy7g6fuA= github.com/bytecodealliance/wasmtime-go/v3 v3.0.2/go.mod h1:RnUjnIXxEJcL6BgCvNyzCCRzZcxCgsZCi+RNlvYor5Q= -github.com/cenkalti/backoff/v4 v4.2.1 h1:y4OZtCnogmCPw98Zjyt5a6+QwPLGkiQsYW5oUqylYbM= -github.com/cenkalti/backoff/v4 v4.2.1/go.mod h1:Y3VNntkOUPxTVeUxJ/G5vcM//AlwfmyYozVcomhLiZE= +github.com/cenkalti/backoff/v4 v4.3.0 h1:MyRJ/UdXutAwSAT+s3wNd7MfTIcy71VQueUuFK343L8= +github.com/cenkalti/backoff/v4 v4.3.0/go.mod h1:Y3VNntkOUPxTVeUxJ/G5vcM//AlwfmyYozVcomhLiZE= github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU= github.com/cespare/xxhash v1.1.0 h1:a6HrQnmkObjyL+Gs60czilIUGqrzKutQD6XZog3p+ko= github.com/cespare/xxhash v1.1.0/go.mod h1:XrSqR1VqqWfGrhpAt58auRo0WTKS1nRRg3ghfAqPWnc= @@ -283,7 +279,6 @@ github.com/coreos/go-semver v0.2.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3Ee github.com/cpuguy83/dockercfg v0.3.1 h1:/FpZ+JaygUR/lZP2NlFI2DVfrOEMAIKP5wWEJdoYe9E= github.com/cpuguy83/dockercfg v0.3.1/go.mod h1:sugsbF4//dDlL/i+S+rtpIWp+5h0BHJHfjj5/jFyUJc= github.com/cpuguy83/go-md2man v1.0.10/go.mod h1:SmD6nW6nTyfqj6ABTjUi3V3JVMnlJmwcJI5acqYI6dE= -github.com/cpuguy83/go-md2man/v2 v2.0.0-20190314233015-f79a8a8ca69d/go.mod h1:maD7wRr/U5Z6m/iR4s+kqSMx2CaBsrgA7czyZG/E6dU= github.com/cpuguy83/go-md2man/v2 v2.0.3/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46tRHOmNcaadrF8o= github.com/creack/pty v1.1.21 h1:1/QdRyBaHHJP61QkWMXlOIBfsgdDeeKfK8SYVUWJKf0= github.com/creack/pty v1.1.21/go.mod h1:MOBLtS5ELjhRRrroQr9kyvTxUAFNvYEK993ew/Vr4O4= @@ -482,8 +477,8 @@ github.com/googleapis/go-type-adapters v1.0.0/go.mod h1:zHW75FOG2aur7gAO2B+MLby+ github.com/gorilla/mux v1.8.1 h1:TuBL49tXwgrFYWhqrNgrUNEY92u81SPhu7sTdzQEiWY= github.com/gorilla/mux v1.8.1/go.mod 
h1:AKf9I4AEqPTmMytcMc0KkNouC66V3BtZ4qD5fmWSiMQ= github.com/grpc-ecosystem/grpc-gateway v1.16.0/go.mod h1:BDjrQk3hbvj6Nolgz8mAMFbcEtjT1g+wF4CSlocrBnw= -github.com/grpc-ecosystem/grpc-gateway/v2 v2.19.0 h1:Wqo399gCIufwto+VfwCSvsnfGpF/w5E9CNxSwbpD6No= -github.com/grpc-ecosystem/grpc-gateway/v2 v2.19.0/go.mod h1:qmOFXW2epJhM0qSnUUYpldc7gVz2KMQwJ/QYCDIa7XU= +github.com/grpc-ecosystem/grpc-gateway/v2 v2.20.0 h1:bkypFPDjIYGfCYD5mRBvpqxfYX1YCS1PXdKYWi8FsN0= +github.com/grpc-ecosystem/grpc-gateway/v2 v2.20.0/go.mod h1:P+Lt/0by1T8bfcF3z737NnSbmxQAppXMRziHUxPOC8k= github.com/hashicorp/go-cleanhttp v0.5.2 h1:035FKYIWjmULyFRBKPs8TBQoi0x6d9G4xc9neXJWAZQ= github.com/hashicorp/go-cleanhttp v0.5.2/go.mod h1:kO/YDlP8L1346E6Sodw+PrpBSV4/SoxCXGY6BqNFT48= github.com/hashicorp/go-getter v1.7.4 h1:3yQjWuxICvSpYwqSayAdKRFcvBl1y/vogCxczWSmix0= @@ -492,8 +487,9 @@ github.com/hashicorp/go-safetemp v1.0.0 h1:2HR189eFNrjHQyENnQMMpCiBAsRxzbTMIgBhE github.com/hashicorp/go-safetemp v1.0.0/go.mod h1:oaerMy3BhqiTbVye6QuFhFtIceqFoDHxNAB65b+Rj1I= github.com/hashicorp/go-uuid v1.0.3 h1:2gKiV6YVmrJ1i2CKKa9obLvRieoRGviZFL26PcT/Co8= github.com/hashicorp/go-uuid v1.0.3/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro= -github.com/hashicorp/go-version v1.6.0 h1:feTTfFNnjP967rlCxM/I9g701jU+RN74YKx2mOkIeek= github.com/hashicorp/go-version v1.6.0/go.mod h1:fltr4n8CU8Ke44wwGCBoEymUuxUHl09ZGVZPK5anwXA= +github.com/hashicorp/go-version v1.7.0 h1:5tqGy27NaOTB8yJKUZELlFAS/LTKJkrmONwQKeRZfjY= +github.com/hashicorp/go-version v1.7.0/go.mod h1:fltr4n8CU8Ke44wwGCBoEymUuxUHl09ZGVZPK5anwXA= github.com/hashicorp/golang-lru v0.5.0/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8= github.com/hashicorp/golang-lru v0.5.1/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8= github.com/hashicorp/hcl v1.0.0 h1:0Anlzjpi4vEasTeNFn2mLJgTSwt0+6sfsiTG8qcWGx4= @@ -594,8 +590,8 @@ github.com/opencontainers/image-spec v1.1.0/go.mod h1:W4s4sFTMaBeK1BQLXbG4AdM2sz github.com/owenrumney/squealer v1.2.2 h1:zsnZSwkWi8Y2lgwmg77b565vlHQovlvBrSBzmAs3oiE= github.com/owenrumney/squealer v1.2.2/go.mod h1:pDCW33bWJ2kDOuz7+2BSXDgY38qusVX0MtjPCSFtdSo= github.com/pelletier/go-toml v1.2.0/go.mod h1:5z9KED0ma1S8pY6P1sdut58dfprrGBbd/94hg7ilaic= -github.com/pelletier/go-toml/v2 v2.1.0 h1:FnwAJ4oYMvbT/34k9zzHuZNrhlz48GB3/s6at6/MHO4= -github.com/pelletier/go-toml/v2 v2.1.0/go.mod h1:tJU2Z3ZkXwnxa4DPO899bsyIoywizdUvyaeZurnPPDc= +github.com/pelletier/go-toml/v2 v2.2.2 h1:aYUidT7k73Pcl9nb2gScu7NSrKCSHIDE89b3+6Wq+LM= +github.com/pelletier/go-toml/v2 v2.2.2/go.mod h1:1t835xjRzz80PqgE6HHgN2JOsmgYu/h4qDAS4n929Rs= github.com/peterh/liner v1.2.2 h1:aJ4AOodmL+JxOZZEL2u9iJf8omNRpqHc/EbrK+3mAXw= github.com/peterh/liner v1.2.2/go.mod h1:xFwJyiKIXJZUKItq5dGHZSTBRAuG/CpeNpWLyiNRNwI= github.com/pjbgf/sha1cd v0.3.0 h1:4D5XXmUUBUl/xQ6IjCkEAbqXskkq/4O7LmGn0AqMDs4= @@ -628,7 +624,6 @@ github.com/rogpeppe/go-internal v1.3.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFR github.com/rogpeppe/go-internal v1.12.0 h1:exVL4IDcn6na9z1rAb56Vxr+CgyK3nn3O+epU5NdKM8= github.com/rogpeppe/go-internal v1.12.0/go.mod h1:E+RYuTGaKKdloAfM02xzb0FW3Paa99yedzYV+kq4uf4= github.com/russross/blackfriday v1.5.2/go.mod h1:JO/DiYxRf+HjHt06OyowR9PTA263kcR/rfWxYHBV53g= -github.com/russross/blackfriday/v2 v2.0.1/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM= github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM= github.com/sagikazarmark/locafero v0.4.0 h1:HApY1R9zGo4DBgr7dqsTH/JJxLTTsOt7u6keLGt6kNQ= github.com/sagikazarmark/locafero v0.4.0/go.mod 
h1:Pe1W6UlPYUk/+wc/6KFhbORCfqzgYEpgQ3O5fPuL3H4= @@ -644,7 +639,6 @@ github.com/shoenig/go-m1cpu v0.1.6 h1:nxdKQNcEB6vzgA2E2bvzKIYRuNj7XNJ4S/aRSwKzFt github.com/shoenig/go-m1cpu v0.1.6/go.mod h1:1JJMcUBvfNwpq05QDQVAnx3gUHr9IYF7GNg9SUEw2VQ= github.com/shoenig/test v0.6.4 h1:kVTaSd7WLz5WZ2IaoM0RSzRsUD+m8wRR+5qvntpn4LU= github.com/shoenig/test v0.6.4/go.mod h1:byHiCGXqrVaflBLAMq/srcZIHynQPQgeyvkvXnjqq0k= -github.com/shurcooL/sanitized_anchor_name v1.0.0/go.mod h1:1NzhyTcUVG4SuEtjjoZeVRXNmyL/1OwPU0+IJeTBvfc= github.com/sirupsen/logrus v1.7.0/go.mod h1:yWOB1SBYBC5VeMP7gHvWumXLIWorT60ONWic61uBYv0= github.com/sirupsen/logrus v1.9.3 h1:dueUQJ1C2q9oE3F7wvmSGAaVtTmUizReu6fjN8uqzbQ= github.com/sirupsen/logrus v1.9.3/go.mod h1:naHLuLoDiP4jHNo9R0sCBMtWGeIprob74mVsIT4qYEQ= @@ -669,11 +663,12 @@ github.com/spf13/pflag v1.0.3/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnIn github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA= github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg= github.com/spf13/viper v1.3.2/go.mod h1:ZiWeW+zYFKm7srdB9IoDzzZXaJaI5eL9QjNiN/DMA2s= -github.com/spf13/viper v1.18.2 h1:LUXCnvUvSM6FXAsj6nnfc8Q2tp1dIgUfY9Kc8GsSOiQ= -github.com/spf13/viper v1.18.2/go.mod h1:EKmWIqdnk5lOcmR72yw6hS+8OPYcwD0jteitLMVB+yk= +github.com/spf13/viper v1.19.0 h1:RWq5SEjt8o25SROyN3z2OrDB9l7RPd3lwTWU8EcEdcI= +github.com/spf13/viper v1.19.0/go.mod h1:GQUN9bilAbhU/jgc1bKs99f/suXKeUMct8Adx5+Ntkg= github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw= github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo= +github.com/stretchr/objx v0.5.2/go.mod h1:FRsXN1f5AsAjCGJKqEizvkpNtU+EGNCLh3NxZ/8L+MA= github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs= github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4= github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5cxcmMvtA= @@ -703,7 +698,6 @@ github.com/ugorji/go/codec v0.0.0-20181204163529-d75b2dcb6bc8/go.mod h1:VFNgLljT github.com/ulikunitz/xz v0.5.10/go.mod h1:nbz6k7qbPmH4IRqmfOplQw/tblSgqTqBwxkY0oWt/14= github.com/ulikunitz/xz v0.5.11 h1:kpFauv27b6ynzBNT/Xy+1k+fK4WswhN/6PN5WhFAGw8= github.com/ulikunitz/xz v0.5.11/go.mod h1:nbz6k7qbPmH4IRqmfOplQw/tblSgqTqBwxkY0oWt/14= -github.com/urfave/cli/v2 v2.3.0/go.mod h1:LJmUH05zAU44vOAcrfzZQKsZbVcdbOG8rtL3/XcUArI= github.com/xanzy/ssh-agent v0.3.3 h1:+/15pJfg/RsTxqYcX6fHqOXZwwMP+2VyYWJeWM2qQFM= github.com/xanzy/ssh-agent v0.3.3/go.mod h1:6dzNDKs0J9rVPHPhaGCukekBHKqfl+L3KghI1Bc68Uw= github.com/xeipuuv/gojsonpointer v0.0.0-20190905194746-02993c407bfb h1:zGWFAtiMcyryUHoUjUJX0/lt1H2+i2Ka2n+D3DImSNo= @@ -738,25 +732,25 @@ go.opencensus.io v0.24.0 h1:y73uSU6J157QMP2kn2r30vwW1A2W2WFwSCGnAVxeaD0= go.opencensus.io v0.24.0/go.mod h1:vNK8G9p7aAivkbmorf4v+7Hgx+Zs0yY+0fOtgBfjQKo= go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.49.0 h1:4Pp6oUg3+e/6M4C0A/3kJ2VYa++dsWVTtGgLVj5xtHg= go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.49.0/go.mod h1:Mjt1i1INqiaoZOMGR1RIUJN+i3ChKoFRqzrRQhlkbs0= -go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.49.0 h1:jq9TW8u3so/bN+JPT166wjOI6/vQPF6Xe7nMNIltagk= -go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.49.0/go.mod h1:p8pYQP+m5XfbZm9fxtSKAbM6oIllS7s2AfxrChvc7iw= -go.opentelemetry.io/otel v1.24.0 
h1:0LAOdjNmQeSTzGBzduGe/rU4tZhMwL5rWgtp9Ku5Jfo= -go.opentelemetry.io/otel v1.24.0/go.mod h1:W7b9Ozg4nkF5tWI5zsXkaKKDjdVjpD4oAt9Qi/MArHo= -go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.21.0 h1:cl5P5/GIfFh4t6xyruOgJP5QiA1pw4fYYdv6nc6CBWw= -go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.21.0/go.mod h1:zgBdWWAu7oEEMC06MMKc5NLbA/1YDXV1sMpSqEeLQLg= +go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.52.0 h1:9l89oX4ba9kHbBol3Xin3leYJ+252h0zszDtBwyKe2A= +go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.52.0/go.mod h1:XLZfZboOJWHNKUv7eH0inh0E9VV6eWDFB/9yJyTLPp0= +go.opentelemetry.io/otel v1.27.0 h1:9BZoF3yMK/O1AafMiQTVu0YDj5Ea4hPhxCs7sGva+cg= +go.opentelemetry.io/otel v1.27.0/go.mod h1:DMpAK8fzYRzs+bi3rS5REupisuqTheUlSZJ1WnZaPAQ= +go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.27.0 h1:R9DE4kQ4k+YtfLI2ULwX82VtNQ2J8yZmA7ZIF/D+7Mc= +go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.27.0/go.mod h1:OQFyQVrDlbe+R7xrEyDr/2Wr67Ol0hRUgsfA+V5A95s= go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.21.0 h1:tIqheXEFWAZ7O8A7m+J0aPTmpJN3YQ7qetUAdkkkKpk= go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.21.0/go.mod h1:nUeKExfxAQVbiVFn32YXpXZZHZ61Cc3s3Rn1pDBGAb0= go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.21.0 h1:digkEZCJWobwBqMwC0cwCq8/wkkRy/OowZg5OArWZrM= go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.21.0/go.mod h1:/OpE/y70qVkndM0TrxT4KBoN3RsFZP0QaofcfYrj76I= -go.opentelemetry.io/otel/metric v1.24.0 h1:6EhoGWWK28x1fbpA4tYTOWBkPefTDQnb8WSGXlc88kI= -go.opentelemetry.io/otel/metric v1.24.0/go.mod h1:VYhLe1rFfxuTXLgj4CBiyz+9WYBA8pNGJgDcSFRKBco= -go.opentelemetry.io/otel/sdk v1.24.0 h1:YMPPDNymmQN3ZgczicBY3B6sf9n62Dlj9pWD3ucgoDw= -go.opentelemetry.io/otel/sdk v1.24.0/go.mod h1:KVrIYw6tEubO9E96HQpcmpTKDVn9gdv35HoYiQWGDFg= -go.opentelemetry.io/otel/trace v1.24.0 h1:CsKnnL4dUAr/0llH9FKuc698G04IrpWV0MQA/Y1YELI= -go.opentelemetry.io/otel/trace v1.24.0/go.mod h1:HPc3Xr/cOApsBI154IU0OI0HJexz+aw5uPdbs3UCjNU= +go.opentelemetry.io/otel/metric v1.27.0 h1:hvj3vdEKyeCi4YaYfNjv2NUje8FqKqUY8IlF0FxV/ik= +go.opentelemetry.io/otel/metric v1.27.0/go.mod h1:mVFgmRlhljgBiuk/MP/oKylr4hs85GZAylncepAX/ak= +go.opentelemetry.io/otel/sdk v1.27.0 h1:mlk+/Y1gLPLn84U4tI8d3GNJmGT/eXe3ZuOXN9kTWmI= +go.opentelemetry.io/otel/sdk v1.27.0/go.mod h1:Ha9vbLwJE6W86YstIywK2xFfPjbWlCuwPtMkKdz/Y4A= +go.opentelemetry.io/otel/trace v1.27.0 h1:IqYb813p7cmbHk0a5y6pD5JPakbVfftRXABGt5/Rscw= +go.opentelemetry.io/otel/trace v1.27.0/go.mod h1:6RiD1hkAprV4/q+yd2ln1HG9GoPx39SuvvstaLBl+l4= go.opentelemetry.io/proto/otlp v0.7.0/go.mod h1:PqfVotwruBrMGOCsRd/89rSnXhoiJIqeYNgFYFoEGnI= -go.opentelemetry.io/proto/otlp v1.1.0 h1:2Di21piLrCqJ3U3eXGCTPHE9R8Nh+0uglSnOyxikMeI= -go.opentelemetry.io/proto/otlp v1.1.0/go.mod h1:GpBHCBWiqvVLDqmHZsoMM3C5ySeKTC7ej/RNTae6MdY= +go.opentelemetry.io/proto/otlp v1.2.0 h1:pVeZGk7nXDC9O2hncA6nHldxEjm6LByfA2aN8IOkz94= +go.opentelemetry.io/proto/otlp v1.2.0/go.mod h1:gGpR8txAl5M03pDhMC79G6SdqNV26naRm/KDsgaHD8A= go.uber.org/automaxprocs v1.5.3 h1:kWazyxZUrS3Gs4qUpbwo5kEIMGe/DAvi5Z4tl2NW4j8= go.uber.org/automaxprocs v1.5.3/go.mod h1:eRbA25aqJrxAbsLO0xy5jVwPt7FQnRgjW+efnwa1WM0= go.uber.org/goleak v1.3.0 h1:2K3zAYmnTNqV73imy9J1T3WC+gmCePx2hEGkimedGto= @@ -771,8 +765,8 @@ golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8U golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto= golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod 
h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc= golang.org/x/crypto v0.0.0-20220622213112-05595931fe9d/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4= -golang.org/x/crypto v0.23.0 h1:dIJU/v2J8Mdglj/8rJ6UUOM3Zc9zLZxVZwwxMooUSAI= -golang.org/x/crypto v0.23.0/go.mod h1:CKFgDieR+mRhux2Lsu27y0fO304Db0wZe70UKqHu0v8= +golang.org/x/crypto v0.24.0 h1:mnl8DM0o513X8fdIkmyFE/5hTYxbwYOjDS/+rK6qpRI= +golang.org/x/crypto v0.24.0/go.mod h1:Z1PMYSOR5nyMcyAVAIQSKCDwalqy85Aqn1x3Ws4L5DM= golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA= golang.org/x/exp v0.0.0-20190306152737-a1d7652674e8/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA= golang.org/x/exp v0.0.0-20190510132918-efd6b22b2522/go.mod h1:ZjyILWgesfNpC6sMxTJOJm9Kp84zZh5NQWvqDGG3Qr8= @@ -861,8 +855,8 @@ golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug golang.org/x/net v0.0.0-20220909164309-bea034e7d591/go.mod h1:YDH+HFinaLZZlnHAfSS6ZXJJ9M9t4Dl22yv3iI2vPwk= golang.org/x/net v0.0.0-20221014081412-f15817d10f9b/go.mod h1:YDH+HFinaLZZlnHAfSS6ZXJJ9M9t4Dl22yv3iI2vPwk= golang.org/x/net v0.1.0/go.mod h1:Cx3nUiGt4eDBEyega/BKRp+/AlGL8hYe7U9odMt2Cco= -golang.org/x/net v0.25.0 h1:d/OCCoBEUq33pjydKrGQhw7IlUPI2Oylr+8qLx49kac= -golang.org/x/net v0.25.0/go.mod h1:JkAGAh7GEvH74S6FOH42FLoXpXbE/aqXSrIQjXgsiwM= +golang.org/x/net v0.26.0 h1:soB7SVo0PWrY4vPW/+ay0jKDNScG2X9wFeYlXIvJsOQ= +golang.org/x/net v0.26.0/go.mod h1:5YKkiSynbBIh3p6iOc/vibscux0x38BZDkn8sCUPxHE= golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U= golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw= golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw= @@ -888,8 +882,8 @@ golang.org/x/oauth2 v0.0.0-20220822191816-0ebed06d0094/go.mod h1:h4gKUeWbJ4rQPri golang.org/x/oauth2 v0.0.0-20220909003341-f21342109be1/go.mod h1:h4gKUeWbJ4rQPri7E0u6Gs4e9Ri2zaLxzw5DI5XGrYg= golang.org/x/oauth2 v0.0.0-20221014153046-6fdb5e3db783/go.mod h1:h4gKUeWbJ4rQPri7E0u6Gs4e9Ri2zaLxzw5DI5XGrYg= golang.org/x/oauth2 v0.1.0/go.mod h1:G9FE4dLTsbXUu90h/Pf85g4w1D+SSAgR+q46nJZ8M4A= -golang.org/x/oauth2 v0.18.0 h1:09qnuIAgzdx1XplqJvW6CQqMCtGZykZWcXzPMPUusvI= -golang.org/x/oauth2 v0.18.0/go.mod h1:Wf7knwG0MPoWIMMBgFlEaSUDaKskp0dCfrlJRJXbBi8= +golang.org/x/oauth2 v0.20.0 h1:4mQdhULixXKP1rwYBW0vAijoXnkTG0BLCDRzfe1idMo= +golang.org/x/oauth2 v0.20.0/go.mod h1:XYTD2NtWslqkgxebSiOHnXEap4TF09sJSc7H1sXbhtI= golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= @@ -982,13 +976,13 @@ golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.8.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.11.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.17.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA= -golang.org/x/sys v0.20.0 h1:Od9JTbYCk261bKm4M/mw7AklTlFYIa0bIp9BgSm1S8Y= -golang.org/x/sys v0.20.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA= +golang.org/x/sys v0.21.0 h1:rF+pYz3DAGSQAxAu1CbC7catZg4ebC4UIeIhKxBZvws= +golang.org/x/sys v0.21.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA= golang.org/x/term 
v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo= golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8= golang.org/x/term v0.1.0/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8= -golang.org/x/term v0.20.0 h1:VnkxpohqXaOBYJtBmEppKUG6mXpi+4O6purfc2+sMhw= -golang.org/x/term v0.20.0/go.mod h1:8UkIAJTvZgivsXaD6/pH6U9ecQzZ45awqEOzuCvwpFY= +golang.org/x/term v0.21.0 h1:WVXCp+/EBEHOj53Rvu+7KiT/iElMrO8ACK16SMZ3jaA= +golang.org/x/term v0.21.0/go.mod h1:ooXLefLobQVslOqselCNF4SxFAaoS6KujMbsGzSDmX0= golang.org/x/text v0.0.0-20170915032832-14c0d48ead0c/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= @@ -998,10 +992,9 @@ golang.org/x/text v0.3.4/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ= golang.org/x/text v0.3.5/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ= golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ= golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ= -golang.org/x/text v0.3.8/go.mod h1:E6s5w1FMmriuDzIBO73fBruAKo1PCIq6d2Q6DHfQ8WQ= golang.org/x/text v0.4.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8= -golang.org/x/text v0.15.0 h1:h1V/4gjBv8v9cjcR6+AR5+/cIYK5N/WAgiv4xlsEtAk= -golang.org/x/text v0.15.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU= +golang.org/x/text v0.16.0 h1:a94ExnEXNtEwYLGJSIUxnWoxoRz/ZcCsV63ROupILh4= +golang.org/x/text v0.16.0/go.mod h1:GhwF1Be+LQoKShO3cGOHzqOgRrGaYc9AvblQOmPVHnI= golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ= golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ= golang.org/x/time v0.0.0-20191024005414-555d28b269f0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ= @@ -1061,8 +1054,8 @@ golang.org/x/tools v0.1.3/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk= golang.org/x/tools v0.1.4/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk= golang.org/x/tools v0.1.5/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk= golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc= -golang.org/x/tools v0.19.0 h1:tfGCXNR1OsFG+sVdLAitlpjAvD/I6dHDKnYrpEZUHkw= -golang.org/x/tools v0.19.0/go.mod h1:qoJWxmGSIBmAeriMx19ogtrEPrGtDbPK634QFIcLAhc= +golang.org/x/tools v0.21.1-0.20240508182429-e35e4ccd0d2d h1:vU5i/LfpvrRCpgM/VPfJLg5KjxD3E+hfT1SH+d9zLwg= +golang.org/x/tools v0.21.1-0.20240508182429-e35e4ccd0d2d/go.mod h1:aiJjzUbINMkxbQROHiO6hDPo2LHcIPhhQsa9DLh0yGk= golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= @@ -1130,8 +1123,6 @@ google.golang.org/appengine v1.6.1/go.mod h1:i06prIuMbXzDqacNJfV5OdTW448YApPu5ww google.golang.org/appengine v1.6.5/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc= google.golang.org/appengine v1.6.6/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc= google.golang.org/appengine v1.6.7/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc= -google.golang.org/appengine v1.6.8 h1:IhEN5q69dyKagZPYMSdIjS2HqprW324FRQZJcGqPAsM= 
-google.golang.org/appengine v1.6.8/go.mod h1:1jJ3jBArFh5pcgW8gCtRJnepW8FzD1V44FJffLiz/Ds=
 google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
 google.golang.org/genproto v0.0.0-20190307195333-5fe7a883aa19/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
 google.golang.org/genproto v0.0.0-20190418145605-e7d98fc518a7/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
@@ -1235,10 +1226,10 @@ google.golang.org/genproto v0.0.0-20221014213838-99cd37c6964a/go.mod h1:1vXfmgAz
 google.golang.org/genproto v0.0.0-20221025140454-527a21cfbd71/go.mod h1:9qHF0xnpdSfF6knlcsnpzUu5y+rpwgbvsyGAZPBMg4s=
 google.golang.org/genproto v0.0.0-20240311173647-c811ad7063a7 h1:ImUcDPHjTrAqNhlOkSocDLfG9rrNHH7w7uoKWPaWZ8s=
 google.golang.org/genproto v0.0.0-20240311173647-c811ad7063a7/go.mod h1:/3XmxOjePkvmKrHuBy4zNFw7IzxJXtAgdpXi8Ll990U=
-google.golang.org/genproto/googleapis/api v0.0.0-20240318140521-94a12d6c2237 h1:RFiFrvy37/mpSpdySBDrUdipW/dHwsRwh3J3+A9VgT4=
-google.golang.org/genproto/googleapis/api v0.0.0-20240318140521-94a12d6c2237/go.mod h1:Z5Iiy3jtmioajWHDGFk7CeugTyHtPvMHA4UTmUkyalE=
-google.golang.org/genproto/googleapis/rpc v0.0.0-20240318140521-94a12d6c2237 h1:NnYq6UN9ReLM9/Y01KWNOWyI5xQ9kbIms5GGJVwS/Yc=
-google.golang.org/genproto/googleapis/rpc v0.0.0-20240318140521-94a12d6c2237/go.mod h1:WtryC6hu0hhx87FDGxWCDptyssuo68sk10vYjF+T9fY=
+google.golang.org/genproto/googleapis/api v0.0.0-20240520151616-dc85e6b867a5 h1:P8OJ/WCl/Xo4E4zoe4/bifHpSmmKwARqyqE4nW6J2GQ=
+google.golang.org/genproto/googleapis/api v0.0.0-20240520151616-dc85e6b867a5/go.mod h1:RGnPtTG7r4i8sPlNyDeikXF99hMM+hN6QMm4ooG9g2g=
+google.golang.org/genproto/googleapis/rpc v0.0.0-20240515191416-fc5f0ca64291 h1:AgADTJarZTBqgjiUzRgfaBchgYB3/WFTC80GPwsMcRI=
+google.golang.org/genproto/googleapis/rpc v0.0.0-20240515191416-fc5f0ca64291/go.mod h1:EfXuqaE1J41VCDicxHzUDm+8rk+7ZdXzHV0IhO/I6s0=
 google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
 google.golang.org/grpc v1.20.1/go.mod h1:10oTOabMzJvdu6/UiuZezV6QK5dSlG84ov/aaiqXj38=
 google.golang.org/grpc v1.21.1/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ijfRaM=
diff --git a/lib/s3.rego b/lib/s3.rego
new file mode 100644
index 00000000..20497051
--- /dev/null
+++ b/lib/s3.rego
@@ -0,0 +1,11 @@
+package lib.s3
+
+import rego.v1
+
+public_acls = {"public-read", "public-read-write", "website", "authenticated-read"}
+
+bucket_has_public_access(bucket) if {
+	bucket.acl.value in public_acls
+	not bucket.publicaccessblock.ignorepublicacls.value
+	not bucket.publicaccessblock.blockpublicacls.value
+}
diff --git a/lib/test.rego b/lib/test.rego
new file mode 100644
index 00000000..8e9ceb59
--- /dev/null
+++ b/lib/test.rego
@@ -0,0 +1,37 @@
+package lib.test
+
+import rego.v1
+
+assert_empty(v) if {
+	not _assert_not_empty(v)
+}
+
+_assert_not_empty(v) if {
+	count(v) > 0
+	trace_and_print(sprintf("assert_not_empty:\n %v", [v]))
+}
+
+assert_equal_message(expected, results) if {
+	assert_count(results, 1)
+	not _assert_not_equal_message(results, expected)
+}
+
+_assert_not_equal_message(results, expected) if {
+	msg := [res.msg | some res in results][0]
+	msg != expected
+	trace_and_print(sprintf("assert_equal_message:\n Got %q\n Expected %q", [msg, expected]))
+}
+
+assert_count(results, expected) if {
+	not _assert_not_count(results, expected)
+}
+
+_assert_not_count(results, expected) if {
+	count(results) != expected
+	trace_and_print(sprintf("assert_count:\n Got %v\n Expected %v", [count(results), expected]))
+} + +trace_and_print(v) if { + trace(v) + print(v) +}
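
Note: lib/s3.rego lands in this change without a caller. A minimal sketch of how a check could consume the new helper is shown below, purely for illustration; the package name, check message, and the assumption that buckets are reachable at input.aws.s3.buckets are not part of this diff.

# Illustrative sketch only (not part of the change above): a hypothetical check
# that uses the new lib.s3 helper. Package name and message are assumptions.
package builtin.aws.s3.example_public_bucket_acl

import rego.v1

import data.lib.s3

deny contains res if {
	# Assumes the same cloud schema shape as the dynamodb check above, where
	# each bucket exposes acl and publicaccessblock values.
	some bucket in input.aws.s3.buckets
	s3.bucket_has_public_access(bucket)
	res := result.new("Bucket ACL grants public access and is not overridden by a public access block.", bucket.acl)
}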