fix(aws): remove duplicate bucket logging rule (#1423)
* fix(aws): remove duplicate bucket logging rule

* fix test

* Revert "fix test"

This reverts commit 7335001.

* Revert "fix(aws): remove duplicate bucket logging rule"

This reverts commit 507529e.

* remove GO rule

* fix typo

* fix rego and tests

* fix examples

* fix bucket logging rule

* generate avd docs

* ignore U1000 for vars referenced from Rego
nikpivkin authored Aug 30, 2023
1 parent 6fa220e commit 656b014
Showing 30 changed files with 90 additions and 199 deletions.
13 changes: 11 additions & 2 deletions .golangci.yml
@@ -1,4 +1,3 @@

linters:
disable-all: true
enable:
@@ -21,7 +20,7 @@ linters:
- gocritic
- gosec

linters-settings:
linters-settings:
cyclop:
max-complexity: 18
gocritic:
@@ -36,6 +35,16 @@ issues:
- path: rules/
linters:
- gosec

# Allow unused variables that are necessary for EngineMetadata,
# since they are referenced from .rego files.
- path: 'rules/*/(.+)\.tf|cf\.go'
text: "`.+(Good|Bad)Examples|Links|RemediationMarkdown` is unused"
linters:
- unused
- deadcode
- varcheck

- path: pkg/scanners/terraform/parser/funcs/
linters:
- cyclop
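For context: the new exclusion covers the package-level example and documentation variables that each rule's `.tf.go`/`.cf.go` file exports. Once a rule's logic lives in Rego, nothing on the Go side references those variables directly, so `unused` (U1000) would flag them. A minimal sketch of the pattern the regex above matches, using the Neptune rule that appears later in this diff (the YAML body is illustrative; the EngineMetadata wiring itself is not shown):

```go
// enable_log_export.cf.go (sketch) - package-level variables like this are
// consumed via EngineMetadata and the docs pipeline rather than by Go code,
// so without the exclusion above the linters report them as unused.
package neptune

var cloudFormationEnableLogExportGoodExamples = []string{
	`---
AWSTemplateFormatVersion: 2010-09-09
Description: Good example
Resources:
  Cluster:
    Type: AWS::Neptune::DBCluster
    Properties:
      EnableCloudwatchLogsExports:
        - audit
`,
}

// The matching BadExamples, Links and RemediationMarkdown variables follow
// the same naming scheme, which is exactly what the exclusion's regex targets.
var cloudFormationEnableLogExportRemediationMarkdown = ``
```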
6 changes: 2 additions & 4 deletions avd_docs/aws/iam/AVD-AWS-0342/docs.md
@@ -1,10 +1,8 @@

In iam:PassRole the service carrying out the actions is "provided" a role by the calling principal and implicitly takes on that role to carry out the actions (instead of executing sts:AssumeRole).
The privileges attached to the role are distinct from those of the primary ordering the action and may even be larger and can cause security issues.

Ensures any IAM pass role attached to roles are flagged and warned.

### Impact
Compromise on security of aws resources.
<!-- Add Impact here -->

<!-- DO NOT CHANGE -->
{{ remediationActions }}
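The sketch below makes the iam:PassRole concern concrete: an unscoped grant versus one pinned to a single role and service. It is illustrative only and not taken from this commit; the account ID, role name and service principal are placeholders.

```go
package main

import "fmt"

// Overly broad: the caller may hand any role in the account to any service,
// so the service can end up acting with privileges the caller never had.
const broadPassRole = `{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": "iam:PassRole",
    "Resource": "*"
  }]
}`

// Scoped: only one specific role can be passed, and only to the ECS task service.
const scopedPassRole = `{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": "iam:PassRole",
    "Resource": "arn:aws:iam::123456789012:role/app-task-role",
    "Condition": {
      "StringEquals": { "iam:PassedToService": "ecs-tasks.amazonaws.com" }
    }
  }]
}`

func main() {
	fmt.Println("broad grant:\n" + broadPassRole)
	fmt.Println("scoped grant:\n" + scopedPassRole)
}
```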
2 changes: 1 addition & 1 deletion avd_docs/aws/neptune/AVD-AWS-0075/CloudFormation.md
@@ -3,7 +3,7 @@ Enable export logs

```yaml
---
AWSTemplateFormatVersion: 2010-09-09
Description: Bad example
Description: Good example
Resources:
Cluster:
Type: AWS::Neptune::DBCluster
2 changes: 1 addition & 1 deletion avd_docs/aws/redshift/AVD-AWS-0084/CloudFormation.md
@@ -3,7 +3,7 @@ Enable encryption using CMK

```yaml
---
AWSTemplateFormatVersion: 2010-09-09
Description: Bad example of redshift cluster
Description: Good example of redshift cluster
Resources:
Queue:
Type: AWS::Redshift::Cluster
2 changes: 1 addition & 1 deletion avd_docs/aws/redshift/AVD-AWS-0127/CloudFormation.md
@@ -3,7 +3,7 @@ Deploy Redshift cluster into a non default VPC

```yaml
---
AWSTemplateFormatVersion: 2010-09-09
Description: Bad example of redshift cluster
Description: Good example of redshift cluster
Resources:
Queue:
Type: AWS::Redshift::Cluster
19 changes: 0 additions & 19 deletions avd_docs/aws/s3/AVD-AWS-0089/CloudFormation.md
@@ -11,24 +11,5 @@ Resources:
Type: AWS::S3::Bucket
```
```yaml
---
Resources:
MyS3Bucket:
Type: AWS::S3::Bucket
DeletionPolicy: Retain
UpdateReplacePolicy: Retain
Properties:
BucketName: !Sub my-s3-bucket-${BucketSuffix}
LoggingConfiguration:
DestinationBucketName: !FindInMap [EnvironmentMapping, s3, logging]
LogFilePrefix: !Sub s3-logs/AWSLogs/${AWS::AccountId}/my-s3-bucket-${BucketSuffix}
AccessControl: Private
PublicAccessBlockConfiguration:
BlockPublicAcls: true
BlockPublicPolicy: true
IgnorePublicAcls: true
RestrictPublicBuckets: true
```


14 changes: 0 additions & 14 deletions avd_docs/aws/s3/AVD-AWS-0089/Terraform.md
@@ -8,20 +8,6 @@ resource "aws_s3_bucket" "good_example" {
}
}
```
```hcl
resource "aws_s3_bucket" "example" {
bucket = "yournamehere"
# ... other configuration ...
}
resource "aws_s3_bucket_logging" "example" {
bucket = aws_s3_bucket.example.id
target_bucket = aws_s3_bucket.log_bucket.id
target_prefix = "log/"
}
```

#### Remediation Links
6 changes: 3 additions & 3 deletions avd_docs/aws/s3/AVD-AWS-0089/docs.md
@@ -1,13 +1,13 @@

Buckets should have logging enabled so that access can be audited.
Ensures S3 bucket logging is enabled for S3 buckets

### Impact
There is no way to determine the access to this bucket
<!-- Add Impact here -->

<!-- DO NOT CHANGE -->
{{ remediationActions }}

### Links
- https://docs.aws.amazon.com/AmazonS3/latest/dev/ServerLogs.html
- https://docs.aws.amazon.com/AmazonS3/latest/userguide/ServerLogs.html


2 changes: 1 addition & 1 deletion avd_docs/azure/monitor/AVD-AZU-0032/Terraform.md
@@ -2,7 +2,7 @@
Enable capture for all locations

```hcl
resource "azurerm_monitor_log_profile" "bad_example" {
resource "azurerm_monitor_log_profile" "good_example" {
name = "bad_example"
categories = []
1 change: 1 addition & 0 deletions avd_docs/azure/storage/AVD-AZU-0011/Terraform.md
@@ -6,6 +6,7 @@ Use a more recent TLS/SSL policy for the load balancer
name = "storageaccountname"
resource_group_name = azurerm_resource_group.example.name
location = azurerm_resource_group.example.location
min_tls_version = "TLS1_2"
}
```
2 changes: 1 addition & 1 deletion avd_docs/digitalocean/compute/AVD-DIG-0002/Terraform.md
@@ -2,7 +2,7 @@
Switch to HTTPS to benefit from TLS security features

```hcl
resource "digitalocean_loadbalancer" "bad_example" {
resource "digitalocean_loadbalancer" "good_example" {
name = "bad_example-1"
region = "nyc3"
27 changes: 19 additions & 8 deletions avd_docs/google/iam/AVD-GCP-0068/Terraform.md
@@ -1,11 +1,18 @@

Set conditions on this provider, for example by restricting it to only be allowed from repositories in your GitHub organization.
Set conditions on this provider, for example by restricting it to only be allowed from repositories in your GitHub organization

```hcl
resource "google_iam_workload_identity_pool_provider" "github" {
project = "example-project"
workload_identity_pool_id = "example-pool"
workload_identity_pool_provider_id = "example-provider"
resource "google_iam_workload_identity_pool" "github" {
provider = google
project = data.google_project.project.project_id
workload_identity_pool_id = "github"
}
resource "google_iam_workload_identity_pool_provider" "github" {
provider = google
project = data.google_project.project.project_id
workload_identity_pool_id = google_iam_workload_identity_pool.github-actions[0].workload_identity_pool_id
workload_identity_pool_provider_id = "github"
attribute_condition = "assertion.repository_owner=='your-github-organization'"
Expand All @@ -15,10 +22,14 @@ Set conditions on this provider, for example by restricting it to only be allowe
"attribute.aud" = "assertion.aud"
"attribute.repository" = "assertion.repository"
}
}
oidc {
issuer_uri = "https://token.actions.githubusercontent.com"
}
}
```

#### Remediation Links

- https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/iam_workload_identity_pool_provider#attribute_condition
- https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/iam_workload_identity_pool_provider#attribute_condition

4 changes: 3 additions & 1 deletion avd_docs/google/iam/AVD-GCP-0068/docs.md
@@ -2,10 +2,12 @@
In GitHub Actions, one can authenticate to Google Cloud by setting values for workload_identity_provider and service_account and requesting a short-lived OIDC token which is then used to execute commands as that Service Account. If you don't specify a condition in the workload identity provider pool configuration, then any GitHub Action can assume this role and act as that Service Account.

### Impact
Privilege escalation, impersonation of service accounts
Allows an external attacker to authenticate as the attached service account and act with its permissions

<!-- DO NOT CHANGE -->
{{ remediationActions }}

### Links
- https://www.revblock.dev/exploiting-misconfigured-google-cloud-service-accounts-from-github-actions/


2 changes: 1 addition & 1 deletion pkg/rego/metadata.go
@@ -235,7 +235,7 @@ func (m *MetadataRetriever) updateMetadata(meta map[string]interface{}, metadata
metadata.References = append(metadata.References, fmt.Sprintf("%s", raw))
}
} else if relatedResourceString, ok := relatedResource.(string); ok {
metadata.References = append(metadata.References, fmt.Sprintf("%s", relatedResourceString))
metadata.References = append(metadata.References, relatedResourceString)
}
}
}
4 changes: 2 additions & 2 deletions pkg/scanners/cloudformation/test/cf_scanning_test.go
@@ -15,7 +15,7 @@ import (
)

func Test_basic_cloudformation_scanning(t *testing.T) {
cfScanner := cloudformation.New()
cfScanner := cloudformation.New(options.ScannerWithEmbeddedPolicies(true), options.ScannerWithEmbeddedLibraries(true))

results, err := cfScanner.ScanFS(context.TODO(), os.DirFS("./examples/bucket"), ".")
require.NoError(t, err)
@@ -24,7 +24,7 @@ func Test_basic_cloudformation_scanning(t *testing.T) {
}

func Test_cloudformation_scanning_has_expected_errors(t *testing.T) {
cfScanner := cloudformation.New()
cfScanner := cloudformation.New(options.ScannerWithEmbeddedPolicies(true), options.ScannerWithEmbeddedLibraries(true))

results, err := cfScanner.ScanFS(context.TODO(), os.DirFS("./examples/bucket"), ".")
require.NoError(t, err)
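The two test updates above opt in to the embedded policies and libraries explicitly. Below is a minimal usage sketch assembled from the calls visible in this diff; the import paths mirror those in the terraformplan test further down, and the scanned directory is a placeholder.

```go
package main

import (
	"context"
	"fmt"
	"log"
	"os"

	"github.com/aquasecurity/defsec/pkg/scanners/cloudformation"
	"github.com/aquasecurity/defsec/pkg/scanners/options"
)

func main() {
	// Enable the embedded Rego policies and libraries the same way the
	// updated tests in this commit do.
	scanner := cloudformation.New(
		options.ScannerWithEmbeddedPolicies(true),
		options.ScannerWithEmbeddedLibraries(true),
	)

	// "./templates" is a placeholder directory of CloudFormation files.
	results, err := scanner.ScanFS(context.TODO(), os.DirFS("./templates"), ".")
	if err != nil {
		log.Fatal(err)
	}

	fmt.Printf("results: %d, failed: %d\n", len(results), len(results.GetFailed()))
}
```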
2 changes: 1 addition & 1 deletion pkg/scanners/terraform/scanner_test.go
@@ -538,7 +538,7 @@ deny[cause] {
}`,
})

scanner := New()
scanner := New(options.ScannerWithEmbeddedPolicies(true), options.ScannerWithEmbeddedLibraries(true))
results, err := scanner.ScanFS(context.TODO(), fs, "test")
assert.NoError(t, err)
assert.Greater(t, len(results.GetFailed()), 0)
6 changes: 4 additions & 2 deletions pkg/scanners/terraformplan/test/scanner_test.go
@@ -5,14 +5,16 @@ import (
"testing"
"testing/fstest"

"github.com/aquasecurity/defsec/pkg/scanners/options"

"github.com/aquasecurity/defsec/pkg/scan"
"github.com/aquasecurity/defsec/pkg/scanners/terraformplan"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)

func Test_Scanning_Plan(t *testing.T) {
scanner := terraformplan.New()
scanner := terraformplan.New(options.ScannerWithEmbeddedPolicies(true), options.ScannerWithEmbeddedLibraries(true))
b, _ := os.ReadFile("testdata/plan.json")
testFS := fstest.MapFS{
"testdata/plan.json": {Data: b},
@@ -28,7 +30,7 @@ func Test_Scanning_Plan(t *testing.T) {
failedResults = append(failedResults, r)
}
}
assert.Len(t, results, 13)
assert.Len(t, results, 15)
assert.Len(t, failedResults, 9)

}
1 change: 1 addition & 0 deletions rules/cloud/policies/aws/iam/filter_iam_pass_role.rego
@@ -7,6 +7,7 @@
# related_resources:
# - https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_passrole.html
# custom:
# id: AVD-AWS-0342
# avd_id: AVD-AWS-0342
# provider: aws
# service: iam
2 changes: 1 addition & 1 deletion rules/cloud/policies/aws/neptune/enable_log_export.cf.go
@@ -3,7 +3,7 @@ package neptune
var cloudFormationEnableLogExportGoodExamples = []string{
`---
AWSTemplateFormatVersion: 2010-09-09
Description: Bad example
Description: Good example
Resources:
Cluster:
Type: AWS::Neptune::DBCluster
@@ -3,7 +3,7 @@ package redshift
var cloudFormationEncryptionCustomerKeyGoodExamples = []string{
`---
AWSTemplateFormatVersion: 2010-09-09
Description: Bad example of redshift cluster
Description: Good example of redshift cluster
Resources:
Queue:
Type: AWS::Redshift::Cluster
2 changes: 1 addition & 1 deletion rules/cloud/policies/aws/redshift/use_vpc.cf.go
@@ -3,7 +3,7 @@ package redshift
var cloudFormationUseVpcGoodExamples = []string{
`---
AWSTemplateFormatVersion: 2010-09-09
Description: Bad example of redshift cluster
Description: Good example of redshift cluster
Resources:
Queue:
Type: AWS::Redshift::Cluster
1 change: 1 addition & 0 deletions rules/cloud/policies/aws/s3/dns_compliant_name.rego
@@ -7,6 +7,7 @@
# related_resources:
# - https://docs.aws.amazon.com/AmazonS3/latest./dev/transfer-acceleration.html
# custom:
# id: AVD-AWS-0320
# avd_id: AVD-AWS-0320
# provider: aws
# service: s3
51 changes: 0 additions & 51 deletions rules/cloud/policies/aws/s3/enable_bucket_logging.go

This file was deleted.

