This document is provided for informational purposes only. It represents the current product offerings and practices from Amazon Web Services (AWS) as of the date of issue of this document, which are subject to change without notice. Customers are responsible for making their own independent assessment of the information in this document and any use of AWS products or services, each of which is provided “as is” without warranty of any kind, whether express or implied. This document does not create any warranties, representations, contractual commitments, conditions, or assurances from AWS, its affiliates, suppliers, or licensors. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.
© 2024 Amazon Web Services, Inc. or its affiliates. All Rights Reserved. This work is licensed under a Creative Commons Attribution 4.0 International License.
This AWS Content is provided subject to the terms of the AWS Customer Agreement available at http://aws.amazon.com/agreement or other written agreement between the Customer and either Amazon Web Services, Inc. or Amazon Web Services EMEA SARL or both.
Author: Author Name
Approver: Approver Name
Last Date Approved:
This playbook outlines the process to identify the owners of exposed code resources, determine how the code was exposed and who may have accessed it, assess the impact of the exposure, perform the required remediation actions, and determine the root cause of the exposure.
- Data Loss Prevention (DLP) Alerts
- Known code publication or exposure
- Suspicious CloudTrail logs
- Behavior:EC2/NetworkPortUnusual
- Behavior:EC2/TrafficVolumeUnusual
- CredentialAccess:IAMUser/AnomalousBehavior
- DefenseEvasion:IAMUser/AnomalousBehavior
- Discovery:IAMUser/AnomalousBehavior
- Exfiltration:IAMUser/AnomalousBehavior
- Exfiltration:S3/MaliciousIPCaller
- Exfiltration:S3/ObjectRead.Unusual
- Impact:IAMUser/AnomalousBehavior
- InitialAccess:IAMUser/AnomalousBehavior
- Persistence:IAMUser/AnomalousBehavior
- PrivilegeEscalation:IAMUser/AnomalousBehavior
- UnauthorizedAccess:IAMUser/InstanceCredentialExfiltration.InsideAWS
- UnauthorizedAccess:IAMUser/InstanceCredentialExfiltration.OutsideAWS
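If Amazon GuardDuty is enabled, a minimal CLI sketch for pulling findings of the types listed above (the detector ID shown is a placeholder; list yours first):
# List detector IDs in the current region
aws guardduty list-detectors
# Filter findings by type (example: unusual S3 object reads)
aws guardduty list-findings --detector-id 12abc34d567e8fa901bc2d34e56789f0 --finding-criteria '{"Criterion":{"type":{"Eq":["Exfiltration:S3/ObjectRead.Unusual"]}}}'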
Throughout the execution of the playbook, focus on the desired outcomes and take notes for enhancement of your incident response capabilities.
- Code copies, transfers, and publication
- Credential exposure
- Vulnerabilities exposed
- Environmental intelligence exposed
- Tools used
- Configuration information
- Actor's intent
- Actor's attribution
- Other damage inflicted to the environment and business
- Risk created for environment and business
- Expire or reset any exposed authentication credentials
- Enumerate risks for potential attack vectors based on exposed environmental intelligence
- Minimize or eliminate risks of exposed code
AWS Cloud Adoption Framework Security Perspective
- Directive
- Detective
- Responsive
- Preventive
- [PREPARATION] Create an account asset inventory
- [PREPARATION] Create a repository inventory
- [PREPARATION] Enable logging as appropriate
- [PREPARATION] Identify what type of data is in each repository
- [PREPARATION] Identify, document, and test escalation procedures
- [PREPARATION] Implement training to address exfiltration attacks
- [DETECTION AND ANALYSIS] Perform exfiltration and DLP checks
- [DETECTION AND ANALYSIS] Review repository (CodeCommit) read and write actions
- [DETECTION AND ANALYSIS] Review DNS logs
- [DETECTION AND ANALYSIS] Review VPC flow logs
- [DETECTION AND ANALYSIS] Review endpoint / host-based logs
- [CONTAINMENT] Block access for affected accounts
- [ERADICATION] Remove unrecognized and unauthorized objects in repositories
- [RECOVERY] Perform recovery procedures as appropriate
*The response steps follow the Incident Response Life Cycle from NIST Special Publication 800-61r2, Computer Security Incident Handling Guide.*
- Tactics, techniques, and procedures: Data Exfiltration, AWS Service Abuse
- Category: Data Loss
- Resource: CodeCommit, S3, EC2
- Indicators: Cyber Threat Intelligence, Third Party Notice, CloudWatch Metrics
- Log Sources: DNS Logs, VPC Flow Logs, CloudTrail, CloudWatch
- Teams: Security Operations Center (SOC), Forensic Investigators, Cloud Engineering
- Preparation
- Detection & Analysis
- Containment & Eradication
- Recovery
- Post-Incident Activity
This playbook references and integrates, where possible, with Prowler, a command-line tool that helps with AWS security assessment, auditing, hardening, and incident response.
It follows the guidelines of the CIS Amazon Web Services Foundations Benchmark (49 checks) and has more than 100 additional checks, including checks related to GDPR, HIPAA, PCI-DSS, ISO 27001, FFIEC, SOC 2, and others.
This tool provides a rapid snapshot of the current state of security within a customer environment. Additionally, AWS Security Hub provides automated compliance scanning and can integrate with Prowler.
Identify all existing resources and have an updated asset inventory list coupled with who owns each resource. Source code may be stored in one of the following assets:
- [CodeCommit](https://aws.amazon.com/codecommit/) repositories
- [S3](https://aws.amazon.com/s3/) backed code storage
- [EC2](https://aws.amazon.com/ec2/) self-hosted code storage mechanisms
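As a hedged starting point, the assets above can be enumerated per account and region with the AWS CLI:
# CodeCommit repositories in the current region
aws codecommit list-repositories
# All S3 buckets in the account
aws s3api list-buckets --query 'Buckets[].Name'
# EC2 instances that may host self-managed code storage
aws ec2 describe-instances --query 'Reservations[].Instances[].[InstanceId,Tags]'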
- CloudWatch and CloudTrail logs stored in S3 can be queried using [Amazon Athena](https://aws.amazon.com/athena/) to identify undesired actions (see the query sketch after this list)
- [Amazon GuardDuty](https://aws.amazon.com/guardduty/) may automatically detect anomalous traffic. Findings can be reviewed with [AWS Security Hub](https://aws.amazon.com/security-hub/) or [Amazon Detective](https://aws.amazon.com/detective/)
- Application and instance logs stored in S3 can also be queried using Athena.
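A sketch of submitting such a query to Athena from the CLI; the table name cloudtrail_logs and the results bucket are assumptions that depend on how your Athena table was created:
aws athena start-query-execution --query-string "SELECT eventtime, eventname, useridentity.arn, sourceipaddress FROM cloudtrail_logs WHERE eventsource = 'codecommit.amazonaws.com' LIMIT 100" --query-execution-context Database=default --result-configuration OutputLocation=s3://your-athena-results-bucket/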
What training is in place for analysts within the company to become familiar with the AWS API, command-line tools, CodeCommit, S3, RDS, and other AWS services?
Opportunities here for threat detection and incident response include:
- AWS re:Inforce
- Self-Service Security Assessment
Which roles are able to make changes to services within your account?
Which users have those roles assigned to them? Is least privilege being followed, or do super admin users exist?
Has a security assessment been performed against your environment? Do you have a known baseline to detect "new" or "suspicious" things?
What technology is used within the team/company to communicate issues? Is there anything automated?
- Telephone
- E-mail
- SMS
- Amazon SES
- Amazon SNS
- Slack
- Chime
- Teams
- Other?
Implementing a Data Loss Prevention (DLP) solution may provide additional detection capabilities and alerts. A DLP solution may provide the greatest value in EC2 environments or when evaluating network traffic. DLP solutions can be found in the AWS Marketplace.
- Ensure CloudTrail is enabled in all regions:
./prowler -c check21
- Ensure CloudTrail log file validation is enabled:
./prowler -c check22
- Ensure S3 bucket access logging is enabled on the CloudTrail S3 bucket:
./prowler -c check26
- Ensure there are no S3 buckets open to Everyone or Any AWS user:
./prowler -c extra73
- Identify the resources in your organization and accounts, such as Amazon S3 buckets or IAM roles, that are shared with an external entity:
./prowler -c extra769
- Find resources exposed to the internet:
./prowler -g group17
- Monitor AWS CodeCommit events in EventBridge, which delivers a stream of real-time data (a minimal rule sketch follows this list). These events are the same as those that appear in Amazon CloudWatch Events, which delivers a near real-time stream of system events that describe changes in AWS resources.
- Create a CloudTrail trail to enable continuous delivery of CodeCommit events to an S3 bucket. CloudTrail captures all API calls for CodeCommit as events.
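A minimal sketch of such an EventBridge rule, forwarding all CodeCommit events to an SNS topic (the rule name and topic ARN are hypothetical):
aws events put-rule --name codecommit-activity --event-pattern '{"source":["aws.codecommit"]}'
aws events put-targets --rule codecommit-activity --targets 'Id=1,Arn=arn:aws:sns:us-east-1:111122223333:security-alerts'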
Implementing a DLP solution may provide additional detection capabilities and alerts. Details are available from the DLP solution provider.
Who is monitoring the logs/alerts, receiving them and acting upon each?
Who gets notified when an alert is discovered?
When do public relations and legal get involved in the process?
When would you reach out to AWS Support for help?
It is highly recommended to export logs to a security information and event management (SIEM) solution (such as Splunk, the ELK stack, etc.) to aid in viewing and analyzing a variety of logs for a more complete attack timeline analysis.
CloudTrail provides up to 90 days of event logs for all AWS API calls. This information can be used to identify and track malicious or anomalous actions. More information is available in the CloudTrail Log Event Reference.
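For a quick look without exporting logs, Event History can also be queried from the CLI; this sketch filters on CodeCommit as the event source:
aws cloudtrail lookup-events --lookup-attributes AttributeKey=EventSource,AttributeValue=codecommit.amazonaws.com --max-results 50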
By default, CloudTrail logs API calls made in the last 90 days but does not log requests made to objects. You can see bucket-level events on the CloudTrail console. However, you can't view data events (Amazon S3 object-level calls) by default; you must enable object-level logging before those events appear in CloudTrail.
- Navigate to your CloudTrail Dashboard
- In the left-hand margin, select Event History
- In the drop-down, change from Read-Only to Event Name
- Review CloudTrail logs for the event names GetPublicAccessBlock and DeletePublicAccessBlock
You can also get CloudTrail logs for object-level Amazon S3 actions. To do this, enable data events for your S3 bucket or all buckets in your account. When an object-level action occurs in your account, CloudTrail evaluates your trail settings. If the event matches the object that you specified in a trail, the event is logged.
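A hedged sketch of enabling S3 data events on an existing trail from the CLI (the trail and bucket names are placeholders):
aws cloudtrail put-event-selectors --trail-name management-events --event-selectors '[{"ReadWriteType":"All","IncludeManagementEvents":true,"DataResources":[{"Type":"AWS::S3::Object","Values":["arn:aws:s3:::example-code-bucket/"]}]}]'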
- Navigate to your CloudTrail Dashboard
- In the left-hand margin, select Event History
- In the drop-down, change from Read-Only to Event Name
- Review CloudTrail logs for the event names GetObjectAcl and PutObjectAcl
VPC Flow Logs is a feature that enables you to capture information about the IP traffic going to and from network interfaces in your VPC. This can be useful for IP addresses discovered within CloudTrail to determine the types of external connections to any public resources.
For further information and steps, including querying with Athena, please refer to the AWS Documentation for VPC Flow Logs. It is recommended that Athena analysis be included in a separate playbook and linked to other relevant items.
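Before analysis, confirm that flow logs are actually enabled for the VPC in question (the VPC ID is a placeholder):
aws ec2 describe-flow-logs --filter "Name=resource-id,Values=vpc-0abc123def456789"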
DNS logging enables you to capture information about the DNS queries made from resources within your VPC. This can be useful in identifying anomalies or high-risk domains.
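If you capture DNS activity with Route 53 Resolver query logging, a sketch to confirm a logging configuration exists and see what it is associated with:
aws route53resolver list-resolver-query-log-configs
aws route53resolver list-resolver-query-log-config-associations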
Data from EC2 instances and other sources may be ingested into CloudWatch. This data may be used to trigger alarms or perform analysis. CloudWatch also provides for anomaly detection where enabled.
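As one hedged example, a static-threshold alarm on outbound instance traffic; the instance ID, threshold, and topic ARN are assumptions to tune against your own baseline:
aws cloudwatch put-metric-alarm --alarm-name unusual-network-out --namespace AWS/EC2 --metric-name NetworkOut --dimensions Name=InstanceId,Value=i-0abc123def456789 --statistic Sum --period 300 --evaluation-periods 1 --threshold 500000000 --comparison-operator GreaterThanThreshold --alarm-actions arn:aws:sns:us-east-1:111122223333:security-alerts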
- Navigate to your CloudTrail Dashboard
- In the left-hand margin, select Event History
- In the drop-down, change from Read-Only to Event Name
- Review CloudTrail for PutObject and DeleteObject requests from public IP addresses
- Review EC2 operating system and application logs for inappropriate logins, installation of unknown software, or the presence of unrecognized files.
- It is highly recommended to have a host-based intrusion detection system (HIDS) solution in place (such as OSSEC, Tripwire, Wazuh, or Amazon Inspector).
DLP solutions may detect and alert as configured. DLP solutions are available in the AWS Marketplace.
To block all public access to an affected bucket, apply a public access block configuration (replace the bucket name):
aws s3api put-public-access-block --bucket bucket-name-here --public-access-block-configuration "BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true"
You can also review Blocking public access to your Amazon S3 storage for additional details on blocking public S3 access across your account.
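The same guardrail can be applied account-wide through S3 Control (the account ID is a placeholder):
aws s3control put-public-access-block --account-id 111122223333 --public-access-block-configuration "BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true"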
Audit access control for all users with the AWS Identity and Access Management (IAM) service in conjunction with CodeCommit.
Permissions may also be changed or restricted with the CodeCommit permissions reference.
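One hedged starting point for that audit is enumerating which principals hold the AWS managed CodeCommit policies:
aws iam list-entities-for-policy --policy-arn arn:aws:iam::aws:policy/AWSCodeCommitFullAccess
aws iam list-entities-for-policy --policy-arn arn:aws:iam::aws:policy/AWSCodeCommitPowerUser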
Remove any unrecognized objects from buckets:
- Sign in to the AWS Management Console and open the Amazon S3 console at https://console.aws.amazon.com/s3/
- In the Bucket name list, choose the name of the bucket that you want to delete an object from.
- Choose the name of the object that you want to delete.
- To delete the current version of the object, choose Latest version, and choose the trash can icon.
- To delete a previous version of the object, choose Latest version, and choose the trash can icon beside the version that you want to delete.
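The same cleanup can be scripted with the CLI; in this sketch the bucket, key prefix, and version ID are hypothetical:
# Enumerate versions under a suspicious prefix
aws s3api list-object-versions --bucket example-code-bucket --prefix suspicious/
# Delete a specific object version
aws s3api delete-object --bucket example-code-bucket --key suspicious/unknown-file.bin --version-id 3HL4kqtJvjVBH40Nrjfkd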
Audit identities and access to CodeCommit.
Where possible:
- Launch a replacement EC2 instance using EBS snapshots or Amazon Machine Image (AMI) backups created from the source.
- Attach an EBS volume from the terminated instance to the new EC2 instance (see the CLI sketch below).
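A hedged CLI sketch of the volume step; the snapshot, Availability Zone, instance, and device identifiers are placeholders:
# Create a volume from a snapshot of the terminated instance
aws ec2 create-volume --snapshot-id snap-0abc123def456789 --availability-zone us-east-1a
# Attach it to the replacement instance for inspection
aws ec2 attach-volume --volume-id vol-0abc123def456789 --instance-id i-0abc123def456789 --device /dev/sdf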
Same procedures as those listed for Eradication.
- Ensure multi-factor authentication (MFA) is enabled:
./prowler -c check12
Encryption
- Check if S3 buckets have default encryption (SSE) enabled:
./prowler -c extra734
Disaster Recovery
- Check if S3 buckets have object versioning enabled:
./prowler -c extra763
- Prevent users from modifying S3 Block Public Access settings (a deny-policy sketch follows this list)
- Review bucket access and policies monthly, and use Amazon EventBridge (formerly CloudWatch Events) or Security Hub for automated detections
- Use versioning in S3 buckets to mitigate accidental or intentional deletion of top-level objects
- Manage access with ACLs to limit unauthorized access to resources at the bucket and object level
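For the first item above, a sketch of an IAM policy that denies changes to Block Public Access settings; the policy name is hypothetical, and where you attach it should fit your account structure:
aws iam create-policy --policy-name DenyPublicAccessBlockChanges --policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Deny","Action":["s3:PutBucketPublicAccessBlock","s3:PutAccountPublicAccessBlock"],"Resource":"*"}]}'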
- Use multi-factor authentication (MFA) with each account.
- Use TLS to communicate with AWS resources. We recommend TLS 1.2 or later. Some services have this enabled by default; others need it to be implemented (for example, in the JavaScript SDK).
- Set up API and user activity logging with AWS CloudTrail.
- Use AWS encryption solutions, such as KMS, along with all default security controls within AWS services.
- Use advanced managed security services such as Amazon Macie, which assists in discovering and securing personal data that is stored in Amazon S3. Amazon Macie can detect stored credentials, private keys, and other access data by using managed data identifiers.
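A hedged sketch of enabling Macie and scanning a code bucket for credentials (the account ID and bucket name are placeholders):
aws macie2 enable-macie
aws macie2 create-classification-job --job-type ONE_TIME --name code-exposure-scan --s3-job-definition '{"bucketDefinitions":[{"accountId":"111122223333","buckets":["example-code-bucket"]}]}'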
AWS Config has multiple managed rules that help manage code exposure, including codebuild-project-envvar-awscred-check, which checks whether AWS credentials are stored in CodeBuild project environment variables.
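A sketch of enabling that managed rule with the CLI (assumes an AWS Config recorder is already running in the account):
aws configservice put-config-rule --config-rule '{"ConfigRuleName":"codebuild-project-envvar-awscred-check","Source":{"Owner":"AWS","SourceIdentifier":"CODEBUILD_PROJECT_ENVVAR_AWSCRED_CHECK"}}'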
Execute a Self-Service Security Assessment against the environment to further identify other risks and potentially other public exposure not identified throughout this playbook.
Implementing a Data Loss Prevention (DLP) solution may provide additional detection capabilities and alerts. DLP solutions can be found in the AWS Marketplace and should be configured as prescribed.
This is a place to add items specific to your company that do not need "fixing," but are important to know when executing this playbook in tandem with operational and business requirements.
- As an incident responder I need a runbook on how to detect code exposure
- As an incident responder I need a runbook on how to detect code exfiltration
- As an incident responder I need to be able to detect public resources (AMIs, EBS volumes, ECR repos, etc.)
- As an incident responder I need to know which roles are capable of making critical changes within AWS
- As an incident responder I need a playbook on mitigating a code exposure and required escalation points
- As an incident responder I need documentation on logs required for different data classifications