fix spelling, add clean up step, add disclaimer
Signed-off-by: Manabu McCloskey <[email protected]>
nabuskey committed Jan 9, 2025
1 parent 3000ecc commit 2d16954
Showing 4 changed files with 49 additions and 15 deletions.
2 changes: 1 addition & 1 deletion analytics/terraform/spark-k8s-operator/addons.tf
@@ -652,7 +652,7 @@ resource "aws_secretsmanager_secret_version" "grafana" {
resource "aws_iam_policy" "s3tables_policy" {
name_prefix = "${local.name}-s3tables"
path = "/"
description = "S3Tables Metdata access for Nodes"
description = "S3Tables Metadata access for Nodes"

policy = jsonencode({
Version = "2012-10-17"
53 changes: 41 additions & 12 deletions analytics/terraform/spark-k8s-operator/examples/s3-tables/README.md
@@ -6,15 +6,15 @@ This guide provides step-by-step instructions for setting up and running a Spark

- Latest version of AWS CLI installed (must include S3Tables API support)

-## Step1: Deploy Spark Cluster on EKS
+## Step 1: Deploy Spark Cluster on EKS

Follow these steps to deploy a Spark cluster on EKS:

[Spark Operator on EKS with YuniKorn Scheduler](https://awslabs.github.io/data-on-eks/docs/blueprints/data-analytics/spark-operator-yunikorn#prerequisites)

Once your cluster is up and running, proceed with the following steps to execute a sample Spark job using S3Tables.
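
Before moving on, it's worth confirming that `kubectl` can reach the new cluster. A minimal check (here `<CLUSTER_NAME>` stands in for the cluster name reported in your Terraform outputs):

```sh
# Point kubectl at the newly created cluster and confirm the nodes are Ready.
aws eks update-kubeconfig --region <REGION> --name <CLUSTER_NAME>
kubectl get nodes
```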

-## Step2: Create Test Data for the job
+## Step 2: Create Test Data for the job

Navigate to the example directory and generate the sample data:

@@ -25,23 +25,23 @@ cd analytics/terraform/spark-k8s-operator/examples/s3-tables

This will create a file called `employee_data.csv` locally with 100 records. Modify the script to adjust the number of records as needed.
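
The generator script itself is not shown in this diff; if you just need input with the right shape, a stand-in like the sketch below produces a comparable CSV (the column names here are illustrative assumptions, not the blueprint's actual schema):

```sh
# Hypothetical stand-in generator: writes a header row plus 100 synthetic records.
echo "id,name,level,salary" > employee_data.csv
for i in $(seq 1 100); do
  echo "${i},employee_${i},level_$(( (i % 5) + 1 )),$(( 50000 + i * 100 ))" >> employee_data.csv
done
```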

-## Step3: Upload Test Input data to your S3 Bucket
+## Step 3: Upload Test Input data to your S3 Bucket

Replace `<S3_BUCKET>` with the name of the S3 bucket created by your blueprint, then run the command below.

```sh
aws s3 cp employee_data.csv s3://<S3_BUCKET>/s3table-example/input/
```

-## Step4: Upload PySpark Script to S3 Bucket
+## Step 4: Upload PySpark Script to S3 Bucket

Replace `<S3_BUCKET>` with the name of the S3 bucket created by your blueprint, then run the command below to upload the sample Spark job to your S3 bucket.

```sh
aws s3 cp s3table-iceberg-pyspark.py s3://<S3_BUCKET>/s3table-example/scripts/
```
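
To confirm both uploads landed where the job expects them, list the example prefix:

```sh
# You should see the input CSV and the PySpark script under this prefix.
aws s3 ls s3://<S3_BUCKET>/s3table-example/ --recursive
```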

-## Step5: Create S3Table
+## Step 5: Create S3Table

Replace `<REGION>` and `<S3TABLE_BUCKET_NAME>` with your desired region and bucket name.

@@ -53,29 +53,30 @@ aws s3tables create-table-bucket \

Make note of the S3Table ARN returned by this command.
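
Since the later steps reference this ARN repeatedly, one convenient pattern is to capture it into a shell variable at creation time rather than copying it by hand (a sketch; it assumes the CLI response exposes the ARN under an `arn` key):

```sh
# Create the table bucket and keep its ARN for the steps that follow.
export S3TABLE_ARN=$(aws s3tables create-table-bucket \
  --region "<REGION>" \
  --name "<S3TABLE_BUCKET_NAME>" \
  --query 'arn' --output text)
echo "${S3TABLE_ARN}"
```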

-## Step6: Update Spark Operator YAML File
+## Step 6: Update Spark Operator YAML File

- Open the `s3table-spark-operator.yaml` file in your preferred text editor.
-- Replace `<S3_BUCKET>` with your S3 bucket created by this blueprint(Check Terraform outputs). S3 Bucket where you copied Test Data and Smaple spark job in the above steps.
-- REPLACE `<S3TABLE_ARN>` with actaul S3 Table ARN.
+- Replace `<S3_BUCKET>` with the S3 bucket created by this blueprint (check Terraform outputs); this is the bucket where you copied the test data and sample Spark job in the steps above.
+- Replace `<S3TABLE_ARN>` with your S3 Table ARN, as shown in the sketch below.
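
If you prefer a scripted edit to a manual one, a `sed` substitution along these lines works; the bucket name below is a placeholder for whatever your Terraform outputs report, and `S3TABLE_ARN` is the variable captured in Step 5:

```sh
# Substitute both placeholders in place, keeping a .bak copy of the original file.
sed -i.bak \
  -e "s|<S3_BUCKET>|your-blueprint-bucket-name|g" \
  -e "s|<S3TABLE_ARN>|${S3TABLE_ARN}|g" \
  s3table-spark-operator.yaml
```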

-## Step7: Execute Spark Job
+## Step 7: Execute Spark Job

Apply the updated YAML file to your Kubernetes cluster to submit the Spark Job.

```sh
cd analytics/terraform/spark-k8s-operator/examples/s3-tables
kubectl apply -f s3table-spark-operator.yaml
```
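
You can then watch the application start up. This assumes the SparkApplication is created in the `spark-team-a` namespace, consistent with the driver-log command in Step 8:

```sh
# Watch the SparkApplication and its pods move to Running, then Completed.
kubectl get sparkapplications -n spark-team-a
kubectl get pods -n spark-team-a -w
```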

-## Step8: Verify the Spark Driver log for the output
+## Step 8: Verify the Spark Driver log for the output

Check the Spark driver logs to verify job progress and output:

```sh
kubectl logs <spark-driver-pod-name> -n spark-team-a
```
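
If you don't know the driver pod's name, Spark on Kubernetes labels driver pods with `spark-role=driver`, so a selector query usually finds it:

```sh
# Locate the driver pod and tail its recent log lines.
kubectl get pods -n spark-team-a -l spark-role=driver
kubectl logs -n spark-team-a -l spark-role=driver --tail=50
```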

-## Step9: Verify the S3Table using S3Table API
+## Step 9: Verify the S3Table using S3Table API

Use the S3Table API to confirm the table was created successfully. Replace `<ACCOUNT_ID>` and run the command.

@@ -132,7 +133,35 @@ This command provides information about Iceberg compaction, snapshot management,
}
```
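
For a simpler existence check than the maintenance configuration above, `get-table` returns the table's basic metadata. This sketch reuses the namespace and table name that the clean-up step below deletes, plus the `S3TABLE_ARN` variable from Step 5:

```sh
# Confirm the table exists and inspect its metadata.
aws s3tables get-table \
  --table-bucket-arn "${S3TABLE_ARN}" \
  --namespace doeks_namespace \
  --name employee_s3_table
```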

-## Conclusion
+## Step 10: Clean up

Delete the table.

```bash
aws s3tables delete-table \
--namespace doeks_namespace \
--table-bucket-arn ${S3TABLE_ARN} \
--name employee_s3_table
```

Delete the namespace.

```bash
aws s3tables delete-namespace \
--namespace doeks_namespace \
--table-bucket-arn ${S3TABLE_ARN}
```

Finally, delete the table bucket.

```bash
aws s3tables delete-table-bucket \
--region "<REGION>" \
--table-bucket-arn ${S3TABLE_ARN}
```
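
If you also want to remove the test input and script uploaded in Steps 3 and 4, clear the example prefix; this deletes only the objects under that prefix, not the bucket itself:

```sh
aws s3 rm s3://<S3_BUCKET>/s3table-example/ --recursive
```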


## Conclusion
You have successfully set up and run a Spark job on Amazon EKS using S3Table for data storage. This setup provides a scalable and efficient way to process large datasets using Spark on Kubernetes with the added benefits of S3Table's data management capabilities.

For more advanced usage, refer to the official AWS documentation on S3Table and Spark on EKS.
2 changes: 1 addition & 1 deletion analytics/terraform/spark-k8s-operator/examples/s3-tables/s3table-iceberg-pyspark.py
@@ -85,7 +85,7 @@ def main(args):
print(f"DataFrame count: {iceberg_data_df.count()}")

# List the table snapshots
logger.info("List the s3table snaphot versions:")
logger.info("List the s3table snapshot versions:")
spark.sql(f"SELECT * FROM {full_table_name}.history LIMIT 10").show()

# Stop Spark session
7 changes: 6 additions & 1 deletion analytics/terraform/spark-k8s-operator/examples/s3-tables/s3table-spark-operator.yaml
@@ -1,6 +1,6 @@
# Pre-requisite before running this job
# Replace <S3_BUCKET> with your S3 bucket created by this blueprint(Check Terraform outputs)
-# REPLACE <S3TABLE_ARN> with actaul S3 Table ARN
+# REPLACE <S3TABLE_ARN> with actual S3 Table ARN
---
apiVersion: "sparkoperator.k8s.io/v1beta2"
kind: SparkApplication
@@ -14,6 +14,11 @@ spec:
type: Python
sparkVersion: "3.5.3"
mode: cluster
# CAUTION: Unsupported test image
# This image is created solely for testing and reference purposes.
# Before use, please:
# 1. Review the Dockerfile used to create this image
# 2. Create your own image that meets your organization's security requirements
image: "public.ecr.aws/data-on-eks/spark:3.5.3-scala2.12-java17-python3-ubuntu-s3table0.1.3-iceberg1.6.1"
imagePullPolicy: IfNotPresent
mainApplicationFile: "s3a://<S3_BUCKET>/s3table-example/scripts/s3table-iceberg-pyspark.py"
