
[Updated] Edit S3 language in guides (#7204)
jddocs authored Mar 4, 2025
1 parent 3f08f7f commit b04871d
Showing 23 changed files with 50 additions and 50 deletions.
@@ -39,7 +39,7 @@ This solution creates a streamlined delivery pipeline that allows developers to

### Systems and Components

-- **Linode Object Storage:** An S3 compatible Object Storage bucket
+- **Linode Object Storage:** An Amazon S3-compatible Object Storage bucket

- **Linode VM:** A Dedicated 16GB Linode virtual machine

@@ -58,7 +58,7 @@ Coupling cloud-based multiplexing with DataStream edge logging allows you to con

### Integration and Migration Effort

-The multiplexing solution in this guide does not require the migration of any application-critical software or data. This solution exists as a location-agnostic, cloud-based pipeline between your edge delivery infrastructure and log storage endpoints (i.e. s3 buckets, Google Cloud Storage, etc.).
+The multiplexing solution in this guide does not require the migration of any application-critical software or data. This solution exists as a location-agnostic, cloud-based pipeline between your edge delivery infrastructure and log storage endpoints (i.e. Amazon S3-compatible buckets, Google Cloud Storage, etc.).

Using the following example, you can reduce your overall egress costs by pointing your cloud multiplexing architecture to Akamai’s Object Storage rather than a third-party object storage solution.

@@ -87,4 +87,4 @@ Below is a high-level diagram and walkthrough of a DataStream and TrafficPeak ar

- **VMs:** Compute Instances used to run TrafficPeak’s log ingest and processing software. Managed by Akamai.

-- **Object Storage:** S3 compatible object storage used to store log data from TrafficPeak. Managed by Akamai.
+- **Object Storage:** Amazon S3-compatible object storage used to store log data from TrafficPeak. Managed by Akamai.
@@ -51,7 +51,7 @@ An optimized pipeline consists of a set of one or more "gold images". These beco

Lightning code is configured to include multiple data loader steps to train neural networks. Depending on the desired training iterations and epochs, configured code can optionally store numerous intermediate storage objects and spaces. This allows for the isolation of training and validation steps for further testing, validation, and feedback loops.

-Throughout the modeling process, various storage spaces are used for staging purposes. These spaces might be confined to the Linux instance running PyTorch Lightning. Alternatively, they can have inputs sourced from static or streaming objects located either within or outside the instance. Such sourced locations can include various URLs, local Linode volumes, Linode (or other S3 buckets), or external sources. This allows instances to be chained across multiple GPU instances if desired.
+Throughout the modeling process, various storage spaces are used for staging purposes. These spaces might be confined to the Linux instance running PyTorch Lightning. Alternatively, they can have inputs sourced from static or streaming objects located either within or outside the instance. Such sourced locations can include various URLs, local Linode volumes, Linode (or other Amazon S3-compatible buckets), or external sources. This allows instances to be chained across multiple GPU instances if desired.

This introduces an additional stage in the pipeline between and among instances for high-volume or large tensor data source research.
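
For a concrete sense of how these staging spaces map onto Lightning code, below is a minimal sketch of a `LightningDataModule` whose `prepare_data()` step pulls a tensor file from an S3-compatible bucket into local staging storage before the dataloaders consume it. The bucket name, object key, endpoint, and the assumption that the object was saved with `torch.save((features, labels), ...)` are illustrative placeholders, not values from the guide.

```python
import boto3
import torch
import pytorch_lightning as pl
from torch.utils.data import DataLoader, TensorDataset, random_split

class StagedDataModule(pl.LightningDataModule):
    """Stages a tensor file from S3-compatible object storage, then serves dataloaders."""

    def __init__(self, bucket="example-bucket", key="train.pt",
                 endpoint="https://us-east-1.linodeobjects.com", batch_size=64):
        super().__init__()
        self.bucket, self.key = bucket, key
        self.endpoint, self.batch_size = endpoint, batch_size

    def prepare_data(self):
        # Runs once per node: copy the object into a local staging path.
        s3 = boto3.client("s3", endpoint_url=self.endpoint)  # credentials from env vars/config
        s3.download_file(self.bucket, self.key, "/tmp/train.pt")

    def setup(self, stage=None):
        # Load the staged tensors and carve out a validation split.
        features, labels = torch.load("/tmp/train.pt")
        dataset = TensorDataset(features, labels)
        n_val = max(1, len(dataset) // 10)
        self.train_set, self.val_set = random_split(dataset, [len(dataset) - n_val, n_val])

    def train_dataloader(self):
        return DataLoader(self.train_set, batch_size=self.batch_size, shuffle=True)

    def val_dataloader(self):
        return DataLoader(self.val_set, batch_size=self.batch_size)
```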

@@ -91,7 +91,7 @@ Several storage profiles work for the needs of modeling research, including:

- **Mounted Linode Volumes**: Up to eight logical disk volumes ranging from 10 GB to 80 TB can be optionally added to any Linode. Volumes are mounted and unmounted either manually or programmatically. Volumes may be added, deleted, and/or backed-up during the research cycle. Volume storage costs are optional.

-- **Linode Object Storage**: Similar to CORS S3 storage, Linode Object Storage emulates AWS or DreamHost S3 storage, so S3 objects can be migrated to Linode and behave similarly. Standard S3 buckets can be imported, stored, or deleted as needed during the research cycle. Object storage costs are optional.
+- **Linode Object Storage**: Similar to CORS S3 storage, Linode Object Storage emulates AWS or DreamHost S3 storage, so Amazon S3-compatible objects can be migrated to Linode and behave similarly. Standard S3 buckets can be imported, stored, or deleted as needed during the research cycle. Object storage costs are optional.

- **External URL Code Calls**: External networked data sources are subject to the data flow charges associated with the Linode GPU or other instance cost.

@@ -173,7 +173,7 @@ As of the writing of this guide, **sensitive information used to generate your T

### Remote Backends

-Terraform [*backends*](https://www.terraform.io/docs/backends/index.html) allow the user to securely store their state in a remote location. For example, a key/value store like [Consul](https://www.consul.io/), or an S3 compatible bucket storage like [Minio](https://www.minio.io/). This allows the Terraform state to be read from the remote store. Because the state only ever exists locally in memory, there is no worry about storing secrets in plain text.
+Terraform [*backends*](https://www.terraform.io/docs/backends/index.html) allow the user to securely store their state in a remote location. For example, a key/value store like [Consul](https://www.consul.io/), or an Amazon S3-compatible bucket storage like [Minio](https://www.minio.io/). This allows the Terraform state to be read from the remote store. Because the state only ever exists locally in memory, there is no worry about storing secrets in plain text.

Some backends, like Consul, also allow for state locking. If one user is applying a state, another user cannot make any changes.

@@ -34,13 +34,13 @@ Mastodon by default stores its media attachments locally. Every upload is saved

If your Mastodon instance stays below a certain size and traffic level, these image uploads might not cause issues. But as your Mastodon instance grows, the local storage approach can cause difficulties. Media stored in this way is often difficult to manage and a burden on your server.

-But object storage, by contrast, excels when it comes to storing static files — like Mastodon's media attachments. An S3-compatible object storage bucket can more readily store a large number of static files and scale appropriately.
+But object storage, by contrast, excels when it comes to storing static files — like Mastodon's media attachments. An Amazon S3-compatible object storage bucket can more readily store a large number of static files and scale appropriately.

To learn more about the features of object storage generally and Linode Object Storage more particularly, take a look at our [Linode Object Storage overview](/docs/products/storage/object-storage/).

## How to Use Linode Object Storage with Mastodon

-The rest of this guide walks you through setting up a Mastodon instance to use Linode Object Storage for storing its media attachments. Although the guide uses Linode Object Storage, the steps should also provide an effective model for using other S3-compatible object storage buckets with Mastodon.
+The rest of this guide walks you through setting up a Mastodon instance to use Linode Object Storage for storing its media attachments. Although the guide uses Linode Object Storage, the steps should also provide an effective model for using other Amazon S3-compatible object storage buckets with Mastodon.

The tutorial gives instructions for creating a new Mastodon instance, but the instructions should also work for most existing Mastodon instances, regardless of whether they were installed with Docker or from source. Additionally, the tutorial includes steps for migrating existing, locally-stored Mastodon media to the object storage instance.

@@ -195,7 +195,7 @@ At this point, your Mastodon instance is ready to start storing media on your Li
If you are adding object storage to an existing Mastodon instance, you likely already have content stored locally. And you likely want to migrate that content to your new Linode Object Storage bucket.
-To do so, you can use a tool for managing S3 storage to copy local contents to your remote object storage bucket. For instance, AWS has a command-line S3 tool that should be configurable for Linode Object Storage.
+To do so, you can use a tool for managing Amazon S3-compatible storage to copy local contents to your remote object storage bucket. For instance, AWS has a command-line S3 tool that should be configurable for Linode Object Storage.
However, this guide uses the powerful and flexible [rclone](https://rclone.org/s3/). `rclone` operates on a wide range of storage devices and platforms, not just S3, and it is exceptional for syncing across storage mediums.
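
As a rough illustration of what that copy step involves, here is a minimal Boto3 sketch that walks a local Mastodon media directory and uploads each file to an S3-compatible bucket, preserving relative paths as object keys. It is not the rclone procedure this guide follows, and the media path, bucket name, and endpoint are hypothetical placeholders.

```python
import os
import boto3

# Hypothetical values; substitute your own media path, bucket, and cluster endpoint.
MEDIA_ROOT = "/home/mastodon/live/public/system"
BUCKET = "example-mastodon-media"
ENDPOINT = "https://us-east-1.linodeobjects.com"

s3 = boto3.client("s3", endpoint_url=ENDPOINT)  # credentials come from the usual AWS env vars/config

for dirpath, _, filenames in os.walk(MEDIA_ROOT):
    for name in filenames:
        local_path = os.path.join(dirpath, name)
        key = os.path.relpath(local_path, MEDIA_ROOT)  # keep the same layout in the bucket
        s3.upload_file(local_path, BUCKET, key)
        print(f"uploaded {key}")
```
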
@@ -239,6 +239,6 @@ Perhaps the simplest way to verify your Mastodon configuration is by making a po
Your Mastodon instance now has its media storage needs handled by object storage. And with that, your server has become more scalable and prepared for an expanding user base.
-The links below provide additional information on how the setup between Mastodon and an S3-compatible storage works.
+The links below provide additional information on how the setup between Mastodon and Amazon S3-compatible storage works.
To keep learning about Mastodon, be sure to take a look at the official [Mastodon blog](https://blog.joinmastodon.org/) and the [Mastodon discussion board](https://discourse.joinmastodon.org/).
@@ -395,7 +395,7 @@ test-job-1:
By default, cached files are stored locally alongside your GitLab Runner Manager. But that option may not be the most efficient, especially as your GitLab pipelines become more complicated and your projects' storage needs expand.
-To remedy this, you can adjust your GitLab Runner configuration to use an S3-compatible object storage solution, like [Linode Object Storage](/docs/products/storage/object-storage/get-started/).
+To remedy this, you can adjust your GitLab Runner configuration to use an Amazon S3-compatible object storage solution, like [Linode Object Storage](/docs/products/storage/object-storage/get-started/).
These next steps show you how you can integrate a Linode Object Storage bucket with your GitLab Runner to store cached resources from CI/CD jobs.
@@ -20,7 +20,7 @@ external_resources:

## What is Minio?

-Minio is an open source, S3 compatible object store that can be hosted on a Linode. Deployment on a Kubernetes cluster is supported in both standalone and distributed modes. This guide uses [Kubespray](https://github.com/kubernetes-incubator/kubespray) to deploy a Kubernetes cluster on three servers running Ubuntu 16.04. Kubespray comes packaged with Ansible playbooks that simplify setup on the cluster. Minio is then installed in standalone mode on the cluster to demonstrate how to create a service.
+Minio is an open source, Amazon S3-compatible object store that can be hosted on a Linode. Deployment on a Kubernetes cluster is supported in both standalone and distributed modes. This guide uses [Kubespray](https://github.com/kubernetes-incubator/kubespray) to deploy a Kubernetes cluster on three servers running Ubuntu 16.04. Kubespray comes packaged with Ansible playbooks that simplify setup on the cluster. Minio is then installed in standalone mode on the cluster to demonstrate how to create a service.

## Before You Begin

@@ -411,6 +411,6 @@ Persistent Volumes(PV) are an abstraction in Kubernetes that represents a unit o

![Minio Login Screen](minio-login-screen.png)

-1. Minio has similar functionality to S3: file uploads, creating buckets, and storing other data.
+1. Minio has similar functionality to Amazon S3: file uploads, creating buckets, and storing other data.

![Minio Browser](minio-browser.png)
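
Because Minio speaks the S3 API, those same operations can also be driven programmatically. The short Boto3 sketch below creates a bucket and uploads a small object against a standalone Minio service; the service URL and credentials are placeholders for whatever your deployment exposes, not values from this guide.

```python
import boto3

# Placeholder endpoint and credentials; use the address and keys of your own Minio service.
s3 = boto3.client(
    "s3",
    endpoint_url="http://192.0.2.10:9000",
    aws_access_key_id="minio-access-key",
    aws_secret_access_key="minio-secret-key",
)

s3.create_bucket(Bucket="demo-bucket")                      # create a bucket
s3.put_object(Bucket="demo-bucket", Key="hello.txt",        # upload a small object
              Body=b"stored through the S3 API")
print([b["Name"] for b in s3.list_buckets()["Buckets"]])    # confirm the bucket is listed
```
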
@@ -13,7 +13,7 @@ external_resources:
- '[Linode Object Storage guides & tutorials](/docs/guides/platform/object-storage/)'
---

-Linode Object Storage is an S3-compatible service used for storing large amounts of unstructured data. This guide includes steps on how to migrate up to 100TB of static content from AWS S3 to Linode Object Storage using rclone, along with how to monitor your migration using rclone’s WebUI GUI.
+Linode Object Storage is an Amazon S3-compatible service used for storing large amounts of unstructured data. This guide includes steps on how to migrate up to 100TB of static content from AWS S3 to Linode Object Storage using rclone, along with how to monitor your migration using rclone’s WebUI GUI.

## Migration Considerations

@@ -37,7 +37,7 @@ Linode Object Storage is an S3-compatible service used for storing large amounts

There are two architecture options for completing a data migration from AWS S3 to Linode Object Storage. One of these architectures is required to be in place prior to initiating the data migration:

-**Architecture 1:** Utilizes an EC2 instance running rclone in the same region as the source S3 bucket. Data is transferred internally from the S3 bucket to the EC2 instance and then over the public internet from the EC2 instance to the target Linode Object Storage bucket.
+**Architecture 1:** Utilizes an EC2 instance running rclone in the same region as the source AWS S3 bucket. Data is transferred internally from the AWS S3 bucket to the EC2 instance and then over the public internet from the EC2 instance to the target Linode Object Storage bucket.

- **Recommended for:** speed of transfer, users with AWS platform familiarity

@@ -53,7 +53,7 @@ Rclone generally performs better when placed closer to the source data being cop

1. A source AWS S3 bucket with the content to be transferred.

-1. An AWS EC2 instance running rclone in the same region as the source S3 bucket. The S3 bucket communicates with the EC2 instance via VPC Endpoint within the AWS region. Your IAM policy should allow S3 access only via your VPC Endpoint.
+1. An AWS EC2 instance running rclone in the same region as the source AWS S3 bucket. The AWS S3 bucket communicates with the EC2 instance via VPC Endpoint within the AWS region. Your IAM policy should allow S3 access only via your VPC Endpoint.

1. Data is copied across the public internet from the AWS EC2 instance to a target Linode Object Storage bucket. This results in egress (outbound traffic) being calculated by AWS.

@@ -93,7 +93,7 @@ Rclone generally performs better when placed closer to the source data being cop
- Secret key
- Region ID

-- If using Architecture 1, there must be a [VPC gateway endpoint created](https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpoints-s3.html) for S3 in the same VPC where your EC2 instance is deployed. This should be the same region as your S3 bucket.
+- If using Architecture 1, there must be a [VPC gateway endpoint created](https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpoints-s3.html) for S3 in the same VPC where your EC2 instance is deployed. This should be the same region as your AWS S3 bucket.

- An **existing Linode Object Storage bucket** with:

@@ -194,7 +194,7 @@ Rclone generally performs better when placed closer to the source data being cop
#### Rclone Copy Command Breakdown
-- `aws:aws-bucket-name/`: The AWS remote provider and source S3 bucket. Including the slash at the end informs the `copy` command to include everything within the bucket.
+- `aws:aws-bucket-name/`: The AWS remote provider and source AWS S3 bucket. Including the slash at the end informs the `copy` command to include everything within the bucket.
- `linode:linode-bucket-name/`: The Linode remote provider and target Object Storage bucket.
@@ -12,7 +12,7 @@ external_resources:
- '[Linode Object Storage guides & tutorials](/docs/guides/platform/object-storage/)'
---

-Linode Object Storage is an S3-compatible service used for storing large amounts of unstructured data. This guide includes steps on how to migrate up to 100TB of static content from Azure Blob Storage to Linode Object Storage using rclone, along with how to monitor your migration using rclone’s WebUI GUI.
+Linode Object Storage is an Amazon S3-compatible service used for storing large amounts of unstructured data. This guide includes steps on how to migrate up to 100TB of static content from Azure Blob Storage to Linode Object Storage using rclone, along with how to monitor your migration using rclone’s WebUI GUI.

## Migration Considerations

@@ -305,4 +305,4 @@ There are several next steps to consider after a successful object storage migra
- **Confirm the changeover is functioning as expected.** Allow some time to make sure your updated workloads and jobs are interacting successfully with Linode Object Storage. Once you confirm everything is working as expected, you can safely delete the original source bucket and its contents.
-- **Take any additional steps to update your system for S3 compatibility.** Since the Azure Blob Storage API is not S3 compatible, you may need to make internal configuration changes to ensure your system is set up to communicate using S3 protocol. This means your system should be updated to use an S3-compatible [SDK](https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingAWSSDK.html) like [Boto3](https://aws.amazon.com/sdk-for-python/) or S3-compatible command line utility like [s3cmd](https://s3tools.org/s3cmd). The [AWS SDK](https://techdocs.akamai.com/cloud-computing/docs/using-the-aws-sdk-for-php-with-object-storage) can also be configured to function with Linode Object Storage.
+- **Take any additional steps to update your system for Amazon S3 compatibility.** Since the Azure Blob Storage API is not Amazon S3-compatible, you may need to make internal configuration changes to ensure your system is set up to communicate using S3 protocol. This means your system should be updated to use an Amazon S3-compatible [SDK](https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingAWSSDK.html) like [Boto3](https://aws.amazon.com/sdk-for-python/) or Amazon S3-compatible command line utility like [s3cmd](https://s3tools.org/s3cmd). The [AWS SDK](https://techdocs.akamai.com/cloud-computing/docs/using-the-aws-sdk-for-php-with-object-storage) can also be configured to function with Linode Object Storage.
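
As a hedged sketch of what that changeover can look like in code, the snippet below points Boto3 at a Linode Object Storage endpoint rather than the Azure Blob Storage API and lists the buckets it can reach. The endpoint URL and credential placeholders are assumptions used for illustration, not values taken from this guide.

```python
import boto3

# Placeholder endpoint and keys; use your own cluster URL and Object Storage access keys.
s3 = boto3.client(
    "s3",
    endpoint_url="https://us-east-1.linodeobjects.com",
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
)

# From here, ordinary S3-style calls talk to Linode Object Storage.
for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"])
```
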
@@ -12,7 +12,7 @@ external_resources:
- '[Linode Object Storage guides & tutorials](/docs/guides/platform/object-storage/)'
---

-Linode Object Storage is an S3-compatible service used for storing large amounts of unstructured data. This guide includes steps on how to migrate up to 100TB of static content from Google Cloud Storage to Linode Object Storage using rclone, along with how to monitor your migration using rclone’s WebUI GUI.
+Linode Object Storage is an Amazon S3-compatible service used for storing large amounts of unstructured data. This guide includes steps on how to migrate up to 100TB of static content from Google Cloud Storage to Linode Object Storage using rclone, along with how to monitor your migration using rclone’s WebUI GUI.

## Migration Considerations

@@ -310,4 +310,4 @@ There are several next steps to consider after a successful object storage migra
- **Confirm the changeover is functioning as expected.** Allow some time to make sure your updated workloads and jobs are interacting successfully with Linode Object Storage. Once you confirm everything is working as expected, you can safely delete the original source bucket and its contents.
-- **Take any additional steps to update your system for S3 compatibility.** You may need to make additional internal configuration changes to ensure your system is set up to communicate using S3 protocol. See Google’s documentation for [interoperability with other storage providers](https://cloud.google.com/storage/docs/interoperability).
+- **Take any additional steps to update your system for Amazon S3 compatibility.** You may need to make additional internal configuration changes to ensure your system is set up to communicate using S3 protocol. See Google’s documentation for [interoperability with other storage providers](https://cloud.google.com/storage/docs/interoperability).