Commit

Update to version 2.1.2 (#269)
* Update to version v2.1.2

* chore: update doc for 2.1.2 release

---------

Co-authored-by: chenyuc <[email protected]>
owenCCY and chenyuc authored Mar 19, 2024
1 parent 8e2a146 commit e175d48
Showing 101 changed files with 748 additions and 48,328 deletions.
12 changes: 12 additions & 0 deletions CHANGELOG.md
@@ -5,6 +5,18 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

## [2.1.2] - 2024-03-19

### Fixed

- Resolved a bug where upgrading from versions earlier than 2.1.0 led to the loss of S3 notifications, preventing the proper collection of logs from the S3 buffer. #261
- Addressed a problem where including the "@timestamp" field in log configurations caused failures in creating index_templates, leading to an inability to write data to OpenSearch. #262
- Fixed a bug in the log processor Lambda due to the absence of the 'batch_size' variable, causing process failures. #242
- Solved a deployment issue with the Log Analytics Pipeline, which previously could not deploy cross-account Lambda pipelines. #227
- Corrected an issue with the ALB Service Log Parser that resulted in the omission of numerous log lines. #243
- Amended an inaccurate warning message displayed during pipeline creation with an existing index in OpenSearch. #260
- Amended an inaccurate error messaging when deleting an Instance Group in application log pipelines. #229

## [2.1.1] - 2023-12-05

### Fixed
4 changes: 2 additions & 2 deletions NOTICE.txt
@@ -14,7 +14,6 @@ This software includes third party software subject to the following copyrights:

binaryornot under BSD License
chardet under GNU Lesser General Public License v2 or later (LGPLv2+) (LGPL)
func-timeout under GNU Lesser General Public License v2 (LGPLv2) (LGPLv2)
numpy under The 3-Clause BSD License
pyarrow under Apache License, Version 2.0
pytest-httpserver under The MIT License
@@ -163,4 +162,5 @@ js-json-schema-inferrer under ISC License
@reduxjs/toolkit under The MIT License
redux-mock-store under The MIT License
@types/redux-mock-store under The MIT License
jsonschema-path under the Apache License, Version 2.0
pytest_httpserver under The MIT License
2 changes: 1 addition & 1 deletion README.md
@@ -21,7 +21,7 @@ The solution has the following features:

- **Codeless log processor**: supports log processor plugins developed by AWS. You are allowed to enrich the raw log data through a few clicks on the web console.

- **Out-of-box dashboard template**: offers a collection of reference designs of visualization templates, for both commonly used software such as Nginx and Apache HTTP Server, and AWS services such as Amazon S3 and Amazon CloudTrail.
- **Out-of-box dashboard template**: offers a collection of reference designs of visualization templates, for both commonly used software such as Nginx and Apache HTTP Server, and AWS services such as Amazon S3 and AWS CloudTrail.



1 change: 0 additions & 1 deletion docs/en/images

This file was deleted.

18 changes: 10 additions & 8 deletions docs/en/implementation-guide/architecture.md
@@ -89,8 +89,10 @@ The log pipeline runs the following workflow:
1. AWS service logs are stored in an Amazon S3 bucket (Log Bucket).
2. An event notification is sent to Amazon SQS using S3 Event Notifications when a new log file is created.
3. Amazon SQS initiates AWS Lambda.
4. AWS Lambda copies objects from the log bucket to the staging bucket.
5. The Log Processor, AWS Step Functions, processes raw log files stored in the staging bucket in batches. It converts them into Apache Parquet format and automatically partitions all incoming data based on criteria including time and region.
4. AWS Lambda gets objects from the Amazon S3 log bucket.
5. AWS Lambda puts the objects into the staging bucket.
6. The Log Processor, an AWS Step Functions workflow, processes raw log files stored in the staging bucket in batches.
7. The Log Processor converts the log data into Apache Parquet format and automatically partitions all incoming data based on criteria including time and Region.
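The steps above can be sketched as a minimal, illustrative Lambda handler. The bucket name, the `staging/` prefix, and the injected `s3` client are assumptions for illustration only, not the solution's actual code:

```python
import json

def handler(event, context, s3=None):
    """Illustrative sketch of steps 2-5: each SQS record wraps an S3 Event
    Notification; the Lambda copies each new log object into the staging
    bucket. In a real deployment 's3' would be a boto3 client created at
    module load; it is passed in here so the sketch is easy to test."""
    copied = []
    for sqs_record in event["Records"]:            # step 3: SQS invokes Lambda
        s3_event = json.loads(sqs_record["body"])  # step 2: S3 Event Notification payload
        for rec in s3_event.get("Records", []):
            bucket = rec["s3"]["bucket"]["name"]
            key = rec["s3"]["object"]["key"]
            s3.copy_object(                        # steps 4-5: get from log bucket, put to staging
                Bucket="my-staging-bucket",        # assumed staging bucket name
                Key=f"staging/{key}",
                CopySource={"Bucket": bucket, "Key": key},
            )
            copied.append(key)
    return copied
```

Copying via `copy_object` keeps the data movement server-side in S3, which is why the Lambda itself stays lightweight in this architecture.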

### Logs through Amazon Kinesis Data Streams

@@ -146,12 +148,12 @@ Centralized Logging with OpenSearch supports log analysis for application logs,
The log pipeline runs the following workflow:

1. [Fluent Bit](https://fluentbit.io/) works as the underlying log agent to collect logs from application servers and send them to an optional [Log Buffer](./applications/index.md#log-buffer), or to ingest them into the OpenSearch domain directly.

2. The Log Buffer triggers the Lambda (Log Processor) to run.

3. The log processor reads and processes the log records and ingests the logs into the OpenSearch domain.

4. Logs that fail to be processed are exported to an Amazon S3 bucket (Backup Bucket).
2. An event notification is sent to Amazon SQS using S3 Event Notifications when a new log file is created.
3. Amazon SQS initiates AWS Lambda.
4. AWS Lambda gets objects from the Amazon S3 log bucket.
5. AWS Lambda puts the objects into the staging bucket.
6. The Log Processor, an AWS Step Functions workflow, processes raw log files stored in the staging bucket in batches.
7. The Log Processor converts the log data into Apache Parquet format and automatically partitions all incoming data based on criteria including time and Region.
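The time- and Region-based partitioning in the last two steps can be pictured with a small helper that derives a Hive-style key for each incoming object. The `datalake/` layout below is an assumed example for illustration, not the solution's exact scheme:

```python
from datetime import datetime, timezone

def staging_partition_key(source_key: str, region: str, event_time: datetime) -> str:
    """Build a Hive-style partitioned key (illustrative layout) so query
    engines such as Athena can prune by region and date."""
    file_name = source_key.rsplit("/", 1)[-1]
    return (
        f"datalake/region={region}"
        f"/year={event_time.year:04d}"
        f"/month={event_time.month:02d}"
        f"/day={event_time.day:02d}"
        f"/{file_name}"
    )

key = staging_partition_key(
    "AWSLogs/123456789012/elasticloadbalancing/log1.gz",
    "us-east-1",
    datetime(2024, 3, 19, tzinfo=timezone.utc),
)
# key -> "datalake/region=us-east-1/year=2024/month=03/day=19/log1.gz"
```

Partition pruning on `region=` and the date keys is what makes infrequent-access queries over Parquet cheap, which is the point of converting and partitioning in the Log Processor.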

<br/>

6 changes: 3 additions & 3 deletions docs/en/implementation-guide/aws-services/cloudtrail.md
@@ -1,5 +1,5 @@
# Amazon CloudTrail Logs
Amazon CloudTrail monitors and records account activity across your AWS infrastructure. It outputs all the data to the specified S3 bucket or a CloudWatch log group.
# AWS CloudTrail Logs
AWS CloudTrail monitors and records account activity across your AWS infrastructure. It outputs all the data to the specified S3 bucket or a CloudWatch log group.
## Create log ingestion
You can create a log ingestion into Amazon OpenSearch Service either by using the Centralized Logging with OpenSearch console or by deploying a standalone CloudFormation stack.

Expand All @@ -10,7 +10,7 @@ You can create a log ingestion into Amazon OpenSearch Service either by using th
1. Sign in to the Centralized Logging with OpenSearch console.
2. In the navigation pane, under **Log Analytics Pipelines**, choose **Service Log**.
3. Choose **Create a log ingestion**.
4. In the **AWS Services** section, choose **Amazon CloudTrail**.
4. In the **AWS Services** section, choose **AWS CloudTrail**.
5. Choose **Next**.
6. Under **Specify settings**, for **Trail**, select one from the dropdown list. (Optional) If you are ingesting CloudTrail logs from another account, select a [linked account](../link-account/index.md) from the **Account** dropdown list first.
7. Under **Log Source**, select **S3** or **CloudWatch** as the log source.
@@ -2,7 +2,7 @@ The following table lists the supported AWS services and the corresponding featu

| AWS Service | Log Type | Log Location | Automatic Ingestion | Built-in Dashboard |
| ----------- | -------- |------------------ | ---------- | ---------- |
| Amazon CloudTrail | N/A | S3 | Yes | Yes |
| AWS CloudTrail | N/A | S3 | Yes | Yes |
| Amazon S3 | [Access logs](https://docs.aws.amazon.com/AmazonS3/latest/userguide/ServerLogs.html) | S3 | Yes | Yes |
| Amazon RDS/Aurora | [MySQL Logs](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_LogAccess.MySQL.LogFileSize.html) | CloudWatch Logs | Yes | Yes |
| Amazon CloudFront | [Standard access logs](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/AccessLogs.html) | S3 | Yes | Yes |
4 changes: 2 additions & 2 deletions docs/en/implementation-guide/aws-services/index.md
@@ -24,7 +24,7 @@ Most of the supported AWS services in Centralized Logging with OpenSearch offer built-in

In this chapter, you will learn how to create log ingestion and dashboards for the following AWS services:

- [Amazon CloudTrail](cloudtrail.md)
- [AWS CloudTrail](cloudtrail.md)
- [Amazon S3](s3.md)
- [Amazon RDS/Aurora](rds.md)
- [Amazon CloudFront](cloudfront.md)
Expand All @@ -45,7 +45,7 @@ When you deploy Centralized Logging with OpenSearch in one Region, the solution

The Region where the service resides is referred to as the “Source Region”, while the Region where the Centralized Logging with OpenSearch console is deployed is referred to as the “Logging Region”.

For Amazon CloudTrail, you can create a new trail that sends logs to an S3 bucket in the Logging Region, and you can find the trail in the list. To learn how to create a new trail, refer to [Creating a trail][cloudtrail].
For AWS CloudTrail, you can create a new trail that sends logs to an S3 bucket in the Logging Region, and you can find the trail in the list. To learn how to create a new trail, refer to [Creating a trail][cloudtrail].
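As a hedged sketch, the same trail could also be created programmatically with the CloudTrail API via boto3. The trail and bucket names are made up, and the client is passed in so the call sequence is easy to see; the console flow described in the guide remains the documented path:

```python
def create_logging_region_trail(cloudtrail, trail_name: str, bucket_name: str) -> str:
    """Create a multi-Region trail that delivers logs to an S3 bucket in the
    Logging Region, then start logging. 'cloudtrail' is a boto3 CloudTrail
    client; all names here are illustrative."""
    cloudtrail.create_trail(
        Name=trail_name,
        S3BucketName=bucket_name,   # the bucket must already grant CloudTrail write access
        IsMultiRegionTrail=True,
    )
    cloudtrail.start_logging(Name=trail_name)
    return trail_name
```

Note that `create_trail` only defines the trail; delivery does not begin until `start_logging` is called.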

For other services with logs located in S3 buckets, you can manually transfer logs (for example, using S3 Cross-Region Replication feature) to the Logging Region S3 bucket.

@@ -1,6 +1,6 @@
# Step 3: Ingest Amazon CloudTrail Logs
# Step 3: Ingest AWS CloudTrail Logs

You can build a log analytics pipeline to ingest Amazon CloudTrail logs.
You can build a log analytics pipeline to ingest AWS CloudTrail logs.

!!! important "Important"

Expand All @@ -9,7 +9,7 @@ You can build a log analytics pipeline to ingest Amazon CloudTrail logs.
1. Sign in to the Centralized Logging with OpenSearch Console.
2. In the navigation pane, select **AWS Service Log Analytics Pipelines**.
3. Choose **Create a log ingestion**.
4. In the **AWS Services** section, choose **Amazon CloudTrail**.
4. In the **AWS Services** section, choose **AWS CloudTrail**.
5. Choose **Next**.
6. Under **Specify settings**, for **Trail**, select one from the dropdown list.
7. Choose **Next**.
2 changes: 1 addition & 1 deletion docs/en/implementation-guide/getting-started/index.md
@@ -1,6 +1,6 @@
# Getting Started

After [deploying the solution](../deployment/index.md), refer to this section to quickly learn how to leverage Centralized Logging with OpenSearch for log ingestion (Amazon CloudTrail logs as an example), and log visualization.
After [deploying the solution](../deployment/index.md), refer to this section to quickly learn how to leverage Centralized Logging with OpenSearch for log ingestion (AWS CloudTrail logs as an example), and log visualization.

You can also choose to start with [Domain management](../domains/index.md) , then build [AWS Service Log Analytics Pipelines](../aws-services/index.md) and [Application Log Analytics Pipelines](../applications/index.md).

3 changes: 2 additions & 1 deletion docs/en/implementation-guide/release-notes.md
@@ -6,4 +6,5 @@
| Aug 2023 | Released version 2.0.0</br> <li> Added feature of ingesting logs from an S3 bucket continuously or on-demand</br> <li> Added log pipeline monitoring dashboard into the solution console</br> <li> Supported one-click enablement of pipeline alarms</br> <li> Added an option to automatically attach required IAM policies when creating an Instance Group</br> <li> Displayed an error message on the console when the installation of the log agent fails</br> <li> Updated the Application log pipeline creation process by allowing customers to specify a log source</br> <li> Added validations to the OpenSearch domain when importing a domain or selecting a domain to create a log pipeline</br> <li> Supported installing the log agent on AL2023 instances</br> <li> Supported ingesting WAF (associated with CloudFront) sampled logs to OpenSearch in Regions other than us-east-1</br> <li> Allowed the same index name in different OpenSearch domains |
| September 2023 | Released version 2.0.1</br>Fixed the following issues: <li> Automatically adjust log processor Lambda request's body size based on AOS instance type </br><li>When you create an application log pipeline and select Nginx as log format, the default sample dashboard option is set to "Yes" </br> <li>Monitoring page cannot show metrics when there is only one dot</br> <li> The time of the data point of the monitoring metrics does not match the time of the abscissa |
| Nov 2023 | Released version 2.1.0</br><li> Added Light Engine to provide an Athena-based serverless and cost-effective log analytics engine to analyze infrequent access logs </br><li> Added OpenSearch Ingestion to provide more log processing capabilities, with which OSI can provision compute resources (OCU) and pay per ingestion capacity </br> <li> Supported parsing logs in nested JSON format </br> <li> Supported ingesting CloudTrail logs manually from a specified bucket </br> <li> Fixed an issue where instances were not listed when creating an Instance Group </br> <li> Fixed an issue where an EC2 instance launched by the Auto Scaling group failed to pass the health check |
| Dec 2023 | Released version 2.1.1</br> Fixed the following issues: <li> An instance could be added to the same Instance Group more than once </br><li> Cannot deploy CLO in the UAE Region </br> <li> Log ingestion error in Light Engine when no time key is specified in the log config </br> |
| Mar 2024 | Released version 2.1.2</br>Fixed the following issues: <li> The upgrade from versions earlier than 2.1.0 leads to the loss of Amazon S3 notifications, preventing the proper collection of logs from the Amazon S3 buffer </br><li>Including the "@timestamp" field in log configurations leads to failures in creating index_templates and an inability to write data to Amazon OpenSearch </br> <li>Due to the absence of the 'batch_size' variable, process failures occur in the Log Processor Lambda</br> <li> The Log Analytics Pipeline could not deploy cross-account AWS Lambda pipelines </br> <li> An issue with the ALB Service Log Parser resulted in the omission of numerous log lines</br> <li>An inaccurate warning message is displayed during pipeline creation with an existing index in Amazon OpenSearch </br> <li> Incorrect error message occurs when deleting an instance group in Application Logs |
2 changes: 1 addition & 1 deletion docs/en/implementation-guide/solution-overview/features.md
@@ -4,7 +4,7 @@ The solution has the following features:

- **Codeless log processor**: supports log processor plugins developed by AWS. You are allowed to enrich the raw log data through a few steps on the web console.

- **Out-of-the-box dashboard template**: offers a collection of reference designs of visualization templates, for both commonly used software such as Nginx and Apache HTTP Server, and AWS services such as Amazon S3 and Amazon CloudTrail.
- **Out-of-the-box dashboard template**: offers a collection of reference designs of visualization templates, for both commonly used software such as Nginx and Apache HTTP Server, and AWS services such as Amazon S3 and AWS CloudTrail.



29 changes: 14 additions & 15 deletions docs/en/implementation-guide/trouble-shooting.md
@@ -168,24 +168,23 @@ sudo ln -s /usr/lib/x86_64-linux-gnu/libsasl2.so.2 /usr/lib/libsasl2.so.3

#### Amazon Linux 2023

##### x86-64:

```
wget https://europe.mirror.pkgbuild.com/core/os/x86_64/openssl-1.1-1.1.1.u-1-x86_64.pkg.tar.zst
unzstd openssl-1.1-1.1.1.u-1-x86_64.pkg.tar.zst
tar -xvf openssl-1.1-1.1.1.u-1-x86_64.pkg.tar
sudo cp usr/lib/libcrypto.so.1.1 /usr/lib64/libcrypto.so.1.1
sudo cp usr/lib/libssl.so.1.1 /usr/lib64/libssl.so.1.1

```

##### aarch64:

```
sudo su -
yum install -y wget perl unzip gcc zlib-devel
mkdir /tmp/openssl
cd /tmp/openssl
wget https://www.openssl.org/source/openssl-1.1.1s.tar.gz
tar xzvf openssl-1.1.1s.tar.gz
cd openssl-1.1.1s
./config --prefix=/usr/local/openssl11 --openssldir=/usr/local/openssl11 shared zlib
make
make install
echo /usr/local/openssl11/lib/ >> /etc/ld.so.conf
ldconfig
```

Alternatively, install the prebuilt OpenSSL 1.1 package:

```
wget https://eu.mirror.archlinuxarm.org/aarch64/core/openssl-1.1-1.1.1.t-1-aarch64.pkg.tar.xz
xz --decompress openssl-1.1-1.1.1.t-1-aarch64.pkg.tar.xz
tar -xvf openssl-1.1-1.1.1.t-1-aarch64.pkg.tar
sudo cp usr/lib/libcrypto.so.1.1 /usr/lib64/libcrypto.so.1.1
sudo cp usr/lib/libssl.so.1.1 /usr/lib64/libssl.so.1.1
```
8 changes: 7 additions & 1 deletion docs/en/stylesheets/extra.css
@@ -1 +1,7 @@
../../stylesheets/extra.css
.icon_check {
color: green;
}

.icon_cross {
color: red;
}
Binary file modified docs/images/architecture/logs-in-s3-light-engine.drawio.png
1 change: 0 additions & 1 deletion docs/zh/images

This file was deleted.

12 changes: 9 additions & 3 deletions docs/zh/implementation-guide/architecture.md
@@ -92,8 +92,10 @@ AWS services output logs to different destinations, including Amazon S3 buckets, Cl
1. AWS service logs are stored in an Amazon S3 bucket (Log Bucket).
2. When a new log file is created, an event notification is sent to Amazon SQS using S3 Event Notifications.
3. Amazon SQS triggers AWS Lambda.
4. AWS Lambda copies objects from the log bucket to the staging bucket.
5. The Log Processor, an AWS Step Functions workflow, processes raw log files stored in the staging bucket in batches. It converts them into Apache Parquet format and automatically partitions all incoming data based on criteria such as time and Region.
4. AWS Lambda gets objects from the Amazon S3 log bucket.
5. AWS Lambda puts the objects into the staging bucket.
6. The Log Processor, an AWS Step Functions workflow, processes raw log files stored in the staging bucket in batches.
7. The Log Processor converts the log data into Apache Parquet format and automatically partitions all incoming data based on criteria such as time and Region.


### Ingest logs through Amazon Kinesis Data Streams (KDS)
@@ -171,7 +173,11 @@ _Figure: Application log analytics architecture_

1. Fluent Bit works as the underlying log agent to collect logs from application servers and send them to an optional Log Buffer.
2. The Log Buffer triggers Lambda to copy objects from the log bucket to the staging bucket.
3. The Log Processor, an AWS Step Functions workflow, processes raw log files stored in the staging bucket in batches, converts them into Apache Parquet format, and automatically partitions all incoming data by criteria such as time and Region.
3. Amazon SQS initiates AWS Lambda.
4. AWS Lambda gets objects from the Amazon S3 log bucket.
5. AWS Lambda puts the objects into the staging bucket.
6. The Log Processor, an AWS Step Functions workflow, processes raw log files stored in the staging bucket in batches.
7. The Log Processor converts the log data into Apache Parquet format and automatically partitions all incoming data based on criteria such as time and Region.


### Logs from Syslog clients
Expand Down
4 changes: 2 additions & 2 deletions docs/zh/implementation-guide/aws-services/cloudtrail.md
@@ -1,5 +1,5 @@
# CloudTrail Logs
Amazon CloudTrail monitors and records account activity across your AWS infrastructure. It outputs all the data to the specified S3 bucket or a CloudWatch log group.
AWS CloudTrail monitors and records account activity across your AWS infrastructure. It outputs all the data to the specified S3 bucket or a CloudWatch log group.

## Create log ingestion
You can create a log ingestion into Amazon OpenSearch Service either by using the Centralized Logging with OpenSearch console or by deploying a standalone CloudFormation stack.
Expand All @@ -12,7 +12,7 @@ Amazon CloudTrail 监控和记录您的 AWS 基础设施中的账户活动。它
1. Sign in to the Centralized Logging with OpenSearch console.
2. In the navigation pane, under **Log Analytics Pipelines**, choose **AWS Service Log**.
3. Choose **Create a log ingestion**.
4. In the **AWS Services** section, choose **Amazon CloudTrail**.
4. In the **AWS Services** section, choose **AWS CloudTrail**.
5. Choose **Next**.
6. Under **Specify settings**, for **Trail**, select one from the dropdown list. (Optional) If you need to ingest logs across accounts, select a [linked AWS account](../link-account/index.md) from the **Account** dropdown list first.
7. Under **Log Source**, select **S3** or **CloudWatch** as the log source.
@@ -2,7 +2,7 @@

| AWS Service | Log Type | Log Location | Automatic Ingestion | Built-in Dashboard |
| ----------- | -------- |------------------ | ---------- | ---------- |
| Amazon CloudTrail | N/A | S3 | ✓ | ✓ |
| AWS CloudTrail | N/A | S3 | ✓ | ✓ |
| Amazon S3 | [Access logs](https://docs.aws.amazon.com/AmazonS3/latest/userguide/ServerLogs.html) | S3 | ✓ | ✓ |
| Amazon RDS/Aurora | [MySQL Logs](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_LogAccess.MySQL.LogFileSize.html) | CloudWatch Logs | ✓ | ✓ |
| Amazon CloudFront | [Standard access logs](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/AccessLogs.html) | S3 | ✓ | ✓ |
