Update to v8.0.0-beta1
List of changes impacting docker-elk:

- [logstash]: The output to Elasticsearch is handled as a data stream.

  Starting with v8.0.0, the `elasticsearch` output plugin for Logstash
  sends log data to a data stream instead of `logstash-*` indices by
  default. The default data stream is named `logs-generic-default`.
  docker-elk remains unopinionated and simply uses Elastic's defaults,
  as it always has, so users who prefer to retain the old behaviour
  need to explicitly opt out of data streams in their Logstash
  pipelines (see the sketch below).

  Refs:
  - https://www.elastic.co/guide/en/elasticsearch/reference/current/data-streams.html
  - https://www.elastic.co/guide/en/logstash/current/plugins-outputs-elasticsearch.html#plugins-outputs-elasticsearch-data-streams
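
  For reference, a minimal opt-out sketch for such a pipeline could look
  like the following. It is not part of this commit; `data_stream` and
  `index` are options of the `elasticsearch` output plugin, and the
  index name shown is only an illustration:

    output {
      elasticsearch {
        hosts => "elasticsearch:9200"
        user => "elastic"
        password => "changeme"
        # Write to classic time-based indices instead of the default data stream.
        data_stream => "false"
        index => "logstash-%{+yyyy.MM.dd}"
      }
    }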

- [logstash]: The `host` field injected by some input plugins is now
  automatically renamed to `host.name`.

  Because the output data is now handled as a data stream rather than
  written to `logstash-*` indices, the index template created by
  Logstash no longer applies. Instead, the built-in `logs` template
  applies and validates the data against the Elastic Common Schema
  (ECS), in which `host` is a reserved object field. The rename is
  performed by a new filter in `logstash/pipeline/logstash.conf` (see
  the diff below).

  Ref: https://www.elastic.co/guide/en/ecs/current/ecs-host.html
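
  As a quick check of which template now applies, the built-in `logs`
  index template can be inspected through the Elasticsearch API. This is
  a read-only sketch, not part of this commit; it assumes the stack's
  default port mapping and the `elastic` user:

    $ curl -u 'elastic:<password>' 'http://localhost:9200/_index_template/logs?pretty'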

- [logstash]: The legacy monitoring data collection is now disabled.

  This feature had been deprecated since v7.9.0 and was removed in
  v8.0.0. The corresponding `xpack.monitoring.*` settings are removed
  from `logstash/config/logstash.yml` (see the diff below).

  Ref: https://www.elastic.co/guide/en/logstash/current/monitoring-internal-collection-legacy.html

- [enterprise-search]: Kibana is now the management interface for
  Enterprise Search, and the only one available moving forward.

  The old standalone Enterprise Search interface was removed in v8.0.0.

  Ref: https://www.elastic.co/guide/en/enterprise-search/current/user-interfaces.html

- [elasticsearch]: The `elasticsearch-setup-passwords` command-line
  tool was deprecated in favour of the new `elasticsearch-reset-password`
  tool.

  Passwords for built-in users must now be generated one by one.

  Ref: https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-passwords.html
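
  For example, a random password can be generated for every built-in
  user in one pass. This is only a sketch based on the commands added to
  the README; it assumes the Compose service is named `elasticsearch`,
  as in this repository:

    $ for user in elastic kibana_system logstash_system beats_system apm_system remote_monitoring_user; do
        docker-compose exec -T elasticsearch bin/elasticsearch-reset-password --batch --user "$user"
      done
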
antoineco committed Nov 11, 2021
1 parent 8055143 commit b161def
Showing 11 changed files with 92 additions and 43 deletions.
2 changes: 1 addition & 1 deletion .env
@@ -1 +1 @@
ELK_VERSION=7.15.2
ELK_VERSION=8.0.0-beta1
32 changes: 21 additions & 11 deletions .github/workflows/scripts/elasticsearch-setup-passwords.exp
@@ -1,7 +1,7 @@
#!/usr/bin/expect -f

# List of expected users with dummy password
set user "(elastic|apm_system|kibana_system|logstash_system|beats_system|remote_monitoring_user)"
set users {"xelastic" "kibana_system" "logstash_system" "beats_system" "apm_system" "remote_monitoring_user"}
set password "testpasswd"

# Find elasticsearch container id
@@ -12,17 +12,27 @@ if { [string match "swarm" $MODE] } {
set cid [exec docker ps -q -f label=com.docker.compose.service=elasticsearch]
}

set cmd "docker exec -it $cid bin/elasticsearch-setup-passwords interactive -s -b -u http://localhost:9200"
foreach user $users {
set cmd "docker exec -it $cid bin/elasticsearch-reset-password --batch --user $user -i"

spawn {*}$cmd
spawn {*}$cmd

expect {
-re "(E|Ree)nter password for \\\[$user\\\]: " {
send "$password\r"
exp_continue
expect {
-re "(E|Re-e)nter password for \\\[$user\\\]: " {
send "$password\r"
exp_continue
}
timeout {
puts "\ntimed out waiting for input"
exit 4
}
eof
}
eof
}

lassign [wait] pid spawnid os_error_flag value
exit $value
lassign [wait] pid spawnid os_error_flag value

if {$value != 0} {
if {$os_error_flag == 0} { puts "exit status: $value" } else { puts "errno: $value" }
exit $value
}
}
4 changes: 2 additions & 2 deletions .github/workflows/scripts/run-tests-core.sh
@@ -31,7 +31,7 @@ curl -X POST -D- "http://${ip_kb}:5601/api/saved_objects/index-pattern" \
-H 'Content-Type: application/json' \
-H "kbn-version: ${ELK_VERSION}" \
-u elastic:testpasswd \
-d '{"attributes":{"title":"logstash-*","timeFieldName":"@timestamp"}}'
-d '{"attributes":{"title":"logs-generic-default","timeFieldName":"@timestamp"}}'

log 'Searching index pattern via Kibana API'
response="$(curl "http://${ip_kb}:5601/api/saved_objects/_find?type=index-pattern" -s -u elastic:testpasswd)"
@@ -67,7 +67,7 @@ curl -X POST "http://${ip_es}:9200/_refresh" -u elastic:testpasswd \
-s -w '\n'

log 'Searching message in Elasticsearch'
response="$(curl "http://${ip_es}:9200/logstash-*/_search?q=message:dockerelk&pretty" -s -u elastic:testpasswd)"
response="$(curl "http://${ip_es}:9200/logs-generic-default/_search?q=message:dockerelk&pretty" -s -u elastic:testpasswd)"
echo "$response"
count="$(jq -rn --argjson data "${response}" '$data.hits.total.value')"
if (( count != 1 )); then
2 changes: 1 addition & 1 deletion .github/workflows/scripts/run-tests-logspout.sh
@@ -39,7 +39,7 @@ declare -i was_retried=0

# retry for max 60s (30*2s)
for _ in $(seq 1 30); do
response="$(curl "http://${ip_es}:9200/logstash-*/_search?q=docker.image:%22docker-elk_logspout%22%20AND%20message:%22logspout%20gliderlabs%22~3&pretty" -s -u elastic:testpasswd)"
response="$(curl "http://${ip_es}:9200/logs-generic-default/_search?q=docker.image:%22docker-elk_logspout%22%20AND%20message:%22logspout%20gliderlabs%22~3&pretty" -s -u elastic:testpasswd)"

set +u # prevent "unbound variable" if assigned value is not an integer
count="$(jq -rn --argjson data "${response}" '$data.hits.total.value')"
8 changes: 4 additions & 4 deletions .github/workflows/update.yml
@@ -12,13 +12,13 @@ jobs:
strategy:
matrix:
release:
- 8.x
- 7.x
- 6.x
include:
- release: 7.x
- release: 8.x
branch: main
- release: 6.x
branch: release-6.x
- release: 7.x
branch: release-7.x

steps:
- uses: actions/setup-node@v2
45 changes: 33 additions & 12 deletions README.md
@@ -1,6 +1,6 @@
# Elastic stack (ELK) on Docker

[![Elastic Stack version](https://img.shields.io/badge/Elastic%20Stack-7.15.2-00bfb3?style=flat&logo=elastic-stack)](https://www.elastic.co/blog/category/releases)
[![Elastic Stack version](https://img.shields.io/badge/Elastic%20Stack-8.0.0--beta1-00bfb3?style=flat&logo=elastic-stack)](https://www.elastic.co/blog/category/releases)
[![Build Status](https://github.com/deviantony/docker-elk/workflows/CI/badge.svg?branch=main)](https://github.com/deviantony/docker-elk/actions?query=workflow%3ACI+branch%3Amain)
[![Join the chat at https://gitter.im/deviantony/docker-elk](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/deviantony/docker-elk?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)

@@ -24,7 +24,7 @@ Based on the official Docker images from Elastic:

Other available stack variants:

* [`tls`](https://github.com/deviantony/docker-elk/tree/tls): TLS encryption enabled in Elasticsearch.
* [`tls`](https://github.com/deviantony/docker-elk/tree/tls): TLS encryption enabled in Elasticsearch
* [`searchguard`](https://github.com/deviantony/docker-elk/tree/searchguard): Search Guard support

---
@@ -125,7 +125,7 @@ instructions from the [documentation][mac-filesharing] to add more locations.
### Version selection

This repository tries to stay aligned with the latest version of the Elastic stack. The `main` branch tracks the current
major version (7.x).
major version (8.x).

To use a different version of the core Elastic components, simply change the version number inside the `.env` file. If
you are upgrading an existing stack, please carefully read the note in the next section.
@@ -135,6 +135,7 @@ performing a stack upgrade.**

Older major versions are also supported on separate branches:

* [`release-7.x`](https://github.com/deviantony/docker-elk/tree/release-7.x): 7.x series
* [`release-6.x`](https://github.com/deviantony/docker-elk/tree/release-6.x): 6.x series
* [`release-5.x`](https://github.com/deviantony/docker-elk/tree/release-5.x): 5.x series (End-Of-Life)

@@ -179,11 +180,31 @@ users][builtin-users] instead for increased security.

1. Initialize passwords for built-in users

The commands below generate random passwords for all 6 built-in users. Take note of them.

```console
$ docker-compose exec -T elasticsearch bin/elasticsearch-reset-password --batch --user elastic
```

```console
$ docker-compose exec -T elasticsearch bin/elasticsearch-setup-passwords auto --batch
$ docker-compose exec -T elasticsearch bin/elasticsearch-reset-password --batch --user kibana_system
```

Passwords for all 6 built-in users will be randomly generated. Take note of them.
```console
$ docker-compose exec -T elasticsearch bin/elasticsearch-reset-password --batch --user logstash_system
```

```console
$ docker-compose exec -T elasticsearch bin/elasticsearch-reset-password --batch --user beats_system
```

```console
$ docker-compose exec -T elasticsearch bin/elasticsearch-reset-password --batch --user apm_system
```

```console
$ docker-compose exec -T elasticsearch bin/elasticsearch-reset-password --batch --user remote_monitoring_user
```

1. Unset the bootstrap password (_optional_)

@@ -192,9 +213,8 @@ users][builtin-users] instead for increased security.

1. Replace usernames and passwords in configuration files

Use the `kibana_system` user (`kibana` for releases <7.8.0) inside the Kibana configuration file
(`kibana/config/kibana.yml`) and the `logstash_system` user inside the Logstash configuration file
(`logstash/config/logstash.yml`) in place of the existing `elastic` user.
Use the `kibana_system` user inside the Kibana configuration file (`kibana/config/kibana.yml`) in place of the
existing `elastic` user.

Replace the password for the `elastic` user inside the Logstash pipeline file (`logstash/pipeline/logstash.conf`).

@@ -246,8 +266,9 @@ When Kibana launches for the first time, it is not configured with any index pat
the Kibana web UI.*

Navigate to the _Discover_ view of Kibana from the left sidebar. You will be prompted to create an index pattern. Enter
`logstash-*` to match Logstash indices then, on the next page, select `@timestamp` as the time filter field. Finally,
click _Create index pattern_ and return to the _Discover_ view to inspect your log entries.
`logs-generic-default` to match the data stream backing Logstash indices then, on the next page, select `@timestamp` as
the time filter field. Finally, click _Create index pattern_ and return to the _Discover_ view to inspect your log
entries.

Refer to [Connect Kibana with Elasticsearch][connect-kibana] and [Creating an index pattern][index-pattern] for detailed
instructions about the index pattern configuration.
@@ -259,9 +280,9 @@ Create an index pattern via the Kibana API:
```console
$ curl -XPOST -D- 'http://localhost:5601/api/saved_objects/index-pattern' \
-H 'Content-Type: application/json' \
-H 'kbn-version: 7.15.2' \
-H 'kbn-version: 8.0.0-beta1' \
-u elastic:<your generated elastic password> \
-d '{"attributes":{"title":"logstash-*","timeFieldName":"@timestamp"}}'
-d '{"attributes":{"title":"logs-generic-default","timeFieldName":"@timestamp"}}'
```

The created pattern will automatically be marked as the default index pattern as soon as the Kibana UI is opened for the
6 changes: 3 additions & 3 deletions docker-stack.yml
@@ -3,7 +3,7 @@ version: '3.3'
services:

elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:7.15.2
image: docker.elastic.co/elasticsearch/elasticsearch:8.0.0-beta1
ports:
- "9200:9200"
- "9300:9300"
@@ -25,7 +25,7 @@ services:
replicas: 1

logstash:
image: docker.elastic.co/logstash/logstash:7.15.2
image: docker.elastic.co/logstash/logstash:8.0.0-beta1
ports:
- "5044:5044"
- "5000:5000"
@@ -44,7 +44,7 @@ services:
replicas: 1

kibana:
image: docker.elastic.co/kibana/kibana:7.15.2
image: docker.elastic.co/kibana/kibana:8.0.0-beta1
ports:
- "5601:5601"
configs:
13 changes: 13 additions & 0 deletions extensions/enterprise-search/README.md
@@ -57,6 +57,17 @@ add the following setting:
xpack.security.authc.api_key.enabled: true
```

### Configure the Enterprise Search host in Kibana

Kibana acts as the [management interface][enterprisesearch-ui] to Enterprise Search.

To enable the management experience for Enterprise Search, modify the Kibana configuration file in
[`kibana/config/kibana.yml`][config-kbn] and add the following setting:

```yaml
enterpriseSearch.host: http://enterprise-search:3002
```

### Start the server

To include Enterprise Search in the stack, run Docker Compose from the root of the repository with an additional command
@@ -129,6 +140,8 @@ Docker container: [Running Enterprise Search Using Docker][enterprisesearch-dock
[enterprisesearch-config]: https://www.elastic.co/guide/en/enterprise-search/current/configuration.html
[enterprisesearch-docker]: https://www.elastic.co/guide/en/enterprise-search/current/docker.html
[enterprisesearch-docs]: https://www.elastic.co/guide/en/enterprise-search/current/index.html
[enterprisesearch-ui]: https://www.elastic.co/guide/en/enterprise-search/current/user-interfaces.html

[es-security]: https://www.elastic.co/guide/en/elasticsearch/reference/current/security-settings.html#api-key-service-settings
[config-es]: ../../elasticsearch/config/elasticsearch.yml
[config-kbn]: ../../kibana/config/kibana.yml
3 changes: 2 additions & 1 deletion extensions/enterprise-search/config/enterprise-search.yml
@@ -15,8 +15,9 @@ secret_management.encryption_keys:
# IP address Enterprise Search listens on
ent_search.listen_host: 0.0.0.0

# URL at which users reach Enterprise Search
# URL at which users reach Enterprise Search / Kibana
ent_search.external_url: http://localhost:3002
kibana.host: http://localhost:5601

# Elasticsearch URL and credentials
elasticsearch.host: http://elasticsearch:9200
7 changes: 0 additions & 7 deletions logstash/config/logstash.yml
@@ -3,10 +3,3 @@
## https://github.com/elastic/logstash/blob/master/docker/data/logstash/config/logstash-full.yml
#
http.host: "0.0.0.0"
xpack.monitoring.elasticsearch.hosts: [ "http://elasticsearch:9200" ]

## X-Pack security credentials
#
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.username: elastic
xpack.monitoring.elasticsearch.password: changeme
13 changes: 12 additions & 1 deletion logstash/pipeline/logstash.conf
@@ -10,11 +10,22 @@ input {

## Add your filters / logstash plugins configuration here

filter {
# Both the `beats` and `tcp` inputs inject a top-level [host] field,
# which string format is incompatible with the structured top-level
# [host] field reserved by the Elastic Common Schema (ECS).
# Ref. https://www.elastic.co/guide/en/ecs/current/ecs-host.html
if [host] and ![host][name] {
mutate {
rename => { "[host]" => "[host][name]" }
}
}
}

output {
elasticsearch {
hosts => "elasticsearch:9200"
user => "elastic"
password => "changeme"
ecs_compatibility => disabled
}
}
