Changes to accommodate uuid_indices including migration script and improve deployer process
ewolinetz committed Mar 31, 2016
1 parent 2d23373 commit 1e627fe
Showing 17 changed files with 1,001 additions and 765 deletions.
42 changes: 39 additions & 3 deletions README.md
@@ -268,6 +268,11 @@ Scale down your Fluentd instances to 0.

$ oc scale dc/logging-fluentd --replicas=0

Or, if your Fluentd is deployed using the daemonset controller, unlabel all of
your nodes.

$ oc label nodes --all logging-infra-fluentd-

Wait until they have properly terminated; this gives them time to flush their
current buffers and send any logs they were processing to Elasticsearch, which
helps prevent loss of data.
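
To confirm that the pods have terminated, you can poll the logging project until
none remain. This is a sketch that assumes the default `component` labels applied
by the deployer; adjust them if your pods are labeled differently:

$ while oc get pods -l component=fluentd --no-headers | grep -q .; do sleep 5; done
$ while oc get pods -l component=es --no-headers | grep -q .; do sleep 5; done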
@@ -287,8 +292,17 @@ Once your ES pods are confirmed to be terminated, we can pull in the latest
EFK images to use as described [here](https://docs.openshift.org/latest/install_config/upgrading/manual_upgrades.html#importing-the-latest-images),
replacing the default namespace with the namespace where logging was installed.
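
For example, assuming logging was installed in the `logging` project and the
default image stream names are in use, the images can be re-imported with:

$ for is in logging-elasticsearch logging-fluentd logging-kibana logging-auth-proxy; do oc import-image $is -n logging; done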

With the latest images in your repository, we can now rerun the deployer to
regenerate any components that are missing or have changed.

Be sure to delete your OAuth client first:

$ oc delete oauthclient --selector logging-infra=support

Then follow the same steps as before for running the deployer. After the
deployer completes, re-attach the persistent volumes you were using previously.
Next, we want to scale ES back up incrementally so that the cluster has time to
rebuild.
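
Re-attaching a volume can be sketched with `oc volume`. Here `logging-es-abc123`
and the claim name `logging-es-1` are hypothetical placeholders; substitute your
own deployment config and PVC names, and verify the volume name used by your ES
deployment config first:

$ oc volume dc/logging-es-abc123 --add --overwrite --name=elasticsearch-storage \
    --type=persistentVolumeClaim --claim-name=logging-es-1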

$ oc scale dc/logging-es-{unique_name} --replicas=1

@@ -304,4 +318,26 @@ recovered.
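
One way to confirm the cluster has recovered before scaling up the next instance
is to query the cluster health endpoint from inside an ES pod. The certificate
paths below are assumptions based on a typical deployment and may differ in
yours:

$ oc exec <es_pod_name> -- curl -s --cacert /etc/elasticsearch/keys/ca \
    --cert /etc/elasticsearch/keys/admin-cert --key /etc/elasticsearch/keys/admin-key \
    https://localhost:9200/_cluster/health?pretty

Wait for the reported status to leave red before continuing.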
We can now scale Kibana and Fluentd back up to their previous state. Since Fluentd
was shut down and allowed to push its remaining records to ES in the previous
steps, it can now pick back up where it left off with no loss of logs -- so long
as the log files that were not yet read in are still available on the node.

Note:
If your previous deployment did not use a daemonset to schedule Fluentd pods,
you will now need to label the nodes on which Fluentd should be deployed.

$ oc label nodes <node_name> logging-infra-fluentd=true

Or, to deploy Fluentd to all of your nodes:

$ oc label nodes --all logging-infra-fluentd=true
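
You can then verify that a Fluentd pod was scheduled on each labeled node:

$ oc get pods -l component=fluentd -o wide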

With this latest version, Kibana displays indices differently in order to
prevent users from accessing the logs of previously created projects that have
since been deleted.

Due to this change, your old logs will not appear automatically. To migrate your
old indices to the new format, rerun the deployer with `-v MODE=migrate` in addition
to your prior flags. This must be run while your ES cluster is up, since the
migration script needs to connect to it to make changes.
Note: This only impacts non-operations logs; operations logs will appear the
same as in previous versions. There should be minimal performance impact to ES
while the migration runs, and it will not perform an install.
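
A migration run might look like the following; the hostnames are examples, and
you should reuse the parameters from your original deployer invocation:

$ oc process logging-deployer-template -n openshift \
    -v KIBANA_HOSTNAME=kibana.example.com,PUBLIC_MASTER_URL=https://localhost:8443,MODE=migrate \
    | oc create -f -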
43 changes: 13 additions & 30 deletions deployment/README.md
@@ -65,6 +65,14 @@ For examples in this document we will assume the `logging` project.
You can use the `default` or another project if you want. This
implementation has no need to run in any specific project.

## Create missing templates

If your installation did not create templates in the `openshift`
namespace, the `logging-deployer-template` and `logging-deployer-account-template`
templates may not exist. In that case you can create them with the following:

$ oc create -n openshift -f https://raw.githubusercontent.com/openshift/origin-aggregated-logging/v0.2/deployment/deployer.yaml ...
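
You can confirm that both templates now exist with:

$ oc get templates -n openshift | grep logging-deployer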

## Create the Deployer Secret

Security parameters for the logging infrastructure
@@ -98,20 +106,14 @@ An invocation supplying a properly signed Kibana cert might be:
## Create Supporting ServiceAccounts

The deployer must run under a service account defined as follows:

$ oc process -n openshift logging-deployer-account-template | oc create -f -
$ oc policy add-role-to-user edit --serviceaccount logging-deployer
$ oc policy add-role-to-user daemonset-admin --serviceaccount logging-deployer
$ oadm policy add-cluster-role-to-user oauth-editor \
    system:serviceaccount:logging:logging-deployer

Note: change `:logging:` above to match the project name.

The policy manipulation is required in order for the deployer pod to
create secrets, templates, and deployments in the project. By default
@@ -156,12 +158,6 @@ You run the deployer by instantiating a template. Here is an example with some parameters:
-v KIBANA_HOSTNAME=kibana.example.com,PUBLIC_MASTER_URL=https://localhost:8443 \
| oc create -f -


This creates a deployer pod and prints its name. Wait until the pod
is running; this can take up to a few minutes to retrieve the deployer
image from its registry. You can watch it with:
@@ -179,19 +175,6 @@ are given below.

## Deploy the templates created by the deployer

### Supporting definitions

Create the supporting definitions from template (you must be cluster admin):

$ oc process logging-support-template | oc create -f -

Tip: Check the output to make sure that all objects were created
successfully. If any were not, it is probably because one or more
already existed from a previous deployment (potentially in a different
project). You can delete them all before trying again:

$ oc process logging-support-template | oc delete -f -

### ElasticSearch

The deployer creates the number of ElasticSearch instances specified by
