Merge pull request #100 from the-overengineer/fix/ls-spelling-fixes
Spelling/phrasing fixes
the-overengineer authored Apr 27, 2021
2 parents dfb5e4c + 5e8a4ec commit 784da2e
Showing 32 changed files with 51 additions and 12,772 deletions.
5 changes: 4 additions & 1 deletion .gitignore
Original file line number Diff line number Diff line change
@@ -56,4 +56,7 @@ server.pid

# Default sigar library provision location.
native/
-.jekyll-cache
+.jekyll-cache
+
+# Tags file
+.tags
12,724 changes: 0 additions & 12,724 deletions .tags

This file was deleted.

@@ -17,7 +17,7 @@ have finally arrived to a point where we can share a working feature with you!
Initially our plans were to deliver support for remoting and cluster in two releases from now 0.2.6/0.3.6 but during the
last few months we received requests for this feature from many of our users, so we decided to deliver a experimental
version earlier than what we initially expected and, if possible, ship this at least as experimental in 0.2.5/0.3.5.
-Here is a brief summary of what is included in the snapshots detailed bellow:
+Here is a brief summary of what is included in the snapshots detailed below:

* __First, it doesn't blow up in your face anymore__, previous to this snapshot users of remoting and cluster might experiment
errors because some parts of Kamon were not `Serializable`, making it impossible for them to use Kamon in such projects.
2 changes: 1 addition & 1 deletion _posts/2014-11-11-kamon-0.2.5-and-0.3.5-has-landed.md
@@ -69,7 +69,7 @@ Improvements to our Core

The most notable improvement, besides fixing a few issues reported by our users, is that now our Segments have a more
clearly defined API that works nicely and that became a solid base to other improvements related to segments that you
-will see bellow. We also moved from using `Option[TraceContext]` to the null object idiom, this helps to make a cleaner
+will see below. We also moved from using `Option[TraceContext]` to the null object idiom, this helps to make a cleaner
API for Trace Context manipulation and hopefully will make the life of our Java users less painful. If you are in the
Java land stay tuned, we will certainly show more love to you in the days to come!

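The `Option[TraceContext]`-to-null-object change described in the hunk above can be sketched as follows. This is an illustrative toy only; the type and method names here are hypothetical and do not reflect Kamon's actual API:

```scala
// Illustrative sketch of the null object idiom: a single inert "empty"
// instance replaces Option.None, so callers can always invoke methods
// without pattern-matching on Option or risking a null.
trait TraceContext {
  def isEmpty: Boolean
  def rename(newName: String): Unit
}

object EmptyTraceContext extends TraceContext {
  def isEmpty: Boolean = true
  def rename(newName: String): Unit = () // deliberate no-op
}

final class DefaultTraceContext(var name: String) extends TraceContext {
  def isEmpty: Boolean = false
  def rename(newName: String): Unit = name = newName
}
```

The payoff hinted at in the text is that Java callers get a plain object with safe no-op methods instead of `Option` plumbing.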
@@ -100,7 +100,7 @@ now we have 5 URLs that we can hit:
### Bootstrapping Scalatra with Kamon ###

We will need to bootstrap `Scalatra` and hook `Kamon` into it's lifecycle and the best place for this is using `ScalatraBootstrap`'s
-`init` and `destroy` hooks as shown bellow:
+`init` and `destroy` hooks as shown below:

{% code_block scala %}

2 changes: 1 addition & 1 deletion docs/latest/core/advanced/metric-instruments.md
@@ -51,7 +51,7 @@ The HdrHistogram mixes linear and exponential bucket systems to produce a unique
measurements with configurable precision and fixed memory and CPU costs, regardless of the number of measurements
recorded. Under the hood, the HdrHistogram stores all the data in a single array of `long`s as occurrences of a given value,
adjusted with the precision configuration provided when creating the HdrHistogram. For example, if we were to store a
-recording of 10 units in an HdrHistogram with an underlying array similar to the one shown in the diagram bellow, all
+recording of 10 units in an HdrHistogram with an underlying array similar to the one shown in the diagram below, all
that's needed is to add one to the value in the ninth bucket.

<img class="img-fluid" src="/assets/img/diagrams/hdr-layout.png">
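The single-array idea described in this hunk can be sketched with a toy recorder. The real HdrHistogram layers linear and exponential bucketing with precision scaling on top of this; the purely linear layout below only illustrates the "record = one array increment" cost model:

```scala
// Toy sketch: occurrence counts stored in one Array[Long], indexed from
// the recorded value. Recording a value costs a bounds check plus a
// single increment, regardless of how many values were recorded before.
final class ToyHistogram(maxValue: Int) {
  private val occurrences = new Array[Long](maxValue)

  def record(value: Int): Unit = {
    require(value >= 1 && value <= maxValue, s"value $value out of range")
    occurrences(value - 1) += 1 // e.g. recording 10 bumps the bucket at index 9
  }

  def countOf(value: Int): Long = occurrences(value - 1)
}
```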
2 changes: 1 addition & 1 deletion docs/latest/core/utilities.md
@@ -25,7 +25,7 @@ Additionally, a matcher type prefix can be added to select a different type of m
- `glob:` specifies that the remaining of the string is a glob-like pattern.
- `regex:` specifies that the remaining of the string is a regular expression pattern.

-After filters have been defined they can be applied by using the `Kamon.filter(...)` function as shown bellow:
+After filters have been defined they can be applied by using the `Kamon.filter(...)` function as shown below:


{% code_example %}
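As a rough sketch of how the `glob:` prefix described in this hunk relates to the `regex:` form, a glob-like pattern can be translated into a regular expression. This is one plausible mapping for illustration only; Kamon's actual matcher implementation may differ:

```scala
// Hedged sketch: translate a glob-like pattern into a Java regex,
// where '*' matches any run of characters and '?' matches exactly one.
def globToRegex(glob: String): String =
  glob.flatMap {
    case '*'                               => ".*"
    case '?'                               => "."
    case c if "\\.[]{}()+-^$|".contains(c) => "\\" + c // escape regex metacharacters
    case c                                 => c.toString
  }

def globMatches(pattern: String, input: String): Boolean =
  input.matches(globToRegex(pattern))
```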
4 changes: 2 additions & 2 deletions docs/latest/guides/how-to/log-trace-id-and-context-info.md
@@ -42,7 +42,7 @@ There are four built-in converters in the Logback module, which should be added
</configuration>
```

-Once they are there, use the conversion words to include pieces of the Context in your log patters as shown bellow.
+Once they are there, use the conversion words to include pieces of the Context in your log patters as shown below.


### Trace and Span Identifiers
@@ -71,7 +71,7 @@ conversion word configured above in the desired position of the log pattern:

Including Context Tags and Entries is very similar to including the trace and span identifiers, but the conversion words
must be provided with a parameter that specifies the name of the tag or entry that you want to include in the log. For
-example, the configuration bellow will write the value of the `user.id` tag and the `someKey` entry in the logs:
+example, the configuration below will write the value of the `user.id` tag and the `someKey` entry in the logs:

```xml
<configuration scan="false" debug="false">
4 changes: 2 additions & 2 deletions docs/latest/guides/how-to/start-with-the-kanela-agent.md
@@ -25,12 +25,12 @@ In a Nutshell

All you need to do is download the latest release from our [Kanela releases][kanela-releases]{:target="_blank" rel="noopener"}
repository and start your JVM with the `-javaagent:path-to-kanela.jar` JVM option. For example, if you are running your
-application from IntelliJ IDEA you will need to add the `-javaagent` option to the "VM options" section as shown bellow:
+application from IntelliJ IDEA you will need to add the `-javaagent` option to the "VM options" section as shown below:

<img class="img-fluid rounded" src="/assets/img/agent/intellij-javaagent.png">

And that is pretty much it. Even though it is a simple task, it can be challenging in different environments so please,
-follow the instructions bellow when:
+follow the instructions below when:
1. [Running applications from SBT](#running-from-sbt)
2. [Running a Play Framework application on development mode](#play-framework)
3. [Packaging applications with sbt-native-packager](#using-sbt-native-packager)
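For environments where the JVM is launched directly rather than from an IDE, the same `-javaagent` option from this hunk goes on the command line. The paths here are placeholders, not real locations; substitute your own Kanela jar and application jar:

{% code_block %}
java -javaagent:/path/to/kanela-agent.jar -jar your-application.jar
{% endcode_block %}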
12 changes: 6 additions & 6 deletions docs/latest/guides/migration/from-1.x-to-2.0.md
@@ -7,7 +7,7 @@ Migrating from Kamon 1.x to 2.0
===============================

Most of the work put into Kamon `2.0` has been geared towards having cleaner, easier to use APIs and instrumentation
-mechanisms and some of those improvements resulted in breaking changes that we are enumerating bellow. The amount of
+mechanisms and some of those improvements resulted in breaking changes that we are enumerating below. The amount of
effort needed to upgrade can vary based on whether you were just using plain Kamon to gather standard metrics or you
were actively using the APIs to manage context and create your own metrics and traces, but in general this should not be
a big effort and you are like to remove lines rather than add.
@@ -21,7 +21,7 @@ we'll give you a hand and update this guide accordingly.

The `Kamon.init()` method takes care of a few common tasks performed during initialization:
- It will try to attach the instrumentation agent to the current JVM if you have the bundle dependency (more on that
-bellow).
+below).
- It will scan your classpath for modules and automatically start them.
- It can optionally take a new `Config` instance to be used by Kamon.

@@ -68,7 +68,7 @@ Kamon.registerModule("reporter name", reporter);
The `refine` method has been renamed to `withTag`, which return a new instrument with the specified tags. This also
allows for chaining calls to `withTag` and the parent tags will be preserved.

-Also, it was possible to call instrument actions directly on a metric (see the example bellow) which would result in
+Also, it was possible to call instrument actions directly on a metric (see the example below) which would result in
recording values on an instrument without any tags. In order to keep the separation between a metric and its instruments
as clearly defined as possible, those APIs are no longer available and if you were doing this, you will need to
explicitly call `withoutTags` to get the instrument without tags:
@@ -88,7 +88,7 @@ counter.withTag("zone", "east").increment()

#### Metrics changes

-Gauges changed as show bellow:
+Gauges changed as show below:

{% code_block scala %}
// Kamon 1.x
@@ -111,7 +111,7 @@ hold both entries and tags, and since tags are made out of known types (String,
them without additional intervention across HTTP and Binary propagation channels.

Context instances are immutable and you can create a new Context that includes or overrides certain tag using the
-`withTag` function as show bellow:
+`withTag` function as show below:

{% code_block scala %}

@@ -123,7 +123,7 @@ val context = Context.Empty
{% endcode_block %}

Remember though, creating a Context has nothing to do with making it current or propagating it, make sure you use the
-appropriate functions for that (see more bellow).
+appropriate functions for that (see more below).


### Tags and Metrics Names
2 changes: 1 addition & 1 deletion docs/latest/instrumentation.md
@@ -7,7 +7,7 @@ Instrumentation Modules
=======================

All the instrumentation modules are included in the Kamon Bundle so, out of the box, you get instrumentation for
-everything bellow! If you are not using the Kamon Bundle please refer to each module's Manual Installation section.
+everything below! If you are not using the Kamon Bundle please refer to each module's Manual Installation section.

- **[Akka](./akka/)** instrumentation provides context propagation, metrics and tracing for Akka actors, routers,
dispatchers, actor systems, cluster sharding and remoting components.
8 changes: 4 additions & 4 deletions docs/latest/instrumentation/akka-http.md
@@ -17,7 +17,7 @@ by the instrumentation is:
will also get HTTP endpoint metrics via the `span.processing-time` metric.
3. Lower level HTTP server metrics will be collected for the HTTP server side.

-Bellow, you will find a more detailed descriptions of each feature and relevant configuration settings in case you want
+Below, you will find a more detailed descriptions of each feature and relevant configuration settings in case you want
to customize the behavior, but you don't need to learn any of it start using the instrumentation! Just start to your
application with the instrumentation agent and you are good to go.

@@ -37,7 +37,7 @@ happening under the hood and how to modify the instrumentation behavior.
The instrumentation will automatically read/write Kamon's Context from/tp HTTP headers in all HTTP requests and set that
Context as current while requests are being processed, enabling higher level features like distributed tracing. If you
want to change the propagation channel or completely disable Context propagation you can use the `propagation` settings
-bellow:
+below:

```hcl
kamon.instrumentation.akka.http {
@@ -86,7 +86,7 @@ Request Tracing

HTTP Server and Client requests processed by the application will be automatically traced, which in turn means that
metrics can (and will) be recorded for the HTTP operations. You can control whether tracing is enabled or not under the
-`tracking` section bellow, as well as controlling whether Span Metrics will be recorded when tracing is enabled:
+`tracking` section below, as well as controlling whether Span Metrics will be recorded when tracing is enabled:

```hcl
kamon.instrumentation.akka.http {
@@ -119,7 +119,7 @@ by adding one of the following modes to each setting:
tags, that's one of the reasons the URL is only set as a span tag.

Also, it is possible to make Kamon copy tags from the current Context into the HTTP operation Spans by using the
-`from-context` section. In the example bellow we are showing the default settings for the Akka HTTP instrumentation and
+`from-context` section. In the example below we are showing the default settings for the Akka HTTP instrumentation and
additionally, we are instructing Kamon to copy the `requestID` tag as a Span tag for both the client and server side
instrumentation.

@@ -48,7 +48,7 @@ can know for sure what failed. This implies a bit more of coding, but certainly
Nevertheless, sometimes you can't introduce these code changes but you still need to know what is going on. If you get
to this point, Kamon offers you the possibility to gather information about the call sites where the ask pattern was
used and log a warning message with this info in case the ask times out. You can enable this timeout warning by setting
-the `kamon.akka.ask-pattern-timeout-warning` configuration key. This warning comes in two flavors as described bellow:
+the `kamon.akka.ask-pattern-timeout-warning` configuration key. This warning comes in two flavors as described below:


### Lightweight Warning ###
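The configuration key named in this hunk might be set like this. A hedged sketch: the key comes from the text above, but the accepted values are assumed from the section headings, so verify them against your Kamon version's reference configuration:

```hcl
kamon.akka {
  # Assumed values: "off" disables the warning; "lightweight" and
  # "heavyweight" correspond to the two flavors described in this section.
  ask-pattern-timeout-warning = lightweight
}
```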
4 changes: 2 additions & 2 deletions docs/latest/instrumentation/akka/context-propagation.md
@@ -15,11 +15,11 @@ Besides the metrics recording side of our Akka integration, we also provide byte
automatically propagate Kamon's Context through certain specific events in order to keep the principle of having a
single and predictable place to look for the "current" context.

-In the examples bellow we will explore the conditions under which Kamon will automatically propagate the currently
+In the examples below we will explore the conditions under which Kamon will automatically propagate the currently
available context. Please note that even while in these examples we are explicitly wrapping the code sections with a new
`Context`, it is very unlikely that you will need to do so yourself if you are using other supported toolkits such
as Akka HTTP and Play Framework. You will commonly need a context to be present only when the first event is generated
-and then Kamon will take care of propagating the `Context` to all related events, under the conditions explained bellow.
+and then Kamon will take care of propagating the `Context` to all related events, under the conditions explained below.


### Tell, ! and Forward ###
2 changes: 1 addition & 1 deletion docs/latest/instrumentation/akka/tracing.md
@@ -61,7 +61,7 @@ Customizing Spans
-----------------

In case you would like to modify the Span automatically created by instrumentation, you can access it using the
-`Kamon.currentSpan()` shortcut and do anything you want with it! This example bellow adds a custom tag to the Span:
+`Kamon.currentSpan()` shortcut and do anything you want with it! This example below adds a custom tag to the Span:

{% code_example %}
{% language scala instrumentation/akka/src/main/scala/kamon/examples/akka/scala/ContextPropagation.scala tag:customizing-a-span label:"Customizing the Spans" %}
2 changes: 1 addition & 1 deletion docs/latest/instrumentation/executors.md
@@ -12,7 +12,7 @@ Executor Service Instrumentation

This module lets you collect metrics from an Executor Service, be it a Thread Pool Executor or a Fork Join Pool. To
start tracking an Executor Service you will need to register it with the executors module by calling
-`ExecutorsInstrumentation.instrument(...)` as shown bellow:
+`ExecutorsInstrumentation.instrument(...)` as shown below:

{% code_example %}
{% language scala instrumentation/executors/src/main/scala/kamon/examples/executors/FuturesAndExecutors.scala tag:registering-a-executor label:"Registering a Executor Service" %}
2 changes: 1 addition & 1 deletion docs/latest/instrumentation/logback.md
@@ -44,7 +44,7 @@ Converters need to be registered manually by adding these conversion rulers to y
```

Once your conversion rules are in place, all you need to do is decide where you include them in your log patter. For
-example, in the pattern bellow we are including the current Trace and Span identifiers, as well as the value of the
+example, in the pattern below we are including the current Trace and Span identifiers, as well as the value of the
`user.id` Context tag:


8 changes: 4 additions & 4 deletions docs/latest/instrumentation/play-framework.md
@@ -26,7 +26,7 @@ applications if you plan to use Kamon while running on development mode. Please
Started</a> guide if you need help with the setup.
{% endalert %}

-Bellow, you will find a more detailed descriptions of each feature and relevant configuration settings in case you want
+Below, you will find a more detailed descriptions of each feature and relevant configuration settings in case you want
to customize the behavior, but you don't need to learn any of it start using the instrumentation! Just start to your
application with the instrumentation agent and you are good to go.

@@ -46,7 +46,7 @@ happening under the hood and how to modify the instrumentation behavior.
The instrumentation will automatically read/write Kamon's Context from/tp HTTP headers in all HTTP requests and set that
Context as current while requests are being processed, enabling higher level features like distributed tracing. If you
want to change the propagation channel or completely disable Context propagation you can use the `propagation` settings
-bellow:
+below:

```hcl
kamon.instrumentation.play.http {
@@ -95,7 +95,7 @@ Request Tracing

HTTP Server and Client requests processed by the application will be automatically traced, which in turn means that
metrics can (and will) be recorded for the HTTP operations. You can control whether tracing is enabled or not under the
-`tracking` section bellow, as well as controlling whether Span Metrics will be recorded when tracing is enabled:
+`tracking` section below, as well as controlling whether Span Metrics will be recorded when tracing is enabled:

```hcl
kamon.instrumentation.play.http {
@@ -128,7 +128,7 @@ by adding one of the following modes to each setting:
tags, that's one of the reasons the URL is only set as a span tag.

Also, it is possible to make Kamon copy tags from the current Context into the HTTP operation Spans by using the
-`from-context` section. In the example bellow we are showing the default settings for the Akka HTTP instrumentation and
+`from-context` section. In the example below we are showing the default settings for the Akka HTTP instrumentation and
additionally, we are instructing Kamon to copy the `requestID` tag as a Span tag for both the client and server side
instrumentation.

2 changes: 1 addition & 1 deletion docs/latest/reporters/statsd.md
@@ -80,7 +80,7 @@ will make the `kamon-statsd` module scale all timing measurements to millisecond
StatsD is widely used and there are many integrations available, even alternative implementations that can receive UDP
messages with the StatsD protocol, you just have to pick the option that best suits you. For our internal testing we
choose to use [Graphite] as the StatsD backend and [Grafana] to create beautiful dashboards with very useful metrics.
-Have an idea of how your metrics data might look like in Grafana with the screenshot bellow or use our [docker image] to
+Have an idea of how your metrics data might look like in Grafana with the screenshot below or use our [docker image] to
get up and running in a few minutes and see it with your own metrics!

TODO: Update the dashboards and images.
2 changes: 1 addition & 1 deletion docs/v1/core/advanced/metric-instruments.md
@@ -51,7 +51,7 @@ The HdrHistogram mixes linear and exponential bucket systems to produce a unique
measurements with configurable precision and fixed memory and CPU costs, regardless of the number of measurements
recorded. Under the hood, the HdrHistogram stores all the data in a single array of `long`s as occurrences of a given value,
adjusted with the precision configuration provided when creating the HdrHistogram. For example, if we were to store a
-recording of 10 units in an HdrHistogram with an underlying array similar to the one shown in the diagram bellow, all
+recording of 10 units in an HdrHistogram with an underlying array similar to the one shown in the diagram below, all
that's needed is to add one to the value in the ninth bucket.

<img class="img-fluid" src="/assets/img/diagrams/hdr-layout.png">
2 changes: 1 addition & 1 deletion docs/v1/core/utilities.md
@@ -25,7 +25,7 @@ Additionally, a matcher type prefix can be added to select a different type of m
- `glob:` specifies that the remaining of the string is a glob-like pattern.
- `regex:` specifies that the remaining of the string is a regular expression pattern.

-After filters have been defined they can be applied by using the `Kamon.filter(...)` function as shown bellow:
+After filters have been defined they can be applied by using the `Kamon.filter(...)` function as shown below:


{% code_example %}
2 changes: 1 addition & 1 deletion docs/v1/guides/getting-started.md
@@ -62,7 +62,7 @@ Enabling Instrumentation

The bytecode instrumentation is powered by [AspectJ][aspectj], all you need to do is add the `-javaagent` JVM
startup parameter pointing to the `aspectjweaver.jar` file from the latest [AspectJ distribution][aspectjweaver] as shown
-bellow:
+below:

{% code_block %}
java -javaagent:/path/to/aspectjweaver.jar ...