Commit

Eliminate some specific clunky phrases
floren committed Jun 17, 2024
1 parent 4c24e0e commit 13921ec
Showing 4 changed files with 7 additions and 9 deletions.
2 changes: 1 addition & 1 deletion API/api.tex
@@ -61,7 +61,7 @@ \section{Accessing the Gravwell API}

\section{Direct Search API}
\index{API!direct search}
-The Gravwell Direct Query API is designed to provide atomic, REST-powered access to the Gravwell query system. This API allows for simple integration with external tools and systems that do not normally know how to interact with Gravwell. The API is designed to be as flexible as possible and support any tool that knows how to interact with an HTTP API.
+The Gravwell Direct Query API is designed to provide atomic, REST-powered access to the Gravwell query system. This API enables simple integration with external tools and systems that do not normally know how to interact with Gravwell. The API is designed to be as flexible as possible and support any tool that knows how to interact with an HTTP API.

The Direct Query API is authenticated and requires a valid Gravwell account with access to the Gravwell query system; a Gravwell token is the best way to access the API.
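A token-authenticated call to the Direct Query API can be sketched in a few lines. The following minimal Python sketch builds (but does not send) such a request; the `/api/search/direct` path, the `Gravwell-Token` header, and the `query`/`duration` form fields are illustrative assumptions, not verified API details.

```python
import urllib.parse
import urllib.request


def build_direct_query(base_url, token, query, duration="-1h"):
    """Build a token-authenticated Direct Query request.

    NOTE: the endpoint path, header name, and form fields below are
    illustrative assumptions, not confirmed Gravwell API details.
    """
    data = urllib.parse.urlencode({"query": query, "duration": duration}).encode()
    return urllib.request.Request(
        f"{base_url}/api/search/direct",
        data=data,
        headers={"Gravwell-Token": token},
        method="POST",
    )


req = build_direct_query("https://gravwell.example.com", "MY_TOKEN", "tag=default count")
# Sending would be: urllib.request.urlopen(req)
```

Because the API is plain HTTP, the equivalent request is a one-liner in curl or any other HTTP-capable tool.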

2 changes: 1 addition & 1 deletion Architecture/architecture.tex
@@ -242,7 +242,7 @@ \section{Replication}
\subsection{Online Replication}

Online replication requires that indexers communicate directly with one
-another and coordinate data replication. Online replication allows for
+another and coordinate data replication. Online replication provides
hot-failover, meaning that if an indexer fails the other indexers in the
replication group will detect the failure and serve the failed node's
data during a query.
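The hot-failover behavior described above can be illustrated with a toy sketch. This is not Gravwell's actual protocol; the class and method names are invented for illustration. Each node's shards are assumed to be replicated to its peers, so surviving peers can serve a failed node's data during a query.

```python
class ReplicationGroup:
    """Toy sketch of hot-failover in online replication (illustrative
    only): each indexer's shards are replicated to its peers, and when
    a node fails, surviving peers serve its data during queries."""

    def __init__(self, shards_by_node):
        self.shards_by_node = shards_by_node  # {node: [shard, ...]}
        self.down = set()

    def mark_failed(self, node):
        self.down.add(node)

    def query(self):
        served = []
        for node, shards in self.shards_by_node.items():
            if node in self.down:
                # Peers detect the failure and serve the failed node's
                # replicated shards, provided at least one peer survives.
                peers_alive = any(n not in self.down
                                  for n in self.shards_by_node if n != node)
                if not peers_alive:
                    continue
            served.extend(shards)
        return served


group = ReplicationGroup({"idx0": ["s0", "s1"], "idx1": ["s2"]})
group.mark_failed("idx1")
print(group.query())  # idx0 serves s2 from its replica: ['s0', 's1', 's2']
```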
4 changes: 1 addition & 3 deletions Indexers/indexers.tex
@@ -960,7 +960,7 @@ \section{Query Acceleration and Indexing}
A Gravwell well without any acceleration configuration will employ only
temporal indexing, which means that every entry is grouped according to a
timestamp that is indexed using a temporal index. The temporal index
-allows for specifying subsections of time without combing through data
+allows searches over subsets of time without combing through data
that isn't in the time region specified by the query. Wells can also be
configured to enable a secondary index which takes into account data
contents. The secondary indexes use feature extraction modules which
@@ -1384,8 +1384,6 @@ \section{Indexer Optimization}
As data is ingested into a Gravwell indexer it is grouped in storage
units called blocks. The more efficiently that like data can be
colocated into blocks, the more efficiently we can store and query data.
-Gravwell indexers allow for fine tuning maximum block sizes and the
-facilities used to generate those blocks.
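The grouping described above can be sketched as a toy block packer. The function below (names and layout invented for illustration; Gravwell's real block format differs) colocates like-tagged entries into size-bounded blocks:

```python
def pack_blocks(entries, max_block_size):
    """Toy sketch: colocate 'like' entries (same tag) into size-bounded
    blocks, mimicking how an indexer groups ingested data for storage.
    Illustrative only; not Gravwell's actual block format."""
    by_tag = {}
    for tag, data in entries:
        by_tag.setdefault(tag, []).append(data)

    blocks = []
    for tag, items in by_tag.items():
        block, size = [], 0
        for item in items:
            # Start a new block once this one would exceed the size cap.
            if block and size + len(item) > max_block_size:
                blocks.append((tag, block))
                block, size = [], 0
            block.append(item)
            size += len(item)
        if block:
            blocks.append((tag, block))
    return blocks


out = pack_blocks([("a", "1234"), ("b", "12"), ("a", "5678"), ("a", "9")], 8)
# Like-tagged data ends up colocated:
# [("a", ["1234", "5678"]), ("a", ["9"]), ("b", ["12"])]
```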

\begin{figure}
\includegraphics{images/prebuffer.png}
8 changes: 4 additions & 4 deletions Search/search.tex
@@ -402,7 +402,7 @@ \section{Inline Filtering}
acceleration. Search modules that support inline filtering know how to
communicate the filters to an indexer's acceleration engine, which can
enable dramatic speedups. Even when not filtering in a manner that can
-invoke the acceleration engine, inline filtering allows for fast type-native operations.
+invoke the acceleration engine, inline filtering still enables fast type-native operations.

Let's start by examining some modules that support inline filtering and
examine a query that would not invoke the accelerators and then adapt it
@@ -775,7 +775,7 @@ \section{Search Modules}
modules are great for performing analysis to identify
trends, deviations and abnormalities in very large datasets. Many
problems have been identified simply by counting log entries over time.
-Other search modules allow for plugging in logic which can allow for
+Other search modules can execute arbitrary user-specified logic for
very complicated processing. The \code{eval} module allows for arbitrarily
complex boolean logic, and the \code{anko} module lets you plug a
Turing-complete script into the pipeline. If there isn't a module that
@@ -1436,8 +1436,8 @@ \subsection{Chart Renderer}
values. If you would like to see more lines or fewer lines, you can add the \code{limit \textless{}n\textgreater{}}
argument, which tells the charting library
to not introduce the ``other'' grouping until it reaches the given limit
-of n values. The limit maximum specifies the total number of data sets for a category; if the limit is 4 there may be 3 keyed sets and 1 other group. The user interface for charting allows for a rapid
-transition between line, area, bar, pie, and donut charts.
+of n values. The limit maximum specifies the total number of data sets for a category; if the limit is 4 there may be 3 keyed sets and 1 other group. The user interface for charting allows rapid
+transitions between line, area, bar, pie, and donut charts.

The following query generates a chart showing the most common invalid usernames seen on incoming SSH connections--indicators of brute-forcing:

Expand Down
