diff --git a/API/api.tex b/API/api.tex
index d793889..1d5373e 100644
--- a/API/api.tex
+++ b/API/api.tex
@@ -61,7 +61,7 @@ \section{Accessing the Gravwell API}
 
 \section{Direct Search API}
 \index{API!direct search}
-The Gravwell Direct Query API is designed to provide atomic, REST-powered access to the Gravwell query system. This API allows for simple integration with external tools and systems that do not normally know how to interact with Gravwell. The API is designed to be as flexible as possible and support any tool that knows how to interact with an HTTP API.
+The Gravwell Direct Query API is designed to provide atomic, REST-powered access to the Gravwell query system. This API enables simple integration with external tools and systems that do not normally know how to interact with Gravwell. The API is designed to be as flexible as possible and support any tool that knows how to interact with an HTTP API.
 
 The Direct Query API is authenticated and requires a valid Gravwell account with access to the Gravwell query system; a Gravwell token is the best way to access the API.
 
diff --git a/Architecture/architecture.tex b/Architecture/architecture.tex
index c41cc80..f0b2849 100644
--- a/Architecture/architecture.tex
+++ b/Architecture/architecture.tex
@@ -242,7 +242,7 @@ \section{Replication}
 \subsection{Online Replication}
 
 Online replication requires that indexers communicate directly with one
-another and coordinate data replication. Online replication allows for
+another and coordinate data replication. Online replication provides
 hot-failover, meaning that if an indexer fails the other indexers in the
 replication group will detect the failure and serve the failed node's
 data during a query.
diff --git a/Indexers/indexers.tex b/Indexers/indexers.tex
index c58c782..e0a7790 100644
--- a/Indexers/indexers.tex
+++ b/Indexers/indexers.tex
@@ -960,7 +960,7 @@ \section{Query Acceleration and Indexing}
 A Gravwell well without any acceleration configuration will employ only
 temporal indexing, which means that every entry is grouped according to
 a timestamp that is indexed using a temporal index. The temporal index
-allows for specifying subsections of time without combing through data
+allows searches over subsets of time without combing through data
 that isn't in the time region specified by the query. Wells can also be
 configured to enable a secondary index which takes into account data
 contents. The secondary indexes use feature extraction modules which
@@ -1384,8 +1384,6 @@ \section{Indexer Optimization}
 As data is ingested into a Gravwell indexer it is grouped in storage
 units called blocks. The more efficiently that like data can be
 colocated into blocks, the more efficient we can store and query data.
-Gravwell indexers allow for fine tuning maximum block sizes and the
-facilities used to generate those blocks.
 
 \begin{figure}
 \includegraphics{images/prebuffer.png}
diff --git a/Search/search.tex b/Search/search.tex
index 1e01823..5708e66 100644
--- a/Search/search.tex
+++ b/Search/search.tex
@@ -402,7 +402,7 @@ \section{Inline Filtering}
 acceleration. Search modules that support inline filtering know how to
 communicate the filters to an indexer's acceleration engine, which can
 enable dramatic speedups. Even when not filtering in a manner that can
-invoke the acceleration engine, inline filtering allows for fast type-native operations.
+invoke the acceleration engine, inline filtering still enables fast type-native operations.
 
 Let's start by examining some modules that support inline filtering
 and examine a query that would not invoke the accelerators and then adapt it
@@ -775,7 +775,7 @@ \section{Search Modules}
 modules are great for performing analysis to identify trends, deviences
 and abnormalities in very large datasets. Many problems have been
 identified simply by counting log entries over time.
-Other search modules allow for plugging in logic which can allow for
+Other search modules can execute arbitrary user-specified logic for
 very complicated processing. The \code{eval} module allows for
 arbitrarily complex boolean logic, and the \code{anko} module lets you plug
 a Turing-complete script into the pipeline. If there isn't a module that
@@ -1436,8 +1436,8 @@ \subsection{Chart Renderer}
 values. If you would like to see more lines or fewer lines, you can add
 the \code{limit \textless{}n\textgreater{}} argument, which tells the charting
 library to not introduce the ``other'' grouping until it reaches the given limit
-of n values. The limit maximum specifies the total number of data sets for a category; if the limit is 4 there may be 3 keyed sets and 1 other group. The user interface for charting allows for a rapid
-transition between line, area, bar, pie, and donut charts.
+of n values. The limit maximum specifies the total number of data sets for a category; if the limit is 4 there may be 3 keyed sets and 1 other group. The user interface for charting allows rapid
+transitions between line, area, bar, pie, and donut charts.
 
 The following query generates a chart showing the most common invalid
 usernames seen on incoming SSH connections--indicators of brute-forcing: