This repository has been archived by the owner. It is now read-only.

Merge pull request #20 from HubbleStack/develop
Merge to master (prep for v2016.10.1)
basepi authored Oct 18, 2016
2 parents 1515416 + 32ea4e6 commit c7430b3
Showing 4 changed files with 116 additions and 64 deletions.
2 changes: 1 addition & 1 deletion FORMULA
@@ -1,7 +1,7 @@
name: hubblestack_nebula
os: RedHat, CentOS, Debian, Ubuntu
os_family: RedHat, Debian
version: 2016.7.1
version: 2016.9.1
release: 1
summary: HubbleStack Nebula
description: HubbleStack Nebula
72 changes: 20 additions & 52 deletions README.rst
@@ -53,8 +53,8 @@ repo for updates and bugfixes!)

.. code-block:: shell
wget https://spm.hubblestack.io/2016.7.1/hubblestack_nebula-2016.7.1-1.spm
spm local install hubblestack_nebula-2016.7.1-1.spm
wget https://spm.hubblestack.io/nebula/hubblestack_nebula-2016.9.1-1.spm
spm local install hubblestack_nebula-2016.9.1-1.spm
You should now be able to sync the new modules to your minion(s) using the
``sync_modules`` Salt utility:
@@ -63,19 +63,6 @@ You should now be able to sync the new modules to your minion(s) using the
salt \* saltutil.sync_modules
Copy the ``hubblestack_nebula.sls.orig`` into your Salt pillar, dropping the
``.orig`` extension and target it to selected minions.

.. code-block:: shell
base:
'*':
- hubblestack_nebula
.. code-block:: shell
salt \* saltutil.refresh_pillar
Once these modules are synced you are ready to schedule HubbleStack Nebula
queries.

@@ -100,18 +87,6 @@ it to the minions.
salt \* saltutil.sync_modules
Target the ``hubblestack_nebula.sls`` to selected minions.

.. code-block:: shell
base:
'*':
- hubblestack_nebula
.. code-block:: shell
salt \* saltutil.refresh_pillar
Once these modules are synced you are ready to schedule HubbleStack Nebula
queries.

@@ -120,42 +95,35 @@ queries.
Usage
=====

This module also requires pillar data to function. The default pillar key for
this data is ``nebula_osquery``. The queries themselves should be grouped
under one or more group identifiers. Usually, these identifiers will be
frequencies, such as ``fifteen_min`` or ``hourly`` or ``daily``. The module
targets the queries using these identifiers.

Your pillar data might look like this:
These queries have been designed to give detailed insight into system activity.

**hubble_nebula.sls**
**hubblestack_nebula/hubblestack_nebula_queries.yaml**

.. code-block:: yaml
nebula_osquery:
fifteen_min:
- query_name: running_procs
query: select p.name as process, p.pid as process_id, p.cmdline, p.cwd, p.on_disk, p.resident_size as mem_used, p.parent, g.groupname, u.username as user, p.path, h.md5, h.sha1, h.sha256 from processes as p left join users as u on p.uid=u.uid left join groups as g on p.gid=g.gid left join hash as h on p.path=h.path;
- query_name: established_outbound
query: select t.iso_8601 as _time, pos.family, h.*, ltrim(pos.local_address, ':f') as src, pos.local_port as src_port, pos.remote_port as dest_port, ltrim(remote_address, ':f') as dest, name, p.path as file_path, cmdline, pos.protocol, lp.protocol from process_open_sockets as pos join processes as p on p.pid=pos.pid left join time as t LEFT JOIN listening_ports as lp on lp.port=pos.local_port AND lp.protocol=pos.protocol LEFT JOIN hash as h on h.path=p.path where not remote_address='' and not remote_address='::' and not remote_address='0.0.0.0' and not remote_address='127.0.0.1' and port is NULL;
- query_name: listening_procs
query: select t.iso_8601 as _time, h.md5 as md5, p.pid, name, ltrim(address, ':f') as address, port, p.path as file_path, cmdline, root, parent from listening_ports as lp JOIN processes as p on lp.pid=p.pid left JOIN time as t JOIN hash as h on h.path=p.path WHERE not address='127.0.0.1';
- query_name: suid_binaries
query: select sb.*, t.iso_8601 as _time from suid_bin as sb join time as t;
hour:
- query_name: crontab
query: select c.*,t.iso_8601 as _time from crontab as c join time as t;
day:
- query_name: rpm_packages
query: select rpm.*, t.iso_8601 from rpm_packages as rpm join time as t;
fifteen_min:
- query_name: running_procs
query: SELECT p.name AS process, p.pid AS process_id, p.cmdline, p.cwd, p.on_disk, p.resident_size AS mem_used, p.parent, g.groupname, u.username AS user, p.path, h.md5, h.sha1, h.sha256 FROM processes AS p LEFT JOIN users AS u ON p.uid=u.uid LEFT JOIN groups AS g ON p.gid=g.gid LEFT JOIN hash AS h ON p.path=h.path;
- query_name: established_outbound
query: SELECT t.iso_8601 AS _time, pos.family, h.*, ltrim(pos.local_address, ':f') AS src, pos.local_port AS src_port, pos.remote_port AS dest_port, ltrim(remote_address, ':f') AS dest, name, p.path AS file_path, cmdline, pos.protocol, lp.protocol FROM process_open_sockets AS pos JOIN processes AS p ON p.pid=pos.pid LEFT JOIN time AS t LEFT JOIN (SELECT * FROM listening_ports) AS lp ON lp.port=pos.local_port AND lp.protocol=pos.protocol LEFT JOIN hash AS h ON h.path=p.path WHERE NOT remote_address='' AND NOT remote_address='::' AND NOT remote_address='0.0.0.0' AND NOT remote_address='127.0.0.1' AND port is NULL;
- query_name: listening_procs
query: SELECT t.iso_8601 AS _time, h.md5 AS md5, p.pid, name, ltrim(address, ':f') AS address, port, p.path AS file_path, cmdline, root, parent FROM listening_ports AS lp LEFT JOIN processes AS p ON lp.pid=p.pid LEFT JOIN time AS t LEFT JOIN hash AS h ON h.path=p.path WHERE NOT address='127.0.0.1';
- query_name: suid_binaries
query: SELECT sb.*, t.iso_8601 AS _time FROM suid_bin AS sb JOIN time AS t;
hour:
- query_name: crontab
query: SELECT c.*,t.iso_8601 AS _time FROM crontab AS c JOIN time AS t;
day:
- query_name: rpm_packages
query: SELECT rpm.name, rpm.version, rpm.release, rpm.source AS package_source, rpm.size, rpm.sha1, rpm.arch, t.iso_8601 FROM rpm_packages AS rpm JOIN time AS t;
.. _nebula_usage_schedule:

Schedule
--------

Nebula is designed to be used on a schedule. Here is a set of sample schedules
for use with the sample pillar data contained in this repo:
for use with the sample queries.

**hubble_nebula.sls (cont.)**

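The sample schedule itself is collapsed in this diff view. As a rough illustration only, a schedule pillar driving these query groups might look like the sketch below; the ``splunk_nebula_return`` returner name and the intervals are assumptions, so adjust them to match your deployment.

.. code-block:: yaml

    schedule:
      nebula_fifteen_min:
        function: nebula.queries
        seconds: 900                      # every fifteen minutes
        args:
          - fifteen_min
        returner: splunk_nebula_return    # assumed returner; replace with yours
        return_job: False
      nebula_hour:
        function: nebula.queries
        seconds: 3600
        args:
          - hour
        returner: splunk_nebula_return
        return_job: False
      nebula_day:
        function: nebula.queries
        seconds: 86400
        args:
          - day
        returner: splunk_nebula_return
        return_job: False

After updating the pillar, run ``salt \* saltutil.refresh_pillar`` so the minions pick up the new schedule.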
90 changes: 85 additions & 5 deletions _modules/nebula_osquery.py
@@ -30,27 +30,27 @@

import copy
import logging
import os
import sys
import yaml

import salt.utils
from salt.exceptions import CommandExecutionError

log = logging.getLogger(__name__)

__version__ = 'v2016.10.1'
__virtualname__ = 'nebula'


def __virtual__():
if salt.utils.is_windows():
return False, 'Windows not supported'
if 'osquery.query' not in __salt__:
return False, 'osquery not available'
return __virtualname__


def queries(query_group,
query_file='salt://hubblestack_nebula/hubblestack_nebula_queries.yaml',
verbose=False):
verbose=False,
report_version_with_day=True):
'''
Run the set of queries represented by ``query_group`` from the
configuration in the file query_file
@@ -73,6 +73,33 @@ def queries(query_group,
salt '*' nebula.queries hour verbose=True
salt '*' nebula.queries hour pillar_key=sec_osqueries
'''
if salt.utils.is_windows() or 'osquery.query' not in __salt__:
if query_group == 'day':
log.warning('osquery not installed on this host. Returning baseline data')
# Match the formatting of normal osquery results. Not super
# readable, but just add new dictionaries to the list as we need
# more data
ret = []
ret.append(
{'fallback_osfinger': {
'data': [{'osfinger': __grains__.get('osfinger', __grains__.get('osfullname'))}],
'result': True
}}
)
if 'pkg.list_pkgs' in __salt__:
ret.append(
{'fallback_pkgs': {
'data': [{'name': k, 'version': v} for k, v in __salt__['pkg.list_pkgs']().iteritems()],
'result': True
}}
)
if report_version_with_day:
ret.append(hubble_versions())
return ret
else:
log.debug('osquery not installed on this host. Skipping.')
return None

query_file = __salt__['cp.cache_file'](query_file)
with open(query_file, 'r') as fh:
query_data = yaml.safe_load(fh)
@@ -100,4 +127,57 @@ def queries(query_group,
else:
ret.append({name: query_ret})

if query_group == 'day' and report_version_with_day:
ret.append(hubble_versions())

return ret


def version():
'''
Report version of this module
'''
return __version__


def hubble_versions():
'''
Report version of all hubble modules as query
'''
versions = {}

# Nova
if 'hubble.version' in __salt__:
versions['nova'] = __salt__['hubble.version']()
else:
versions['nova'] = None

# Nebula
versions['nebula'] = version()

# Pulsar
if salt.utils.is_windows():
try:
sys.path.insert(0, os.path.dirname(__salt__['cp.cache_file']('salt://_beacons/win_pulsar.py')))
import win_pulsar
versions['pulsar'] = win_pulsar.__version__
except:
versions['pulsar'] = None
else:
try:
sys.path.insert(0, os.path.dirname(__salt__['cp.cache_file']('salt://_beacons/pulsar.py')))
import pulsar
versions['pulsar'] = pulsar.__version__
except:
versions['pulsar'] = None

# Quasar
try:
sys.path.insert(0, os.path.dirname(__salt__['cp.cache_file']('salt://_returners/splunk_nova_return.py')))
import splunk_nova_return
versions['quasar'] = splunk_nova_return.__version__
except:
versions['quasar'] = None

return {'hubble_versions': {'data': [versions],
'result': True}}
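For reference, given the fallback path and ``hubble_versions()`` added above, a ``day`` run on a host without osquery returns a list shaped roughly as follows (rendered here as YAML; the osfinger, package, and sibling-module version values are purely illustrative):

.. code-block:: yaml

    - fallback_osfinger:
        data:
          - osfinger: CentOS Linux-7      # illustrative value taken from __grains__
        result: True
    - fallback_pkgs:
        data:
          - name: openssl                 # illustrative entry from pkg.list_pkgs
            version: 1.0.1e-51.el7
        result: True
    - hubble_versions:
        data:
          - nebula: v2016.10.1
            nova: v2016.10.1              # illustrative; None when hubble.version is unavailable
            pulsar: v2016.10.1            # illustrative; None when the beacon cannot be imported
            quasar: v2016.10.1            # illustrative; None when the returner cannot be imported
        result: True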
16 changes: 10 additions & 6 deletions hubblestack_nebula/hubblestack_nebula_queries.yaml
@@ -1,15 +1,19 @@
fifteen_min:
- query_name: running_procs
query: select p.name as process, p.pid as process_id, p.cmdline, p.cwd, p.on_disk, p.resident_size as mem_used, p.parent, g.groupname, u.username as user, p.path, h.md5, h.sha1, h.sha256 from processes as p left join users as u on p.uid=u.uid left join groups as g on p.gid=g.gid left join hash as h on p.path=h.path;
query: SELECT p.name AS process, p.pid AS process_id, p.cmdline, p.cwd, p.on_disk, p.resident_size AS mem_used, p.parent, g.groupname, u.username AS user, p.path, h.md5, h.sha1, h.sha256 FROM processes AS p LEFT JOIN users AS u ON p.uid=u.uid LEFT JOIN groups AS g ON p.gid=g.gid LEFT JOIN hash AS h ON p.path=h.path;
- query_name: established_outbound
query: select t.iso_8601 as _time, pos.family, h.*, ltrim(pos.local_address, ':f') as src, pos.local_port as src_port, pos.remote_port as dest_port, ltrim(remote_address, ':f') as dest, name, p.path as file_path, cmdline, pos.protocol, lp.protocol from process_open_sockets as pos join processes as p on p.pid=pos.pid left join time as t LEFT JOIN listening_ports as lp on lp.port=pos.local_port AND lp.protocol=pos.protocol LEFT JOIN hash as h on h.path=p.path where not remote_address='' and not remote_address='::' and not remote_address='0.0.0.0' and not remote_address='127.0.0.1' and port is NULL;
query: SELECT t.iso_8601 AS _time, pos.family, h.*, ltrim(pos.local_address, ':f') AS src, pos.local_port AS src_port, pos.remote_port AS dest_port, ltrim(remote_address, ':f') AS dest, name, p.path AS file_path, cmdline, pos.protocol, lp.protocol FROM process_open_sockets AS pos JOIN processes AS p ON p.pid=pos.pid LEFT JOIN time AS t LEFT JOIN (SELECT * FROM listening_ports) AS lp ON lp.port=pos.local_port AND lp.protocol=pos.protocol LEFT JOIN hash AS h ON h.path=p.path WHERE NOT remote_address='' AND NOT remote_address='::' AND NOT remote_address='0.0.0.0' AND NOT remote_address='127.0.0.1' AND port is NULL;
- query_name: listening_procs
query: select t.iso_8601 as _time, h.md5 as md5, p.pid, name, ltrim(address, ':f') as address, port, p.path as file_path, cmdline, root, parent from listening_ports as lp left JOIN processes as p on lp.pid=p.pid left JOIN time as t left JOIN hash as h on h.path=p.path WHERE not address='127.0.0.1';
query: SELECT t.iso_8601 AS _time, h.md5 AS md5, p.pid, name, ltrim(address, ':f') AS address, port, p.path AS file_path, cmdline, root, parent FROM listening_ports AS lp LEFT JOIN processes AS p ON lp.pid=p.pid LEFT JOIN time AS t LEFT JOIN hash AS h ON h.path=p.path WHERE NOT address='127.0.0.1';
- query_name: suid_binaries
query: select sb.*, t.iso_8601 as _time from suid_bin as sb join time as t;
query: SELECT sb.*, t.iso_8601 AS _time FROM suid_bin AS sb JOIN time AS t;
hour:
- query_name: crontab
query: select c.*,t.iso_8601 as _time from crontab as c join time as t;
query: SELECT c.*,t.iso_8601 AS _time FROM crontab AS c JOIN time AS t;
day:
- query_name: rpm_packages
query: select rpm.name, rpm.version, rpm.release, rpm.source as package_source, rpm.size, rpm.sha1, rpm.arch, t.iso_8601 from rpm_packages as rpm join time as t;
query: SELECT rpm.name, rpm.version, rpm.release, rpm.source AS package_source, rpm.size, rpm.sha1, rpm.arch, t.iso_8601 FROM rpm_packages AS rpm JOIN time AS t;
- query_name: os_info
query: select * from os_version;
- query_name: interface_addresses
query: SELECT interface, address FROM interface_addresses WHERE NOT interface='lo';
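Additional query groups can be added to this file under any top-level key and targeted by name. The group and query below are hypothetical, included only to show the expected layout:

.. code-block:: yaml

    on_demand:
      - query_name: kernel_info
        query: SELECT k.version, k.path, t.iso_8601 AS _time FROM kernel_info AS k JOIN time AS t;

Such a group would then be run the same way as the built-in ones, for example ``salt \* nebula.queries on_demand``.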
