Merge pull request #375 from stratosphereips/develop
Slips v1.0.6
AlyaGomaa authored Jun 30, 2023
2 parents 6d58b91 + 2c83cb0 commit 87c52b6
Showing 109 changed files with 10,092 additions and 9,072 deletions.
8 changes: 4 additions & 4 deletions .github/workflows/CI-production-testing.yml
@@ -59,10 +59,10 @@ jobs:
run: ./slips.py -cc

- name: Integration tests
run: python3 -m pytest -s tests/integration_tests/test_dataset.py -n 3 -p no:warnings -vv
run: python3 -m pytest -s tests/integration_tests/test_dataset.py -p no:warnings -vv

- name: Config file tests
run: python3 -m pytest -s tests/integration_tests/test_config_files.py -n 2 -p no:warnings -vv
run: python3 -m pytest -s tests/integration_tests/test_config_files.py -p no:warnings -vv

- name: Upload Artifact
# run this job whether the above jobs failed or passed
@@ -155,7 +155,7 @@ jobs:
git reset --hard
git pull & git checkout origin/develop
redis-server --daemonize yes
python3 -m pytest -s tests/integration_tests/test_dataset.py -n 4 -p no:warnings -vv
python3 -m pytest -s tests/integration_tests/test_dataset.py -p no:warnings -vv
- name: Run config file integration tests inside docker
uses: addnab/docker-run-action@v3
@@ -168,7 +168,7 @@ jobs:
git reset --hard
git pull & git checkout origin/develop
redis-server --daemonize yes
python3 -m pytest -s tests/integration_tests/test_config_files.py -n 2 -p no:warnings -vv
python3 -m pytest -s tests/integration_tests/test_config_files.py -p no:warnings -vv
- name: Upload Artifact
# run this job whether the above jobs failed or passed
4 changes: 2 additions & 2 deletions .github/workflows/CI-staging.yml
@@ -43,10 +43,10 @@ jobs:
run: ./slips.py -cc

- name: Integration tests
run: python3 -m pytest -s tests/integration_tests/test_dataset.py -n 3 -p no:warnings -vv
run: python3 -m pytest -s tests/integration_tests/test_dataset.py -p no:warnings -vv

- name: Config file tests
run: python3 -m pytest -s tests/integration_tests/test_config_files.py -n 2 -p no:warnings -vv
run: python3 -m pytest -s tests/integration_tests/test_config_files.py -p no:warnings -vv

- name: Upload Artifact
# run this job whether the above jobs failed or passed
1 change: 1 addition & 0 deletions .gitignore
@@ -171,3 +171,4 @@ modules/flowmldetection/model.bin
output/
config-live-macos-*
dataset-private/*
appendonly.aof
18 changes: 18 additions & 0 deletions CHANGELOG.md
@@ -1,3 +1,21 @@
-1.0.6 (June 2023):
- Store flows in a SQLite database in the output directory instead of Redis.
- 55% decrease in RAM usage.
- Support the labeling of flows based on Slips detections.
- Add support for exporting labeled flows in JSON and TSV formats.
- Code improvements. Change the structure of all modules.
- Graceful shutdown of all modules, thanks to @danieltherealyang.
- Print the number of evidence generated by Slips when running on PCAPs and interfaces.
- Improve the detection of ports that belong to a specific organization.
- Fix bugs in the CYST module.
- Fix URLhaus evidence description.
- Fix the freezing progress bar issue.
- Fix a problem starting Slips in Docker on Linux.
- Ignore ICMP scans if the flow has ICMP type 3.
- Improve the whitelist. Slips now checks for whitelisted attackers and victims in the generated evidence.
- Add embedded documentation in the web interface, thanks to @shubhangi013.
- Improve the selection of random Redis ports when using the -m parameter.

-1.0.5 (May 2023):
- Fix missing flows due to modules stopping before the processing is done.
- Code improvements. Change the structure of all modules.
3 changes: 3 additions & 0 deletions Makefile
@@ -1,5 +1,8 @@
bash:
docker run --rm -it slips /bin/bash

shell:
docker exec -it slips /bin/bash

image:
docker build -t slips -f docker/ubuntu-image/Dockerfile .
2 changes: 1 addition & 1 deletion README.md
@@ -1,5 +1,5 @@
<h1 align="center">
Slips v1.0.5
Slips v1.0.6
</h1>

[Documentation](https://stratospherelinuxips.readthedocs.io/en/develop/)[Features](https://stratospherelinuxips.readthedocs.io/en/develop/features.html)[Installation](#installation)[Authors](#people-involved)[Contributions](#contribute-to-slips)
2 changes: 1 addition & 1 deletion VERSION
@@ -1 +1 @@
1.0.5
1.0.6
8 changes: 3 additions & 5 deletions checker.py
@@ -1,4 +1,3 @@
import shutil
import psutil
import sys
import os
@@ -146,11 +145,10 @@ def check_given_flags(self):

def delete_blocking_chain(self):
# start only the blocking module process and the db
from slips_files.core.database.database import __database__
from multiprocessing import Queue, active_children
from modules.blocking.blocking import Module
from modules.blocking.blocking import Blocking

blocking = Module(Queue())
blocking = Blocking(Queue())
blocking.start()
blocking.delete_slipsBlocking_chain()
# kill the blocking module manually because we can't
@@ -162,7 +160,7 @@ def clear_redis_cache(self):
print('Deleting Cache DB in Redis.')
self.main.redis_man.clear_redis_cache_database()
self.main.input_information = ''
self.main.zeek_folder = ''
self.main.zeek_dir = ''
self.main.redis_man.log_redis_server_PID(6379, self.main.redis_man.get_pid_of_redis_server(6379))
self.main.terminate_slips()

1 change: 1 addition & 0 deletions config/TI_feeds.csv
@@ -50,3 +50,4 @@ https://raw.githubusercontent.com/Orange-Cyberdefense/russia-ukraine_IOCs/main/O
https://hole.cert.pl/domains/domains.json,medium, ['malicious']
# this feed is random, not the full version of rstcloud ips
https://raw.githubusercontent.com/rstcloud/rstthreats/master/feeds/full/random100_ioc_ip_latest.json,medium,['malicious']
https://raw.githubusercontent.com/CriticalPathSecurity/Zeek-Intelligence-Feeds/master/binarydefense.intel,medium, ['honeypot']
4 changes: 4 additions & 0 deletions config/redis.conf
@@ -0,0 +1,4 @@
daemonize yes
stop-writes-on-bgsave-error no
save ""
appendonly no
9 changes: 7 additions & 2 deletions config/slips.conf
@@ -142,6 +142,11 @@ keep_rotated_files_for = 1 day
# how many minutes to wait for all modules to finish before killing them
wait_for_modules_to_finish = 15 mins

# flows are labeled to normal/malicious and added to the sqlite db in the output dir by default
export_labeled_flows = no
# export_format can be tsv or json. this parameter is ignored if export_labeled_flows is set to no
export_format = json
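The two new options above can be read with Python's stdlib `configparser`, the same family of tooling a slips.conf consumer would use. This is only a sketch: the `[parameters]` section name is an assumption for illustration, not necessarily the section these keys live in.

```python
# Sketch: parsing the labeled-flow export options with configparser.
# The "[parameters]" section name is an assumption, not Slips' actual layout.
import configparser

conf = """
[parameters]
export_labeled_flows = no
export_format = json
"""

parser = configparser.ConfigParser()
parser.read_string(conf)

# getboolean understands yes/no, on/off, true/false
export_enabled = parser.getboolean("parameters", "export_labeled_flows")
export_format = parser.get("parameters", "export_format")
print(export_enabled, export_format)  # False json
```

As the comment in the config says, `export_format` would simply be ignored when `export_labeled_flows` is `no`.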

#####################
# [2] Configuration for the detections
[detection]
@@ -160,8 +165,8 @@ popup_alerts = no
[modules]
# List of modules to ignore. By default we always ignore the template! do not remove it from the list
disable = [template, ensembling]
# Names of other modules that you can disable: ensembling, threat_intelligence, blocking,
# network_discovery, timeline, virustotal, rnn-cc-detection, flowmldetection
# Names of other modules that you can disable: ensembling, threatintelligence, blocking,
# networkdiscovery, timeline, virustotal, rnnccdetection, flowmldetection, updatemanager

# For each line in timeline file there is a timestamp.
# By default the timestamp is seconds in unix time. However
50 changes: 28 additions & 22 deletions conftest.py
@@ -1,10 +1,12 @@
"""
This file will contain the fixtures that are commonly needed by all other test files
for example: setting up the database, inputqueue, outputqueue, etc..
for example: setting up the database, input_queue, output_queue, etc.
"""
import pytest
import os, sys, inspect
from multiprocessing import Queue
from unittest.mock import patch
from slips_files.core.database.database_manager import DBManager


# add parent dir to path for imports to work
@@ -15,39 +17,43 @@
sys.path.insert(0, parent_dir)



@pytest.fixture
def mock_db():
# Create a mock version of the database object
with patch('slips_files.core.database.database_manager.DBManager') as mock:
yield mock.return_value

def do_nothing(*arg):
"""Used to override the print function because using the self.print causes broken pipes"""
pass


@pytest.fixture
def outputQueue():
"""This outputqueue will be passed to all module constructors that need it"""
outputQueue = Queue()
outputQueue.put = do_nothing
def output_queue():
"""This output_queue will be passed to all module constructors that need it"""
output_queue = Queue()
output_queue.put = do_nothing
return output_queue


@pytest.fixture
def inputQueue():
"""This inputQueue will be passed to all module constructors that need it"""
inputQueue = Queue()
inputQueue.put = do_nothing
return inputQueue
def input_queue():
"""This input_queue will be passed to all module constructors that need it"""
input_queue = Queue()
input_queue.put = do_nothing
return input_queue


@pytest.fixture
def profilerQueue():
"""This profilerQueue will be passed to all module constructors that need it"""
profilerQueue = Queue()
profilerQueue.put = do_nothing
return profilerQueue
def profiler_queue():
"""This profiler_queue will be passed to all module constructors that need it"""
profiler_queue = Queue()
profiler_queue.put = do_nothing
return profiler_queue


@pytest.fixture
def database(outputQueue):
from slips_files.core.database.database import __database__
__database__.start(1234)
__database__.outputqueue = outputQueue
__database__.print = do_nothing
return __database__
def database(output_queue):
db = DBManager('output/', output_queue, 6379)
db.print = do_nothing
return db
19 changes: 14 additions & 5 deletions daemon.py
@@ -1,5 +1,4 @@
from slips_files.core.database.database import __database__
from slips_files.common.config_parser import ConfigParser
from slips_files.common.imports import *
import sys
import os
from signal import SIGTERM
@@ -15,7 +14,7 @@ def __init__(self, slips):

# this is a conf file used to store the pid of the daemon and is deleted when the daemon stops
self.pidfile_dir = '/var/lock'
self.pidfile = os.path.join(self.pidfile_dir, 'slips.lock')
self.pidfile = os.path.join(self.pidfile_dir, 'slips_daemon.lock')
self.read_configuration()
if not self.slips.args.stopdaemon:
self.prepare_output_dir()
@@ -242,6 +241,16 @@ def stop(self):
self.stdout = 'slips.log'
self.logsfile = 'slips.log'
self.prepare_std_streams(output_dir)
__database__.start(port)
self.slips.c1 = __database__.subscribe('finished_modules')
db = DBManager(output_dir,
multiprocessing.Queue(),
port,
start_sqlite=False,
flush_db=False)
db.set_slips_mode('daemonized')
self.slips.set_mode('daemonized', daemon=self)
# used in shutdown gracefully to print the name of the stopped file in slips.log
self.slips.input_information = db.get_input_file()
self.slips.db = db
# set the file used by proc_man to log whether slips was shut down gracefully
self.slips.proc_man.slips_logfile = self.logsfile
self.slips.proc_man.shutdown_gracefully()
55 changes: 52 additions & 3 deletions docs/architecture.md
@@ -15,8 +15,9 @@ open source network security monitoring tool. Slips divides flows into profiles
each profile into a timewindows.
Slips runs detection modules on each flow and stores all evidence,
alerts and features in an appropriate profile structure.
All data, i.e. zeek flows, performed detections, profiles and timewindows' data,
All profile info, performed detections, and timewindows' data
is stored inside a <a href="https://redis.io/">Redis</a> database.
All flows are read, interpreted by Slips, labeled, and stored in a SQLite database in the output/ dir of each run.
The output of Slips is a folder of logs (the output/ directory) containing alert.json, alerts.log, and errors.log,
which can be viewed with Kalipso, a terminal graphical user interface, or with the web interface.

@@ -45,7 +46,46 @@ Below is more explanation on internal representation of data, usage of Zeek and

Slips works at a flow level, instead of a packet level, gaining a high level view of behaviors. Slips creates traffic profiles for each IP that appears in the traffic. A profile contains the complete behavior of an IP address. Each profile is divided into time windows. Each time window is 1 hour long by default and contains dozens of features computed for all connections that start in that time window. Detections are done in each time window, allowing the profile to be marked as uninfected in the next time window.
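The profile/timewindow split above can be sketched as simple arithmetic: with 1-hour windows, a flow's timestamp maps to the window it started in. The `timewindowN` naming is an assumption for illustration, not necessarily Slips' exact id scheme.

```python
# Sketch: mapping a flow timestamp to a 1-hour timewindow of a profile.
TW_WIDTH = 3600  # seconds; 1 hour by default

def get_timewindow(flow_ts: float, profile_start_ts: float) -> str:
    """Return the id of the timewindow containing flow_ts."""
    index = int((flow_ts - profile_start_ts) // TW_WIDTH) + 1
    return f"timewindow{index}"

# a flow exactly 1 hour after the profile started falls in the 2nd window
print(get_timewindow(1688112000, 1688108400))  # timewindow2
```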

### Alerts vs Evidence
This is what Slips stores for each IP/profile it creates:

* IPv4 - the IPv4 address of this profile
* IPv6 - list of IPv6 addresses used by this profile
* Threat_level - the threat level of this profile, updated every TW
* Confidence - how confident Slips is that the threat level is correct
* Past threat levels - history of past threat levels
* Used software - list of software used by this profile, for example SSH, browser, etc.
* MAC and MAC vendor - Ethernet MAC of the IP and the name of the vendor
* Host name - the name of the IP
* First user-agent - first UA seen used by this profile
* OS type - type of OS used by this profile, as extracted from the user agent
* OS name - name of the OS used by this profile, as extracted from the user agent
* Browser - name of the browser used by this profile, as extracted from the user agent
* User-agents history - history of all the user agents used by this profile
* DHCP - whether the IP is a DHCP server or not
* Start time - epoch-formatted timestamp of when the profile first appeared
* Duration - the standard duration of every TW in this profile
* Modules labels - the labels assigned to this profile by each module
* Gateway - whether the IP is the gateway (router) of the network
* Timewindow count - number of timewindows in this profile
* ASN - autonomous system number of the IP
* ASN org - name of the org that owns the ASN of this IP
* ASN number
* SNI - server name indication
* Reverse DNS - name of the IP in reverse DNS
* Threat intelligence - whether the IP appeared in any of Slips' blacklists
* Description - description of this IP as taken from the blacklist
* Blacklist threat level - threat level of the blacklist that has this IP
* Passive DNS - all the domains that resolved to this IP
* Certificates - all the certificates that were used by this IP
* Geocountry - country of this IP
* VirusTotal - VirusTotal scores of this IP
* Down_file - files in VirusTotal downloaded from this IP
* Ref_file - files in VT that referenced this IP
* Com_file - files in VT communicating with this IP
* URL ratio - the higher the score, the more malicious this IP is


### Alerts vs Evidence

When running Slips, the alerts you see in red in the CLI, or at the very bottom in Kalipso, are made up of accumulated evidence. Evidence in Slips is a detection caused by a specific IP in a specific timewindow. Slips doesn't alert on every piece of evidence: it accumulates evidence and only generates an alert when the amount of gathered evidence crosses a threshold. Once that threshold is crossed, Slips generates an alert, marks the timewindow as malicious (displays it in red in Kalipso), and blocks the IP causing the alert.
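The accumulation idea can be sketched as follows. The scoring formula, field names, and threshold value here are made-up illustrations of the pattern, not Slips' actual numbers:

```python
# Sketch: accumulate evidence per profile/timewindow and alert only once
# the combined score crosses a threshold. All numbers are illustrative.
ALERT_THRESHOLD = 0.7

def accumulated_score(evidence: list) -> float:
    # weigh each piece of evidence by how confident the detector was
    return sum(e["threat_level"] * e["confidence"] for e in evidence)

def should_alert(evidence: list) -> bool:
    return accumulated_score(evidence) >= ALERT_THRESHOLD

evidence = [
    {"threat_level": 0.5, "confidence": 0.6},  # contributes 0.30
    {"threat_level": 0.8, "confidence": 0.8},  # contributes 0.64
]
print(should_alert(evidence))  # True (0.94 >= 0.7)
```

A single weak detection stays below the threshold; only the pile-up of evidence in the same timewindow triggers the alert.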

@@ -56,7 +96,16 @@ Slips uses Zeek to generate files for most input types, and this data is used to

### Usage of Redis database.

All the data inside Slips is stored in Redis, an in-memory data structure. Redis allows all the modules in Slips to access the data in parallel. Apart from read and write operations, Slips takes advantage of the Redis messaging system called Redis PUB/SUB. Processes may publish data into the channels, while others subscribe to these channels and process the new data when it is published.
All the data inside Slips is stored in Redis, an in-memory data structure store.
Redis allows all the modules in Slips to access the data in parallel.
Apart from read and write operations, Slips takes advantage of the Redis messaging system called Redis PUB/SUB.
Processes may publish data into the channels, while others subscribe to these channels and process the new data when it is published.
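The PUB/SUB flow described above can be mimicked in-process without a Redis server. This stand-in only illustrates the pattern (publishers fan messages out to every subscriber of a channel); real Slips modules use redis-py channels:

```python
# Minimal in-process stand-in for the Redis PUB/SUB pattern.
from collections import defaultdict
from queue import Queue

class MiniPubSub:
    def __init__(self):
        self._channels = defaultdict(list)  # channel name -> subscriber queues

    def subscribe(self, channel: str) -> Queue:
        q = Queue()
        self._channels[channel].append(q)
        return q

    def publish(self, channel: str, msg: str) -> None:
        # fan the message out to every subscriber of this channel
        for q in self._channels[channel]:
            q.put(msg)

bus = MiniPubSub()
sub = bus.subscribe("new_flow")           # e.g. a detection module
bus.publish("new_flow", "conn.log line")  # e.g. the profiler
print(sub.get())  # conn.log line
```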

### Usage of SQLite database.

Slips uses a SQLite database to store all flows in Slips' interpreted format.
The SQLite database is stored in the output/ dir, and each flow is labeled as either 'malicious' or 'benign' based on Slips detections.
All the labeled flows in the SQLite database can be exported to TSV or JSON format.
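The export idea above can be sketched with the stdlib alone. The table and column names here are assumptions for illustration, not Slips' actual schema:

```python
# Sketch: labeled flows in SQLite, exported as JSON and TSV.
import csv
import io
import json
import sqlite3

con = sqlite3.connect(":memory:")  # a real run would open output/<db file>
con.execute("CREATE TABLE flows (uid TEXT, saddr TEXT, daddr TEXT, label TEXT)")
con.executemany(
    "INSERT INTO flows VALUES (?, ?, ?, ?)",
    [("C1", "10.0.0.1", "8.8.8.8", "benign"),
     ("C2", "10.0.0.2", "1.2.3.4", "malicious")],
)

rows = con.execute("SELECT uid, saddr, daddr, label FROM flows").fetchall()

# JSON export: one object per labeled flow
as_json = json.dumps(
    [dict(zip(("uid", "saddr", "daddr", "label"), r)) for r in rows]
)

# TSV export: header row followed by one line per flow
buf = io.StringIO()
writer = csv.writer(buf, delimiter="\t")
writer.writerow(("uid", "saddr", "daddr", "label"))
writer.writerows(rows)
as_tsv = buf.getvalue()

print(as_json)
```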


### Threat Levels
4 changes: 3 additions & 1 deletion docs/code_documentation.md
@@ -15,7 +15,9 @@ later, when slips is starting all the modules, slips also starts the update mana
it creates profiles and timewindows for each IP it encounters.
5. Profiler process gives each flow to the appropriate module to deal with it. for example flows from http.log will be sent to http_analyzer.py
to analyze them.
6. It also stores the flows, profiles, etc. in the database for later processing. the info stored in the db will be used by all modules later.
6. The profiler process stores the flows, profiles, etc. in the Slips databases for later processing. The info stored in the databases will be used by all modules later.
Slips has two databases, Redis and SQLite. It uses the SQLite db to store all the flows read and labeled, and uses Redis for all other operations. The SQLite db is
created in the output directory, meanwhile the Redis database is in-memory.
7-8. Using the flows stored in the db in step 6, and with the help of the timeline module, Slips puts the given flows in a human-readable form, which is
then used by the web UI and Kalipso UI.
9. When a module finds a detection, it sends the detection to the evidence process to deal with it (step 10), but first, this evidence is checked by the whitelist to see if it's
Expand Down