Python 2.7.0, Microsoft Visual C++ Compiler for Python 2.7, Splunk 6.3.x
splunk install app snmpmod.spl -update 1 -auth admin:changeme
cd $SPLUNK_HOME/etc/apps/snmpmod
mkdir local
vim local/inputs.conf
If you are using SNMP version 3, you must obtain the PyCrypto package yourself:
As of Python 2.7.9, pip is included with the release. Run:

pip2 install pycrypto
- Windows: copy the folder C:\Python27\Lib\site-packages\Crypto to $SPLUNK_HOME\etc\apps\snmpmod\bin
- Linux: copy the folder:

cp -Rv /usr/local/lib/python2.7/dist-packages/Crypto $SPLUNK_HOME/etc/apps/snmpmod/bin
[snmpif://hostname]
destination = hostname
snmp_version = 3
v3_securityName = username
v3_authKey = password
snmpinterval = 300
interfaces = 1,5,8,9
index = network
# The sourcetype can be whatever you want
sourcetype = snmpif
[ipsla://hostname]
destination = hostname
snmp_version = 3
v3_securityName = username
v3_authKey = password
snmpinterval = 300
operations = 2,7
index = network
sourcetype = ipsla
[qos://test]
destination = 10.0.0.1
interfaces = 151
snmp_version = 3
v3_securityName = user
v3_authKey = auth
snmpinterval = 60
index = index
sourcetype = cbqos
[snmpEkinops://hostname]
destination = hostname
snmp_version = 2C
communitystring = private
snmpinterval = 2
interfaces = 3:PMOAIL-E,11:PMOAIL-E
index = index
sourcetype = snmpEkinops
Currently, all response handlers set the Splunk host to the value of destination. If you don't have DNS (bad sysadmin!), add an entry to /etc/hosts. I'd be very happy to take a pull request that looks at a host config option and overrides destination with that value.
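A minimal sketch of what such an override might look like inside a response handler. This is hypothetical: the `host` option and the `splunk_host` helper do not exist in the app today; only the fallback to `destination` reflects current behaviour.

```python
def splunk_host(stanza_config):
    """Return the value to report as the Splunk host field.

    Hypothetical sketch: prefer an explicit 'host' option if the
    stanza defines one, otherwise fall back to 'destination'
    (which is what all response handlers do today).
    """
    return stanza_config.get("host") or stanza_config["destination"]

# Stanza configs as parsed from inputs.conf
print(splunk_host({"destination": "10.0.0.1"}))                  # falls back to destination
print(splunk_host({"destination": "10.0.0.1", "host": "rtr1"}))  # explicit host wins
```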
I strongly recommend you create a search macro snmpif_traffic that uses streamstats to calculate bits per second from the raw snmpif data.
Note that I check first(if*InOctets) against the current value to detect whether the router has rebooted, which avoids a spike in the graph. My macro is:
stats first(*) as * by _time host ifIndex
| streamstats window=2 global=false current=true range(if*Octets) as delta*, range(if*Pkts) as delta*Pkts, range(_time) as secs, first(if*InOctets) as prevIn* by host, ifIndex
| eval prevInCounter=coalesce(prevInHC, prevIn)
| eval currInCounter=coalesce(ifHCInOctets, ifInOctets)
| where secs>0 AND currInCounter>prevInCounter
| eval bpsIn=coalesce(deltaHCIn, deltaIn)*8/secs
| eval bpsOut=coalesce(deltaHCOut, deltaOut)*8/secs
| eval mbpsIn=bpsIn/1000000
| eval mbpsOut=bpsOut/1000000
| eval ppsIn=coalesce(deltaHCInUcastPkts, deltaInUcastPkts)/secs
| eval ppsOut=coalesce(deltaHCOutUcastPkts, deltaOutUcastPkts)/secs
| eval kppsIn=ppsIn/1000
| eval kppsOut=ppsOut/1000
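In plain Python, the per-interval arithmetic the macro performs looks roughly like this (two successive counter samples per interface, mirroring the streamstats window of 2; the function and field names are mine, not the macro's):

```python
def bps_from_samples(prev, curr):
    """Compute bits per second from two successive SNMP counter samples.

    Each sample is (epoch_seconds, in_octets, out_octets). Returns None
    when the counters went backwards (router reboot / counter reset) or
    no time elapsed -- the same cases the macro's
    `where secs>0 AND currInCounter>prevInCounter` clause filters out.
    """
    secs = curr[0] - prev[0]
    if secs <= 0 or curr[1] < prev[1] or curr[2] < prev[2]:
        return None  # reboot or duplicate sample: skip to avoid a spike
    bps_in = (curr[1] - prev[1]) * 8 / secs   # octets -> bits, per second
    bps_out = (curr[2] - prev[2]) * 8 / secs
    return bps_in, bps_out

# Two samples 300s apart, 37.5 MB in and 7.5 MB out during the interval:
print(bps_from_samples((0, 0, 0), (300, 37500000, 7500000)))
# → (1000000.0, 200000.0), i.e. 1 Mbps in, 0.2 Mbps out
```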
Then to call it and display the results as a graph:
index=snmpif host=foo ifIndex=17 | `snmpif_parse`
| timechart bins=500 avg(mbpsIn) as "Mbps IN", avg(mbpsOut) as "Mbps OUT"
And to calculate 95th-percentile figures:
index=snmpif host=foo ifIndex=17 | `snmpif_parse`
| stats perc95(mbpsIn) as "IN", perc95(mbpsOut) as "OUT"
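For reference, the same 95th-percentile figure can be reproduced outside Splunk. This sketch uses the nearest-rank method, which may differ slightly from the interpolation perc95 uses:

```python
import math

def perc95(values):
    """95th percentile by the nearest-rank method (a simplification;
    Splunk's perc95 may interpolate between samples)."""
    ordered = sorted(values)
    rank = math.ceil(0.95 * len(ordered))  # 1-based nearest rank
    return ordered[rank - 1]

print(perc95(range(1, 101)))  # → 95
```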
The search above is quite expensive, so I run it on a schedule and use collect to store the results under a new sourcetype:
[search index=network sourcetype=snmp_traffic | stats first(_time) as earliest] index=network sourcetype="snmpif"
| stats first(*) as * by _time host ifIndex
| streamstats window=2 global=false current=true range(if*Octets) as delta*, range(_time) as secs by host, ifIndex
| where secs>0
| eval bpsIn=coalesce(deltaHCIn, deltaIn)*8/secs
| eval bpsOut=coalesce(deltaHCOut, deltaOut)*8/secs
| eval mbpsIn=bpsIn/1000000
| eval mbpsOut=bpsOut/1000000
| fields _time host ifIndex bpsIn bpsOut ifAdminStatus ifDescr ifMtu ifOperStatus ifPhysAddress ifSpecific ifSpeed ifType mbpsIn mbpsOut
| collect index=network sourcetype=snmp_traffic
The trick here is using the most recent snmp_traffic event as the starting point for the next round of collection. I run this search every 30 minutes.
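The same bookkeeping, sketched in Python: take the timestamp of the most recent summary event as a high-water mark and only process raw events newer than it (event shape and field names are illustrative, not the app's API):

```python
def new_events(raw_events, summary_events):
    """Return raw events newer than the latest summary event.

    Mirrors the subsearch trick: first(_time) over the summary
    sourcetype (the most recent event) bounds the next collection
    run, so already-summarised data is not processed twice.
    """
    high_water = max((e["_time"] for e in summary_events), default=0)
    return [e for e in raw_events if e["_time"] > high_water]

raw = [{"_time": t} for t in (100, 200, 300)]
summary = [{"_time": 150}]
print(new_events(raw, summary))  # → [{'_time': 200}, {'_time': 300}]
```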
This project was originally based on SplunkModularInputsPythonFramework. I took the SNMP modular input, refactored the Python code to be more reusable, and added extra stanzas for polling interface and IP SLA statistics.