Setting up NetFlow Monitoring with MikroTik & Graylog (in Elasticsearch)

I always wanted to find out what my slow home network is doing when I am not looking at it with tcpdump or MikroTik's quick sniff. Or at least have some charts.
ntop looks quite nice at first, but wants to take my money for an open-source product (it also looks a bit worse on second glance when I want to drill down).
A Linux firewall would offer more possibilities, but it also uses much more power than my little MikroTik router (and normally has about 10 ports fewer).

Therefore a solution is needed that takes data from a MikroTik box and gives me some insight into which data flows are making my Netflix stream stutter.

RouterOS has two possible methods to get network traffic data out of the box:

Firewall rules -> logging -> remote syslog
NetFlow (v5/v9) -> NetFlow receiver

Two things made me settle on the solution I am going to explain. The first was connecting a Graylog instance, or rather its Elasticsearch DB, to a Grafana charting system. I was (and am) very happy with how easy, pretty & fast this works, and the insights it provides are very useful.
The second was stumbling over the Graylog NetFlow plugin.

After finally getting a bit more memory into my little "house server", I was able to try out this solution.

Since it makes setup much faster, I am using a Docker host for all the needed tools.
So let's jump right into the configuration. I assume an already running Docker host with systemd.

/etc/systemd/system/docker-mongo.service
# mongo.service #######################################################################
[Unit]
Description=Mongo
After=docker.service
Requires=docker.service

[Service]
ExecStartPre=-/usr/bin/docker kill mongo
ExecStartPre=-/usr/bin/docker rm mongo
ExecStartPre=/usr/bin/docker pull mongo:3
ExecStart=/usr/bin/docker run \
--name mongo \
-v /graylog/data/mongo:/data/db \
mongo:3
ExecStop=/usr/bin/docker stop mongo

/etc/systemd/system/docker-elasticsearch.service

# elasticsearch.service #######################################################################
[Unit]
Description=Elasticsearch
After=docker.service
Requires=docker.service

[Service]
ExecStartPre=-/usr/bin/docker kill elasticsearch
ExecStartPre=-/usr/bin/docker rm elasticsearch
ExecStartPre=/usr/bin/docker pull elasticsearch:2
ExecStart=/usr/bin/docker run \
--name elasticsearch \
-e discovery.zen.minimum_master_nodes=1 \
-p 9200:9200 \
-e ES_JAVA_OPTS=-Xmx1000m \
-v /graylog/data/elasticsearch:/usr/share/elasticsearch/data \
elasticsearch:2 \
elasticsearch -Des.cluster.name=graylog2
ExecStop=/usr/bin/docker stop elasticsearch

/etc/systemd/system/docker-graylog.service

# graylog.service #######################################################################
[Unit]
Description=Graylog
After=docker.service
Requires=docker.service

[Service]
ExecStartPre=-/usr/bin/docker kill graylog
ExecStartPre=-/usr/bin/docker rm graylog
ExecStartPre=/usr/bin/docker pull rbartl/graylog2-netflow
ExecStart=/usr/bin/docker run \
--name graylog \
--link mongo:mongo \
--link elasticsearch:elasticsearch \
-e GRAYLOG_SERVER_JAVA_OPTS=-Xmx1500m \
-p 9000:9000 \
-p 12201:12201/udp \
-p 2055:2055 \
-p 2055:2055/udp \
-p 12201:12201 \
-p 1514:1514/udp \
-p 1514:1514 \
-p 2514:2514/udp \
-p 3514:3514/udp \
-p 4514:4514/udp \
-v /graylog/data/journal:/usr/share/graylog/data/journal \
-e GRAYLOG_SMTP_SERVER=10.10.0.10 \
-e "GRAYLOG_PASSWORD=secret1" \
-e "GRAYLOG_ROOT_PASSWORD_SHA2=5b11618c2e44027877d0cd0921ed166b9f176f50587fc91e7534dd2946db77d6" \
-e "GRAYLOG_WEB_ENDPOINT_URI=http://docker0.eb.localdomain:9000/api/" \
rbartl/graylog2-netflow
ExecStop=/usr/bin/docker stop graylog

Those three containers make up a running Graylog system. Not much memory is given to each part, but it runs surprisingly well. I expected a much higher baseline load for running Graylog on an old system with, honestly, not enough memory.
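With the three unit files in place, the stack can be started via systemd. A quick curl against Elasticsearch (on the port published above) confirms the backend is up. These commands assume the unit file names used above; they are host-specific, so adjust as needed:

```shell
# Reload systemd so it picks up the new unit files, then start the stack
systemctl daemon-reload
systemctl start docker-mongo docker-elasticsearch docker-graylog

# Give Elasticsearch a moment to boot, then check cluster health
curl -s http://localhost:9200/_cluster/health?pretty
```

If the curl reports a "green" or "yellow" status, Graylog should be able to connect.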

If you want, you can override the Graylog configuration with another volume mount, "-v /graylog/config:/usr/share/graylog/data/config \", and put a graylog.conf file into /graylog/config/graylog.conf.

Graylog should be accessible after this, and the first thing to do is to enable a NetFlow input.
The password to access it will be "secret1" (as set in the systemd unit).
Create your own with "echo -n secret1 | shasum -a 256" and change both password variables (GRAYLOG_PASSWORD and GRAYLOG_ROOT_PASSWORD_SHA2).

Go to System/Inputs and create a new NetFlow input on the default port.

[Screenshots: Graylog NetFlow input configuration, 2017-06-09]

I set the input to global in case my dockerized Graylog decides to give itself a new node ID.

Graylog is now ready to receive NetFlow data (version 5) – let's enable MikroTik to send it.

[Screenshots: MikroTik traffic-flow configuration, 2017-06-09]

I've just enabled NetFlow and set my Docker host as the target for the NetFlow packets.
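The same can be done from the RouterOS terminal. A rough sketch, assuming RouterOS v6 syntax (the target address 10.10.0.20 is a placeholder for your Docker host, and parameter names can differ slightly between RouterOS versions):

```routeros
# Enable traffic-flow (MikroTik's NetFlow implementation) on all interfaces
/ip traffic-flow set enabled=yes interfaces=all

# Send NetFlow v5 packets to the Graylog input on the Docker host
# (10.10.0.20 is a placeholder address)
/ip traffic-flow target add dst-address=10.10.0.20 port=2055 version=5
```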
After a few seconds, data was already visible in my charts, and after a few hours of recording it looks like this:

[Screenshot: traffic chart with 30-minute spikes, 2017-06-10]

It seems I can transfer a few gigabytes in a few spikes.. awesome!?
This happens because RouterOS sends the NetFlow packet when a TCP stream is done or after a defined timeout – which defaults to 30 minutes. Since I had a download running, spikes appear every 30 minutes.
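The timeout can be changed on the router. Again a sketch, assuming RouterOS v6 syntax, where the relevant setting is called active-flow-timeout:

```routeros
# Export long-running flows every minute instead of the 30m default
/ip traffic-flow set active-flow-timeout=1m
```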

After setting the timeout for the NetFlow packets to one minute (which I think is a fine resolution), this is what it looks like:

[Screenshot: traffic chart at one-minute resolution, 2017-06-10]

The next part will cover how to set up Grafana on top of the data in Elasticsearch and get some nice info out of it.