SYSLOG Logging to stdout in a Docker Container

UPDATE: i created a small repo as an example: https://github.com/rbartl/syslog_container_demo (last commit is with debian:9)

Often I am a bit annoyed by the fact that Docker expects all log output to come through stderr or stdout.
One problem I see with this is that I probably want more than those two channels in some cases, but that can be worked around with the log format.

But the second problem remains: some tools want a /dev/log socket or a local syslog UDP port they can talk to.
After playing around with rsyslog – which would not let me output anything to stdout apart from its own error messages – and thinking about switching to syslog-ng, I searched through apt in Debian for packages that provide a syslog daemon.

And I found busybox-syslogd. Small and compact -> should work for a Docker container.
In this example I am using a runit-based image and a running Postfix SMTP server (for which I want to see logs, and into which we will not delve).

Dockerfile

The interesting parts are in the Dockerfile:

FROM tozd/runit
# install the small busybox syslog daemon
RUN apt-get update -q -q && \
    apt-get --yes --force-yes --no-install-recommends install busybox-syslogd
COPY ./etc /etc
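The image can then be built the usual way; the tag is chosen to match the run example further down:

docker build -t test123 .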

runit service file

And the etc/services/syslog/run file, which starts the little syslog daemon:

#!/bin/bash -e
# example: link the log socket into the postfix chroot environment
# example: ln -sf /dev/log /var/spool/postfix/dev/
exec /sbin/syslogd -n -O /dev/stdout

Et voilà – everything in that container can use the /dev/log socket to output log lines, and they will be forwarded to the Docker logs collection.

If you run it, it looks like this ->

$ docker run -it test123
Sep 25 12:14:18 9c1223d61db7 syslog.info syslogd started: BusyBox v1.22.1
Sep 25 12:14:28 9c1223d61db7 user.notice root: sdf
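The second line above is the kind of entry you get from a quick test with the logger utility inside the container (assuming logger is installed in the image, which it normally is on Debian), e.g.:

docker exec -it 9c1223d61db7 logger sdf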

 

Ways I tried and failed

Disclaimer: these methods might actually work and I was just too incompetent to configure them correctly…

  • mounting /dev/log into another container with a docker-compose.yml file
  • getting rsyslog to write to stdout (version 7)
  • getting something like haproxy to output to stdout

Ways that would probably be better

  • creating a small syslog container and just using its UDP port (see the sketch below)
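Something along these lines, i.e. one syslog container that everything else logs to over UDP. The image name below is made up, and since busybox syslogd is mainly aimed at the local /dev/log socket, a daemon that accepts remote UDP messages (rsyslog, syslog-ng, …) would be used in such an image:

# rough sketch only
docker run -d --name syslog -p 514:514/udp some-syslog-to-stdout-image
# other containers and tools then log to udp://<docker-host>:514 instead of /dev/log
docker logs -f syslog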

Setup Netflow Monitoring with mikrotik & Graylog (in elasticsearch)

I always wanted to find out what my slow home network is doing when I am not looking with tcpdump or MikroTik's quick sniff. Or at least have some charts.
ntop looks quite nice at first but wants to take my money for an open-source product. (It also looks a bit worse on second look when I want to drill down.)
When running a Linux firewall there are more possibilities, but it also uses much more power than my little MikroTik router (and it normally also has 10 ports fewer).

Therefore a solution is needed that takes data from a MikroTik box and gives me some nice insights into which data flows are making my Netflix stream stutter.

RouterOS has two possible methods to get network traffic data out of the box:

Firewall Rules -> Logging -> Remote Syslog
Netflow (v5/9) -> Netflow Receiver

Two things made me think about the solution I am going to explain. The first was connecting a Graylog instance – or rather its Elasticsearch DB – to a Grafana charting system. I was/am very happy with how easy, pretty & fast this works. The insights it provides are very useful.
The second was stumbling over the Graylog Netflow plugin.

After finally getting a bit more memory into my little "house server", I was able to try out this solution.

As it makes the setup much faster, I am using a Docker host for all the needed tools.
So let's jump right into the configuration. I assume an already running Docker host with systemd.

/etc/systemd/system/docker-mongo.service
# mongo.service #######################################################################
[Unit]
Description=Mongo
After=docker.service
Requires=docker.service

[Service]
ExecStartPre=-/usr/bin/docker kill mongo
ExecStartPre=-/usr/bin/docker rm mongo
ExecStartPre=/usr/bin/docker pull mongo:3
ExecStart=/usr/bin/docker run \
--name mongo \
-v /graylog/data/mongo:/data/db \
mongo:3
ExecStop=/usr/bin/docker stop mongo

/etc/systemd/system/docker-elasticsearch.service

# elasticsearch.service #######################################################################
[Unit]
Description=Elasticsearch
After=docker.service
Requires=docker.service

[Service]
ExecStartPre=-/usr/bin/docker kill elasticsearch
ExecStartPre=-/usr/bin/docker rm elasticsearch
ExecStartPre=/usr/bin/docker pull elasticsearch:2
ExecStart=/usr/bin/docker run \
--name elasticsearch \
-e discovery.zen.minimum_master_nodes=1 \
-p 9200:9200 \
-e ES_JAVA_OPTS=-Xmx1000m \
-v /graylog/data/elasticsearch:/usr/share/elasticsearch/data \
elasticsearch:2 \
elasticsearch -Des.cluster.name=graylog2
ExecStop=/usr/bin/docker stop elasticsearch

/etc/systemd/system/docker-graylog.service

# graylog.service #######################################################################
[Unit]
Description=Graylog
After=docker.service
Requires=docker.service

[Service]
ExecStartPre=-/usr/bin/docker kill graylog
ExecStartPre=-/usr/bin/docker rm graylog
ExecStartPre=/usr/bin/docker pull rbartl/graylog2-netflow
ExecStart=/usr/bin/docker run \
--name graylog \
--link mongo:mongo \
--link elasticsearch:elasticsearch \
-e GRAYLOG_SERVER_JAVA_OPTS=-Xmx1500m \
-p 9000:9000 \
-p 12201:12201/udp \
-p 2055:2055 \
-p 2055:2055/udp \
-p 12201:12201 \
-p 1514:1514/udp \
-p 1514:1514 \
-p 2514:2514/udp \
-p 3514:3514/udp \
-p 4514:4514/udp \
-v /graylog/data/journal:/usr/share/graylog/data/journal \
-e GRAYLOG_SMTP_SERVER=10.10.0.10 \
-e "GRAYLOG_PASSWORD=secret1" \
-e "GRAYLOG_ROOT_PASSWORD_SHA2=5b11618c2e44027877d0cd0921ed166b9f176f50587fc91e7534dd2946db77d6" \
-e "GRAYLOG_WEB_ENDPOINT_URI=http://docker0.eb.localdomain:9000/api/" \
rbartl/graylog2-netflow
ExecStop=/usr/bin/docker stop graylog

Those 3 containers make up a running Graylog system. Not much memory is given to each part, but it runs surprisingly well. I expected a much higher baseline load for running Graylog on an old system with, frankly, not enough memory.
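With the three unit files in place they can be started the usual systemd way (to also enable them at boot you would additionally add an [Install] section with WantedBy=multi-user.target to each unit):

systemctl daemon-reload
systemctl start docker-mongo docker-elasticsearch docker-graylog
# follow the Graylog startup
journalctl -fu docker-graylog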

If you want, you can override the Graylog configuration with another volume mount "-v /graylog/config:/usr/share/graylog/data/config \" and put a graylog.conf file into /graylog/config/graylog.conf.

Graylog should be accessible after this, and the first thing will be to enable a Netflow input.
The password to access it will be "secret1" (as set in the systemd config).
Create your own with "echo -n secret1 | shasum -a 256" and change both variables.
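A sketch of generating your own pair of values (the password here is just an example):

# the plain password goes into GRAYLOG_PASSWORD, the hash into GRAYLOG_ROOT_PASSWORD_SHA2
echo -n 'mynewpassword' | shasum -a 256
# put the resulting hex string into docker-graylog.service, then
# systemctl daemon-reload && systemctl restart docker-graylog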

Go to System/Inputs and create a new netflow input with the default port.

[Screenshots: creating the Netflow input in Graylog]

I set it to global in case my Docker Graylog decides to give itself a new node ID.

Graylog is now ready to receive Netflow data (version 5) – let's enable the MikroTik to send it.

[Screenshots: enabling Traffic Flow on the MikroTik and setting the Netflow target]

I've just enabled Netflow and set my Docker system as the target for the Netflow packets.
After a few seconds I already had data visible in my charts, and after a few hours of recording it looks like this:

[Screenshot: traffic chart showing large spikes]

It seems I can transfer a few gigabytes in a few spikes.. awesome!?!?
This happens because RouterOS only sends the Netflow record when a TCP stream is finished or after a defined timeout – which defaults to 30 minutes. As I had a download running, spikes appear every 30 minutes.

After setting the timeout for the Netflow records to one minute (which I think is a fine resolution) it looks like this:

[Screenshot: traffic chart with the flow timeout set to one minute]
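For reference, the rough CLI equivalent of those Winbox settings looks something like the following – parameter names differ a bit between RouterOS versions, so treat this as a sketch rather than copy & paste material:

# enable traffic-flow and lower the active flow timeout to one minute
/ip traffic-flow set enabled=yes interfaces=all active-flow-timeout=1m
# send NetFlow v5 to the Docker host running Graylog (IP is a placeholder)
/ip traffic-flow target add dst-address=192.168.1.10 port=2055 version=5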

The next part will be how to set up Grafana on top of the data in Elasticsearch and get some nice info out of it.

Nginx match Url Encoded Stuff in URL, Umlauts, UTF8

In my quest to whitelist stupid Drupal and WordPress sites I've encountered a little problem with umlauts in URLs.
Nowadays those are encoded as UTF-8 bytes and converted to URL-encoded form (if your page is also in UTF-8).
If you have an old browser and copy & paste a URL, or the page is not UTF-8, the URL gets sent in the encoding of your operating system (for my Windows that would be something like ISO-8859-15).
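For illustration (assuming a UTF-8 terminal), this is how an umlaut turns into the bytes you later see percent-encoded in the URL:

$ printf 'ö' | xxd -p
c3b6
$ printf 'ü' | xxd -p
c3bc

Those bytes, percent-encoded, are the %C3%B6 and %C3%BC in the example URL below.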

When nginx gets that URL it will first decode it, and that decoded representation is what is used in location matches and so on. It has to guess the character set, because HTTP was, well, (NOT) designed with a character-set option in requests.

E.g. you have a URL with German umlauts that looks something like "/%C3%B6ffentliche-b%C3%BCcherei"; nginx will represent that as 3 unreadable (and most probably unwritable) characters mixed into the URL.

Fortunately there is a very easy fix to get nginx to match that correctly, which is quite hard to find in the documentation. It should be in there, though I didn't find it.

So to match the mentioned URL just put a "(*UTF8)" in front of your regex:

location ~* (*UTF8)^/[öüäÖÜÄßA-Za-z0-9-]*$ {

Also put a version in there without the "(*UTF8)", as some browsers might still send the URL in ISO encoding.

Dumping an MBR Partition table to a new (bigger) Disk and using GPT

While replacing a broken hard disk I just put a bigger 3 TB disk instead of the 2 TB one into my server box.
First because I don't trust the hard disk manufacturers to sell me exactly the same size as the one I currently have, and secondly because
I might use the additional 1 TB for some temporary stuff which doesn't need RAID.

2 TB was still barely usable with fdisk, but 3 TB will not be. Therefore I needed to dump and load the partition table and convert it on the fly. It sounded complicated, but it's incredibly easy once I found the right switches.

Normally I would use sfdisk to dump and restore partition tables on the command line, but it doesn't like GPT partitions.
For this I use gdisk, which can handle both formats (it can even convert TO MBR if someone has that weird need).

sda is my working 2 TB disk, which has some 300GB partitions (I like to split disks into smaller parts; it makes RAID rebuilds easier).


    Device Boot Start End Blocks Id System
    /dev/sda1 63 4883759 2441848+ fd Linux raid autodetect
    Partition 1 does not start on physical sector boundary.
    /dev/sda2 4883760 786140774 390628507+ fd Linux raid autodetect
    /dev/sda3 786140775 1567397789 390628507+ fd Linux raid autodetect
    Partition 3 does not start on physical sector boundary.
    /dev/sda4 1567397790 3907024064 1169813137+ 5 Extended
    Partition 4 does not start on physical sector boundary.
    /dev/sda5 1567397853 2348654804 390628476 fd Linux raid autodetect
    Partition 5 does not start on physical sector boundary.
    /dev/sda6 2348654868 3129911819 390628476 fd Linux raid autodetect
    Partition 6 does not start on physical sector boundary.
    /dev/sda7 3129911883 3907024064 388556091 fd Linux raid autodetect
    Partition 7 does not start on physical sector boundary.

Now I just "open" my sda disk with gdisk, and gdisk immediately warns me that it would be converted to GPT if I save, which I don't want to do.
But gdisk already has the GPT layout in memory, so I can create a backup of it with the command "b" and save it to a file named partitions.

After this I quit gdisk without writing, open my new 3 TB disk (/dev/sdb) and load that partition table. gdisk has several commands that say they load backups, but some of them use the on-disk GPT backup partition table – we want the one that loads a saved backup file.
When the backup is loaded I recheck with "p" and, if it looks OK, save it with "w".

All Commands:


    # 2 TB working RAID member
    gdisk /dev/sda
    > b
    Backup> Enter the filename "partitions"
    > q
    # open the new disk
    gdisk /dev/sdb
    > r
    > l
    Restore> Enter the filename "partitions"
    > p
    > w
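If I am not mistaken, sgdisk (the scriptable sibling of gdisk, same package) can do the whole dump/restore non-interactively – roughly like this, untested on my setup:

# dump the (in-memory converted) GPT layout of the old disk to a file
sgdisk --backup=partitions /dev/sda
# write that layout onto the new, bigger disk
sgdisk --load-backup=partitions /dev/sdb
# optionally randomize disk/partition GUIDs so both disks don't share them
sgdisk -G /dev/sdb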

Whitelisting WordPress blog URLs OR f____ you haxor!!

WordPress seems to be a big collection of bugs and holes, or it is just the most attacked project on the planet.
Nevertheless it's actually quite easy to lock it down by denying access to everything but the content links.

In our setup we have a dedicated Apache host that serves the PHP pages and contains the database.
Protecting it is an nginx in front of that system, therefore we will put the security stuff into the nginx (which also gets more trust from me than Apache httpd).

The nginx in front of our WordPress (SEO-optimized) blog just gets the rules below.
All but the last location block allow access to content (or rather forward it to our Apache host).
The last catch-all block denies access and asks for authentication, so that we admins/moderators etc. can use the admin pages.

    location ~* ^/$ {
    proxy_pass http://10.5.8.12:80;
    proxy_set_header Host $host;
    }

    location ~* ^/[a-z-]*/$ {
    proxy_pass http://10.5.8.12:80;
    proxy_set_header Host $host;
    }

    location ~* ^/wp-content/.*$ {
    proxy_pass http://10.5.8.12:80;
    proxy_set_header Host $host;
    }

    location ~* ^/sitemap(index)?\.xml$ {
    proxy_pass http://10.5.8.12:80;
    proxy_set_header Host $host;
    }

    location ~* ^/robots\.txt$ {
    proxy_pass http://10.5.8.12:80;
    proxy_set_header Host $host;
    }

    location ~* ^/wp-includes/js/jquery/jquery(-migrate)?(\.min)?\.js$ {
    proxy_pass http://10.5.8.12:80;
    proxy_set_header Host $host;
    }

    location ~* ^/wp-includes/images/smilies/[a-z_-]*\.gif$ {
    proxy_pass http://10.5.8.12:80;
    proxy_set_header Host $host;
    }

    location / {
    proxy_pass http://10.5.8.12:80/;
    proxy_set_header Host $host;
    proxy_set_header Authorization "";
    auth_basic "Restricted";
    auth_basic_user_file /etc/nginx/htpasswd;
    }
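A quick way to check that the catch-all actually kicks in (the hostname is a placeholder):

# content URLs should pass through to Apache
curl -sI https://blog.example.com/some-post/ | head -n1      # expect a 200
# anything else, e.g. the login page, should hit the auth-protected catch-all
curl -sI https://blog.example.com/wp-login.php | head -n1    # expect a 401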

And for directories that we know contain no PHP content we also need to disable PHP.
This is done by adding a .htaccess file with the following content:

    RemoveHandler .php .phtml .php3
    RemoveType .php .phtml .php3
    php_flag engine off

Do this for the wp-content directory, which only contains CSS, images and so on.

Check Microsoft-Windows-Backup for Success or Failure via Nagios & nsclient++

As I've tried quite a few tools to get a backup check working (wbadmin-related, PowerShell-based, etc.) and didn't find a good solution at first, I'll explain how to get it done quite easily with nsclient++. If you have the commands below it's easy ;).

Fetch nsclient++ from http://www.nsclient.org/ and don't forget to enable the NRPE daemon, as we will use it to fetch the "backup success". Also set your Nagios server as an allowed address.
After nsclient++ is installed it isn't yet ready to answer our queries correctly, because "allow arguments" is not enabled and we need it.

Start notepad as Administrator to edit the ini file. It will be in C:\Program Files\NSClient++\nsclient.ini (hidden though, so just enter the filename as it is).
Sidenote: !!BIG!! thanks to Microsoft for not showing files we need and for naming important directories differently in each locale.

Put the following block at the bottom (after a fresh installation you shouldn't have an NRPE server block yet; if you already have an installation, adapt your existing NRPE server block accordingly).

    
    [/settings/NRPE/server]
    allow arguments=1
    allow nasty_meta chars=1 
    

Your server should now be reachable via NRPE and accept arguments to its commands. So you can use the following command from a Linux server to check the Windows backup (the nagios-plugins-basic package is needed on Debian).

    /usr/lib/nagios/plugins/check_nrpe -H YOURSERVER.local -c Check_EventLog -a "file=Microsoft-Windows-Backup" file=Application "scan-range=-1d" "filter=source like 'Backup' AND level = 'error'" "crit=count>0"

This will return a CRITICAL status if it finds an error in the Backup event log within the last day. Sadly I couldn't convince check_eventlog to be happy again about a successful backup after a failed one, therefore a failed backup will show as CRITICAL for a day in Nagios (just acknowledge it once you have fixed it).

But we also want to check whether backups have been created in the last 2 days (change -2d for another time range):

    /usr/lib/nagios/plugins/check_nrpe -H YOURSERVER.local -c Check_EventLog -a "file=Microsoft-Windows-Backup" file=Application "show-all" "scan-range=-2d" "filter=id=4 " "ok=id=4" "warn=count=0" "crit=count=0"

This will return a CRITICAL state if no success logs were found. Because of the inner workings of check_eventlog (it checks the count for each log entry) it is currently not possible to issue a WARNING state if only 2 successful backups were found.
This works for Windows Server 2012. If you have a different version you might have to change the id from 4 to something else: look in the event log for a success message and copy its id.

Hope this helps, as I had to experiment for quite some time to convince Check_EventLog to work the way I wanted it to.

Foreman+Puppet and Foreman Proxy Installation on Debian

This is just a very short walk-through of how to install Foreman on a Debian system and (if needed) a Foreman proxy on another system.
It's also possible to install the proxy on the same system, but in our case the networks are different, which is why we need a "distant" Foreman proxy.
In this setup we will use Puppet 3 instead of the Puppet 2 which is currently in Debian stable. Therefore we have to add the Puppetlabs DEB repository first.

    
    wget --no-check-certificate https://apt.puppetlabs.com/puppetlabs-release-wheezy.deb
    dpkg -i puppetlabs-release-wheezy.deb
    

To get Foreman we also have to add the "theforeman" repository.

    
    echo "deb http://deb.theforeman.org/ wheezy 1.4" > /etc/apt/sources.list.d/foreman.list
    wget -q http://deb.theforeman.org/foreman.asc -O- | apt-key add -
    

Now it's easy to install foreman-installer and, with it, Foreman on our system.
This will fetch Puppet, Foreman and all needed dependencies like PostgreSQL, Apache etc.

    
    apt-get update && apt-get install foreman-installer
    foreman-installer
    

After the installation you can log in with admin/changeme at
https://myforemanserver.local/

I've set up LDAP authentication in my Foreman server against our Microsoft domain.
Just set the host, baseDN, user + pass according to your infrastructure.
If you have Active Directory you can set the third page like in this screenshot.

[Screenshot: LDAP authentication settings]

And your Foreman-powered Puppet is ready to go.

Proxy

Now let's install our remote Foreman proxy (on a freshly installed Debian system).
Again add the Foreman deb sources as mentioned above (but leave out the Puppet sources).
One way would be to just install the package, configure the proxy yourself and add it in the master Foreman:

    
    apt-get install foreman-proxy
    

But my preferred way is to again let foreman-installer do it (with *some* options).

Prepare the rndc key so the proxy can update the BIND daemon running on the master Foreman.
Just create the directory /etc/foreman-proxy and put the *rndc.key* file in there.

To use the installer for the proxy we have to fetch some info from our Foreman server.
The hostname should of course be correct, and we have to get the OAuth keys.
The keys are hidden under Foreman Server – Manage – Settings – Auth (copy oauth_consumer_key & oauth_consumer_secret).

I've also pre-created the cert for the proxy on the Puppet master:

    
    puppet cert generate fmproxy.mycompany.local
    

Copy /var/lib/puppet/ssl/certs/fmproxy.mycompany.local.pem,
/var/lib/puppet/ssl/private_keys/fmproxy.mycompany.local.pem and /var/lib/puppet/ssl/certs/ca_crt.pem onto the new server into
/etc/foreman-proxy/ssl/certs, ssl/private_keys and ssl/ca respectively (matching the paths used in the installer call below).
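Roughly, assuming SSH access as root and the directory layout from the installer call below, the copy could look like this:

# run on the puppet master; hostnames and paths are the ones used in this walk-through
ssh fmproxy.mycompany.local "mkdir -p /etc/foreman-proxy/ssl/{certs,private_keys,ca}"
scp /var/lib/puppet/ssl/certs/fmproxy.mycompany.local.pem \
    fmproxy.mycompany.local:/etc/foreman-proxy/ssl/certs/
scp /var/lib/puppet/ssl/private_keys/fmproxy.mycompany.local.pem \
    fmproxy.mycompany.local:/etc/foreman-proxy/ssl/private_keys/
scp /var/lib/puppet/ssl/certs/ca_crt.pem \
    fmproxy.mycompany.local:/etc/foreman-proxy/ssl/ca/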

    
foreman-installer --no-enable-foreman --enable-foreman-proxy \
  --foreman-proxy-dhcp true --foreman-proxy-tftp true --foreman-proxy-dns true \
  --foreman-foreman-url=https://puppet01.mycompany.local \
  --foreman-proxy-foreman-base-url=https://puppet01.mycompany.local \
  --foreman-proxy-oauth-consumer-key=sb8fZvY4Sfu4zpsjzcx7eh7MuHuXCqKq \
  --foreman-proxy-oauth-consumer-secret=vc8QkxEhmqXcRVJdhvKQZZmcQP9BNHVi \
  --foreman-proxy-ssl-key=/etc/foreman-proxy/ssl/private_keys/fmproxy.mycompany.local.pem \
  --foreman-proxy-ssl-cert=/etc/foreman-proxy/ssl/certs/fmproxy.mycompany.local.pem \
  --foreman-proxy-puppetca false \
  --foreman-proxy-ssl-ca=/etc/foreman-proxy/ssl/ca/ca_crt.pem \
  --foreman-proxy-dns-server=puppet01.mycompany.local \
  --foreman-proxy-dns-zone=ops.mycompany.local \
  --foreman-proxy-keyfile=/etc/foreman-proxy/rndc.key
    

If we are lucky, foreman-proxy should now be installed 😉 (it wasn't for me, therefore I executed this command in various forms multiple times).
It has also registered itself with the Foreman master server.

The setup is not yet completely usable because some settings are not yet correct.

*/etc/dhcp/dhcpd.conf*

Update the routers setting according to your network setup (the 192.168.100.1 which Foreman entered is not correct most of the time).
Don't forget to restart your dhcpd server after changing the settings (/etc/init.d/isc-dhcp-server restart).

This should be enough to install/provision your servers via Foreman. hfgl

Add a local Port Forward in OSX

If you like to put various services into virtual machines for development like I do (or use Vagrant and get your dependencies installed in a VM), you might still want to connect to localhost to use those services. Or your development setup has its defaults pointing at localhost.

Sadly the firewall configuration with pfctl which is mentioned in some blogs didn't work on my MacBook with Parallels. It probably doesn't like forwarding onto another interface.

But the OSX launchd has a nice feature to simulate the behaviour of an inetd service. This means that we can use launchd to listen on a port and take action when a process connects to it… like forwarding the connection to another host.

The following example works for a virtualized MySQL host, but you can easily adapt it to other ports and services.

Put this config file into /System/Library/LaunchDaemons and name it mysql.plist.
It forwards a connection to localhost port 3306 to a host named mysql, also on port 3306.

    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
    <plist version="1.0">
    <dict>
         <key>Enabled</key>
         <true/>
         <key>Label</key>
         <string>com.demo.mysqlforward</string>
         <key>Program</key>
         <string>/usr/bin/nc</string>
         <key>ProgramArguments</key>
         <array>
              <string>/usr/bin/nc</string>
              <string>mysql</string>
              <string>3306</string>
         </array>
         <key>Sockets</key>
         <dict>
              <key>Listeners</key>
              <dict>
                   <key>SockServiceName</key>
                   <string>mysql</string>
                   <key>Bonjour</key>
                   <array>
                        <string>mysql</string>
                   </array>
              </dict>
         </dict>
         <key>inetdCompatibility</key>
         <dict>
              <key>Wait</key>
              <false/>
         </dict>
         <key>POSIXSpawnType</key>
         <string>Interactive</string>
    </dict>
    </plist>

To enable the service use the launchd control program:

    sudo launchctl load mysql.plist
    sudo launchctl start com.demo.mysqlforward
    

If you forward a custom service you might have to edit your /etc/services and add it (e.g. mongod).
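For example (the service name and port are only illustrative):

# add the custom service name so launchd can resolve it in SockServiceName
echo "mongod    27017/tcp" | sudo tee -a /etc/services
# and a quick test that the mysql forward from above answers locally
nc -vz localhost 3306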

Deploy SSH Keys in authorized_keys via Ansible

Ansible is a very nice and straightforward tool for server configuration management. This post will explain how to easily deploy SSH keys to multiple servers. Ansible already has a module that handles the authorized_keys file; we'll use it and loop over our saved .pub files.

Save this playbook as /etc/ansible/ssh-deployment.yml:

---
- hosts: all
  user: root
  tasks:
    - name: ensure public key is in authorized_keys
      authorized_key: user=root key="{{ lookup('file', item) }}"
      with_fileglob:
        - /etc/ansible/sshpubs/*

Then save some public keys in the /etc/ansible/sshpubs directory (e.g. as /etc/ansible/sshpubs/u.name.pub).
Now you can just add your hosts to the /etc/ansible/hosts file and run your SSH deployment, which should look something like this ->
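A minimal inventory just lists the hosts (the names below are placeholders):

# /etc/ansible/hosts
host1.example.com
host2.example.com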


    ansible-playbook ssh-deployment.yml

    PLAY [all] ********************************************************************

    GATHERING FACTS ***************************************************************
    ok: [host1.example.com]

    TASK: [ensure public key is in authorized_keys] *******************************
    ok: [host1.example.com] => (item=/etc/ansible/sshpubs/u.ser1.pub)
    ok: [host1.example.com] => (item=/etc/ansible/sshpubs/u.ser2.pub)
    changed: [host1.example.com] => (item=/etc/ansible/sshpubs/u.ser3.pub)

    PLAY RECAP ********************************************************************
    host1.example.com : ok=2 changed=1 unreachable=0 failed=0

And Done.