SYSLOG Logging to stdout in a Docker Container

UPDATE: I created a small repo as an example: https://github.com/rbartl/syslog_container_demo (the last commit uses debian:9)

I am often a bit annoyed by the fact that Docker expects all log output to come through stderr or stdout.
One problem I see with this is that I probably want more than those two channels in some cases, but that can be worked around with the log format.

The second problem remains: some tools want a /dev/log socket or a local syslog UDP port they can talk to.
After playing around with rsyslog (which would not let me output anything to stdout apart from its own error messages) and thinking about switching to syslog-ng, I searched through apt in Debian for packages that provide a syslog daemon.

And I found busybox-syslogd. Small and compact -> should work for a Docker container.
In this example I am using a runit based image and a running postfix SMTP server (for which I want to see logs, and into which we will not delve further).

Dockerfile

The interesting parts are in the Dockerfile:

FROM tozd/runit
RUN apt-get update -q -q && \
    apt-get --yes --force-yes --no-install-recommends install busybox-syslogd
COPY ./etc /etc

runit service file

and the etc/services/syslog/run file, which starts the little syslog daemon:

#!/bin/bash -e
# example: link the log socket into the postfix chroot environment
# example: ln -sf /dev/log /var/spool/postfix/dev/
exec /sbin/syslogd -n -O /dev/stdout

Et voilà: everything in that container can use the /dev/log socket to output log lines, and they will be forwarded to the Docker log collection.

If you run it, it looks like this:

$ docker run -it test123
Sep 25 12:14:18 9c1223d61db7 syslog.info syslogd started: BusyBox v1.22.1
Sep 25 12:14:28 9c1223d61db7 user.notice root: sdf
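
The second line above is just a manual test message; any process that writes to /dev/log shows up the same way. A quick way to verify it yourself (assuming the standard logger utility from bsdutils/util-linux is available in the image):

# open a shell in the running container, e.g. docker exec -it <container> bash
logger "hello from inside the container"
logger -p user.err "something broke"
# both lines should now appear in `docker logs <container>`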


Ways I tried and failed

Disclaimer: these methods might actually work and I was just too incompetent to configure them correctly…

  • mounting /dev/log into another container with a docker-compose.yml file
  • getting rsyslog to write into stdout (version 7)
  • getting something like haproxy to output to stdout

Ways that would probably be better

  • creating a small syslog container and just using its UDP port (a rough sketch of that idea follows below)
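
A rough, untested sketch of what that could look like with docker-compose (image and service names here are just placeholders):

# docker-compose.yml (sketch)
version: "2"
services:
  syslog:
    image: my-syslog-image        # any syslogd that accepts remote UDP and logs to stdout
    expose:
      - "514/udp"
  app:
    image: my-app-image
    depends_on:
      - syslog
    # point the application's syslog output at udp://syslog:514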

Setup Netflow Monitoring with mikrotik & Graylog (in elasticsearch)

I always wanted to find out what my slow home network is doing when I am not looking at it with tcpdump or Mikrotik's quick sniff, or at least have some charts.
ntop looks quite nice at first but wants to take my money for an open source product (it also looks a bit worse on second glance when I want to drill down).
When running a Linux firewall there are more possibilities, but it also uses much more power than my little Mikrotik router (and it normally has about 10 ports less).

Therefore a solution is needed that takes data from a Mikrotik box and gives me some nice insight into which data flows are making my Netflix stream stutter.

RouterOS has two possible methods to get network traffic data out of the box:

  • Firewall Rules -> Logging -> Remote Syslog
  • NetFlow (v5/v9) -> NetFlow Receiver

Two things made me think about the solution I am going to explain. The first was connecting a Graylog instance, or rather its Elasticsearch DB, to a Grafana charting system. I was/am very happy with how easy, pretty and fast this works, and the insight it provides is very useful.
The second was stumbling over the Graylog NetFlow plugin.

After finally getting a bit more memory into my little "house server", I was able to try out this solution.

As it makes the setup much faster, I am using a Docker host for all the needed tools.
So let's jump right into the configuration. I assume an already running Docker host with systemd.

/etc/systemd/system/docker-mongo.service
# mongo.service #######################################################################
[Unit]
Description=Mongo
After=docker.service
Requires=docker.service

[Service]
ExecStartPre=-/usr/bin/docker kill mongo
ExecStartPre=-/usr/bin/docker rm mongo
ExecStartPre=/usr/bin/docker pull mongo:3
ExecStart=/usr/bin/docker run \
--name mongo \
-v /graylog/data/mongo:/data/db \
mongo:3
ExecStop=/usr/bin/docker stop mongo

/etc/systemd/system/docker-elasticsearch.service

# elasticsearch.service #######################################################################
[Unit]
Description=Elasticsearch
After=docker.service
Requires=docker.service

[Service]
ExecStartPre=-/usr/bin/docker kill elasticsearch
ExecStartPre=-/usr/bin/docker rm elasticsearch
ExecStartPre=/usr/bin/docker pull elasticsearch:2
ExecStart=/usr/bin/docker run \
--name elasticsearch \
-e discovery.zen.minimum_master_nodes=1 \
-p 9200:9200 \
-e ES_JAVA_OPTS=-Xmx1000m \
-v /graylog/data/elasticsearch:/usr/share/elasticsearch/data \
elasticsearch:2 \
elasticsearch -Des.cluster.name=graylog2
ExecStop=/usr/bin/docker stop elasticsearch

/etc/systemd/system/docker-graylog.service

# graylog.service #######################################################################
[Unit]
Description=Graylog
After=docker.service
Requires=docker.service

[Service]
ExecStartPre=-/usr/bin/docker kill graylog
ExecStartPre=-/usr/bin/docker rm graylog
ExecStartPre=/usr/bin/docker pull rbartl/graylog2-netflow
ExecStart=/usr/bin/docker run \
--name graylog \
--link mongo:mongo \
--link elasticsearch:elasticsearch \
-e GRAYLOG_SERVER_JAVA_OPTS=-Xmx1500m \
-p 9000:9000 \
-p 12201:12201/udp \
-p 2055:2055 \
-p 2055:2055/udp \
-p 12201:12201 \
-p 1514:1514/udp \
-p 1514:1514 \
-p 2514:2514/udp \
-p 3514:3514/udp \
-p 4514:4514/udp \
-v /graylog/data/journal:/usr/share/graylog/data/journal \
-e GRAYLOG_SMTP_SERVER=10.10.0.10 \
-e "GRAYLOG_PASSWORD=secret1" \
-e "GRAYLOG_ROOT_PASSWORD_SHA2=5b11618c2e44027877d0cd0921ed166b9f176f50587fc91e7534dd2946db77d6" \
-e "GRAYLOG_WEB_ENDPOINT_URI=http://docker0.eb.localdomain:9000/api/" \
rbartl/graylog2-netflow
ExecStop=/usr/bin/docker stop graylog

Those three Docker units make up a running Graylog system. Not much memory is given to each part, but it runs surprisingly well. I expected a much higher baseline load for running Graylog on an old system with, frankly, not enough memory.
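
With the three unit files in place, they just need to be reloaded, enabled and started in the usual systemd way:

systemctl daemon-reload
systemctl enable docker-mongo docker-elasticsearch docker-graylog
systemctl start docker-mongo docker-elasticsearch docker-graylog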

If you want, you can override the Graylog configuration with another volume mount (-v /graylog/config:/usr/share/graylog/data/config \) and put a graylog.conf file into /graylog/config/graylog.conf.

Graylog should be accessible after this, and the first thing to do is to enable a NetFlow input.
The password to access it will be "secret1" (as in the systemd config).
Create your own with "echo -n secret1 | shasum -a 256" and change both variables.

Go to System/Inputs and create a new netflow input with the default port.

[Screenshot: 2017-06-09 21.16.04]

[Screenshot: 2017-06-09 21.19.15]

I set it to global in case my Docker Graylog wants to give itself a new ID.

Graylog is now ready to receive NetFlow data (version 5), so let's enable the Mikrotik to send it.

[Screenshot: 2017-06-09 21.23.35]

[Screenshot: 2017-06-09 21.23.46]

I've just enabled NetFlow and set my Docker system as the target for the NetFlow packets.
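
The same can be done from the RouterOS terminal; roughly like this (untested here, and the exact parameter names differ slightly between RouterOS versions, so check /ip traffic-flow print first):

/ip traffic-flow set enabled=yes interfaces=all
/ip traffic-flow target add dst-address=<docker-host-ip> port=2055 version=5
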
After a few seconds I already had data visible in my charts, and after a few hours of recording it looked like this:

[Screenshot: 2017-06-10 15.58.02]

It seems I can transfer a few gigabytes in a few spikes.. awesome!?
This happens because RouterOS only sends the NetFlow packet when a TCP stream is done or after a defined timeout, which defaults to 30 minutes. As I had a download running, we see spikes appearing every 30 minutes.
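
On the RouterOS side the relevant setting is the active flow timeout, e.g. (same caveat as above regarding exact parameter names):

/ip traffic-flow set active-flow-timeout=1m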

After setting the timeout for the NetFlow packets to one minute (which I think is a fine resolution), this is what it looks like:

[Screenshot: 2017-06-10 16.00.17]

The next part will be how to set up Grafana with the data in Elasticsearch and get some nice info out of it.

Nginx match Url Encoded Stuff in URL, Umlauts, UTF8

In my quest to whitelist stupid Drupal and WordPress sites I've encountered a little problem with umlauts in URLs.
Nowadays those are encoded as UTF-8 bytes and converted to URL-encoded form (if your page is also in UTF-8).
If you have an old browser and copy & paste a URL, or a page that is not UTF-8, the URL gets sent in the encoding of your operating system (for my Windows that would be something like ISO-8859-15).

When nginx receives such a URL it first decodes it, and that decoded representation is what gets used in location matches and so on. It does this without any character set information, since HTTP was, well, (NOT) designed and has no character set option in requests.
For example, a URL with German umlauts looks something like "/%C3%B6ffentliche-b%C3%BCcherei"; nginx will represent that as three unreadable (and most probably unwritable) characters mixed into the URL.

Fortunately there's a very easy fix to get nginx to match this correctly, which is quite hard to find in the documentation (it should be in there, but I didn't find it).

    So to match the mentioned URL just put a “(*UTF8)” in front of your regex.

location ~* (*UTF8)^/[öüäÖÜÄßA-Za-z0-9]*$ {

Also put a version in there without the (*UTF8), as some browsers might still send the URL in ISO format.
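
Put together, such a whitelisting location pair could look roughly like this (the backend address is just a placeholder here):

# UTF-8 encoded request URIs (modern browsers)
location ~* (*UTF8)^/[öüäÖÜÄßA-Za-z0-9]*$ {
    proxy_pass http://10.0.0.1:80;
    proxy_set_header Host $host;
}

# same pattern without (*UTF8) for clients that still send legacy-encoded URLs
location ~* ^/[öüäÖÜÄßA-Za-z0-9]*$ {
    proxy_pass http://10.0.0.1:80;
    proxy_set_header Host $host;
}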

Dumping an MBR Partition table to a new (bigger) Disk and using GPT

While replacing a broken hard disk I just put a bigger disk with 3 TB instead of 2 TB into my server box.
First, because I don't trust the hard disk manufacturers to sell me exactly the same size I currently have, and second, because
I might use that additional 1 TB for some temporary stuff which doesn't need a RAID.

2 TB was still barely usable with fdisk, but 3 TB will not be. Therefore I needed to dump the partition table, convert it on the fly and load it onto the new disk. Sounded complicated, but it's incredibly easy once you find the right switches.

Normally I would use sfdisk to dump and restore partitions on the command line, but it doesn't like GPT partitions.
For this I use gdisk, which can handle both formats (it can even convert TO MBR, if someone has that weird need).

sda is my working 2 TB disk, which has some 300 GB partitions (I like to split disks into smaller parts; it makes RAID rebuilds easier).


    Device Boot Start End Blocks Id System
    /dev/sda1 63 4883759 2441848+ fd Linux raid autodetect
    Partition 1 does not start on physical sector boundary.
    /dev/sda2 4883760 786140774 390628507+ fd Linux raid autodetect
    /dev/sda3 786140775 1567397789 390628507+ fd Linux raid autodetect
    Partition 3 does not start on physical sector boundary.
    /dev/sda4 1567397790 3907024064 1169813137+ 5 Extended
    Partition 4 does not start on physical sector boundary.
    /dev/sda5 1567397853 2348654804 390628476 fd Linux raid autodetect
    Partition 5 does not start on physical sector boundary.
    /dev/sda6 2348654868 3129911819 390628476 fd Linux raid autodetect
    Partition 6 does not start on physical sector boundary.
    /dev/sda7 3129911883 3907024064 388556091 fd Linux raid autodetect
    Partition 7 does not start on physical sector boundary.

Now I just "open" my sda disk with gdisk, and gdisk immediately warns me that it will be converted to GPT if I save, which I don't want to do.
But gdisk already has the GPT format in memory, so I can create a backup of it with the command "b" and save it to a file named "partitions".

After this I quit gdisk without writing, open my new 3 TB disk (/dev/sdb) and load that partition table. gdisk has several commands that say they load backups, but some of them use the on-disk GPT backup partition table; we want the one that loads a saved backup file.
When the backup is loaded I recheck with "p" and, if it looks OK, save it with "w".

    All Commands:


# 2 TB working RAID member
gdisk /dev/sda
> b                                  # back up GPT data to a file
Backup> Enter the filename "partitions"
> q                                  # quit without writing (keeps sda as MBR)
# open the new disk
gdisk /dev/sdb
> r                                  # recovery and transformation menu
> l                                  # load partition data from a backup file
Restore> Enter the filename "partitions"
> p                                  # print the table and check it
> w                                  # write the GPT to /dev/sdb
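
Afterwards it doesn't hurt to make the kernel re-read the new table and double check the result, for example:

partprobe /dev/sdb
gdisk -l /dev/sdb    # should show the same layout as /dev/sda, just as GPT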

    Whitelisting wordpress blog urls OR f____ you haxor!!

WordPress seems to be a big collection of bugs and holes, or maybe it is just the most attacked project on the planet.
Nevertheless it's actually quite easy to make it completely secure by denying access to all but the content links.

In our setup we have a dedicated Apache host that serves the PHP pages and contains the database.
In front of this system sits an nginx protecting it, so we will put the security rules into nginx (which also gets more trust from me than Apache httpd).

The nginx in front of our (SEO optimized) WordPress blog just gets these rules.
All but the last location block allow access to content (or rather forward it to our Apache host).
The last catch-all block denies access and asks for authentication, so that we admins/moderators etc. can still use the admin pages.

    location ~* ^/$ {
    proxy_pass http://10.5.8.12:80;
    proxy_set_header Host $host;
    }

    location ~* ^/[a-z-]*/$ {
    proxy_pass http://10.5.8.12:80;
    proxy_set_header Host $host;
    }

    location ~* ^/wp-content/.*$ {
    proxy_pass http://10.5.8.12:80;
    proxy_set_header Host $host;
    }

location ~* ^/sitemap(index)?\.xml$ {
proxy_pass http://10.5.8.12:80;
proxy_set_header Host $host;
}

location ~* ^/robots\.txt$ {
proxy_pass http://10.5.8.12:80;
proxy_set_header Host $host;
}

location ~* ^/wp-includes/js/jquery/jquery(-migrate)?(\.min)?\.js$ {
proxy_pass http://10.5.8.12:80;
proxy_set_header Host $host;
}

location ~* ^/wp-includes/images/smilies/[a-z-_]*\.gif$ {
proxy_pass http://10.5.8.12:80;
proxy_set_header Host $host;
}

    location / {
    proxy_pass http://10.5.8.12:80/;
    proxy_set_header Host $host;
    proxy_set_header Authorization "";
    auth_basic "Restricted";
    auth_basic_user_file /etc/nginx/htpasswd;
    }

And for directories that we know contain no PHP content, we also need to disable PHP execution on the Apache side.
This is done by adding a .htaccess file with the following content:

    RemoveHandler .php .phtml .php3
    RemoveType .php .phtml .php3
    php_flag engine off

Do this at least for wp-content, which only contains CSS and so on.
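
A quick way to verify the whitelist from the outside (hostname is a placeholder):

# content URLs pass through to the apache backend
curl -I https://blog.example.com/some-post/        # expect 200 (if the post exists)
# everything else hits the basic auth wall
curl -I https://blog.example.com/wp-login.php      # expect 401
curl -I https://blog.example.com/xmlrpc.php        # expect 401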

    Check Microsoft-Windows-Backup for Success or Failure via Nagios & nsclient++

As I've tried quite a few tools to get a backup check to work (wbadmin related, PowerShell based, etc.) and didn't find a good solution at first, I'll explain how to get it working quite easily with NSClient++. Once you have the commands below it's easy ;).

Fetch NSClient++ from http://www.nsclient.org/ and don't forget to enable the NRPE daemon, as we will use it to fetch a "backup success". Also set your Nagios server as an allowed address.
After NSClient++ is installed it isn't yet ready to answer our queries correctly, because "allow arguments" is not enabled and we need it.

Start up notepad as Administrator to edit the ini file. It will be in c:\Programs\nsclient++\nsclient.ini (hidden though, so just enter the filename as it is).
Sidenote: a !!BIG!! thanks to Microsoft for hiding files we need and naming important directories differently in each locale.

Put the following block at the bottom. (After a fresh installation you shouldn't have an NRPE server block yet; if you already have one, adapt it accordingly.)

    
[/settings/NRPE/server]
allow arguments=1
allow nasty characters=1
    

Your server should now be reachable via NRPE and accept arguments to its commands, so you can use the following command from a Linux server to check the Windows backup (the nagios-plugins-basic package is needed on Debian):

    /usr/lib/nagios/plugins/check_nrpe -H YOURSERVER.local -c Check_EventLog -a "file=Microsoft-Windows-Backup" file=Application "scan-range=-1d" "filter=source like 'Backup' AND level = 'error'" "crit=count>0"

This will return a CRITICAL status if it finds an error in the Backup event log within the last day. Sadly I couldn't convince check_eventlog to be happy about a successful backup after a failed one, so a failed backup will show as CRITICAL for a day in Nagios (just acknowledge it once you have fixed it).
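
On the Nagios side this can then be wrapped into a command and service definition, roughly like this (host and template names are placeholders for your own setup):

define command {
    command_name    check_windows_backup_errors
    command_line    /usr/lib/nagios/plugins/check_nrpe -H $HOSTADDRESS$ -c Check_EventLog -a "file=Microsoft-Windows-Backup" file=Application "scan-range=-1d" "filter=source like 'Backup' AND level = 'error'" "crit=count>0"
}

define service {
    use                     generic-service
    host_name               winserver01
    service_description     Windows Backup Errors
    check_command           check_windows_backup_errors
}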

But we also want to check that backups have actually been created in the last 2 days (change -2d for another time range):

    /usr/lib/nagios/plugins/check_nrpe -H YOURSERVER.local -c Check_EventLog -a "file=Microsoft-Windows-Backup" file=Application "show-all" "scan-range=-2d" "filter=id=4 " "ok=id=4" "warn=count=0" "crit=count=0"

This will return a CRITICAL state if no success log entries were found. Because of the inner workings of check_eventlog (it checks the count for each log entry) it is currently not possible to issue a WARNING state if only 2 successful backups were found.
This works for Windows Server 2012. If you have a different version you might have to change the id from 4 to something else; look at the event log for a success message and copy its id.

Hope this helps, as I had to try around for quite some time to convince Check_EventLog to work the way I wanted.

    Foreman+Puppet and Foreman Proxy Installation on Debian

This is just a very short walkthrough of how to install Foreman on a Debian system and (if needed) a Foreman proxy on another system.
It's also possible to install the proxy on the same system, but in our case the networks are different, which is why we need a "distant" Foreman proxy.
In this setup we will use Puppet 3 instead of the Puppet 2 which is currently in Debian stable. Therefore we have to add the puppetlabs DEB repository first.

    
    wget --no-check-certificate https://apt.puppetlabs.com/puppetlabs-release-wheezy.deb
    dpkg -i puppetlabs-release-wheezy.deb
    

    To get foreman we also have to add the “theforeman” Repository.

    
    echo "deb http://deb.theforeman.org/ wheezy 1.4" > /etc/apt/sources.list.d/foreman.list
    wget -q http://deb.theforeman.org/foreman.asc -O- | apt-key add -
    

Now it's easy to install foreman-installer, and with it Foreman, on our system.
This will fetch Puppet, Foreman and all needed dependencies like PostgreSQL, Apache etc.

    
    apt-get update && apt-get install foreman-installer
    foreman-installer
    

After the installation you can log in with admin/changeme at
https://myforemanserver.local/

I've set up LDAP in my Puppet server with our Microsoft domain.
Just set the host, baseDN, user + password according to your infrastructure.
If you have Active Directory you can set the third page like in this screenshot.

[Screenshot: LDAP settings]

    And your Foreman powered Puppet is ready to go.

    Proxy

Now let's install our remote Foreman proxy (on a freshly installed Debian system).
Again install the Foreman deb sources as mentioned above (but leave the Puppet sources).
One way would be to just install the package, configure the proxy yourself and add it in the master Foreman:

    
    apt-get install foreman-proxy
    

But my preferred way is to again let foreman-installer do it (with *some* options).

    Prepare the rndc key so the proxy can update the bind daemon running on the master foreman.
    Just create the directory /etc/foreman-proxy and put the *rndc.key* file in there.

To use the installer for the proxy we have to fetch some information from our Foreman server.
The hostname should of course be correct, and we have to get the OAuth keys.
The keys are hidden under Foreman Server -> Manage -> Settings -> Auth (copy oauth_consumer_key & oauth_consumer_secret).

I've also pre-created the certificate for the proxy on the Puppet master:

    
    puppet cert generate fmproxy.mycompany.local
    

Copy /var/lib/puppet/ssl/certs/fmproxy.mycompany.local.pem,
/var/lib/puppet/ssl/private_keys/fmproxy.mycompany.local.pem and /var/lib/puppet/ssl/certs/ca_crt.pem onto the new server, into
/etc/foreman-proxy/ssl/certs, /etc/foreman-proxy/ssl/private_keys and /etc/foreman-proxy/ssl/ca (matching the installer options below).
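
Copying them over boils down to something like this (run on the Puppet master, "fmproxy" being the new proxy host):

scp /var/lib/puppet/ssl/certs/fmproxy.mycompany.local.pem fmproxy:/etc/foreman-proxy/ssl/certs/
scp /var/lib/puppet/ssl/private_keys/fmproxy.mycompany.local.pem fmproxy:/etc/foreman-proxy/ssl/private_keys/
scp /var/lib/puppet/ssl/certs/ca_crt.pem fmproxy:/etc/foreman-proxy/ssl/ca/ca_crt.pem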

    
    foreman-installer --no-enable-foreman --enable-foreman-proxy --foreman-proxy-dhcp true --foreman-proxy-tftp true --foreman-proxy-dns  true  --foreman-foreman-url=https://puppet01.mycompany.local  --foreman-proxy-foreman-base-url=https://puppet01.mycompany.local --foreman-proxy-oauth-consumer-key=sb8fZvY4Sfu4zpsjzcx7eh7MuHuXCqKq --foreman-proxy-oauth-consumer-secret=vc8QkxEhmqXcRVJdhvKQZZmcQP9BNHVi --foreman-proxy-ssl-key=/etc/foreman-proxy/ssl/private_keys/fmproxy.mycompany.local.pem --foreman-proxy-ssl-cert=/etc/foreman-proxy/ssl/certs/fmproxy.mycompany.local.pem  --foreman-proxy-puppetca false --foreman-proxy-ssl-ca=/etc/foreman-proxy/ssl/ca/ca_crt.pem  --foreman-proxy-dns-server=puppet01.mycompany.local --foreman-proxy-dns-zone=ops.mycompany.local --foreman-proxy-keyfile=/etc/foreman-proxy/rndc.key
    

If we are lucky, foreman-proxy should now be installed 😉 (it wasn't for me, so I executed this command in various forms multiple times).
It has also registered itself in the Foreman master server.

The setup is not yet completely usable, because some settings are not yet correct.

    */etc/dhcp/dhcpd.conf*

Update the routers setting according to your network setup (the 192.168.100.1 which Foreman entered is not correct most of the time).
Don't forget to restart your dhcpd server after changing the settings (/etc/init.d/isc-dhcp-server restart).
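
The relevant bit in dhcpd.conf is the "option routers" line inside the subnet declaration, something like (addresses are examples only):

subnet 192.168.100.0 netmask 255.255.255.0 {
    range 192.168.100.50 192.168.100.200;
    option routers 192.168.100.254;    # set this to your real gateway
}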

    This should be enough to install/provision your servers via foreman. hfgl

    nock – Mocking HTTP requests in NodeJs

    When writing unit tests for a NodeJs app you sometimes want to mock out HTTP requests. Nock (https://github.com/pgte/nock) is a really nice lib for doing just that.

    It’s installed in the usual way with npm:

    
    npm install nock
    
    

    and can be included in a test like this:

    
    var nock = require('nock');
    
    

Now, let's say you want to mock out an API call to http://example.com/api/product/15, expecting some basic product data as a result:

    
    nock('http://example.com')
    .get('/api/product/15')
    .reply(200, {
        data: {
            id: '15',
            name: 'Flamethrower 5000',
            price: '5000'
        }
    });
    
    

You can put this, for example, in the 'beforeEach' block of a mocha or jasmine test.
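
A request to the mocked URL then behaves like a real one; a minimal (hypothetical) mocha test using Node's built-in http module could look like this:

var http = require('http');
var assert = require('assert');

it('returns the mocked product', function(done) {
    http.get('http://example.com/api/product/15', function(res) {
        var body = '';
        res.on('data', function(chunk) { body += chunk; });
        res.on('end', function() {
            // nock answers instead of the real server
            assert.equal(JSON.parse(body).data.name, 'Flamethrower 5000');
            done();
        });
    });
});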

    The basic example above will get you pretty far, but as you can read in the excellent docs (https://github.com/pgte/nock), you can do all kinds of fancy stuff with nock such as:

    * get / post / put / delete
    * custom headers for requests and responses
    * http and https
    * response repetition
    * chaining
    * filtering & matching (e.g.: http://www.example.com/api/*)
    * logging
    * recording (this is really cool for complex responses, you can record and then play back http requests and responses)
    * … much much more!

    Pretty much everything you’ll ever use in your applications, you will be able to mock out with a clean and simple syntax. Really cool.

    Have fun nocking :)

    ~m

    ES6 Harmony Quick Development Setup

    ECMAScript 6, also called “harmony” is the next version of JavaScript and it’s packed with all kinds of awesome improvements and fancy features. (GitHub page for ES6 features).

Unfortunately, harmony is not out yet and is in fact still a work in progress. There are some features already implemented in the newest versions of Chrome and Firefox, but it will be some time before we will be able to enjoy the full range of features available within the proposal. There is, however, a way to test harmony right now using the magical technique of transpiling, i.e. compiling JS files written in ES6 back to ES5 so they can be executed in all of today's browsers as well as in NodeJS.

    This post will show an easy way to get a development setup running for ES6 with tests and all using Google’s traceur compiler (traceur-compiler). There is also a grunt-task for this compiler, which makes it easier to use (grunt-traceur).

    I expect you have npm, grunt and grunt-cli already installed, there are lots of tutorials on the web on how to do this.

    The first step is to install grunt-traceur using

    npm install grunt-traceur

    Then, create a Gruntfile.js and fill it with the following contents:

    
    module.exports = function(grunt) {
        grunt.initConfig({
            traceur: {
                options: {
                    'blockBinding': true 
                },
                custom: {
                    files: {
                        'build/all.js': ['js/**/*.js'] 
                    } 
                }
            }
        });
    
        grunt.loadNpmTasks('grunt-traceur');
    };
    

    The ‘blockBinding’ flag enables the use of ‘let’ and ‘const’, the new block-scoped variable definition mechanisms.
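
For example (just to illustrate the block scoping that blockBinding enables):

for (let i = 0; i < 3; i++) {
    // i only exists inside this loop body
}
const limit = 3;    // cannot be reassigned
// console.log(i);  // would be a ReferenceError here, unlike with var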

    Now, create a ‘js’ folder where we will put all of our JavaScript files. If you execute

    grunt traceur

in your shell, it will take all .js files within the 'js' folder and compile them into the 'all.js' file within the build folder.

Important note: If you want to use the compiled 'all.js' file, you need to include traceur-runtime.js before including the compiled sources, because some of the harmony features need workarounds implemented in that file. The file can be found in 'node_modules/traceur/bin/' within the 'grunt-traceur' npm folder. You can also find it in the official GitHub repo mentioned above.
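
In a plain HTML page that boils down to something like this (path as used in the karma config below):

<script src="node_modules/grunt-traceur/node_modules/traceur/bin/traceur-runtime.js"></script>
<script src="build/all.js"></script>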

Important note 2: If you are using JSHint, you need to add the "esnext" flag to your .jshintrc to make it ES6-aware.

    Now, we will write the Conway’s Game of Life implementation from (Conway’s Game of Life implementation) in ES6:

    index.js

    
    var getCellRepresentation = function(x, y) {
        return "x" + x + "y" + y; 
    };
     
    class Cell {
        constructor(x, y, alive) {
            this.x = x;
            this.y = y;
            this.alive = alive;
        }
    
        isAlive() {
            return this.alive; 
        }
    }
     
    class Board {
        constructor() {
            this.cells = {}; 
        }
    
        addCell(cell) {
            this.cells[getCellRepresentation(cell.x, cell.y)] = cell; 
        }
    
        getCellAt(x, y) {
            return this.cells[getCellRepresentation(x, y)]; 
        }
    
        getAliveNeighbors(cell) {
            let x = cell.x,
                y = cell.y,
                aliveCells = 0;
    
            for (let i = -1; i < 2; i++) {
                for(let j = -1; j < 2; j++) {
                    if(i === 0 && i === j) {
                        continue;
                    }
                    let currentCell = this.getCellAt(x + i, y + j);
    
                    if(currentCell && currentCell.isAlive()) {
                        aliveCells++;
                    }
                }
            }
            return aliveCells;
        }
    
        calculateNextState(cell) {
            let tempCell = new Cell(cell.x, cell.y, cell.alive),
                livingNeighbors = this.getAliveNeighbors(cell);
    
            if(cell.isAlive()) {
                if(livingNeighbors === 2 || livingNeighbors === 3) {
                    tempCell.alive = true;
                } else {
                    tempCell.alive = false;
                }
            } else {
                if(livingNeighbors === 3) {
                    tempCell.alive = true;
                }
            }
            return tempCell;
        }
    
        step() {
            let cells = this.cells,
                tempBoard = {},
                keys = Object.keys(cells);
    
            keys.forEach((c) => {
                let cell = this.cells[c],
                    newCell = this.calculateNextState(cell);
                tempBoard[c] = newCell;
            });
    
            this.cells = tempBoard;
        }
    }
    

This implementation actually doesn't even use many of the new fancy things: just classes, the fat-arrow syntax for nested closures (=>) and let instead of var. But especially the new classes show how well structured your future modules can be and that you don't have to fight for a nice code structure anymore 😉

We also want to rewrite our Jasmine tests. We have lots of closures there, but not much else, so again not many new features are used (code at the bottom of this post).

    The two files we created are our index.js and spec.js in the ‘js’ folder. Now we use grunt-traceur to compile them to ES5 and run the tests using karma (karma).

Important to note here is that this won't work in PhantomJS, because its JS engine is really old and its ES5 support is lacking. We will have to use Chrome or something else as our test browser in karma.conf.js.

    
    module.exports = function(config) {
      config.set({
        basePath: '',
        frameworks: ['jasmine'],
        files: [
          'node_modules/grunt-traceur/node_modules/traceur/bin/traceur-runtime.js',
          'build/*.js'
        ],
        reporters: ['progress'],
        port: 9876,
        colors: true,
        logLevel: config.LOG_INFO,
        autoWatch:false, 
        browsers: ['Chrome'],
        captureTimeout: 60000,
        singleRun: false
      });
    };
    

As mentioned above, we have to include the traceur-runtime here as well. Now we can run 'karma start' and 'karma run', see our green bars and be happy!

    Alright, that’s it! A short glimpse into the world of ES6. I believe that, although it is a little cumbersome to get running and there are some issues still with the tools we are used to in our daily development, starting to learn how to use harmony and getting good at it will pay off big within the next few years for every professional JavaScript developer and…it’s a LOT of fun 😀

    Code:
    spec.js

    
    /*global describe, it, expect, sinon, stub, assert, before, beforeEach, afterEach, Board, Cell */
    /*jshint expr:true */
    describe('Conways Game of Life', () => {
    
        it('is a sanity test', () => {
            expect(true).toBe(true);
        });
        let board, cell; 
    
        beforeEach(() => {
            board = new Board();
            cell = new Cell(1, 1, true);
            board.addCell(cell);
        });
    
        describe('addCell', () => {
    
            it('adds a cell to a board', () => {
                expect(board.cells.x1y1).toEqual(cell);
            });
    
        });
    
        describe('getCellAt', () => {
    
            it('returns the cell at the provided coordinates', () => {
                expect(board.getCellAt(1, 1)).toEqual(cell);
            });
    
        });
    
        describe('getAliveNeighbors', () => {
    
            it('returns 0 if there are no other cells', () => {
                expect(board.getAliveNeighbors(cell)).toEqual(0);
            });
    
            it('returns 1 if there is one alive cell next to the cell', () => {
                let neighborCell = new Cell(0, 1, true);
                board.addCell(neighborCell);
    
                expect(board.getAliveNeighbors(cell)).toEqual(1);
            });
    
            it('returns 8 if there are 8 neighbors available', () => {
                board.addCell(new Cell(0, 1, true));
                board.addCell(new Cell(0, 2, true));
                board.addCell(new Cell(0, 0, true));
                board.addCell(new Cell(2, 1, true));
                board.addCell(new Cell(1, 0, true));
                board.addCell(new Cell(2, 2, true));
                board.addCell(new Cell(1, 2, true));
                board.addCell(new Cell(2, 0, true));
    
                expect(board.getAliveNeighbors(cell)).toEqual(8);
            });
    
            it('returns 1 if there are 7 dead cells next available', () => {
                board.addCell(new Cell(0, 1, true));
                board.addCell(new Cell(0, 2, false));
                board.addCell(new Cell(0, 0, false));
                board.addCell(new Cell(2, 1, false));
                board.addCell(new Cell(1, 0, false));
                board.addCell(new Cell(2, 2, false));
                board.addCell(new Cell(1, 2, false));
                board.addCell(new Cell(2, 0, false));
    
                expect(board.getAliveNeighbors(cell)).toEqual(1);
            });
        });
    
        describe('calculateNextState', () => {
    
            it('dies if there are less than 2 living neighbors', () => {
                board.addCell(new Cell(0, 0, true));
    
                expect(board.calculateNextState(cell).isAlive()).toBe(false);
            });
    
            it('dies if there are more than 3 living neighbors', () => {
                board.addCell(new Cell(0, 1, true));
                board.addCell(new Cell(0, 2, true));
                board.addCell(new Cell(0, 0, true));
                board.addCell(new Cell(1, 2, true));
    
                expect(board.calculateNextState(cell).isAlive()).toBe(false);
            });
    
            it('lives if there are 2 or 3 living neighbors', () => {
                board.addCell(new Cell(0, 0, true));
                board.addCell(new Cell(0, 1, true));
    
                expect(board.calculateNextState(cell).isAlive()).toBe(true);
            });
    
            it('comes back to live if there are exactly 3 living neighbors ', () => {
                board.addCell(new Cell(0, 0, true));
                board.addCell(new Cell(0, 1, true));
                board.addCell(new Cell(0, 2, true));
                cell.alive = false; 
    
                expect(board.calculateNextState(cell).isAlive()).toBe(true);
            });
    
        });
    
        describe('step', () => {
    
            it('calculates the new state for all dying cells', () => {
                board.addCell(new Cell(0, 0, true));
    
                board.step();
    
                expect(board.getCellAt(0, 0).isAlive()).toBe(false);
                expect(board.getCellAt(1, 1).isAlive()).toBe(false);
            });
    
            it('calculates the new state for all living cells', () => {
                board.addCell(new Cell(0, 0, true));
                board.addCell(new Cell(1, 2, true));
    
                board.step();
    
                expect(board.getCellAt(0, 0).isAlive()).toBe(false);
                expect(board.getCellAt(1, 1).isAlive()).toBe(true);
            });
    
            it('calculates the new state correctly for many cells', () => {
                board.addCell(new Cell(0, 1, true));
                board.addCell(new Cell(0, 2, true));
                board.addCell(new Cell(0, 0, false));
                board.addCell(new Cell(2, 1, true));
                board.addCell(new Cell(1, 0, true));
                board.addCell(new Cell(2, 2, true));
                board.addCell(new Cell(1, 2, false));
                board.addCell(new Cell(2, 0, false));
                board.step();
    
                expect(board.getCellAt(1, 1).isAlive()).toBe(false);
                expect(board.getCellAt(0, 1).isAlive()).toBe(true);
                expect(board.getCellAt(2, 2).isAlive()).toBe(true);
            });
        });
    
        describe('Cell', () => {
    
            it('is either alive or dead', () => {
                expect(cell.isAlive()).toEqual(true);
            });
        });
    });
    

    ~m

    Angular Default Request Headers and Interceptors

    When building web applications with AngularJS, making requests using the $http service is usually a core concern. The documentation of $http is quite extensive. In this post, we will take a look at setting defaults for requests and at registering request interceptors.

    First, defaults. Defining a request default is fairly simple

    
.config(function($httpProvider) {
    $httpProvider.defaults.headers.common['X-Requested-With'] = true;
});
    

    You can also do this directly on $http somewhere in your code, but if you have default headers which won’t change, you can just as well set it in your app’s configuration step.
    With these defaults, you can do simple things, such as setting default headers for your requests e.g. an Authorization Header for basic HTTP Auth:

    
    // only use it for get requests
    $http.defaults.headers.get.Authorization = 'Basic 1233225235'
    

    You can also do more advanced things, such as transforming requests and responses by default:

    
    $httpProvider.defaults.transformResponse = [...array of transformation functions...]
    

The transformResponse and transformRequest defaults take an array of transformation functions which represent a transformation chain.
This can be very useful if you have to transform the data you get from a server for every request and want to save some boilerplate code.
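
For example, a (made up) transformation that unwraps a { data: ... } envelope from every JSON response could be set up like this:

$httpProvider.defaults.transformResponse = [function(data, headersGetter) {
    // note: replacing the default chain also removes Angular's built-in JSON parsing,
    // so parse here ourselves (this sketch assumes the server always returns JSON)
    var parsed = angular.fromJson(data);
    return (parsed && parsed.data !== undefined) ? parsed.data : parsed;
}];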

    So basically you can save lots of code and reduce complexity by using $http’s defaults.

However, you can't be very specific with these defaults: if they are set, they are used for all requests. Sometimes you might want to do certain transformations or apply certain headers just for a few select requests. Of course you could deactivate the defaults and reactivate them again after the request, but that basically defeats the purpose of having them in the first place.

The solution to this is $http's interceptors. Interceptors can be registered for four different modes:
* request – intercept outgoing request
* requestError – intercept outgoing request error
* response – intercept incoming response
* responseError – intercept incoming error response

    The following is a simple example of an interceptor which, if it encounters a status code 403 (permission denied), will redirect the user to the login page.

    
angular.module('InterceptorApp',[])
  .factory('httpErrorInterceptor', function($q, $location) {
    return {
      'responseError': function(rejection) {
        // permission denied, better login!
        if(rejection.status === 403) {
          $location.url('app/login');
        }
        return $q.reject(rejection);
      }
    };
  })
  .config(function($httpProvider) {
    $httpProvider.interceptors.push('httpErrorInterceptor');
  });
    

    The next example shows an interceptor which, when a url with ‘product’ in it is called, adds an Authorization Header.

    
angular.module('InterceptorApp',[])
  .factory('httpProductRequestTransformationInterceptor', function($q, $location) {
    return {
      'request': function(config) {
          if(config.url.indexOf('product') !== -1) {
              config.headers.Authorization = 'BASIC 12345';
          }
          return config;
      }
    };
  })
  .config(function($httpProvider) {
    $httpProvider.interceptors.push('httpProductRequestTransformationInterceptor');
  });
    

    There are countless examples of using interceptors to implicitly add or remove configuration for certain requests as well as transform responses.

Another thing that's important to note is that interceptors can also be pushed as anonymous factory functions, like in the following example, where, when we receive a 404 error, we broadcast a 'page-not-found' event which we can then catch somewhere else for error handling:

    
angular.module('InterceptorApp',[])
  .factory('httpProductRequestTransformationInterceptor', function($q, $location) {
    return {
      'request': function(config) {
          if(config.url.indexOf('product') !== -1) {
              config.headers.Authorization = 'BASIC 12345';
          }
          return config;
      }
    };
  })
  .config(function($httpProvider) {
    $httpProvider.interceptors.push(function($q, $rootScope) {
        return {
            'responseError': function(rejection) {
                if(rejection.status === 404) {
                    $rootScope.$broadcast('page-not-found');
                }
                return $q.reject(rejection);
            }
        };
    });
  });
    

    That’s it. Have fun playing around with these awesome AngularJS features :)

    ~m