Nginx: Match URL-Encoded Stuff in URLs, Umlauts, UTF-8

In my quest to whitelist stupid Drupal and WordPress sites I've encountered a little problem with umlauts in URLs.
Nowadays those are encoded as UTF-8 bytes and converted to URL-encoded form (if your page is also in UTF-8).
If you have an old browser, or copy & paste a URL from a page that is not UTF-8, the URL gets sent in the encoding of your operating system (for my Windows that would be something like ISO-8859-15).

When nginx gets that URL it first decodes it, and that decoded representation is used in location matches and so on. It will do that with some stupid single-byte encoding, because HTTP was, well, (NOT) designed with a character-set option in requests.

    E.g. you have a URL with German umlauts that looks something like “/%C3%B6ffentliche-b%C3%BCcherei”; nginx will represent that as a couple of unreadable (and most probably unwritable) characters mixed into the URL.

    Fortunately there's a very easy fix to get nginx to match that correctly, which is quite hard to find in the documentation. It should be in there, though I didn't find it.

    So to match the mentioned URL, just put “(*UTF8)” in front of your regex:

    location ~* "(*UTF8)^/[öüäÖÜÄßA-Za-z0-9-]*$" {

    Also put a version in there without the (*UTF8), as some browsers might send the URL in ISO encoding.
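
    Putting both together, the pair of location blocks might look like this (a sketch; what goes inside the blocks is whatever your site needs):

    # UTF-8 encoded requests (modern browsers)
    location ~* "(*UTF8)^/[öüäÖÜÄßA-Za-z0-9-]*$" {
        # serve or proxy as usual
    }

    # the same pattern without (*UTF8) for legacy non-UTF-8 requests
    location ~* "^/[öüäÖÜÄßA-Za-z0-9-]*$" {
        # serve or proxy as usual
    }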

    Dumping an MBR Partition Table to a New (Bigger) Disk and Using GPT

    While replacing a broken hard disk I just put a bigger 3 TB disk instead of the 2 TB one into my server box.
    First because I don't trust the hard disk manufacturers to sell me exactly the same size I currently have, and secondly because
    I might use that additional 1 TB for some temporary stuff which doesn't need RAID.

    2 TB was still barely usable with fdisk, but 3 TB will not be. Therefore I needed to dump and load the partition table and convert it on the fly. Sounded complicated, but it's incredibly easy once I found the right switches.

    Normally I would use sfdisk to dump and restore partitions on the command line, but it doesn't like GPT partitions.
    For this I use gdisk, which can handle both formats (it can even convert TO MBR, if someone has that weird need).

    sda is my working 2 TB disk which has some 300 GB partitions (I like to split disks into smaller parts; it makes RAID rebuilds easier).


    Device Boot Start End Blocks Id System
    /dev/sda1 63 4883759 2441848+ fd Linux raid autodetect
    Partition 1 does not start on physical sector boundary.
    /dev/sda2 4883760 786140774 390628507+ fd Linux raid autodetect
    /dev/sda3 786140775 1567397789 390628507+ fd Linux raid autodetect
    Partition 3 does not start on physical sector boundary.
    /dev/sda4 1567397790 3907024064 1169813137+ 5 Extended
    Partition 4 does not start on physical sector boundary.
    /dev/sda5 1567397853 2348654804 390628476 fd Linux raid autodetect
    Partition 5 does not start on physical sector boundary.
    /dev/sda6 2348654868 3129911819 390628476 fd Linux raid autodetect
    Partition 6 does not start on physical sector boundary.
    /dev/sda7 3129911883 3907024064 388556091 fd Linux raid autodetect
    Partition 7 does not start on physical sector boundary.

    Now I just “open” my sda disk with gdisk, and gdisk immediately warns me that the disk will be converted to GPT if I save, which I don't want to do.
    But gdisk already has the GPT format in memory, so I can create a backup of it with the command “b” and save it to a file named partitions.

    After this I quit gdisk without writing, open my new 3 TB disk (/dev/sdb) and load that partition table. gdisk has several commands that say they load backups, but some of them use the on-disk GPT backup partition table; we want to load a saved backup file instead.
    When the backup is loaded I recheck with “p” and, if it looks OK, save it with “w”.

    All Commands:


    # 2 TB working RAID member
    gdisk /dev/sda
    > b                    # back up the in-memory GPT data
    Backup> Enter the filename "partitions"
    > q                    # quit WITHOUT writing changes
    # open the new disk
    gdisk /dev/sdb
    > r                    # recovery & transformation menu
    > l                    # load partition data from a backup file
    Restore> Enter the filename "partitions"
    > p                    # print the table to verify
    > w                    # write it to disk
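
    For scripting this, sgdisk (the non-interactive companion of gdisk, same package) should be able to do the same in two commands (a sketch I haven't run on this exact setup):

    # dump the partition table (converted to GPT in memory) to a file
    sgdisk --backup=partitions /dev/sda
    # write that backup to the new disk
    sgdisk --load-backup=partitions /dev/sdb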

    Whitelisting WordPress Blog URLs OR f____ you haxor!!

    WordPress seems to be a big collection of bugs and holes, or maybe it is just the most attacked project on the planet.
    Nevertheless it's actually quite easy to lock it down by denying access to everything but the content links.

    In our setup we have a dedicated Apache host that serves the PHP pages and contains the database.
    Protecting it is an nginx in front of this system; therefore we will put the security stuff into the nginx (which also gets more trust from me than Apache httpd).

    The nginx in front of our WordPress (SEO optimized) blog just gets the rules below.
    All but the last location block allow access to content (or rather forward requests to our Apache host).
    The last, catch-all block denies access and asks for authentication, so that we admins/moderators etc. can use the admin pages.

    location ~* ^/$ {
        proxy_pass http://10.5.8.12:80;
        proxy_set_header Host $host;
    }

    location ~* ^/[a-z-]*/$ {
        proxy_pass http://10.5.8.12:80;
        proxy_set_header Host $host;
    }

    location ~* ^/wp-content/.*$ {
        proxy_pass http://10.5.8.12:80;
        proxy_set_header Host $host;
    }

    location ~* ^/sitemap(index)?\.xml$ {
        proxy_pass http://10.5.8.12:80;
        proxy_set_header Host $host;
    }

    location ~* ^/robots\.txt$ {
        proxy_pass http://10.5.8.12:80;
        proxy_set_header Host $host;
    }

    location ~* ^/wp-includes/js/jquery/jquery(-migrate)?(\.min)?\.js$ {
        proxy_pass http://10.5.8.12:80;
        proxy_set_header Host $host;
    }

    location ~* ^/wp-includes/images/smilies/[a-z_-]*\.gif$ {
        proxy_pass http://10.5.8.12:80;
        proxy_set_header Host $host;
    }

    location / {
        proxy_pass http://10.5.8.12:80/;
        proxy_set_header Host $host;
        proxy_set_header Authorization "";
        auth_basic "Restricted";
        auth_basic_user_file /etc/nginx/htpasswd;
    }
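
    Since every block repeats the same proxy settings, they could also live in a small include file (a sketch; /etc/nginx/wp_proxy.inc is a made-up name):

    # /etc/nginx/wp_proxy.inc
    proxy_pass http://10.5.8.12:80;
    proxy_set_header Host $host;

    location ~* ^/wp-content/.*$ {
        include /etc/nginx/wp_proxy.inc;
    }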

    And for directories that we know don't contain PHP content, we should disable PHP as well.
    This is done by adding a .htaccess file with the following content:

    RemoveHandler .php .phtml .php3
    RemoveType .php .phtml .php3
    php_flag engine off

    Do this for the wp-content directory, which only contains CSS and so on.

    Check Microsoft-Windows-Backup for Success or Failure via Nagios & nsclient++

    As I've tried quite a few tools to get a backup check to work (wbadmin related, PowerShell based, etc.) and didn't find a good solution at first, I'll explain how to get it working with nsclient++ quite easily. Once you have the commands below it's easy ;).

    Fetch nsclient++ from http://www.nsclient.org/ and don't forget to enable the NRPE daemon, as we will use it to fetch a “backup success”. Also set your Nagios server as an allowed address.
    After nsclient++ is installed it isn't yet ready to answer our queries correctly, as “allow arguments” is not enabled and we need it.

    Start up “notepad” as Administrator to edit the ini file. It will be in c:\Programs\nsclient++\nsclient.ini (hidden though, so just enter the filename as it is).
    Sidenote: !!BIG!! thanks to Microsoft for hiding files we need and naming important directories differently in each locale.

    Put the following block at the bottom (after a fresh installation you shouldn't have an NRPE server block yet; if you already have one, adapt it accordingly):

    
    [/settings/NRPE/server]
    allow arguments=1
    allow nasty characters=1
    

    Restart the nsclient++ service so it picks up the new settings. Your server should now be reachable via NRPE and accept arguments to its commands, so you can use the following command from a Linux server to check the Windows backup (the nagios-plugins-basic package is needed on Debian):

    /usr/lib/nagios/plugins/check_nrpe -H YOURSERVER.local -c Check_EventLog -a "file=Microsoft-Windows-Backup" file=Application "scan-range=-1d" "filter=source like 'Backup' AND level = 'error'" "crit=count>0"

    This will throw out a CRITICAL status if it finds an error in the backup event log within the last day. Sadly I couldn't convince Check_EventLog to be happy about a successful backup after a failed one. Therefore a failed backup will show as CRITICAL for a day in Nagios (just acknowledge it once you've fixed it).

    But we also want to check whether backups have been created at all in the last 2 days (change -2d for another time range):

    /usr/lib/nagios/plugins/check_nrpe -H YOURSERVER.local -c Check_EventLog -a "file=Microsoft-Windows-Backup" file=Application "show-all" "scan-range=-2d" "filter=id=4 " "ok=id=4" "warn=count=0" "crit=count=0"

    This will return a CRITICAL state if no success logs were found. Because of the inner workings of Check_EventLog (it checks the count for each log entry) it is currently not possible to issue a WARNING state if only 2 successful backups were found.
    This works for Windows Server 2012. If you have a different version you might have to change the id from 4 to something else: look for a success message in the event log and copy its id.
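
    On the Nagios side this can be wired up like any other NRPE check; a sketch (host and command names are made up):

    define command{
        command_name    check_win_backup_errors
        command_line    /usr/lib/nagios/plugins/check_nrpe -H $HOSTADDRESS$ -c Check_EventLog -a "file=Microsoft-Windows-Backup" file=Application "scan-range=-1d" "filter=source like 'Backup' AND level = 'error'" "crit=count>0"
    }

    define service{
        use                     generic-service
        host_name               winserver01
        service_description     Windows Backup Errors
        check_command           check_win_backup_errors
    }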

    Hope this helps, as I had to experiment for quite some time to convince Check_EventLog to work the way I wanted it to.

    Foreman+Puppet and Foreman Proxy Installation on Debian

    This is just a very short walk-through of how to install Foreman on a Debian system and (if needed) a Foreman proxy on another system.
    It's also possible to install the proxy on the same system, but in our case the networks are different, which is why we need a “distant” Foreman proxy.
    In this setup we will use Puppet 3 instead of the Puppet 2 which is currently in Debian stable. Therefore we have to add the puppetlabs DEB repository first.

    
    wget --no-check-certificate https://apt.puppetlabs.com/puppetlabs-release-wheezy.deb
    dpkg -i puppetlabs-release-wheezy.deb
    

    To get Foreman we also have to add the “theforeman” repository.

    
    echo "deb http://deb.theforeman.org/ wheezy 1.4" > /etc/apt/sources.list.d/foreman.list
    wget -q http://deb.theforeman.org/foreman.asc -O- | apt-key add -
    

    Now it's easy to install foreman-installer, and with it Foreman, on our system.
    This will fetch Puppet, Foreman and all needed dependencies like PostgreSQL, Apache etc.

    
    apt-get update && apt-get install foreman-installer
    foreman-installer
    

    After the installation you can log in with admin/changeme at
    https://myforemanserver.local/

    I've set up LDAP in my Foreman server against our Microsoft domain.
    Just set the host, baseDN and user + pass according to your infrastructure.
    If you have Active Directory you can fill in the third page like in this screenshot:

    [screenshot: LDAP authentication source settings]

    And your Foreman powered Puppet is ready to go.

    Proxy

    Now let's install our remote Foreman proxy (on a freshly installed Debian system).
    Again, install the Foreman deb sources as mentioned above (but leave out the Puppet sources).
    One way would be to just install the package and configure the proxy yourself (and add it in the master Foreman):

    
    apt-get install foreman-proxy
    

    But my preferred way is to again let it be done by foreman-installer (with *some* options).

    Prepare the rndc key so the proxy can update the BIND daemon running on the master Foreman.
    Just create the directory /etc/foreman-proxy and put the *rndc.key* file in there.

    To use the installer for the proxy we have to fetch some info from our Foreman server.
    The hostname should be correct, of course, and we have to get the OAuth keys.
    The keys are hidden in the Foreman server under Manage – Settings – Auth (copy oauth_consumer_key & oauth_consumer_secret).

    I've also pre-created the cert for the proxy on the Puppet master:

    
    puppet cert generate fmproxy.mycompany.local
    

    Copy /var/lib/puppet/ssl/certs/fmproxy.mycompany.local.pem,
    /var/lib/puppet/ssl/private_keys/fmproxy.mycompany.local.pem and /var/lib/puppet/ssl/certs/ca_crt.pem onto the new server into
    /etc/foreman-proxy/ssl/certs, ssl/private_keys and ssl/ca respectively (matching the paths passed to the installer below).
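
    For the copying, something like this might do (a sketch, assuming root SSH access to the proxy box):

    scp /var/lib/puppet/ssl/certs/fmproxy.mycompany.local.pem fmproxy:/etc/foreman-proxy/ssl/certs/
    scp /var/lib/puppet/ssl/private_keys/fmproxy.mycompany.local.pem fmproxy:/etc/foreman-proxy/ssl/private_keys/
    scp /var/lib/puppet/ssl/certs/ca_crt.pem fmproxy:/etc/foreman-proxy/ssl/ca/

    Then run the installer: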

    
    foreman-installer --no-enable-foreman --enable-foreman-proxy \
      --foreman-proxy-dhcp true --foreman-proxy-tftp true --foreman-proxy-dns true \
      --foreman-foreman-url=https://puppet01.mycompany.local \
      --foreman-proxy-foreman-base-url=https://puppet01.mycompany.local \
      --foreman-proxy-oauth-consumer-key=sb8fZvY4Sfu4zpsjzcx7eh7MuHuXCqKq \
      --foreman-proxy-oauth-consumer-secret=vc8QkxEhmqXcRVJdhvKQZZmcQP9BNHVi \
      --foreman-proxy-ssl-key=/etc/foreman-proxy/ssl/private_keys/fmproxy.mycompany.local.pem \
      --foreman-proxy-ssl-cert=/etc/foreman-proxy/ssl/certs/fmproxy.mycompany.local.pem \
      --foreman-proxy-puppetca false \
      --foreman-proxy-ssl-ca=/etc/foreman-proxy/ssl/ca/ca_crt.pem \
      --foreman-proxy-dns-server=puppet01.mycompany.local \
      --foreman-proxy-dns-zone=ops.mycompany.local \
      --foreman-proxy-keyfile=/etc/foreman-proxy/rndc.key
    

    If we are lucky, foreman-proxy should now be installed 😉 (it wasn't for me, therefore I executed this command in various forms multiple times).
    It has also registered itself in the Foreman master server.

    The setup is not yet completely usable, because some settings are not correct yet.

    */etc/dhcp/dhcpd.conf*

    Update the routers setting according to your network setup (the 192.168.100.1 that Foreman entered is not correct most of the time).
    Don't forget to restart your dhcpd server after changing the settings (/etc/init.d/isc-dhcp-server restart).
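
    For reference, the relevant bit in dhcpd.conf looks roughly like this (a sketch; subnet and addresses are examples):

    subnet 192.168.100.0 netmask 255.255.255.0 {
        range 192.168.100.50 192.168.100.200;
        option routers 192.168.100.254;   # set this to your real gateway
    }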

    This should be enough to install/provision your servers via foreman. hfgl

    nock – Mocking HTTP requests in NodeJs

    When writing unit tests for a NodeJS app, you sometimes want to mock out HTTP requests. Nock (https://github.com/pgte/nock) is a really nice lib for doing just that.

    It’s installed in the usual way with npm:

    
    npm install nock
    
    

    and can be included in a test like this:

    
    var nock = require('nock');
    
    

    Now, let's say you want to mock out an API call to http://www.example.com/api/product/15, expecting some basic product data as a result:

    
    nock('http://www.example.com')
    .get('/api/product/15')
    .reply(200, {
        data: {
            id: '15',
            name: 'Flamethrower 5000',
            price: '5000'
        }
    });
    
    

    You can put this, for example, in the ‘beforeEach’-block of a mocha or jasmine test.
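
    To actually exercise the mock, a request to the faked URL (here via Node's built-in http module) gets intercepted by nock; a quick sketch:

    var http = require('http');

    http.get('http://www.example.com/api/product/15', function(res) {
        var body = '';
        res.on('data', function(chunk) { body += chunk; });
        res.on('end', function() {
            // nock answered this request, no network involved
            console.log(JSON.parse(body).data.name); // 'Flamethrower 5000'
        });
    });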

    The basic example above will get you pretty far, but as you can read in the excellent docs (https://github.com/pgte/nock), you can do all kinds of fancy stuff with nock such as:

    * get / post / put / delete
    * custom headers for requests and responses
    * http and https
    * response repetition
    * chaining
    * filtering & matching (e.g.: http://www.example.com/api/*)
    * logging
    * recording (this is really cool for complex responses, you can record and then play back http requests and responses)
    * … much much more!

    Pretty much everything you’ll ever use in your applications, you will be able to mock out with a clean and simple syntax. Really cool.

    Have fun nocking :)

    ~m

    ES6 Harmony Quick Development Setup

    ECMAScript 6, also called “Harmony”, is the next version of JavaScript, and it's packed with all kinds of awesome improvements and fancy features (GitHub page for ES6 features).

    Unfortunately, Harmony is not out yet and is in fact still a work in progress. Some features are already implemented in the newest versions of Chrome and Firefox, but it will be some time before we will be able to enjoy the full range of features in the proposal. There is, however, a way to test Harmony right now using the magical technique of transpiling, i.e. compiling JS files written in ES6 back to ES5 so they can be executed in all of today's browsers as well as in NodeJS.

    This post will show an easy way to get a development setup running for ES6, with tests and all, using Google's traceur compiler (traceur-compiler). There is also a grunt task for this compiler, which makes it easier to use (grunt-traceur).

    I expect you have npm, grunt and grunt-cli already installed; there are lots of tutorials on the web on how to do this.

    The first step is to install grunt-traceur using

    npm install grunt-traceur

    Then, create a Gruntfile.js and fill it with the following contents:

    
    module.exports = function(grunt) {
        grunt.initConfig({
            traceur: {
                options: {
                    'blockBinding': true 
                },
                custom: {
                    files: {
                        'build/all.js': ['js/**/*.js'] 
                    } 
                }
            }
        });
    
        grunt.loadNpmTasks('grunt-traceur');
    };
    

    The ‘blockBinding’ flag enables the use of ‘let’ and ‘const’, the new block-scoped variable definition mechanisms.

    Now, create a ‘js’ folder where we will put all of our JavaScript files. If you execute

    grunt traceur

    on your shell, grunt will take all .js files within the ‘js’ folder and compile them into the ‘all.js’ file within the ‘build’ folder.

    Important Note: If you want to use the compiled ‘all.js’ file, you need to include traceur-runtime.js before including the compiled sources, because some of the Harmony features need workarounds implemented in it. The file can be found in ‘node_modules/traceur/bin/‘ within the ‘grunt-traceur’ npm folder. You can also find it in the official GitHub repo mentioned above.
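
    In an HTML page that might look like this (paths assume the grunt-traceur layout mentioned above):

    <!-- the runtime must come before the compiled bundle -->
    <script src="node_modules/grunt-traceur/node_modules/traceur/bin/traceur-runtime.js"></script>
    <script src="build/all.js"></script>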

    Important Note 2: If you are using JSHint, you need to add the “esnext” flag to your .jshintrc to make it ES6-aware.
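
    A minimal .jshintrc for this setup might look like:

    {
        "esnext": true
    }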

    Now, we will rewrite the Conway’s Game of Life implementation (Conway’s Game of Life implementation) in ES6:

    index.js

    
    var getCellRepresentation = function(x, y) {
        return "x" + x + "y" + y; 
    };
     
    class Cell {
        constructor(x, y, alive) {
            this.x = x;
            this.y = y;
            this.alive = alive;
        }
    
        isAlive() {
            return this.alive; 
        }
    }
     
    class Board {
        constructor() {
            this.cells = {}; 
        }
    
        addCell(cell) {
            this.cells[getCellRepresentation(cell.x, cell.y)] = cell; 
        }
    
        getCellAt(x, y) {
            return this.cells[getCellRepresentation(x, y)]; 
        }
    
        getAliveNeighbors(cell) {
            let x = cell.x,
                y = cell.y,
                aliveCells = 0;
    
            for (let i = -1; i < 2; i++) {
                for(let j = -1; j < 2; j++) {
                    if(i === 0 && i === j) {
                        continue;
                    }
                    let currentCell = this.getCellAt(x + i, y + j);
    
                    if(currentCell && currentCell.isAlive()) {
                        aliveCells++;
                    }
                }
            }
            return aliveCells;
        }
    
        calculateNextState(cell) {
            let tempCell = new Cell(cell.x, cell.y, cell.alive),
                livingNeighbors = this.getAliveNeighbors(cell);
    
            if(cell.isAlive()) {
                if(livingNeighbors === 2 || livingNeighbors === 3) {
                    tempCell.alive = true;
                } else {
                    tempCell.alive = false;
                }
            } else {
                if(livingNeighbors === 3) {
                    tempCell.alive = true;
                }
            }
            return tempCell;
        }
    
        step() {
            let cells = this.cells,
                tempBoard = {},
                keys = Object.keys(cells);
    
            keys.forEach((c) => {
                let cell = this.cells[c],
                    newCell = this.calculateNextState(cell);
                tempBoard[c] = newCell;
            });
    
            this.cells = tempBoard;
        }
    }
    

    This implementation actually doesn’t even use many of the new fancy things, just classes, the fat-arrow syntax for nested closures (=>), and let instead of var. But especially the new classes show you how well structured your future modules will be, and that you don’t have to fight for nice code structure anymore 😉

    We also want to rewrite our Jasmine tests; we have lots of closures there, but not much else, so again, not many new features are used (code at the bottom of this post).

    The two files we created are index.js and spec.js in the ‘js’ folder. Now we use grunt-traceur to compile them to ES5 and run the tests using Karma (karma).

    Important to note here is that this won’t work in PhantomJS: even though the compiled output is ES5, PhantomJS’s JS engine is really old. We will have to use Chrome or something else as our test browser in karma.conf.js.

    
    module.exports = function(config) {
      config.set({
        basePath: '',
        frameworks: ['jasmine'],
        files: [
          'node_modules/grunt-traceur/node_modules/traceur/bin/traceur-runtime.js',
          'build/*.js'
        ],
        reporters: ['progress'],
        port: 9876,
        colors: true,
        logLevel: config.LOG_INFO,
        autoWatch:false, 
        browsers: ['Chrome'],
        captureTimeout: 60000,
        singleRun: false
      });
    };
    

    As mentioned above, we have to include the traceur-runtime here as well. Now we can go ‘karma start’ and ‘karma run’ and see our green bars and be happy!

    Alright, that’s it! A short glimpse into the world of ES6. Although it is a little cumbersome to get running, and there are still some issues with the tools we are used to in our daily development, I believe that starting to learn Harmony and getting good at it will pay off big within the next few years for every professional JavaScript developer and… it’s a LOT of fun 😀

    Code:
    spec.js

    
    /*global describe, it, expect, sinon, stub, assert, before, beforeEach, afterEach, Board, Cell */
    /*jshint expr:true */
    describe('Conways Game of Life', () => {
    
        it('is a sanity test', () => {
            expect(true).toBe(true);
        });
        let board, cell; 
    
        beforeEach(() => {
            board = new Board();
            cell = new Cell(1, 1, true);
            board.addCell(cell);
        });
    
        describe('addCell', () => {
    
            it('adds a cell to a board', () => {
                expect(board.cells.x1y1).toEqual(cell);
            });
    
        });
    
        describe('getCellAt', () => {
    
            it('returns the cell at the provided coordinates', () => {
                expect(board.getCellAt(1, 1)).toEqual(cell);
            });
    
        });
    
        describe('getAliveNeighbors', () => {
    
            it('returns 0 if there are no other cells', () => {
                expect(board.getAliveNeighbors(cell)).toEqual(0);
            });
    
            it('returns 1 if there is one alive cell next to the cell', () => {
                let neighborCell = new Cell(0, 1, true);
                board.addCell(neighborCell);
    
                expect(board.getAliveNeighbors(cell)).toEqual(1);
            });
    
            it('returns 8 if there are 8 neighbors available', () => {
                board.addCell(new Cell(0, 1, true));
                board.addCell(new Cell(0, 2, true));
                board.addCell(new Cell(0, 0, true));
                board.addCell(new Cell(2, 1, true));
                board.addCell(new Cell(1, 0, true));
                board.addCell(new Cell(2, 2, true));
                board.addCell(new Cell(1, 2, true));
                board.addCell(new Cell(2, 0, true));
    
                expect(board.getAliveNeighbors(cell)).toEqual(8);
            });
    
            it('returns 1 if there are 7 dead cells next available', () => {
                board.addCell(new Cell(0, 1, true));
                board.addCell(new Cell(0, 2, false));
                board.addCell(new Cell(0, 0, false));
                board.addCell(new Cell(2, 1, false));
                board.addCell(new Cell(1, 0, false));
                board.addCell(new Cell(2, 2, false));
                board.addCell(new Cell(1, 2, false));
                board.addCell(new Cell(2, 0, false));
    
                expect(board.getAliveNeighbors(cell)).toEqual(1);
            });
        });
    
        describe('calculateNextState', () => {
    
            it('dies if there are less than 2 living neighbors', () => {
                board.addCell(new Cell(0, 0, true));
    
                expect(board.calculateNextState(cell).isAlive()).toBe(false);
            });
    
            it('dies if there are more than 3 living neighbors', () => {
                board.addCell(new Cell(0, 1, true));
                board.addCell(new Cell(0, 2, true));
                board.addCell(new Cell(0, 0, true));
                board.addCell(new Cell(1, 2, true));
    
                expect(board.calculateNextState(cell).isAlive()).toBe(false);
            });
    
            it('lives if there are 2 or 3 living neighbors', () => {
                board.addCell(new Cell(0, 0, true));
                board.addCell(new Cell(0, 1, true));
    
                expect(board.calculateNextState(cell).isAlive()).toBe(true);
            });
    
            it('comes back to live if there are exactly 3 living neighbors ', () => {
                board.addCell(new Cell(0, 0, true));
                board.addCell(new Cell(0, 1, true));
                board.addCell(new Cell(0, 2, true));
                cell.alive = false; 
    
                expect(board.calculateNextState(cell).isAlive()).toBe(true);
            });
    
        });
    
        describe('step', () => {
    
            it('calculates the new state for all dying cells', () => {
                board.addCell(new Cell(0, 0, true));
    
                board.step();
    
                expect(board.getCellAt(0, 0).isAlive()).toBe(false);
                expect(board.getCellAt(1, 1).isAlive()).toBe(false);
            });
    
            it('calculates the new state for all living cells', () => {
                board.addCell(new Cell(0, 0, true));
                board.addCell(new Cell(1, 2, true));
    
                board.step();
    
                expect(board.getCellAt(0, 0).isAlive()).toBe(false);
                expect(board.getCellAt(1, 1).isAlive()).toBe(true);
            });
    
            it('calculates the new state correctly for many cells', () => {
                board.addCell(new Cell(0, 1, true));
                board.addCell(new Cell(0, 2, true));
                board.addCell(new Cell(0, 0, false));
                board.addCell(new Cell(2, 1, true));
                board.addCell(new Cell(1, 0, true));
                board.addCell(new Cell(2, 2, true));
                board.addCell(new Cell(1, 2, false));
                board.addCell(new Cell(2, 0, false));
                board.step();
    
                expect(board.getCellAt(1, 1).isAlive()).toBe(false);
                expect(board.getCellAt(0, 1).isAlive()).toBe(true);
                expect(board.getCellAt(2, 2).isAlive()).toBe(true);
            });
        });
    
        describe('Cell', () => {
    
            it('is either alive or dead', () => {
                expect(cell.isAlive()).toEqual(true);
            });
        });
    });
    

    ~m

    Angular Default Request Headers and Interceptors

    When building web applications with AngularJS, making requests using the $http service is usually a core concern. The documentation of $http is quite extensive. In this post, we will take a look at setting defaults for requests and at registering request interceptors.

    First, defaults. Defining a request default is fairly simple:

    
    .config(function($httpProvider) {
        $httpProvider.defaults.headers.common['X-Requested-With'] = true;
    });
    

    You can also do this directly on $http somewhere in your code, but if you have default headers which won’t change, you can just as well set them in your app’s configuration step.
    With these defaults you can do simple things, such as setting default headers for your requests, e.g. an Authorization header for basic HTTP auth:

    
    // only use it for get requests
    $http.defaults.headers.get.Authorization = 'Basic 1233225235'
    

    You can also do more advanced things, such as transforming requests and responses by default:

    
    $httpProvider.defaults.transformResponse = [...array of transformation functions...]
    

    The transformResponse and transformRequest defaults take an array of transformation functions which represent a transformation chain.
    This can be very useful if you have to transform the data you get from a server for every request and want to save some boilerplate code.
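
    For example, appending one more function to the default chain (a sketch; the 'payload' envelope is a made-up server convention):

    .config(function($httpProvider) {
        // runs after Angular's default JSON deserialization
        $httpProvider.defaults.transformResponse.push(function(data) {
            return (data && data.payload !== undefined) ? data.payload : data;
        });
    });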

    So basically you can save lots of code and reduce complexity by using $http’s defaults.

    However, you can’t be very specific with these defaults: if they are set, they are used for all requests. Sometimes you might want to apply certain transformations or headers to just a few select requests. Of course you could deactivate the defaults and reactivate them after the request, but this basically defeats the purpose of having them in the first place.

    The solution to this are $http’s interceptors. Interceptors can be registered for four different modes:
    * request – intercept an outgoing request
    * requestError – intercept an outgoing request error
    * response – intercept an incoming response
    * responseError – intercept an incoming error response

    The following is a simple example of an interceptor which, if it encounters a status code 403 (permission denied), will redirect the user to the login page.

    
    angular.module('InterceptorApp',[])
      .factory('httpErrorInterceptor', function($q, $location) {
        return {
          'responseError': function(rejection) {
            // permission denied, better login!
            if(rejection.status === 403) {
              $location.url('app/login');
            }
            return $q.reject(rejection); // keep the rejection propagating
          }
        };
      })
      .config(function($httpProvider) {
        $httpProvider.interceptors.push('httpErrorInterceptor');
    });
    

    The next example shows an interceptor which, when a URL with ‘product’ in it is requested, adds an Authorization header.

    
    angular.module('InterceptorApp',[])
      .factory('httpProductRequestTransformationInterceptor', function($q, $location) {
        return {
          'request': function(config) {
          if(config.url.indexOf('product') !== -1) {
                  config.headers.Authorization = 'BASIC 12345';
              }
              return config;
          }
        };
      })
      .config(function($httpProvider) {
    $httpProvider.interceptors.push('httpProductRequestTransformationInterceptor');
    });
    

    There are countless examples of using interceptors to implicitly add or remove configuration for certain requests as well as transform responses.

    Another thing that’s important to note is that interceptors can also be pushed anonymously, as a plain function, without registering a factory first. In the following example, when we receive a 404 error, we broadcast a ‘page-not-found’ event, which we can then catch somewhere else for error handling:

    
    angular.module('InterceptorApp',[])
      .config(function($httpProvider) {
        $httpProvider.interceptors.push(function($q, $rootScope) {
          return {
            'responseError': function(rejection) {
              if(rejection.status === 404) {
                $rootScope.$broadcast('page-not-found');
              }
              return $q.reject(rejection);
            }
          };
        });
    });
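
    Somewhere else in the app the event can then be handled, e.g. in a run block (a sketch; the error route is made up):

    angular.module('InterceptorApp')
      .run(function($rootScope, $location) {
        $rootScope.$on('page-not-found', function() {
          $location.url('app/not-found'); // show a 404 page, log, etc.
        });
      });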
    

    That’s it. Have fun playing around with these awesome AngularJS features :)

    ~m

    NodeJS mocking with proxyquire

    I recently stumbled upon proxyquire (https://github.com/thlorenz/proxyquire), which is a mocking framework for NodeJS’s require. It basically gives you the possibility to stub out externally required dependencies of your module under test.

    An example would be, if you use a library such as mongoose (https://github.com/learnboost/mongoose/), where you have code like the following in your modules:

    
    var mongoose = require('mongoose'),
        Product = mongoose.model('Product'),
        findAllProducts = function() {
            Product.find({}, function(err, products) {
                // do something with the products
            });
        };

    module.exports.findAllProducts = findAllProducts;
    

    So basically you just have a Mongo model, which is used to find some records in MongoDB. Of course, when writing a unit test, you don’t want the test to actually go to the database; instead you want to “fake” what Product.find() does. Proxyquire can help with that.

    It can be easily installed and set up using

    
    npm install --save-dev proxyquire
    

    in your project root.

    Then, just add

    
    var proxyquire = require('proxyquire');
    

    to the top of your test file.

    Mocking out a dependency is very straightforward as well; in the test:

    
      describe('productmodule', function() {
          var productmodule,
              mongooseStub;
     
          before(function() {
              mongooseStub = {
                  model: function() {
                      return {
                          find: function(query, callback) {
                              callback(); 
                          } 
                      };
                  } 
              };
     
              productmodule = proxyquire('../modules/product', {'mongoose': mongooseStub});
          });
    
      });
    

    What is important here is that the path to the module under test is the path from the test to the module, whereas the path of the stubbed-out module is the same path that is used inside the module under test. Every dependency which is not explicitly mocked out is the real thing.
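
    A test can then call the module as usual and never touch MongoDB; a minimal sketch on top of the setup above:

    it('finds all products without hitting the database', function() {
        // Product.find is the stub above, so this must not open a connection
        productmodule.findAllProducts();
    });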

    As for the mocking, you can do just about whatever you want here, even use something like SinonJS (http://sinonjs.org/) to spy on one of the mocked-out methods.

    If you’re interested in some more examples, there are plenty on the project’s GitHub page, which can be found here

    I hope this helps you test some of your awesome NodeJS apps 😉

    ~m

    Transforming a timer into a simple AngularJS directive

    In my last post Building a very simple timer in AngularJS, I created a simple timer which can now be transformed into a directive.

    Building Angular directives provides reusability and isolation of functionality. There are many different ways of building directives in Angular; I chose a very simple way, exposing an API through events and replacing my directive with the rendered timer.

    If you compare the result below with the non-directive version, you can see the benefit: suddenly the component is just placed in the view, and the controller is only responsible for interacting with it, not for implementing its functionality. Furthermore, the controller and view logic shrank down. This way the directive can grow in isolation, and additional features and configuration options can be added more easily and across all parts of the application that use it.

    A working version of the timer directive can be found here

    timer.js

    
    angular.module('TimerApp',[])
      .controller('TimerCtrl', function($scope, $timeout) {
        $scope.startTimer = function() {
            $scope.$broadcast('timer-start');
        };
    
        $scope.stopTimer = function() {
            $scope.$broadcast('timer-stop');
        };
    
        $scope.$on('timer-stopped', function(event, remaining) {
            if(remaining === 0) {
                console.log('your time ran out!');
            }
        });
      })
    
    .directive('timer', function($compile) {
      return {
        restrict: 'E',
        scope: false,
        replace: false,
        controller: function($scope, $element, $timeout) {
          var timeoutId = null;
           
          $scope.counter = 30;
          $scope.$on('timer-start', function() {
            $scope.counter = 30;
            start();
          });
           
          $scope.$on('timer-stop', function() {
            $scope.$broadcast('timer-stopped', $scope.counter);
            $scope.counter=30;        
            $timeout.cancel(timeoutId);
          });
           
          function start() {
            timeoutId = $timeout(onTimeout, 1000);  
          }
           
          function onTimeout() {
            if($scope.counter === 0) {
              $scope.$broadcast('timer-stopped', 0);
              $timeout.cancel(timeoutId);
              return;
            }
            $scope.counter--;
            timeoutId = $timeout(onTimeout, 1000);
          }
     
          var elem = $compile('<p>{{counter}}</p>')($scope);
          $element.append(elem);
        }
      };
    }); 
    

    timer.html

    
       <timer></timer>
       <button ng-click='startTimer()'>Start</button> | <button ng-click='stopTimer()'>Stop</button>
    

    That’s it! This method of first implementing something simple that works and then transforming it into a reusable component can be used to great effect with AngularJS. There is a lot more to directives than has been covered in this post and I will try to write some more on the topic, touching on some more advanced topics in the future, so stay tuned! 😉

    ~m