Aaron · Monday, October 31st 2016


A realtime multiplayer game created using HTML5, JavaScript and Node.js


Play Online Now


You play as a sheep on either the Red or Blue team. Your objective is to capture the opposing team's flag 10 times before the timer runs out, while preventing them from capturing yours.


At regular intervals throughout the match, power-up crates will drop. Pick one up and it'll give you super speed, more health or a better weapon.



Desktop controls

  • Move with W, A, S, D
  • Aim with the mouse
  • Click to fire (you can fire in any direction)


Mobile controls

  • Touch and drag to move
  • Tap the fire button to shoot

How it works

Multiplayer (Networking)

Client/server communication diagram: WebSockets (Socket.IO) carry all player input. In the diagram above, player movements are sent to the server, and the server then sends all movements back to the clients.


Processing player inputs on the server ensures that players cannot cheat by moving their character directly or by falsely informing the server that they have killed another player. It does, however, mean that every input must be sent to the server, processed, and the resulting update sent back to the client before anything can happen on screen (60 times a second). This is not a problem on fast connections, but on slower or high-latency connections it can make the game difficult to play.
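As a sketch of this server-authoritative pattern (the names here are illustrative, not the game's actual code): the client sends only button states, and the server computes the resulting movement, so a client can never claim a false position or kill.

```javascript
// Hypothetical server-side input handler: only whitelisted inputs
// can affect the world state; the server owns all positions.
const PLAYER_SPEED = 5;

function applyInput(player, input) {
  if (input.left)  player.x -= PLAYER_SPEED;
  if (input.right) player.x += PLAYER_SPEED;
  if (input.up)    player.y -= PLAYER_SPEED;
  if (input.down)  player.y += PLAYER_SPEED;
}

const player = { x: 100, y: 100 };
applyInput(player, { right: true, up: true });
console.log(player); // { x: 105, y: 95 }
```

In a real setup each input event arriving over the websocket would be fed through a function like this, and the resulting authoritative state broadcast back to every client 60 times a second.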

Mobile input

Mobile input screenshot: on mobile, an alternative touch-friendly overlay is presented to the user.


Physics

Physics runs on the server using p2.js, which ensures that all clients show the same representation of the world at the same time. Essentially, the clients simply render what they are sent. This makes collision calculations between projectiles and players very accurate, at the cost of increased bandwidth requirements.
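The server's simulation loop boils down to stepping the world at a fixed rate and snapshotting the result for clients. A rough sketch (p2.js itself is omitted so the sketch stays dependency-free; the real game would call a p2.js world's step method instead of this toy integrator):

```javascript
// Step the world at a fixed 60Hz timestep so every client sees the
// same simulation regardless of their own frame rate.
const STEP = 1 / 60;

function stepWorld(world, dt) {
  // A p2.js world step would replace this naive integration.
  for (const body of world.bodies) {
    body.x += body.vx * dt;
    body.y += body.vy * dt;
  }
}

const world = { bodies: [{ x: 0, y: 0, vx: 60, vy: 0 }] };
stepWorld(world, STEP); // after one step the body has moved ~1 unit
console.log(world.bodies[0].x);
```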


Rendering

Using the PIXI.js library, combined with the pixi-particles particle effects library, we render to an HTML5 canvas element or WebGL depending on browser support. This allows us to create impressive effects and highly dynamic environments.


Levels

Using JSON files that store the initial positions of all level items and their behaviours allows us to load different levels simply by changing which JSON file is loaded. The following items are currently supported:

  • Crates
  • Floors
  • Spawn locations
  • Available powerups
  • Background
  • Flag locations
  • Hazard areas
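A hypothetical level file covering those items might look like this (the key names are invented for illustration; the game's real schema may differ):

```javascript
// Parse a level definition and read out a spawn point.
const level = JSON.parse(`{
  "background": "grass.png",
  "spawns":   { "red": [50, 300], "blue": [750, 300] },
  "flags":    { "red": [20, 300], "blue": [780, 300] },
  "crates":   [[200, 100], [600, 100]],
  "powerups": ["speed", "health", "weapon"],
  "hazards":  [{ "x": 400, "y": 550, "width": 100, "height": 50 }]
}`);

console.log(level.spawns.red);  // [ 50, 300 ]
console.log(level.powerups[0]); // speed
```

Swapping levels is then just a matter of loading a different file.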


Games over websockets

Using websockets for online gaming is not ideal because they use TCP, which means you need a low-latency connection with limited packet loss. Otherwise you'll start to see freezes followed by a kind of fast-forwarding effect. This is caused by the way TCP works: if one packet is dropped, all subsequently received packets are held back until the dropped packet can be retransmitted.

TCP lost packet diagram

A better solution would be to use UDP; however, at present UDP is not supported in web browsers. It is supported by Node.js, Electron and NW.js. With UDP, packets are not ordered or retransmitted: if a packet is lost, you have to handle it yourself, as shown below:

UDP lost packet diagram

This suits games and video well, because by the time a lost packet could be retransmitted, the data it contained is probably out of date. In these situations it's best to just wait for the next packet.
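A sketch of handling this yourself in Node.js (which does support UDP via its dgram module): tag each packet with a sequence number and drop anything older than what you've already processed. The framing below is invented for illustration.

```javascript
// Returns a handler that ignores stale or duplicate packets by
// comparing a 4-byte big-endian sequence number at the packet start.
function makeReceiver(onState) {
  let lastSeq = -1;
  return (buf) => {
    const seq = buf.readUInt32BE(0);
    if (seq <= lastSeq) return false; // out of date: skip it
    lastSeq = seq;
    onState(buf.slice(4));            // the rest is the payload
    return true;
  };
}

// Simulate packets arriving out of order.
function packet(seq) {
  const buf = Buffer.alloc(4);
  buf.writeUInt32BE(seq, 0);
  return buf;
}

const receive = makeReceiver(() => {});
console.log(receive(packet(1))); // true  – first packet
console.log(receive(packet(3))); // true  – newer, packet 2 was lost
console.log(receive(packet(2))); // false – late arrival, discarded
```

In a real game this handler would be wired to dgram's 'message' event.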

Understanding the JavaScript event loop

Aaron · Sunday, March 13th 2016

For many people the event loop in JavaScript is something of a mystery. To understand the event loop, we first need to understand why JavaScript has one in the first place.

Why JavaScript has an event loop

JavaScript is single threaded, but what differentiates it from other single-threaded languages is that while your code always runs in a single thread, I/O operations run on other threads and fire an event for your code to act on the result. While an I/O operation is in progress, your code can do something else, which makes JavaScript a very efficient language.
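A small sketch of this in action: synchronous code always runs to completion before any queued callback, even one scheduled with a 0 ms delay.

```javascript
const order = [];

setTimeout(() => order.push('timeout'), 0); // queued for the event loop

order.push('sync 1'); // runs immediately on the main thread
order.push('sync 2');

// By the time this callback fires, the synchronous pushes are done
// and the first timeout has already run.
setTimeout(() => {
  console.log(order); // [ 'sync 1', 'sync 2', 'timeout' ]
}, 0);
```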

The event loop

So now we know why JavaScript has an event loop, let's go over exactly what the event loop is. One of the best ways to think of it is to imagine you have an island, and on that island a minion who will do whatever you ask him to. However, this minion can only do one thing at a time, so you build a terminal on the island and enter into it a list of things for your minion to do. Whenever your minion has nothing to do, he goes to the terminal, presses a button, and it prints out one task from the list. Your minion then reads that task and goes off to complete it.

So let's say you gave your minion this list of tasks:

  • take out the rubbish
  • get the shopping
  • clean the house
  • mow the lawn

The first thing your minion does is go to the terminal and press the button to print off the first task - to take out the rubbish. Your minion reads the task and then proceeds to go and get the rubbish and take it down to the dock where it'll get picked up later by a rubbish boat.

Once he's done this he has nothing to do so he goes back to the terminal, presses the button and gets another task - to get the shopping. He reads the new task, goes into the house and compiles a list of required shopping. He then takes it down to the dock and puts the list into a box where it'll be picked up and later on the shopping will be delivered to the dock. At this point, once the list is delivered to the box, your minion has nothing to do.

So, he goes back to the terminal and prints another task. This time he needs to clean the house, easy enough, he goes off and cleans the house. Once he's done he comes back to the terminal and prints another task. While he was cleaning the house the shopping was delivered to the dock and a new task was added to the list - to collect the shopping and unpack it. But when he prints the next task it says to mow the lawn, in this case there's an automatic lawn mower. So he goes and sets the lawn mower going. Then, as he has nothing to do, goes back to the terminal and presses the button.

The task is to get the shopping so he heads down to the dock to collect the shopping and unpack it. While he's doing this the lawn mower finishes and needs to be put away. This creates a new task on the terminal - to put the lawn mower away. When your minion goes back to the terminal he picks up the task to put the lawn mower away, which, he does.

In this analogy the terminal is the event loop, your minion is the main thread, and the lawnmower or the shopping delivery at the dock are asynchronous I/O operations. You could replace the shopping delivery with an HTTP request and the lawnmower with disk I/O.

This makes the JavaScript model one of the most efficient for high-throughput processing, as anything blocking is handled on a different thread automatically for you. The one downside is that the only thing that can slow your program down is your own code. JavaScript is not normally suited to computationally intensive work, as that would block the event loop; fortunately, that's what web workers are for.

Entering User Mode

Aaron · Sunday, March 6th 2016

If you are creating your own operating system or are interested in how to get from Ring 0 (Kernel mode) to Ring 3 (user mode) then the following tutorial is for you.

Required Knowledge

In order to follow this tutorial, a knowledge of operating systems is assumed. In addition, your kernel will need a working GDT, IDT, and a video driver of some description.

What's covered?

This tutorial covers getting to user mode (ring 3) but does not yet cover system calls, so you will only be able to switch to ring 3, not back. In addition to the ring switch, we will cover creating and installing a TSS.

Creating the TSS (Task State Segment)

First off we need to create a TSS (Task State Segment). This is just a special entry in the GDT that allows the CPU to jump back to ring 0. For now you can use the TSS below if you don't have one.

void install_tss(int cpu_no){
    // Zero the structure, then fill in only the values we need
    // (memset comes from your kernel's string.h)
    memset((void*)&sys_tss, 0, sizeof(tss_struct));
    // Kernel data segment selector, used when re-entering ring 0
    sys_tss.ss0 = 0x10;
    // Set the I/O bitmap base beyond the TSS limit (i.e. no I/O bitmap)
    sys_tss.iomap = (unsigned short) sizeof(tss_struct);
}

And the TSS structure that you will need is: (this should go in a header file e.g. tss.h)

typedef volatile struct strtss{
    unsigned short   link;
    unsigned short   link_h;  
    unsigned long   esp0;
    unsigned short   ss0;
    unsigned short   ss0_h;  
    unsigned long   esp1;
    unsigned short   ss1;
    unsigned short   ss1_h;  
    unsigned long   esp2;
    unsigned short   ss2;
    unsigned short   ss2_h;  
    unsigned long   cr3;
    unsigned long   eip;
    unsigned long   eflags;  
    unsigned long   eax;
    unsigned long   ecx; 
    unsigned long   edx;
    unsigned long    ebx;  
    unsigned long   esp;
    unsigned long   ebp;  
    unsigned long   esi;
    unsigned long   edi;  
    unsigned short   es;
    unsigned short   es_h;  
    unsigned short   cs;
    unsigned short   cs_h;  
    unsigned short   ss;
    unsigned short   ss_h;  
    unsigned short   ds;
    unsigned short   ds_h;  
    unsigned short   fs;
    unsigned short   fs_h;  
    unsigned short   gs;
    unsigned short   gs_h;  
    unsigned short   ldt;
    unsigned short   ldt_h;  
    unsigned short   trap;
    unsigned short   iomap;  
}__attribute__((packed)) tss_struct;  

tss_struct sys_tss; //Define the TSS as a global structure

You will then need to set up a user mode code segment and data segment in the GDT, like so:

gdt_set_gate(3, 0, 0xFFFFFFFF, 0xFA, 0xCF); // User mode code segment
gdt_set_gate(4, 0, 0xFFFFFFFF, 0xF2, 0xCF); // User mode data segment

Then set the GDT entry for the TSS:

unsigned long addr = (unsigned long)&sys_tss;
int size = sizeof(tss_struct) + 1;
gdt_set_gate(5, addr, addr + size, 0x89, 0xCF);

Getting to user mode

Now you should have a valid TSS and a running kernel so we can now make the jump to User Mode.

void switch_to_user_mode() {
    // Build the stack frame iret expects: SS, ESP, EFLAGS, CS, EIP.
    // 0x200 sets the IF bit in the pushed EFLAGS so interrupts are
    // re-enabled when iret completes.
    asm volatile("  \
        cli; \
        mov $0x23, %ax; \
        mov %ax, %ds; \
        mov %ax, %es; \
        mov %ax, %fs; \
        mov %ax, %gs; \
        mov %esp, %eax; \
        pushl $0x23; \
        pushl %eax; \
        pushf; \
        pop %eax; \
        or $0x200, %eax; \
        push %eax; \
        pushl $0x1B; \
        push $1f; \
        iret; \
      1: \
        ");
}

All this code does is set up the CPU for jumping into user mode and then jump to the label at the end of the block. Once it has done that, you are in user mode! (0x23 and 0x1B are the user data and code segment selectors from the GDT entries above, with the requested privilege level set to 3.) However, any interrupt will cause an exception in your kernel at some level, depending on how it is set up.

What's Next?

You may wish to add some system calls so that your user mode code can put text on the screen or even take user input.

ES6, ECMAScript2015

Aaron · Friday, March 4th 2016

What is it?

  • The sixth major release of the ECMAScript (JavaScript) specification
    • Also known as ES6, Harmony, ES2015, ECMAScript 6
    • First of a new living standard for JavaScript
    • ECMAScript 2016 (ES7) should be finalised in June
    • The first changes to the ECMA language standard since 2009

How often should we expect changes to the standard?

The aim of the ECMAScript living standard is to release finalised language features every 12 months. These releases should be substantially smaller than the changes made in ES6, as ES6 represents six years of standards work.

Whats New?

ES6 adds a number of new features to the JavaScript language syntax, some of which are listed below; most of them can be used today through Babel.

  • Arrow Functions
  • Lexical this
  • Block Scoping (let + const)
  • Classes
  • Modules
  • Default + Rest + Spread for function arguments
  • Iterators + For..Of
  • Generators
  • Full Unicode Support
  • Map + Set + WeakMap + WeakSet
  • Proxies
  • Subclassable Builtins
  • Promises
  • String Improvements
  • Array Improvements
  • ...And more

Arrow Functions

The only difference between an arrow function and a traditionally declared function is that an arrow function shares the scope in which it was defined: it has access to all of the local variables where it was defined, and this inside an arrow function refers to the this of the defining scope.

  • Very similar syntax to Java and C#
    • () => {}
    • (arg1) => {}
    • x => {}
    • x => statement
  • Syntactic sugar and shorthand for
    • function () {}
let a = [ "hello", "hi", "hey" ];

// Traditional anonymous function
var a2 = { return s.length; });

// Arrow function shorthand
let a3 = => s.length);

Lexical this

This works with classes and arrow functions to allow more intuitive use of the this keyword.

  • Inside classes this refers to the class
  • Inside Objects this is the object
  • Inside arrow functions this refers to the definition context
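A small sketch (names invented) showing the difference in practice:

```javascript
const counter = {
  count: 0,
  addAll(values) {
    // The arrow function inherits `this` from addAll, so `this.count`
    // is the counter object's property; a plain `function` callback
    // would get its own `this` here instead.
    values.forEach(() => { this.count += 1; });
    return this.count;
  }
};

console.log(counter.addAll([10, 20, 30])); // 3
```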


Classes

Using classes provides a simpler way to perform prototype-based inheritance and gives a much clearer view of the code's intent.

  • Similar to Java and other languages
  • Defined using the class keyword
  • Can extend another class
  • Has a constructor and static or instance variables or functions
class Polygon {
  constructor(height, width) {
    this.height = height;
    this.width = width;
  }
}

Let and Const

This is probably one of the largest changes, as it allows you to completely replace the var keyword with either let for normal local variables or const for read-only variables. An emerging best practice is to stop using var altogether and use let and const appropriately, as their scoping is simpler.

  • Replace var
  • let
    • The new var
    • Block scoped variable
  • const
    • single-assignment
    • read only
    • Cannot be used before assignment
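A quick sketch of the scoping difference:

```javascript
if (true) {
  var a = 1; // function (or file) scoped – leaks out of the block
  let b = 2; // block scoped – gone after the closing brace
}

console.log(typeof a); // number
console.log(typeof b); // undefined

const c = 3;
// c = 4; // would throw: Assignment to constant variable
```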


Generators

These allow you to write an iterative algorithm in a single function that maintains its own state across iterations. For example:

function* idMaker(){
  let index = 0;
  while (true)
    yield index++;
}

let gen = idMaker();

console.log(; // 0
console.log(; // 1
console.log(; // 2
// ...
  • Yielding functions
  • defined as function*()
  • must contain a yield statement instead of return
  • good for creating infinite sequences or doing calculations on each iteration


Modules

JavaScript modules are distinct from classes, as a module may contain one or more classes that may or may not be exported.

  • Allow the import of other JavaScript files or parts of them
  • Allow the export of an interface from a JavaScript file
  • Defined by the file they are in, i.e. you cannot have two modules in one file

Module Example:

// Inside module.js
export class Polygon {
  constructor(height, width) {
    this.height = height;
    this.width = width;
  }
}

export function something(){
  /* ... */
}

Import Example:

//Imports only the polygon class
import { Polygon } from "module.js";


Default + Rest + Spread

This covers a wide range of related improvements, all concerned with the use and handling of variables and assignments. They allow you to build more flexible, self-describing methods.

Default Parameters

Default parameters allow you to specify a default value to be used if the argument is not supplied by the caller.

function test(required, controls = {}, name = "test", value = 4){ /*...*/ }

Rest Arguments

This allows functions to take any number of arguments and aggregate them into an array, meaning you no longer have to use the arguments object inside a function. Note that a function can only have a single rest argument, and it must be the last argument.

function test(a, b, ...c){
    return (a + b) * c.length;
}

Spread Operator

This is the counterpart of rest arguments: it expands an array in place, for example when building another array.

let b = [ "bob", "jones" ];
let a = [ 1, 2, ...b ]; // [ 1, 2, "bob", "jones"]

For Loops

For loops can take advantage of destructuring through the new for...of loop, which allows you to iterate over an array or iterator, assigning each item to a local variable.

//Take advantage of destructuring to map two variables from a key value pair
for (let [name, builder] of Object.entries(models))

//Basic for of loop
for (let x of array)

String Improvements

Strings can now have template literals, removing some of the need for templating libraries like Handlebars for basic interpolation; multiline strings are also supported. To take advantage of the new string features, you'll need to define your strings using backticks.

  • Template literals
  • Strings enclosed in backticks ` `
    • Multiline
    • Interpolation ${identity}
    • String.raw leaves escape sequences unprocessed
let customer = `Bob`;
let welcome = `Welcome ${customer}`;

let rawString = String.raw`Welcome ${customer}`; // Welcome Bob

How can we use ES6 today?

Using ES6 today requires a transpiler. I'd recommend Babel, as it closely follows the ECMAScript standards and is committed to supporting new standards using the same syntax as the finalised standards. Babel also lets you enable or disable individual transformations via a config file, which means that as browsers support more of ES6 you can disable increasing amounts of the transpilation and benefit from native implementations.

The best way to use Babel is to integrate it with your build process; you can do this with gulp using the gulp-babel plugin. There are also plugins for most other build systems.

  • Other Transpilers
    • Typescript
    • Traceur

Why use Babel?

  • Focused on ES6 standards
  • Transformers can be enabled or disabled as browsers introduce native support
  • Starting to support ES7 standards


Scaling with Docker Compose

Aaron · Tuesday, February 23rd 2016

We'll be using Docker, Docker Compose, nginx, Consul, consul-template and Node.js to build a load-balanced stack that can be started on any server with Docker installed, just by uploading a few files and running docker-compose up -d.

Setting up Docker compose

Start by creating a file called docker-compose.yml; then we'll add Consul to it by adding the following code.

consul:
  image: progrium/consul:latest
  command: -server -bootstrap -ui-dir /ui
  hostname: consul
    - "8400:8400"
    - "8500:8500"
    - "8600:53/udp"
    - /mnt:/data

Consul is a service that provides automatic service discovery for any other service you run on your server. We're going to use Consul to tell nginx where our web servers are so that it can load balance across them.

If you now run docker-compose up -d, you will have a single-node Consul server running. You can test this by navigating to http://<your ip>:8500/ui.

As you can see, there are not currently any services registered on this node (not even Consul), so let's fix that now by adding Registrator to your docker-compose.yml file like so.

registrator:
  image: gliderlabs/registrator:latest
  command: -internal consul://consul:8500
    - "/var/run/docker.sock:/tmp/docker.sock"
    - consul

Registrator is a service that automatically finds all Docker containers running on the current server and registers them with Consul (including Consul itself). Note the -internal option we pass in: it's very important, as it lets Registrator register the details of services that are not exposed publicly but are exposed through Docker's link functionality. This means that when we run our web services, they will only be accessible through the publicly exposed nginx load balancer. This becomes even more important when you add database servers, as it means the database server does not need to be publicly accessible.

Now if you run docker-compose up -d again and navigate to http://<your ip>:8500/ui in your web browser, you'll see that the Consul service is listed :)

Configuring nginx

Next we need to configure nginx as our load balancer. To do this, our nginx config needs to be updated whenever a web service is added to or removed from Consul. We'll use consul-template for this, but there is no reliable nginx image that includes consul-template; fortunately, it's not hard to make our own. First, create a folder called lb; this will house all the files we need to build our nginx and consul-template load balancer.

Next create a file called Dockerfile and add the following to it.

#Get the latest nginx image
FROM nginx:latest

#nginx does not include unzip so we need to add it
COPY unzip /usr/bin

#Install Consul Template
ADD /usr/bin/
RUN unzip /usr/bin/ -d /usr/local/bin

#Setup Consul Template Files
RUN mkdir /etc/consul-templates
ENV CT_FILE /etc/consul-templates/nginx.conf

#Setup Nginx File
ENV NX_FILE /etc/nginx/conf.d/app.conf

#Default Variables
ENV CONSUL consul:8500
ENV SERVICE consul-8500

# Command will
# 1. Write Consul Template File
# 2. Start Nginx
# 3. Start Consul Template

CMD /usr/sbin/nginx -c /etc/nginx/nginx.conf \
& consul-template \
  -consul=$CONSUL \
  -template "/etc/consul-templates/nginx.ctmpl:$NX_FILE:/usr/sbin/nginx -s reload";

That looks like a lot, but all we're doing is instructing Docker how to build our load balancer container. First we take the latest official nginx image, then add the unzip command (it isn't included, and we need it to extract consul-template). Next we install consul-template by adding it to the image from the consul-template URL. We then set some environment variables pointing at the nginx and consul-template config files, and finally tell Docker which command should run when the container starts. That's it.

Now, it looks like we haven't actually configured nginx or consul-template, and we haven't; there is a good reason for this. The best approach when services inside a Docker container need config files is to provide them via Docker's volumes functionality, as this lets you keep the config files outside the container (where you can modify them easily).

So now it's time to create the config files. Create a new folder called templates inside your lb folder (where your Dockerfile is), then create a new file inside it called lb.ctmpl. This will form your consul-template config, which in turn will be used to generate your nginx config.

Now you've got your lb.ctmpl file, open it and add the following to it.

server {
  listen 65333;

  location / {
    types {
      application/json json;
    }
    default_type "application/json";
    return 501 '{
      "success": false,
      "deploy": false,
      "status": 501,
      "body": {
        "message": "No available upstream servers at current route from consul"
      }
    }';
  }
}

{{range services}}
  {{if .Tags.Contains "production"}}
  upstream {{.Name}} {
    least_conn;
    {{range service .Name}}
    server {{.Address}}:{{.Port}} max_fails=3 fail_timeout=60 weight=1;
    {{end}}
  }
  {{end}}
{{end}}

server {
  listen 80;
  {{if service "web"}}
  location / {
    proxy_pass http://web;
  }
  {{end}}
}
That looks a bit complicated, but really all we are doing is defining a default fallback server in case there are no web services available, which is just standard nginx config. Then the magic happens: we tell consul-template to find all the services registered with Consul, filter that list to services tagged production, and for each of them add an upstream with the same name as the Consul service (web in our case) that load balances to the server with the least connections. We also set some extra failover settings, and define a server on port 80 that proxies all requests to the web upstream.

Next we need to add our load balancer to our docker-compose.yml so add the following.

lb:
  build: ./lb
    - consul
    - "80:80"
    - ./lb/templates/lb.ctmpl:/etc/consul-templates/nginx.ctmpl

This is slightly different from the other services we've added to the docker-compose.yml so far: this time we don't have a pre-built image, so we tell Docker to build one from the Dockerfile we created earlier. We link this container to consul, as it needs access to the Consul container, and we expose port 80 publicly so that you can visit the site. Then some magic happens: we use volumes to map the config file in our lb folder to the location in the container where we said the config file would be ;)

We then define a set of health checks that Consul can use to determine whether this service is healthy. In this case we're telling Consul to hit the root of the site over HTTP on port 80 every 5 seconds; if it takes more than 1 second to reply, or returns anything but a 200 response, the service is marked unhealthy. I'll write another post in the future showing how to use this information to restart unhealthy services.
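With Registrator, checks like these are declared as environment variables on the service being registered. As a sketch (the variable names below follow Registrator's Consul-backend convention; verify them against its docs):

```yaml
  environment:
    SERVICE_CHECK_HTTP: /
    SERVICE_CHECK_INTERVAL: 5s
    SERVICE_CHECK_TIMEOUT: 1s
```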

Adding our web service

Up till now we've created a service-discovery service that can discover itself, and a load balancer, but we still can't actually navigate to a web service. Next we'll add the web service, giving you a fully load-balanced and scalable web service that you can start on any Docker host, regardless of hosting provider.

For simplicity we're going to use a pre-built web service container called tutum/hello-world. It simply serves a web page saying Hello World, but it'll do for our purposes. You could of course replace this container with your own (I'll cover creating your own container in another article).

Add the following to your docker-compose.yml.

web:
  image: tutum/hello-world
    - 80
    - consul
    SERVICE_TAGS: production

You'll also need to modify the load balancer config (lb) like so.

lb:
  build: ./lb
    - consul
    - web
    - "80:80"
    - ./lb/templates/lb.ctmpl:/etc/consul-templates/nginx.ctmpl

As you can see we've added a link to web.

Now if you run docker-compose up -d and navigate to http://<your ip>/ you should see a page with hello world on it.

Scaling your service

To scale your new service or any docker-compose service simply run

docker-compose scale web=10

where web is the name of the service you want to scale and the number is the number of containers you want. This can be more or fewer than the currently running containers; if it is fewer, docker-compose will destroy the excess containers.