Comparing Merge vs Rebase for GitHub Desktop

Recently I’ve been using and recommending the GitHub Desktop client for non-programmers who need to interact with a Git repository. However, I ran into some interesting issues with the end result when multiple users are participating. This blog post takes a methodical walk through the common actions and how they behave when ‘merge’ or ‘rebase’ is the default.

The process I’ll follow will be to have two separate projects (test-merge and test-rebase), each being edited by two users (Mike and Sonja). The actions that will be tested are:

  • Add a new file
  • Change an independent file, commit, pull, push
  • Change an independent file, pull, commit, push
  • Change the same file, commit, pull, push
  • Change the same file, pull, commit, push
  • Delete a file

Add a New File

Initially we’ll start with Mike adding a file, committing and pushing to GitHub. Then Sonja will clone the repo, add a second file, commit and push. Mike will then pull.

As expected, both sides end up with both files, and the repo looks good.

Change an independent file, commit, pull, push

In this case, Mike will change his file and commit it to his local repository. Then Sonja will change her file, commit it to her local repository, and push the changes. Finally, Mike will pull the changes in and push his own change.

Interestingly, this generated a special merge commit. Because Mike attempted to push his changes after Sonja, GitHub Desktop prompted with a message indicating that Mike’s local repository was behind and needed to be updated.

He then needed to click the ‘Pull’ button to bring Sonja’s changes into his local repository. A new change was automatically created that merged Sonja’s ‘newer’ changes into his repository. Mike could then click ‘Push’ to send the changes to GitHub. Note that there are 2 changes (Mike’s original change and the merged change from Sonja) that Mike is sending. That’s reflected in the Graph that shows Sonja’s changes (blue line) being merged together.

Now, if instead, we do the process with a rebase (git config pull.rebase true), then the experience is almost identical. The only perceived difference is that Mike sees a “Pull with rebase” button instead of a “Pull” button. Once Mike does the Pull and Push, the history looks much cleaner, since Mike’s latest changes are rebased on top of Sonja’s changes.
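The same scenario can be reproduced outside of GitHub Desktop. Below is a CLI sketch of the rebase case (the repository names, file names and commit messages are all made up for illustration):

```shell
set -e
dir=$(mktemp -d); cd "$dir"
export GIT_AUTHOR_NAME=demo GIT_AUTHOR_EMAIL=demo@example.com \
       GIT_COMMITTER_NAME=demo GIT_COMMITTER_EMAIL=demo@example.com

# A stand-in for the GitHub repo, plus one clone per user
git init -q --bare origin.git
git clone -q origin.git mike
git clone -q origin.git sonja
branch=$(git -C mike symbolic-ref --short HEAD)

# Shared starting point
git -C mike commit -q --allow-empty -m "initial"
git -C mike push -q origin "$branch"
git -C sonja pull -q origin "$branch"

# Mike commits locally but does not push yet
echo mike > mike/mike.txt
git -C mike add . && git -C mike commit -q -m "Mike's change"

# Sonja commits and pushes first, so Mike is now behind
echo sonja > sonja/sonja.txt
git -C sonja add . && git -C sonja commit -q -m "Sonja's change"
git -C sonja push -q origin "$branch"

# With pull.rebase=true, Mike's commit is replayed on top of Sonja's
git -C mike -c pull.rebase=true pull -q origin "$branch"
git -C mike push -q origin "$branch"
git -C mike log --oneline --graph
```

With the default (merge) behavior instead of `-c pull.rebase=true`, the final log would show an extra merge commit joining the two lines of history.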

Change an independent file, pull, commit, push

This time, the Mike user will make the change, but won’t commit before pulling Sonja’s changes.

In the case of the merge, everything worked as expected. No special commits were made.

In the case of the rebase, when Mike went to pull in Sonja’s changes, an error was displayed:

Mike was able to click ‘Stash changes and continue’, and the pull then succeeded. Mike then had to click ‘Stash’ and ‘Restore’ to bring his uncommitted changes back. As expected, the graph looks fine:
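For the curious, what GitHub Desktop is doing here maps onto the plain git stash commands. A self-contained sketch (names and messages are made up):

```shell
set -e
d=$(mktemp -d); cd "$d"
export GIT_AUTHOR_NAME=demo GIT_AUTHOR_EMAIL=demo@example.com \
       GIT_COMMITTER_NAME=demo GIT_COMMITTER_EMAIL=demo@example.com

git init -q --bare origin.git
git clone -q origin.git mike
git clone -q origin.git sonja
branch=$(git -C mike symbolic-ref --short HEAD)

git -C mike commit -q --allow-empty -m "initial"
git -C mike push -q origin "$branch"
git -C sonja pull -q origin "$branch"

# Mike edits a file but does NOT commit; meanwhile Sonja pushes a change
echo draft > mike/notes.txt
git -C sonja commit -q --allow-empty -m "Sonja's change"
git -C sonja push -q origin "$branch"

# The equivalent of 'Stash changes and continue', then restoring afterwards
git -C mike stash push -q --include-untracked
git -C mike pull -q --rebase origin "$branch"
git -C mike stash pop -q
```

After the pop, Mike's uncommitted edit is back on top of the updated branch, exactly as the Desktop flow leaves it.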

While this provides a better Git history, it is much more complicated for the user. Fortunately, there is another Git option that fixes it. Setting git config rebase.autoStash true performs the stash process transparently. The end result is that Mike just clicks Pull, Commit, Push normally, and doesn’t notice that his changes were stashed and unstashed around the pull.

However, one important point is that the stashing can cause difficult merge conflicts depending on the type of file and the type of changes.

Change the same file, commit, pull, push

In this scenario, Mike will change his file, commit, then Sonja will change Mike’s file, commit and push, and finally, Mike will pull Sonja’s changes and push. To make the conflict easier, Sonja’s change will be 100 lines away from Mike’s.

In the case of the Merge, we again see the dirty error, then a pull, and a push. Since Sonja’s change was far enough away, there was no merge conflict. The end graph again shows the merge commit.

In the case of the Rebase, we see the same error; the Pull with rebase and Push worked, and the graph is clean:

Change the same file, pull, commit, push

And finally, Mike will make a change, Sonja will make a change, Sonja will push, Mike will pull, then Mike will commit and push.

In the case of the Merge, we again see the dirty error, then a pull, with a stash.

And with the rebase, we get the same messages, and a clean graph.


For certain cases, most notably when there is only one branch, multiple people are working on that branch, and the same files are not often changed by multiple people, I would recommend using a rebase with autostash. This can be set in your Git repo by typing the following (if starting in GitHub Desktop, click Repository -> Open in Command Prompt to get a shell):

git config pull.rebase true
git config rebase.autoStash true

Using XMPP as a Message Bus

This highly technical post is focused on the idea of using XMPP as a generic message bus.

As a bit of history, XMPP, or the Extensible Messaging and Presence Protocol, was originally created in 1999 under the name Jabber. Jabber was originally focused on being a ‘chat’ protocol, and came in when things like ICQ, AIM and GAIM were the buzzwords. As a standard for chatting, it quickly gained a following, as it allowed a variety of different chat applications (most with their own proprietary protocols) to easily inter-communicate. By 2000, the IETF had formed a working group (IMPP), and by 2002, the official name (and steering committee) had formed around the name XMPP. By 2004, the first RFCs (3920 and 3921) were released. A plethora of clients and servers were created, and many still exist today.

One of the key achievements of XMPP, however, was not its ability to chat, but its development of the XMPP Extension Protocols (or XEPs). Each XEP is a standard that describes a new piece of functionality that can be layered on top of (or around) the base XMPP protocol. To date, there are 392 XEPs (see the list). They range from security (OMEMO / XEP-0384, OTR / XEP-0364, Encryption / XEP-0116) to UI (Forms / XEP-0004, Forms Layout / XEP-0141, XHTML / XEP-0071) to IoT (Sensors / XEP-0323, IoT Control / XEP-0325) and everything in-between. In fact, when you’re looking at an XMPP library or a client (like an app on an app store), the feature list is almost always just a list of which of the 392 XEPs it supports.

A lot of XMPP use today is in machine-to-machine communication. Some of the relevant XEPs are Forms / XEP-0004, RPC / XEP-0009, Service Discovery / XEP-0030, Ad-hoc Commands / XEP-0050, Search / XEP-0055, Publish-Subscribe / XEP-0060 and In-Band Registration / XEP-0077. I’ve personally found that leveraging In-Band Registration, Service Discovery, Ad-hoc Commands and Forms creates a very straightforward, extensible and performant model for building a scalable machine-to-machine (or even user-to-machine) message bus.


Everything starts with registration. A new instance of code (whether it be a Docker container, a mobile app or just some process running in your dev environment) needs to be able to connect to the XMPP server, and needs to establish itself under a given name. This is the process of registration. XEP-0077 already exists for allowing In-Band Registration. What this means is that your code connects to the XMPP server anonymously, and then sends the user/password that it wants to be known as in the future, thus ‘self-registering’. The biggest issue with this is, of course, that there is very little control over ‘who’ is registering. It means that with a ‘wide open’ self-registration, you could easily have thousands of spam bots register themselves with your server and start sending millions of spam messages, without you even being aware (at least until the angry messages start coming back).

The trick is to restrict who is allowed to self register. This can best be done by having a shared secret between the XMPP server and the client. My particular favorite is to build a composite password that can be verified by the server.

UserName = Prefix + Base64({random name})
FirstPart = Base64({random password})
Password = FirstPart + '/' + Base64(Hash(ServerName + UserName + FirstPart + SharedSecret))

In the above pseudo-code, I define a shared secret that’s tied to a specific prefix. Every time I create a new product, or even a new version of the product, I change the prefix and the shared secret. This limits the damage if someone reverse engineers the code to find the shared secret. Let’s walk through an example:

A random name is created, base64’d (this reduces the character set nicely) and added to a prefix (let’s say ‘MyAppV1-‘). Thus, my UserName might be MyAppV1-c3VwZXJjYWxpZnJhZ2lzdGlj. Next, a truly random password is generated, giving us FirstPart = 'Tm90IFRoYXQgUmFuZG9t'. If our server is located at xmpp.example.com, and the shared secret tied to MyAppV1- is ‘Bob‘, then we generate the composite string 'xmpp.example.comMyAppV1-c3VwZXJjYWxpZnJhZ2lzdGljTm90IFRoYXQgUmFuZG9tBob'. This is then hashed, base64’d, and appended to the FirstPart, giving us a final password of 'Tm90IFRoYXQgUmFuZG9t/ncwgNfpLhlWvnEt7UCovNRaqcpc='.

Because we’ve hashed the shared secret into the final password, there is no way to recover it from the password itself. Additionally, because the hash contains the FirstPart and the UserName, someone can’t just copy & paste the second part and use it with other user ids.
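As a concrete sketch of the scheme (the hash here is SHA-1, purely an assumption for illustration; any hash both sides agree on would do; the prefix, secret and server name are the example values from the text):

```shell
PREFIX="MyAppV1-"
SHARED_SECRET="Bob"
SERVER_NAME="xmpp.example.com"

# Random, base64'd name and the random first part of the password
USER_NAME="${PREFIX}$(head -c 18 /dev/urandom | base64)"
FIRST_PART="$(head -c 15 /dev/urandom | base64)"

# Base64(Hash(ServerName + UserName + FirstPart + SharedSecret))
HASH="$(printf '%s' "${SERVER_NAME}${USER_NAME}${FIRST_PART}${SHARED_SECRET}" \
        | openssl dgst -sha1 -binary | base64)"

PASSWORD="${FIRST_PART}/${HASH}"
echo "$USER_NAME"
echo "$PASSWORD"
```

The server-side verifier simply recomputes the same hash from the presented UserName and FirstPart plus its own copy of the shared secret, and compares it against the second half of the password.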

Final registration message from the code to the server:

<iq type='set' id='reg2'>
  <query xmlns='jabber:iq:register'>
    <username>MyAppV1-c3VwZXJjYWxpZnJhZ2lzdGlj</username>
    <password>Tm90IFRoYXQgUmFuZG9t/ncwgNfpLhlWvnEt7UCovNRaqcpc=</password>
  </query>
</iq>

To make this all work, you need your XMPP server to be able to verify these passwords during In-Band Registration. My personal favorite XMPP server is the venerable ejabberd. One of its many features is that you can easily add your own password management system. Due to a few hurdles in doing that within a Docker environment, I won’t go into the details here (perhaps a post for another day), but it can be done fairly easily. I have a standard Docker container / sidecar that works with ejabberd and verifies these passwords against a known list of Prefix/Shared Secret combinations.


Once your code is registered, it usually needs to find the other servers it wants to talk to. This is easily done via Search / XEP-0055 and Service Discovery / XEP-0030. A portion of this protocol would be to search for the JIDs of interest (perhaps searching by nickname):

<iq type='set' id='search2' xml:lang='en' to='search.xmpp.example.com'>
  <query xmlns='jabber:iq:search'>
    <nick>myservice</nick>
  </query>
</iq>

The server would then respond with something like:

<iq type='result' id='search2' xml:lang='en' from='search.xmpp.example.com'>
  <query xmlns='jabber:iq:search'>
    <item jid='myservice@xmpp.example.com'/>
  </query>
</iq>

This Search query found a single matching element with the given nickname. The code now has the full name (the JID). To make sure that this is the ‘right’ server, the next step would be to subscribe to the JID and then issue a Service Discovery request:

<iq type='get' id='info1' to='myservice@xmpp.example.com'>
  <query xmlns='http://jabber.org/protocol/disco#info'/>
</iq>

Assuming this is the right server, the response would look like:

<iq type='result' id='info1' from='myservice@xmpp.example.com'>
  <query xmlns='http://jabber.org/protocol/disco#info'>
    <feature var='http://jabber.org/protocol/disco#info'/>
    <feature var='http://jabber.org/protocol/disco#items'/>
    <feature var='http://jabber.org/protocol/commands'/>
    <feature var='urn:example:myapp:1'/>
  </query>
</iq>

By the presence of the application-specific feature (the illustrative urn:example:myapp:1 above), it’s now known that this code is the right one to talk to.


Now that we’ve found the right server to talk to, and the Service Discovery has also shown that it supports Ad-Hoc Commands / XEP-0050 via the http://jabber.org/protocol/commands feature, we can begin to talk to it.

There are many different ways that the communication could occur, and perhaps the most common for machine-to-machine communication would be RPC / XEP-0009. However, using Ad-Hoc Commands / XEP-0050 has an added benefit: the same code can be used for humans as well. Ad-Hoc Commands is a little less efficient (slightly more XML is sent), but supporting both human and machine interaction over the same protocol means that it’s very easy to test, and it supports manual commands when necessary.

Ad-Hoc Commands enables the list of valid commands to be dynamically queried. This is great for manual commands, but isn’t necessary for machine-to-machine. Additionally, if you send an initial request to a command with no parameters, it will respond with all the parameters that it supports, along with help text. This is also great for manual commands, but again, isn’t needed for machine-to-machine. However, having it all does provide a great ‘self-documenting’ feature for the command, so that even when you are coding machine-to-machine, you can easily get all the details by just requesting with no parameters (or the wrong parameters).

Additionally, Ad-Hoc Commands allows for multi-step commands (i.e. you send the first bit of information, and then the next step responds with the questions you have to answer, which can continue as necessary). Again, this isn’t usually important for machine-to-machine, as you can usually provide all the data in the first step.

All the data necessary for a command is usually encapsulated in a Form / XEP-0004. Forms can be very simple, with just the name, type and some basic help text, but can also be expanded with complex validation (XEP-0122), complex UI layout (XEP-0141), CAPTCHA (XEP-0158), videos (XEP-0221), arbitrary XML (XEP-0315), color (XEP-0331), signatures (XEP-0348) and geolocation (XEP-0350).

For this example, we’ll actually submit an unfilled-out request (again, not normal for machine-to-machine), so we can see what kind of data can be returned. We’ll issue a request to the list-users command:

<iq type='set' id='exec1' to='myservice@xmpp.example.com'>
  <command xmlns='http://jabber.org/protocol/commands' node='list-users' action='execute'/>
</iq>

The server responds with:

<iq type='result' id='exec1' from='myservice@xmpp.example.com'>
  <command xmlns='http://jabber.org/protocol/commands' node='list-users'
           sessionid='list-users:session1' status='executing'>
    <actions execute='next'>
      <next/>
    </actions>
    <x xmlns='jabber:x:data' type='form'>
      <title>List Users</title>
      <instructions>Please select the type of user to list.</instructions>
      <field var='user-type' label='User Type' type='list-single'>
        <option label='Employees'><value>employees</value></option>
        <option label='Contractors'><value>contractors</value></option>
        <option label='All Users'><value>all</value></option>
      </field>
    </x>
  </command>
</iq>
Here you can see that the command requires a single field called user-type. It must be one of three possible values. In a graphical client, this might be displayed as a popup with the three choices. The actual submission (which would normally be the first message in a machine-to-machine scenario) would be:

<iq type='set' id='exec1' to='myservice@xmpp.example.com'>
  <command xmlns='http://jabber.org/protocol/commands' node='list-users'
           sessionid='list-users:session1'>
    <x xmlns='jabber:x:data' type='submit'>
      <field var='user-type'><value>employees</value></field>
    </x>
  </command>
</iq>

With a final response listing the two employees:

<iq type='result' id='exec1' from='myservice@xmpp.example.com'>
  <command xmlns='http://jabber.org/protocol/commands' node='list-users'
           sessionid='list-users:session1' status='completed'>
    <x xmlns='jabber:x:data' type='result'>
      <title>List Users</title>
      <reported>
        <field var='name' label='Full Name'/>
        <field var='email' label='Email Address'/>
      </reported>
      <item>
        <field var='name'><value>Mike Mansell</value></field>
        <field var='email'><value>mike@example.com</value></field>
      </item>
      <item>
        <field var='name'><value>Sonja McLellan</value></field>
        <field var='email'><value>sonja@example.com</value></field>
      </item>
    </x>
  </command>
</iq>


While this has been a long post, it’s covered all the major components of leveraging XMPP as a message bus: registration, discovery and commands.

Many people feel that the biggest issue with XMPP is the fact that it uses XML, with all the corresponding verbosity. However, there are several things to consider. First, almost all communication is actually done over a compressed transport (XEP-0138), and XML compresses very well (usually 10:1 or better). If even more compression is needed, or the device is resource-constrained (e.g. sensors), a binary format like EXI works very well (see Efficient XML Interchange (EXI) Format / XEP-0322).

Additionally, almost all of the complex XML is hidden behind an XMPP library for your language. For example, in the Discovery section, we were checking to see if the code was ‘our’ server (i.e. had the right feature). This can be done using the Babbler Java library with a couple of lines of code:

// 'jid' is the JID found via Search; the feature name is illustrative
boolean isOurServer = client.getManager(ServiceDiscoveryManager.class)
        .discoverInformation(jid)
        .thenApply(infoNode -> infoNode.getFeatures().contains("urn:example:myapp:1"))
        .get();
if (isOurServer) {
    // All good
} else {
    // Wrong server
}

I’ve been using this process for a few months now, and have multiple projects using this methodology. It works well, scales nicely (XMPP / ejabberd can handle millions of devices communicating), is secure (all communication is over TLS and can even be end-to-end encrypted using OMEMO / XEP-0384 or OTR / XEP-0364), and is incredibly easy to debug.

After 4 months: Stopping Telemarketers / Saving Money Part 2

So, it’s been 4 months since I originally posted about my new phone system (see Part 1 if you haven’t read it). It’s been running this whole time, pretty much perfectly. I thought I’d take this time to talk about how many telemarketers I’ve stopped, how much money I’ve saved, and some more detail on how it’s built.


I’ve received 263 calls in the last 4 months. That’s an average of 16 calls a week, or about 2 calls a day. Of course, the calls aren’t spread that evenly; the highest was 10 calls in a single day. So, not huge, but respectable for a 2-person house with no children.

Since I set it up to automatically route known people in, and send the rest to voicemail, I can get a more detailed breakdown. Of the 263 calls, 127 were from people I knew, while 136 (51%) were sent to voicemail. Those 136 came from 57 unique phone numbers, of which only 22 actually left a voicemail. That means 114 calls were from people (or machines) who had nothing important to say, and that I didn’t have to listen to. That’s a pretty good reduction in wasted time, if you ask me.


There are basically 3 “monthly” costs to this system: the VOIP provider, the telephony provider, and my server time. Since I run this on a server in my house, the server cost is effectively $0.

Month       VOIP    Telephony   Total
July        $3.41   $4.45       $7.86
August      $2.57   $4.10       $6.67
September   $2.71   $4.60       $7.31

Because I use a VOIP adapter (Obihai) that supports multiple VOIP providers, I have one VOIP provider for outbound calls and a separate one for inbound SIP (linphone). The inbound one doesn’t charge me, so my VOIP bills are only for calls that I make. I’ve been quite happy with the outbound provider so far. They charge $1.50/month for 911 services, $0.85/month for a phone number, and $0.009/minute (I’m using their premium tier since I feel the call quality is better). That covers anywhere in North America, so there is no such thing as long distance charges.

The telephony provider (Twilio) provides all the services I need to build the system. They charge me $1.00/month for a phone number, $0.0085/minute for inbound calls, and $0.004/minute for outbound VOIP (and since linphone doesn’t charge anything for inbound, that’s it).

Considering that my monthly bill from my local phone provider is $34.78, only having to pay $6 to $8 is a huge savings.

Of course, I’m paying both at the moment, since I haven’t pulled the plug on my “old” landline while I’ve been testing the system, but I think I’ll be doing that within the next month or so. There doesn’t seem to be any reason not to.

High Level Architecture

Some people have asked for additional details on the design. I’ll probably write a Part 3 post in the future going into the details, but I thought I’d provide some basic details. NOTE: This will only be interesting to the tech-heads; everyone else can skip.

Whenever a person calls my main phone number, it’s answered by Twilio (#1), as they are the Telco for the number. Twilio looks up the details on that phone number, and sees that I want it to issue an HTTPS (REST) request to my server (#2). As a side note: If my server is not responding or provides an error, Twilio has a fallback where it immediately sends the call to my cell phone. Thus, if there are any technical glitches, I still get the call.

At this point, my application, Phonely, receives the request. It queries the database (#3), which is currently PostgreSQL, to figure out how to handle the call. If the phone number is not listed, or does not have ‘direct’ contact privileges, then the Phonely server begins the process of collecting voicemail. This happens via a bunch of back-and-forth requests between Twilio and the Phonely server, using Twilio’s great XML language, TwiML, to make it happen.

As an example of this, here’s the first response Phonely returns to start the voicemail process:

<Response>
    <Gather timeout="5" numDigits="1" action="" method="GET" finishOnKey="#">
        <Say language="en-US">Hi. Hold on, this is not a normal answering machine. I screen all calls from unrecognized numbers, therefore you won't reach Mike without leaving a message. Stay on the line or press 1 to leave a message.</Say>
    </Gather>
    <Redirect method="GET">;DQ-Submit=true</Redirect>
</Response>

In this fragment, we’re telling Twilio to read some text back to the caller and attempt to gather some keypresses. If they press a button, then it will call Phonely back with the keypress. If they don’t do anything within 5 seconds, then it will call Phonely back indicating that they didn’t press any key.

It’s a very straightforward language, and can be easily used to build very complex responses.

Once Phonely has decided on the next course of action, it also issues a request to my XMPP/Jabber server (#4). XMPP/Jabber is an instant messaging protocol that underpins a lot of systems out there (Facebook Chat, WhatsApp, Google Talk, PlayStation chat, etc.). This message is directed at an app installed on my phone (#5), so I get immediate notification of who’s calling. If they leave voicemail, then I also get the voicemail so that I can listen to it directly on my phone without having to call anything. Additionally, it can send to multiple parties, so everyone at the house receives the message on their instant messaging apps.

If Phonely decides that the person is allowed to directly contact me, then it sends a response to Twilio commanding it to redirect to my SIP address. SIP is the underlying protocol for connecting with VOIP phones. Twilio then connects with my inbound SIP provider (#6), which then redirects to my actual VOIP adapter at my house (#7). All my cordless phones are connected to the VOIP adapter, so my phones then ring (#8), I answer, and we have a conversation.

If I decide to make a call (#9), the VOIP adapter issues the outbound connection to my VOIP provider (#10), who then connects to the real phone number (#11).

Personally, I’d like to simplify a bit and remove the two extra providers. Not that they haven’t been providing a great service, but I should be able to get Twilio to provide it all, and that would just make the system simpler and less error-prone. I just haven’t figured out the details yet.

As always, feel free to leave questions in the comments or contact me directly.

Stopping Telemarketers and Saving Money

I’m sure we’ve all experienced the never ending calls of telemarketers. Especially around dinner time.


My wife and I have had the same phone number for more than 20 years, and more than a decade ago, we did provide some donations over the phone (although now, all our donations are done through a single institution). That means that we’re on a large number of lists. It’s not uncommon to receive 10+ calls a day.

As I’ve recently retired from the corporate scene, I decided to do something about it. After about a month of development, I’ve come up with “Phonely” (yeah, I might need a better name). This is an application that leverages the Twilio API (although it should be easy to adapt to Plivo, Nexmo or similar systems) to provide a complete phone system.


In a nutshell, I ported my phone number to Twilio and registered my application with them. Therefore, as soon as a phone call comes in, Twilio calls my app. My app looks at the incoming phone number, and checks it against a database of numbers. If it’s recognized and approved, it immediately forwards the call to the house phone (via VOIP). If not, it sends them to a voicemail system (also part of Phonely).

Thus, for friends, family and known companies, it works just like before. They dial the phone and it rings in the house. For everyone else (aka telemarketers), they go to voicemail, and my phone never rings.

Now that I’ve got a computer answering the phone, there’s a whole bunch of other stuff I can do. For example, I can have different voicemail prompts for different people. As another example, if the same phone number dials multiple times in quick succession, I can forward it to the house phone (telemarketers don’t dial back that quickly). Thus, people dialing from numbers I don’t recognize can still get through by just redialing right after getting voicemail.

Another feature I’m playing with is to have the ‘forwarded call’ actually travel with me. If I’m at home, it rings the house phone, if not, it rings my cell (a small app on my house network periodically checks if my phone is on the local network, and if not, tells Phonely that I’m away from the house).
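That presence checker can be sketched as a small script run on a timer. Everything here is hypothetical: the phone’s IP address and the Phonely endpoint are invented for illustration, not the real system:

```shell
PHONE_IP="192.168.1.42"                          # hypothetical address of my phone
PHONELY_URL="http://phonely.local/api/presence"  # hypothetical Phonely endpoint

# If the phone answers a ping on the local network, I'm home
if ping -c 1 -W 2 "$PHONE_IP" >/dev/null 2>&1; then
  status="home"
else
  status="away"
fi

# Tell Phonely where to ring; ignore transient network errors
curl -s -X POST -d "status=$status" "$PHONELY_URL" || true
echo "$status"
```

A real version would likely check the router’s ARP/DHCP table instead of pinging, since phones often sleep their Wi-Fi radios.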

Overall, this has basically cut out all telemarketer calls, since almost none will leave a recording, and even if they do, I’m not disturbed.

Additionally, switching over to a VOIP system from my local phone provider (Telus) seems like it’s going to save me a bunch of money.

I currently have a pretty basic landline package, but it’s still $40 CAD a month. With Twilio, the phone number costs me $1 a month plus about $0.01 / minute (it’s a little fluid: for voicemail calls I only pay $0.008 / minute, since it’s just an incoming call, but for the real calls, I additionally pay $0.005 / minute for the outgoing connection, so it could be up to 1 cent a minute).

However, even at $0.01 / minute, that still means that I’d have to have more than 3900 minutes of calls to break even with my old landline. As that’s 65 hours or more than 2.5 days straight on the phone per month, I know that I spend nowhere near that much time on the phone.

At the moment, I’m just using this system for myself, but I’m thinking of making the entire system available as open source (probably via a Docker image). I’ve even thought about running a small business to make it easy for others to use (even with Docker, you still have to have the technical know-how to run Docker, have a server to run it on, setup and configure the Twilio system, and likely setup a VOIP phone at home).

I’ll likely provide some additional blog entries on some of the technical components of the system in the near future. But I’d love to hear anyone’s feedback or comments.

Setting up a build environment

A build environment is composed of many moving parts.

  1. Source control environment
  2. Build process
  3. Build environment
  4. Build Orchestration
  5. Build Results
  6. Build Artifacts

All of these require their own specific setups, and there can be many different ways to accomplish each. I’ll talk about my opinionated way, with detailed setups.

Source Control Environment

First, there is the source code itself. Generally, this will be stored in something like Git/GitHub/GitLab, SVN, RTC, etc. For most open source environments, GitHub is currently the king, but I’m seeing quite a lot of movement to GitLab as well. Even for closed source, GitHub is a pretty cheap environment to use, and cuts down on one more system to set up and manage at the tool level.

All of my code is currently stored in a variety of repositories within GitHub. Quite a few are stored as private repositories, especially before I decide that I’m ready to ‘make it public’.

Build Process

There are countless ways that the build process can be set up, but I generally like to break it into 4 stages.

  1. Code Compiling
  2. Unit Testing
  3. Assembly
  4. Integration Testing

Code Compiling

When it comes to compiling the code, I’m generally a fan of Maven, but there are, obviously, a lot of different choices such as SBT, Gradle, Ant, etc. I highly recommend settling on one tool and using it everywhere, since it keeps things consistent. Regardless of choice, I do recommend that you leverage a Maven-style artifact repository, since managing dependencies manually is too error-prone. Fortunately, just about every build tool today supports it.

Within Maven, I like using a specific directory structure that works well with Eclipse, and that means that all poms need to be in ‘non-recursive’ folders (i.e. one pom cannot be in a child folder of another pom). Thus, my project structures tend to look like:
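A sketch of that kind of layout (module names are placeholders):

```
myproject/
    root/              <- parent pom only
        pom.xml
    module-a/
        pom.xml        <- declares ../root/pom.xml as its parent
        src/...
    module-b/
        pom.xml
        src/...
```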


In this structure, each level has a ‘root’ folder that contains a parent pom. Building the entire project simply means going into the top-level root folder and running:

mvn install

Unit Testing

While I tend to be somewhat hit-and-miss around my consistent application of unit testing, I do heavily believe in them. There are many different unit testing frameworks for each language you work in. For Java, JUnit is the most well known.

I generally put all my unit testing within each project, and have them run as part of the compiling process. Thus, each project doesn’t complete compiling until the unit tests are complete.

Unit tests are meant to be reasonably quick. The entire suite shouldn’t take more than 5-10 minutes to run. Anything more complicated should be located with the Integration Testing.


Assembly

Once all the pieces are compiled and unit tested, the next step is to assemble them into the final structure. These days, that usually means Docker for me. This is where I build the Docker image. This isn’t a complex stage, since it’s almost always just following a Dockerfile that’s checked into source control, but it’s critical to have the final assembly done before Integration Testing happens.

Integration Testing

This is where more complex and longer running testing happens. This can range anywhere from a few minutes to days of testing. I’ll generally set up my testing environment to run only ONE integration test at a time. This means that if a new build comes along while an integration test is still running, the old test is cancelled and the new one is started.
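One simple way to get that "only one at a time, newest wins" behavior is a small wrapper script. This is a sketch: the pid-file path and the test command are made up, not part of my actual setup:

```shell
# Run a command, cancelling whichever previous run the pid file records.
run_exclusive() {
  pidfile=$1; shift
  # If the previous run is still alive, cancel it first
  if [ -s "$pidfile" ] && kill -0 "$(cat "$pidfile")" 2>/dev/null; then
    kill "$(cat "$pidfile")" 2>/dev/null || true
  fi
  "$@" &                  # start the new run in the background
  echo $! > "$pidfile"    # record its pid for the next build to find
  wait "$!"
}

# Hypothetical usage; the real command would be the integration-test suite:
PIDFILE=$(mktemp)
run_exclusive "$PIDFILE" echo "integration tests started"
```

In a real Jenkins setup this would more likely be done with job-level concurrency controls, but the principle is the same.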

There are many different tools to use here, and I generally use multiple different ones. Things like Gatling, …

Build Environment

Over the years, I’ve used many different build environments, but I’m currently moving towards standardizing on Jenkins. It’s well known, easy to set up, and easy to customize.

Like everything else I do, I’ll only run tools within a Docker environment. Currently, I’m using the out-of-the-box build, so

docker run -d -v /var/run/weave/weave.sock:/var/run/docker.sock -v /data/docker:/usr/bin/docker -v /data/jenkins:/var/jenkins_home -t --name jenkins jenkins:latest

Because I’m going to use Docker during the build process, I want to expose the Docker socket (in this case the Weave Works socket, since I also do everything in a Weave-controlled environment). Additionally, I need the docker binary to be available. NOTE: Make sure that you mount a statically compiled docker binary, since the majority of ones installed by default are NOT statically linked, and your Jenkins container won’t have the libraries they need. Finally, I’ve mounted a data folder to hold all of Jenkins’ state.

NOTE: Because this Jenkins is running in my isolated Weave environment, I do need to expose it to the outside world so that GitHub hooks, etc. can reach it.

In my case, since I’m usually working at my house, which is NAT’d behind a firewall/router, I find it easier to just expose all my different servers under specific port numbers. Eventually, I’ll probably set up a good reverse proxy for it all, but until then, I’ve decided to expose Jenkins under port 1234. Additionally, enp0s3 happens to be the linux network interface on my box, most other people probably have eth0 or eth1.

iptables -t nat -A PREROUTING -p tcp -i enp0s3 --dport 1234 -j DNAT --to-destination $(weave dns-lookup jenkins):8080

With a small change to my DNS provider (Cloudflare), I now have it available to everyone.

Build Orchestration

I’ve just started playing with Jenkinsfiles and the Multibranch Pipeline code, but so far, it’s been very good and easy to use (although I still don’t have the GitHub change hook working after fighting with it for 5 hours).

Groovy has never been a language of interest for me, but it’s close enough to Java that I don’t generally have a problem. Of course, every example uses Groovy’s DSL structure instead of just plain functions, so it always “looks” weirder than it is.

One of the things I like to do is to have all my POMs use a consistent XXX-SNAPSHOT version, and then, as part of the build, replace the versions with the latest build number. Within my Jenkinsfile, I’m currently using this:

// First, read the version number from the POM
def pom = readFile 'root/pom.xml'
def project = new XmlSlurper().parseText(pom)
def version = project.version.toString()
// Set to null so Jenkins' serialisation of the job doesn't fail on the
// non-serializable XmlSlurper result
project = null
// Update the version to contain the build number
version = version.replace("-SNAPSHOT", "")
def lastOffset = version.lastIndexOf("-")
if (lastOffset != -1)
   version = version.substring(0, lastOffset)
version = version + "-" + env.BUILD_NUMBER
env.buildVersion = version

Two key issues here.

1) In Jenkins 2’s new security sandbox, the use of XmlSlurper causes a whole bunch of runtime errors, so you’ll have to run the job 3 or 4 times, each time approving a new method call (the new(), the parseText(), the getProperty(), etc.). A little annoying, but once you’ve done it, it won’t bother you again.

2) Any non-serializable object, such as the XmlSlurper results, can’t be kept around, since some of the later “special functions” actually serialize the entire context so that the data can be restored later. Thus, a simple solution is to just assign null to those variables (such as the project variable above). There’s a @NonCPS annotation as well, but I don’t really understand it, and this works fine for these simple variables.
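For comparison, the same SNAPSHOT-to-build-number rewrite can be sketched in plain shell, which sidesteps both the sandbox approvals and the serialization issue. The version string and build number below are made-up examples; in Jenkins, BUILD_NUMBER comes from the environment:

```shell
# Plain-shell sketch of the same rewrite: strip -SNAPSHOT, strip any trailing
# -qualifier, then append the build number. Values here are examples only.
version="1.4.2-SNAPSHOT"
BUILD_NUMBER=57                   # provided by Jenkins in a real build
version="${version%-SNAPSHOT}"    # 1.4.2
version="${version%-*}"           # drop a trailing -qualifier, if any
buildVersion="${version}-${BUILD_NUMBER}"
echo "$buildVersion"              # 1.4.2-57
```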

Build Results

I generally like to publish the different build results, such as the test results. The Jenkinsfile has a couple of step commands that make that pretty easy.

 step([$class: 'ArtifactArchiver', artifacts: '**/target/*.jar', fingerprint: true])
 step([$class: 'JUnitResultArchiver', testResults: '**/target/surefire-reports/TEST-*.xml'])

In this case, I’m storing all the JARs as build artifacts, and the surefire XML files as JUnit results (Jenkins has some nice tooling to display JUnit results nicely).
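As a quick sanity check of what those Ant-style globs will pick up, here is a rehearsal (a sketch) against a throwaway Maven-like directory tree; the file names are invented:

```shell
# Rehearse the two globs against a fake Maven build tree in a temp dir.
set -e
dir=$(mktemp -d)
cd "$dir"
mkdir -p app/target/surefire-reports
touch app/target/app-1.0.jar
touch app/target/surefire-reports/TEST-AppTest.xml
# Roughly what '**/target/*.jar' and '**/target/surefire-reports/TEST-*.xml'
# will match and archive:
find . -path '*/target/*.jar' -o -path '*/target/surefire-reports/TEST-*.xml'
```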

Build Artifacts

Finally, I want to store the artifacts, such as the Maven artifacts or Docker images into a repository, either public or private. I’ll talk more later about hooking up to the public Maven repository or standing up your own personal Docker repository.

docker-machine, none/generic drivers and TLS

I really like the idea of docker-machine. It provides a nice interface where I can see the machines that I’m working with. It’s easy to use the commands to quickly switch between machines, and it has lots of great commands for scripting.

However, if you didn’t create the machine on the computer where you are running docker-machine, it’s a complete mess (at least as of Docker 1.9). There are quite a few reported issues, and acknowledgements that it’s broken (See Docker Issue #1221).

But I was able to get at least the basic connections working by piecing together different comments and other articles found via Google.

The primary issue is the TLS security that surrounds the Docker socket and allowing docker-machine to have access to it.

Additionally, the only docker-machine driver that ‘kind of works’ is the ‘none’ driver. However, it’s really meant as a test driver, so the fact that it works is a hack, and it sounds like they plan to remove it (see Docker Issue #2437). It seems that the intent is for the ‘generic’ driver to eventually be used for this purpose, but at this point the generic driver automatically regenerates all certificates and restarts the Docker daemon. That makes it completely useless when you have multiple docker-machine installs managing the same box (e.g. in a production environment, you might have multiple administrators who look after the boxes).

So, for now, these steps work, but this will likely fail before long.

Download the necessary files

At this point, the complete set of TLS files are needed on the client box. This is the ca.pem, ca-key.pem, server.pem and server-key.pem.

Most of these are present in the /etc/docker folder on the host, but the ca-key.pem may only be present wherever you originally created the machine (i.e. if you used docker-machine create on some other box, the ca-key.pem is only on that other box).

Copy all these files to a directory on your client box.

Generate a new Client Certificate

Now, we need to generate a client certificate for your client box, and then sign it with the server certificate.

openssl genrsa -out key.pem 4096
openssl req -subj '/CN=client' -new -key key.pem -out client.csr
echo extendedKeyUsage = clientAuth > extfile.cnf
openssl x509 -req -days 365 -sha256 -in client.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out cert.pem -extfile extfile.cnf
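Before copying the files around, it’s worth verifying that the freshly signed client certificate actually chains back to the CA. This self-contained rehearsal uses a throwaway CA in a temp directory (so nothing real is touched), runs the same recipe as above, and checks the result:

```shell
# Rehearse the client-cert recipe with a throwaway CA, then verify the chain.
set -e
dir=$(mktemp -d)
cd "$dir"
# Throwaway CA standing in for the real ca.pem / ca-key.pem
openssl genrsa -out ca-key.pem 2048 2>/dev/null
openssl req -new -x509 -days 1 -key ca-key.pem -subj '/CN=test-ca' -out ca.pem
# The same client-certificate steps as in the post
openssl genrsa -out key.pem 2048 2>/dev/null
openssl req -subj '/CN=client' -new -key key.pem -out client.csr
echo extendedKeyUsage = clientAuth > extfile.cnf
openssl x509 -req -days 1 -sha256 -in client.csr -CA ca.pem -CAkey ca-key.pem \
  -CAcreateserial -out cert.pem -extfile extfile.cnf 2>/dev/null
# Confirm the signed cert verifies against the CA
openssl verify -CAfile ca.pem cert.pem
```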

Create the Machine

Now we need to create the machine using docker-machine and then fix up the configuration.

docker-machine --tls-ca-cert ca.pem --tls-client-cert cert.pem --tls-client-key key.pem create -d "none" --url tcp:// digitalocean-wordpress

Of course, replace the IP address in the --url argument with the IP address of your Docker host. The final argument is the name you want the machine to be known by.

Unfortunately, the driver doesn’t copy the certificate information into the right folder, so you have to fix things up.

Navigate into the ~/.docker/machine/machines/digitalocean-wordpress folder

cd ~/.docker/machine/machines/digitalocean-wordpress

Now, copy all 5 files (ca.pem, server.pem, server-key.pem, cert.pem and key.pem) into this folder.

NOTE: Annoyingly, Docker expects the files to have specific names, even though there is a config file that points to them, so don’t rename them from what’s listed.

Next, modify the config.json, and update the bottom section:

 "AuthOptions": {
   "CertDir": "/home/mmansell/.docker/machine/certs",
   "CaCertPath": "/home/mmansell/.docker/machine/machines/digitalocean-wordpress/ca.pem",
   "CaPrivateKeyPath": "/home/mmansell/.docker/machine/certs/ca-key.pem",
   "CaCertRemotePath": "",
   "ServerCertPath": "/home/mmansell/.docker/machine/machines/digitalocean-wordpress/server.pem",
   "ServerKeyPath": "/home/mmansell/.docker/machine/machines/digitalocean-wordpress/server-key.pem",
   "ClientKeyPath": "/home/mmansell/.docker/machine/machines/digitalocean-wordpress/key.pem",
   "ServerCertRemotePath": "",
   "ServerKeyRemotePath": "",
   "ClientCertPath": "/home/mmansell/.docker/machine/machines/digitalocean-wordpress/cert.pem",
   "ServerCertSANs": [],
   "StorePath": "/home/mmansell/.docker/machine/machines/digitalocean-wordpress"
 }

Specifically, you’ll be updating the CaCertPath, ClientKeyPath and ClientCertPath entries.


At this point, you should be able to use the docker-machine commands.

docker-machine ls


eval $(docker-machine env digitalocean-wordpress)
docker ps

However, some commands, such as docker-machine ssh, will not work, since the SSH keys are not present. According to some of the discussions, this functionality is completely broken in the none driver.

Hopefully they’ll fix the generic driver (or create a new one) to allow full access without the ‘reinstall’ that the generic driver currently performs.

Setting up a Weave Environment

So, I’ve been doing some more reading on running a production docker environment, and it’s clear that docker really “stops” at the host level. Managing multiple hosts and multiple applications is a real hassle in the default docker environment.

Enter Weave.

It automatically provides a more robust network overlay that your docker containers can work with. I highly suggest that you read through their website.

I wanted to record the process I went through to set up a Weave environment across two different Cloud Providers (Digital Ocean and Vultr).

Vultr is not directly supported by docker-machine, so I used their website to create a basic host using the Ubuntu 14.04 LTS image.

Once the host was provisioned, I connected using SSH and installed Docker:

vultrbox% wget -qO- | sh

As of this article, that installed Docker 1.8, which is pretty close to the minimum version needed to get Weave working properly.

Back in my docker management box (at my house), I added the new box to management under docker-machine:

home% docker-machine create --driver generic --generic-ip-address vultrbox --generic-ssh-user root --generic-ssh-key .ssh/MikeMansell.priv vultr3

This box is now managed by docker-machine, which makes it much easier to issue commands.

The next step was to download the Weave commands onto the management box.

home% curl -L -o /usr/local/bin/weave
home% chmod a+x /usr/local/bin/weave

Since we’re going to issue the next set of commands in the context of our Vultr box, we’ll set up an environment variable so that weave automatically connects there:

home% export DOCKER_CLIENT_ARGS="$(docker-machine config vultr3)"

Next, we need to set up the Weave agent, but we want it to be secure and to support multiple isolated applications. So, I ran:

home% weave launch -password $WEAVE_PASSWORD -iprange -ipsubnet

This sets up a password that is used to make sure that all traffic between the different Weave routers is encrypted. It also tells Weave to manage the IP range, and that any application launched without an explicitly defined subnet runs in the default subnet. This launches the Weave router as a container on the vultr3 box.

Next, we want to have a DNS server running.

home% weave launch-dns

And finally, we want the Docker API proxy so that any docker container commands are automatically routed through Weave. However, since we want to make sure that everything remains secure, we’ll have to use the same TLS settings that were used to secure Docker in the first place.

home% weave launch-proxy --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem

We now have three containers running on our vultr3 box. This represents our ‘base’ environment. We can do a quick test to make sure that everything is fine by redirecting our docker commands through weave.

home% eval $(weave proxy-env)

And now we can run a test command to see how it’s working.

home% docker run --name a1 -ti ubuntu
root@a1:/# hostname

We started an Ubuntu shell container over on vultr3, and we can see that it ran through Weave, since it was assigned a DNS name in the weave.local DNS namespace.

We’ll clean that up.

root@a1:/# exit
home% docker rm a1

For a real scenario, we’re going to run a basic Cassandra server on this box. NOTE: We’ll run it in a separate isolated subnet.

home% docker run --name cassandra1 -e WEAVE_CIDR=net: -v /root/data/cassandra1:/var/lib/cassandra/data -d cassandra:2.2.0

Next, we want to get a Digital Ocean host set up so that it can join the network. As in my other posts, I’ll assume that the Digital Ocean Access Token is stored in an environment variable.

home% export DO_API_TOKEN=xxxxx

Then, we’ll create the machine

home% docker-machine create --driver digitalocean --digitalocean-access-token $DO_API_TOKEN --digitalocean-image "ubuntu-14-04-x64" --digitalocean-region "nyc3" --digitalocean-size "512mb" donyc3

This creates a new Docker host running in New York, with the cheapest hosting plan.

We’ll switch over to using this host for the next set of commands

home% eval $(docker-machine env donyc3)
home% export DOCKER_CLIENT_ARGS="$(docker-machine config donyc3)"

And, we’ll launch the Weave environment here as well. The big change is that when we launch Weave, we’ll provide the host of the vultr3 box so that they can connect together.

home% weave launch -password $WEAVE_PASSWORD -iprange -ipsubnet $(docker-machine ip vultr3)
home% weave launch-dns
home% weave launch-proxy --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem

Now, we can use the Docker proxy for this second box:

home% eval $(weave proxy-env)

And lastly, we can run a Cassandra query from Digital Ocean in NYC to the Cassandra server box over in Vultr.

home% docker run --name cq -e WEAVE_CIDR=net: -ti cassandra:2.2.0 cqlsh cassandra1

From the point-of-view of the containers, they think they are running in the same network. That’s pretty cool.

XMPP and IoT

I plan to post a lot more about the Internet of Things (IoT), but this post is just to get some thoughts down.

The more I read about the different competing protocols (XMPP, MQTT, DDS, AMQP, CoAP and others) the more I get concerned about the distinct lack of security (having been involved with the development of digital identity laws, it’s one of my pet rants).

Currently, the protocol receiving the most attention seems to be MQTT, but security is distinctly lacking. It’s basically a “vendor provided” feature (a plain-text user name & password in an optional TLS-encoded transport? This is HTTP basic-auth with all its issues). While this may be great for the large vendors who can stake out their ground and are large enough to fight off other contenders, it does nothing for interoperability (single sign-on only works within a given vendor). Worse, the majority of smaller players will just skip the security, or do it badly, so that they can get a solution to market.

On the other side, I think that XMPP does provide a complete set of security options, but its more “complicated” set of standards (or XEPs) causes users to turn away.

Additionally, XMPP gets a bad rap because the basic protocol is XML. <sarcasm> As everyone “knows”, XML is verbose. And isn’t this whole IoT thing about small constrained devices? </sarcasm> This line of reasoning always surprises me, since there is a never-ending list of failed predictions where some specific technological constraint caused people to say it’s impossible.

First of all, devices are becoming very powerful in a very small and cheap form factor. While there are still uses for very constrained devices, they are, by their very nature, more difficult to develop for than a more generic computer that can run common languages and software. That pressure constantly drives newer devices with more power and capability at the same cost as before. I’m currently playing with a Raspberry Pi that, at $35, provides a platform capable of sending tens of thousands of XMPP messages per second.

Secondly, there are many ways to reduce the verbosity of XML. Probably the best is the W3C EXI Recommendation, and its usage within XMPP (XEP-0322). XML compressed via EXI can regularly achieve 100x or more reductions in size, reducing the verbosity argument to nothing.

But, in the end, this is just my opinion. The only way to help solve this problem is to actually contribute to the solution. Thus, I will be working on, talking about, and delivering IoT based solutions based on my take on the right solution, which, at the moment, is definitely based on XMPP.

Setting up a blog via Docker

My wife wanted a blog and I decided to start blogging as well as a way to document all the projects I work on.

Therefore, I figured that one of the first posts would be how I set up the blog system.

Of course, I could have just set up an account on the WordPress site, but where’s the fun in that? Instead, I wanted to build a reasonably complete Docker setup and have it deployed to a cloud provider.

Docker Installation

To start, I installed the latest build of Docker (1.5 as of this post) on one of my Linux environments. For the most part, I followed the instructions on:

I wanted to use the Docker-maintained package installation. My only real issue was that I run my own apt-cacher since I have dozens of virtual environments, and I found that the above instructions used an HTTPS update site that conflicted with apt-cacher. Instead of tracking down a better solution, I simply added a "DIRECT"; entry to my apt configuration file at /etc/apt/apt.conf.d/01proxy.

I also installed docker-machine from

and just copied the executable into /usr/local/bin

Setting up the Machine

I decided to try out Digital Ocean as the cloud provider, since they had one of the cheaper plans ($5/month for 512 MB memory / 20 GB SSD / 1 TB transfer).

Once I had set up an account with Digital Ocean, I went to the Apps & API and then clicked “Generate new Token”.



That generated a very long token key that I stored away. In the following code snippets, I’ll make it available as an environment variable.

export DO_API_TOKEN=xxxxx

Next, I needed to figure out which region to start the machine in. Unfortunately, Digital Ocean’s website shows nice human-readable names, but not the codes needed by docker-machine. Fortunately, they do have a REST API that I could use.

curl -X GET -u "$DO_API_TOKEN:"

This returned a JSON structure (formatted and truncated for readability):

 "regions" : [{
 "name" : "New York 1",
 "slug" : "nyc1",
 "sizes" : [],
 "features" : ["virtio", "backups"],
 "available" : false
 }, {
 "name" : "Amsterdam 1",
 "slug" : "ams1",
 "sizes" : [],

The ‘slug’ element is the one needed. So, if we wanted a box in New York, we’d use nyc1. In my case, I wanted San Francisco, so it was sfo1.
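If you don’t want to eyeball the JSON, a tiny grep/sed pipeline (a sketch, run here against an inlined, trimmed sample of the response) pulls out just the slugs:

```shell
# Extract the region slugs from a saved copy of the API response.
# The JSON below is a trimmed, made-up sample of what the endpoint returns.
cd "$(mktemp -d)"
cat > regions.json <<'EOF'
{ "regions" : [
  { "name" : "New York 1",  "slug" : "nyc1", "available" : false },
  { "name" : "Amsterdam 1", "slug" : "ams1", "available" : true }
] }
EOF
# Grab each "slug" : "..." pair, then keep only the value
grep -o '"slug" *: *"[^"]*"' regions.json | sed 's/.*"\([^"]*\)".*/\1/'
```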

Running the docker-machine create command creates a new machine in San Francisco, with only 512 MB of memory. The final parameter is the name for the machine, which I’ve called wordpress. NOTE: You start incurring charges at this point.

docker-machine create --driver digitalocean --digitalocean-access-token $DO_API_TOKEN --digitalocean-region sfo1 --digitalocean-size "512mb" wordpress

This takes a few minutes to provision and set up the machine, and you should see something like:

INFO[0000] Creating SSH key...
INFO[0000] Creating Digital Ocean droplet...
INFO[0003] Waiting for SSH...
INFO[0066] Configuring Machine...
INFO[0108] "wordpress" has been created and is now the active machine.
INFO[0108] To point your Docker client at it, run this in your shell: $(docker-machine env wordpress)

You now have a machine running, which you can verify at any time with:

docker-machine ls

NAME      ACTIVE DRIVER       STATE   URL                        SWARM
wordpress *      digitalocean Running tcp://

Or you can use the Digital Ocean dashboard to see that it’s running:



Now, there are many different ways to issue commands to the docker machine, but my favorite is to just set up the environment so that regular docker commands work with the remote machine. So, enter:

$(docker-machine env wordpress)

This sets a few environment variables so that all future docker commands will run against the remote machine.

Setting up the Reverse Proxy

One of the things that I wanted to do was run many different websites from the same host. The easiest way to do that is to run a reverse proxy that internally forwards traffic, based on the requested Host header, to the appropriate docker container. I found a great docker container by Jason Wilder that did everything I needed:

So, all I needed to do was to run his nginx reverse proxy container as the front-end.

docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock jwilder/nginx-proxy

Now, any new container registered with the right environment variables will be automatically linked into the proxy container.

Setting up MySQL

A lot of the steps and inspiration for these instructions come from a blog post:

Thanks to Anthony Fielding for his set of instructions.

Because I want to be able to reproduce these steps for multiple different sites, I’ll store the site name and the MySQL password in environment variables:

export SITE_NAME=diamondq
export MYSQL_PASSWORD=xxxx

I want to make it really easy to manage the database environment, so I created a data container to hold the database data.

docker create --name $SITE_NAME-mysql-data -v /var/lib/mysql mysql

Next, I created the MySQL server itself.

NOTE: This is where I initially ran into a small challenge with the memory-constrained environment at Digital Ocean. MySQL didn’t want to initialize because there wasn’t enough RAM. Since this was going to be a very low-usage environment for some time, I just added 4 GB of disk space as swap. Connecting to the box was easy with the docker-machine command:

docker-machine ssh

Once on the box, it was a few standard commands to create the 4-gigabyte swapfile and make it permanent:

fallocate -l 4G /swapfile
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
echo "/swapfile none swap sw 0 0" >> /etc/fstab

DOUBLE NOTE: When the MySQL container failed that first time, it left the data container in a bit of a mess, and I had to remove and recreate both the data container and the MySQL container to get it to work.

By providing some environment variables, the wordpress database was automatically created.

docker run --name $SITE_NAME-mysql --volumes-from $SITE_NAME-mysql-data -e "MYSQL_ROOT_PASSWORD=$MYSQL_PASSWORD" -e "MYSQL_DATABASE=wordpress" -d mysql

We now have a MySQL server configured with an empty wordpress database. NOTE: I’m not exposing the MySQL port, so there is no way for anyone to connect to the database remotely.

Setting up WordPress

Now for WordPress. I originally looked around at the current WordPress container, but I wanted better control over the contents so that I could do things like back it up easier (as did Anthony, which is where I ran across his blog). So, I followed the same pattern as with MySQL. I started by creating a data container:

docker create --name $SITE_NAME-wordpress-data -v /var/www/html ubuntu

Next, I needed to download the current WordPress contents into this container. While this seems like a prime candidate for a separate Dockerfile, I didn’t want all this stored in Docker’s AUFS layers, but rather in the Docker volume.

docker run -it --rm --volumes-from $SITE_NAME-wordpress-data ubuntu /bin/bash -c "apt-get install -y wget && cd /var/www/html && wget && tar -xzf latest.tar.gz && mv wordpress/* . && rm -rf wordpress latest.tar.gz && chown -R www-data:www-data /var/www/html"

Since there may be multiple WordPress containers running on the same box, each one needs a unique port number so that the nginx reverse proxy can talk to it. Thus, we’ll put the unique port number into another environment variable, along with the public hostname that the proxy should answer for (substitute your own values):

export SITE_HOST=xxxx
export SITE_PORT=8080
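When adding a second or third site, picking the next free host port can be scripted. Here is a hypothetical helper sketch (it assumes ss is available; if it isn’t, the starting port is returned unchanged):

```shell
# Hypothetical helper: walk upward from a starting port until one is not
# currently listening, so each new site gets its own SITE_PORT.
next_port() {
  p=$1
  while ss -ltn 2>/dev/null | grep -q ":$p "; do
    p=$((p + 1))
  done
  echo "$p"
}
next_port 8080
```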

Now, we just start the final WordPress container:

docker run --name $SITE_NAME-wordpress --volumes-from $SITE_NAME-wordpress-data --link $SITE_NAME-mysql:mysql -p$SITE_PORT:80 -e "VIRTUAL_HOST=$SITE_HOST" -e "VIRTUAL_PORT=$SITE_PORT" -d antfie/wordpress

This is where all the magic comes together. It brings in the WordPress data container so that the WordPress content itself is available. It links in the MySQL database, so it can be contacted internally via the hostname ‘mysql’. It publishes the Apache web server on host port 8080 so that the nginx proxy can reach it. The two environment settings, VIRTUAL_HOST and VIRTUAL_PORT, tell the nginx reverse proxy that it needs to link this container into the proxy. And finally, we’re using antfie/wordpress, since it’s basically the normal WordPress image with WordPress itself removed (since we’re bringing it in ourselves via the data container).

Assuming that your DNS is configured to point your website (in my case) to the IP address of the Digital Ocean machine, everything should be ready to configure WordPress.

The only slight thing to know is that in one of the first configuration screens for WordPress, you must configure the database connection. In this case, the user name is root, the password is whatever is in $MYSQL_PASSWORD (don’t literally type $MYSQL_PASSWORD, since your browser knows nothing about this environment variable), and the host is the string ‘mysql’ (since that was the hostname used when linking in MySQL above).


For those who just want to do this quickly, it’s just three ‘blocks’:

Set up the nginx reverse proxy

docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock jwilder/nginx-proxy

Define the environment you want to build

export SITE_NAME=diamondq
export SITE_HOST=xxxx
export MYSQL_PASSWORD=xxxx
export SITE_PORT=8080

Build the environment

docker create --name $SITE_NAME-mysql-data -v /var/lib/mysql mysql

docker run --name $SITE_NAME-mysql --volumes-from $SITE_NAME-mysql-data -e "MYSQL_ROOT_PASSWORD=$MYSQL_PASSWORD" -e "MYSQL_DATABASE=wordpress" -d mysql

docker create --name $SITE_NAME-wordpress-data -v /var/www/html ubuntu

docker run -it --rm --volumes-from $SITE_NAME-wordpress-data ubuntu /bin/bash -c "apt-get install -y wget && cd /var/www/html && wget && tar -xzf latest.tar.gz && mv wordpress/* . && rm -rf wordpress latest.tar.gz && chown -R www-data:www-data /var/www/html"

docker run --name $SITE_NAME-wordpress --volumes-from $SITE_NAME-wordpress-data --link $SITE_NAME-mysql:mysql -p$SITE_PORT:80 -e "VIRTUAL_HOST=$SITE_HOST" -e "VIRTUAL_PORT=$SITE_PORT" -d antfie/wordpress

If you want a second site on the same box, you just change the environment and re-run the build instructions. It takes about 30 seconds.
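The three blocks above can be bundled into a single hypothetical wrapper function (a sketch; create_site and its arguments are my own invention, not part of any of these tools). With DRY_RUN=1 it just prints the commands, which makes it easy to review before spinning up a real site; the WordPress download step is elided here just as it is above:

```shell
# Hypothetical wrapper bundling the per-site commands above. With DRY_RUN=1
# the docker commands are printed instead of executed.
create_site() {
  SITE_NAME=$1; SITE_HOST=$2; SITE_PORT=$3; MYSQL_PASSWORD=$4
  run() { if [ "${DRY_RUN:-0}" = "1" ]; then echo "$@"; else "$@"; fi; }
  run docker create --name "$SITE_NAME-mysql-data" -v /var/lib/mysql mysql
  run docker run --name "$SITE_NAME-mysql" --volumes-from "$SITE_NAME-mysql-data" \
      -e "MYSQL_ROOT_PASSWORD=$MYSQL_PASSWORD" -e "MYSQL_DATABASE=wordpress" -d mysql
  run docker create --name "$SITE_NAME-wordpress-data" -v /var/www/html ubuntu
  # (download WordPress into the data container here, as shown above)
  run docker run --name "$SITE_NAME-wordpress" --volumes-from "$SITE_NAME-wordpress-data" \
      --link "$SITE_NAME-mysql:mysql" -p"$SITE_PORT":80 \
      -e "VIRTUAL_HOST=$SITE_HOST" -e "VIRTUAL_PORT=$SITE_PORT" -d antfie/wordpress
}
DRY_RUN=1 create_site demo demo.example.com 8081 secret
```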

Building a new image on a Raspberry Pi

Even though there are many tutorials on how to initially set up a Raspberry Pi, I still found a lot of them skipped steps or assumed too much knowledge. Since I tend to skitter between projects and come back to them months or years later, I figured I’d write this all down.

This is all based on a Raspberry Pi 2, but the majority of the steps should be similar for other versions; YMMV.

Raspberry Pi 2

Originally, my Pi had come with a 2 GB microSD card, but I found that way too small to work with. Additionally, I wanted to have multiple different projects on the go, so I wanted an easy way to switch between them. Therefore, I bought a 32 GB SanDisk microSD from the local Costco store for $30 CAD.

SanDisk 32GB microSD

They’ve got a good video on the Raspberry Pi website for the basic install

But I’ll jot down the key things here (additionally, I had to do a number of things to properly configure for a non-UK environment).

To format the SD card, download and install the SD Card Formatter software from the SD Association:

Put the card in the computer and format it. NOTE: A number of sites suggested that FORMAT SIZE ADJUSTMENT be set to ON. Others didn’t mention it. I did change the adjustment, and it seems to work fine.


I downloaded the NOOBS image from the Raspberry Pi website:

and then simply unzipped the contents into the root of the freshly formatted SD Card.

After installing the card into the Raspberry Pi, and attaching all the cables to the keyboard, mouse, monitor and Ethernet, as well as power, it booted to the NOOBS OS selection page.


I don’t recommend choosing the Data Partition along with the Raspbian image. Every time you run the NOOBS installer again, it will wipe the Data Partition. There seems to be a lot of confusion about the real ‘point’ of this option. As far as I can tell, it’s used if you have multiple OSes installed at the same time and want a shared partition between them.


Additionally, by default, the Raspberry Pi is configured for UK usage, which includes a UK keyboard. A quick shortcut to making it work properly is to change the language during the install process (in my case to English (US) and a US keyboard).

The install of the OS takes about 10 minutes.

After a reboot, it drops into the raspi-config screen. I usually do a couple of changes to make the environment a little more tailored for me.

  1. Change User Password
    It’s amazing how many people leave the default password of raspberry. This creates a huge security problem. Please change the password to something more secure.
  2. Internationalization Options
    1. Change Locale
      I usually remove the en_GB locale and add the en_US locale. Only having one locale selected speeds up the installation of many software packages, as well as reducing the amount of disk space consumed by each locale. NOTE: I always pick the UTF-8 variant, since not only does it enable good internationalization support, it also allows for much better terminal graphics support. Additionally, I set the only locale (en_US) as the default.
    2. Change Timezone
      I change the timezone to the appropriate one for me (America -> Vancouver)
  3. Finish

Now that raspi-config is finished, I usually force a reboot. Sometimes the I18N features don’t fully work until after a reboot.

sudo reboot

I usually do a full update cycle to make sure that all software packages are up to date.

sudo apt-get update

Followed by an upgrade to install any updated packages

sudo apt-get upgrade

At this point, the Raspberry Pi is ready to go.