

Author Archive

New Apple MacBook Pro RAM is soldered to the motherboard

June 13th, 2012 No comments

The new Retina MacBook Pro announced at WWDC 2012 looks really nice:

  • 2880-by-1800 resolution at 220 pixels per inch with support for millions of colors
  • 2.6GHz quad-core Intel Core i7 processor (Turbo Boost up to 3.6GHz) with 6MB shared L3 cache
  • 512GB SSD (with 768GB option)

However, I think this is a bit sly of them – they’ve soldered the RAM to the motherboard, so not only can you not use cheaper third-party RAM, you can’t upgrade the RAM at all.

It stands to reason, given they do similar on the iOS products – the iPhone, iPad and iPod all have fixed storage and no way of using SD cards or USB sticks, or of changing the battery, unlike competitors’ products – but RAM has always been upgradable.

You can see it in this image on the Apple site:

Categories: Apple

My PHPNE Talk on Vagrant

May 22nd, 2012 No comments

I’m not a public speaker. In fact, I’m normally found either sitting at the back behind a sound desk or running round fixing technical problems.

However, Anthony Sterling approached me the week before the May 2012 PHP North East meetup and asked if I’d do a talk on Vagrant.

In order to give something back to the group, and knowing Vagrant is a product I really like and could come up with plenty of content about, I agreed.

I titled the talk Virtualized Development Environments with Vagrant so I could go into the background and why you would want to use such a tool, as well as how to use it. I also gave a brief introduction to Infrastructure as Code and Configuration Management with Puppet and Chef.

The full contents were:

  • Introduction to:
    • Development Environments
    • Virtualization
  • Why virtualize your development environment?
  • Introduction to Vagrant
  • Using Vagrant
  • Vagrant Plugins
  • Automated Provisioning (Configuration Management)
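
To give a flavour of the “Using Vagrant” part, the basic workflow I walked through looks something like this (a rough sketch – the base box name and URL are just the standard example ones, not anything specific from the talk):

# install Vagrant (at the time, distributed as a Ruby gem)
gem install vagrant
# fetch a base box and create a Vagrantfile in the current project directory
vagrant box add lucid32 http://files.vagrantup.com/lucid32.box
vagrant init lucid32
# boot the VM, SSH into it, then throw it away when you're done
vagrant up
vagrant ssh
vagrant destroy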

Watching it back, my public speaking and presentation skills do need quite a bit of work. However, given that it was a first attempt at public speaking, which does not come naturally to me, hopefully it was a decent effort. The feedback was good – plenty of people had positive things to say, and there were questions and further discussion in the pub afterwards, which was nice. I now also have some points to improve on.

The talk was recorded and is available on Vimeo:

And the slides are available on Slideshare or SpeakerDeck:

How GitHub Uses GitHub to Build GitHub

January 25th, 2012 No comments

I wrote a post a while back linking to an interesting video about the culture at GitHub, entitled: Optimizing for Happiness – why you want to go work at Github!.

Since then, I’ve watched a few other interesting talks about the culture and how they work at GitHub, and two in particular are worth noting here.

Firstly, Zach Holman, one of the early “GitHubbers”, recently gave a talk entitled “How GitHub Uses GitHub to Build GitHub”:

Build features fast. Ship them. That’s what we try to do at GitHub. Our process is the anti-process: what’s the minimum overhead we can put up with to keep our code quality high, all while building features as quickly as possible? It’s not just features, either: faster development means happier developers. This talk will dive into how GitHub uses GitHub: we’ll look at some of our actual Pull Requests, the internal apps we build on our own API, how we plan new features, our Git branching strategies, and lots of tricks we use to get everyone – developers, designers, and everyone else involved with new code. We think it’s a great way to work, and we think it’ll work in your company, too.

You can watch the video here and also check out a series of blog posts he wrote on the same subject.

The second talk I’d recommend is one I had the pleasure of seeing live at a local conference I’ve attended (the DIBI Conference). It’s by Corey Donohoe (@atmos):

The talk will cover the metrics driven approach GitHub uses to analyze performance and growth of our product. It will cover deployment strategies for rapid customer feedback as well as configuration management to ensure reproducibility.

You can watch the video here.

Both are great talks and well worth a watch.

Categories: Interesting

Introducing the Nanode RF and Wi-Node

November 10th, 2011 No comments

In my previous post, I introduced the Nanode - a low cost, internet connected Arduino board.

I mentioned at the end of the post that Ken had been working on two new products – the Nanode RF and the Wi-Node and I wanted to go into more detail about those here.

Firstly, the Nanode RF – discussed by Ken here, here and here. The Nanode RF is an evolution of the existing Nanode 5 board, featuring a number of improvements and additional features. It will be available from December 2011 for around £30 to £35 depending on the build option (around £30 for the basic kit including the RFM12B module and SRAM, going up to around £35 with the RTC, micro SD slot and super capacitor).

Ken has also announced that the Nanode RF PCB will at some point start to be sent out in all basic Nanode kits, as it contains a number of improvements and the only thing it lacks over the Nanode 5 is the screw terminals for easy connection of external power and serial – both of which are available elsewhere on the board. Users can then, at a later date, choose to effectively upgrade their Nanode to a Nanode RF by adding the RF module and/or the other optional features of the Nanode RF.

The Nanode RF brings the following changes/improvements:

  • Four better-spaced mounting holes.
  • Fully sealed vias for better soldering – less chance of solder shorts.
  • Improved screenprint for better identification of connections.
  • Extra LED – for monitoring RF activity – or whatever.
  • 3V3 operation – but retains 5V compatibility for use with Arduino shields.
  • Mini-B USB connector for powering at 5V.
  • Removal of screw terminals.

The Nanode RF brings the following new features:

  • A Hope RF RFM12B transceiver for 2-way communications with other boards.
  • A micro SD card slot for general datalogging and for storing applications and webpages.
  • A realtime clock IC with alarm function, which also holds a unique ID – or MAC address.
  • An 8 pin socket (under the H logo) to allow you to add non-volatile RAM for program download.
  • An 8 pin SOIC footprint to accept an alternative memory device – instead of the micro SD card.
  • Super capacitor for maintaining SRAM and RTC non-volatility.

I was lucky enough to get a pair of Nanode RF prototype boards and have documented the full build process in a Nanode RF Pictorial Build Guide. You can see the photos I used to create the build guide here. I’ve also started collecting as many links and as much information about both the Nanode and Nanode RF as possible on my Nanode Information Page.

 

The second new product is the Wi-Node (short for Wireless Node) which you can read more about here.

The Wi-Node is a dual-purpose product. It can either be a “backpack” to extend the functionality of a Nanode 5 board, or it can be used as a remote wireless node which can communicate with either a Nanode 5 with a Wi-Node connected or a Nanode RF.

The Wi-Node includes:

  • ATmega microcontroller 16MHz
  • 868MHz wireless transceiver Hope RF RFM12B (433MHz or 915MHz as options)
  • 32K x 8 SRAM with super capacitor for non-volatile backup
  • Real Time Clock with super capacitor for non-volatile backup – using the cool Microchip MCP79411, which contains a unique ID – i.e. a MAC address
  • Micro SD card for datalogging
  • 4 analogue/digital inputs – tolerant to 16V
  • 4 high current drive outputs – 1000mA for motors, relays, steppers etc.
  • Analogue inputs and digital drives brought out to 3.5mm pitch screw terminals
  • Serial interface/expansion/programming port
  • Battery operation where needed (2x AA, 3V, 2500mAh)
  • 62 x 23 x 103 mm case
  • 5V Solar power option
  • Compatible with Nanode, Arduino and shields

The initial Wi-Node PCBs should be arriving shortly and I’ll be writing up a pictorial build guide as soon as I receive some. The actual Wi-Node kits will be available from December, priced at around £17.50. The basic kit will include all the standard build components and an RFM12B RF module. Optional extras of the RTC, SD card socket and motor drive will be available for around an extra £7.50.

 

Another useful extra arriving soon is a low-cost Nanode branded programming adapter which saves buying an expensive FTDI cable.

Categories: electronics

Nanode – a low cost, internet connected Arduino board

October 28th, 2011 No comments

Back in August 2010, an Electronics Engineer from the UK called Ken Boak wrote that he had built a web connected Arduino for £12 and later shared the schematic.

These posts became the basis of a whole new product he went on to create, known as the “Nanode”. Nanode is short for “Network Application Node”; it is an open source, low cost Arduino board with built-in Ethernet courtesy of the Microchip ENC28J60 chip. Over 1,000 Nanodes have been sold in the past few months.

The Nanode consists of a standard Arduino circuit with an ATmega328 IC, 16MHz crystal and supporting circuitry, plus a Microchip ENC28J60, 25MHz crystal, MagJack and supporting circuitry, and a 74HC125 IC to shift the 3.3V levels coming from the ENC28J60 up to the 5V levels used by the ATmega328.

The Nanode is fully compatible with the Arduino and fully supports both the Arduino software and expansion boards, which are called “shields”. It also supports memory expansion using SPI or I2C devices, has a handy port for connecting the RFM12B wireless RF modules and a serial bus for interconnecting multiple Nanodes on a wired network, and can be powered via the serial programming header, USB or a standard DC power supply using the on-board 7805 voltage regulator.

Pictured below are my two completed Nanodes (one of the original green boards and one of the newer red boards), along with an un-built kit.

The Nanode is available as a build-it-yourself kit for around £20 from the following retailers:

As the Nanode is ideal for reading data from sensors and uploading it to the internet, Pachube – a site dedicated to that task – is offering a free upgrade to its Pro tier for Nanode users.

Here are some useful links relating to the Nanode:

 

On the back of the success of the “Nanode 5”, Ken has recently been working on two new Nanode boards which are scheduled for general availability at the end of the year:

  • The Nanode RF
  • The Wi-Node
These are discussed in the following blog post: Introducing the Nanode RF and Wi-Node.
Categories: electronics

Fixing the Ubuntu “Unable to locate tools.jar” Error

June 14th, 2011 No comments

When trying to use ant on Ubuntu (in either 10.04 or 10.10), I get the following error:
Unable to locate tools.jar. Expected to find it in /usr/lib/jvm/java-6-openjdk/lib/tools.jar

Turns out the fix is quite simple:

Edit /etc/apt/sources.list, find lines similar to the following and uncomment them (obviously 10.04 will have lucid instead of maverick):
deb http://archive.canonical.com/ubuntu maverick partner
deb-src http://archive.canonical.com/ubuntu maverick partner

sudo apt-get update && sudo apt-get install sun-java6-jdk
sudo update-alternatives --config java

Pick the option for /usr/lib/jvm/java-6-sun/jre/bin/java (which was number 2 on mine), rather than the OpenJDK default.
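
To confirm the Sun JDK is now the one being used, a quick check along these lines should do it:

# should now report the Sun JVM rather than OpenJDK
java -version
# and this should resolve to the java-6-sun path selected above
readlink -f $(which java)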

You now shouldn’t get that warning/error.

Categories: Linux

Creating a Mac OS X 10.7 Lion Install DVD

June 11th, 2011 No comments

I assume anyone that cares will be aware of the announcements from this year’s WWDC Keynote (video).

The first of these big announcements was regarding the next version of OS X, Apple’s operating system for the Mac, code-named Lion (version 10.7). Lion will be available in July, priced at $29.99 and, for the first time, only available in the Mac App Store.

Shipping the upgrade in the App Store allows Apple to keep the cost down and provides a very easy way of obtaining the update. What is also neat is that, as with other App Store applications, you are permitted to purchase it once and install it on any number of Macs that you own and have linked to your Apple account.

My instant reactions to it ONLY shipping through the App Store (as opposed to offering it through the App Store but also providing the ability to purchase DVDs from an Apple Store or the Apple site) were:

What if you want to do a clean installation?

If you are re-installing in the future, do you need to install Snow Leopard and then upgrade to Lion all over again?

What happens if you don’t have an internet connection capable of downloading 4GB? (or have a non-internet connected Mac?)

What if you have a Mac without Snow Leopard and the App Store?

Will you be able to create a DVD somehow?

Thankfully, even before it’s released, someone has already published a very easy way of creating a Mac OS X Lion DVD on any Lion compatible Mac running Snow Leopard.

  1. Purchase and download Lion from the Mac App Store.
  2. Right click on “Mac OS X Lion” installer and choose the option to “Show Package Contents.”
  3. Inside the Contents folder that appears you will find a SharedSupport folder and inside the SharedSupport folder you will find the “InstallESD.dmg”. This is the Lion boot disc image.
  4. Copy “InstallESD.dmg” to another folder like the Desktop.
  5. Launch Disk Utility and click the burn button.
  6. Select the copied “InstallESD.dmg” as the image to burn, insert a standard sized 4.7 GB DVD and wait.
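
If you’d rather skip the Disk Utility GUI, the same burn should (I believe) be possible from the command line – a quick sketch, assuming the image was copied to your Desktop as in step 4:

# burn the Lion installer image to the blank DVD in the default optical drive
hdiutil burn ~/Desktop/InstallESD.dmg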

I’ve not tried this as I don’t have access to the Lion beta, but it looks sane and shows that (assuming nothing changes before the final release) it will be possible. This is good news for me as I’m much more of a clean-install guy than an upgrade guy.

I also assume that if Apple have made this so easy to do, they are OK with it – and after all, you are still genuinely purchasing the upgrade.

Categories: Apple

The Lowdown on the UK Mobile Networks

May 13th, 2011 4 comments

In the UK, we have 5 main mobile network providers (others simply piggy-back on one of these networks):

I’ve recently spent some time considering which network to move to next and thought I’d share my thoughts and experience.

Background:

For many years (probably approaching 10 at a guess), I stuck with Orange – upgrading to a new handset each year in exchange for signing another 12-month contract. Once I got married and moved, the signal where we lived was poor and I eventually left Orange for 3UK, who I used for several years until 2009, when I moved to O2 to get an iPhone, as they had the exclusive supply in the UK. At that time, I had the choice of an 18-month or 24-month contract and I opted for the 24-month to keep the cost down and stay in sync with the iPhone release schedule. In 2010, with the release of the iPhone 4, I sold my iPhone 3GS, bought a SIM-free, unlocked iPhone direct from Apple and continued with the O2 contract. In this time I’ve also had various other, secondary devices on various networks for various purposes or to take advantage of certain deals.

This brings us up to today, where I will soon enter the last month of the O2 contract and I am keen to move away from O2 to another network. The reason for this is almost purely their (lack of a) 3G network. As far as coverage goes, they are excellent – it’s very unusual for my phone to be out of service anywhere. The problem is, I use my phone a lot for data and it’s unusual to see a 3G signal unless I’m in a big city – and even then, I’ve been in both London and Middlesbrough and not had a 3G signal indoors. The other minor consideration is that they changed their tariffs last year, removed unlimited data and started charging for MMS (picture/video) messages, which were previously charged as 4x text messages taken from the available allowance.

My thinking is as follows – I’ll do some testing with Vodafone and 3UK (it’ll become apparent below why these are my choices) using Pay-As-You-Go SIM cards and then sign a rolling 1-month SIM-only contract with my preferred choice. I’ll use the period until the next iPhone is released to fully evaluate the network and ensure I’m happy before committing to a 24-month contract in exchange for the latest iPhone. There are a lot of rumours this year that the tradition of a new iPhone every June will be broken and it will arrive somewhere around September, so I may have plenty of time to evaluate my decision!

For the past few weeks, I’ve been carrying two Nokia 6120 Classic phones, one with a Vodafone SIM and the other with a 3UK SIM, in addition to my iPhone on O2, and comparing signal. I’ve compared signal across my own area of Teesside, but also on a weekend trip to Worcester in the Midlands and another weekend trip up to Hexham, in order to gauge the relative signal strengths in different places.

Firstly, a look at the different networks:

O2:

O2 (formerly Cellnet, or BT Cellnet) is, along with Vodafone, one of the oldest of the UK networks. They transmit on 900MHz (GSM) for their 2G network and, as with all of the UK networks, 2100MHz for 3G. As I’ve already covered, their 2G network is excellent and covers pretty much all of the UK, but as you can see in the 2009 Ofcom report, their 3G coverage is the worst of all of the providers and is largely focused on cities and large population areas. Being on 900MHz for 2G means they get greater range and so can achieve good coverage with fewer, more spread-out masts, as opposed to Orange and T-Mobile, who use the 1800MHz (PCN) band and require cell sites closer together. 900MHz also by nature gives better penetration through walls and buildings than 1800MHz (PCN) and 2100MHz (3G).

Orange:

Orange run a 2G network on 1800MHz (PCN) and a 3G network on the standard 2100MHz. Looking at the coverage maps, they would appear to have the second-best 3G coverage, behind 3UK. However, my wife has an HTC Desire on Orange and quite regularly she will have a good 3G signal but be unable to get any kind of data connection – a great source of frustration. At first, I wasn’t sure whether this was the network, the phone or the settings, but several other Orange users I’ve spoken to have exactly the same problem, which seems to be caused by network overloading. Reading around various forums, lots of people around the country seem to have the same problem and some describe the Orange data network as ‘on its knees’.

In 2010, Orange and T-Mobile merged, joining forces under the banner of ‘Everything Everywhere’. Shortly after this, they announced that they had enabled roaming between their 2G networks. This means that Orange customers will roam onto T-Mobile’s 2G network if they are outside of Orange coverage and T-Mobile customers will roam onto Orange’s 2G network if they are outside of T-Mobile coverage. I also believe that Orange Mobile Broadband dongles now use the T-Mobile 3G network and backhaul rather than Orange’s. They have announced that a full network share (which will include 3G and allow connection to the strongest available signal, rather than roaming only when out of service) will come later. This latter part is interesting, as I will cover under T-Mobile below.

T-Mobile:

T-Mobile run a 2G network in the 1800MHz (PCN) band. For their 3G network, they jointly own (50/50) a company called MBNL (Mobile Broadband Network Ltd) with 3UK. Under this agreement, they share cell sites, towers and (I think) antennas using something called RAN (Radio Access Network) sharing, but then have their own backhaul connectivity – either via fibre or via directional microwave links daisy-chained on to another, nearby cell tower. They therefore have pretty much the same 3G coverage as 3UK (although I believe that 3UK’s backhaul is faster, giving better data speeds, this will of course vary depending on location).

What’s interesting here, though, is that as I mentioned above, T-Mobile and Orange are now part of the same group, and they are looking to merge their 3G networks. How I believe this will work is that the MBNL sites will be enabled as part of the Orange network, they will shut off all of the Orange sites which are the same as or close to MBNL sites, and they will contribute their remaining (around 3,000, I believe) sites to the MBNL network. This means 3UK will have the option of running their backhaul links to those sites and gaining the additional benefit of those sites on their network. Once all completed and enabled, these sites will transmit Orange, T-Mobile and 3UK signal through common antennas (but with their own equipment and backhaul). This will mean that, in theory, all 3 networks should have the same coverage.

Three (or 3UK):

3UK is run by Hutchison Telecommunications, who used to own Orange before it was sold to France Telecom. As they are a newer company compared to the other networks, they decided to only build a 3G network, and they capitalised on this early on by pushing ‘video calling’ as their unique selling point, as well as offering very generous call and text allowances. As I covered above, their 3G sites are deployed by MBNL, which is jointly owned with T-Mobile, but I believe they do deploy some of their own, 3-only sites to increase capacity (more than coverage) in certain areas.

As they don’t have their own 2G network, and because of the higher frequency of 3G, it’s much harder for them to get complete coverage: masts have to be closer together, and 3G also requires faster data connectivity, which I assume is tricky in rural areas. To ensure good coverage, they agreed a roaming deal with O2. In 2006, a deal was done with Orange and 2G roaming migrated from O2 to Orange (although for a while it worked on both at the same time!). What this means is that if the phone is unable to obtain a 3UK (3G) signal, it will use Orange’s (or originally O2’s) 2G network. This 2G roaming is generally a good thing, because it gives you the ability to make and receive calls and text messages (and very slow data) while you would otherwise be out of service.

However, there are a few quirks with this. Firstly, if you are in an area that has a very weak 3 (3G) signal but a strong Orange (or previously O2) 2G signal, the phone will prefer and stick to the 3G signal – so at times you actually got a better signal for calls by going (further) inside, covering the phone or doing other things which would usually reduce the signal, because it caused the phone to lose the weak 3G signal and fall back to the stronger 2G signal. Phones also only re-scan the available signals every so often (usually after a few minutes of inactivity), so it’s possible that if you lose the 3G signal momentarily and the phone picks up the 2G network (especially as it scans in sequence, so it depends where in the sequence you catch it), it will hold on to it until the next check. This, along with people who disable 3G on their phones, means that a lot of unnecessary calls go through the 2G network while in 3G coverage areas – and that obviously comes at a cost to 3UK, as they have to pay Orange for the calls carried over their network. For these reasons, they have started to turn off the Orange roaming in certain areas which they believe have adequate 3G coverage.

Vodafone:

Vodafone’s network is very similar to O2’s (in fact, I believe they do share some cell sites). They have an excellent 2G network on 900MHz (GSM) and, according to the coverage maps, they have better 3G coverage than O2 (but not as good as 3UK or Orange).

My Conclusions:

For the reasons I outlined above, my choice was to move to either Vodafone or 3UK. In my testing with the Nokia 6120 Classics on 3UK and Vodafone, the Vodafone signal was good, but it was still on 2G at times – whereas the 3UK signal was excellent. The 3UK phone was always in 3G coverage and quite often had a full signal, so was a clear winner on signal strength.

3UK were also a long way ahead in terms of the package – for £25/month on a 1-month SIM-only deal, or £35 with an iPhone, you can get their “One Plan”. This includes 2,000 cross-network minutes, 5,000 text messages, 5,000 3-to-3 minutes and unlimited data. Unusually, the unlimited data doesn’t have a fair usage limit and they make a big deal out of the fact that it is truly unlimited. They also allow tethering and use of the “Personal Hotspot” feature in the latest iPhone software. Vodafone, on the other hand, are the same price, but for £25 on a 1-month SIM-only deal you only get 600 minutes, unlimited texts and 1GB of data, and on an iPhone contract £35 gets you the same but with 500MB of data. Vodafone do include BT OpenZone Wi-Fi usage when near a hotspot, but they charge extra for tethering and personal hotspot, which is a shame.

On the strength of the above, and the fact that I’d had a good experience with them for several years in the past, I went ahead and ordered the 3UK SIM-only deal (through Quidco to get £50 cashback). This is, however, where things get a little strange. When the micro SIM arrived for the iPhone 4, the number of bars on the iPhone was clearly nowhere near that of the Nokia. I tried it for a few days and where the Nokia often had a full signal, the iPhone rarely did. It wasn’t that the iPhone was bad at receiving a signal, because it was as good as the Nokia at picking up a very weak signal. There is a good article here about why bars don’t really matter – the bars are just a visual representation of the actual signal strength reading and there is no standard for this between manufacturers – but it just wasn’t right. The iPhone was quite often on 1 or 2 bars and speed tests varied a lot. This also matched the experience some friends have had using iPhones on 3UK.

Another consideration was that Vodafone is the only network that seems to work well in our house. Coverage on the other networks is patchy until you walk out and off down the street, where the signal jumps up.

The above problems, and the fact that 3UK are withdrawing the 2G roaming coverage (which can easily create gaps in coverage), have led me to go with Vodafone. The 3G coverage might not be as comprehensive, the deal not as good and there’s no tethering/personal hotspot without paying extra, but at least I know I should (just about) always have at least a 2G signal for calls and texts, even if I can’t get a 3G signal for data.

I’ve also already got a 3UK MiFi (mobile Wi-Fi) which I can continue to use for data as a backup, so going with a different provider saves me from finding an alternative to that, which I would have had to do if I’d gone with 3UK for the iPhone. Also, it’s only on a 1-month contract (and I got the first 2 months free), so I’ve got between now and when the next iPhone comes out to evaluate and change my mind…

Categories: Phones

Optimizing for Happiness – why you want to go work at Github!

April 28th, 2011 No comments

If you are a manager or high up in any company then I highly recommend you watch this video of a recent talk by Tom Preston-Werner, co-founder of Github. It’s around an hour in length but I urge you to take the time to watch it – it’s packed full of great advice all the way through.

The way traditional businesses approach the management and organization of creative, intellectual workers is wrong. By throwing away everything that blocks productivity (meetings, deadlines, managers, titles, strict vacation policies, etc) and treating your employees as the responsible adults that they are, huge amounts of potential can be unlocked and employee happiness and retention can be at unprecedented highs. At GitHub we’ve embraced a philosophy that gets things done and strips away policy and procedure in favor of smart decision making and personal responsibility. Come see how we make it work and how you can reap the same benefits in your own company.

The video goes into both how they recruit and how they run a profitable and productive company.

At GitHub we don’t have meetings. We don’t have set work hours or even work days. We don’t keep track of vacation or sick days. We don’t have managers or an org chart. We don’t have a dress code. We don’t have expense account audits or an HR department.

We pay our employees well and give them the tools they need to do their jobs as efficiently as possible. We let them decide what they want to work on and what features are best for the customers. We pay for them to attend any conference at which they’ve gotten a speaking slot. If it’s in a foreign country, we pay for another employee to accompany them because traveling alone sucks. We show them the profit and loss statements every month. We expect them to be responsible.

We make decisions based on the merits of the arguments, not on who is making them. We strive every day to be better than we were the day before.

We hold our board meetings in bars.

We do all this because we’re optimizing for happiness, and because there’s nobody to tell us that we can’t.

You can watch the video here.

Tell me now that you don’t want to work at Github?

Categories: Interesting

Amazon Web Services, Hosting in the Cloud and Configuration Management

April 23rd, 2011 No comments

Amazon is probably the biggest cloud provider in the industry – they certainly have the most features and are adding more at an amazing rate.

Amongst the long list of services provided under the AWS (Amazon Web Services) banner are:

  • Elastic Compute Cloud (EC2) – scalable virtual servers based on the Xen Hypervisor.
  • Simple Storage Service (S3) – scalable cloud storage.
  • Elastic Load Balancing (ELB) – high availability load balancing and traffic distribution.
  • Elastic IP Addresses – re-assignable static IP addresses for EC2 instances.
  • Elastic Block Store (EBS) – persistent storage volumes for EC2.
  • Relational Database Service (RDS) – scalable MySQL compatible database services.
  • CloudFront – a Content Delivery Network (CDN) for serving content from S3.
  • Simple Email Service (SES) – for sending bulk e-mail.
  • Route 53 – high availability and scalable Domain Name System (DNS).
  • CloudWatch – monitoring of resources such as EC2 instances.

Amazon provides these services in 5 different regions:

  • US East (Northern Virginia)
  • US West (Northern California)
  • Europe (Ireland)
  • Asia Pacific – Tokyo
  • Asia Pacific – Singapore

Each region has its own pricing and available features.

Within each region, Amazon provides multiple “Availability Zones”. These different zones are completely isolated from each other – probably in separate data centers, as Amazon describes them as follows:

Q: How isolated are Availability Zones from one another?
Each availability zone runs on its own physically distinct, independent infrastructure, and is engineered to be highly reliable. Common points of failures like generators and cooling equipment are not shared across Availability Zones. Additionally, they are physically separate, such that even extremely uncommon disasters such as fires, tornados or flooding would only affect a single Availability Zone.

However, unless you have been offline for the past few days, you will no doubt have heard about the extended outage Amazon has been having in their US East region. The outage started on Thursday, 21st April 2011, taking down some big-name sites such as Reddit, Quora, Foursquare and Heroku, and the problems are still ongoing now, nearly 2 days later – with Reddit and Quora still running in an impaired state.

I have to confess, my first reaction was one of surprise that such big names didn’t have more redundancy in place – however, once more information came to light, it became apparent that the outage was affecting multiple availability zones, something Amazon seems to imply above shouldn’t happen.

You may well ask why such sites are not split across regions to give more isolation against such outages. The answer to this lies in the implementation of the zones and regions in AWS. Although isolated, the zones within a single region are close enough together that low cost, low latency links can be provided between the different zones within the same region. Once you start trying to run services across regions, all inter-region communication goes over the normal internet and is therefore comparatively slow, expensive and unreliable, so it becomes much more difficult and expensive to keep data reliably synchronised. This, coupled with Amazon’s above claims about the isolation between zones and best practices, has led to the common setup being to split services over multiple availability zones within the same region – and what makes this outage worse is that US East is the most popular region, due to it being a convenient location for sites targeting both the US and Europe.

On the back of this, many people are giving both Amazon and cloud hosting a good bashing, both in blog posts and on Twitter.

Where Amazon has let everyone down in this instance is that they let a problem (which in this case is largely centered around EBS) affect multiple availability zones, thus screwing everyone who either had not implemented redundancy or had followed Amazon’s own guidelines and assurances of isolation. I also believe that their communication has been poor – had customers been aware it would take so long to get back online, they may have been in a position to look at measures to recover much sooner.

In reality though, Amazon and cloud computing have less to do with this problem – and more specifically with the blame associated with it – than people are making out. At the end of the day, we work in an industry that is susceptible to failure. Whether you are hosting on bare metal or in the cloud, you will experience failure sooner or later, and the design of any infrastructure needs to take that into account. Failure will happen – it’s all about mitigating the risk of that failure through measures like backups and redundancy. There is a trade-off between the cost, time and complexity of implementing multiple levels of redundancy versus the risk of failure and downtime. On each project or infrastructure setup, you need to work out where on this sliding scale you are.

In my opinion, cloud computing actually provides an easy way out of such problems. Cloud computing gives us the ability to quickly spin up new services and server instances within minutes, pay by the hour for them and destroy them when they are no longer required. Gone are the days of having to order servers or upgrades and wait in a queue for a data center technician to deal with the hardware; it used to be the norm to incur large setup costs and/or get locked into contracts. In the cloud, instances can be resized, provisioned or destroyed in minutes, often without human intervention, as most cloud computing providers also provide an API so users can manage their services programmatically. Under load, instances can be upgraded or additional instances brought online, and in quiet periods instances can be downgraded or destroyed, yielding a significant cost saving. Another huge bonus is that instances can be spun up for development, testing or to perform an intensive task and thrown away afterwards.
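
As an illustration of the kind of programmatic control I mean, launching and later destroying an EC2 instance with Amazon’s EC2 API tools looks roughly like this (a sketch – the AMI and instance IDs and the key pair name are just placeholders):

# launch a single small instance from a chosen machine image
ec2-run-instances ami-12345678 -n 1 -t m1.small -k my-keypair
# ...and terminate it again once it's no longer needed
ec2-terminate-instances i-87654321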

Being able to spin new instances up in minutes is, however, less effective if you have to spend hours installing and configuring each instance before it can perform its task. This is especially true if more time is wasted chasing and debugging problems because something was set up differently or missed during the setup procedure. This is where configuration management tools and the ‘infrastructure as code’ principles come in. Tools such as Puppet and Chef were created to allow you to describe your infrastructure and configuration in code and have machines or instances provisioned or updated automatically.

Sure, with virtual machines and cloud computing, things have got a little easier thanks to re-usable machine images. You can set up a certain type of system once and re-use the image for any subsequent systems of the same type. This is, however, greatly limiting: it’s very time-consuming to later update that image with small changes or to cope with small variations between systems, and it’s almost impossible to keep track of what changes have been made to which instances.

Configuration Management tools like Puppet and Chef manage system configuration centrally and can:

  • Be used to provision new machines automatically.
  • Roll out a configuration change across a number of servers.
  • Deal with small variations between systems or different types of systems (web, database, app, dns, mail, development etc).
  • Ensure all systems are in a consistent state.
  • Ensure consistency and repeatability.
  • Easily allow the use of source code control (version control) systems to keep a history of changes.
  • Easily allow the provisioning of development and staging environments which mimic production.
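
As a tiny taster of what this looks like in practice, a Puppet one-liner along these lines (just a sketch) declares that ntp should be installed and running – apply it to two servers or two hundred and they all converge on the same state:

# declare the desired state: the ntp package installed and its service running
puppet apply -e 'package { "ntp": ensure => installed } service { "ntp": ensure => running, require => Package["ntp"] }'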

As time permits, I’ll publish some follow-up posts which go into Puppet and Chef in more detail and look at how they can be used. I’ll also be publishing a review of James Turnbull’s new book, Pro Puppet, which is due to go to print at the end of the month.

Categories: Web