Saturday, March 14, 2015

Yeoman is a tool that provides a scaffolding system to begin new projects. The genius thing about Yeoman is that, by itself, it doesn’t know how to do anything. This flexibility comes from a modular approach that relies on separate generator modules. Each generator knows how to create a particular kind of project (e.g. a Backbone.js app or a Chrome extension).

ReSpec is a JS library written by Robin Berjon that makes it easier to write technical specifications, or other documents that tend to be technical in nature. It was originally designed for writing W3C specifications but has since grown to support other outputs as well. One of the best things about ReSpec is its intrinsic understanding of WebIDL. You can outline the design for a new API and it makes it very easy to fill in the description of what the methods and properties do. It also makes it easy to refer to other specs using the SpecRef database.

Bringing these two together, I have created a Yeoman generator called generator-respec that outputs a basic ReSpec document.

Assuming you already have node and npm installed, you can install Yeoman with the command npm install -g yo. After that you should install the ReSpec generator with npm install -g generator-respec.

Now that you have the tools installed, create a new folder to hold your specification and, from a command prompt in that directory, run yo respec. This will prompt you for a title, short-name, spec status, and author information and then create a new index.html document containing an outline specification using ReSpec. From here you can edit your spec using the ReSpec documentation as a guide.
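
For a sense of what you get, the generated index.html loads the ReSpec script and includes a configuration block built from your answers. The generator’s exact output may differ from version to version, but the configuration will be something along these lines (the values shown here are only illustrative):

var respecConfig = {
  // Filled in by the generator from your prompt answers
  specStatus: "unofficial",   // the spec status you chose
  shortName: "my-spec",       // the short-name you entered
  editors: [{
    name: "Your Name",
    company: "Your Company"
  }]
};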

The current implementation of generator-respec is very basic. I’m sure there are some obvious things that can be added. One idea I have is to support a subgenerator that creates related specs in the same folder. What else should be added? The generator-respec project is available on GitHub.

Thursday, March 12, 2015

Yesterday, my esteemed (new) colleague Aaron Gustafson wrote a piece about his reaction to the “Break Up with Internet Explorer 8” site currently doing the rounds in the Twittersphere. He argues for support of older browsers and optimisation for newer, better browsers. I disagree.

Some people don’t have control over their browsing environment. Some people can’t afford to upgrade to a more recent version of Windows because of business software that is expensive to move forward. All of this is true, but being stuck on IE8 isn’t the common case any longer.

Even Microsoft isn’t going to support IE8 customers after January 2016 and you shouldn’t either. There will be no more security updates for IE8 after that[1]. We all need to move on and we need to continue to encourage organisations to get to IE11 and deploy Enterprise Mode for their legacy applications.

Progressive enhancement is a good goal and something that we should aim for with today’s modern browsers. IE11 has good feature coverage and the new Microsoft EdgeHTML rendering engine that will be used by “Project Spartan” goes considerably further. All the popular browsers are adding lots of new features (you can track the IE ones on the IE platform status page) and we should make our apps light up in the face of new capabilities. Feature detection is king.

But IE8 is old. It didn’t have support for the old, old DOM standards like Core, HTML, Style, Events, etc. You even have to polyfill addEventListener, for goodness’ sake. Yes, this is all possible (maybe using abstractions like jQuery 1.x) but why should we continue to do this for new work? Why continue to bloat the web for an audience that is shrinking ever faster? Most enterprises we engage with are rushing to get to IE11 before the support policy change comes into effect.
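
To make that concrete, here is roughly the kind of shim IE8 forces on you just to get standard event wiring. This is a minimal sketch only; real polyfills also patch elements and documents, fix the value of this in the listener, and so on.

// Minimal sketch of an addEventListener shim for old IE
if (!window.addEventListener && window.attachEvent) {
  window.addEventListener = function (type, listener) {
    window.attachEvent('on' + type, listener);
  };
}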

There are two types of web developers in the world. Those that are building and maintaining legacy systems for enterprises that may well have to support IE8 and probably don’t have to worry too much about modern browsers. And those who are targeting Chrome, Firefox, Safari, Opera, and modern IE. This latter category should target nothing older than IE9 and, given that IE11’s share has been bigger than IE9 and IE10 combined for almost a year, I argue that you might just support IE11.

Does this apply to everyone? No, of course not. Is this too simplistic? Yes. Should you just cut people off tomorrow? No, again, of course not. But have a transition plan, let customers know what it is, and then move to a world where you don’t worry about old legacy IE. That’s the kind of web developer I want to be. Despite helping to bring IE8 to life, I’ve broken up with it.

[1] Unless you have a commercial relationship with Microsoft to provide these, and you are required to be executing on your plan to get off IE8.
Tuesday, May 20, 2014

My last post described how to acquire Twitter OAuth keys and tokens to allow you to use Twitter’s API to access Twitter feeds. I showed how to use the request module with node.js, which has built-in support for OAuth, to request and process data.

In this blog post I will show how to do the same thing using C# and .NET, building on the commonly used OAuthBase helper class (OAuthBase.cs) for OAuth 1.0 signing.

Let’s start with the code to call the Twitter API:

using System;
using System.IO;
using System.Net;
using System.Text;
using OAuth;

class App {
    static void Main() {
        // URL for the API to call
        string url = ""
            + "?screen_name=adrianba&count=5";

        // Create a http request for the API
        var webReq = (HttpWebRequest)WebRequest.Create(url);

        // Set the OAuth header
        var auth = new OAuthHeader();

        // Echo the response to the console
        using(WebResponse webResp = webReq.GetResponse()) {
            using(StreamReader sr = new StreamReader(
                    )) {

The code here is similar to the previous post. It creates an HTTP request to the API endpoint and this time simply writes the response to the console. The difference is that we need to add the OAuth Authorization header to the request. The magic takes place in the getHeader() method:

class OAuthHeader : OAuthBase {
    public string getHeader(string url,string method) {
        string normalizedUri;
        string normalizedParameters;

        // OAuth keys – FILL IN YOUR VALUES HERE (see the previous post)
        const string consumerKey = "...";
        const string consumerSecret = "...";
        const string token = "...";
        const string tokenSecret = "...";

        // Create timestamp and nonce for this request
        string timeStamp = GenerateTimeStamp();
        string nonce = GenerateNonce();

        // Generate signature for the header
        string signature = GenerateSignature(
            new Uri(url), consumerKey, consumerSecret, token, tokenSecret,
            method, timeStamp, nonce, out normalizedUri, out normalizedParameters);

        // Compose the authorization header value
        // See RFC 5849, section 3.5.1, for the Authorization header format
        StringBuilder auth = new StringBuilder();
        auth.Append("OAuth ");
        auth.AppendFormat("{0}=\"{1}\", ", OAuthConsumerKeyKey, UrlEncode(consumerKey));
        auth.AppendFormat("{0}=\"{1}\", ", OAuthNonceKey, UrlEncode(nonce));
        auth.AppendFormat("{0}=\"{1}\", ", OAuthSignatureKey, UrlEncode(signature));
        auth.AppendFormat("{0}=\"{1}\", ", OAuthSignatureMethodKey, "HMAC-SHA1");
        auth.AppendFormat("{0}=\"{1}\", ", OAuthTimestampKey, timeStamp);
        auth.AppendFormat("{0}=\"{1}\", ", OAuthTokenKey, UrlEncode(token));
        auth.AppendFormat("{0}=\"{1}\"", OAuthVersionKey, "1.0");
        return auth.ToString();
    }
}

The OAuthHeader class inherits from the OAuthBase class mentioned above, which provides the GenerateTimeStamp, GenerateNonce, and GenerateSignature methods. Twitter uses the OAuth 1.0a protocol, defined in RFC 5849. This specification defines the Authorization header value that is constructed at the end of the getHeader() method.
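
For reference, the header that ends up on the wire looks something like the following (the values are placeholders; yours will differ):

Authorization: OAuth oauth_consumer_key="...", oauth_nonce="...",
    oauth_signature="...", oauth_signature_method="HMAC-SHA1",
    oauth_timestamp="...", oauth_token="...", oauth_version="1.0"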

You can keep the OAuthHeader class around for easy access to resources needing OAuth authorization such as Twitter.

Monday, May 19, 2014

Following on from my last post that described using Node to access feeds from Delicious, I’ve also been investigating how to access my Twitter feed. This adds a little more complexity because Twitter requires that your app or script authenticate to Twitter using OAuth.

Per Wikipedia, "OAuth provides client applications a 'secure delegated access' to server resources on behalf of a resource owner. It specifies a process for resource owners to authorize third-party access to their server resources without sharing their credentials." What this means is that your app can access the Twitter API in an authenticated way using OAuth without having to embed your username and password into the script.

The node.js request library that I mentioned last time has built-in support for OAuth authentication. It requires that you populate a JavaScript object as follows:

var oauth = {
   consumer_key: CONSUMER_KEY
  , consumer_secret: CONSUMER_SECRET
  , token: OAUTH_TOKEN
  , token_secret: OAUTH_TOKEN_SECRET
};

CONSUMER_KEY, CONSUMER_SECRET, OAUTH_TOKEN, and OAUTH_TOKEN_SECRET are strings that we must supply as part of the OAuth handshake.

There are two ways to think about using OAuth to authenticate against a service such as Twitter, depending upon the type of app that you are building. The first scenario is where, for example, you are building a Twitter client. You will distribute this application and each user will authenticate with their own credentials so that they can access information from the service as themselves. In the second scenario, you are building an application or service that always accesses the service as you, so it never needs to handle anyone else’s credentials. For example, say you are building a widget on your web site that indicates how long it has been since you last tweeted. This will always be about you and needs only your credentials.

The CONSUMER_KEY and CONSUMER_SECRET values are provided by the service to identify your application. The OAUTH_TOKEN and OAUTH_TOKEN_SECRET represent the credentials of the user accessing the service. They may be determined and stored by your app in the first scenario above or they may be part of your application in the second.

This all sounds a little complicated, so an example will help. Before we get to that we need to get the values. Twitter provides a developer portal for this. If you log in and select Create New App you will see a form for registering a new application.

Here you provide the name of your application, a description, and a link to your web site. For our initial scripting purposes the values here don’t matter too much. There is a Callback URL value but we don’t need this now and can leave it blank. Finally there are some terms and conditions to read and agree to. Once you have completed this form, press the Create your Twitter application button.

If you click on the API Keys tab you will see the API key and API secret for your new application.

Since we want our script to access Twitter using our own account, we can click on the Create my access token button to generate the appropriate token values.
You may need to refresh to see your new access token.

So now you have four strings: API key, API secret, Access token, and Access token secret. These map to the four values needed in the OAuth structure described in the code above.

There are lots of different ways to access the Twitter API. Here I am simply going to use the user_timeline API to retrieve the 5 most recent tweets from my timeline. You can use this API to retrieve any user’s timeline that you have access to from your Twitter account (including, of course, all the public timelines).

So here is the code to make a request to this API:

var request = require('request');

// user_timeline endpoint: my 5 most recent tweets
var url = "https://api.twitter.com/1.1/statuses/user_timeline.json?screen_name=adrianba&count=5";

var CONSUMER_KEY = "...";
var CONSUMER_SECRET = "...";
var OAUTH_TOKEN = "...";
var OAUTH_TOKEN_SECRET = "...";

var oauth = {
    consumer_key: CONSUMER_KEY
  , consumer_secret: CONSUMER_SECRET
  , token: OAUTH_TOKEN
  , token_secret: OAUTH_TOKEN_SECRET
};

request.get({url:url, oauth:oauth, json:true}, function (e, r, data) {
  var tweets = [];
  data.forEach(function(item) {
    var tweet = {}; =;
    tweet.text = item.text;
    tweet.created = item.created_at;
    tweets.push(tweet);
  });
  console.log(JSON.stringify(tweets, null, 2));
});

You obviously need to replace the "..." strings with the values copied from your API Keys page.

The important addition in this code is passing the oauth option into the get() method. After this, the request module takes care of the rest. In general, all services that require OAuth authentication follow this pattern although they will differ in how the keys and tokens are issued to you.

Friday, May 16, 2014

In my last post, I wrote about using node.js as a scripting tool. Node has lots of good libraries for making network requests and processing the results. request is one of the most popular HTTP clients. It is easier to work with than the built-in http module, which is designed to provide basic HTTP client/server primitives.

Despite its chequered history, I recently started using Delicious again for managing and sharing bookmarks for sites I want to remember. Modern browsers like Internet Explorer support synchronising bookmarks or favourites amongst your devices but I like the ability to store interesting sites in a public place so other people can see what I’m looking at (should they be interested!). This also allows me to find things that I stored from someone else’s device.

Delicious provides a variety of interesting APIs for developers but also some simple RSS or JSON data feeds.

Here is a simple node script that uses the request and querystring modules to retrieve the last 10 public bookmarks and creates a simple JSON output.

var request = require('request');
var qs = require('querystring');

// Delicious public JSON feed for a user (replace adrianba with your username)
var url = "http://feeds.delicious.com/v2/json/adrianba?";
var params = { count: 10 };
url += qs.stringify(params);

request.get({url:url, json:true }, function (e, r, data) {
  var bookmarks = [];
  data.forEach(function(item) {
    var bookmark = {};
    bookmark.url = item.u;
    bookmark.text = item.d;
    bookmark.created = item.dt;
    bookmarks.push(bookmark);
  });
  console.log(JSON.stringify(bookmarks, null, 2));
});

The important parts here are the use of request.get(), which calls a callback when the response has been retrieved, and setting the json option to true so that the response JSON is already parsed when it is returned.
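
One thing this snippet glosses over is error handling. In anything beyond a quick experiment you probably want to check the error argument and the HTTP status code before touching the data, along these lines:

request.get({url:url, json:true }, function (e, r, data) {
  if (e || r.statusCode !== 200) {
    console.error('Request failed:', e || r.statusCode);
    return;
  }
  // ...process data as before...
});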

With just a few lines of code you can retrieve data with node and then do whatever processing you want on it.

Sunday, April 20, 2014

For the last couple of months, I’ve been experimenting more and more with node.js. My impression is that most people who have heard of node but not really used it think of it as a server technology for building web applications using JavaScript. Of course, it does that and there is good support for node.js hosted on Microsoft Azure Web Sites. But you can also use node as a scripting language for local tasks. There are lots of popular scripting languages like Python and Ruby but if you’re already a JavaScript developer then node is a convenient choice.

I guess most scripting environments have a package management framework and node’s is called npm. I recently wanted to do some scripting against an XML file and discovered the xml-stream module by searching the npm registry. One of the helpful things about the registry is that it tells you how often a particular package has been downloaded, which gives you an idea of whether it is a mainstream module or just someone’s hobby module that might not work so well yet.

Installing xml-stream on Windows

Installing modules is easy and the command npm install xml-stream should take care of installing the module into the node_modules folder below the current directory.

However. When I first tried this I ran into some problems. First of all, this module needs Python to be available. I installed the latest version of Python (v3.4.0) and tried again. This time it complained because Python 2.7 was needed. I installed Python 2.7.6 too but now there was a problem – how would npm know which version of Python to use? You can specify this each time or you can use the npm config command to tell npm where to look:

npm config set python "C:\Python27\python.exe"

You can also configure the version of Visual Studio tools you have installed so that npm knows how to use the compilers:

npm config set msvs_version 2013

You can check that this is configured correctly by typing:

npm config list

With this configuration in place, issuing the command npm install xml-stream successfully downloaded and built the xml-stream module.

Using xml-stream

Now that I had xml-stream installed, I could try it out. The W3C publishes a list of all of their published documents in an RDF/XML file. I wanted to parse this file and identify the latest version of each document.

The first thing to do is to import the http and xml-stream modules and to download the XML file:

"use strict";

var http = require('http');
var XmlStream = require('xml-stream');
var url = "";

var request = http.get(url).on('response', function (response) {
    //TODO: process response here
});

The xml-stream module allows you to set up event listeners for different elements in the document. The W3C file has different elements for Working Draft (WD), Last Call (LastCall), Candidate Recommendation (CR), etc. Here is the code that listens for each document type.

"use strict";

var http = require('http');
var XmlStream = require('xml-stream');
var url = "";

var request = http.get(url).on('response', function (response) {
    // Collection to store documents in
    var documents = {};

    var processDocument = function (item) {
        //TODO: process document
    };

    var xml = new XmlStream(response, 'utf8');

    // Process each type of document
    xml.on('updateElement: WD', processDocument);
    xml.on('updateElement: LastCall', processDocument);
    xml.on('updateElement: CR', processDocument);
    xml.on('updateElement: PR', processDocument);
    xml.on('updateElement: REC', processDocument);
    xml.on('updateElement: NOTE', processDocument);

    xml.on('end', function () {
        // Write out JSON data of documents collection
        console.log(JSON.stringify(documents, null, 2));
    });
});

Finally, we can add in a definition for the processDocument function, which will gather together all the documents into the documents collection:

    var processDocument = function (item) {
        // Collect document properties
        var document = {};
        document.type = item.$name;
        document.title = item['dc:title']; = item['dc:date'];
        document.verURL = item.$['rdf:about'];
        document.trURL = item['doc:versionOf'].$['rdf:resource'];

        // If we have already seen a version of this document
        if (documents[document.trURL]) {
            // Check to see if this one is newer and if so overwrite it
            var old = documents[document.trURL];
            if ( < {
                documents[document.trURL] = document;
        } else {
            // Store the new entry
            documents[document.trURL] = document;
    };
At the end, the script writes out the JSON data to the console.

Of course, this script is a little fragile because it doesn’t map any of the namespace prefixes based on their declarations but it does the job I needed and is a good example of having a powerful JavaScript scripting environment coupled to a wide array of different packages to help you get tasks completed.

Saturday, April 19, 2014

I’m running Windows Server Essentials 2012 R2 for file storage and system backups. While I’m doing lots of experimenting with Boxstarter, I wanted a caching web proxy that would keep local copies of the package files I’m installing over and over so that I wouldn’t have to wait for them to come from the Internet each time.

Squid is a well-known caching proxy and Guido Serassio of Acme Consulting S.r.l. maintains the official native Windows port, from which you can download the latest stable build of Squid 2.7.

Installation is simple. I extracted all the files into C:\squid. In the etc folder, there are four configuration files with .conf.default extensions. I removed the .default extension from squid.conf, mime.conf, and cachemgr.conf.

Next, from a command prompt running as Administrator, cd to c:\squid\sbin and run squid -i and then squid -z. The first command installs squid as a service and the second initialises the cache folder (c:\squid\var\cache). Finally, start the service with net start squid.

Squid will now be running and listening on port 3128 (the default for squid). The last thing you need to do is to configure the Windows Firewall to allow incoming connections to squid. Go to Advanced Settings from the Windows Firewall control panel. Select Inbound Rules and add a New Rule allowing listening on TCP port 3128.
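
If you would rather script that firewall step than click through the UI, a rule along these lines from an elevated command prompt should achieve the same thing (the rule name is arbitrary):

netsh advfirewall firewall add rule name="Squid proxy" dir=in action=allow protocol=TCP localport=3128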

Now you should be all set – you can configure your browser to connect to the proxy on your server using port 3128. You can look in the c:\squid\var\logs folder for activity logs to make sure things are working.

Every year, Scott Hanselman publishes his Ultimate Developer and Power Users Tool List for Windows. I usually take a look through it, pat myself on the back when I see things I’ve been using for a while, and spend some time investigating some of the new ones. Sometimes a tool that I’ve looked at before but haven’t been using gets another mention and it prompts me to take a look (this year it was Clink, a tool that adds readline-style editing to the Windows command line).

I recently discovered that I’d missed an important recommendation – it’s right there at the beginning of the list. A while ago, someone on my old team at Microsoft UK had recommended Chocolatey. This is a command line tool that works like apt-get but it installs Windows tools and applications. What I really wanted was a way to script installation of tools through Chocolatey, say after deploying a clean installation of Windows.

Introducing Boxstarter. Boxstarter, as recommended by Scott, is a fantastic tool that allows you to configure the installation of a set of Chocolatey packages. It also allows you to configure key Windows settings. Even better, the WebLauncher installs everything from a simple URL.

For simple installations, you can list the packages you want in the URL and Boxstarter will take care of the rest. Say I want to install node.js, git, and Visual Studio Express for Windows. Easy. I just press Win+R to bring up the run dialog and type http://boxstarter.org/package/nodejs.install,git.install,VisualStudioExpress2013WindowsDesktop

Here is a video (don’t worry – most of the Visual Studio installation is edited out) that shows what happens when you launch this on a clean installation of Windows 8.1:

Thursday, December 20, 2012

From time to time I see posts like this describing "CSS Hacks" to detect a particular version of IE. This post describes how removing conditional comments support in IE10 might be a problem for targeting workarounds to IE10. Specifically it says:

"But without conditional comments in IE10, the only options we’re left with to target CSS problems are hacks or browser sniffing — and we certainly don’t want to resort to the latter."

It goes on to describe a set of hacks that amount to browser sniffing using JavaScript (if (/*@cc_on!@*/false && document.documentMode === 10)) or some combination of media queries that the developer believes will only fire for IE10. Just because you don't use the user agent string doesn't mean you're not browser sniffing.

The whole point of feature detection is to look for the feature you want to use and if it is missing do something else. If the issue really is a bug in a specific version of a browser and you can't find a way to detect the correct vs. the errant behaviour then consider browser detection explicitly, not hidden in code made to look like something else.
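
As a reminder of what that looks like in practice, the pattern is to probe for the capability itself and branch on the result rather than on which browser you think you are running in:

// Detect the feature, not the browser
function handler() { /* do something */ }

if (document.addEventListener) {
  document.addEventListener('click', handler, false);
} else if (document.attachEvent) {
  // Fallback for browsers without addEventListener
  document.attachEvent('onclick', handler);
}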

The main issue is one that David Storey notes in the comments: this kind of hack is unreliable as bugs are fixed or new features introduced. Today we find that most sites that don't work correctly in IE10 by default are broken because they made this kind of assumption. They either expect that if they detect IE then a specific feature isn't present or will work a specific way, or they assume that if feature A is supported then feature B must also be supported, often when there is no connection between the two. When a new version of IE comes along that fixes the bug or maybe implements feature A but not B, the site is broken because of these incorrect assumptions.

All browsers prioritise the order in which they deliver new features according to their own goals. In IE10 we invested a lot in new CSS3 layout support like grid and regions, for example. There are other things that we didn't get to, though we always wish we could have done more. Use feature detection in your site for the things you use that we didn't get to yet, so that as soon as we add support your site will simply light up that part in IE.

Wednesday, December 19, 2012

I’m not a Gmail user but I know someone who is. She was a little disappointed to discover that when configuring her Gmail account with a Microsoft Surface, only one of the many Google calendars synchronised to the built-in Calendar app. It looks like the app only syncs the default calendar. After a little searching, we found a workaround. It’s a little clunky but did the trick. Hopefully this gets fixed properly in an update to the main app.

Saturday, February 25, 2012

This morning in the United States, the White House announced a new "Consumer Privacy Bill of Rights" as part of the effort to improve consumers' online privacy. As I've mentioned before, Microsoft is actively participating in the industry initiative for Tracking Protection at the W3C to produce Web standards for online privacy. [continues on the IEBlog]

Monday, February 6, 2012

In the last ten years Microsoft has invested heavily in user privacy. Just like security, privacy considerations are baked into every Microsoft product. It is almost a year since the World Wide Web Consortium (W3C), an international community that develops open standards to ensure the long-term growth of the Web, accepted and published Microsoft’s member submission for an Internet Standard to help protect consumer privacy. [continues on the IEBlog]

Monday, September 12, 2011
Scott Hanselman describes the ASP.NET fix needed to make the browser definition files work with IE10. In general we encourage developers to use feature detection when switching their mark-up but the ASP.NET infrastructure has been around for a long time. One thing to watch out for if you are doing custom browser detection is the two-digit version number in IE10. We've seen a few sites have issues because they only expect a single digit and end up thinking IE10 is IE1.
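
To see how that last problem happens, here is a hypothetical sniffing snippet run against the IE10 desktop user agent string; a single-digit capture turns 10 into 1:

var ua = "Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.2; Trident/6.0)";

var buggy = /MSIE (\d)/.exec(ua);   // only expects one digit
console.log(buggy[1]);              // "1" (IE10 mistaken for IE1)

var fixed = /MSIE (\d+)/.exec(ua);  // allow multi-digit major versions
console.log(fixed[1]);              // "10"
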
Thursday, September 8, 2011

Today, the W3C announced the creation of a Tracking Protection Working Group to work on defining what tracking is, signaling user intent, and tracking protection lists. The W3C’s action here can help protect consumers from unwanted tracking. We welcome the opportunity to work with the industry and governments on a Web standard based on our earlier work. [continues on the IEBlog]

Thursday, February 24, 2011

This morning the W3C accepted and published Microsoft’s Web Tracking Protection member submission proposing a standard for helping to address privacy concerns related to online tracking. You can read more on the IE Blog and on the W3C Blog. Web Tracking Protection is based on IE9’s tracking protection lists showcased on the IE9 Test Drive.

Wednesday, December 22, 2010

I have upgraded this blog to Subtext 2.5 and moved hosting providers from WebHost4Life to Arvixe. This is a test post to make sure it is all working.

Monday, November 29, 2010

Last year I wrote about the W3C’s annual Technical Plenary and Advisory Committee (TPAC) conference. This is where most of the W3C working groups get together for a week of face to face meetings and networking. TPAC 2010 was a couple of weeks ago and here I will highlight some of the topics discussed by the groups where we participate… [continues on the IEBlog]

Friday, October 8, 2010

I want to provide an update to my last blog post about the W3C process that we follow to develop and finalise Web Standards. The working group published the Release Candidate (RC) of the CSS 2.1 Test Suite on September 17. The next step is for the major browser vendors to submit their implementation reports using the working group’s template within one month from publication of the test suite. The group has set a deadline on October 18… [continues on the IEBlog]

Monday, September 13, 2010

Complete Web Standards with multiple browser implementations and comprehensive test suites are the backbone of the interoperable Web. Getting web standards through the complete standardisation process and turned into official W3C Recommendations takes a lot of effort. While it is tempting to view the latest editor’s draft of a specification as a "standard", a large part of the complexity that ensures good interoperability happens in the "last mile". In the last couple of weeks, several key web specifications have reached important milestones and these examples illustrate how the process works… [continues on the IEBlog]

Friday, March 26, 2010

Over the last month, as part of Microsoft’s commitment to interoperability, we’ve published information for Internet Explorer 7 and Internet Explorer 8 describing variations from certain web standards… [continues on the IEBlog]
