Tuesday, May 20, 2014

My last post described how to acquire Twitter OAuth keys and tokens so that you can use Twitter’s API to access Twitter feeds. I showed how to request and process data with node.js using the request module, which has built-in support for OAuth.

In this blog post I will show how to do the same thing with C# and .NET, using the OAuthBase class linked to from oauth.net.

Let’s start with the code to call the Twitter API:

using System;
using System.IO;
using System.Net;
using System.Text;
using OAuth;

class App {
    static void Main() {
        // URL for the API to call
        string url = "https://api.twitter.com/1.1/statuses/user_timeline.json"
            + "?screen_name=adrianba&count=5";

        // Create a http request for the API
        var webReq = (HttpWebRequest)WebRequest.Create(url);

        // Set the OAuth header
        var auth = new OAuthHeader();
        webReq.Headers.Add("Authorization",auth.getHeader(url,"GET"));

        // Echo the response to the console
        using(WebResponse webResp = webReq.GetResponse()) {
            using(StreamReader sr = new StreamReader(
                    webResp.GetResponseStream(),Encoding.GetEncoding("utf-8")
                    )) {
                Console.WriteLine(sr.ReadToEnd());
            }
        }
    }    
}

The code here is similar to the previous post. It creates an HTTP request to the API endpoint and this time simply writes the response to the console. The difference is that we need to add the OAuth Authorization header. The magic takes place in the getHeader() method:

class OAuthHeader : OAuthBase {
    public string getHeader(string url,string method) {
        string normalizedUri;
        string normalizedParameters;

        // OAuth keys – FILL IN YOUR VALUES HERE (see this post)
        const string consumerKey = "...";
        const string consumerSecret = "...";
        const string token = "...";
        const string tokenSecret = "...";

        // Create timestamp and nonce for this request
        string timeStamp = GenerateTimeStamp();
        string nonce = GenerateNonce();

        // Generate signature for the header
        string signature = GenerateSignature(
            new Uri(url), consumerKey, consumerSecret, token, tokenSecret,
            method, timeStamp, nonce, out normalizedUri, out normalizedParameters);

        // Compose the authorization header value
        // See http://tools.ietf.org/html/rfc5849#section-3.5.1
        StringBuilder auth = new StringBuilder();
        auth.Append("OAuth ");
        auth.AppendFormat("{0}=\"{1}\", ", OAuthConsumerKeyKey, UrlEncode(consumerKey));
        auth.AppendFormat("{0}=\"{1}\", ", OAuthNonceKey, UrlEncode(nonce));
        auth.AppendFormat("{0}=\"{1}\", ", OAuthSignatureKey, UrlEncode(signature));
        auth.AppendFormat("{0}=\"{1}\", ", OAuthSignatureMethodKey, "HMAC-SHA1");
        auth.AppendFormat("{0}=\"{1}\", ", OAuthTimestampKey, timeStamp);
        auth.AppendFormat("{0}=\"{1}\", ", OAuthTokenKey, UrlEncode(token));
        auth.AppendFormat("{0}=\"{1}\"", OAuthVersionKey, "1.0");
        return auth.ToString();    
    }
}

The OAuthHeader class inherits from the OAuthBase class mentioned above, which provides the GenerateTimeStamp, GenerateNonce, and GenerateSignature methods. Twitter uses the OAuth 1.0a protocol, defined in RFC 5849. This specification describes the Authorization header value that is constructed at the end of the getHeader() method.
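For reference, the value returned by getHeader() has roughly this shape, with the "..." placeholders standing in for your percent-encoded keys, nonce, timestamp, and signature:

OAuth oauth_consumer_key="...", oauth_nonce="...", oauth_signature="...", oauth_signature_method="HMAC-SHA1", oauth_timestamp="...", oauth_token="...", oauth_version="1.0"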

You can keep the OAuthHeader class around for easy access to resources needing OAuth authorization such as Twitter.

Monday, May 19, 2014

Following on from my last post that described using Node to access feeds from Delicious, I’ve also been investigating how to access my Twitter feed. This adds a little more complexity because Twitter requires that your app or script authenticate to Twitter using OAuth.

Per Wikipedia, "OAuth provides client applications a 'secure delegated access' to server resources on behalf of a resource owner. It specifies a process for resource owners to authorize third-party access to their server resources without sharing their credentials." What this means is that your app can access the Twitter API in an authenticated way using OAuth without having to embed your username and password into the script.

The node.js request library that I mentioned last time has built-in support for OAuth authentication. It requires that you populate a JavaScript object as follows:

var oauth = {
   consumer_key: CONSUMER_KEY
  , consumer_secret: CONSUMER_SECRET
  , token: OAUTH_TOKEN
  , token_secret: OAUTH_TOKEN_SECRET
};

CONSUMER_KEY, CONSUMER_SECRET, OAUTH_TOKEN, and OAUTH_TOKEN_SECRET are all strings that we must supply as part of the OAuth handshake.

There are two ways to think about using OAuth to authenticate against a service such as Twitter, depending upon the type of app that you are building. The first scenario is where, for example, you are building a Twitter client. You will distribute this application, and each user of the application will authenticate using their own credentials so that they can access information from the service as themselves. In the second scenario, you are building an application or service that always accesses the service as you, so it never needs to handle anyone else’s credentials. For example, say you are building a widget on your web site that will indicate how long it has been since you last tweeted. This will always be about you and needs to use only your credentials.

The CONSUMER_KEY and CONSUMER_SECRET values are provided by the service to identify your application. The OAUTH_TOKEN and OAUTH_TOKEN_SECRET represent the credentials of the user accessing the service. They may be determined and stored by your app in the first scenario above or they may be part of your application in the second.

This all sounds a little complicated, so an example will help. Before we get to that, we need to obtain the values. Twitter provides a portal for this at https://apps.twitter.com/. If you log in and select Create New App, you will see a screen that looks like this:

[Screenshot: the Create an application form]

Here you provide the name of your application, a description, and a link to your web site. For our initial scripting purposes these values don’t matter too much. There is also a Callback URL field, but we don’t need it now and can leave it blank. Finally, there are some terms and conditions to read and agree to. Once you have completed this form, press the Create your Twitter application button and you will see a screen that looks like this:

[Screenshot: the application details page]

If you click on the API Keys tab you will see something like this:

[Screenshot: the API Keys tab]

Since we want our script to access Twitter using our account, we can click on the Create my access token button to generate the appropriate token values. You should see something like this:

[Screenshot: the API Keys tab showing the newly generated access token]

You may need to refresh to see your new access token.

So now you have four strings: API key, API secret, Access token, and Access token secret. These map to the four values needed in the OAuth structure described in the code above.

There are lots of different ways to access the Twitter API. Here I am simply going to use the user_timeline API to retrieve the 5 most recent tweets from my timeline. You can use this API to retrieve any user’s timeline that you have access to from your Twitter account (including, of course, all the public timelines).

So here is the code to make a request to this API:

var request = require('request');
var url = "https://api.twitter.com/1.1/statuses/user_timeline.json?screen_name=adrianba&count=5";

var CONSUMER_KEY = "...";
var CONSUMER_SECRET = "...";
var OAUTH_TOKEN = "...";
var OAUTH_TOKEN_SECRET = "...";

var oauth = {
    consumer_key: CONSUMER_KEY
  , consumer_secret: CONSUMER_SECRET
  , token: OAUTH_TOKEN
  , token_secret: OAUTH_TOKEN_SECRET
};

request.get({url:url, oauth:oauth, json:true}, function (e, r, data) {
  var tweets = [];
  data.forEach(function(item) {
    var tweet = {};
    tweet.id = item.id.toString();
    tweet.text = item.text;
    tweet.created = item.created_at;
    tweets.push(tweet);
  });
  console.log(JSON.stringify(tweets));
});

You obviously need to replace the "..." strings with the values copied from your API Keys page.

The important addition in this code is passing the oauth option into the get() method. After this, the request module takes care of the rest. In general, all services that require OAuth authentication follow this pattern although they will differ in how the keys and tokens are issued to you.
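One thing the code above glosses over is error handling: it assumes the request succeeded and that data is an array of tweets. A minimal sketch of a more defensive callback, using the same options as before:

request.get({url:url, oauth:oauth, json:true}, function (e, r, data) {
  // Bail out if the request failed or Twitter returned a non-200 status
  if (e || r.statusCode !== 200) {
    console.error('Twitter request failed:', e || r.statusCode);
    return;
  }
  // ...process data as before...
});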

Friday, May 16, 2014

In my last post, I wrote about using node.js as a scripting tool. Node has lots of good libraries for making network requests and processing the results. request is one of the most popular HTTP clients. It is easier to work with than the built-in http module that is designed to provide basic http client/server primitives.

Despite its chequered history, I recently started using delicious.com again for managing and sharing bookmarks for sites I want to remember. Modern browsers like Internet Explorer support synchronising bookmarks or favourites amongst your devices but I like the ability to store interesting sites in a public place so other people can see what I’m looking at (should they be interested!). This also allows me to find things that I stored from someone else’s device.

Delicious provides a variety of interesting APIs for developers but also some simple RSS or JSON data feeds.

Here is a simple node script that uses the request and querystring modules to retrieve the last 10 public bookmarks and creates a simple JSON output.

var request = require('request');
var qs = require('querystring');

var url = "http://feeds.delicious.com/v2/json/adrianba?";
var params = { count: 10 };
url += qs.stringify(params);
console.log(url);

request.get({url:url, json:true }, function (e, r, data) {
  var bookmarks = [];
  data.forEach(function(item) {
    var bookmark = {};
    bookmark.url = item.u;
    bookmark.text = item.d;
    bookmark.created = item.dt;
    bookmarks.push(bookmark);
  });
  console.log(JSON.stringify(bookmarks));
});

The important parts here are the use of request.get() which calls a callback when the response is retrieved and setting the json option to true so that the response JSON is already parsed when it is returned.
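To see what request and the json option are saving us, here is roughly the same fetch written against the built-in http module (a sketch, with minimal error handling):

var http = require('http');

var url = "http://feeds.delicious.com/v2/json/adrianba?count=10";

http.get(url, function (res) {
  var body = '';
  res.setEncoding('utf8');
  // Accumulate the raw response chunks ourselves
  res.on('data', function (chunk) { body += chunk; });
  res.on('end', function () {
    // ...and parse the JSON ourselves
    var data = JSON.parse(body);
    console.log(data.length + ' bookmarks');
  });
}).on('error', function (e) {
  console.error(e);
});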

With just a few lines of code you can retrieve data with node and then do whatever processing you want on it.

Sunday, April 20, 2014

For the last couple of months, I’ve been experimenting more and more with node.js. My impression is that most people who have heard of node but not really used it think of it as a server technology for building web applications using JavaScript. Of course, it does that and there is good support for node.js hosted on Microsoft Azure Web Sites. But you can also use node as a scripting language for local tasks. There are lots of popular scripting languages like Python and Ruby but if you’re already a JavaScript developer then node is a convenient choice.

I guess most scripting environments have a package management framework, and node’s is called npm. I recently wanted to do some scripting against an XML file and discovered the xml-stream module by searching on npmjs.org. One of the helpful things about npmjs.org is that it tells you how often a particular package has been downloaded, which gives you an idea of whether it is a mainstream module or just someone’s hobby project that might not work so well yet.

Installing xml-stream on Windows

Installing modules is easy and the command npm install xml-stream should take care of installing the module into the node_modules folder below the current directory.

However, when I first tried this I ran into some problems. First of all, this module needs Python to be available. I installed the latest version of Python (v3.4.0) and tried again. This time it complained because Python 2.7 was needed. I installed Python 2.7.6 too, but that raised another question: how would npm know which version of Python to use? You can specify this each time, or you can use the npm config command to tell npm where to look:

npm config set python "C:\Python27\python.exe"

You can also configure the version of Visual Studio tools you have installed so that npm knows how to use the compilers:

npm config set msvs_version 2013

You can check that this is configured correctly by typing:

npm config list

With this configuration in place, issuing the command npm install xml-stream successfully downloaded and built the xml-stream module.

Using xml-stream

Now that I had xml-stream installed, I could try it out. The W3C publishes a list of all of their published documents in an RDF/XML file. I wanted to parse this file and identify the latest version of each document.

The first thing to do is to import the http and xml-stream modules and to download the XML file:

"use strict";

var http = require('http');
var XmlStream = require('xml-stream');
var url = "http://www.w3.org/2002/01/tr-automation/tr.rdf";

var request = http.get(url).on('response', function (response) {
    //TODO: process response here
});

The xml-stream module allows you to set up event listeners for different elements in the document. The W3C file has different elements for Working Draft (WD), Last Call (LastCall), Candidate Recommendation (CR), and so on. Here is the code that listens for each document type.

"use strict";

var http = require('http');
var XmlStream = require('xml-stream');
var url = "http://www.w3.org/2002/01/tr-automation/tr.rdf";

var request = http.get(url).on('response', function (response) {
    // Collection to store documents in
    var documents = {};

    var processDocument = function (item) {
        //TODO: process document
    };

    var xml = new XmlStream(response, 'utf8');

    // Process each type of document
    xml.on('updateElement: WD', processDocument);
    xml.on('updateElement: LastCall', processDocument);
    xml.on('updateElement: CR', processDocument);
    xml.on('updateElement: PR', processDocument);
    xml.on('updateElement: REC', processDocument);
    xml.on('updateElement: NOTE', processDocument);

    xml.on('end', function () {
        // Write out JSON data of documents collection
        console.log(JSON.stringify(documents));
    });
});

Finally, we can add in a definition for the processDocument function, which will gather together all the documents into the documents collection:

    var processDocument = function (item) {
        // Collect document properties
        var document = {};
        document.type = item.$name;
        document.title = item['dc:title'];
        document.date = item['dc:date'];
        document.verURL = item.$['rdf:about'];
        document.trURL = item['doc:versionOf'].$['rdf:resource'];

        // If we have already seen a version of this document
        if (documents[document.trURL]) {
            // Check to see if this one is newer and if so overwrite it
            var old = documents[document.trURL];
            if (old.date < document.date) {
                documents[document.trURL] = document;
            }
        } else {
            // Store the new entry
            documents[document.trURL] = document;
        }
    };

At the end, the script writes out the JSON data to the console.
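If you want to keep the results rather than just print them, redirect the output to a file when you run the script (the file names here are just examples):

node tr-documents.js > tr-documents.json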

Of course, this script is a little fragile because it doesn’t map any of the namespace prefixes based on their declarations, but it does the job I needed and is a good example of a powerful JavaScript scripting environment coupled with a wide array of packages to help you get tasks completed.

Saturday, April 19, 2014

I’m running Windows Server Essentials 2012 R2 for file storage and system backups. While I’m doing lots of experimenting with Boxstarter, I wanted a caching web proxy that would keep local copies of the package files I’m installing over and over so that I wouldn’t have to wait for them to come from the Internet each time.

Squid is a well-known caching proxy, and Guido Serassio of Acme Consulting S.r.l. maintains the official native Windows port. You can download the latest stable build of Squid 2.7 from here.

Installation is simple. I extracted all the files into C:\squid. In the etc folder, there are four configuration files with .conf.default extensions. I removed the .default extension from squid.conf, mime.conf, and cachemgr.conf.

Next, from a command prompt running as Administrator, cd to c:\squid\sbin and run squid -i and squid -z. The first command installs squid as a service and the second initialises the cache folder (c:\squid\var\cache). Finally, start the service with net start squid.

Squid will now be running and listening on port 3128 (the default for squid). The last thing you need to do is to configure the Windows Firewall to allow incoming connections to squid. Go to Advanced Settings from the Windows Firewall control panel, select Inbound Rules, and add a New Rule allowing connections on TCP port 3128.
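If you prefer the command line, a rule along these lines should achieve the same thing from an elevated prompt (the rule name is just a label):

netsh advfirewall firewall add rule name="Squid proxy" dir=in action=allow protocol=TCP localport=3128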

Now you should be all set – you can configure your browser to connect to the proxy on your server using port 3128. You can look in the c:\squid\var\logs folder for activity logs to make sure things are working.

Every year, Scott Hanselman publishes his Ultimate Developer and Power Users Tool List for Windows. I usually take a look through it, pat myself on the back when I see things I’ve been using for a while, and spend some time investigating some of the new ones. Sometimes a tool that I’ve looked at before but haven’t been using gets another mention and it prompts me to take a look (this year it was Clink, a tool that adds readline-style editing to the Windows command line).

I recently discovered that I’d missed an important recommendation – it’s right there at the beginning of the list. A while ago, someone on my old team at Microsoft UK had recommended Chocolatey. This is a command line tool that works like apt-get but it installs Windows tools and applications. What I really wanted was a way to script installation of tools through Chocolatey, say after deploying a clean installation of Windows.
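For a single tool, Chocolatey itself is a one-liner; for example, to install git (assuming the git package name):

choco install git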

Introducing Boxstarter. Boxstarter, as recommended by Scott, is a fantastic tool that allows you to configure the installation of a set of Chocolatey packages. It also allows you to configure key Windows settings. Even better, the WebLauncher installs everything from a simple URL.

For simple installations, you can list the packages you want in the URL and BoxStarter will take care of the rest. Say I want to install node.js, git, and Visual Studio Express for Windows. Easy. I just press Win+R to bring up the run dialog and type

http://boxstarter.org/package/nr/nodejs.install,git.install,VisualStudioExpress2013WindowsDesktop

Here is a video (don’t worry – most of the Visual Studio installation is edited out) that shows what happens when you launch this on a clean installation of Windows 8.1:

Thursday, December 20, 2012

From time to time I see posts like this describing "CSS Hacks" to detect a particular version of IE. This post describes how removing conditional comments support in IE10 might be a problem for targeting workarounds to IE10. Specifically it says:

"But without conditional comments in IE10, the only options we’re left with to target CSS problems are hacks or browser sniffing — and we certainly don’t want to resort to the latter."

It goes on to describe a set of hacks that amount to browser sniffing using JavaScript (if (/*@cc_on!@*/false && document.documentMode === 10)) or some combination of media queries that are expected to match only in IE10. Just because you don't use the user agent string doesn't mean you're not browser sniffing.

The whole point of feature detection is to look for the feature you want to use and if it is missing do something else. If the issue really is a bug in a specific version of a browser and you can't find a way to detect the correct vs. the errant behaviour then consider browser detection explicitly, not hidden in code made to look like something else.
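To make that concrete, feature detection looks like this; a minimal sketch using requestAnimationFrame as the example feature:

function draw() {
    // ...animation work goes here...
}

if (window.requestAnimationFrame) {
    // The feature is present, so schedule the next frame with it
    window.requestAnimationFrame(draw);
} else {
    // Fall back to a timer-based approximation
    setTimeout(draw, 16);
}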

The main issue is one that David Storey notes in the comments: this kind of hack is unreliable as bugs are fixed or new features introduced. Today we find that most sites that don't work correctly in IE10 by default are broken because they made this kind of assumption. They either expect that if they detect IE then a specific feature isn't present or will work a specific way, or they assume that if feature A is supported then feature B must also be supported, often when there is no connection between the two. When a new version of IE comes along that fixes the bug or implements feature A but not B, the site breaks because of these incorrect assumptions.

All browsers prioritise the order in which they deliver new features according to their own goals. In IE10 we invested a lot in new CSS3 layout support like grid and regions, for example. There are other things that we didn't get to, though we always wish we could have done more. Use feature detection in your site for the things you use that we didn't get to yet, so that as soon as we add support your site simply lights up that part in IE.

Wednesday, December 19, 2012

I’m not a Gmail user but I know someone who is. She was a little disappointed to discover that when configuring her Gmail account with a Microsoft Surface, only one of the many Google calendars synchronised to the built-in Calendar app. It looks like the app only syncs the default calendar. After a little searching, we found a workaround. It’s a little clunky but did the trick. Hopefully this gets fixed properly in an update to the main app.

Saturday, February 25, 2012

This morning in the United States, the White House announced a new "Consumer Privacy Bill of Rights" as part of the effort to improve consumers' online privacy. As I've mentioned before, Microsoft is actively participating in the industry initiative for Tracking Protection at the W3C to produce Web standards for online privacy. [continues on the IEBlog]

Monday, February 6, 2012

In the last ten years Microsoft has invested heavily in user privacy. Just like security, privacy considerations are baked into every Microsoft product. It is almost a year since the World Wide Web Consortium (W3C), an international community that develops open standards to ensure the long-term growth of the Web, accepted and published Microsoft’s member submission for an Internet Standard to help protect consumer privacy. [continues on the IEBlog]

Monday, September 12, 2011
Scott Hanselman describes the ASP.NET fix needed to make the browser definition files work with IE10. In general we encourage developers to use feature detection when switching their mark-up, but the ASP.NET infrastructure has been around for a long time. One thing to watch out for if you are doing custom browser detection is the two-digit version number in IE10. We've seen a few sites have issues because they only expect a single digit and end up thinking IE10 is IE1.
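If you are doing that kind of custom detection, make sure the pattern allows more than one digit in the major version; a quick sketch of the pitfall:

// Broken: captures only the first digit, so "MSIE 10.0" is reported as version 1
var major = navigator.userAgent.match(/MSIE (\d)/);

// Better: allow two or more digits in the major version
var match = navigator.userAgent.match(/MSIE (\d+)/);
var version = match ? parseInt(match[1], 10) : null;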
Thursday, September 8, 2011

Today, the W3C announced the creation of a Tracking Protection Working Group to work on defining what tracking is, signaling user intent, and tracking protection lists. The W3C’s action here can help protect consumers from unwanted tracking. We welcome the opportunity to work with the industry and governments on a Web standard based on our earlier work. [continues on the IEBlog]

Thursday, February 24, 2011

This morning the W3C accepted and published Microsoft’s Web Tracking Protection member submission proposing a standard for helping to address privacy concerns related to online tracking. You can read more on the IE Blog and on the W3C Blog. Web Tracking Protection is based on IE9’s tracking protection lists showcased on the IE9 Test Drive.

Wednesday, December 22, 2010

I have upgraded this blog to Subtext 2.5 and moved hosting providers from WebHost4Life to Arvixe. This is a test post to make sure it is all working.

Monday, November 29, 2010

Last year I wrote about the W3C’s annual Technical Plenary and Advisory Committee (TPAC) conference. This is where most of the W3C working groups get together for a week of face to face meetings and networking. TPAC 2010 was a couple of weeks ago and here I will highlight some of the topics discussed by the groups where we participate… [continues on the IEBlog]

Friday, October 8, 2010

I want to provide an update to my last blog post about the W3C process that we follow to develop and finalise Web Standards. The working group published the Release Candidate (RC) of the CSS 2.1 Test Suite on September 17. The next step is for the major browser vendors to submit their implementation reports using the working group’s template within one month of publication of the test suite. The group has set a deadline of October 18… [continues on the IEBlog]

Monday, September 13, 2010

Complete Web Standards with multiple browser implementations and comprehensive test suites are the backbone of the interoperable Web. Getting web standards through the complete standardisation process and turned into official W3C Recommendations takes a lot of effort. While it is tempting to view the latest editor’s draft of a specification as a "standard", a large part of the complexity that ensures good interoperability happens in the "last mile". In the last couple of weeks, several key web specifications have reached important milestones and these examples illustrate how the process works… [continues on the IEBlog]

Friday, March 26, 2010

Over the last month, as part of Microsoft’s commitment to interoperability, we’ve published information for Internet Explorer 7 and Internet Explorer 8 describing variations from certain web standards… [continues on the IEBlog]

Friday, March 12, 2010

There’s lots going on in the web standards world and being part of the Internet Explorer team gives me a front row seat. We’ve posted a few updates on the IE Blog relating to standards in the last few weeks.

I’m really looking forward to the feedback we get from MIX where General Manager of IE, Dean Hachamovitch, will present one of the keynote sessions and there will be a number of IE9 breakouts. Unfortunately, I won’t get a chance to be at MIX this year but I’ll be watching from afar and waiting to hear the stories my colleagues have to tell when they return.

Tuesday, March 9, 2010

We’re always excited to engage with members of the W3C including the developers of other browsers as well as the broader web development community to help shape the direction of emerging Web standards, particularly HTML5.  This includes participating in events like TPAC, which we wrote about in November, and on-going engagement with various working groups… [continues on the IEBlog]