October 2007 Blog Posts

There's no two ways about it, Windows Home Server is cool. The backup system is excellent, especially the part where identical clusters from different machines are backed up only once. This means that if you have the same OS on multiple PCs, all the OS files will be backed up just once, saving a bunch of space. It also means that you get differential backups sort of for free.

First thing on Monday morning, my laptop blue screened due to a fault in the smart-card reader driver. When the system rebooted, my VPN software appeared to be missing, and after trying to repair it for a few minutes I decided that even if I got it working I'd never really know why it had disappeared or whether anything else was gone. Since I'd created no new files that morning and my e-mail was all stored on the Exchange server, I went ahead and booted off the Home Server recovery CD. I picked the backup image and it restored the disk to exactly how it had been before the crash. Touch wood, it's been working fine again for the last couple of days.

I guess this post is really about the merits of having a simple backup strategy. Home Server just makes that dead easy.

In the Managing wildcards in paths section of Windows PowerShell in Action (Part 2) on MSDN, it says:

What happens when you want to access a path that contains one of the wildcard meta-characters: "?", "*", "[", and "]"? In the Windows filesystem, "*" and "?" aren't a problem because we can't use these characters in a file or directory name. But we can use "[" and "]". In fact, they are used quite a bit for temporary internet files. Working with files whose names contain "[" or "]" can be a challenge because of the way wildcards and quoting work.

The solution, it says, is the -LiteralPath parameter:

We don't want trial and error when we know the exact name of the file and want to suppress all pattern matching behavior. This is done using the -LiteralPath parameter on the core cmdlets.
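For instance, with a file whose name contains brackets (the file name here is just one I've made up for illustration), the difference looks something like this:

Get-Item 'page[1].htm'              # "[1]" is treated as a wildcard set, so this actually looks for page1.htm
Get-Item -LiteralPath 'page[1].htm' # the name is taken literally and the file is found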

Unfortunately, Rename-Item doesn't have a -LiteralPath parameter, so it looks like renaming items with square brackets in their names is a problem.

After a little searching, I came across a forum post by Bruce Payette, Technical Lead in the PowerShell team, that explains the situation:

Here's what happened. There was a push just before the release of version 1 where we went through and cleaned up most of the wildcard quoting bugs. This was done right at the end so the work was triaged pretty heavily. Only the most important things got fixed since we were locking down for RTM/RTW. We fixed all of the *-Item commands except for Rename-Item, the rationale being that the Rename-Item cmdlet isn't necessary since you can use Move-Item instead and Move-Item does support the -LiteralPath parameter.

The bottom line is to use Move-Item instead of Rename-Item.
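So, sticking with the made-up file name from above, the rename ends up looking something like this:

Move-Item -LiteralPath 'page[1].htm' -Destination 'page1.htm'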

I've recently copied some Audible files from one PC to another. Had I used robocopy, I could have preserved all the timestamps, but I just copied them in Explorer, so I ended up with the situation where the last modified time was the same as the original but the created time was very recent. When you import files into your Audible Manager library, it shows the created date. Since I usually sort by date so that my recent purchases are at the top of the list, this was something of a problem.

Having been used to the power of the Unix shells bash and tcsh years ago, I was really looking forward to the promise of PowerShell. The problem was getting used to its syntax, which, as with most powerful shells, is a little arcane. Time being the scarce resource it is, I'd seen a few demos from teammates but never really taken the plunge. Returning to what I've known for years - that you learn something new best when you've actually got a need for it - I'm looking for opportunities where PowerShell will help me out. This timestamp situation is one.

I don't know whether this is the best or easiest way to set the created time to match the last modified time, but it's pretty simple. I'm sure someone will tell me if it can be made simpler still.

dir | foreach { $_.CreationTime = $_.LastWriteTime }

In PowerShell, the dir command (actually an alias for Get-ChildItem) returns a stream of FileInfo objects. Piping them into foreach allows you to execute an action against each. $_ refers to the current object in the pipeline and PowerShell gives you direct access to the properties of objects.
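If you'd rather not touch every file in the folder, the same pipeline can be narrowed with a wildcard. This variant is just my own tweak and assumes the Audible files use the usual .aa extension:

dir *.aa | foreach { $_.CreationTime = $_.LastWriteTime }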

I've copied across my IE favourites to a new PC knowing that I need to spend some time going through them and reorganising. There are things there that probably don't point to anything useful any more and a bunch that would be easier to find if they were in sensible folders.

While looking through some of the random things just dumped in the top-level folder, I found the link I'd saved to TestDisk. I came across this tool some time ago while I was having a bit of a panic trying to recover some data. It saved me then and is hence fondly recommended.

At the time, I was planning to use the recovery CD for my Toshiba M400. Its purpose is to format the hard disk and re-write an image of a Windows install with all the drivers installed and the tablet features enabled. I started by copying all my important files onto an external USB hard disk and then booted off the CD. The imaging software seemed to get a bit confused because I'd forgotten to unplug the USB drive, so I turned off the system, unplugged the drive and tried again. Everything went fine this time and in a few minutes I had a brand new install of Windows. Everything started to go wrong when I plugged the USB drive back in. What I hadn't realised was that when the recovery disk got confused by the extra drive being present, instead of deleting the partition table on the internal drive, it had done so on the USB drive. So now all my carefully backed-up files were sitting on a drive whose partition table was gone, and were therefore unreadable.

I made a number of attempts at recovery before deciding that I really needed a proper tool to fix this. Partition Magic from Symantec was well known and, considering the importance of the data, the price tag wasn't outrageous. Reviewing the product information suggested that it would do just what I needed, and I could buy it online. In the past I'd bought Symantec products online and found it very convenient: you download an ISO file, burn it to a disk and you've got the product there - easy. Only, for some obscure reason, Partition Magic didn't come as an ISO file but just installed as a collection of files. This meant you couldn't boot off a disk as you would if you'd bought it from a store, and, for some reason, when running under Windows on my laptop it just crashed while scanning the drive I wanted to repair.

I tried working with the support people from Symantec but they were pretty unhelpful. They didn't see the irony in asking me to boot into a clean install of Windows and try again when they hadn't given me the bootable image that would have made that easy.

At the end of my tether I gave up on Partition Magic and searched some more before landing on the TestDisk site. To cut a long story short, TestDisk did an exhaustive search of the disk (which took quite a while) before telling me that it had found an NTFS partition and was quite willing to re-write the partition table if I so desired. And so all my data came back.

These days I make sure that backup drives are nowhere near being connected while I'm reinstalling an OS, but I keep a link to this (free!) tool in my favourites.

I used to use WinKey to bind the Windows key together with another key as a shortcut. By default, the Windows shell has some "hotkeys": for example, Win+R brings up the Run dialog and Win+E opens an Explorer window. I wanted some extras for common tasks, and that's what WinKey did. Unfortunately, as you can see if you follow the link, WinKey hasn't been supported by Copernic since 2005. In the past I'd noticed that it didn't work completely without admin rights, and under Vista, with UAC, running without admin rights is the norm, but I've struggled on nevertheless.

This weekend, I built a new installation of Vista on my new desktop and wanted to find a solution to the UAC+WinKey issues. I'd considered writing my own version in the past but, time being what it is, never actually got around to it. It crossed my mind again today and, while searching around for some clues to assess how hard it was going to be, I stumbled upon a solution that's already been written.

Enter AutoHotkey. This sophisticated utility not only lets you bind keys to actions but also includes a clever scripting syntax that allows you to re-map keys and take different actions based on what is happening on your system. As a simple example, you could bind a key so that it runs an application, but if that application is already running it just brings its window to the front.

So far, I've only created some simple key bindings but I suspect that as things crop up that I want to automate, I'll be building upon the simple script I'm using now. Definitely recommended.

Once I'd made the decision to begin blogging again, I had a look around to see whether I should change the engine used to host my blog.

Three or four years ago, when I made the decision to move away from Radio Userland, I wrote my own engine using ASP.NET and SQL Server. What I created was very basic but the implementation taught me a lot about developing with ASP.NET. I focused on writing a scalable, reusable core that could readily be customised and made appropriate use of caching for commonly used data, such as the RSS feed. I was pleased with what I created and had planned to extend it, add new functionality and maybe even get to a point where I might share the source code. In the end, I never simultaneously had both the time and the inclination to move things forward, so it never happened.

These days my motivation is a little different. While I will likely always dabble in writing code and playing with new technology, my passion isn't for spending every day in a development environment. There are so many things that occupy me, both at work and in my own time, that I really don't want to spend it building something that other people have already spent enough time on. For this reason, I decided to review the available blogging systems and select one for my blog going forward. I didn't have many criteria for the selection, the primary one being that I'd be able to import all my previous posts relatively easily. I quickly discounted Community Server as overkill for what I wanted. I took a long hard look at dasBlog and was almost convinced. It didn't take much, in the end, to persuade me to go with Subtext. I like the way the data is stored in SQL Server - I figured I may want to do some manipulation of that in future. I liked the way it was so easy to set up - there were very few configuration settings to deal with before I was led through the rest of the setup in the browser. Finally, I liked the simple BlogML import feature that I could use to bring in my old posts.

One of my current interests is F# and so I wrote a small F# script that exported my old posts from SQL into a BlogML file. It only took me an hour or so and a large part of that was really learning about the F# features I wanted to use. I'll likely post about that in more detail in the near future.

I've written a simple redirect script that should map the old URL format to the new with a permanent (301) redirect. This should push search engines and aggregators to update their links. If you spot any broken links, please let me know.

All in all, I'm happy with the decision so far. Time will tell if I remain content with that decision but at the present time I would recommend Subtext if you're looking for a straightforward .NET blog engine.

In the two years since I last blogged, lots of things have changed. I still work for Microsoft and I'm still part of the Application Development Consulting (ADC) team in the UK. Just over a year ago, I stopped doing a consulting delivery role and took on the programme manager position for the team. This job is sort of a meta-consulting role - I'm involved in two core areas. The first is helping the sales teams to scope new consulting gigs (balancing the needs of both Microsoft and the customer so it's a win-win for both and we're set up for success) before the ADCs take over the main customer relationship during the delivery phase. The second is acting as an escalation point for both customers and consultants to help ensure we provide the best possible service and deliver the most value for our customers. Together with the principal consultants in the team, I help to ensure we provide a consistently high-quality service.

One of the aspects I'm involved in is developing and executing the marketing plan for the team. We provide a fantastic but little-known developer consulting service and we're working hard to get the word out so people know where to come. I'm pleased that in the last couple of months we've launched the first version of our web site outlining the UK Application Development Consulting offering. We'll be adding more detail about both the service and the team behind it in the coming months.

Finally, we're still hiring. We have jobs listed as located in Manchester, Edinburgh and Reading. In fact, we have customers throughout mainland UK (and a few beyond) so wherever you live in the UK we can probably accommodate you. If you want to join one of the highest-calibre developer consulting teams in the country, get in touch.

You'd be forgiven for thinking that maybe I left for the PDC in September 2005 and never came back. The truth is work got on top of me and before I knew it, it had been months since I'd posted anything. I didn't want to be one of those lapsed Microsoft bloggers who pop up and say they're back and then never post again. I decided that I wasn't going to blog any more until I was confident that I'd keep it up. Hopefully that time is now and this won't be that solitary "I'm back" post I was so worried about.