One of the best new features of .NET 3.5 is LINQ. It's great for accessing data from databases like SQL Server but it's far more useful than just that. LINQ to objects is the killer feature in .NET 3.5 and it can change the way you write code for the better.

I am finding more cases where I can use LINQ for general data manipulation to simplify the code that I write. Coupled with Extension Methods it's a really powerful new way to write algorithms over your data types.

One of my favorite new tricks is to use LINQ to make recursion over a tree of objects simpler. There are many cases where this type of pattern is possible. For instance, when working with controls in ASP.NET or Windows Forms, each Control has a property named Controls, which is a collection of its child controls. If you need to perform some operation over all of them, you can gather them all in just one LINQ statement similar to this:

var ChildControls = from nested in
                        Controls.Cast<System.Web.UI.Control>()
                                .Descendants(c => c.Controls.Cast<System.Web.UI.Control>())
                    select nested;

The above code is a lot simpler to express than the recursive code that would otherwise have been necessary to visit each of the nested controls. And if you need to filter the child controls returned, you can just add a LINQ Where clause (or any of the other LINQ operators).

Note: The Cast<T> function above is necessary since the Controls property returns a non-generic IEnumerable. Cast<T> simply wraps the non-generic IEnumerable and returns a typed IEnumerable<T>.
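To make that wrapping behavior concrete, here is a small standalone sketch using ArrayList (which implements only the non-generic IEnumerable) rather than a Controls collection:

```csharp
using System;
using System.Collections;
using System.Collections.Generic;
using System.Linq;

class CastDemo
{
    static void Main()
    {
        // ArrayList only implements the non-generic IEnumerable.
        ArrayList items = new ArrayList { 1, 2, 3 };

        // Cast<int> wraps it as IEnumerable<int>, so the typed
        // LINQ operators (Sum, Where, Select, etc.) become available.
        IEnumerable<int> typed = items.Cast<int>();

        Console.WriteLine(typed.Sum()); // prints 6
    }
}
```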

The trick that makes this work is a function called Descendants, an extension method that I wrote. You can use it with any IEnumerable<T> collection; it descends into nested collections by way of a Func<T, IEnumerable<T>> lambda that you supply. In the above example I pass it a lambda function that tells it how to retrieve the nested child controls.

Here is that extension method:

static public class LinqExtensions
{
    static public IEnumerable<T> Descendants<T>(this IEnumerable<T> source,
                                                Func<T, IEnumerable<T>> DescendBy)
    {
        foreach (T value in source)
        {
            yield return value;

            foreach (T child in DescendBy(value).Descendants<T>(DescendBy))
                yield return child;
        }
    }
}


You could just as easily write a lambda function to descend into a tree structure or any other data structure, as long as it implements IEnumerable<T>. Since you supply the function that knows how to descend, Descendants is a generic way to traverse nested data structures.
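For example, the same Descendants extension method can flatten an arbitrary tree. The Node type below is a hypothetical one made up just for this sketch; only the Descendants method itself comes from the article:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// A minimal tree node type, invented here purely for illustration.
class Node
{
    public string Name;
    public List<Node> Children = new List<Node>();
}

static class LinqExtensions
{
    public static IEnumerable<T> Descendants<T>(this IEnumerable<T> source,
                                                Func<T, IEnumerable<T>> descendBy)
    {
        foreach (T value in source)
        {
            yield return value;

            foreach (T child in descendBy(value).Descendants(descendBy))
                yield return child;
        }
    }
}

class TreeDemo
{
    static void Main()
    {
        var root = new Node { Name = "root" };
        var a = new Node { Name = "a" };
        a.Children.Add(new Node { Name = "a1" });
        root.Children.Add(a);
        root.Children.Add(new Node { Name = "b" });

        // Flatten the whole tree by telling Descendants how to reach children.
        var names = new[] { root }.Descendants(n => n.Children)
                                  .Select(n => n.Name);

        Console.WriteLine(string.Join(",", names)); // prints root,a,a1,b
    }
}
```

Because the traversal yields each parent before recursing into its children, the result comes out in depth-first, pre-order sequence.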

There's a new virtual desktop manager available over at CodePlex, Vista/XP Virtual Desktop Manager. Prior to finding this one I used AltDesk on XP for years and it worked pretty well. On Vista, however, I never got it to work properly. It lost windows and crashed a lot. These days I have multiple monitors on my main desktop machine, but I still find virtual desktop managers useful for having separate 'workspaces' when I am multitasking on several things at once. They are also super useful on my laptop when I travel. I can have an email/Internet workspace and a development workspace and switch back and forth as necessary.

This new one is by far the best one I have seen for Vista (or XP). Unlike AltDesk it has a very minimal UI, which I actually prefer. It allows up to 9 virtual desktops and has flexible hot-key assignment for all of the features. You can pull up the "switcher" which will show all of the virtual desktops at once and allows you to drag/drop windows between them. It supports 'sticky' applications which will show on all of the virtual desktops, which is really useful for things like the task manager, Vista's gadgets, etc... Another nice feature is that it supports live thumbnails on Vista as well as an Exposé-like application switcher.

Vista/XP Virtual Desktop Manager is open source and seems to be actively developed. At this point it is labeled a release candidate, but so far it seems pretty stable to me.

With Windows Vista, Microsoft has changed where user account profile and program data gets stored. Instead of storing all profile data under "c:\Documents and Settings\" as it did with Windows XP, things are now split between "c:\Users\" and "c:\ProgramData\" (on a standard Vista installation). So that older programs that hard-coded these directory names (instead of using the proper API to find them) will continue to work, Microsoft has used features of the NTFS file system, specifically junction points and symbolic links, to create virtual folders that look like the older locations but in reality point to the new ones. This virtualization scheme is pretty complex and for the most part hidden from regular standard users, but as a programmer or system administrator it is sometimes handy to know how these folders have been constructed.

I have created this diagram that maps out these junction points and symbolic links to their physical directories. I have also included information on the virtualized folders that get used when programs attempt to store data in 'c:\Program Files\' or 'c:\Windows\'.


Vista System Volume

Click to download this diagram as a PDF file

After hemming and hawing for quite a while I decided to dive into the deep end and make the transition to Windows Vista. The tipping point came with a new gift, a tiny mobile computer called an OQO model 02. It came preloaded with Vista Ultimate and it worked so well even on the lower-powered OQO that I decided that for the new development machine I was going to build, I would try Vista as my primary OS.

So far the transition, while not completely painless, has been remarkably smooth. Sure, I've had a few unproductive moments (aka system crashes), but these have all been related to drivers, mainly the display drivers for the new NVidia GeForce 8800 GTS card I put in the machine. I should mention up front that I don't play graphics-intensive games, so I'm not really pushing the display card to its limits and thus probably not stressing the driver or card that much. The 8800 is complete overkill for what I need, but I also wanted a DX10 card so that I could experience the full Vista/WPF experience. Plus I wanted room to grow, as I will probably never trade up the card for the life of this machine. This whole experience reminds me of the transition from Windows 2000 to Windows XP, though it's actually a bit less painful than that.

Along the way I've learned some tips, which I thought I would share here:

Monitor color calibration in Vista

The color management system has changed in Vista; however, you can still use the older .ICM format color profiles. Unfortunately, it seems that Vista still cannot properly load custom LUT tables into video cards from the ICM profiles; you still need to use a profile loader to set the custom profile for your video card. Furthermore, there still seems to be the restriction that you cannot load separate color profiles in a multi-monitor setup unless you are using separate video cards as well (i.e. no color profiling for dual-output video cards). This really amazes me considering that Macs have had this for something like a decade, but what's worse is that there seems to be a bug in Vista that will cause it to reset the gamut table for the video card shortly after the profile loader sets it at startup. I'm sure this depends on loading order, but that's not something that can be easily worked around. My solution was to use the new Task Scheduler to set up a task that launches my profile loader about 1 minute after the login event. This way, shortly after logging in and after Vista has reset my video card, the profile loader can load the profile again properly. Until Microsoft fixes Vista, this works pretty well.

Virtual CDROM/DVD drive

Every now and then I need to mount an .ISO or .IMG file from a CD or DVD drive, usually for software installation. Vista of course knows nothing about how to do this. There are however several free ISO/IMG loaders that allow you to create virtual CDROM drives. I've used one for the last few years but when I transitioned to Vista I had to search out a new one that would work properly. Virtual CloneDrive is the one I settled on. It's easy to use and works very well and requires a minimum of stuff to be installed. It supports multiple virtual drives and best of all, it's free.


Another thing that I installed which has become indispensable is a small utility called Start++. If you've ever used the older version of Windows Desktop Search on Windows XP, you might have been aware that you could easily create shortcuts or macro commands that could be launched from the search toolbar. Start++ brings that capability to the search feature in Vista. But it goes further in that it has even richer macro scripting and these macros also work at the command prompt as well. Down the road the author is promising an API where you will be able to create plug-ins for even richer commands.

I use Start++ to set up commands to quickly search Google, and I've also used it to set up shortcuts for quickly launching the remote desktop client and connecting to a specific machine, or for opening network folders that I use frequently. It has several built-in commands, but by far the most useful one is sudo, for launching a program with elevated privileges (UAC will still prompt you though). I can't recommend Start++ enough. It's so useful that I never open the start menu's run command anymore; I just hit the "Windows" key and type in a command to run the program I need.

Indexing network folders with Desktop Search

I love Windows Desktop Search. I've been using it on Windows for years (even in its earlier incarnation as the older indexing service, which Desktop Search evolved from and which, despite Google's complaints, has been a part of Windows since the early days of Windows NT). Searching is something that I firmly believe should be a core part of the OS, not an add-on. There are many things that can be accomplished once indexing and searching are services of the OS, and Vista is a great example of this. Searching feels natural, not like something tacked on, and all applications can share a common and universal API. Microsoft has given Vista a very smart architecture for indexing and searching with its iFilters, property handlers, protocol handlers, and store providers. But one of the things that is missing out of the box is the ability to index remote file locations. This is especially important if you share documents or media files from a network location. Luckily, Microsoft has released an add-on for Vista's indexing service that allows you to specify network folders to index. Simply install it and you will have the option to index network locations. There is also an add-on to index your Internet Explorer browser history as well, but I have not tried it yet.


That's it for now. I'll blog more about my experiences with Vista and any tips/workarounds that I stumble across.

Update: Since installing this newer RE2 drive firmware, my RAID array has been working flawlessly. I have not had a single timeout or error since. It appears that this firmware completely solved my issues.

Here are the links on the Western Digital site:
WDxxxxYS firmware update information
WDxxxxYS firmware download

A hard drive in my server crashed early last December. It was only about 3 years old, but it was also out of warranty. Instead of just replacing it, I decided to buy a new set of drives and build a RAID 5 array so that if one drive crashes in the future I will have some level of redundancy. After doing some research I chose to build a software RAID 5 array (yes, I know) because I wanted to be able to guarantee that I could move my RAID 5 array to any other Windows machine in case of hardware failure. I didn't want to worry about becoming dependent on a certain RAID controller with a certain revision, certain driver, etc... For the most part this has been a good decision.

In order to do this I also decided to switch to SATA drives, which meant I would need a PCI SATA controller since my server is a bit older and doesn't support SATA natively. I chose a basic Promise controller that had 4 SATA 3.0Gb/s ports. I then installed the controller and driver and easily built a 1.5TB RAID 5 array. All was well.

Then on December 18th my server mysteriously dropped one of the drives from the RAID array. In my event logs I saw a whole slew of device timeout messages for the failed drive. When I looked in the disk manager, sure enough, one of the disks was missing, but because it was a RAID 5 array, no data was lost (yet). I suspected that the drive was toast, so I shut down the machine, planning to reboot and run diagnostics in preparation for sending the drive back for replacement. However, once I rebooted, the drive came back online without any errors. I ran the diagnostics and they said the drive was fine. Windows happily rebuilt the RAID array and all was fine, until January 18th.

On January 18th the same thing happened again: a drive was dropped from the RAID array after a whole slew of device timeout messages. I figured that it was the same drive getting flakier, but then I noticed that it was a different drive this time. My next thought was that it must be a controller error. Perhaps the cheap Promise controller I bought was not the best decision. I ordered an Adaptec SATA PCI controller as a replacement and kept my fingers crossed that the array would not crash again before it arrived.

Once the new controller arrived I felt a little vindicated in my decision to go with software RAID. I simply swapped out the controllers and rebooted, and the RAID array came online without a hitch. Now, I felt, everything was going to be OK. That was, until February 18th.

On February 18th the system dropped yet another drive. The fact that it was happening almost exactly 4 weeks after the last two incidents was not lost on me. Could it have just been a strange coincidence? Whatever it was, it was clear to me that it was not just a controller issue. But neither was it a single drive, as each time it was a different drive that crashed. Perhaps it was some weird configuration error. I rebuilt the array (which takes 14 hours) and started poking around the system for things that could cause this.

I found all sorts of suspicious things, which would all eventually turn out to be red herrings: the disks set for auto spin-down, my UPS mysteriously disconnecting for a few seconds, which led to the server thinking it was running on batteries for a few moments, old bits of the Promise filter drivers still installed, etc... Each time I thought I had found the cause, until the array crashed again. By now, however, the array was crashing much more unpredictably and frequently (did I mention that it also almost always crashed when I was out of town?). I also started experiencing other strange issues on the server, such as the system clock jumping into the future whenever the RAID array crashed. At this point I resigned myself to believing that the old server hardware must be going south, so I set out to build a new server.

Transferring everything to a new server (domain, configuration, services, Exchange, SQL, IIS, data, etc...) turned out to be a LOT of work, more so because I also decided to build a new primary domain controller with all the important services in a virtual machine running on the new hardware (which is also a DC with little else running on it). It took me well over a week to plan things out and to transfer and set up all the domain services. The only worrisome part was when I attempted to transfer over my RAID array. The new server recognized it as an array but kept telling me that not all of the drives were present and that I would lose data if I imported it. After much research (and backing things up) I determined that this was probably not going to be the case, so I let it import the array, which it did instantly and perfectly. The RAID array was now transferred and functioning in the new server. Surely everything must be right now. By this time my RAID array had survived no less than 6 crashes without losing data, and each time the failing drive appeared to be fine after a reboot.

Then on July 3rd, while I was out of town, the new server dropped a drive from the RAID array again after a whole slew of device timeouts. At this point I was just going to send the drives back to Western Digital for replacement. There must be something wrong with them, I figured. As I prepared to request an RMA, I decided to download and run the diagnostic tools one more time. That is when I noticed that there was a firmware update for the Western Digital RE2 drives. When I read the description in their knowledge base I almost fell out of my seat (emphasis mine):

WD hard drives have an internal routine that is periodically executed as part of the internal “Data Lifeguard” process that enhances the operational life expectancy. While the drive is running this routine, if the drive encounters an error, the drive’s internal host/device timer for this routine is NOT canceled causing the drive to be locked in this routine, never becoming accessible to the host computer/controller. This condition can only be reset by a Power Cycle. WD has resolved this issue by making a change to the firmware so when a disk error is encountered, the host/device timer is checked first and then the routine is canceled allowing the drive to be accessible to the host computer/controller. The interval rate for the error condition to occur is 1-4 weeks, and will only occur if the drive encounters a disk error when running this routine.

Could it be that I was suffering from this? This seems to be a description of EXACTLY what I was experiencing every 1-4 weeks. I shut down my server and flashed all the drives with the newer firmware. Again, since I was running software RAID, I could ignore the warnings about not updating drives that are part of a RAID array since, to everything concerned, they are just a bunch of single drives. Note that this KB article seems to imply that my drives are in fact experiencing disk errors that are triggering this; perhaps they are and will still need replacing. So far no diagnostic tool shows that there are any. Unfortunately for me, only time will tell. Hopefully it will only be 1-4 weeks before I know.

Flux and Mutability

The mutable notebook of David Jade