rsync is both an interesting program and an interesting idea. rsync the program does a lot of interesting and useful things, but what has always been most interesting to me is the rsync algorithm: its ability to update two versions of the same file over a network, one local and one remote, while sending little more than the data that has actually changed. rsync the program uses this ability to efficiently copy or sync file changes between networked computers. Many others have built upon this idea to create general backup tools and services.
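
To give a flavor of the core trick, here is a minimal sketch of the weak rolling checksum described in the original rsync technical report, written in C# purely for illustration; real rsync pairs it with a strong per-block hash and differs in many details.

// Sketch of an rsync-style weak rolling checksum (illustrative, not rsync's actual code).
static class RollingChecksum
{
    const int Mod = 1 << 16;

    // Weak checksum of one block, packed as (b << 16) | a, where a is the byte sum
    // and b gives earlier bytes a larger weight so that byte order matters.
    public static uint Compute(byte[] data, int offset, int length)
    {
        int a = 0, b = 0;
        for (int i = 0; i < length; i++)
        {
            a = (a + data[offset + i]) % Mod;
            b = (b + (length - i) * data[offset + i]) % Mod;
        }
        return ((uint)b << 16) | (uint)a;
    }

    // Slide the window one byte to the right in O(1): drop 'outgoing', add 'incoming'.
    public static uint Roll(uint previous, byte outgoing, byte incoming, int length)
    {
        int a = (int)(previous & 0xFFFF);
        int b = (int)(previous >> 16);
        a = ((a - outgoing + incoming) % Mod + Mod) % Mod;
        b = ((b - length * outgoing + a) % Mod + Mod) % Mod;
        return ((uint)b << 16) | (uint)a;
    }
}

One side computes this checksum (plus a strong hash) for every fixed-size block of its copy of the file and sends those signatures over; the other side rolls the weak checksum across its own version byte by byte, sending block references where signatures match and literal data only where they don't.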

What has not been so great about rsync is that, as a UNIX program, it has never really worked well on Windows. Sure, there are 'ports', but because they are not native Windows applications many things were ignored or left by the wayside: file ACLs, permissions, NTFS alternate data streams, and so on. You can get it to work, and some have done a pretty good job of fixing it up to make it more Windows-friendly, but at its core it just doesn't have access to the information or APIs needed to truly integrate into the Windows ecosystem.

rsync itself is pretty old – it predates much of the Internet as we know it, dating back to 1996. The ideas and algorithms have been tweaked over the years but are stable at this point. Newer ideas, such as multi-round rsync, have been proposed to better deal with bandwidth-constrained clients such as mobile devices, but these are not currently used in rsync.

Years ago I remember reading about a new feature built into Windows Server 2003 R2 called Remote Differential Compression (RDC). At the time it was a technology only used as part of Windows DFS and not really applicable to anything else. Recently, while working on a side project, I started to wonder how hard it would be to port the rsync algorithm for use in my project. While researching this I came across RDC again and realized that it has since been built into every version of Windows since Vista as a low-level COM API that anyone can use. This RDC API is data-source agnostic, meaning that it can be used for files, databases, or any other form of raw data that needs to be efficiently transferred. Since Vista's release I suspect that RDC, or the ideas behind it, are at the core of Windows features like block-level Offline Files updating and BranchCache.

On the surface RDC seems a lot like rsync, but Microsoft has researched and developed it in significant new ways. Most notably, it is a recursive algorithm. Instead of exchanging one set of block hashes or signatures, RDC recursively processes the block signatures to generate smaller sets of signatures to send over the wire. It is as if RDC took the set of signatures for a file, wrote them to disk as a new file, and then, understanding that the local and remote signatures are probably similar (since the original files should be similar), applied the RDC algorithm again to generate a new set of signatures from the previous set, each time cutting the size of the block signatures to a fraction of what they were before. Instead of sending the original large set of signatures, it sends this reduced set and then uses the RDC algorithm to rebuild the larger set. In the end this means that even less data needs to be transferred over the wire.
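
Conceptually – and this is only a rough sketch of the idea, not the actual RDC COM API or its signature format – the recursion looks something like this: each level chunks and hashes the previous level's output, so only the smallest, deepest level of signatures has to be exchanged in full.

using System;
using System.Collections.Generic;
using System.IO;
using System.Security.Cryptography;

static class RdcRecursionSketch
{
    // Stand-in for one level of signature generation: chunk the input and hash each
    // chunk. Real RDC picks chunk boundaries from the content itself and uses its own
    // signature format; fixed-size SHA-256 chunks are used here only to illustrate.
    static byte[] SignOneLevel(byte[] input, int chunkSize)
    {
        using (var sha = SHA256.Create())
        using (var output = new MemoryStream())
        {
            for (int offset = 0; offset < input.Length; offset += chunkSize)
            {
                int length = Math.Min(chunkSize, input.Length - offset);
                byte[] hash = sha.ComputeHash(input, offset, length);
                output.Write(hash, 0, hash.Length);
            }
            return output.ToArray();
        }
    }

    // Each level signs the previous level's signature stream, shrinking it every time.
    public static List<byte[]> BuildSignatureLevels(byte[] data, int depth, int chunkSize = 2048)
    {
        var levels = new List<byte[]>();
        byte[] current = data;
        for (int i = 0; i < depth; i++)
        {
            current = SignOneLevel(current, chunkSize);
            levels.Add(current);
        }
        return levels;
    }
}

The deepest (smallest) level is sent directly; each level above it is then reconstructed with the same compare-and-patch transfer that is ultimately applied to the file data itself.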

Another novel idea in RDC is that you can use more than just the two versions of the same file to generate signatures. In fact, you can likely use parts of other files to further reduce the data that is transferred over the network. The idea comes from something like this: you may have made multiple copies of the same files, or cut and pasted similar data between files. Some files may even have blocks that are mostly the same despite being totally unrelated. Since transferring block signatures uses less bandwidth than sending the actual data, why not take advantage of this to reduce the transferred data even more? The problem is telling which parts of other local files might match parts of the remote file you want to transfer. You can't reasonably compare every file to every other file – that would take far too long. Instead, RDC pre-analyzes "traits" of each file and stores them so that candidate files can be compared quickly. This trait data is extremely small – around 96 bits per file – and can easily be stored in the file's metadata, in a database, or even in memory.
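
The research paper describes RDC's actual trait computation; the sketch below only illustrates the general idea using a min-hash-style scheme of my own choosing (hypothetical, not RDC's algorithm): a handful of tiny per-file values that tend to match when two files share content.

using System;
using System.Collections.Generic;

static class SimilarityTraitsSketch
{
    // Hypothetical illustration: for each of twelve seeds, keep the low byte of the
    // minimum mixed chunk-hash value, giving 12 x 8 = 96 bits of traits per file.
    // Files that share many chunks tend to share many traits, so comparing traits
    // is a cheap pre-filter for similarity candidates.
    public static byte[] ComputeTraits(IEnumerable<byte[]> chunkHashes, int traitCount = 12)
    {
        var traits = new byte[traitCount];
        for (int seed = 0; seed < traitCount; seed++)
        {
            ulong min = ulong.MaxValue;
            foreach (var chunkHash in chunkHashes)
            {
                // FNV-1a style mix of the chunk hash with the seed
                ulong h = 14695981039346656037UL ^ (ulong)seed;
                foreach (byte b in chunkHash)
                    h = (h ^ b) * 1099511628211UL;
                if (h < min) min = h;
            }
            traits[seed] = (byte)(min & 0xFF);
        }
        return traits;
    }
}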

It is no surprise to me that RDC remains unknown to most. Even though it has been a Windows feature with a public API for years now, the MSDN documentation on how to use RDC is severely lacking – most notably on how RDC recursively generates signatures and how to transfer, reconstruct, and use them. In my extensive searching I have only come up with two examples of how to use the RDC APIs (which are COM APIs, by the way). Both of these are complex, and one is completely broken despite being published as an official MSDN sample. The only truly working sample I could find is the one that ships as part of the Windows SDK, which you'll have to install to get to it. You'll find that sample in the Samples\winbase\rdc folder of your Windows SDK installation (at least in SDK v7.1 – I have not checked the Windows 8 SDK version).

As I mentioned, these samples are complex – the working C++ sample uses DCOM so that you can use a local and a remote machine. This means custom interfaces and layers of abstraction that make it hard to get to the core of what is really going on, not to mention that it is all cross-process. The second, broken sample is built for .NET, written in C#, and uses COM interop and a custom web service so that you can test against a local and a remote machine. Both of these samples, by the way, can be run on a single machine, which makes them a bit easier to follow, but you're still dealing with cross-process debugging. The .NET one is a little easier to follow as it uses less custom abstraction to hide the complexities of working with RDC. However, it is broken.

It was only by understanding the SDK's C++ RDC sample that I could understand how the .NET sample was broken. Remember how I said that RDC recursively generates signatures to cut down on the amount of bandwidth needed? Well, as it turns out, the .NET sample was generating these signatures both locally and remotely just fine. What it was not doing was transferring these signatures correctly. As best I can tell it assumed that there was only one level of signatures to be transferred – in fact it would work if there was only one level, which is typically what happens for very small files (in an unrelated bug it would simply crash with files over 64KB, which leads me to believe it was never tested with larger files). However, RDC by default decides how many levels of recursion should take place, and when it recurses it means that multiple levels of signatures need to be transferred (using the RDC algorithm itself, so the overall bandwidth stays low). Handling these multiple levels of transfers was what was broken. I was able to fix that and verify that it works no matter how many levels of signatures are used or how large the file is. In the process I added more logging, fixed other bugs, and cleaned a few things up as well. Here is that newer, fixed-up sample RDC .NET project. It is by no means ready for production (or even useful as an rsync alternative) but it does show how to use RDC with .NET.

RDC.NET sample on Bitbucket.org: https://bitbucket.org/davidjade/rdc.net 

Note: If you're not a Mercurial user and you just want the source as a .zip file, just go to the Downloads | Branches section of the project and select your archive format.

While complex to use, RDC works really well. In my sample data I used two bitmap files of the same photograph, one of which had been altered by writing the word "changed" in a 72pt font across it. Since these files were saved as uncompressed bitmaps (as opposed to JPEGs) they were mostly the same. The total file size was 2.4MB, but by using RDC to transfer the changed version from a remote location, the total RDC bandwidth needed was only 217KB, an over 90% reduction in the bandwidth needed to update the file. The RDC overhead is likely too great for smaller files, but for larger files it can really pay off, especially if you are bandwidth constrained.

RDC was developed several years ago – the research paper on RDC was published in 2006, and the samples date to shortly after that time as well. I would encourage you to check out that research paper if you really want to understand how RDC functions. It is a great example of how Microsoft Research really is producing novel and significant advances in computing. It is also the missing documentation needed to make the best use of the minimal MSDN documentation on the RDC COM API.

One last note: as sometimes unfortunately happens in the world of Windows, misinformation has spread among Windows users (the type that think every service they don't use directly should be turned off) that RDC should be uninstalled so that it somehow fixes slow network file transfers. Besides the fact that this makes no sense whatsoever, and that RDC isn't used for anything in a client Windows OS, it may be something you have to contend with if you build an app that uses RDC. The myth is so widespread that Microsoft had to write about it.


Scheme (the hard way)

Published 10/23/2012 by David Jade in Programming

TLDR; I took a lot of complex Java source code for a complete R5RS Scheme interpreter and converted it to C# and it worked! It was much harder than this little TLDR makes it sound.

I’ve been on a Scheme learning kick lately because it was something I always wanted to do for various reasons. My original goal was just to read and work through the classic SICP book. I’m not very far along on that front because at some point I needed a Scheme system, got sidetracked, and got sucked into the totally ridiculous idea of porting a Java-based R5RS-compliant Scheme interpreter (SISC) to C# and .NET. I’m not really sure why; it just sounded challenging and interesting to me, and I was drawn in by the somewhat naïve and seemingly possible notion that it could mostly be done mechanically. Plus there really are no full-featured Scheme interpreters or compilers for .NET* – the few that exist leave out features like support for full continuations and tail calls, stuff that would be useful for really learning Scheme. Although it hasn’t been updated for a few years, SISC claims to be a fully R5RS-compliant Scheme interpreter.

I started with a machine translation of the Java source code (using Sharpen). Even getting Sharpen to work was an unusual chore (my fault mostly), and the C# it spit out would not build. SISC is comprised of 203 Java class files – 18,000+ lines of actual source statements. What Sharpen gave me is currently much more C# code than that because it uses a lot of custom classes and shims to match Java’s intentions, functionality, etc. It’s (hopefully) a lot of extra layering that I will be able to mostly remove with careful, targeted rewrites to native C#.

Getting it to build wasn’t too hard, but it took time. Sharpen’s conversion doesn’t have a shim for everything, so sometimes the code it translates calls class methods that don’t exist in .NET. Other times it omits lines of Java code altogether (I’m still manually checking for those cases here and there). So I had to write more shim code and hand-translate what was missing. Getting it from simply building to actually working, though, was far harder.

Oh, did I mention that I am not really a Java programmer? I’ve dabbled here and there before but nothing as complex as this and many of Java’s conventions and idioms were less than obvious to me.

The hardest part was that the SISC code base is complex and hard to understand. It seems to be very well written though. There are two primary ways to write a Scheme system: an interpreter or a compiler (either to native code, a byte code VM, or cross compilation to another language like C). Both are complex things to write and understand, especially if you did not write them. SISC is an interpreter which means that much of the code is just plain Java. But that code looks up symbols, builds expression trees, applies procedures, manages registers and call stacks – all abstractions of the Scheme language it is interpreting. In doing so the source code for SISC leverages some features of Java that have to be carefully understood when translating to another language.

First off, Java has some different notions about things like equality from .NET. For instance, in Java if you have two List instances and use equals() to compare them, the contents of the lists are checked to determine equality. In .NET this is rarely the case, as Equals() usually just settles for reference equality. So, more shim code to fix that.
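
Here is a minimal sketch of the kind of shim involved (a hypothetical helper of mine, not the actual shim layer in the port), giving converted code Java-style content equality for lists:

using System;
using System.Collections.Generic;

static class JavaCollectionShims
{
    // Java's List.equals() compares element by element; converted code calls a
    // helper like this instead of relying on .NET's reference-based Equals().
    public static bool ContentEquals<T>(IList<T> a, IList<T> b)
    {
        if (ReferenceEquals(a, b)) return true;
        if (a == null || b == null || a.Count != b.Count) return false;
        for (int i = 0; i < a.Count; i++)
        {
            if (!Equals(a[i], b[i])) return false;
        }
        return true;
    }
}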

Collection classes can also be pretty different, with Java supporting things like mutable views into List instances via subList(). Some Java collection classes have no equivalents in .NET. The Java SISC source was also not written to use generics and instead liberally uses Object, which makes it hard to tell what is intended to go into any collection instance (and often it is a mixed bag).

Then there are things like WeakHashMap and ReferenceQueue, which not only do not exist in .NET but are nearly impossible to reproduce, since the CLR doesn’t publicly expose the garbage collector hooks necessary to write .NET equivalents that follow Java semantics. To create similar types of things in .NET you have to liberally use WeakReferences together with polling and/or class finalizers. It works but is not ideal.
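
To make that concrete, here is a rough sketch of the WeakReference-plus-polling pattern (a hypothetical weak-value map of my own, not the shim actually used in the port; note that Java's WeakHashMap holds its keys weakly, so a faithful substitute applies the same idea to keys):

using System;
using System.Collections.Generic;

class WeakValueMap<TKey, TValue> where TValue : class
{
    private readonly Dictionary<TKey, WeakReference> map = new Dictionary<TKey, WeakReference>();

    public void Put(TKey key, TValue value)
    {
        map[key] = new WeakReference(value);
    }

    public TValue Get(TKey key)
    {
        WeakReference weakRef;
        if (map.TryGetValue(key, out weakRef))
        {
            var target = weakRef.Target as TValue;
            if (target != null) return target;
            map.Remove(key); // the value has been garbage collected
        }
        return null;
    }

    // With no ReferenceQueue equivalent, dead entries have to be swept by polling.
    public void Prune()
    {
        var dead = new List<TKey>();
        foreach (var entry in map)
            if (!entry.Value.IsAlive) dead.Add(entry.Key);
        foreach (var key in dead) map.Remove(key);
    }
}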

Finally there are exceptions, which normally wouldn’t have been a big deal, but the SISC code base unfortunately relies on using exceptions for non-exceptional things, like rewinding the interpreter’s execution when failing to convert a piece of text that might be a number (or a symbol that just looks like a number) into a number. Catching exceptions happens polymorphically, which means that if your exception class hierarchies are out of sync it’s going to be a world of hurt. Telling the “normal” exceptions apart from the true failure exceptions was tough. Luckily, it turned out to be not as bad as it could have been, but I had to spend considerable time studying both Java’s and .NET’s exception class hierarchies to see where they were out of sync and fix up the converted code so that the original intentions were preserved. I am sure there are more similar Java-to-C# gotchas waiting for me in this converted code base that I have yet to uncover.

The final complication came in the form of Scheme itself. The core of an R5RS Scheme is a very small, simple language containing just 23 constructs, 11 of which can be built from combinations of the other 12. The entire R5RS language spec is less than 50 pages. To have a functioning Scheme that does something interesting, though, requires much more: a set of primitive functions (math, string functions, basic I/O APIs, etc.) and a Scheme library so that it can interface with input and output devices, file systems, and so on. Most of these primitives are contained within the SISC interpreter as Java (and now C#) native code, but much of the library is actual Scheme code that uses these lower-level native primitives to build a more complete and useful library. This Scheme library code amounts to more than 70,000 lines of Scheme code that gets built into a heap image during the build process. The trick is that you need a functioning Scheme system to build this Scheme heap. However, SISC was designed to be self-bootstrapping, which means that it can load and parse a small set of special Scheme source files and spit out a binary heap image to a file that can then be reloaded when running the full interpreter.

All throughout this project I had the nagging question of “how will I know when it’s really working?” It’s an interpreter, so unless it is interpreting lots of code it would never be tested very well. However, at this point I could barely understand Scheme source, much less write any. The surprise answer came when I finally understood that the self-bootstrapping process was in fact executing lots and lots of actual Scheme code, parsing it and building internal representations of Scheme s-expressions. Moving beyond having it simply not blow up, though, verifying that the heap image it was creating was correct turned out to be very difficult.

The heap is structured as a memory image of serialized s-expressions with lots of pointer indirection in the form of dynamically generated integer IDs that index into tables of class types, expressions, data fragments, and BER-encoded integers. To make matters worse, most of the internal data structures that get serialized are stored internally in hash tables, which means that these sets usually get written out in a slightly different order each time (i.e., enumerating hash set entries with dynamically generated symbol names is not guaranteed to be deterministically ordered from run to run). In the end I squashed the bugs by modifying both code bases to be completely deterministic when writing heap images and by outputting copious amounts of trace logs. This way I could do a binary comparison of the heap image files and logs, as well as step through each code base side by side to see where they differed when reloading the heap. It was the most frustrating part of the project, but it gave me peace of mind in the end to know that internally they were producing the exact same set of data structures. It also meant that they were interpreting lots of Scheme code identically to produce those data structures. This was as close to a testing framework as I was likely to get: comparing a working Java version to the converted C# version while processing Scheme source code.

All in all it took a few weeks, but I got it working – not complete, but complete enough to be useful (some parts of the library are not ported yet, like the networking functions, the FFI, etc.). It was one of those things where it seemed possible at first glance, then impossible once I got into it, then suddenly and surprisingly possible again the next moment – flip-flop, flip-flop with each little bit of progress I made. I came close to stopping several times, but then I’d have a little breakthrough and it would seem like the finish line was very near (only for it not to be). In the end it just suddenly, mostly started working, with minor little bugs here and there. I can’t quite prove that it completely works, but I’m able to learn Scheme and use the little bits of Scheme snippets and test cases I find around the net. I still don’t fully understand how the interpreter works, though – it is very hard to mentally trace through the code.

All so I could learn stuff like this: http://www.willdonnelly.net/blog/scheme-syntax-rules/

Scheme’s ability to extend the language syntax is fascinating to me. You can write a whole new language right inside of Scheme with new syntax rules and everything.

Unfortunately, SISC.net is a ways from being put out there for anyone else to use, but someday I hope to do so. The issue is that it contains some source code that is OK for me to use but not so OK for me to redistribute (Reflector *cough* *cough*). When I’ll be able to do so I don’t quite know. This was my fun little fall learning project and I’m done for a while now. If you’re a Seasoned Schemer and this is something that would be useful to you, knowing full well that it is experimental at best, leave me a comment to prod me along to get it into a more stable release state.

P.S. I also wrote a Forth threaded interpreter in .NET but it was much, much simpler than any of this and I had an old but good reference for how to do so. It’s the project I never completed from my early days of computing. It’s not quite finished yet either (it needs a bit of a re-write to solve some issues) but I’m working on getting it out as well. Forth is a magical little language.

* The exception is the Common Larceny project but its future does not look too bright: https://lists.ccs.neu.edu/pipermail/larceny-users/2010-October/000829.html


There are quite a few discussions and examples on using DynamicObject to create wrappers for XML documents already out there. However, I found most of them lacking. Here’s my take on this.

This is largely based on this example: Dynamic in C# 4.0: Creating Wrappers with DynamicObject (you’ll probably need to read this first to understand what I’m talking about here as mine is an extended version of this).

I made some important additions like handling attributes and dealing with collections of elements. My goal here was to keep the fluency of working with XML this way, not to cover every base. It’s not a perfect abstraction by any means nor is it a complete one. But it works for me.

For accessing XML attributes, I went with an indexer syntax like a dictionary class:

dynamicXMLObject["AttributeName"] = "AttributeValue"

Note that if you set a value to null it removes the attribute. Setting a value for an existing attribute replaces it. This all happens in the TrySetIndex() and TryGetIndex() methods of this wrapper class and is pretty self-explanatory.

 

For dealing with collections of elements, you just access the underlying XElement collection method of choice:

dynamic elements = dynamicXMLObject.Elements()

This was possible in the original example, but I added the ability to return wrapped collections of DynamicXMLNode. When you call any underlying method that returns an IEnumerable<XElement> collection, I instead return an IEnumerable<DynamicXMLNode> collection. This bit of magic is probably the most interesting part, as I detect the return type for any underlying method called and substitute in the wrapped collection as necessary. This allows all the members of XElement that return collections of XElement to return collections of DynamicXMLNode instead, transparently. Note that in your dynamic code you’ll need to cast this collection to IEnumerable<dynamic> to access all the handy LINQ extension methods, etc.

Along those same lines, I also detect and convert underlying method parameters so that you can call XElement.Add(), etc., and pass in DynamicXMLNode. When you do, I substitute the inner wrapped XElement as necessary. This allows for transparent use of DynamicXMLNode in most cases.

Both of these detections and substitutions happen in TryInvokeMember(). The combination of the two allows any of the underlying methods of the wrapped XElement to transparently work with the DynamicXMLNode wrapper class.

I also cloned some of the useful static XElement methods, like Load() and Parse(). I would have liked to handle this dynamically like other method calls but it seems you can’t trap static method calls with DynamicObject. Oh well.

The only hitch is that for XElement methods that expect an XName object, like Elements() or Descendants(), you’ll have to cast any strings you use to an XName in your dynamic code. Normally with static typing this is not necessary, as it happens behind the scenes via an implicit cast done for you. However, with dynamic code the compiler can’t do this for you.

I suppose that I could snoop the parameters to all the methods of XElement and do this cast automatically, but I’ll leave that for the next person to figure out.

public class DynamicXMLNode : DynamicObject
{
    private XElement node;
    
    public DynamicXMLNode(XElement node)
    {
        this.node = node;
    }

    public DynamicXMLNode(String name)
    {
        node = new XElement(name);
    }

    public static DynamicXMLNode Load(string uri)
    {
        return new DynamicXMLNode(XElement.Load(uri));
    }

    public static DynamicXMLNode Load(Stream stream)
    {
        return new DynamicXMLNode(XElement.Load(stream));
    }

    public static DynamicXMLNode Load(TextReader textReader)
    {
        return new DynamicXMLNode(XElement.Load(textReader));
    }

    public static DynamicXMLNode Load(XmlReader reader)
    {
        return new DynamicXMLNode(XElement.Load(reader));
    }

    public static DynamicXMLNode Load(Stream stream, LoadOptions options)
    {
        return new DynamicXMLNode(XElement.Load(stream, options));
    }

    public static DynamicXMLNode Load(string uri, LoadOptions options)
    {
        return new DynamicXMLNode(XElement.Load(uri, options));
    }

    public static DynamicXMLNode Load(TextReader textReader, LoadOptions options)
    {
        return new DynamicXMLNode(XElement.Load(textReader, options));
    }

    public static DynamicXMLNode Load(XmlReader reader, LoadOptions options)
    {
        return new DynamicXMLNode(XElement.Load(reader, options));
    }

    public static DynamicXMLNode Parse(string text)
    {
        return new DynamicXMLNode(XElement.Parse(text));
    }

    public static DynamicXMLNode Parse(string text, LoadOptions options)
    {
        return new DynamicXMLNode(XElement.Parse(text, options));
    }

    public override bool TrySetMember(SetMemberBinder binder, object value)
    {
        XElement setNode = node.Element(binder.Name);
        if (setNode != null)
        {
            setNode.SetValue(value);
        }
        else
        {
            if (value.GetType() == typeof(DynamicXMLNode))
            {
                node.Add(new XElement(binder.Name));
            }
            else
            {
                node.Add(new XElement(binder.Name, value));
            }
        }
        return true;
    }

    public override bool TryGetMember(GetMemberBinder binder, out object result)
    {
        XElement getNode = node.Element(binder.Name);
        if (getNode != null)
        {
            result = new DynamicXMLNode(getNode);
            return true;
        }
        else
        {
            result = null;
            return false;
        }
    }

    public override bool TryInvokeMember(InvokeMemberBinder binder, object[] args, out object result)
    {
        Type xmlType = typeof(XElement);
        try
        {
            // unwrap parameters: if the parameters are DynamicXMLNode then pass in the inner XElement node
            List<object> newargs = null;
            var argtypes = args.Select(x => x.GetType()); // because GetTypeArray is not supported in Silverlight
            if (argtypes.Contains(typeof(DynamicXMLNode)) || argtypes.Contains(typeof(DynamicXMLNode[])))
            {
                newargs = new List<object>();
                foreach (var arg in args)
                {
                    if (arg.GetType() == typeof(DynamicXMLNode))
                    {
                        newargs.Add(((DynamicXMLNode)arg).node);
                    }
                    else if (arg.GetType() == typeof(DynamicXMLNode[]))
                    {
                        // unwrap array of DynamicXMLNode
                        newargs.Add(((DynamicXMLNode[])arg).Select(x => ((DynamicXMLNode)x).node));
                    }
                    else
                    {
                        newargs.Add(arg);
                    }
                }
            }

            result = xmlType.InvokeMember(
                      binder.Name,
                      System.Reflection.BindingFlags.InvokeMethod |
                      System.Reflection.BindingFlags.Public |
                      System.Reflection.BindingFlags.Instance,
                      null, node, newargs == null ? args : newargs.ToArray());


            // wrap return value: if the results are an IEnumerable<XElement>, then return a wrapped collection of DynamicXMLNode
            if (result != null && typeof(IEnumerable<XElement>).IsAssignableFrom(result.GetType()))
            {
                result = ((IEnumerable<XElement>)result).Select(x => new DynamicXMLNode(x));
            }
            // Note: we could wrap single XElement too but nothing returns that

            return true;
        }
        catch
        {
            result = null;
            return false;
        }
    }

    public override bool TrySetIndex(SetIndexBinder binder, object[] indexes, object value)
    {
        if (indexes[0].GetType() == typeof(String))
        {
            node.SetAttributeValue((string)indexes[0], value);
            return true;
        }
        else
        {
            return false;
        }
    }

    public override bool TryGetIndex(GetIndexBinder binder, object[] indexes, out object result)
    {
        if (indexes[0].GetType() == typeof(String))
        {
            XAttribute attr = node.Attribute((string)indexes[0]);
            if (attr != null)
            {
                result = attr.Value;
                return true;
            }
            else
            {
                result = null;
                return false;
            }
        }
        else
        {
            result = null;
            return false;
        }
    }


    public override bool TryConvert(ConvertBinder binder, out object result)
    {
        if(binder.Type == typeof(XElement))
        {
                result = node;
        }
        else if(binder.Type == typeof(String))
        {
                result = node.Value;
        }
        else
        {
            result = null;
            return false;
        }

        return true;
    }
}
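
Here is a quick usage sketch of the wrapper (the XML, element, and attribute names below are made up for illustration; it assumes the usual using directives for System, System.Collections.Generic, System.Linq, and System.Xml.Linq):

dynamic root = DynamicXMLNode.Parse("<order id=\"42\"><customer>Ann</customer></order>");

string id = root["id"];                    // attribute access via the indexer
root["status"] = "shipped";                // add or replace an attribute
root.customer = "Bob";                     // set the value of an existing element
root.note = "rush order";                  // create a new child element

// Underlying XElement methods still work; collections come back wrapped, and
// strings must be cast to XName explicitly in dynamic code.
var children = (IEnumerable<dynamic>)root.Elements((XName)"customer");

XElement raw = root;                       // TryConvert hands back the wrapped XElement
Console.WriteLine(raw);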
 

Scratch that.

I’ve been collaborating with someone on a new and better way to work with subrepos in Mercurial. We wrote a Mercurial extension that adds commands for working directly on subrepos. It is much more capable than my original script.

You can find the Mercurial extension here: http://bitbucket.org/kirbysayshi/hgsubrepo

You install it like any other Mercurial extension, just place the file somewhere and register the extension in your settings file. You can find details about that process here.

Once installed, you can get help on using it by executing “hg help subrepo”.

 

 

I’ve recently switched to Mercurial for source control. It’s a good system but it’s a little rough around the edges sometimes. One area is subrepo support. It’s quite usable, but there’s one thing missing that really bugs me: recursive status when using nested repos (i.e., combined status for all subrepos). Right now Mercurial’s status command knows nothing about nested repos. It’s a pain, since Mercurial’s commit command will commit all nested repos and without proper status support you can’t tell what will be committed.

However, it’s pretty simple to automate getting this status with a little batch file:

@echo off
rem Find every .hg directory below the current one, then run "hg status" in its
rem parent directory (the repo's working directory), passing along any arguments.
for /d /r %%d in (*.hg) do (
    pushd %%d
    cd ..
    rem "cd" with no arguments prints the repo path as a header for its status output
    cd
    hg status %*
    popd
)

This is a very basic version, but it works for getting basic status for both the current repo and all nested repos. It searches for all repos and executes an hg status in each working directory, passing along any arguments that you specify to hg status.

I’m sure someone could also patch this directly into Mercurial, perhaps via an extension but this is simple and it works. Hopefully this is something that won’t be needed for too much longer.

 


One of the best new features of .NET 3.5 is LINQ. It's great for accessing data from databases like SQL Server but it's far more useful than just that. LINQ to objects is the killer feature in .NET 3.5 and it can change the way you write code for the better.

I am finding more cases where I can use LINQ for general data manipulation to simplify the code that I write. Coupled with Extension Methods it's a really powerful new way to write algorithms over your data types.

One of my favorite new tricks is to use LINQ to make recursion over a tree of objects simpler. There are many cases where this type of pattern is possible. For instance, when working with controls in ASP.NET or Windows Forms, each Control has a property named Controls, which is a collection of its child controls. If you need to perform some operation over all of them, you can gather them all in just one LINQ statement similar to this:

var ChildControls = from nested in 
                    myform.Controls.Cast<System.Web.UI.Control>().Descendants(
                        c => c.Controls.Cast<System.Web.UI.Control>())
                    select nested;

 The above code is a lot simpler to express than the recursion code that would have been necessary to work with each of the nested controls. And if you need to filter the child controls returned you can just add a LINQ Where clause (or any of the other LINQ operators).

Note: The Cast<T> function above is necessary since the Controls property returns a non-generic IEnumerable. Cast<T> simply wraps the non-generic IEnumerable and returns a typed IEnumerable<T>.

The trick that makes this work is a function called Descendants, which is an extension method that I wrote. You can use it with any IEnumerable<T> collection, and it descends into nested collections by way of a Func<T, IEnumerable<T>> lambda that you supply. In the above example I pass it a lambda function that tells it how to retrieve the nested child controls.

Here is that extension method:

static public class LinqExtensions
{
    static public IEnumerable<T> Descendants<T>(this IEnumerable<T> source, 
                                                Func<T, IEnumerable<T>> DescendBy)
    {
        foreach (T value in source)
        {
            yield return value;

            foreach (T child in DescendBy(value).Descendants<T>(DescendBy))
            {
                yield return child;
            }
        }
    }
}

 

You could just as easily write a lambda function to descend into a tree structure or any other data structure, as long as it implements IEnumerable<T>. Since you supply the function used to descend into the data structure, Descendants is a generic way to traverse nested data structures.
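
For example, with a simple hypothetical TreeNode type (made up here just for illustration), the same extension method flattens an entire tree:

public class TreeNode
{
    public string Name { get; set; }
    public List<TreeNode> Children = new List<TreeNode>();
}

// ...

var root = new TreeNode { Name = "root" };
root.Children.Add(new TreeNode { Name = "child1" });
root.Children.Add(new TreeNode { Name = "child2" });
root.Children[0].Children.Add(new TreeNode { Name = "grandchild" });

// Yields root, child1, grandchild, child2 – each node followed by its descendants
var allNodes = new[] { root }.Descendants(n => n.Children);

foreach (var node in allNodes)
    Console.WriteLine(node.Name);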

