Scheme (the hard way)

Published 10/24/2012 by David Jade in Programming

TL;DR: I took the complex Java source code for a complete R5RS Scheme interpreter and converted it to C#, and it worked! It was much harder than this little TL;DR makes it sound.

I’ve been on a Scheme learning kick lately because it was something I always wanted to do for various reasons. My original goal was just to read and work through the classic SICP book. I’m not very far along on that front because at some point I needed a Scheme system, got sidetracked, and got sucked into the totally ridiculous idea of porting a Java-based R5RS-compliant Scheme interpreter (SISC) to C# and .NET. I’m not really sure why; it just sounded challenging and interesting, and I was drawn in by the somewhat naïve but seemingly plausible notion that it could mostly be done mechanically. Plus, there really are no full-featured Scheme interpreters or compilers for .NET* – the few that exist leave out features like full continuations and proper tail calls, stuff that would be useful for really learning Scheme. Although it hasn’t been updated for a few years, SISC claims to be a fully R5RS-compliant Scheme interpreter.

I started with a machine translation of the Java source code (using Sharpen). Even getting Sharpen to work was an unusual chore (my fault, mostly), and the C# it spit out would not build. SISC comprises 203 Java class files – 18,000+ lines of actual source statements. What Sharpen gave me is currently much more C# code than that, because it uses a lot of custom classes and shims to match Java’s intentions, functionality, etc. It’s (hopefully) a lot of extra layering that I will be able to mostly remove with careful, targeted rewrites to native C#.

Getting it to build wasn’t too hard, but it took time. Sharpen’s conversion doesn’t have a shim for everything, so sometimes the code it translates calls class methods that don’t exist in .NET. Other times it skips lines of Java code altogether (I’m still manually checking for those cases here and there). So I had to write more shim code and hand-translate what was missing. Getting it from simply building to actually working, though, was far harder.

Oh, did I mention that I am not really a Java programmer? I’ve dabbled here and there before, but nothing as complex as this, and many of Java’s conventions and idioms were less than obvious to me.

The hardest part was that the SISC code base is complex and hard to understand. It seems to be very well written though. There are two primary ways to write a Scheme system: an interpreter or a compiler (either to native code, a byte code VM, or cross compilation to another language like C). Both are complex things to write and understand, especially if you did not write them. SISC is an interpreter which means that much of the code is just plain Java. But that code looks up symbols, builds expression trees, applies procedures, manages registers and call stacks – all abstractions of the Scheme language it is interpreting. In doing so the source code for SISC leverages some features of Java that have to be carefully understood when translating to another language.

First off, Java has some different notions about things like equality from .NET. For instance, in Java if you have two List instances and use equals() to compare them, the contents of the lists are checked to determine equality. In .NET this is rarely the case, as Equals() usually just settles for reference equality. So, more shim code to fix that.
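To illustrate the difference, here is a minimal sketch (mine, not from the actual port – the real shims differ) of the kind of content-based equality helper the converted code needs:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class EqualityShims
{
    // A hypothetical shim restoring Java's List.equals() semantics:
    // Java compares list contents; .NET's List<T>.Equals() is just
    // reference equality inherited from Object.
    public static bool JavaListEquals<T>(IList<T> x, IList<T> y)
    {
        if (ReferenceEquals(x, y)) return true;
        if (x == null || y == null) return false;
        return x.SequenceEqual(y); // element-by-element, like Java's equals()
    }
}
```

With this in place, two separately built `List<int> { 1, 2, 3 }` instances compare unequal via Equals() but equal via JavaListEquals().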

Collection classes can also be pretty different, with Java supporting things like mutable views into List instances via subList(). Some Java collection classes have no equivalents in .NET. The Java SISC source was also not written to use generics and instead liberally uses Object, which makes it hard to tell what is intended to go into any collection instance (and often it is a mixed bag).
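The subList() gap is worth a concrete sketch. C#’s closest built-in, List&lt;T&gt;.GetRange(), returns a copy rather than a live view, so a shim along these lines (hypothetical, and much simpler than a faithful port would need) is required:

```csharp
using System;
using System.Collections.Generic;

// A hypothetical shim approximating Java's List.subList(): a live
// view over a slice of a List<T>, where reads and writes pass
// through to the backing list.
public class SubListView<T>
{
    private readonly List<T> backing;
    private readonly int offset;
    private readonly int length;

    public SubListView(List<T> backing, int fromIndex, int toIndex)
    {
        this.backing = backing;
        offset = fromIndex;
        length = toIndex - fromIndex;
    }

    // Unlike GetRange(), mutating the view mutates the original list.
    public T this[int index]
    {
        get { return backing[offset + index]; }
        set { backing[offset + index] = value; }
    }

    public int Count { get { return length; } }
}
```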

Then there are things like WeakHashMap and ReferenceQueue, which not only do not exist in .NET but are nearly impossible to recreate, since the CLR doesn’t publicly expose the garbage collector hooks necessary to write .NET equivalents that follow Java’s semantics. To create similar types of things in .NET you have to liberally use WeakReference together with polling and/or class finalizers. It works, but it is not ideal.
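A rough sketch of the WeakReference-plus-polling approach described above (this is an illustration of the technique, not the code SISC.net actually uses – note it also cheats by using identity rather than equals()-style key comparison):

```csharp
using System;
using System.Collections.Generic;

// Approximates Java's WeakHashMap: entries vanish once their keys are
// garbage collected. Java learns of dead keys via ReferenceQueue
// callbacks; here we must poll for them ourselves.
public class PollingWeakMap<TKey, TValue> where TKey : class
{
    private readonly List<KeyValuePair<WeakReference, TValue>> entries =
        new List<KeyValuePair<WeakReference, TValue>>();

    public void Add(TKey key, TValue value)
    {
        Prune(); // poll for dead keys on each mutation
        entries.Add(new KeyValuePair<WeakReference, TValue>(new WeakReference(key), value));
    }

    public bool TryGetValue(TKey key, out TValue value)
    {
        foreach (var e in entries)
        {
            if (ReferenceEquals(e.Key.Target, key)) { value = e.Value; return true; }
        }
        value = default(TValue);
        return false;
    }

    public int Count { get { Prune(); return entries.Count; } }

    // Drop entries whose keys have been collected.
    private void Prune()
    {
        entries.RemoveAll(e => !e.Key.IsAlive);
    }
}
```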

Finally, there are exceptions, which normally wouldn’t have been a big deal, but the SISC code base unfortunately relies on using exceptions for non-exceptional things – like rewinding the interpreter’s execution when failing to convert a piece of text that might be a number (or a symbol that just looks like a number) into a number. Catching exceptions happens polymorphically, which means that if your exception class hierarchies are out of sync it’s going to be a world of hurt. Telling the “normal” exceptions apart from the true failure exceptions was tough. Luckily, it turned out to be not as bad as it could have been, but I had to spend considerable time studying both Java’s and .NET’s exception class hierarchies to see where they were out of sync and fix up the converted code so that the original intentions were preserved. I am sure there are more similar Java-to-C# gotchas waiting for me in this converted code base that I have yet to uncover.
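A minimal sketch of the pattern in question (the names and shape are mine, not SISC’s): a FormatException here is “normal” control flow meaning “this token is a symbol, not a number,” and must not be confused with a true failure:

```csharp
using System;
using System.Globalization;

public static class AtomReader
{
    // Reads a token that may be a number or a symbol. A FormatException
    // is expected control flow ("not a number after all"); a true
    // failure like OverflowException is deliberately NOT caught and
    // propagates up – exactly the distinction that gets lost when the
    // two languages' exception hierarchies drift out of sync.
    public static object ReadAtom(string text)
    {
        try
        {
            return long.Parse(text, NumberStyles.Integer, CultureInfo.InvariantCulture);
        }
        catch (FormatException)
        {
            return text; // treat it as a symbol (represented here as a string)
        }
    }
}
```

For example, `ReadAtom("42")` yields the number 42, while `ReadAtom("1+")` – a perfectly legal Scheme identifier – yields the symbol.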

The final complication came in the form of Scheme itself. The core of an R5RS Scheme is a very small, simple language containing just 23 constructs, 11 of which can be built from combinations of the other 12. The entire R5RS language spec is less than 50 pages. To have a functioning Scheme that does something interesting, though, requires much more: a set of primitive functions (math, string functions, basic I/O APIs, etc.) and a Scheme library so that it can interface with input and output devices, file systems, and so on. Most of these primitives are contained within the SISC interpreter as Java (and now C#) native code, but much of the library is actual Scheme code that uses these lower-level native primitives to build a more complete and useful library. This Scheme library code amounts to more than 70,000 lines of Scheme code that gets built into a heap image during the build process. The trick is, you need a functioning Scheme system to build this Scheme heap. However, SISC was designed to be self-bootstrapping, which means it can load and parse a small set of special Scheme source files and spit out a binary heap image to a file that can then be reloaded when running the full interpreter.

All throughout this project I had the nagging question of “how will I know when it’s really working?” It’s an interpreter, so unless it is interpreting lots of code it would never be tested very well. However, at this point I could barely understand Scheme source, much less write any. The surprise answer came when I finally understood that the self-bootstrapping process was in fact executing lots and lots of actual Scheme code, parsing it and building internal representations of Scheme s-expressions. Moving beyond having it simply not blow up, though, verifying that the heap image it was creating was correct turned out to be very difficult.

The heap is structured as a memory image of serialized s-expressions with lots of pointer indirection in the form of dynamically generated integer IDs that index into tables of class types, expressions, data fragments, and BER-encoded integers. To make matters worse, most of the internal data structures that get serialized are stored internally in hash tables, which means that when serializing these sets they usually get written out in a slightly different order each time (i.e., enumerating hash set entries with dynamically generated symbol names is not guaranteed to be deterministically ordered from run to run). In the end I squashed the bugs by modifying both code bases to be completely deterministic when writing heap images and to output copious amounts of trace logs. This way I could do a binary comparison of the heap image files and logs, as well as step through each code base side by side to see where they differed when reloading the heap. It was the most frustrating part of the project, but it gave me peace of mind in the end to know that internally they were producing the exact same set of data structures. This also meant that they were interpreting lots of Scheme code identically to produce those data structures. This was as close to a testing framework as I was likely to get: comparing the working Java version to the converted C# version as they processed Scheme source code.
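The determinism fix boils down to one idea: sort hash-backed collections before serializing them. A toy sketch (hypothetical names; the real heap writer is far more involved):

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;

public static class DeterministicHeapWriter
{
    // Serializes a symbol table to bytes. Enumerating a hash-based
    // dictionary has no guaranteed order, so we sort entries by key
    // (ordinal) first; the output is then byte-for-byte reproducible
    // regardless of insertion order or hash seeding.
    public static byte[] Write(Dictionary<string, int> symbolTable)
    {
        using (var ms = new MemoryStream())
        using (var w = new BinaryWriter(ms))
        {
            foreach (var entry in symbolTable.OrderBy(e => e.Key, StringComparer.Ordinal))
            {
                w.Write(entry.Key);
                w.Write(entry.Value);
            }
            return ms.ToArray();
        }
    }
}
```

Having both the Java and C# sides write in the same canonical order is what made binary comparison of the two heap images meaningful.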

All in all it took a few weeks, but I got it working – not complete, but complete enough to be useful (some parts of the library are not ported yet, like the networking functions, the FFI, etc.). It was one of those things where it seemed possible at first glance, then impossible once I got into it, then suddenly and surprisingly possible again the next moment – flip-flop, flip-flop with each little bit of progress I made. I came close to stopping several times, but then I’d have a little breakthrough and it would seem like the finish line was very near (only for it not to be). In the end it just suddenly, mostly started working, with minor little bugs here and there. I can’t quite prove that it completely works, but I’m able to learn Scheme and use the little bits of Scheme snippets and test cases I find around the net. I still don’t fully understand how the interpreter works, though – it is very hard to mentally trace through the code.

All so I could learn stuff like this: http://www.willdonnelly.net/blog/scheme-syntax-rules/

Scheme’s ability to extend the language syntax is fascinating to me. You can write a whole new language right inside of Scheme with new syntax rules and everything.

Unfortunately, SISC.net is a ways from being put out there for anyone else to use, but someday I hope to do so. The issue is, it contains some source code that is OK for me to use but not so OK for me to redistribute (Reflector *cough* *cough*). When I’ll be able to do so, I don’t quite know. This was my fun little fall learning project, and I’m done for a while now. If you’re a Seasoned Schemer and this is something that would be useful to you, though – knowing full well that it is experimental at best – leave me a comment to prod me along toward a more stable release.

P.S. I also wrote a Forth threaded interpreter in .NET but it was much, much simpler than any of this and I had an old but good reference for how to do so. It’s the project I never completed from my early days of computing. It’s not quite finished yet either (it needs a bit of a re-write to solve some issues) but I’m working on getting it out as well. Forth is a magical little language.

* The exception is the Common Larceny project but its future does not look too bright: https://lists.ccs.neu.edu/pipermail/larceny-users/2010-October/000829.html


There are quite a few discussions and examples on using DynamicObject to create wrappers for XML documents already out there. However, I found most of them lacking. Here’s my take on this.

This is largely based on this example: Dynamic in C# 4.0: Creating Wrappers with DynamicObject (you’ll probably need to read this first to understand what I’m talking about here as mine is an extended version of this).

I made some important additions like handling attributes and dealing with collections of elements. My goal here was to keep the fluency of working with XML this way, not to cover every base. It’s not a perfect abstraction by any means nor is it a complete one. But it works for me.

For accessing XML attributes, I went with an indexer syntax like a dictionary class:

dynamicXMLObject["AttributeName"] = "AttributeValue";

Note that if you set a value to null, it removes the attribute. Setting a value for an existing attribute replaces it. This all happens in the TrySetIndex() and TryGetIndex() methods of this wrapper class and is pretty self-explanatory.
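For illustration, here is a trimmed-down, self-contained sketch of just this indexer behavior (the class name AttrWrapper is mine; the complete wrapper appears further down):

```csharp
using System.Dynamic;
using System.Xml.Linq;

// Only the attribute-indexer portion of the full wrapper class.
public class AttrWrapper : DynamicObject
{
    private readonly XElement node;
    public AttrWrapper(XElement node) { this.node = node; }

    public override bool TrySetIndex(SetIndexBinder binder, object[] indexes, object value)
    {
        // SetAttributeValue with null removes the attribute;
        // otherwise it adds or replaces it.
        node.SetAttributeValue((string)indexes[0], value);
        return true;
    }

    public override bool TryGetIndex(GetIndexBinder binder, object[] indexes, out object result)
    {
        var attr = node.Attribute((string)indexes[0]);
        result = attr != null ? attr.Value : null;
        return attr != null;
    }
}
```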


For dealing with collections of elements, you just access the underlying XElement collection method of choice:

dynamic elements = dynamicXMLObject.Elements();

This was possible in the original example, but I added the ability to return wrapped collections of DynamicXMLNode. When you call any underlying method that returns an IEnumerable<XElement> collection, I instead return an IEnumerable<DynamicXMLNode> collection. This bit of magic is probably the most interesting part, as I detect the return type of any underlying method called and substitute the wrapped collection as necessary. This allows all the members of XElement that return collections of XElement to return collections of DynamicXMLNode instead, transparently. Note that in your dynamic code, you’ll need to cast this collection to IEnumerable<dynamic> to access all the handy LINQ extension methods, etc.

Along those same lines, I also detect and convert underlying method parameters so that you can call XElement.Add(), etc., and pass in a DynamicXMLNode. When you do, I substitute the inner wrapped XElement as necessary. This allows for transparent use of DynamicXMLNode in most cases.

Both of these detections and substitutions happen in TryInvokeMember(). The combination of these two substitutions allows any of the underlying methods of the wrapped XElement to work transparently with the DynamicXMLNode wrapper class.

I also cloned some of the useful static XElement methods, like Load() and Parse(). I would have liked to handle this dynamically like other method calls but it seems you can’t trap static method calls with DynamicObject. Oh well.

The only hitch is that for XElement methods that expect an XName object, like Elements() or Descendants(), you’ll have to cast any strings you use to an XName in your dynamic code. Normally with static typing this is not necessary, as it happens behind the scenes via an implicit cast done for you. However, with dynamic code the compiler can’t do this for you.
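To see why the cast is needed, here is a small self-contained demonstration (mine, not from the original wrapper) of the reflection-based dispatch that TryInvokeMember() performs: the default reflection binder does not apply XName’s implicit conversion from string, so only an explicit XName binds:

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Reflection;
using System.Xml.Linq;

public static class XNameBindingDemo
{
    // Invokes Elements() via reflection, falling back to an explicit
    // XName cast when the raw string fails to bind.
    public static int CountItems(XElement root)
    {
        const BindingFlags flags = BindingFlags.InvokeMethod |
                                   BindingFlags.Public | BindingFlags.Instance;
        object result;
        try
        {
            // The binder ignores user-defined implicit operators, so a
            // plain string cannot match Elements(XName).
            result = typeof(XElement).InvokeMember("Elements", flags,
                null, root, new object[] { "item" });
        }
        catch (MissingMethodException)
        {
            // Handing it a real XName makes the overload match.
            result = typeof(XElement).InvokeMember("Elements", flags,
                null, root, new object[] { (XName)"item" });
        }
        return ((IEnumerable<XElement>)result).Count();
    }
}
```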

I suppose that I could snoop the parameters to all the methods of XElement and do this cast automatically, but I’ll leave that for the next person to figure out.

using System;
using System.Collections.Generic;
using System.Dynamic;
using System.IO;
using System.Linq;
using System.Xml;
using System.Xml.Linq;

public class DynamicXMLNode : DynamicObject
{
    private XElement node;
    
    public DynamicXMLNode(XElement node)
    {
        this.node = node;
    }

    public DynamicXMLNode(String name)
    {
        node = new XElement(name);
    }

    public static DynamicXMLNode Load(string uri)
    {
        return new DynamicXMLNode(XElement.Load(uri));
    }

    public static DynamicXMLNode Load(Stream stream)
    {
        return new DynamicXMLNode(XElement.Load(stream));
    }

    public static DynamicXMLNode Load(TextReader textReader)
    {
        return new DynamicXMLNode(XElement.Load(textReader));
    }

    public static DynamicXMLNode Load(XmlReader reader)
    {
        return new DynamicXMLNode(XElement.Load(reader));
    }

    public static DynamicXMLNode Load(Stream stream, LoadOptions options)
    {
        return new DynamicXMLNode(XElement.Load(stream, options));
    }

    public static DynamicXMLNode Load(string uri, LoadOptions options)
    {
        return new DynamicXMLNode(XElement.Load(uri, options));
    }

    public static DynamicXMLNode Load(TextReader textReader, LoadOptions options)
    {
        return new DynamicXMLNode(XElement.Load(textReader, options));
    }

    public static DynamicXMLNode Load(XmlReader reader, LoadOptions options)
    {
        return new DynamicXMLNode(XElement.Load(reader, options));
    }

    public static DynamicXMLNode Parse(string text)
    {
        return new DynamicXMLNode(XElement.Parse(text));
    }

    public static DynamicXMLNode Parse(string text, LoadOptions options)
    {
        return new DynamicXMLNode(XElement.Parse(text, options));
    }

    public override bool TrySetMember(SetMemberBinder binder, object value)
    {
        XElement setNode = node.Element(binder.Name);
        if (setNode != null)
        {
            setNode.SetValue(value);
        }
        else
        {
            if (value.GetType() == typeof(DynamicXMLNode))
            {
                node.Add(new XElement(binder.Name));
            }
            else
            {
                node.Add(new XElement(binder.Name, value));
            }
        }
        return true;
    }

    public override bool TryGetMember(GetMemberBinder binder, out object result)
    {
        XElement getNode = node.Element(binder.Name);
        if (getNode != null)
        {
            result = new DynamicXMLNode(getNode);
            return true;
        }
        else
        {
            result = null;
            return false;
        }
    }

    public override bool TryInvokeMember(InvokeMemberBinder binder, object[] args, out object result)
    {
        Type xmlType = typeof(XElement);
        try
        {
            // unwrap parameters: if the parameters are DynamicXMLNode then pass in the inner XElement node
            List<object> newargs = null;
            var argtypes = args.Select(x => x.GetType()); // because GetTypeArray is not supported in Silverlight
            if (argtypes.Contains(typeof(DynamicXMLNode)) || argtypes.Contains(typeof(DynamicXMLNode[])))
            {
                newargs = new List<object>();
                foreach (var arg in args)
                {
                    if (arg.GetType() == typeof(DynamicXMLNode))
                    {
                        newargs.Add(((DynamicXMLNode)arg).node);
                    }
                    else if (arg.GetType() == typeof(DynamicXMLNode[]))
                    {
                        // unwrap an array of DynamicXMLNode into an array of XElement
                        // (materialized with ToArray() so the reflection binder can match it)
                        newargs.Add(((DynamicXMLNode[])arg).Select(x => x.node).ToArray());
                    }
                    else
                    {
                        newargs.Add(arg);
                    }
                }
            }

            result = xmlType.InvokeMember(
                      binder.Name,
                      System.Reflection.BindingFlags.InvokeMethod |
                      System.Reflection.BindingFlags.Public |
                      System.Reflection.BindingFlags.Instance,
                      null, node, newargs == null ? args : newargs.ToArray());


            // wrap return value: if the results are an IEnumerable<XElement>, then return a wrapped collection of DynamicXMLNode
            if (result != null && typeof(IEnumerable<XElement>).IsAssignableFrom(result.GetType()))
            {
                result = ((IEnumerable<XElement>)result).Select(x => new DynamicXMLNode(x));
            }
            // Note: we could wrap single XElement too but nothing returns that

            return true;
        }
        catch
        {
            result = null;
            return false;
        }
    }

    public override bool TrySetIndex(SetIndexBinder binder, object[] indexes, object value)
    {
        if (indexes[0].GetType() == typeof(String))
        {
            node.SetAttributeValue((string)indexes[0], value);
            return true;
        }
        else
        {
            return false;
        }
    }

    public override bool TryGetIndex(GetIndexBinder binder, object[] indexes, out object result)
    {
        if (indexes[0].GetType() == typeof(String))
        {
            XAttribute attr = node.Attribute((string)indexes[0]);
            if (attr != null)
            {
                result = attr.Value;
                return true;
            }
            else
            {
                result = null;
                return false;
            }
        }
        else
        {
            result = null;
            return false;
        }
    }


    public override bool TryConvert(ConvertBinder binder, out object result)
    {
        if (binder.Type == typeof(XElement))
        {
            result = node;
        }
        else if (binder.Type == typeof(String))
        {
            result = node.Value;
        }
        else
        {
            result = null;
            return false;
        }

        return true;
    }
}

One of the best new features of .NET 3.5 is LINQ. It's great for accessing data from databases like SQL Server but it's far more useful than just that. LINQ to objects is the killer feature in .NET 3.5 and it can change the way you write code for the better.

I am finding more cases where I can use LINQ for general data manipulation to simplify the code that I write. Coupled with Extension Methods it's a really powerful new way to write algorithms over your data types.

One of my favorite new tricks is to use LINQ to make recursion over a tree of objects simpler. There are many cases where this type of pattern is possible. For instance, when working with controls in ASP.NET or Windows Forms, each Control has a property named Controls, which is a collection of its child controls. If you need to perform some operation over all of them, you can gather them all in just one LINQ statement similar to this:

var childControls = from nested in
                    myform.Controls.Cast<System.Web.UI.Control>().Descendants(
                        c => c.Controls.Cast<System.Web.UI.Control>())
                    select nested;

The above code is a lot simpler to express than the recursion code that would otherwise have been necessary to visit each of the nested controls. And if you need to filter the child controls returned, you can just add a LINQ Where clause (or any of the other LINQ operators).

Note: The Cast<T> function above is necessary since the Controls property returns a non-generic IEnumerable. Cast<T> simply wraps the non-generic IEnumerable and returns a typed IEnumerable<T>.

The trick that makes this work is a function called Descendants, an extension method that I wrote. You can use it with any IEnumerable<T> collection, and it will descend into the nested collections by way of a Func<T, IEnumerable<T>> lambda that you supply. In the above example I pass it a lambda function that tells it how to retrieve the nested child controls.

Here is that extension method:

static public class LinqExtensions
{
    static public IEnumerable<T> Descendants<T>(this IEnumerable<T> source, 
                                                Func<T, IEnumerable<T>> DescendBy)
    {
        foreach (T value in source)
        {
            yield return value;

            foreach (T child in DescendBy(value).Descendants<T>(DescendBy))
            {
                yield return child;
            }
        }
    }
}

You could just as easily write a lambda function to descend into a tree structure or any other data structure, as long as it implements IEnumerable<T>. Since you supply the function needed to descend into the data structure, Descendants is a generic way to traverse nested data structures.
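For example, the same pattern applied to a plain tree of objects (a hypothetical Node type; the extension method is restated so the snippet is self-contained):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static class LinqExtensions
{
    // Same Descendants extension as above: flattens a nested
    // structure by recursively yielding each element's children.
    public static IEnumerable<T> Descendants<T>(this IEnumerable<T> source,
                                                Func<T, IEnumerable<T>> descendBy)
    {
        foreach (T value in source)
        {
            yield return value;
            foreach (T child in descendBy(value).Descendants(descendBy))
                yield return child;
        }
    }
}

class Node
{
    public string Name;
    public List<Node> Children = new List<Node>();
}

class Program
{
    static void Main()
    {
        var root = new Node { Name = "root" };
        var a = new Node { Name = "a" };
        a.Children.Add(new Node { Name = "a1" });
        root.Children.Add(a);
        root.Children.Add(new Node { Name = "b" });

        // Flatten the whole tree in one statement, depth-first.
        var names = new[] { root }.Descendants(n => n.Children)
                                  .Select(n => n.Name);
        Console.WriteLine(string.Join(",", names)); // root,a,a1,b
    }
}
```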


Flux and Mutability

The mutable notebook of David Jade