Generic type factories

Happy new year to everyone! I’ve really got the blog bug over this Christmas period… today, I want to talk a little about how to create easily-consumable generic classes.

Generic Types and Composition

Generics in C# are great. They give so many elegant ways to solve common problems and to implement certain design patterns that without generics would involve lots of boilerplate code. One of the things that they are great at is writing composed objects. For example, let’s say we wanted to represent a binary tree structure in C#. It might look something like this: –

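The original code sample is missing here; a minimal sketch of such a node class (property names `Item`, `Left` and `Right` are assumptions based on the surrounding description):

```csharp
// A simple generic binary tree node. The exact member names are
// assumptions reconstructed from the prose around this sample.
public class Node<TItem>
{
    public TItem Item { get; private set; }
    public Node<TItem> Left { get; set; }
    public Node<TItem> Right { get; set; }

    public Node(TItem item)
    {
        Item = item;
    }
}
```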

Seems logical, doesn’t it? We have our properties which represent the data of a tree node, and a constructor that takes in the item that the node holds. Great – you can now have a tree node for any type e.g. Node<String>, Node<Int32> etc.! However, when you try to start using it and creating nodes in code, you’ll immediately come across a small problem: –

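Something along these lines (a self-contained sketch; the `Node<TItem>` definition is repeated so the snippet compiles on its own):

```csharp
public class Node<TItem>
{
    public TItem Item { get; private set; }
    public Node(TItem item) { Item = item; }
}

class Demo
{
    static void Main()
    {
        // We have to spell out the type argument explicitly...
        var stringNode = new Node<string>("Hello");
        var numberNode = new Node<int>(25);
        // ...even though the compiler can plainly see the argument types.
    }
}
```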

See the issue? Every time you want to create a Node of type T, you have to explicitly specify T in the type definition, even though we’re passing in an object of type T into the constructor. Seems a bit pointless, doesn’t it? Can’t the compiler “figure out” the definition from the constructor argument?

Generic methods

Luckily, there is indeed a way to achieve this effect through a generic method which acts as a factory – as long as it does not belong in the generic Node<T> class! Check this guy out: –

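A sketch of such a factory (the method name `Create` is an assumption; the non-generic `Node` class name is as described below):

```csharp
public class Node<TItem>
{
    public TItem Item { get; private set; }
    public Node(TItem item) { Item = item; }
}

// The non-generic sibling class holding the factory method.
// 'Create' is an assumed name for illustration.
public class Node
{
    // TItem is inferred by the compiler from the 'item' argument.
    public static Node<TItem> Create<TItem>(TItem item)
    {
        return new Node<TItem>(item);
    }
}
```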

This method lives in the non-generic Node class. We can easily consume it as follows and thus create nodes much more succinctly: –

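Consumption then looks something like this (repeated as a self-contained sketch, with an assumed `Create` factory name):

```csharp
using System;

public class Node<TItem>
{
    public TItem Item { get; private set; }
    public Node(TItem item) { Item = item; }
}

public class Node
{
    public static Node<TItem> Create<TItem>(TItem item) => new Node<TItem>(item);
}

class Demo
{
    static void Main()
    {
        // No explicit type arguments: the compiler infers them.
        var stringNode = Node.Create("Hello");   // Node<string>
        var numberNode = Node.Create(25);        // Node<int>
        Console.WriteLine(stringNode.Item);      // prints "Hello"
    }
}
```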

Sure enough, the nodes are correctly typed just like before e.g. Node&lt;String&gt;, Node&lt;Employee&gt; etc., but now a lot of the fluff has been removed – based on the type of the constructor argument, the return type is inferred by the compiler. C#’s type inference perhaps isn’t as powerful as F#’s, but it’s really not that bad :)

Also notice that we’ve put the factory method in a class called Node – again, this aids the developer in terms of discoverability as it sits alongside Node<TItem>.


You might often find yourself putting generic methods into non-generic classes that themselves return generic objects.

There are many occasions when you may find that you need a non-generic base or sibling class for your generic classes, either to construct them easily, or to deal with a set of objects of a particular generic type that differ only by their type parameters e.g. Node&lt;String&gt;, Node&lt;Int32&gt; and Node&lt;Employee&gt;. Having a non-generic base class called Node with common properties or methods allows you to deal with all instances of all Nodes in one go.
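The base-class variant can be sketched like this (a hypothetical illustration; member names are assumptions, and `Left`/`Right` are typed as the non-generic base so mixed trees are possible):

```csharp
using System;
using System.Collections.Generic;

// Non-generic base class: common structure lives here, so code can
// traverse or collect nodes without knowing their item types.
public abstract class Node
{
    public Node Left { get; set; }
    public Node Right { get; set; }

    public static Node<TItem> Create<TItem>(TItem item)
    {
        return new Node<TItem>(item);
    }
}

public class Node<TItem> : Node
{
    public TItem Item { get; private set; }
    public Node(TItem item) { Item = item; }
}

class Demo
{
    static void Main()
    {
        // One collection holds nodes with different type parameters.
        var nodes = new List<Node> { Node.Create("Hello"), Node.Create(25) };
        Console.WriteLine(nodes.Count);   // prints 2
    }
}
```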

Generics resolution in .NET

I recently posted about generics and resolution at runtime. Some people on StackOverflow gave some really good answers – but before I got any responses, and with other people in the office stumped by this problem, I decided to ask Eric Lippert (he of Fabulous Adventures in Coding) directly to get the definitive answer. Sure enough, within a couple of days (by which time a couple of people had posted on SO) I received the following reply, which crystallised both the thoughts of the people on SO and what I had suspected would be the case…

Indeed, the answers on SO are correct. Generic calls are resolved by the compiler at compile time, using the information available at compile time to make the best choice available at that time. This is exactly the same as if you’d said:
void Foo(string s) {}
void Foo(int x) {}
void Foo(object x) {}

object[] objects = { 10, "hello", null };
We don’t re-do the overload resolution analysis at runtime. The best choice at compile time is the "object" overload, so that’s the one you get every time, no matter what the runtime value is.  It’s no different with generics; we make the best choice we can based on the information you give us at compile time.
If you want to do the analysis at runtime then you can use "dynamic" in C# 4, but be aware that doing full analysis at runtime *does full analysis at runtime*, unsurprisingly. You are basically *starting the compiler again at runtime*. That’s not cheap. And if the result can change on every call, then of course we re-run the semantic analyzer again on every call.
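Eric’s snippet can be completed into a runnable demonstration (a sketch; the loop and printed messages are my additions, not part of his reply):

```csharp
using System;

class Program
{
    static void Foo(string s) { Console.WriteLine("string overload"); }
    static void Foo(int x)    { Console.WriteLine("int overload"); }
    static void Foo(object x) { Console.WriteLine("object overload"); }

    static void Main()
    {
        object[] objects = { 10, "hello" };

        // Compile-time resolution: the static type of 'o' is object,
        // so the object overload is chosen once, for every element.
        foreach (object o in objects)
            Foo(o);                 // "object overload", twice

        // 'dynamic' (C# 4+) defers overload resolution to runtime,
        // re-running the analysis per call using the runtime type.
        foreach (dynamic o in objects)
            Foo(o);                 // "int overload", then "string overload"
    }
}
```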

I wrote a blog article about this a few months back.

So, there’s the answer straight from the horse’s mouth, as it were. And thanks Eric for your quick and succinct response – really appreciated! I actually looked on Eric’s blog because I thought he had posted something about it, but couldn’t find the above article whilst trawling through – he’s too prolific a blogger!