Latest features on JustMockLite


Just a quick post and update on my review of JustMockLite from earlier this year. I originally had a few comments on some features which I’m pleased to say have now been addressed 🙂

Recursive Mocks

Support (or lack of it) for recursive mocks was one of my main criticisms of earlier versions of JML. For example, if you had a mock with a method that needed to return another mock – or worse still, you needed to mock the result of a method on that child mock – it was a bit of a pain: you had to manually construct the child mock and then arrange the top-level call to return it, and so on.

Recursive mocks are now extremely simple in JML: child mocks are created automatically, without the need to explicitly create them, and you can chain a method call expression when arranging the result of a nested mock. Very nice.
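Here’s a minimal sketch of the idea (my own reconstruction – the IParentService / IChildService interfaces are hypothetical, purely to illustrate the chaining): –

using Telerik.JustMock;
using Xunit;

public interface IChildService { string GetName(); }
public interface IParentService { IChildService GetChild(); }

public class RecursiveMockTests
{
    [Fact]
    public void ParentMock_ReturnsAutomaticallyCreatedChildMock()
    {
        var parent = Mock.Create<IParentService>();

        // No need to create the child mock by hand - JML creates it for us,
        // and we can arrange a call on it directly in the chained expression.
        Mock.Arrange(() => parent.GetChild().GetName()).Returns("Isaac");

        Assert.Equal("Isaac", parent.GetChild().GetName());
    }
}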

XML Comments

This is a small but important feature for getting up to speed more quickly – JML now includes XML comments on its methods etc., which should help you get up and running without having to resort to the documentation.

Conclusion

This is all really good. I’d still love to see arguments ignored by default when arranging method calls, but overall JML continues to improve – definitely recommended.

Using wrappers to aid unit testing


As I alluded to recently when blogging about JustMock, one of the most important attributes of unit tests has to be readability: you should be able to easily reason about them and see what they do.

I also talked about Moq’s overly cumbersome and verbose approach to performing Setups on mocks – I rarely supply arguments for setup methods on mocks, since doing so would be performing two tests in one, i.e. mocking how we handle the result of the method while also implicitly testing that we called the method with the correct arguments. The latter should be left to another test.

Coincidentally, I had a look at a few other frameworks recently: –

  • FSUnit, which is an F# unit testing framework that wraps around NUnit / MSTest / XUnit etc. to provide a more succinct unit test experience in F#
  • Simple.Data, which is an awesome data access layer that works over multiple data sources and uses C#’s dynamic feature to let you generate queries against data sources with the minimum of fuss (see the snippet below).
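For example, a typical Simple.Data query looks something like this (from memory, so treat the exact details as illustrative – there’s no Customers property or FindByName method declared anywhere; both are resolved dynamically at runtime): –

using Simple.Data;

// Opens the default connection string and queries the Customers table.
var db = Database.Open();
var customer = db.Customers.FindByName("Alice");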

Simplifying Moq’s Setup

This got me thinking – could we not do the same with mocking frameworks? Well, a couple of hours later, the answer is yes. Here’s a simple example of how you can set up mocks in Moq much more succinctly using a dynamic wrapper class. First the original Moq Setup method: –

[Fact]
public void Foo_GotPayroll_LogsIt()
{
    SetupClassUnderTest();
    service.Setup(svc => svc.GetPayroll(It.IsAny<Person>(), It.IsAny<Person>(), It.IsAny<Person>())).Returns("ABCDEFG");

    // Act
    classUnderTest.Foo(null, null, null);

    // Assert
    logger.Verify(l => l.Log("Got back ABCDEFG."));
}

Notice the large amount of noise from the It.IsAny<T>() calls – almost 50% of the statement is taken up by It.IsAny<T>().

Now look at this version: –

[Fact]
public void TestFoo()
{
    SetupClassUnderTest();
    service.Setup().GetPayroll().Returns("ABCDEFG");

    // Act
    classUnderTest.Foo(null, null, null);

    // Assert
    logger.Verify(l => l.Log("Got back ABCDEFG."));
}

It uses a new Setup extension method which operates slightly differently: –

  1. It returns a dynamic object which, when a method such as GetPayroll is called on it, seeks out the methods on the mocked service with that name.
  2. It then filters out any overloads that do not have the matching return type of System.String.
  3. Then, for each matched method, it parses the argument list and generates an expression which calls the method, with an appropriate It.IsAny<T>() call for every argument.

In effect, it expands into the code of the first version, but at runtime. Notice how much more succinct the code is – you don’t need to waste time with It.IsAny<T>(), call IgnoreArguments(), or even write the lambda expression – you simply provide the name of the method you want to mock out as a parameterless method call – which is what your intent is anyway – and then call Returns on it.
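For the curious, here’s a rough sketch of how such a wrapper can be built. It’s a reconstruction rather than the actual source: for brevity it grabs the first overload by name instead of filtering on return type, only handles non-void methods, and funnels the chained Returns call through Moq’s public IReturns interface: –

using System;
using System.Dynamic;
using System.Linq;
using System.Linq.Expressions;
using Moq;
using Moq.Language;

public static class DynamicMockExtensions
{
    // service.Setup() with no arguments resolves to this extension method,
    // because Moq's own Setup overloads all require an expression argument.
    public static dynamic Setup<T>(this Mock<T> mock) where T : class
    {
        return new DynamicSetup<T>(mock);
    }
}

public class DynamicSetup<T> : DynamicObject where T : class
{
    private readonly Mock<T> mock;

    public DynamicSetup(Mock<T> mock) { this.mock = mock; }

    // Called when e.g. .GetPayroll() is invoked on the dynamic object.
    public override bool TryInvokeMember(InvokeMemberBinder binder, object[] args, out object result)
    {
        var method = typeof(T).GetMethods().First(m => m.Name == binder.Name);

        // Build "svc => svc.Method(It.IsAny<TArg1>(), It.IsAny<TArg2>(), ...)".
        var svc = Expression.Parameter(typeof(T), "svc");
        var anyArgs = method.GetParameters()
            .Select(p => (Expression)Expression.Call(
                typeof(It).GetMethod("IsAny").MakeGenericMethod(p.ParameterType)))
            .ToArray();
        var lambda = Expression.Lambda(Expression.Call(svc, method, anyArgs), svc);

        // Call Mock<T>.Setup<TResult>(expression) via reflection, then hand back
        // a small wrapper so that .Returns(...) can be chained on the result.
        var setupMethod = typeof(Mock<T>).GetMethods()
            .First(m => m.Name == "Setup" && m.IsGenericMethod)
            .MakeGenericMethod(method.ReturnType);
        var moqSetup = setupMethod.Invoke(mock, new object[] { lambda });
        result = new DynamicSetupResult(moqSetup, typeof(T), method.ReturnType);
        return true;
    }
}

public class DynamicSetupResult
{
    private readonly object moqSetup;
    private readonly Type mockedType;
    private readonly Type returnType;

    public DynamicSetupResult(object moqSetup, Type mockedType, Type returnType)
    {
        this.moqSetup = moqSetup;
        this.mockedType = mockedType;
        this.returnType = returnType;
    }

    public void Returns(object value)
    {
        // Go through Moq's public IReturns<TMock, TResult> interface rather
        // than the concrete setup class, which may not be public.
        typeof(IReturns<,>).MakeGenericType(mockedType, returnType)
            .GetMethod("Returns", new[] { returnType })
            .Invoke(moqSetup, new[] { value });
    }
}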

You can also do the same with Throws, which takes in an Exception and sets up a Moq call to .Throws(). Easy.

Conclusion

This was more an experiment to see how easy it would be to create a more succinct wrapper around Moq (I’ll put the source code up for anyone that wants it), but also to see whether it would actually work from a consumption point of view – does it feel “right” to call a dynamic method which does the setup / mocking for you? Can you have confidence in it? I leave that for you to decide 🙂

First experiences of Telerik’s JustMock


Problems with Moq

Having migrated from Rhino Mocks over to Moq, I have found myself lately getting more and more frustrated with the verbosity of Moq for simple assertions. I present as exhibit one the GetPayroll method, called below.

public void Foo(Person first, Person second, Person third)
{
   logger.Log("Processing data for the following users: ");
   logger.Log(first);
   logger.Log(second);
   logger.Log(third);

   var payroll = myService.GetPayroll(first, second, third);

   logger.Log(String.Format("Got back {0}.", payroll));
}

I want to assert that I call the Log method with the result of GetPayroll. So I need to arrange that when I call GetPayroll, it returns an arbitrary string that I can use to assert in the call to Log(). Here’s the Moq test to prove that we log the correct payroll string: –

[Fact]
public void Foo_GotPayroll_LogsIt()
{
   var logger = new Mock<ILogger>();
   var myService = new Mock<IMyService>();
   var classUnderTest = new ClassUnderTest(logger.Object, myService.Object);
   myService.Setup(svc => svc.GetPayroll(It.IsAny<Person>(), It.IsAny<Person>(), It.IsAny<Person>())).Returns("ABCDEFG");

   // Act
   classUnderTest.Foo(new Person(), new Person(), new Person());

   // Assert
   logger.Verify(l => l.Log("Got back ABCDEFG."));
}

Notice that I don’t care what values are passed in to the service call. Why? Because I already have another unit test that verifies I called this method with the correct arguments. I don’t need to test that twice (which would also increase the fragility of the tests).

What aggravates me is the ridiculous repeated use of It.IsAny<Person>(). Imagine you had more arguments in your stubbed method (this can be the case when mocking out some BCL interfaces or other third-party ones) – your tests can quickly become unreadable, lost in a sea of It.IsAny<T> calls.

What I want is something like Rhino Mock’s IgnoreArguments() mechanism, or even better, TypeMock’s “ignore arguments by default” behaviour, which is a fantastic idea, encouraging you to only assert arguments during assertions and not during arrangement. Unfortunately, TypeMock is not available on NuGet and is a fairly heavyweight install, requiring add-ins to VS etc.. I therefore gave JustMockLite (JML) a quick go – and so far I’ve been very impressed with it.

Just Mock Lite

Just Mock Lite is a free mocking framework from Telerik. I saw some demos of it a few months ago, but frankly was not impressed with the API in the webcast – all the demos I saw showed Record / Replay syntax; there was nothing on AAA. However, I saw it on NuGet so thought “let’s see what it’s like anyway”. JustMock also has a full version which includes TypeMock-like features, e.g. mocking statics, concretes etc.

Note (8th Oct 2013): I’ve updated my comments on JML regarding criticisms below with a new post here.

Getting up and running

Whenever I try out a framework like this, I try to avoid reading the docs to see how friendly the API is to the complete newbie – someone who knows what to expect from a unit test framework. I don’t want to spend hours in webpages going through APIs – I want the API to be discoverable and logical. I’m happy to say that the main JustMock static class, Mock, is very easy to use, such that I was able to get up and running without resorting to the online docs until I came across some more complex situations.

However, I would like to see a slightly cut-down version of the publicly-visible namespaces for JustMock Lite that doesn’t include the types that are only available with the “full” version. There are probably 15-20 classes, plus several namespaces, underneath the Telerik.JustMock namespace – what are they all for? Do I, as the client of the framework, need to see all of them? Not sure. Perhaps some should be under an “.Advanced” namespace or something.

JML in action

Here’s the test from above, redone using JustMockLite: –

[Fact]
public void Foo_GotPayroll_LogsIt()
{
    var logger = Mock.Create<ILogger>();
    var myService = Mock.Create<IMyService>();
    var classUnderTest = new ClassUnderTest(logger, myService);
    Mock.Arrange(() => myService.GetPayroll(null, null, null)).IgnoreArguments().Returns("ABCDEFG");

    // Act
    classUnderTest.Foo(new Person(), new Person(), new Person());

    // Assert
    Mock.Assert(() => logger.Log("Got back ABCDEFG."));
}

The main things to note are that: –

  • You don’t have the “Object” property anywhere; JustMock works like TypeMock, with static methods that take in expressions containing the mock objects etc.. This is nice as it cuts down on the fluff of Moq’s composition approach (which is still probably a cleaner approach than Rhino’s extension methods).
  • The JML Mock static methods have intelligent names – Arrange, Assert etc. etc. – exactly what you want if you follow the AAA unit testing approach.
  • IgnoreArguments() is back. Hurrah! Now I can just put in null or whatever for arguments and postfix the call with .IgnoreArguments() – all done. This is much, much more readable, quicker to author, and less fragile than Moq’s approach. But TypeMock’s ignore-by-default behaviour is better still.
  • What if you need to specify “some” arguments? That’s easy – it reverts to the Moq approach, except there are handy constants for common “ignore” type arguments. These are quick to type with IntelliSense and take up less space than the full It.IsAny<String>() malarkey: –

Mock.Assert(() => myService.DoStuff(Arg.AnyString, Arg.IsInRange(1, 5, RangeKind.Inclusive), Arg.IsAny<Person>()));

There are also the usual Match<T> as well as helpers on top of this like IsInRange etc. etc..

I was able to migrate a load of Moq tests to JustMock in about 30 minutes with the help of a couple of macros to rewrite Verify calls to Assert etc. etc. – pretty easy in fact. The API takes several pieces from Moq in terms of design although methods are of course renamed – instead of Times.x we now have Occurs.x etc. etc. – nothing to worry about.

Other features

I also noticed that JML supports call counting, which I blogged about a few weeks ago. This lets you easily say “I expect that this method was called x number of times”. Furthermore, you can chain sequences of results through an extension method in JustMock.Helpers that gives you a fluent-style chaining mechanism so you can say “return 1, then return 5, then return 10” – although I wonder how often this sort of feature would be required.
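As a rough example of the call counting side (from memory, so the exact overloads may differ slightly): –

var myService = Mock.Create<IMyService>();

// Expect GetPayroll to be called exactly twice, whatever the arguments.
Mock.Arrange(() => myService.GetPayroll(null, null, null))
    .IgnoreArguments()
    .Returns("ABCDEFG")
    .Occurs(2);

// ... exercise the class under test ...

// Asserting the mock checks the occurrence expectations arranged above;
// alternatively, assert a specific call count directly.
Mock.Assert(myService);
Mock.Assert(() => myService.GetPayroll(Arg.IsAny<Person>(), Arg.IsAny<Person>(), Arg.IsAny<Person>()), Occurs.Exactly(2));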

Criticisms

  • One thing that JML falls short on is its ability to generate recursive mocks. Whilst JML does support limited recursion, it cannot automatically return child mocks from methods on a parent mock; nor does it have the ability to make assertions on them. Instead, you need to manually create child mocks and wire them up as the return object for the parent mock’s method. This is unfortunate, because it does add a bit of complexity to some mocking scenarios, but thankfully it’s not a common situation.
  • The API could probably be cut down a bit – there are lots of classes in the main namespace that you will probably not often use.
  • The API is very powerful – probably one of the most powerful of the free mocking frameworks out there. This is of course a good thing, but it has its pitfalls. For example, in addition to supporting the standard “AAA” style of mocking, it also supports the old “Record/Replay” style of unit testing, whereby you set up expectations on methods during the arrange and then simply call “Assert” at the end. I hate this way of unit testing and would have preferred not to see those methods at all, or at least to have them as an “opt-in”. People generally write unit tests in the RR or AAA style, but don’t tend to mix and match between them – neither type of developer will want to see the other style’s methods.
  • No XML comments on the API. Come on guys – it just takes a few minutes to put XML comments on your API with GhostDoc, and then I don’t have to resort to opening up the browser to see what the Occurs method does on IAssertable.

Conclusion

Overall, I’m pretty happy with JML. I’ve only used it for a couple of days, so no doubt I’ve missed some things – but so far I’m very impressed with it. It’s powerful (notwithstanding my reservations about recursive mocks), has a fairly lightweight “core” API that is easy to get up and running with, and is being actively worked on. There’s also the full version of the API, which can mock all sorts of other things, so you can upgrade if required. If you’re starting a new project, I’d seriously recommend having a look at it before going down the Moq route – you might well prefer this.

Why Entity Framework renders the Repository pattern obsolete


A post here on a pattern I thought was obsolete, yet still see cropping up in EF-based projects time and time again…

What is a Repository?

The repository pattern – to me – is just a form of data access gateway. We use it both to provide a form of abstraction over the details of data access and to give testability to calling clients, e.g. services or perhaps even view models / controllers. A typical repository will have methods such as the following:-

interface IRepository<T>
{
    T GetById(Int32 id);
    T Insert(T item);
    T Update(T item);
    T Delete(T item);
}

interface ICustomerRepository : IRepository<Customer>
{
    Customer GetByName(String name);
}

And so on. You’ll probably create a Repository<T> class which does the basic CRUD work for any <T>. Each one of these repositories will delegate to an EF ObjectContext (or DbContext for newer EF versions), and they’ll offer you absolutely nothing. Allow me to explain…
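To labour the point, here’s roughly what that generic repository ends up looking like (a sketch, assuming DatabaseContext derives from DbContext as in the code below) – every member is a straight pass-through: –

using System;
using System.Data.Entity;

public class Repository<T> : IRepository<T> where T : class
{
    protected readonly DatabaseContext context;

    public Repository(DatabaseContext context)
    {
        this.context = context;
    }

    public T GetById(Int32 id)
    {
        return context.Set<T>().Find(id);
    }

    public T Insert(T item)
    {
        return context.Set<T>().Add(item);
    }

    public T Update(T item)
    {
        // Nothing to do here beyond marking the entity as modified.
        context.Entry(item).State = EntityState.Modified;
        return item;
    }

    public T Delete(T item)
    {
        return context.Set<T>().Remove(item);
    }
}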

Getting to EF data in Services

Let’s illustrate the two approaches with a simple example: a service method that gets the first customer with a given name. In terms of objects and responsibilities, the two approaches are somewhat different. Here’s the Repository version: –

public class Service
{
    private readonly ICustomerRepository customerRepository;
    public Customer GetCustomer(String customerName)
    {
        return customerRepository.GetByName(customerName);
    }
}
public class CustomerRepository : ICustomerRepository
{
    private readonly DatabaseContext context;
    public Customer GetByName(string customerName)
    {
        return context.Customers.First(c => c.Name == customerName);
    }
}

Using the Repository pattern, you generally abstract out your actual query so that your service does any “business logic” e.g. validation etc. and then orchestrates repository calls e.g. Get customer 4, Amend name, Update customer 4 etc. etc.. You’ll also invariably end up templating (which if you read my blog regularly you know I hate) your Repositories for common logic like First, Where etc.. – all these methods will just delegate onto the equivalent method on DbSet.

If you go with the approach of talking to EF directly, you enter your queries directly in your service layer. There’s no abstraction layer between the service and EF.

public class ServiceTwo
{
    private readonly DatabaseContext context;

    public Customer GetCustomer(String customerName)
    {
        return context.Customers.First(c => c.Name == customerName);
    }
}

So there’s now just one class, the service, which is coupled to DatabaseContext rather than CustomerRepository; we perform the query directly in the service. Notice also that Context contains all our repositories e.g. Customers, Orders etc. as a single dependency rather than one per type. Why would we want to do this? Well, you cut out a layer of indirection, reduce the number of classes you have (i.e. the whole Repository hierarchy vs a fake DbContext + Set), making your code quicker to write as well as easier to reason about.

Aha! Surely now we can’t test out our services because we’re coupled to EF! And aren’t we violating SRP by putting our queries directly into our service? I say “no” to both.

Testability without Repository

How do we fix the first issue, that of testability? There are actually many good examples online for this, but essentially, think about this – what is DbContext? At its most basic, it’s a class which contains multiple properties, each implementing IDbSet<T> (notice – IDbSet, not DbSet). What is IDbSet<T>? It’s the same thing as our old friend, IRepository<T>. It contains methods to Add, Delete etc. etc., and in addition implements IQueryable<T> – so you get basically the whole LINQ query surface, including things like First, Single, Where etc. etc.

Because DbSet<T> implements the interface IDbSet<T>, you can write your own implementation which uses e.g. an in-memory List<T> as a backing store instead. This way your service methods can work against in-memory lists during unit tests (easy to generate test data, easy to write assertions against), whilst going against the real DbContext at runtime. You don’t need to play around with mocking frameworks – in your unit tests you can simply generate fake data and place it into your fake DbSet lists.
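A minimal sketch of such a fake set (EF5/EF6-era IDbSet<T>; Find is omitted as it needs knowledge of each entity’s key): –

using System;
using System.Collections;
using System.Collections.Generic;
using System.Collections.ObjectModel;
using System.Data.Entity;
using System.Linq;
using System.Linq.Expressions;

public class FakeDbSet<T> : IDbSet<T> where T : class
{
    private readonly List<T> data = new List<T>();

    public T Add(T entity) { data.Add(entity); return entity; }
    public T Attach(T entity) { data.Add(entity); return entity; }
    public T Remove(T entity) { data.Remove(entity); return entity; }
    public T Create() { return Activator.CreateInstance<T>(); }
    public TDerived Create<TDerived>() where TDerived : class, T
    {
        return Activator.CreateInstance<TDerived>();
    }
    public virtual T Find(params object[] keyValues)
    {
        throw new NotImplementedException("Override per entity type if Find is needed.");
    }

    public ObservableCollection<T> Local
    {
        get { return new ObservableCollection<T>(data); }
    }

    // The IQueryable members delegate to the list, so the whole LINQ surface
    // (Where, First, GroupBy etc.) works in unit tests.
    public Type ElementType { get { return data.AsQueryable().ElementType; } }
    public Expression Expression { get { return data.AsQueryable().Expression; } }
    public IQueryProvider Provider { get { return data.AsQueryable().Provider; } }
    public IEnumerator<T> GetEnumerator() { return data.GetEnumerator(); }
    IEnumerator IEnumerable.GetEnumerator() { return data.GetEnumerator(); }
}

Your fake context then exposes FakeDbSet<Customer> etc. behind the same IDbSet<Customer> properties as the real one, and your services depend on that common surface.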

I know that some people whinge about this, saying “it doesn’t prove the real SQL that EF will generate; it won’t test performance” etc. That’s true – however, this approach doesn’t try to solve that. What it does try to do is remove the unnecessary IRepository layer and reduce friction, whilst improving testability – and for 90% of your EF queries e.g. Where, First, GroupBy etc., this will work just fine.

Violation of SRP

This one is trickier. You ideally want to be able to reuse your queries across service methods – how do we do that if we’re writing our queries inline in the service? The answer is – be pragmatic. If you have a query that is used once and once only, or a few times but is a simple Where clause – don’t bother refactoring for reuse.

If, on the other hand you have a large query that is being used in many places and is difficult to test, consider making a mockable query builder that takes in an IQueryable, composes on top of it and then returns another IQueryable back out. This allows you to create common queries yet still be flexible in their application – whilst still giving you the ability to go directly to your EF context.
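For example (a sketch – the Customer query is illustrative), such a builder simply composes on top of whatever IQueryable<Customer> it’s given, so it works equally well against the real context or an in-memory fake: –

using System.Linq;

public static class CustomerQueries
{
    // Takes in an IQueryable, composes on top of it, and returns another IQueryable.
    public static IQueryable<Customer> WithName(this IQueryable<Customer> customers, string name)
    {
        return customers.Where(c => c.Name == name);
    }
}

Usage inside the service then becomes context.Customers.WithName(customerName).First().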

Conclusion

Testability is important when writing EF-based, data-driven services. However, the Repository pattern offers little when you can write your services directly against a testable EF context. You can in fact get much better testability from a service-with-an-EF-context approach than with just a repository, as you can test your LINQ queries against a fake context, which at least proves your query represents what you want semantically. It’s still not a 100% tested solution, because your code does not exercise the EF IQueryable provider – so it’s important that you still have some form of integration and / or performance tests against your services.

Call Counting when unit testing


I’ve been trying out the free version of TypeMock Isolator (TMI) recently – it basically does the same sort of thing as Rhino Mocks / Moq, i.e. mocking of interfaces and virtual methods.

Although it’s undoubtedly not as common or popular as those two long-established free frameworks (yet), it has a very clean API, and thankfully opts to ignore arguments when validating calls by default, so you avoid the requirement in Moq to have silly amounts of It.IsAny<String>() for every argument etc. etc.

What is Call Counting?

One thing it doesn’t have though is the ability to mock a result for a number of calls, or to easily verify the number of times a method was called. Now, I’ve been told recently that unit testing the number of times that a method is called can be considered a code smell leading to fragile tests… hmmm. Yes and no.

Here’s a simple caching scenario where if you are to verify that your cache works properly you’ll need to do some form of call counting.

[image: the MyServiceCache class, which wraps IMyService and caches the result of GetData]

So all the MyServiceCache class does is take in the “real” service and wrap the call to GetData with a simple caching mechanism – essentially a form of decoration (incidentally, this is one of the ways that IoC containers perform interception on types).
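The class itself is only a few lines (a sketch – the IMyService / Data names are inferred from the description above): –

public class Data { }

public interface IMyService
{
    Data GetData();
}

public class MyServiceCache : IMyService
{
    private readonly IMyService realService;
    private Data cachedData;

    public MyServiceCache(IMyService realService)
    {
        this.realService = realService;
    }

    public Data GetData()
    {
        // The first call hits the real service; subsequent calls return the stored result.
        if (cachedData == null)
        {
            cachedData = realService.GetData();
        }
        return cachedData;
    }
}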

Unit testing our Service Cache

How do we unit test this service cache class? Well, here is a simple test class and a setup: –

[image: the test class and its setup, creating the fake service and the MyServiceCache under test]

And now a couple of unit tests that prove that the first time we call the cache, it calls the underlying “real” service and returns whatever that returns: –

[image: unit tests proving the first call delegates to the real service and returns its result]
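Roughly, the setup and the first of those tests look something like this (a sketch using xUnit and the Isolator AAA API as I remember it, building on the MyServiceCache sketch above, so treat the exact member names as approximate): –

using TypeMock.ArrangeActAssert;
using Xunit;

public class MyServiceCacheTests
{
    private readonly IMyService realService;
    private readonly MyServiceCache serviceCache;

    public MyServiceCacheTests()
    {
        // Isolator's equivalent of new Mock<T>() / Mock.Create<T>().
        realService = Isolate.Fake.Instance<IMyService>();
        serviceCache = new MyServiceCache(realService);
    }

    [Fact]
    public void GetData_FirstCall_ReturnsWhateverTheRealServiceReturns()
    {
        var expected = new Data();
        Isolate.WhenCalled(() => realService.GetData()).WillReturn(expected);

        var result = serviceCache.GetData();

        Assert.Same(expected, result);
        Isolate.Verify.WasCalledWithAnyArguments(() => realService.GetData());
    }
}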

Call Counting with Type Mock Isolator

There’s an internal side-effect going on when we call GetData() though, which is that it stores a reference to the object the real service returned and will use it for subsequent calls. So how can we prove that? By proving that repeated calls to the decorator only call the “real” service once, and that we return the same object for repeated calls.

Unfortunately, TMI doesn’t have any inbuilt way to prove a method was called a certain number of times, but there’s a workaround: –

[image: the call-counting workaround]
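One way of proving it without an explicit call count (a sketch, not necessarily the approach in the screenshot) is to re-arrange the fake to return something different and show that the second call still yields the originally cached object – another test in the MyServiceCacheTests sketch above: –

[Fact]
public void GetData_SecondCall_DoesNotHitTheRealServiceAgain()
{
    var first = new Data();
    var second = new Data();

    Isolate.WhenCalled(() => realService.GetData()).WillReturn(first);
    var firstResult = serviceCache.GetData();

    // If the cache went back to the real service, it would now receive 'second'.
    Isolate.WhenCalled(() => realService.GetData()).WillReturn(second);
    var secondResult = serviceCache.GetData();

    Assert.Same(first, firstResult);
    Assert.Same(first, secondResult);
}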

What I really would have liked for the first test above would simply have been to say something like this: –

[image: the more readable call-count assertion I’d have liked]

Conclusion

Call counting can be a signal that your unit tests are testing too much or are brittle. However, for some scenarios it’s necessary (unless someone can suggest a way to avoid it in the scenario above?). It’d be nice if TypeMock Isolator had a more readable way of checking this – it’s sometimes (though not often) required, and Moq and Rhino both have it.

Nonetheless, as an aside from this, I’d recommend having a look at the free version of TMI – it’s still a clean, easy-to-use mocking framework with the added benefit that you can always upgrade to the full version if required.

A few words on the growing popularity of Test Driven Development


TDD seems to be more of a buzzword these days than in years gone by, particularly in the .NET world. Every agent and potential employer seems to be interested in it. Yet I see more and more people chucking the TDD word around on their CVs / LinkedIn profiles who don’t even use it on a day-to-day basis. Some say that they “sometimes” practice it (when?), others that they know what it is (which is presumably good enough), or some that they simply write unit tests (which is apparently the same thing as TDD).

Given this, I want to post a little reminder on what TDD is, and what it is not.

Test Driven Development or Development Driven Tests?

Let’s be clear – there’s a big difference between what I call TDD and its inverse, DDT. The former has a well-known three-step structure known as Red-Green-Refactor: –

  1. Write a failing test
  2. Write enough code to make the test pass
  3. Refactor

Repeat ad infinitum. Sounds easy, doesn’t it? As it turns out, it’s surprisingly easy to accidentally drift from this well-established process into the latter process, DDT.

How to break from TDD?

Firstly, the “failing test” bit is there for a good reason, although it’s often not clear to people what that reason is. It’s there to help ensure that your test actually tests what it’s supposed to: if the test doesn’t go red initially, how do you know it tests anything at all? Worse still, you could write a test that doesn’t actually test what you think it does, see it pass, and then discover the bug later (these are some of the worst kinds of bugs you can get when not performing TDD in a disciplined manner).

Another kind of lapse is to simply treat unit testing as TDD. Sadly these two distinct practices have been grouped together as one because TDD really brought unit testing to the fore (at least, within the .NET community). However, I’ve interviewed many a developer before who says that they practice TDD but when asked about it, they simply mean “they write unit tests”. This is all well and good – but I’d rather people simply said that rather than suggest that they “do” TDD.

Why not DDT?

Personally, I’m not a big fan of simply writing unit tests after the fact. Firstly, you don’t know when you’ve got complete test coverage – don’t assume that code coverage of 100% means that you have complete coverage. All it means is that you’ve run through every line of code under test. A more accurate indicator of complete coverage is “if I comment out a single line of code, will I break at least one unit test?”. Secondly, you don’t get any of the free warning signs that TDD gives you about violating SRP. When you write tests after the fact, you can end up with god classes that are difficult to fully test. What you then get is a set of tests that often don’t fully cover all paths of your code, but just what you consider to be the “main” paths. Sadly, those are often the ones that you don’t need to test as much as the exceptional cases. It’s actually very difficult to retrospectively write unit tests for production code to a high degree of coverage.

Conclusion

The process of TDD uses unit tests as a mechanism for driving production code. The act of writing unit tests alone is not enough to say “I know TDD” or “I practice TDD”.

What is Unit Testing not?


An ex-colleague of mine was telling me about a situation that he experienced on a project recently whereby his team had written a comprehensive unit test suite for a component designed to generate XML files for consumption in another system. Their code passed all the unit tests, yet when they delivered the component to the client system, it transpired that their component was not generating the XML in the correct format.

We spoke about the benefits and costs of unit testing and ended up discussing the value of unit testing – after all, his developers had written a bunch of unit tests, but the project still had issues when running against the final system – issues that it was felt should have been identified earlier.

I was of the opinion that unit tests were being misused here; something did not sound quite right to me. I felt acceptance tests were required rather than unit tests. In order to explain why I felt this, let’s define what unit and acceptances tests generally are – and what they are not.

Defining Unit Tests

What are unit tests for? They serve several purposes e.g. allowing refactoring with confidence, regression testing, protecting code from violating SRP, providing developer-level documentation etc.. Note that nowhere here do we mention anything about unit tests defining requirements. How could they? The tests are most likely written by the developer who writes the code.

Who are unit tests for? The developers. They are there to give the developer confidence that what they think they have written is what they really have written.

Unit tests are not there for a BA or PM to say “all the unit tests are green – therefore the requirements have been met”. Never fall into this trap. It’s an easy one to fall into, and for simple components, or ones where the problem space is easily defined, you may be able to get away without anything more. But once you get into complex systems, or areas that are difficult to prove with simple unit tests, you need something more.

Acceptance Tests

What are acceptance tests for? They protect against the problem of having “fuzzy” requirements. They force the authors of the test to really think about what they expect to happen in a given situation. They should also provide a form of regression suite that is ideally available to the whole team.

Who are acceptance tests for? The business. They give them confidence that the system delivered does what they asked. As a secondary beneficiary, it should also serve the developers to provide them with guidance on what the expectations from the business are.

Let me be clear: Never have a developer write acceptance tests. Whether this acceptance test is materialised by a C# unit test, or Fitnesse or Cucumber is irrelevant. The main point is that they are delivered by someone outside of the developers writing the code, and represent a contract that must be fulfilled by the developer.

Conclusion

This post is probably beginning to sound a bit like software development 101, but you’d be surprised how often this part is left out of the development life cycle, sometimes with extremely costly results. The only way you can truly have confidence in your ability to fulfil requirements is to code against acceptance tests that are written (ideally) by the business, or at least a business analyst, who has a good understanding of what is needed. These tests run repeatedly against your code and give everyone confidence that what you are writing is what is actually required.

Think back to the scenario that I illustrated at the start of this post. Why had this situation occurred? Because unit tests were being treated as a form of acceptance test. This is something that unit tests should never be used for. It’s dangerous because it gives a false sense of security to everyone involved. I’m a big fan of unit testing, and have found designing and developing software much more enjoyable since using TDD as a day-to-day practice. But at the same time, I’d hate it to unfairly come under criticism for not providing something it doesn’t claim to.

Unit Tests prove that the system does what the developer thinks it should do

Acceptance Tests prove that the system does what the client needs it to do