First experiences of Telerik’s JustMock


Problems with Moq

Having migrated from Rhino Mocks over to Moq, I have found myself lately getting more and more frustrated with the verbosity of Moq for simple assertions. I present as exhibit one the GetPayroll method, called below. I want to assert that I call the Log method with the result of GetPayroll.
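For illustration, the kind of verification I mean looks something like this in Moq. This is a sketch only – the real GetPayroll and Log signatures aren't shown here, so IPayrollService, ILogger, Payroll and the test wiring are all assumed names:

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;
using Moq;

// Hypothetical interfaces – only the GetPayroll and Log names come from
// the example above; everything else is assumed for a complete sketch.
public interface IPayrollService { Payroll GetPayroll(int employeeId); }
public interface ILogger { void Log(Payroll payroll); }
public class Payroll { }

[TestClass]
public class PayrollTests
{
    [TestMethod]
    public void ProcessPayroll_Always_LogsThePayrollReturnedByGetPayroll()
    {
        // Arrange: stub GetPayroll to return a known payroll.
        var payrollService = new Mock<IPayrollService>();
        var payroll = new Payroll();
        payrollService.Setup(s => s.GetPayroll(1)).Returns(payroll);
        var logger = new Mock<ILogger>();

        // Act: stands in for the code under test, which would call
        // GetPayroll and pass the result to Log.
        logger.Object.Log(payrollService.Object.GetPayroll(1));

        // Assert: noticeably wordier than Rhino Mocks'
        // logger.AssertWasCalled(l => l.Log(payroll)).
        logger.Verify(
            l => l.Log(It.Is<Payroll>(p => ReferenceEquals(p, payroll))),
            Times.Once());
    }
}
```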


Six Months Down The Line – Unit Testing


So, the project that I’ve been working on for the past six months or so is finally drawing closer to its first release date. These days I try to stop talking about a project “coming to an end” when it goes live – which, to me, seems like a waterfall way of talking about the software lifecycle, i.e. “development” is a phase which ends and no more changes ever happen to a project. In reality, the first release should be no different from future releases, whether they contain one feature or a hundred (notwithstanding the fact that the first release might be a slightly bigger bang than the others). Anyway, I wanted to blog about some of the different techniques and technologies that we used on this project. First up:

Unit Testing and TDD

Without a doubt this is the biggest difference from projects I’ve previously worked on, certainly from a development perspective. I’d used unit testing and TDD on a couple of other projects, but this one is the first where we really tried to “push it to the max” and adopt it 100% from the start. Having had some experience of TDD, I felt that I was ready both to help others learn how to do it and to adopt it as part of our project coding standards. So this post is about some of the lessons I learned from using TDD on a medium-to-large project. Hopefully you can avoid some of the pitfalls I fell into and save yourself some time and money! I’m not going to talk about the usual cost/benefit argument of TDD, as that’s been done to death. Instead, I want to discuss the side effects, both good and bad, that you might encounter from using TDD.

Done Means Done

TDD enabled the team to define, with a degree of confidence, when “done” is really “done”. It makes you think hard about what you’re coding: business rules, interactions with other dependencies and so on. In my case, our project interacts with a number of other systems and endpoints.
Integration testing with them is difficult enough when either you don’t have an instance of the client system in your development environment and are just coding to a contract, or when arranging the data for testing purposes is time-consuming and difficult to repeat. Unit testing, with the aid of Rhino Mocks, allowed us to develop the system and, rather than leaving all the integration testing until the end, to mock the external systems right from the start. This really helped us think about how we should respond under different conditions, e.g. when the other system is unavailable. I can’t stress enough how important this has been to us! It also got us thinking about requirements: we would write a seemingly benign test and think “hang on, we’ve not discussed that with the users. What would happen if value xyz was passed in?”, and this would sometimes start a chain reaction of question/answer sessions with the users. In a way this slowed us down sometimes, but it was worthwhile because we’d have had to deal with it later anyway. The upshot was that once we had finished a feature and moved on to the next one, we felt like we actually had a list of points that we could refer to to prove “look, we’ve done all of this and it works”.

How Much Should You Test?

I started off thinking that, within our application, we’d test just the service layer, and not bother testing the presentation layer unless it was really important (I’m talking Smart Client Software Factory MVP here). However, as I coded more and more, I found myself writing more tests for the presenters – it simply made my life easier. I didn’t have to keep running the whole application to test out a couple of lines of code I’d written – my unit tests would do that for me. In fact, I now sometimes go hours without even hitting F5. I would say that you should trust your instincts – you’ll improve with time at knowing when to write tests and when not to.
Just don’t rule it out in any particular area of your application on principle. We ended up writing unit tests on all tiers of the project except for the two ends of the spectrum: the actual UI tier (no automated tests) and the low-level data layer (i.e. our LINQ queries over our Entity Framework model). But we minimised both by using MVP for the front end and testing the presenters, and by keeping the data queries in their own class which did nothing but execute queries and attach entities to the data context; all our service-oriented logic was in a separate class which had near-enough 100% code coverage.

MSTest

The project is a reasonable size – maybe 25 projects/assemblies, using SCSF. We used MSTest as our test framework and runner. In general, this was a positive experience. However, I do have some criticisms of it:

- Inheritance of unit test classes is average at best. The biggest issue I had was that if you want a base TestInitialise method to fire on a number of test fixture classes, you can’t do it. Our workaround was to put the reusable code in the base class, and all child classes had to explicitly call the base class method in their test initialisation.
- The test runner in MSTest is pretty slow. We have around 1,700 unit tests currently. Each one is fairly small and takes just a few milliseconds to run. However, when you hit CTRL-R, A to run all tests, initialising the test run and closing it at the end takes maybe 5-10 seconds. And if you stop the unit tests halfway through a run, that’s another 5-10 seconds for VS2008 to “think about it”.
- There’s no ability (that I could find) to switch between different test profiles as part of Team Build, e.g. one with code coverage enabled and one without.
- We had a bug on all our machines whereby the test runner would not debug tests. It would run them fine, but on debug the VSTestHost.exe process would simply stall.
We never did fix it, although we tried TestDriven.NET’s test runner and that worked fine.

Having said all that, overall I am satisfied with the features of MSTest. The best part is the integration with VS and TFS. Obviously, there’s no need to purchase and install a separate test runner. There’s also a relatively short learning curve, as it integrates so well into VS. TFS integration is also excellent – it’s easy to plug into Team Build for CI, and reports use the results as well. You can also tie unit tests to work items through attributes, although this didn’t work as well as I was hoping (I would have liked to have seen reports or TFS screens showing Work Item/Unit Test relationships).

Keeping Unit Tests Granular

Even though I’d written unit tests on a couple of projects before, I didn’t realise just how difficult they can be to write when you’re holding them up as the standard within a project, or practicing TDD from the start! Not in a bad way, I suppose – more in terms of “the art of writing good tests”. Halfway through the project we adopted a few different practices for our unit tests; the biggest change by far was the three-part naming convention for unit tests:

MethodUnderTest_Scenario_ExpectedBehaviour

e.g. LoadCustomer_DatabaseIsDown_ReturnsNull or whatever. Doing this helps so much in writing small unit tests with clear boundaries for what they should and shouldn’t do. Before we had this convention, many of our unit tests had (and some still do) several asserts or, worse still, several “actions” within them. It was unclear what the unit tests were doing and how they functioned. Worse still, if you broke a test through e.g. a breaking change, it was difficult to understand several weeks later just what the unit test was doing or how to fix it.
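Sketched out around the hypothetical LoadCustomer example, the convention produces small, sharply-bounded tests like these. This is a Rhino Mocks-style sketch: ICustomerRepository, CustomerService and all the wiring are invented for illustration.

```csharp
using System.Data;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using Rhino.Mocks;

// Illustrative types – not the real project code.
public interface ICustomerRepository { Customer Load(int id); }
public class Customer { public int CustomerId; public string Name; }
public class CustomerService
{
    private readonly ICustomerRepository _repository;
    public CustomerService(ICustomerRepository repository) { _repository = repository; }

    // Hypothetical behaviour: swallow data-layer failures and return a
    // sentinel customer instead.
    public Customer LoadCustomer(int id)
    {
        try { return _repository.Load(id); }
        catch (DataException) { return new Customer { CustomerId = -1, Name = null }; }
    }
}

[TestClass]
public class CustomerServiceTests
{
    private CustomerService _service;

    // Shared arrange step: simulate the database being down.
    private void GivenDatabaseIsDown()
    {
        var repository = MockRepository.GenerateStub<ICustomerRepository>();
        repository.Stub(r => r.Load(Arg<int>.Is.Anything))
                  .Throw(new DataException("database down"));
        _service = new CustomerService(repository);
    }

    // MethodUnderTest_Scenario_ExpectedBehaviour – one behaviour per test.
    [TestMethod]
    public void LoadCustomer_DatabaseIsDown_ReturnsACustomer()
    {
        GivenDatabaseIsDown();
        Assert.IsNotNull(_service.LoadCustomer(1));
    }

    [TestMethod]
    public void LoadCustomer_DatabaseIsDown_CustomerIdIsMinusOne()
    {
        GivenDatabaseIsDown();
        Assert.AreEqual(-1, _service.LoadCustomer(1).CustomerId);
    }

    [TestMethod]
    public void LoadCustomer_DatabaseIsDown_CustomerNameIsNull()
    {
        GivenDatabaseIsDown();
        Assert.IsNull(_service.LoadCustomer(1).Name);
    }
}
```

Each test shares the same arrange step but asserts exactly one thing, so a failure pinpoints the broken behaviour by name.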
By keeping your unit tests small, it may take a few seconds or minutes longer to write two or three small unit tests instead of one larger one, but it pays dividends once you come to change the code under test later. We used a couple of “tests” to check whether a unit test was doing too much:

- More than one assert. I don’t adhere to this rule religiously, but if I had more than one or two, I’d ask myself whether I could or should split this into a couple of tests.
- The test name is too long. If you have a test method whose name reads too long – or has telltale signs such as the word “With” or “And”, particularly in the Behaviour part of the name – this is generally a sign that the test is doing too much, e.g. LoadCustomer_DatabaseIsDown_ReturnsACustomerWithCustomerIdOfMinusOneAndNameOfNULL. This could (should?) be rewritten as two or three unit tests, e.g. LoadCustomer_DatabaseIsDown_ReturnsACustomer, LoadCustomer_DatabaseIsDown_CustomerIdIsMinusOne and LoadCustomer_DatabaseIsDown_CustomerNameIsNULL.

They’d probably have very similar set-up; the only difference would be the assertion. The benefit of this approach is that if you decide to change the customer name for null customers to something else in future, only that one unit test breaks, so it’s clear which unit test has broken, rather than just a part of one. You could achieve this in one test if you put in appropriate messages for each assertion, but you then run the risk of the test set-up growing too large in cases where the two assertions have little relation to one another. I found it far easier to go with lots of small unit tests rather than a few large ones.

Treat Unit Tests as First-Class Citizens

This means that attempting to write unit tests as quickly as possible, with no thought given to their maintainability, is a fallacy and will quickly come back to bite you in the bum – hard. In other words, treat your unit test code as you would treat your main application – to a point.
For us this meant things like:

- Code must be easily readable.
- No uber-large methods.
- Avoid copying and pasting large reams of code between tests – use shared methods where possible, with logical names indicating which part of the set-up they perform (very important!).

However, we didn’t do things like turn on code analysis on our test projects.

I actually watched an interesting video podcast from the guys at Typemock where Gil and Dror mentioned that there’s nothing wrong with copying and pasting code between unit tests. I have to say that (unless I misunderstood them) I disagree with that statement. Yes, unit tests are not production code. However, if you have lots of set-up code for a test (perhaps creating some mocks, injecting into your IoC container etc.) then it makes sense to refactor that into a method that can be called by all your unit tests. I’m not necessarily proposing pushing that into your test init method – that often makes matters worse, as your tests become unreadable – but an easy-to-read method that is explicitly called by your tests, e.g. StubLoadCustomerService(Customer customerToReturn). Obviously that’s a contrived method which might only have one or two lines in it – but if you’ve got five to ten lines of boilerplate set-up, push it into a method. This will not only make your code easier to maintain in the case of a breaking change but, more importantly, will make your tests much more readable, and this is key if someone else breaks one of your tests.

In fact, the biggest cost of unit tests that I found wasn’t so much writing the tests the first time – it was the impact on your tests when you refactor the code under test. Simple changes like method names etc. aren’t a problem – it’s when you fundamentally change the way your code works that your unit tests totally break. In these cases, having a set of reusable methods shared across multiple unit tests will dramatically reduce the cost of getting your unit tests up and running again.
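A sketch of what that explicitly-called helper might look like. Only the StubLoadCustomerService name comes from above; ICustomerService, Customer, the Unity wiring and the test itself are assumptions:

```csharp
using Microsoft.Practices.Unity;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using Rhino.Mocks;

// Illustrative types – stand-ins for the real project code.
public interface ICustomerService { Customer LoadCustomer(int id); }
public class Customer { public string Name { get; set; } }

[TestClass]
public class CustomerPresenterTests
{
    private readonly IUnityContainer _container = new UnityContainer();

    // Five-to-ten lines of boilerplate collapsed into one readable,
    // explicitly-called method whose name says what it sets up.
    private ICustomerService StubLoadCustomerService(Customer customerToReturn)
    {
        var customerService = MockRepository.GenerateStub<ICustomerService>();
        customerService.Stub(s => s.LoadCustomer(Arg<int>.Is.Anything))
                       .Return(customerToReturn);
        _container.RegisterInstance(customerService);
        return customerService;
    }

    [TestMethod]
    public void DisplayCustomer_CustomerExists_ShowsCustomerName()
    {
        // The arrange step reads as one line instead of repeated boilerplate.
        StubLoadCustomerService(new Customer { Name = "Fred" });
        // ... act on the presenter and assert against the view ...
    }
}
```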
Inheritance Can Be a Good Thing

Ever since I became familiar with SOLID and read Robert Martin’s “Agile Patterns…” book, I’ve been a bit wary of over-using inheritance; I try to avoid deep, nested inheritance. However, in this case a single level or two can be a massive help. For example, testing your presenters in Smart Client always involves some boilerplate code: creating a presenter, providing the appropriate view to it, and normally placing a mocked work item as well. So we made a unit test fixture base class for presenters, which looks something like this:

YourPresenterTestFixture : PresenterTestFixtureBase { }

This class gives you, for free, a presenter, view and work item, all created together in a consistent way, with the property names consistent across all tests. We made a few more base classes to enable easier injection into the IoC container (Unity), and some for SCSF-specific stuff. Once we did this, it helped so much with writing new unit tests that we went back and pulled all the existing ones into this structure as well. Because these base classes don’t do “too much”, it’s still clear what they do and what’s going on in the unit tests – in fact it helps massively, as the boilerplate code is removed and you can just get on with the main unit testing.

Keep Compile Times Down!

You’ll be running your unit tests very often, and as a result compiling a lot – more so than you would when writing code without TDD, in fact. So make sure that compile times are as low as possible – if they’re not, you’ll get developers either not bothering to write tests, or spending more time twiddling their thumbs waiting for the compiler than writing tests.
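To make the presenter fixture base class described earlier concrete, here is a minimal sketch. The generic parameters, member names and work item handling are all assumptions (the original markup appears to have stripped the type arguments), and SCSF's concrete WorkItem type is replaced by a placeholder interface so the sketch stays self-contained:

```csharp
using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using Rhino.Mocks;

// Placeholders – SCSF's real WorkItem is a concrete class; an interface is
// used here purely so the sketch is self-contained and easily mockable.
public interface IWorkItem { }
public interface IYourView { }
public class YourPresenter
{
    public YourPresenter(IYourView view) { }
}

// One level of inheritance that hands every fixture a presenter, stubbed
// view and mocked work item with consistent member names.
public abstract class PresenterTestFixtureBase<TPresenter, TView>
    where TView : class
{
    protected readonly TView View;
    protected readonly IWorkItem WorkItem;
    protected readonly TPresenter Presenter;

    protected PresenterTestFixtureBase()
    {
        // MSTest creates a fresh fixture instance per test, so constructor
        // set-up sidesteps the base-class TestInitialise limitation.
        View = MockRepository.GenerateStub<TView>();
        WorkItem = MockRepository.GenerateStub<IWorkItem>();
        Presenter = (TPresenter)Activator.CreateInstance(typeof(TPresenter), View);
    }
}

[TestClass]
public class YourPresenterTestFixture
    : PresenterTestFixtureBase<YourPresenter, IYourView>
{
    // Presenter, View and WorkItem are ready, consistently named,
    // in every test written here.
}
```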
Just a few seconds saved on a typical compile will save you hours and days of developer time and money – this is about removing impediments and making life as pleasurable (or not as painful, depending on your perspective) as possible for your team so that they can concentrate on coding. I spent a day or so tracking down an issue with Entity Framework which was causing a full rebuild of every assembly (remember, this is about 25 of them) whenever you changed a unit test. Once we fixed this, compile times went down from 30-40 seconds to around 5-10 seconds. Imagine a three-developer team, each conservatively doing 100 compiles a day caused by changes to unit tests:

- Time saved by fixing the EF issue, per compile: 30 seconds
- Compiles per day: 100 x 3 = 300
- Time saved per day: 300 x 30 / 60 / 60 = 2.5 hours

I’m not suggesting that you’ll find a time saving like that every week (in fact, if you do, that’s bad, because you’ve been losing 2.5 hours every day up until then) but even a saving of 5 seconds per compile will help. It’ll keep your team happier and more in the zone of coding instead of being distracted while waiting for the compile, and get stuff done quicker. Try turning off code coverage on your tests to get them running quicker (maybe have a separate build solution which has it turned on), or tune the project build settings to keep compiles quick – check your references so that only real dependencies are referenced, etc.

Mocks and Stubs

Or Isolations(?), depending on which mocking framework you use 😉 We use Rhino Mocks as our mocking framework. I’m quite lucky in that it was the first framework we looked at and, as it turns out, it’s the most popular. On the first project I used TDD for, I quickly realised the need to use interfaces to break dependencies and unit test instead of integration test. However, I then didn’t take the next step of looking at mocking frameworks, and hand-crafted all my mocks!
Obviously this took time – but in retrospect it was a good experience, as I now appreciate frameworks such as Rhino and when to use them. Coming quite late to the stubs/mocks game, I learned that there were the Arrange/Expect crowd and the Arrange/Act/Assert crowd; I quickly fell into the latter – it feels more natural to me, more logical and, importantly, easier to learn. I’m also happy to have read that Rhino 4 will be rewritten to completely dump the Expect() mechanism and move exclusively to AAA. People repeatedly told me that they found there were too many ways to accomplish the same goal in Rhino; hopefully R4 will remove that issue. There’s still a fair amount of time you need to invest in learning Rhino – we’re still figuring out new things today that we wish we had known six months ago – but compared to hand-writing your mocks (or not mocking at all!), it’s a small price to pay.

Getting People into TDD

This has without doubt been the hardest part. Thankfully the people on my team have been open to new ideas and, with some coaching, have picked it up now – the thing I’ve struggled with is getting it adopted by other teams as well. I often hear things like “we don’t have the budget” or “it’s not appropriate on this project”. Slowly, though, I think I’m breaking down the barriers – TDD is hard to learn; it’s almost like learning another way to code, so it’s only fair to assume that people will take time to get into it. Hopefully, as people on my team work on other projects, they’ll disseminate the skills across the organisation. That’s the plan anyway 🙂

Summary

TDD has been a massive benefit to my project. It’s not been easy, but the lessons we’ve learned on this project are a one-off which we won’t encounter again.
And as we start to gear up for our first release, I’m hopeful that the application won’t suffer from a high number of defects in testing or from code smells over time, because of the work we’ve done in providing a framework and set of standards for maintaining the quality of code.

Mocks vs Stubs


I've been using Rhino Mocks for a while on my current project, and recently came across a few articles weighing up the pros and cons of mocks vs stubs. To me, I don't get what the fuss is about within the context of a unit test. Most if not all of our unit tests follow the AAA pattern. So I might have something like:

    // Arrange
    var dbLayer = MockRepository.GenerateStub<IDbLayer>();
    var customers = new List<Customer>();
    dbLayer.Stub(db => db.LoadCustomersByRegion(1)).Return(customers);
    // ... register into e.g. Unity etc.

    // Act
    var myServiceLayer = new ServiceLayer();
    var result = myServiceLayer.GetCustomersForRegion(Regions.Uk);

    // Assert
    dbLayer.AssertWasCalled(db => db.LoadCustomersByRegion(1));
    Assert.AreSame(customers, result);

This sort of approach seems fairly logical – to me – and easy to read, i.e. creation of stubs and setting what methods will return is done at the start of the unit test, and assertions are at the end. However, I read about other people using Replay, Expect and VerifyAllExpectations etc. Martin Fowler's definition of a stub seems to be the "classical" one, i.e. typically a method which is written to return some hard-coded values in the absence of "real code". However, in the context of unit tests like the one above, stubs effectively become mocks. I've read a few scattered articles where people comment that using stubs in the above manner is "wrong" and that one should never place expectations on stubs – why not? It all works for me – and frankly it reads more logically than the Replay/Expect mechanism. However, as someone who is a relative newcomer to mocking frameworks (maybe the last six months of using them in anger), maybe I'm missing something here – please feel free to tell me 🙂
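For comparison, the Expect-style approach I find less readable looks something like this. It is a sketch reusing the same dbLayer/LoadCustomersByRegion example as above, so the type names (IDbLayer, Customer, ServiceLayer, Regions) are assumptions rather than real project code:

```csharp
// Arrange: a mock (not a stub) with an explicit expectation recorded up front.
var dbLayer = MockRepository.GenerateMock<IDbLayer>();
var customers = new List<Customer>();
dbLayer.Expect(db => db.LoadCustomersByRegion(1)).Return(customers);
// ... register into e.g. Unity etc.

// Act
var myServiceLayer = new ServiceLayer();
var result = myServiceLayer.GetCustomersForRegion(Regions.Uk);

// Assert: verify every recorded expectation was met, rather than asserting
// specific calls at the end of the test.
dbLayer.VerifyAllExpectations();
```

The behaviour verified is the same either way; the difference is that the expectation is declared during the arrange phase instead of being asserted at the end, which is exactly what makes the AAA version read more naturally to me.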