The Dependency Inversion Principle


This is something I first read about in the Agile Patterns & Principles book a good 18 months ago. I tried using it in a project back then and was partially successful, but I’m now using it “properly” on my current project and thought it was worth blabbing a little about it.

Here’s a simple example of how the two layers of a simple two-tier system might currently talk to each other:

[Image: the CustomerDal and BusinessLogic classes, with BusinessLogic holding a direct reference to the concrete CustomerDal]
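In code, the setup is something along these lines (the class and method names here are purely illustrative):

```csharp
// Two-tier setup: the business logic holds a hard reference to the concrete DAL.
public class CustomerDal
{
    public string GetCustomerName(int customerId)
    {
        // ...talk to the database here...
        return "Joe Bloggs";
    }
}

public class BusinessLogic
{
    // Direct, concrete dependency on the DAL class.
    private readonly CustomerDal dal = new CustomerDal();

    public string GetCustomerDisplayName(int customerId)
    {
        return dal.GetCustomerName(customerId).ToUpper();
    }
}
```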

You’ll notice that there’s a concrete reference between the two classes. This is often fine, but sometimes you’ll want to abstract this relationship away (don’t you find that everything in computer science comes down to abstractions? 🙂) and have an interface in between as a level of indirection. Why? Perhaps you have a number of concrete implementations and want a single piece of code that can act over any of them. Or maybe (as I do in my current project) you want to mock your DAL, replacing the “real” DAL with a stub that helps you write your unit tests.

So, you create an interface based on your DAL class, like so (note: it just so happens that Visual Studio has an “Extract Interface” refactoring built in for exactly this sort of scenario):

[Image: the ICustomerDal interface extracted from the CustomerDal class]
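The extracted interface ends up looking something like this (again, the members shown are only illustrative):

```csharp
// The extracted interface; at this point it still lives in the DAL assembly,
// right next to the class it was extracted from.
public interface ICustomerDal
{
    string GetCustomerName(int customerId);
}

public class CustomerDal : ICustomerDal
{
    public string GetCustomerName(int customerId)
    {
        // ...talk to the database here...
        return "Joe Bloggs";
    }
}
```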

Then, in e.g. your Business Logic class, you perhaps have one constructor which by default uses the “real” DAL, and another which can take in a mock version or similar. Otherwise, your BLL class only ever talks to the interface. Great!

[Image: the BusinessLogic class talking to ICustomerDal, with a default constructor that news up the real CustomerDal]
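Something along these lines (the exact constructor shapes are a sketch rather than the real code):

```csharp
public class BusinessLogic
{
    private readonly ICustomerDal dal;

    // Default constructor: production code gets the "real" DAL.
    public BusinessLogic() : this(new CustomerDal()) { }

    // Test code can pass in a mock or stub implementation instead.
    public BusinessLogic(ICustomerDal dal)
    {
        this.dal = dal;
    }

    public string GetCustomerDisplayName(int customerId)
    {
        // From here on, the BLL only ever talks to the interface.
        return dal.GetCustomerName(customerId).ToUpper();
    }
}
```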

Except there’s a problem with this approach that’s both logical and physical – the placement of your interface. By putting it in the DAL, the BLL still has to reference the DAL directly. It’s still physically tightly coupled to the DAL assembly; if you wanted to swap the implementation over, you’d theoretically have to redeploy both assemblies, even though the BLL isn’t interested in the concrete implementation, only the interface.

How do we solve this problem? With the DIP. It states that you keep the interface – the contract – close to the client of that contract, rather than close to the server (the implementation) of it. When you think about it, why should a change in the physical implementation of that interface be tightly coupled to the interface itself? Even worse, why should you be able to break the client simply by changing the server DLL!?

So, you change the relationship like so:

[Image: the inverted relationship: ICustomerDal now lives in the business logic assembly, and the DAL assembly references it]
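In code, the only thing that really moves is where the interface is declared. Something like this, with illustrative namespace names standing in for the two assemblies:

```csharp
// Business logic assembly: the contract now lives next to its client.
namespace MyCompany.Bll
{
    public interface ICustomerDal
    {
        string GetCustomerName(int customerId);
    }
}

// Data access assembly: it now references the BLL assembly and implements
// the contract, so the direction of the dependency has been inverted.
namespace MyCompany.Dal
{
    using MyCompany.Bll;

    public class CustomerDal : ICustomerDal
    {
        public string GetCustomerName(int customerId)
        {
            // ...talk to the database here...
            return "Joe Bloggs";
        }
    }
}
```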

So, now the business logic is closely related to the interface: you cannot deploy a change to the interface without redeploying the business logic client itself. However, there’s a catch… you may have asked yourself: how does the BusinessLogic get a reference to the real CustomerDal class? It doesn’t reference that assembly any more! So what do you do? There are a number of options. One is to have a third “controlling” assembly which references both of the assemblies above and passes the concrete implementation into the business logic at runtime. This works, but requires manual effort, and is really just boilerplate factory code that injects the objects for you.
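A rough sketch of that “controlling” assembly option, with made-up names, and assuming the BusinessLogic class exposes a constructor that takes an ICustomerDal:

```csharp
// A third "controlling" assembly (e.g. the application host) references both
// the BLL and DAL assemblies and wires them together by hand.
public static class Program
{
    public static void Main()
    {
        // The only place the concrete type is ever mentioned.
        ICustomerDal dal = new CustomerDal();

        // Hand the implementation to the business logic at runtime.
        var logic = new BusinessLogic(dal);

        System.Console.WriteLine(logic.GetCustomerDisplayName(42));
    }
}
```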

A much more elegant solution is to use something like Unity, which is basically an object factory (more formally, an inversion of control container). You specify, in your application configuration file, what gets resolved to what, e.g. ICustomerDal maps to CustomerDal. Then in your BLL, you simply tell Unity: “give me an object of type ICustomerDal”. This way, you never reference the physical implementation. You can mock up a fake data access layer in your unit tests. Your code is loosely coupled. It – just – works! 🙂

So what I’ve done in this final version of the code is effectively take out the code which decides what object the ICustomerDal should be:

[Image: the final BusinessLogic code resolving ICustomerDal from the Unity container]
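In sketch form it looks something like this (the wrapper class and names are illustrative rather than lifted from the real project):

```csharp
using Microsoft.Practices.Unity; // the Unity 2.x-era namespace

// A simple singleton wrapper around the Unity container.
public static class Container
{
    public static readonly IUnityContainer Instance = new UnityContainer();
}

public class BusinessLogic
{
    private readonly ICustomerDal dal;

    public BusinessLogic()
    {
        // The decision about *which* ICustomerDal to use now lives in the
        // container's configuration rather than in this class.
        this.dal = Container.Instance.Resolve<ICustomerDal>();
    }

    public string GetCustomerDisplayName(int customerId)
    {
        return dal.GetCustomerName(customerId).ToUpper();
    }
}
```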

The Container class above is a simple singleton instance of the Unity Container.

UPDATE: Note that, as has been pointed out, the above code is NOT a valid implementation of the dependency injection (DI) pattern. I’ve left this intentionally as I do not want this post to confuse matters in terms of “what is DI?” versus “what is DIP?”, which are two separate concepts. You can have DIP without DI (as above), although DIP is more effective with DI, and you should probably use a container like Unity to do the grunt work for you. Similarly, you can have DI without using DIP. The two are closely related, and often go hand-in-hand, but they don’t have to.

The first time you do things this way it might feel “wrong” – you’re probably used to doing downwards-facing references. But once you get over that initial “ugh” feeling, you’ll see the elegance of doing things this way. I think that it’s really smart anyway!

I’ll talk about Unity in more detail another time; suffice it to say that you can configure the mappings between the interface and the concrete type in your app.config file, or through code, in an easy-to-read fashion.
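As a taster, wiring up a mapping in code is essentially a one-liner at start-up. A rough sketch (the Bootstrapper class is just illustrative):

```csharp
using Microsoft.Practices.Unity;

public static class Bootstrapper
{
    // Called once at application start-up; the same mapping can be expressed
    // declaratively in app.config / web.config instead.
    public static void Configure()
    {
        Container.Instance.RegisterType<ICustomerDal, CustomerDal>();
    }
}
```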


2 thoughts on “The Dependency Inversion Principle”

  1. A great write-up in principle but it suffers from the same problem that I see and read about many times regarding DI containers. Basically, you have implemented the Service Locator pattern here, and replaced your DAL dependency with a dependency on your DI Container. Every time you need to inject a dependency within your code you will introduce another dependency on the DI container. It doesn’t take very long before your code is completely tightly coupled to your container framework and there’s no way to break apart those dependencies. Believe me, it can get to the point where it is more painful than having concrete dependencies in your code.
    The point about Dependency Injection is that you are injecting your dependencies into the classes that need them, either via properties or the constructor. That way, you can keep all your boilerplate code that maps your interfaces to your concretes in a single location and have at most one single call, at the entry point to your application, that hits your DI Container and starts the process of resolving all your dependencies.
    The other problem I have with this approach, which tends to be a fairly even split in terms of for and against, is the use of XML configuration rather than a fluent interface. For me, XML config suffers from a lack of type safety and intellisense, which can cause some annoying and hard-to-track-down bugs due to namespace typos, etc.
    Paul Hiles has written an excellent series of posts that highlight some of these issues: http://bit.ly/tKbqrf

    1. OK – I was going through some old blog posts to clean up some of the code and saw this one and was in the process of correcting this exact issue…

      Yes, you’re absolutely right re: service locator. There is a very good reason why this post uses that pattern here – which I’ll explain in just a second 🙂 If I were to do it today, I wouldn’t have any dependency in the class on a container at all, except for perhaps [Dependency] attributes on my dependency properties. As for XML configuration – I never use it any more. I did it once on a single project and have regretted it ever since – as you say, it’s error-prone and, worse than that, it’s hard to read and modify. I would only use it nowadays where I have to change dependencies post-deployment, which rarely happens – I find that most dependencies have static mappings. Nowadays I tend to use a naming convention approach or something similar, and on startup of my AppDomain use reflection to generate the mappings.

      Now, as for why I’m using Unity “wrongly” in this example – it was written over two years ago; I was writing a system using Smart Client Software Factory, which generates views for you using the MVP pattern. If we wanted to inject dependencies into our code using Unity, the only way to do it was through the service-locator style pattern rather than using the container properly. This was because SCSF was tightly coupled to the old ObjectBuilder (the precursor to Unity) and we didn’t want to use that (can’t remember the reasons why…) for managing our dependencies. For this demo I just ripped out some of that code as an example and changed the class names etc.

      Anyway – I’ve decided to clean up the code sample a little, but I’ve actually left in the service locator pattern rather than go to a full DI example. That’s because this blog post was on “what is the dependency inversion principle” rather than “how do I do dependency injection” – and, in true SOLID fashion, I want to leave the post dealing with just one thing rather than two things at once 🙂
