Noninvasive Unit Testing in ASP.NET MVC4 – A Microsoft Fakes Deep Dive

59 Comments
Posted in Unit Testing

A lot of today’s unit testing technologies require significant invasive code changes in order to unit test appropriately. I’ve always been of the mindset that testing your code should be as noninvasive as possible to the system under test, regardless of how that system is designed.

The ability to test a system shouldn’t be dependent on whether or not that system was designed to be compatible with a certain set of testing tools. The design of systems should instead be driven by the needs of the problem domain, while complexity in applied patterns and concept count should only be escalated as it becomes necessary to do so.

In combining the KISS principle and YAGNI with agile architecture, you get an architectural design that, at any given point in time, is the simplest to use, the easiest to work with and as maintainable as the problem domain in question allows.

The KISS principle states that most systems work best if they are kept simple rather than made complex, therefore simplicity should be a key goal in design and unnecessary complexity should be avoided. – Wikipedia

"You ain't gonna need it" or “You aren′t gonna need it” (acronym: YAGNI) is the principle in extreme programming that programmers should not add functionality until it is necessary. – Wikipedia

Up until recently, testing without escalating to certain architectural patterns simply wasn’t possible. Even if you wanted to practice a noninvasive testing style, the means to do so, as well as community support, weren’t generally available.

Conceptually, noninvasive testing tools and methodologies represent a natural progression/evolution in unit testing practices and principles. This is most visible in the evolution of major commercial testing tools, such as TypeMock’s Isolator and Telerik’s JustMock, which now have features to test/mock everything, not just interfaces and base classes. Now, with the introduction of Microsoft Fakes in Visual Studio 11 (more specifically the ability to detour via shimming), we are given all the tools necessary to accomplish noninvasive unit testing built right into our development environment.

See my earlier blog posts on Microsoft Fakes for more background information: Using Stubs and Shims to Test with Microsoft Fakes in Visual Studio 11 and Comparing Microsoft Moles in VS2010 to Microsoft Fakes in VS11

Additionally, Fakes allows us to take the “mockist” approach of behavior verification described by Martin Fowler in his article Mocks Aren’t Stubs.

But as often as not I see mock objects described poorly. In particular I see them often confused with stubs - a common helper to testing environments. I understand this confusion - I saw them as similar for a while too, but conversations with the mock developers have steadily allowed a little mock understanding to penetrate my tortoiseshell cranium.

This difference is actually two separate differences. On the one hand there is a difference in how test results are verified: a distinction between state verification and behavior verification. On the other hand is a whole different philosophy to the way testing and design play together, which I term here as the classical and mockist styles of Test Driven Development.

Later on in his article Martin provides a more concrete example as he discusses the differences.

The key difference here is how we verify that the order did the right thing in its interaction with the warehouse. With state verification we do this by asserts against the warehouse's state. Mocks use behavior verification, where we instead check to see if the order made the correct calls on the warehouse. We do this check by telling the mock what to expect during setup and asking the mock to verify itself during verification. Only the order is checked using asserts, and if the method doesn't change the state of the order there's no asserts at all.

With that said I’ll be using Microsoft Fakes to apply noninvasive and mockist testing techniques to test the AccountController of a default MVC 4 project created using the "Internet Application” template, making absolutely no changes at all to the project. This example will use a mixture of both shimming and stubbing from Microsoft Fakes in order to get the job done.

Getting Started

Let’s take a quick look at the class definition for AccountController.

[Screenshot: AccountController methods to test]

Looking through the implementations, a few of the methods are trivial enough for us to skip as part of this example.

public ActionResult ChangePassword() { return View(); }

public ActionResult ChangePasswordSuccess() { return View(); }

[AllowAnonymous]
public ActionResult Login() { return ContextDependentView(); }

[AllowAnonymous]
public ActionResult Register() { return ContextDependentView(); }

Additionally, we’re going to forego testing ContextDependentView, GetErrorsFromModelState & ErrorCodeToString in favor of the more complex methods. That’s not to say you wouldn’t test these methods for appropriate coverage, just that we’re going to exclude them to keep this post somewhat reasonable in length.
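That said, a test for one of these trivial actions would only take a few lines. Here's a sketch, assuming the same test class setup used throughout the rest of this post:

```csharp
[TestMethod]
public void TestChangePasswordSuccess()
{
    var accountController = new AccountController();

    // Trivial action: just verify that a ViewResult comes back
    var viewResult = accountController.ChangePasswordSuccess() as ViewResult;

    Assert.NotNull(viewResult);
}
```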

Before we get started though, we need to do some basic project setup. 

  • Create a new ASP.NET MVC 4 Application project using the Internet Application template
  • Add a Unit Test project; I renamed the default test class file to AccountsControllerTests.cs
  • Add references to the following items in the Unit Test project
    • the MVC 4 project
    • System.Web
    • System.Web.Mvc
  • Additionally I’ll be using NUnit for assertions, so pull down NUnit from NuGet and add the following using statement to the top of the AccountsControllerTests file:
using Assert = NUnit.Framework.Assert;

After all that your solution should look something like this:

[Screenshot: initial solution configuration]

LogOff Method

We’ll start off with the LogOff method (as seen below), since this is one of the simpler methods we’re going to be looking at.

public ActionResult LogOff()
{
    FormsAuthentication.SignOut();

    return RedirectToAction("Index", "Home");
}

First off, let’s review our goals here. Since our intent with mocking is behavior verification, we want to test both that the correct RedirectToAction was returned and that FormsAuthentication.SignOut() was called. Testing that the correct RedirectToAction was returned seems easy enough, so we’ll start with that.

using System;
using System.Web.Mvc;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using NoninvasiveMVC4Testing.Controllers;
using Assert = NUnit.Framework.Assert;

namespace NoninvasiveMVC4Testing.Tests
{
    [TestClass]
    public class AccountsControllerTests
    {
        [TestMethod]
        public void TestLogOff()
        {
            var accountController = new AccountController();
            var redirectToRouteResult = accountController.LogOff() as RedirectToRouteResult;

            Assert.NotNull(redirectToRouteResult);
            Assert.AreEqual("Index", redirectToRouteResult.RouteValues["Action"]);
            Assert.AreEqual("Home", redirectToRouteResult.RouteValues["controller"]);
        }
    }
}

We get the following results when running our new unit test:

[Screenshot: failed initial unit test]

Looking at the stack trace we can see that a NullReferenceException was thrown from FormsAuthentication.SignOut(). This makes sense: technically we’re not in the context of an actual web request, and FormsAuthentication depends on a valid HttpContext being available. This type of problem is common when testing web applications outside of the context of an actual request to a web server.

The traditional guidance on how to test something like this is as follows (see this StackOverflow post for more information):

  • Create a wrapping class around FormsAuthentication with a public method that runs the necessary method
  • Create an interface for this behavior
  • Use dependency injection in our controller to replace the direct call to FormsAuthentication with that of our wrapping class.

Using this formula, our controller code (not the test code, mind you) would have to be changed as follows:

public interface IAuthenticationProvider
{
    void SignOut();
}

public class FormsAuthWrapper : IAuthenticationProvider
{
    public void SignOut()
    {
        FormsAuthentication.SignOut();
    }
}

public class AccountController : Controller
{
    private readonly IAuthenticationProvider _authenticationProvider;

    public AccountController(IAuthenticationProvider authenticationProvider)
    {
        _authenticationProvider = authenticationProvider;
    }

    public ActionResult LogOff()
    {
        _authenticationProvider.SignOut();
        return RedirectToAction("Index", "Home");
    }
}

As you can see, this pattern is invasive to the system under test. Since the purpose of this post is to apply noninvasive testing techniques, we’re going to consider a different way of testing using Microsoft Fakes. Surprisingly enough, it makes short work of these types of scenarios.

The Noninvasive Approach

Let’s start off by putting in what’s minimally necessary to get our test to pass as is. Right click on the System.Web reference in the test project and select Add Fakes Assembly. Once a Fakes assembly is added for System.Web we can use shims in Microsoft Fakes to detour the call to FormsAuthentication.SignOut() to an implementation of our choosing, hopefully one that won’t throw a NullReferenceException.

using System;
using System.Web.Mvc;
using System.Web.Security.Fakes;
using Microsoft.QualityTools.Testing.Fakes;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using NoninvasiveMVC4Testing.Controllers;
using Assert = NUnit.Framework.Assert;

namespace NoninvasiveMVC4Testing.Tests
{
    [TestClass]
    public class AccountsControllerTests
    {
        [TestMethod]
        public void TestLogOff()
        {
            var accountController = new AccountController();
            RedirectToRouteResult redirectToRouteResult;

            //Scope the detours we're creating
            using (ShimsContext.Create())
            {
                //Detours FormsAuthentication.SignOut() to an empty implementation
                ShimFormsAuthentication.SignOut = () => { };
                redirectToRouteResult = accountController.LogOff() as RedirectToRouteResult;
            }

            Assert.NotNull(redirectToRouteResult);
            Assert.AreEqual("Index", redirectToRouteResult.RouteValues["Action"]);
            Assert.AreEqual("Home", redirectToRouteResult.RouteValues["controller"]);
        }
    }
}

That’s simple enough and it does indeed pass.

[Screenshot: TestLogOff passing]

We still have to test that FormsAuthentication.SignOut() was actually called. All we have to do is flip a boolean inside of the detoured SignOut method and assert against it. Here’s the final method.

[TestMethod]
public void TestLogOff()
{
    var accountController = new AccountController();
    var formsAuthenticationSignOutCalled = false;
    RedirectToRouteResult redirectToRouteResult;

    //Scope the detours we're creating
    using (ShimsContext.Create())
    {
        //Detours FormsAuthentication.SignOut() to our mocked implementation
        ShimFormsAuthentication.SignOut = () =>
        {
            //Set a boolean to identify that we actually got here
            formsAuthenticationSignOutCalled = true;
        };
        redirectToRouteResult = accountController.LogOff() as RedirectToRouteResult;
        Assert.AreEqual(true, formsAuthenticationSignOutCalled);
    }

    Assert.NotNull(redirectToRouteResult);
    Assert.AreEqual("Index", redirectToRouteResult.RouteValues["Action"]);
    Assert.AreEqual("Home", redirectToRouteResult.RouteValues["controller"]);
}

Testing that FormsAuthentication.SignOut() was called seems pretty trivial here; however, we’ll build on this approach as we test more and more complicated methods.

JsonLogin

Moving on, JsonLogin is probably the next simplest method to test in order for us to ease our way into noninvasive testing with Fakes.

[AllowAnonymous]
[HttpPost]
public JsonResult JsonLogin(LoginModel model, string returnUrl)
{
    if (ModelState.IsValid)
    {
        if (Membership.ValidateUser(model.UserName, model.Password))
        {
            FormsAuthentication.SetAuthCookie(model.UserName, model.RememberMe);
            return Json(new { success = true, redirect = returnUrl });
        }
        else
            ModelState.AddModelError("", "The user name or password provided is incorrect.");
    }

    // If we got this far, something failed
    return Json(new { errors = GetErrorsFromModelState() });
}

Right off the bat, it’s pretty clear from our prior experience with the LogOff method that Membership.ValidateUser and FormsAuthentication.SetAuthCookie will need to be detoured. We’ll additionally test that the correct parameters were passed into each.

[TestMethod]
public void TestJsonLogin()
{
    string testUserName = "TestUserName";
    string testPassword = "TestPassword";
    bool testRememberMe = false;
    string testReturnUrl = "TestReturnUrl";

    var loginModel = new LoginModel
    {
        UserName = testUserName,
        Password = testPassword,
        RememberMe = testRememberMe
    };

    var accountController = new AccountController();
    JsonResult jsonResult;
    //Scope the detours we're creating
    using (ShimsContext.Create())
    {
        //Sets up a detour for Membership.ValidateUser to our mocked implementation
        ShimMembership.ValidateUserStringString = (userName, password) =>
        {
            Assert.AreEqual(testUserName, userName);
            Assert.AreEqual(testPassword, password);
            return true;
        };

        //Sets up a detour for FormsAuthentication.SetAuthCookie to our mocked implementation
        ShimFormsAuthentication.SetAuthCookieStringBoolean = (userName, rememberMe) =>
        {
            Assert.AreEqual(testUserName, userName);
            Assert.AreEqual(testRememberMe, rememberMe);
        };

        jsonResult = accountController.JsonLogin(loginModel, testReturnUrl);
    }
}

Now on to the tricky part: testing the JsonResult. JsonResult.Data is of type Object, but is filled with an anonymous type.

[Screenshot: JsonResult properties]

return Json(new { success = true, redirect = returnUrl });

This makes it slightly more difficult to get at the properties we want to test.

Possible solutions

First off, we might try to cast JsonResult.Data out of Object into some type we could use to access the fields. This requires some runtime trickery and ends up being a bit of a mess. See this StackOverflow post for more info.

private static T CastTo<T>(this Object value, T targetType)
{
    // targetType is just for compiler magic
    // to infer the type to cast value to
    return (T)value;
}

Unfortunately, this only works if you’re working within the same assembly that defined the original anonymous type.
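For reference, usage of CastTo relies on the “cast by example” trick: you hand it a throwaway anonymous instance with the same property shape purely so the compiler can infer T. A sketch (and again, same-assembly only):

```csharp
// Only useful when this test code lives in the same assembly
// that created the original anonymous type
var data = jsonResult.Data.CastTo(new { success = false, redirect = "" });
Assert.AreEqual(true, data.success);
Assert.AreEqual(testReturnUrl, data.redirect);
```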

Next up, we could cleverly assign the value from JsonResult.Data to a dynamic variable and access the properties that way.

dynamic data = jsonResult.Data;
Assert.AreEqual(true, data.success);
Assert.AreEqual(testReturnUrl, data.redirect);

This fails as well, since anonymous types are declared as internal, as described in the blog post Anonymous Types are Internal, C# 4.0 Dynamic Beware!

[Screenshot: dynamic access to JsonResult.Data failing]

Even though the dynamic data variable has the success property, we don’t have access to it. We could use the assembly attribute InternalsVisibleTo in order to give our testing project access to internal types.

[assembly: InternalsVisibleTo("NoninvasiveMVC4Testing.Tests")]

I don’t consider this to be a bad technique, however since we’re trying to be completely noninvasive, I’m going to opt for a slightly different approach.

We’ll use PrivateObject (MSDN Link) to get at the properties. PrivateObject’s MSDN description:

Allows test code to call methods and properties on the code under test that would be inaccessible because they are not public.

PrivateObject ultimately just uses reflection in order to expose the values we need to test. The real value is in the fact that it abstracts the reflection code away from us. Here’s the code updated with PrivateObject:

var success = (bool)(new PrivateObject(jsonResult.Data, "success")).Target;
var redirect = (string)(new PrivateObject(jsonResult.Data, "redirect")).Target;

Assert.AreEqual(true, success);
Assert.AreEqual(testReturnUrl, redirect);
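If you’re curious what PrivateObject is doing for us, the equivalent raw reflection is short. This stand-alone sketch uses a locally created anonymous object in place of jsonResult.Data:

```csharp
using System;

class Program
{
    static void Main()
    {
        // Stand-in for the anonymous object the controller returns via Json(...)
        object data = new { success = true, redirect = "/foo.html" };

        // Anonymous type properties are public even though the type itself is
        // internal, so plain reflection can read them from any assembly
        var success = (bool)data.GetType().GetProperty("success").GetValue(data, null);
        var redirect = (string)data.GetType().GetProperty("redirect").GetValue(data, null);

        Console.WriteLine(success);   // True
        Console.WriteLine(redirect);  // /foo.html
    }
}
```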

And with that, we now have successful tests:

[Screenshot: JsonLogin tests passing]

Just for completeness, I’ve put together a test to validate the behavior of an invalid login (note that the error assertion below needs using directives for System.Linq and System.Collections.Generic):

[TestMethod]
public void TestInvalidJsonLogin()
{
    string testUserName = "TestUserName";
    string testPassword = "TestPassword";
    bool testRememberMe = false;
    string testReturnUrl = "TestReturnUrl";

    var loginModel = new LoginModel
    {
        UserName = testUserName,
        Password = testPassword,
        RememberMe = testRememberMe
    };

    var accountController = new AccountController();
    JsonResult jsonResult;
    //Scope the detours we're creating
    using (ShimsContext.Create())
    {
        //Sets up a detour for Membership.ValidateUser to our mocked implementation
        ShimMembership.ValidateUserStringString = (userName, password) => false;
        jsonResult = accountController.JsonLogin(loginModel, testReturnUrl);
    }

    var errors = (IEnumerable<string>)(new PrivateObject(jsonResult.Data, "errors")).Target;
    Assert.AreEqual("The user name or password provided is incorrect.", errors.First());
}

And there we go, easy as pie.

[Screenshot: LogOff, valid login and invalid login tests passing]

For the remainder of the post, I’m just going to focus on the “happy” path for brevity. Testing the other paths is relatively straightforward given what we’ve already done.

Login Method

Stepping up in complexity we move on to the Login method.

[AllowAnonymous]
[HttpPost]
public ActionResult Login(LoginModel model, string returnUrl)
{
    if (ModelState.IsValid)
    {
        if (Membership.ValidateUser(model.UserName, model.Password))
        {
            FormsAuthentication.SetAuthCookie(model.UserName, model.RememberMe);
            if (Url.IsLocalUrl(returnUrl))
                return Redirect(returnUrl);
            else
                return RedirectToAction("Index", "Home");
        }
        else
            ModelState.AddModelError("", "The user name or password provided is incorrect.");
    }

    // If we got this far, something failed, redisplay form
    return View(model);
}

Membership.ValidateUser and FormsAuthentication.SetAuthCookie are easy enough to test via shimming. Under normal circumstances Url.IsLocalUrl would be simple to shim as well. Unfortunately, I ran into an issue when faking the System.Web.Mvc assembly that contains it: once you try to instantiate a controller (as part of your test project) after adding the Fakes assembly, you get a System.Security.VerificationException ("Operation could destabilize the runtime"). See my Microsoft Connect submission for more info.

Fortunately enough, there’s a way to mock its implementation using the stubs portion of Microsoft Fakes as opposed to shims. This brings up an interesting dilemma: if a compatible stubbing technique is available, should you use it instead of shimming?

I would say the answer is generally “yes” provided that these criteria are met:

  • It doesn’t significantly decrease the readability of the test
  • It doesn’t require excessive measures (such as reflection dumpster diving) to figure out how to do it

Stubbing Around System.Web.Mvc

The first problem we need to solve is that the Url property (of type UrlHelper) is null on our instance of AccountController. The ctor on UrlHelper requires a RequestContext. The ctor on RequestContext requires an HttpContextBase. Since HttpContextBase is an abstract class we can stub it easily and make our way back up the dependency hierarchy.

Decompiling UrlHelper with ILSpy shows us that we’ll need to stub one more item in order to avoid the dreaded NullReferenceException.

public bool IsLocalUrl(string url)
{
    return this.RequestContext.HttpContext.Request.IsUrlLocalToHost(url);
}

We need to make sure that the Request property on HttpContextBase returns a value. As luck would have it the Request property is of type HttpRequestBase and we can easily stub it as well.

[TestMethod]
public void TestLogin()
{
    string testUserName = "TestUserName";
    string testPassword = "TestPassword";
    bool testRememberMe = false;
    string returnUrl = "/foo.html";

    var loginModel = new LoginModel
    {
        UserName = testUserName,
        Password = testPassword,
        RememberMe = testRememberMe
    };

    var accountController = new AccountController();

    //Setup underpinning via stubbing such that UrlHelper 
    //can validate that our "foo.html" is local
    var stubHttpContext = new StubHttpContextBase();
    var stubHttpRequestBase = new StubHttpRequestBase();
    stubHttpContext.RequestGet = () => stubHttpRequestBase;
    var requestContext = new RequestContext(stubHttpContext, new RouteData());
    accountController.Url = new UrlHelper(requestContext);

    RedirectResult redirectResult;
    //Scope the detours we're creating
    using (ShimsContext.Create())
    {
        //Sets up a detour for Membership.ValidateUser to our mocked implementation
        ShimMembership.ValidateUserStringString = (userName, password) =>
        {
            Assert.AreEqual(testUserName, userName);
            Assert.AreEqual(testPassword, password);
            return true;
        };

        //Sets up a detour for FormsAuthentication.SetAuthCookie to our mocked implementation
        ShimFormsAuthentication.SetAuthCookieStringBoolean = (userName, rememberMe) =>
        {
            Assert.AreEqual(testUserName, userName);
            Assert.AreEqual(testRememberMe, rememberMe);
        };

        redirectResult = accountController.Login(loginModel, returnUrl) as RedirectResult;
    }

    Assert.NotNull(redirectResult);
    Assert.AreEqual(redirectResult.Url, returnUrl);
}

With that, let’s run our tests and make sure everything is working.

[Screenshot: test run after adding the Login test]

One thing to notice here is that the stubbing in the first half of the test doesn’t exactly convey what we’re trying to accomplish. All we care about is getting Url.IsLocalUrl to return true. Additionally, we had to know quite a bit about the internals of Controller, UrlHelper, HttpContextBase and HttpRequestBase just to get this behavior to work.

In this scenario it would be preferable, readability-wise, just to detour Url.IsLocalUrl directly. In this case our hand was forced, since Microsoft Fakes and System.Web.Mvc aren’t currently cooperating, so I’m more than happy that at least a fallback was available.
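For comparison, had faking System.Web.Mvc worked, the same behavior could have been expressed with a single shim using the Fakes naming convention for instance methods. This is hypothetical code, since generating that Fakes assembly currently triggers the VerificationException described earlier:

```csharp
// Hypothetical: requires a working Fakes assembly for System.Web.Mvc
ShimUrlHelper.AllInstances.IsLocalUrlString = (urlHelper, url) => true;
```

One line that states the intent directly, versus the several lines of stubbing scaffolding above.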

JsonRegister Method

Both JsonRegister and Register are very similar, so we’ll just hit one of them. There are really no new concepts here, just a reapplication of what we used to test the earlier methods.

[AllowAnonymous]
[HttpPost]
public ActionResult JsonRegister(RegisterModel model)
{
    if (ModelState.IsValid)
    {
        // Attempt to register the user
        MembershipCreateStatus createStatus;
        Membership.CreateUser(model.UserName, model.Password, model.Email, 
            passwordQuestion: null, passwordAnswer: null, isApproved: true, 
            providerUserKey: null, status: out createStatus);

        if (createStatus == MembershipCreateStatus.Success)
        {
            FormsAuthentication.SetAuthCookie(model.UserName, createPersistentCookie: false);
            return Json(new { success = true });
        }
        else
            ModelState.AddModelError("", ErrorCodeToString(createStatus));
    }

    // If we got this far, something failed
    return Json(new { errors = GetErrorsFromModelState() });
}

For JsonRegister we’ll need to shim Membership.CreateUser, which is straightforward enough. We’ll also need to add a reference to System.Web.ApplicationServices to our testing project in order to work with MembershipCreateStatus, and we’re good to go.

[TestMethod]
public void TestJsonRegister()
{
    string testUserName = "TestUserName";
    string testPassword = "TestPassword";
    string testConfirmPassword = "TestPassword";
    string testEmail = "TestEmail@Test.com";

    var registerModel = new RegisterModel
    {
        UserName = testUserName,
        Password = testPassword,
        ConfirmPassword = testConfirmPassword,
        Email = testEmail
    };

    var accountController = new AccountController();
    JsonResult jsonResult;
    //Scope the detours we're creating
    using (ShimsContext.Create())
    {
        //Sets up a detour for Membership.CreateUser to our mocked implementation
        ShimMembership.CreateUserStringStringStringStringStringBooleanObjectMembershipCreateStatusOut =
            (string userName, string password, string email, string passwordQuestion, 
                string passwordAnswer, bool isApproved, object providerUserKey,
                out MembershipCreateStatus @createStatus) =>
            {
                Assert.AreEqual(testUserName, userName);
                Assert.AreEqual(testPassword, password);
                Assert.AreEqual(testEmail, email);
                Assert.Null(passwordQuestion);
                Assert.Null(passwordAnswer);
                Assert.True(isApproved);
                Assert.Null(providerUserKey);
                @createStatus = MembershipCreateStatus.Success;

                return null;
            };

        //Sets up a detour for FormsAuthentication.SetAuthCookie to our mocked implementation
        ShimFormsAuthentication.SetAuthCookieStringBoolean = (userName, rememberMe) =>
        {
            Assert.AreEqual(testUserName, userName);
            Assert.AreEqual(false, rememberMe);
        };

        var actionResult = accountController.JsonRegister(registerModel);
        Assert.IsInstanceOf(typeof(JsonResult), actionResult);
        jsonResult = actionResult as JsonResult;
    }

    Assert.NotNull(jsonResult);
    var success = (bool)(new PrivateObject(jsonResult.Data, "success")).Target;
    Assert.True(success);
}

Running our tests once again, we can see everything’s passing:

[Screenshot: test run after adding the JsonRegister test]

ChangePassword Method

The ChangePassword method is slightly more difficult to work with, since we have additional items to fake and stub, but otherwise the concepts are pretty similar.

[HttpPost]
public ActionResult ChangePassword(ChangePasswordModel model)
{
    if (ModelState.IsValid)
    {

        // ChangePassword will throw an exception rather
        // than return false in certain failure scenarios.
        bool changePasswordSucceeded;
        try
        {
            MembershipUser currentUser = Membership.GetUser(User.Identity.Name, 
                userIsOnline: true);
            changePasswordSucceeded = currentUser.ChangePassword(model.OldPassword, 
                model.NewPassword);
        }
        catch (Exception)
        {
            changePasswordSucceeded = false;
        }

        if (changePasswordSucceeded)
            return RedirectToAction("ChangePasswordSuccess");
        else
            ModelState.AddModelError("", "The current password is incorrect or the new password is invalid.");
    }

    // If we got this far, something failed, redisplay form
    return View(model);
}

We need to make sure that User.Identity.Name returns properly. In order to do this, we’re going to have to make sure AccountController’s User property gets populated with an Identity object. Again, due to the MVC faking issue, we’re going to approach this via stubbing, which is slightly less readable and requires some framework dumpster diving, but still gets the job done.

Decompiling down into the Controller class in System.Web.Mvc to see what we need to stub shows the following:

public IPrincipal User
{
    get
    {
        if (this.HttpContext != null)
        {
            return this.HttpContext.User;
        }
        return null;
    }
}

Drilling into this.HttpContext:

public HttpContextBase HttpContext
{
    get
    {
        if (base.ControllerContext != null)
        {
            return base.ControllerContext.HttpContext;
        }
        return null;
    }
}

The ControllerContext property is settable, so that’s our way in, and it has a public ctor taking elements we already have. Additionally, we already have a StubHttpContextBase on which we can set the User property.

We’ll need to add a Fakes Assembly for mscorlib in order to stub an IPrincipal for the AccountController’s User property. To add a Fakes assembly for mscorlib, add one for the System reference.  System.Web.ApplicationServices needs a Fakes assembly as well in order to shim the ChangePassword method on MembershipUser.

[TestMethod]
public void TestChangePassword()
{
    string testUserName = "TestUserName";
    string testOldPassword = "TestOldPassword";
    string testNewPassword = "TestNewPassword";

    var changePasswordModel = new ChangePasswordModel
    {
        OldPassword = testOldPassword,
        NewPassword = testNewPassword
    };

    var accountController = new AccountController();

    //Stub HttpContext
    var stubHttpContext = new StubHttpContextBase();
    //Setup ControllerContext so AccountController will use our stubHttpContext
    accountController.ControllerContext = new ControllerContext(stubHttpContext, 
        new RouteData(), accountController);

    //Stub IPrincipal
    var principal = new StubIPrincipal();
    principal.IdentityGet = () =>
    {
        var identity = new StubIIdentity { NameGet = () => testUserName };
        return identity;
    };
    stubHttpContext.UserGet = () => principal;

    RedirectToRouteResult redirectToRouteResult;
    //Scope the detours we're creating
    using (ShimsContext.Create())
    {
        ShimMembership.GetUserStringBoolean = (identityName, userIsOnline) =>
        {
            Assert.AreEqual(testUserName, identityName);
            Assert.AreEqual(true, userIsOnline);

            var memberShipUser = new ShimMembershipUser();
            //Sets up a detour for MemberShipUser.ChangePassword to our mocked implementation
            memberShipUser.ChangePasswordStringString = (oldPassword, newPassword) =>
            {
                Assert.AreEqual(testOldPassword, oldPassword);
                Assert.AreEqual(testNewPassword, newPassword);
                return true;
            };
            return memberShipUser;
        };

        var actionResult = accountController.ChangePassword(changePasswordModel);
        Assert.IsInstanceOf(typeof(RedirectToRouteResult), actionResult);
        redirectToRouteResult = actionResult as RedirectToRouteResult;
    }
    Assert.NotNull(redirectToRouteResult);
    Assert.AreEqual("ChangePasswordSuccess", redirectToRouteResult.RouteValues["Action"]);
}

After running tests, we see that our new unit test is passing.

[Screenshot: test run after adding the ChangePassword test]

Conclusions

Through the use of Microsoft Fakes and the idea of noninvasive testing, combined with the mockist approach, we’ve been able to test the AccountController quite thoroughly without any project modifications. I imagine we could easily have hit 100% coverage if that was our goal. The only real issues we ran into were related to beta software.

Oddly enough, I’m glad we ran into the System.Web.Mvc faking issue. This forced us to use stubbing, and ultimately exposed both negative effects on overall readability and increased complexity in terms of the amount of framework decompiling needed to figure out what stubbing was necessary. Shimming in these cases would have better conveyed our intent and abstracted us away from having to deal with the guts of the underlying framework.

With these results in mind, it’s evident that testing tools have truly reached a point where anything can be tested, regardless of design. We’re entering a time where ANY application with ANY architecture can be thoroughly unit tested without even the slightest change to code; a time when the ability to unit test a system is decoupled from the design and architecture of that system.

All of this is for good reason. Today's testing patterns and practices arose from limitations in our ability to isolate dependencies when unit testing code. Those limitations have now been addressed; it's time to reevaluate our approaches and move on.

When you drive architecture with the goal of being structurally easier to test, the only thing you end up with is an architecture that is good at being tested. Let architecture be naturally shaped by the needs of the problem domain over time. Let complexity escalate only as needed and simplicity, maintainability and ease of use all be key goals in a system’s design.

From now on, we can definitively say that any constraints or limitations in our ability to thoroughly test any system, with any design, are entirely self-imposed.

 

The code for this post is available on GitHub


Comments (59) -

Doug Schott

Great post! You've managed to take something that a typical developer would cringe at doing and make it seem trivial. This is the kinda stuff that can really help someone.

-Doug

Nirav

Very good post on TDD.

Robert Anderson

This article and your previous ones on stubs and shims are excellent and clear.  I was gearing up for a massive refactoring of a mature project in order to implement dependency injection everywhere just to be able to improve test coverage.  I now have a better option - thanks!

Jim

Looks like there is potential to encourage the creation of poor code here. If you can get away with using statics then you might do it too often. Now your code is highly coupled and pretty fragile IME.

Generally I find in most cases if you can't test it there is probably a design smell. I see tools like this useful when you've got highly-coupled legacy code that you just can't refactor - a last resort, and not a primary tool.

I also thought the names of your tests could be improved. It doesn't really describe the scenario, or state how the unit under test behaves.

Of course these are all opinions. And I also found the post to be a bit long so didn't read it all. I might have missed the point totally. In which case I'm sorry.

Rich Czyzewski

Jim,

I'm of the opinion that it's not the job of your unit testing framework to enforce the creation of well designed code. The job of your unit testing framework is instead to be capable of testing whatever you need it to test.

Many language features, including statics, have their applicable places where they make sense to use as well as places where they can be abused. There's no sense in cutting out language features just because your unit testing framework can't test them. The smell here is the unit testing framework itself.

tl;dr version for you: today's testing patterns are based on limitations in our capabilities to isolate dependencies when unit testing code. Those limitations have been addressed; it's time to reevaluate our approaches and move on. Language features and design patterns aren't bad just because the unit testing framework du jour can't test them.

We now have the tools to test anything of any design, let's use them.

Rich

Jim

Statics are good. When you have a dependency that never changes, use a static. But then why the hell are you trying to test it?

Jim

*mock it

Rich Czyzewski

Statics are tested for the same reason you'd test any other code: to determine if it's fit for use. There are plenty of valid nontrivial uses for statics that would require testing or mocking.

* Singletons: http://en.wikipedia.org/wiki/Singleton_pattern
* Implicit & Explicit Operators: msdn.microsoft.com/.../z5z9kes2(v=vs.71).aspx & msdn.microsoft.com/en-us/library/xhbhezf4.aspx
* Extension Methods: msdn.microsoft.com/en-us/library/bb383977.aspx
* Caching Expensive Data (especially when doing anything with reflection) : Similar to stackoverflow.com/.../caching-reflection-data

Additionally, it is generally recommended to convert member methods that do not access instance data to static: (see fxcop rule CA1822: Mark members as static - msdn.microsoft.com/en-us/library/ms245046.aspx).

Take a look at something like Dapper (http://code.google.com/p/dapper-dot-net/) which uses both extension methods & caching extensively. Would you be able to get that kind of experience and performance without statics? There's no need for a tool this useful to be considered not unit testable.
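As a concrete illustration of the caching case, a static helper along these lines is common (purely illustrative naming, not code from Dapper or the StackOverflow answer):

```csharp
using System;
using System.Collections.Concurrent;
using System.Reflection;

// Illustrative sketch of a static reflection cache like the one linked above.
public static class PropertyCache
{
    private static readonly ConcurrentDictionary<Type, PropertyInfo[]> Cache =
        new ConcurrentDictionary<Type, PropertyInfo[]>();

    // Reflection results are computed once per type and reused thereafter,
    // avoiding the cost of repeated GetProperties() calls.
    public static PropertyInfo[] GetProperties(Type type)
    {
        return Cache.GetOrAdd(type, t => t.GetProperties());
    }
}
```

Code like this has a clear reason to be static (shared, process-wide cache), and there's nontrivial logic here worth testing.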

Let's use language features to solve the problems they were intended to solve and get a unit testing framework that can test their usage appropriately.

Rich

Mike

Hey Rich,

"Statics are tested for the same reason you'd test any other code, to determine if it's fit for use"

That sounds a bit vague. I think the point is that you test static methods, of course. But if you're invoking a static  method in your code, you've just created a coupling.

Now by creating a tight coupling, you're saying this code should always be executed here. So why then are you going on to mock it with a different behavior?

I feel the same about extension methods - what do you want to mock them for?

Singleton might be a good case, although not sure about caching.

If  methods don't access instance data then you make them static. But then any code that calls them is tightly-coupled to them. Is that your recommendation?

Do you think your examples are good use cases? I mean, do you think it is good to couple your controllers to a static forms authentication?

I don't get what you're saying? Are you saying we should go static everywhere and use these tools to mock statics? Is that good OO design?

Rich Czyzewski

Mike,

Architectural patterns (such as decoupling infrastructure) should only be implemented when those patterns add value. This is the point of agile architecture, the KISS principle and YAGNI: generally avoiding complexity introduced by premature optimization. If it needs to be better decoupled, then the design will be refactored to meet those needs over time. At any given point in time, you have the simplest to use, easiest to work with and most maintainable architecture as is allowable or possible in the problem domain in question.

Really what we're talking about here, is what is an acceptable amount of coupling for your architecture. What are the drivers for the level of decoupling that you'd allow/disallow? If your only driver is because your unit testing framework needs it that way, then that's no longer good enough.

I don't necessarily advocate any of the code in the AccountController, as the example is taken directly from the default "Internet Application" template for an MVC4 project from Microsoft. What I do advocate, is that it should be unit testable without requiring it to be drastically changed.

Rich

Mike

Sounding vague, but I'm getting the gist.

You are basically saying it is good to introduce coupling, right? You are suggesting that the use of static methods is a good practice. But we should be careful about how much coupling we introduce?

We should only use dependency inversion and loosely-coupled architecture when we definitely know that code will have different dependencies?

Is that it?

Mike

BTW - my driver is dependency inversion and loose coupling. I intentionally want my components to be as loosely-coupled as possible.

I'm still trying to get your point though. I don't understand it enough to agree or disagree.

Doug Schott

This is a good conversation, but I hardly take Rich's comments as saying coupling and statics are good practice. It seems to me that he is just suggesting that coupling is a side effect of writing code and that statics are a method of coding. His argument, which I believe is well presented in his article, is that testability should not be the driving factor in choosing an instance implementation over a static implementation, or in choosing to decouple code through DI and/or class design and in the process adding additional complexity to the code.

And to deviate a little from the arguments presented in the article, I personally use domain representation, behavioral intent, dependency isolation and refactorability as my primary concerns for class design. Factors for me include:

- What data structures, structural patterns and behavioral patterns best represent the domain that I am working with?
- What are the external dependencies to my codebase and how can I isolate those dependencies so that my code can change internally, independent of the external dependencies?
- What logical, domain driven partitions exist internally within the codebase and how can I isolate dependencies between them?

By focusing on these concerns, I find that I develop clean representations of the problem domain with little architectural overhead and complexity and can rapidly refactor code as the domain changes or new understandings of the domain occur.

Microsoft Fakes seems to allow me to focus on these things without the burden of worrying about testability.

Mike

Microsoft fakes lets you mock a static method call.

A static method call is a tight-coupling.

If you create a tight-coupling you've done it because it is the best design. So why do you want to mock it? You've just said that your design requires this tight coupling and the behaviour should never change.

Yet when you mock it, you are saying it should be decoupled - that the unit under test is NOT tied to the static method.

Personally I don't see your point. I think these tools are good as a last resort for testing legacy code - not an excuse to write crap code.

Doug Schott

Are you suggesting that an instance method calling another instance method is not an example of tightly coupled code? Are you also suggesting that I would never want to validate that a static member performs as expected? I think your equating static members and/or types to tight coupling, and thus to bad programming, is a flawed chain of logic. Your argument seems to have little to do with good design and more to do with the fact that you just don't like statics because you don't know how to test them. The principles of good design suggest that decoupling of behaviors (perhaps via DI, perhaps via factories, etc.) should be a tool used to create a separation of concerns. This means that a decoupled system is not necessarily the end goal but a condition created out of need (testability no longer being one of them).

Using a slightly abstract example, if all dependencies to a logical partition in my codebase are isolated to a set of interfaces (dependency isolation) but the underlying code makes use of static members because static members made sense, what would be the point of forcing the implementation to be through instance members? Testability? Really? I can change the behavior of those static members whenever I want. Right? And if I do modify the implementation of those static members, wouldn't I want to still ensure that given a set of inputs the correct set of outputs are returned from a static member?

Doug Schott

I don't think I adequately covered your question about mocking static members. If your code under test uses a static member, why wouldn't you fake the behavior of the static member in order to isolate the code under test? Would you want that particular unit test to fail because a bug was introduced to the static member dependency? I wouldn't... I would only want the unit tests that tests the behavior of the static member to fail.

Mike

1. Instance method called instance method is not an example of tightly coupled code if one method is on an interface - that's what I call loose coupling

2. Nope, never said you shouldn't test a static method

3. Calling a static method from another class is tight coupling - yes that is exactly what I am saying. Do you disagree?

4. I like statics - when I want to create a tight coupling - where the behaviour will not be changed - e.g. for a new implementation of an abstraction or for testing. Example - string/date formatting utilities

5. Decoupling means the components are free to evolve independently. By depending on an interface and not a static method, the owning class is not tied to an implementation - the system is not designed around a class that is tied to an implementation. Too often when refactoring legacy code a dependency on statics costs time, money, and frustration.

6. The point would be that your classes have no dependencies - the implementations are free to change. You are not making a statement that this class has a solid dependency on this class. You are not saying whenever this method is called then this static class MUST ALSO be called - because if you are trying to test it with a different behaviour then you are saying it is NOT TIED to this static method. You are testing the class in isolation of that static method - so why have you coupled them?

7. If the logic in the static method breaks then you do not want the logic in calling classes to break. IMO that is ridiculous to suggest otherwise - one static method breaks and your entire code base goes red?

Then you are not unit testing? You are integration testing. In which case - why are you mocking statics?

I'm bored now. This isn't going anywhere.

Doug Schott

1. I agree but if decoupling of behaviors doesn't benefit the design of the app, why have an interface to begin with?

For example, if I have a view model used by a specific screen in the presentation tier of an app, I think it would be a poor design choice to isolate all dependencies between the different components of the view model through interfaces. The only benefit in that scenario would be testability through an inversion of control framework. Fakes seems to nullify that need now.

2. But you said you shouldn't mock one. I provided an example of why you would.

3. I agree with you! :) It is tight coupling. I only indicated that I feel it is the context of use that should dictate whether or not it is a poor design decision. Just because you tightly couple 2 behaviors (via static members) does not mean the application will be difficult to maintain.

4. That seems like a valid usage of statics to me.

5. I wouldn't argue with your statements... that is an excellent use for interfaces. I could see how poorly designed applications would cost you time, money and frustration, but I would not say interfaces solve the problem. I would say clearly defined logical partitions in your application solve those problems, where there is a clear separation of concerns so that major concerns can vary independently. Identifying those logical partitions (both architectural and domain driven partitions) should be one of the major roles of an architect. Interfaces are just one of the tools that can be used; service orientation is another that immediately comes to mind (think a clearly defined request/response model).

6. Within a logical partition, most of my code would be coupled because there would be no value in adding the complexity and development overhead in decoupling the code. Once again I'll refer to my view model example: for a particular screen in an app, why would I decouple any components of the view model from any other component of the view model? They all change together, have only a single concern and have no external dependencies. Where is the value?

7. As I said, I would only want the tests that are testing the static member functionality to fail, thus I would want to fake the static member for any tests that are testing code consuming the static member... I don't really follow your argument.

I am sorry you are bored. I actually think this is a very valid and interesting debate. I think that perhaps this is a very important discussion now that alternative testing methodologies are available.

Mike

Doug,

I'm going to close with this, which is just a re-iteration of what I keep on saying:

- Invocation of a static method is an intentionally-tight coupling

- When you tightly-couple you are saying that where A comes B always comes along as well

- When you decide to unit test A, but you are mocking the behaviour of B - you are rejecting your own rule that a coupling exists.

- Now you are saying that the logic of A is independent of the logic of B

- In which case a static method and its tight-coupling is a bad design decision.

- The only time I would mock this is on legacy code where the poorly-chosen static invocation is in-situ

- That's where these kind of tools provide any kind of value.

I'm probably wrong and I usually am.

Good night folks

Rich Czyzewski

Mike, Doug,

I think the essence here is that ANY design, architecture and/or code can now be unit tested. The ability to unit test code is no longer a driver as to what types of architectural patterns you apply to your solution. Pick whichever ones best fit the problem domain.

The nature of specifically what architectural pattern(s) you should use are perhaps beyond the scope of this post and a different topic altogether.

Rich

Jaime

Rich (Mike, Doug),

Interesting article and an interesting discussion. I totally agree that unit testable code should never be the driver of your architectural decisions, but here's the thing: in my view the use of decoupling patterns (and other patterns) has never been about unit testing, it has always been about fighting complexity in your code/architecture. It just happens that unit testable code (without shims) tends to be less complex code/architecture (obviously this is not strictly true; there's plenty of crappy unit tested code). Looking at your code from the consumer/unit test side makes you think about the complexity (coupling) and responsibilities of your code. In that sense, unit tests are just a tool to help you fight your code into a simpler life. That's why TDD is not about testing, it's about how you develop your code.

Just my 2 cents,

Paul Hadfield

Almost off topic, but why do you have separate HTML and JSON login and register methods on your account controller? It's possible to identify how the request was made and tailor the response accordingly. Using the pattern shown, if you wanted to return XML then you'd need two new controllers, but by tailoring the output to the input you shouldn't even need to modify the controller if you've separated concerns properly.

Rich Czyzewski

Paul,

The AccountController I used in the code above is just the AccountController as part of the default "Internet Application" template from MVC4. This was used as a baseline to see how well it could be tested with Microsoft Fakes without changing the underlying code at all.

If it were my AccountController, I'd minimally address the resharper warnings, if nothing else ;) If I were writing this from scratch, I'd probably design it differently.

Rich

Paul Hadfield

Ah, sorry in that case - just goes to show how long it's been since I last looked inside Microsoft's default projects; I always create "empty" MVC projects now. The article convinced me that shims are worth a look, if only to fill in those hard to mock static objects that the web team at MS seem to love so much. I played with Moles a bit; this looks like that come of age.

Joe Eames

Rich,

First off let me say that this is a well prepared and presented article.  It's obvious you spent a lot of effort here, and that you're good at writing articles. Also it's quite obvious that you're very competent and experienced.

That being said, when I read your article, at first I was mollified, but then I checked myself and realized that this wasn't an article on TDD. This is an article on unit testing legacy code (see Michael Feathers' definition of legacy code). And I noticed as I looked through it that never did you claim that it WAS an article on TDD, although I believe you are confusing the reader when you quote Martin Fowler's discussions on mocking in TDD.

As such it's a great article, but I would highly recommend that you edit it to make clear that these techniques apply to unit testing legacy code, and not TDD. Nirav sadly didn't understand that this is not an article on TDD.

There are significant differences between unit testing legacy code and practicing TDD, and it behooves us to be clear when discussing one matter or the other, as they are quite separate disciplines.

Rich Czyzewski

Joe,

Thanks for the kind words. Are you by chance the Joe Eames from Javascript Jabber episode 9 javascriptjabber.com/.../ ? If so, I really enjoyed the episode overall as well as your descriptions of Stubs vs Mocks & BDD vs TDD. Hearing that level of thought and guidance into testing in JavaScript was quite refreshing. Tweet as proof of prior enjoyment ;) https://twitter.com/#!/RichCzyzewski/status/195293676543557632

You are correct that this post isn't intended to provide any guidance on how one would go about practicing TDD and I have no problem updating the post with that distinction if needed. Additionally, you are also correct in that this post presents the reader with methods that would work to test legacy code in a fashion of how one might approach testing legacy code in general.

However, where we may differ in opinion, is that I see no reason why Microsoft Fakes can't be used in the course of normal unit testing, even within a TDD process. Fakes represents another tool in the unit tester's toolbox. It allows you to decouple the architecture or design of code from the ability to adequately test that code, ultimately allowing for greater flexibility in acceptable and valid architectural approaches, not just those considered "classically unit testable".

In half a year's time, we'll be in a place where anything is considered unit testable and we'll no longer be able to use the limitations of our unit testing frameworks as a crutch for enforcing "good" design practices. I'm interested to see how the testing community evolves around this idea, as well as both the good and bad things that may come along with it. How about you?

Rich


Joe Eames

I am indeed that guy.  Thanks for the compliment.  That was a fun show to produce, and it's a fantastic podcast.

So here's my problem with MS Fakes in a TDD process: I think your first example, with FormsAuthentication.SignOut, is a perfect one. If I'm test driving that code (the ActionResult LogOff() method), I want something like what you showed as the "extra work" of a normal testing framework. The reason being that as I test drive the LogOff method, which I know does two things (redirect and sign out the forms authentication), I will write two tests (not necessarily in this order):

test 1:  When SignOff is called, Then Redirect To Index/Home is called.
And that test will just assert that a mocked RedirectToAction call is made.

Then I create test 2:  When SignOff is called, then The Forms Authentication subsystem is signed out
So for this test, I will put in a stub for the RedirectToAction call, cause I don't care in this test if Redirect To Action is called.
And I'll inject a wrapper around the forms authentication that has my own SignOff method. And I'll mock that dependency.

Why?  Doesn't that seem like more work?

Well, yes, but I WANT that thin abstraction layer over the formsAuthentication subsystem.  I'll probably create a class called MyAppSecurity or something like that.  This class will have simple wrappers around things like SignOut() and maybe issuing tickets if I want to do that by hand.  The reason is, I don't want MY code tightly coupled to that third party code.  I don't want my business logic to have to conform to Microsoft's ideas of how you log out.  SignOut is very simplistic.  In more complex third party dependencies (think jQuery's Ajax method) then the code I have to write to make that call is ugly.  It's demanded by that third party's interface.  I don't want that ugliness in my code.  I don't want the tight coupling to how they do things.  I want my code, and my logic to be expressive.  A third party API is never expressive of MY domain.  in my head, before I write the code, I know how I want that code to look:
  MyAppSecurity.LogOut();
This is my own personal preference.  I don't like the way SignOut sounds in my head.  I don't sign out of websites I visit, I log out of them.  It's a nitpicky example but it works for me.

Here's a more expressive example:

One of your examples uses a shim over FormsAuthentication.SetAuthCookie()

That call isn't expressive of my domain.  In my domain, that operation is LogonUser() (again my own name that works for me in my head).

So I want to have my abstraction class (adapter really) have a LogonUser() method that calls that method inside of it.  I don't want that 3rd party API to leak into (be coupled to) my domain.  I want it abstracted.  I WANT that separation.
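A sketch of the adapter Joe describes might look like this (IAppSecurity/MyAppSecurity and the method names are his hypothetical examples, not code from the post):

```csharp
using System.Web.Security;

// The thin abstraction the rest of the app depends on,
// expressed in the domain's own vocabulary.
public interface IAppSecurity
{
    void LogOut();
    void LogonUser(string userName, bool persistent);
}

// The only class that touches the third-party static API.
public class MyAppSecurity : IAppSecurity
{
    public void LogOut()
    {
        FormsAuthentication.SignOut();
    }

    public void LogonUser(string userName, bool persistent)
    {
        FormsAuthentication.SetAuthCookie(userName, persistent);
    }
}
```

Consumers take an IAppSecurity, so a classical mocking framework can substitute it without shims, and the third-party API never leaks into the domain.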

I know that on the surface, the noninvasive code seems like a good idea, but that's because you're looking at the code as if it's already written.  With TDD that code isn't written yet.  You're in the process of writing it.  You're figuring out what that code looks like, and you don't want to be limited by any 3rd party API's.  Unit Testing is to TDD what driving is to building a road.  My priorities when driving are entirely different than they are when building a road.  The priorities of those two activities may overlap, but they overlap a lot less than you would think at first glance.  To take the analogy even further, not wrapping adapters around 3rd party api's is like building a road without allowing yourself to do any digging, or building any ramps, or creating any tunnels.  That's far too restrictive.  And that's the great subtlety of TDD.  The extra code SEEMS like it's more restrictive, but it's actually what frees you up.

So for me, ms fakes seems great for testing legacy code, but I firmly believe that it has no place in the TDD process, and encouraging it in the TDD process will encourage bad habits.

Overall this has been an engaging topic to read on (blog.pluralsight.com/.../) and I have found myself reexamining my own views on TDD and reflecting on my opinions and experiences in new ways, and I find myself forced to better express things that I hadn't fully articulated before.

Rich Czyzewski

Joe,

You're using the ability for code to be unit testable in "classical terms" as something that defines a good design. There are many preferable and valid designs which don't necessarily provide ease of testing. Inversion of Control is not a foregone conclusion in all architectural designs and is many times used as "the Golden Hammer": http://en.wikipedia.org/wiki/Golden_hammer

It's very easy to have an ivory tower perspective here and try to impose your architectural choices as right for everyone and every case, all the while demonizing anything that doesn't match the preconceived notion of "the one perfect architecture to rule them all." There is no architecture that is perfect for every scenario, no design pattern that is applicable every time. For instance, are these the same patterns we'd use for embedded systems, where hardware resources are limited? Shouldn't those systems be unit testable as well?

Part of this is the realization that there are good designs which may not be inherently unit testable by "classical unit testing tools." At that point you'll also realize that the unit tester's toolbox falls drastically short in terms of capabilities for testing anything besides IoC designs. There's no reason this should be the case.

Additionally, part of the point of agile architecture, the KISS principle and YAGNI, is to generally avoid complexity introduced by premature optimization. If it needs to be better decoupled then the design will be refactored to meet those needs over time. At any given point in time, you have the simplest to use, easiest to work with and the most maintainable architecture as is allowable or possible in the problem domain in question.

Code is not etched in stone, and with the general practice of TDD (even one that would include the usage of Microsoft Fakes) refactoring is not a practice to be feared or loathed. Both of your examples are something that could easily be refactored into later on down the road, if a business need arose that necessitated such a refactoring. If the need never does arise, then no time is wasted and no additional complexity is introduced on unused features created through premature optimization.

For me, Microsoft Fakes is another tool in the unit tester's toolbox. It makes absolute sense to be used in TDD or BDD or where ever the need arises.
* Can you do great things with it - yes
* Are you allowed greater flexibility in acceptable and valid architectural approaches - yes
* Can you do bad things with it - yes
* Will the world end when Fakes is released - no

I do agree, the topic is quite engaging, and will be quite interesting to see how it evolves as I think the discussion is most likely far from over ;)

Mike

Rich,

All this tool lets you do is mock statics? I've pointed out above that when you use statics you are saying the component calling the static should not be isolated.

You keep talking about KISS, YAGNI and good design. But you don't give any examples.

You're making out these tools do more than let you mock statics? You make out that this tool lets you have some amazing architecture - all it lets you do is mock statics - so you need to have statics in your code to use it.

We've just established that statics are an  intentional tight-coupling where you do NOT want the code to be isolated.

It's not ivory tower - it's not even IoC. It's the SOLID principle of dependency inversion. If you want to isolate code and test it, don't have it depend on other concrete instances.

What architectural patterns are you talking about?

You do not need to design your code so that it can be tested - you need it to be loosely coupled. Loosely coupled meaning there is no dependency.

[Edited to maintain constructive tone]

Rich Czyzewski

Mike,

For the capabilities of Fakes, check out the documentation on MSDN: msdn.microsoft.com/.../hh549175(v=vs.110).aspx It does indeed do more than just let you mock statics.

The definition of the D portion of the SOLID principle (at least per Wikipedia) is: the notion that one should “Depend upon Abstractions. Do not depend upon concretions."

This could easily be implemented with an abstract factory pattern en.wikipedia.org/wiki/Abstract_factory_pattern (which itself decides which concrete implementation to use), where its usage is encapsulated within the object that needs the concrete implementation.

It's a perfectly valid approach, and one that, in contrast to dependency injection, abstracts the needs (dependencies) of the containing object (using the abstract factory) away from the consumer of that object, reducing coupling even further. However, due to the limitations of "classical unit testing frameworks", this approach can't be adequately tested and is therefore considered a less optimal choice.
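A minimal sketch of that abstract factory approach, with purely illustrative names, might look like:

```csharp
// The abstraction consumers of AccountService never need to know about.
public interface IPasswordHasher
{
    string Hash(string plainText);
}

// Illustrative concrete implementation.
public class Sha256PasswordHasher : IPasswordHasher
{
    public string Hash(string plainText)
    {
        // ... real hashing elided for brevity ...
        return plainText;
    }
}

public static class PasswordHasherFactory
{
    // The factory, not the consumer, decides which concrete
    // implementation to return.
    public static IPasswordHasher Create()
    {
        return new Sha256PasswordHasher();
    }
}

public class AccountService
{
    // The dependency is resolved internally; callers of AccountService
    // never see or supply it.
    private readonly IPasswordHasher hasher = PasswordHasherFactory.Create();
}
```

Here AccountService still depends on an abstraction, but its consumers have a simpler surface than with constructor injection; the trade-off is that only a detouring tool like Fakes can swap the factory's output in a test.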

There are varying levels of loose coupling, and you have to draw the line somewhere as to what level of coupling you're going to accept as part of your architecture. For instance, should DateTime.MinValue be decoupled? The more decoupled you become, the more complicated your architecture becomes.

Think about embedded systems, where hardware resources are limited, would we be using IoC patterns if we're near our resource limit? Shouldn't those type of systems be unit testable as well? The types that control our cars...

What you really want is a good design that fits the problem domain's needs. That design should be unit testable regardless of how it meets those needs. The general preference is towards loose coupling, but ultimately we must realize that excessive loose coupling is an anti-pattern, and we must draw the line on how far we go with it.

All in all, I see Microsoft Fakes as more of an enabler to increase pervasiveness of unit testing across the community as a whole, not as much something that'll drive bad design decisions.

Rich

Joe Eames

It's important that I stress that I'm discussing TDD and not simply unit testing here:

I truly agree with much of what you say.  We can't hold tradition sacred simply because it's a tradition.  But like Object Oriented programming, DI/TDD/SOLID isn't an ivory tower.  It's just the best way we currently know of to build well factored classes and support maintainability of projects.

Exploring new options is a great idea.  But let's not regress and use tools that encourage obviously bad behavior.

Think of it this way. Let's say we were discussing health instead of development, a field where, just as in development, people have constantly been inventing new ways to be more healthy at less cost (both in time and money). Now in this landscape some clinic comes out advertising a new lipo procedure, saying "Lipo gives you quick results with less effort. Sure, you can still do diet and exercise, but don't hold on to those ivory towers simply because we've always done them. There are better ways to be healthy." Anyone well educated about health would be opposed to the idea. But the danger is all those people who want to be healthy and haven't become educated enough to make an informed decision. They might simply see two different experts arguing about "another tool in the tool belt" and start using it. But lipo, like MS Fakes, encourages bad behavior.

With lipo you can be skinny and still eat fatty, greasy fast food and sugar all day long. If you do that you're not healthy, regardless of what you look like from the outside. With MS Fakes, you don't have to make the collaborators of a class you're writing explicit. You can just call static methods within the class to access those collaborators, and you don't have to write adapters to 3rd party code.

That's my worry with MS Fakes. I already know it's a bad idea for TDD. But think of all those developers out there who are looking to improve their code quality and have started dabbling with TDD, hearing everyone say how it produces better code, but who are as yet a little turned off by the extra effort it takes. Then they find this "new way to do TDD" right in Visual Studio. So they try it and it's easy. So much easier than TDD without it. And hey, I'm still doing TDD, right?

But right here is the crux. TDD with shims isn't TDD. TDD isn't simply writing a test before you write the code. It's writing code which is designed better, using SOLID principles, expressing concepts in your domain. The tests simply enforce that behavior, and make it painful when you do something wrong.

In a TDD process, using MS Fakes as a tool will encourage bad behavior, such as not putting an abstraction layer between your code and third party code, or using static methods where they don't belong. Dependency injection isn't a good idea simply because it enables injecting test fakes (Martin Fowler's fakes, not MS Fakes); it actually produces a better factored class. You can look at the constructor and see what that class depends on. Static methods hide those dependencies. More dependencies means more coupling. More coupling means more complexity, rigidity and difficulty refactoring.
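The "constructors reveal, statics hide" point can be illustrated with a small hypothetical contrast (both classes and the template-reading scenario are invented for illustration):

```csharp
// Hidden dependency: nothing in this class's public surface reveals
// that it touches the file system through a static call.
public class ReportService
{
    public string LoadTemplate()
    {
        return System.IO.File.ReadAllText(@"C:\templates\report.txt");
    }
}

// Explicit dependency: the constructor declares the collaborator,
// so both readers of the code and tests can see and substitute it.
public interface ITemplateStore
{
    string ReadTemplate();
}

public class InjectedReportService
{
    private readonly ITemplateStore store;

    public InjectedReportService(ITemplateStore store)
    {
        this.store = store;
    }

    public string LoadTemplate()
    {
        return this.store.ReadTemplate();
    }
}
```

In the first form only a detouring tool (a shim) can isolate the file access; in the second, any classical stub or mock of ITemplateStore will do.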

I'm constantly changing how I do TDD. And now that I do TDD in JavaScript I've had to change a lot. JavaScript has had the concept of shims forever. It's a dynamic language, so it's built in. I can shim anything. I can shim jQuery's ajax method. But I don't. I hide it behind my own adapter class. The principles of good code apply regardless of my ability to circumvent them based on my language and environment.

Typemock has been available for a long time. Why hasn't it become well known that using Typemock to simply mock statics, skipping the adapters for 3rd party code, is a superior way to write code? Because it isn't.

MS Fakes isn't adding anything new.  But what it IS doing, is putting a dangerous tool that encourages poor behavior within easy reach of every .NET developer.  It's like putting beer dispensers in the high schools.  This has the potential to cause an enormous setback to coding quality among .NET developers as a whole.  

MS's own Fakes site thankfully never claims to be good for the TDD process. Quoted directly from their Fakes page (msdn.microsoft.com/.../hh549175(v=vs.110).aspx): "it is often difficult to implement stubs in practice, because the code-under-test does not allow the use of test stubs". That's obviously referring to preexisting code. Code I haven't written yet can't possibly disallow the use of test stubs. I can write it to use them.

So I will still maintain, and hopefully those who read this discussion will see the wisdom in my position, that MS Fakes has NO place in a TDD process.

Rich Czyzewski

Joe

I've discussed that you can create non-unit-testable architectures that conform to SOLID. You seem to be missing this point altogether in your evaluation that TDD and "classical unit testing tools" should drive design rigidly.

As mentioned earlier, the definition of the D portion of the SOLID principle is: the notion that one should “Depend upon Abstractions. Do not depend upon concretions."

This could easily be implemented with the abstract factory pattern en.wikipedia.org/wiki/Abstract_factory_pattern (which itself decides which concrete implementation to use).

It's a perfectly valid approach, and one that, in contrast to dependency injection, abstracts the needs (dependencies) of the containing object (using the abstract factory) away from the consumer of that object, reducing coupling even further. However, due to the limitations of "classical unit testing frameworks", this approach can't be adequately tested and is therefore treated as a less optimal choice.

In this case and in many others, applying TDD via "classical unit testing tools" as a solution to rigidly fix all design problems isn't just making "it painful when you do something wrong"; it's making it painful to use alternate, valid and correct SOLID-conforming approaches. TDD with "classical unit testing tools" casts too wide a net on the types of architectures and designs it disallows, and therefore doesn't serve the purpose of a design validator appropriately.

That aside, the gist of your remaining argument appears to be that if we give developers better tooling to work around these limitations in "classical unit testing" (such as Microsoft Fakes, Typemock's Isolator or Telerik's JustMock), then the possibility exists that they could design something poorly. This is a general fear of anything new. I'm sure "they" had the same worries about the first version of the .NET Framework itself.

As you said, "We can't hold tradition sacred simply because it's a tradition", and this applies here too; we have to move forward and not be afraid of change. Microsoft Fakes adds tremendous value in terms of pushing unit testing tooling forward to overcome these artificial limitations we've dealt with for so long that they're second nature to many of us. This is another tool in the unit tester's toolbox, even the TDDer's toolbox, and there's absolutely no reason not to use it in order to enable a greater range of designs to be unit testable.

What I'm really interested in is the valid, correct & proper approaches (as well as the new possibilities of those) that Microsoft Fakes will now enable testing for.

Rich

Joe Eames

There's some validity to your argument, although I'd say DI is superior to the abstract factory since it makes your class's collaborators obvious. I personally dislike factories of any kind, since you have to look at the innards to know what dependencies/coupling that factory is actually adding.

But in the end, good tools encourage good behavior. Bad tools encourage bad behavior. For TDD (again, not general unit testing), MS Fakes is a bad tool.

David Adsit

It is important to remember what automated testing is actually for. The primary goal is to get code that is maintainable. I think that the Fakes library, like Typemock and JustMock** before it, is a very powerful tool for dealing with poorly-structured, legacy code. Unfortunately, like any very powerful tool, they are also dangerous in general use. No one would use a 16-penny framing air-nailer to assemble a kitchen spice rack, for example.

Allow me to describe what I feel is a proper use of a tool such as Fakes (borrowed at least in spirit from Michael Feathers):
1. Decide to add a feature to an untested big-ball-of-mud.
2. Introduce tests (vise grips) around a tightly coupled class that ensure its current behavior.
3. Refactor mercilessly, extracting methods and classes and introducing interfaces until you have well factored code that conforms to common design principles like loose coupling, high cohesion, single responsibility, and dependency injection. Ensure that the vise grip tests continually pass. Introduce new unit tests around each new component, ensuring their functionality.
4. Once all the code is refactored properly and is well tested and the vise grips still pass, remove them, as they were only needed to hold the project together while it was in motion, and the new, more-focused unit tests are now guaranteeing the functionality of the whole. (See JBrains - Integration Tests Are a Scam for more details on why they must be removed.)
5. Add the new feature to the well factored, clean code.
6. Repeat from 1 on the next block of messy code I need to add a feature to.
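A transient "vise grip" test from step 2 might look something like the following sketch. LegacyBill is a hypothetical legacy class that calls DateTime.Now directly; the System.Fakes namespace assumes a Fakes assembly has been generated for System, as the Fakes tooling does:

```csharp
using System;
using Microsoft.QualityTools.Testing.Fakes;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class LegacyBillViseGripTests
{
    [TestMethod]
    public void Bill_IsLate_WhenPastDueDate()
    {
        // All shims are scoped to this context and undone on Dispose.
        using (ShimsContext.Create())
        {
            // Detour the static DateTime.Now that the legacy code
            // calls directly, without refactoring it first.
            System.Fakes.ShimDateTime.NowGet =
                () => new DateTime(2012, 6, 1);

            var bill = new LegacyBill(new DateTime(2012, 5, 1));
            Assert.IsTrue(bill.IsLate());
        }
    }
}
```

Under the workflow above, this test exists only to pin behavior during step 3; once the refactoring makes the clock an injectable dependency, it is replaced by ordinary stub-based tests and deleted at step 4.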

You will notice that the use of the Fakes is transient, not a permanent part of the test suite. Once I have properly factored the code, I have no need to deep dive into the CLR or private members to manipulate them directly.

Unfortunately, I do not see most teams I work with moving past step 2. Once the code has tests, most developers consider their work done and they move to the next feature. After all, the test coverage metric has been met, right? Even if there isn't schedule pressure, unwinding a tightly coupled mess is a challenge that many developers avoid, in my experience because adding features is more visible to stakeholders. But if you aren't refactoring the code to introduce additional tests and testing seams after the vise grip tests are in place, you are missing out on the primary benefit of libraries like Fakes or Typemock. You will be left with code that is just as messy and difficult to deal with the next time you need to modify it. You will find that the shortcuts you took by using Fakes didn't make your job at all easier when you return to that code.

As Joe implied, the expressiveness of code is paramount to long term maintenance. We have all learned that code is written once, but read many, many times. Using terms from this application's domain rather than another domain is a key to maintaining the code in the long term. I too strongly advocate isolating "my" code from "their" code by building up an anti-corruption layer using thin(ish) adapters. This allows me to change my code at will without worrying about how they express their domain.

As was mentioned before, there are many positive pressures put on a code base by the introduction of tests. Small, focused classes with lower cyclomatic complexity are much more straightforward to test. Explicit injection of dependencies makes using a class in a context other than production much easier. Relying on interfaces rather than implementations helps focus the code as well on behaviors. The pressures that push us to these features in the design disappear when you can Fake or shim at the CLR level. This will lead to more poorly factored code. My current team consists of very skilled designers and developers. Even for us, testing regularly exposes areas of the code that are more coupled and less flexible than we suspected. We then refactor the code to better designs and as a side effect, we get to add the desired tests.

* As a side note, in my experience, the static keyword is a code smell in an OO language as it leads to more concrete type coupling, which again makes maintenance more difficult.

** I always felt that the price of these tools was a feature rather than a drawback as that keeps teams who don't absolutely need them from reaching for these tools. Alas, Fakes ships with VS11.

Rich Czyzewski

David,

Just as with Joe, you're using the ability for code to be unit testable in "classical terms" as something that defines a good design. When you say "Once I have properly factored the code..." to indicate when you'd be done using Fakes, you're buying into a very specific design, one that is "classically unit testable." Let's be fair here: there is really only one design pattern that enables code to be "classically unit testable"; the tool set is exceptionally limited.

I've included a portion of my response to Joe as many of my responses to Joe are directly applicable here.

There are many preferable and valid designs which don't necessarily provide ease of testing. Inversion of Control is not a foregone conclusion in all architectural designs, and is many times used as "the Golden Hammer" http://en.wikipedia.org/wiki/Golden_hammer

It's very easy to have an ivory tower perspective here and try to impose that your architectural choices are right for everyone and every case, all the while demonizing anything that doesn't match the preconceived notion of "the one perfect architecture to rule them all." There is no architecture that is perfect for every scenario, no design pattern that's applicable every time. For instance, are these the same patterns we'd use for embedded systems, where hardware resources are limited? Shouldn't those systems be unit testable as well?

Part of this is the realization that there are good designs which may not be inherently unit testable by "classical unit testing tools." At that point you'll also realize that the unit tester's toolbox falls drastically short in its capability to test anything besides IoC designs. There's no reason this should be the case.

Additionally, part of the point of agile architecture, the KISS principle and YAGNI, is to generally avoid complexity introduced by premature optimization. If it needs to be better decoupled then the design will be refactored to meet those needs over time. At any given point in time, you have the simplest to use, easiest to work with and the most maintainable architecture as is allowable or possible in the problem domain in question.

Code is not etched in stone, and with the general practice of TDD (even one that includes the usage of Microsoft Fakes), refactoring is not a practice to be feared or loathed. Both of your examples are designs that could easily be refactored toward later on down the road, if a business need arose that necessitated such a refactoring. If the need never arises, then no time is wasted and no additional complexity is introduced through unused features created by premature optimization.


The type of pressures you speak of that are applied through "classical unit testing" create only one type of architecture as the solution. My point is that there is no one architecture that fits the needs of every problem domain. You must embrace the fact that valid, correct and well designed architectures exist that don't adhere to the only design that "classical unit testing" creates. After we accept that and move past it, it's easy to see that testing tools are quite lacking in these areas.

The presumption that developers will always gravitate towards terrible code and terrible design because of better tooling is a little outlandish, IMO. Those who created garbage code before better tooling will continue to create garbage code after better tooling. Those who created good code before will most likely continue to create good code afterwards. Keep in mind that good code is subjective to the needs of the problem domain, and that "your good code" is not necessarily another problem domain's good code, nor should it be.

As I mentioned with Joe, Fakes is just another tool in the unit tester's toolbox; it can be used to allow greater flexibility in the range of acceptable and valid architectural approaches that can be tested. Granted, I do agree that with great power comes great responsibility, and I see that, just as with anything else, this can be abused.

Do I think that developers who would not abuse this tool should be deprived of it? Absolutely not. I believe that Fakes has a very bright future ahead of it as an enabler for unit testing, and that it can only serve to push the community forward in terms of both the pervasiveness of unit testing practices and the evolution of practices as a whole.

Rich

Doug Schott

David,

Are you saying that DI produces better, more maintainable code; therefore, all code should be written to conform to DI?

That sounds like a 16-penny framing air-nailer to me.

Jim Cooper

Doug,

I think David hit it on the head (pun intended). No doubt using patterns just for a pattern's sake is wrong, and the 16-penny nailer is a good analogy. But DI is much more than just a pattern. It is a fundamental coding principle identified and championed by people with much more experience than any of us (unless any of you happen to have close to 50 years of programming experience). I am approaching 20 years and I'm still an infant compared to some of the true "seniors", pun also intended :), in our field. If you aren't familiar with SOLID, you should read up on it; if you are, you should give it a real try on an application where you really have the opportunity to flesh it out.

I am certain that what you will find is that dependency injection, and in fact all of the SOLID principles, are not heavy tools at all. They are actually quite light-weight. I take serious issue with Rich's comment that "The more decoupled you become, the more complicated your architecture becomes." I have been using DI for 5 years (admittedly that's a relatively short period) and experience has taught me quite the opposite is true. Using dependency injection makes your architecture much simpler. I understand the aversion people have to creating interfaces for almost everything; it feels uncomfortable at first. But it is wonderful to work in a system where everything is loosely coupled. The simplicity spreads throughout the application. I have worked on systems that are completely built upon DI from the ground up, and there was no complexity that resulted (other than just creating the interfaces, which is like 2 keystrokes with ReSharper). Once they're there, you rarely think about their existence, since your refactoring tools take care of everything else. Check that out, I used they're, there, and their all in one sentence! I'll end with that accomplishment. :)

Doug Schott

Jim,

You make some intelligent arguments for your case, and I don't doubt your level of experience or your success with wholesale DI in the past. But I would argue that a successful implementation does not necessarily indicate that there is not a better implementation.

I'm glad you mentioned SOLID, because I am familiar with the principles it defines and use them. However, I feel that the principle of Dependency Inversion (the "D" in SOLID) has been abused (and possibly misrepresented) by the DI testing framework community. If we review the principle of Dependency Inversion, it states (from Wikipedia):

A. High-level modules should not depend on low-level modules. Both should depend on abstractions.
B. Abstractions should not depend upon details. Details should depend upon abstractions.


My argument is that not every class to class interaction in an application qualifies as high-level to low-level interaction. It seems that only when you are dealing with higher level concerns (some level higher than each and every individual class definition) do you really encounter the dependencies that truly matter for extensibility and maintainability, which is of course what the principles of SOLID are a good guide for.

To go one step further, and I'll probably get fried for this :), I actually desire extremely tight coupling between many of my classes, as long as the tightly coupled classes have only one shared concern. Only when those classes begin to develop dependencies upon classes that have other concerns (whether architectural in nature or domain specific) would I provide an interface that isolates the interaction and is narrow to that specific interaction (the "I" in SOLID).

If I limit my decoupling effort to the situations just described, what possible benefit would I get from forcing the tightly-coupled, single-responsibility set of classes into supporting dependency inversion? I refactor them as a unit. They have one shared concern (model data for display, model data for transfer, etc.). All external dependencies are well-defined (via well-named interfaces). They are fully testable as is. What is the benefit?
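The arrangement described above might be sketched like this (all names are invented for illustration): two classes sharing one concern are deliberately coupled to each other directly, and only the external dependency crosses a narrow interface.

```csharp
// Narrow interface isolating the one external concern: persistence.
public interface IOrderReader
{
    string ReadRawOrder(int orderId);
}

// These two classes share a single concern (shaping order data for
// display) and are deliberately, tightly coupled: no interface
// separates them, and they are refactored and tested as a unit.
public class OrderDisplayModel
{
    public OrderDisplayModel(string raw)
    {
        this.Summary = raw.Trim();
    }

    public string Summary { get; private set; }
}

public class OrderDisplayBuilder
{
    private readonly IOrderReader reader;

    public OrderDisplayBuilder(IOrderReader reader)
    {
        this.reader = reader;
    }

    public OrderDisplayModel Build(int orderId)
    {
        // Direct construction of the collaborator; only the reader
        // crosses an abstraction boundary.
        return new OrderDisplayModel(this.reader.ReadRawOrder(orderId));
    }
}
```

The cluster remains fully testable as-is: a test stubs IOrderReader and asserts on the resulting OrderDisplayModel, with no further injection needed.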

Keith Brown

"When you drive architecture with the goal of being structurally easier to test, the only thing you end up with is an architecture that is good at being tested."

While this may have been the original intent of TDD about a decade ago (building testable systems), the state of the art has moved quite a bit beyond that. For many years now it's been more about driving maintainable code, with the resulting test suites being a nice side effect.

http://www.drdobbs.com/229218691
bradwilson.typepad.com/.../...sign-by-example.html
lostechies.com/.../
blog.c42.in/tdd-isnt-about-testing-its-about-design
www.stickyminds.com/sitewide.asp

Jim Cooper

I just wrote a post on the Pluralsight blog that I think expresses my feelings on SOLID and how we should be writing code that adheres to it, not because it creates easy-to-test code, but because it creates easy-to-maintain code.  Those interested in this conversation might find the post interesting:  blog.pluralsight.com/.../

Doug Schott

I read your blog post, and it seems a stretch to me to say that understanding "Bounded Contexts" should naturally lead us to understand TDD. They are two different and nearly unrelated concepts. Bounded contexts are high-level logical partitions in your codebase that are in place to create a separation of architectural or domain driven concerns. TDD, however, centers upon creating small units of behavior and testing them in isolation. The isolation part is, of course, what has driven the industry tools to where they are today. Most of them require all tested behaviors in an application to be abstracted (via interfaces).

Your position, it seems, is that decoupling is architecturally good; therefore, all things must be decoupled, even class structures that derive no real direct maintainability or extensibility benefit from the layer of abstraction required to produce the decoupled classes.

And my position is that most of the maintainability or extensibility problems that exist can be solved using logical partitioning (Bounded Contexts), not wholesale abstraction of every unit of behavior in the entire codebase (which is what TDD has driven us to). Not everything must be injected.

It seems to me that you are taking high-level concerns and forcing them down to a very low level, and in doing so forcing additional and wholly unneeded architectural requirements upon all elements of code. It may be that because you do this with all your classes, you have never had the opportunity to really analyze which real scenarios in your domain warrant the added layers of abstraction.

Mike

+1 for what Jim said.

If you can't test code because you're calling a static method, then you've created a tight coupling. So why on earth are you then trying to isolate one half of a tight coupling? Because it is a bad design decision.


Joe Eames

I replied to your comment when I meant to post down here.  Now my reply is buried in the middle of the comments section.
www.richonsoftware.com/.../...Fakes-Deep-Dive.aspx

Doh

You guys talk funny in here

Steve Donn

This post is almost good, except for your commentary on why using this new and shiny tool is better than other design approaches.

Rich Czyzewski

Steve,

I can see why the discussion around when Microsoft Fakes should/could be used might be a turn-off to some. In that case I would recommend my prior posts on the subject, "Comparing Microsoft Moles in VS2010 to Microsoft Fakes in VS11" richonsoftware.com/.../...in-Visual-Studio-11.aspx and "Using Stubs and Shims to Test with Microsoft Fakes in Visual Studio 11" richonsoftware.com/.../...in-Visual-Studio-11.aspx, which are both general overviews of the technology. This particular post, on the other hand, is a deep dive into the subject matter and would be incomplete without describing the contexts in which this could/should be used.

Additionally, the concepts in Microsoft Fakes are nothing new; in fact, Fakes itself is a polished, rebranded version of Microsoft Moles, which has been out for years. Moles was developed under Microsoft Research and was never officially supported. Additionally, Typemock's Isolator has been doing this for quite some time, 4 maybe 5 years, maybe even more. I'm not exactly sure when Telerik added these features to JustMock.

The real change here is not that there is a new and shiny tool (as it's not new anyway), but that these features will be widely available to anyone with a SKU of Visual Studio 11 that includes MSTest (at least that's my best guess). Before this, the only choices were paid or not officially supported tools.

Having tools and features as default installations in Visual Studio has the capacity to drive change. Think about when Visual Studio first added MSTest. This easily led to widespread adoption of both unit testing and test driven approaches in .NET projects.

The true discussions here are about what changes the inclusion of Microsoft Fakes itself will drive.

Rich

Mike

Rich - I haven't seen anyone suggesting this feature will change how we write code. It will just let us test tightly-coupled legacy code.

I'd enjoy you blogging and showing us how we were all wrong, and how SOLID is an over-hyped design smell.

Until then, this talking is really going nowhere. We need some examples Rich.

Rich Czyzewski

Mike,

This post is much less about changing how we write code, and more about changing how we test code and what designs are considered capable of being tested appropriately. At no point do I mention anything against SOLID.

Here's a quote from up the comment chain that describes a valid example for you:

I've discussed that you can create non-unit-testable architectures that conform to SOLID. You seem to be missing this point altogether in your evaluation that TDD and "classical unit testing tools" should drive design rigidly.

As mentioned earlier, the definition of the D portion of the SOLID principle is: the notion that one should “Depend upon Abstractions. Do not depend upon concretions."

This could easily be implemented with the abstract factory pattern en.wikipedia.org/wiki/Abstract_factory_pattern (which itself decides which concrete implementation to use).

It's a perfectly valid approach, and one that, in contrast to dependency injection, abstracts the needs (dependencies) of the containing object (using the abstract factory) away from the consumer of that object, reducing coupling even further. However, due to the limitations of "classical unit testing frameworks", this approach can't be adequately tested and is therefore treated as a less optimal choice.

In this case and in many others, applying TDD via "classical unit testing tools" as a solution to rigidly fix all design problems isn't just making "it painful when you do something wrong"; it's making it painful to use alternate, valid and correct SOLID-conforming approaches. TDD with "classical unit testing tools" casts too wide a net on the types of architectures and designs it disallows, and therefore doesn't serve the purpose of a design validator appropriately.


Additionally, let's try to keep the discussion constructive. This is a good discussion and everyone else has had no problem keeping things that way.

Hope this helps.

Rich

Mike

No it doesn't Rich. Show me the code.

Roger Harford

Hey Rich,

Great article!

When I first started reading I was exactly on the same page as you. I am looking to add unit tests to a small but growing project and felt very odd about adding a lot of unnecessary architecture for the sake of testing that wouldn't be used in any other context.

Ironically, after seeing the alternative you presented (very elegantly, I might add), I think you just converted me to the other side :)

It's not that I don't agree with your points--I still do--but now I can see why using some of these patterns like IoC/Dependency Injection makes sense. It provides a means of describing the dependencies explicitly.

I still don't like the idea of making interfaces for my data access layer (which is itself already supposed to be an abstracted interface since I'm using EF), but now I can see that it lets everyone know "Hey, I have this dependency and I won't work without it. You can't invoke me without it."

The alternative you've described means you really have to investigate the code and spend a lot of time in trial-and-error to figure out what the heck the code needs to run, and then, as you showed, it can sometimes be a pain to reproduce these dependencies through mocks/stubs (although you did a great job of laying out all the work that needs to be done in order for someone else to do so).

I still wish there were a happy medium where you could reliably describe a module's dependencies explicitly without having to create interfaces for every one of them. Maybe just accept an explicit object in the constructor and then mock that? That wouldn't involve a lot of unit-testing-specific code. Again, it's not the mocking I have an issue with, it's the mystery and guesswork.

Curious as to your thoughts on this. Again, great article, thanks!
-Roger

Rich Czyzewski

Hey Roger,

Thanks for the compliments. It's good to see some fresh, well-thought-out ideas on the subject. What you're seeing is that testing our code out of its normal context ultimately forces us to provide that context one way or another.

Your comment made me think of the possibilities of even more interesting tooling, involving something like Roslyn http://msdn.microsoft.com/en-gb/roslyn, where we will be able to inspect code and actually identify dependencies directly. In essence, a module's dependencies are something the compiler already knows; we just need to tap into that.

Using that information, we may get to a place where we don't have to identify dependencies either way (unless it happens to be architecturally necessary/useful). What if, for any method tested, the test host were able to identify dependencies needing faking, provide default detouring/fakes for simple things, and prompt you for anything it couldn't figure out? Maybe I have another side project on my plate ;)

Putting the potential future aside, we still have the question of what to do today with our current tooling. One thing to keep in mind is that this article isn't advocating any particular design either way, just that any design should be reasonably testable with the tools available and that the design should instead be driven by the needs of the problem domain as opposed to the needs of the testing tool.

Today we have a portion of the community that vehemently demonizes anything that isn't absolutely testable through classical unit testing techniques. That negative sentiment needs to evolve toward ideas similar to what you've identified above: that both techniques are suboptimal in their own ways (or, rephrased, they each optimize for different circumstances) and that there are still problems to be solved and work to be done, as a happy medium has yet to be found.

So, what design should we use? The best (and most overused) answer is that "it depends." It's all about what you want to optimize for, weighing the pros and cons of each approach/design as they apply to your specific environment. The only difference is that DI/IoC's weight changes, as it's no longer absolutely coupled to the ability to unit test. Not a definitive answer, but the best we have to work with :)

Rich

Sergio Romero

I soooo do not agree with this approach. This option is just a poor excuse to justify bad designs and avoid refactoring code to a better state.

Even this post from a Program Manager Lead on Microsoft's Visual Studio ALM tools says, and I quote, "shims are evil."

www.peterprovost.org/.../

Basically, it says that shims should ONLY be used to cover otherwise untestable code with tests so it can be safely refactored into a better design; then cover the new code with properly built unit tests and discard the ones using shims.

Rich Czyzewski

Sergio,

Although Peter's opinion is that "shims are evil", I doubt that opinion will be represented in the MSDN docs or that it matches that of the team(s) who developed the functionality or those who decided it was necessary in the first place.

Peter's opinion aside, all you've mentioned is that you don't agree with this approach and you think it leads to bad designs, but you really haven't provided any reasoning or justification.

Care to elaborate on those thoughts a little more?

Rich

Jim Cooper

Rich,

I actually spoke with Peter the day before he wrote that article. Peter led the development of that framework, so I suspect the team who developed it feels the same. He also said he was going to press the docs team to make sure the intent of the moles is documented; whether he'll succeed remains to be seen, since it's not his team. But I think the intent of the shims is clear.

Rich Czyzewski

Jim,

This article and these comments are less about intent (which is not clearly stated anywhere, nor would it matter if it were) and more about whether this tool can be used to test noninvasively, and why there's nothing wrong with using it that way.
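For concreteness, the kind of noninvasive detour in question looks like this — the canonical `ShimDateTime` example from the Fakes documentation, which assumes you've generated a fakes assembly for mscorlib so that `System.Fakes.ShimDateTime` exists:

```csharp
using System;
using Microsoft.QualityTools.Testing.Fakes;

// Inside a test method: every call to DateTime.Now within the
// ShimsContext is detoured to the delegate below, so legacy code
// that reads the clock directly becomes deterministic under test —
// with no change to the code being tested.
using (ShimsContext.Create())
{
    System.Fakes.ShimDateTime.NowGet = () => new DateTime(2012, 1, 1);
    // ...exercise the code under test here...
}
```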

The point of my comment above is that we could all lambast each other with links to blog, forum, Twitter, and Facebook posts all day long and get nowhere. The authors of those ideas are welcome to post here just as much as anyone else and present their case. Until then, continue to bring your own ideas, reasoning, and justification, and prove your points.

Speaking of which, I'd be more interested to see what your counter points are to Doug's criticism of your blog post (in the comments listed above).

Rich

Jim Cooper

Happy to respond to Doug.  I hadn't been back to this article in a while and just came back recently because I was using these comments as an example of how moles are causing people to stop writing good code.

Point 1:
Doug said, "It seems a stretch to me to say that understanding "Bounded Contexts" should naturally allow us to understand TDD."

My Response: I'm not sure what you are referring to in my post on SOLID; that post was about the SOLID principles of software development and not so much about TDD, except that those principles, along with isolating yourself from other bounded contexts, do support TDD. I just looked through my post and I don't see anywhere that I hinted, at all, that understanding bounded contexts naturally allows us to understand TDD. What I did say regarding bounded contexts is: "first, it prevents our code from having to speak the language of the other context (except within the adapter itself); and second, it isolates our code from changes to the other context." This is completely unrelated to TDD; it is doing things right because it makes our code more maintainable. It so happens that those parts of our code also become easier to unit test, but the practice is worth doing whether you are unit testing or not.

Point 2:  Doug said, " The isolation part is, of course, what has driven the industry tools to where they are today. Most of them require all tested behaviors in an application to be abstracted (via interfaces). Your position it seems, is that decoupling is architecturally good; therefore, all things must be decoupled, even class structures that derive no real direct maintainability or extensibility benefit from the layer of abstraction required to produce the decoupled classes."

My Response: I agree that "the isolation part is what has driven testing tools to be where they are today" and that they require an application to be abstracted. But I disagree with the implied context that they did this because they had no choice and would have done it differently if they could have. I believe they did it because that is how code should be written. You are correct that my position is that "decoupling is architecturally good; therefore, all things must be decoupled." If you have a class structure that "derives no real direct maintainability or extensibility benefit from the layer of abstraction," then it is actually a single responsibility, should be part of the same class, and never should have been broken out in the first place. If breaking it out into a separate class is necessary (because it is a separate responsibility), then coupling the classes together does affect maintainability, and therefore it should be decoupled using interfaces.

Point 3:  Doug said, "It may be that because you do this with all your classes, you have never had the opportunity to really analyze what real scenarios in your domain warrant the added layers of abstraction."

My Response: First of all, let me say that I appreciate the suggestion that I have always been what I would, today, consider a good developer (even if you wouldn't). :) I'll be honest: I've written some horribly unmaintainable code in my career. Early on, I knew nothing (seriously... nothing) about abstraction. I've written awful, procedural code. I've written awful, closely-coupled, object-oriented code. And I watched other developers have to deal with that mess when we hired new developers. Then I had the lucky opportunity to learn under the guidance of ThoughtWorks, with people who learned from and rubbed shoulders with other ThoughtWorkers such as Martin Fowler and who developed tools such as CruiseControl and Selenium. It was a definite blessing in my career. I learned a ton about better ways to write software. Since then, on my own projects, I've tried various blends of TDD (using it here and there vs. everywhere) and blends of abstraction. My experience has taught me that abstraction everywhere is the best route; I have too often found myself regretting a lack of abstraction. And frankly, abstraction is easy. I don't intend to say that I now know it all. The more I learn, the more I realize how little I know, and I learn so much from those around me. But yes, I have learned that abstraction, everywhere, is a good thing.

Jim Cooper

So now that I've responded to Doug, I'd like to see your response to Mike's "show me the code" comment. I'd like to see SOLID code that is not unit testable, keeping in mind the additional concept of context boundaries. Guaranteed, if it is not unit testable, it is not conforming to SOLID and context boundaries. But not being unit testable isn't the worst part; it's also not very maintainable.

Doug Schott


I agree that following mainstream TDD coding practices leads to code that conforms to SOLID. However, I disagree that following mainstream TDD coding practices is the best way to write SOLID conforming code.

The big divergence in our positions is that I see SOLID as a set of principles for system design and not for class design. The Dependency Inversion Principle doesn't state that all interactions with a class should be abstracted. In fact, it is very explicit in indicating that only high-level to low-level interactions should be abstracted. Those are the areas of concern that truly derive benefit from the abstraction.

Perfect examples of high-level to low-level interaction are layer-to-layer interactions or bounded context interactions. These are the important interactions that should be abstracted, whereas TDD tools have artificially made all class-to-class interactions seem important, even when they are not.
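As a sketch of that distinction (all names here are invented for illustration): abstract the interaction that crosses the domain/data layer boundary, but leave an in-layer collaborator concrete.

```csharp
public class Customer
{
    public decimal DiscountRate { get; set; }
}

// ICustomerRepository crosses the domain/data boundary
// (high-level to low-level), so it gets an interface.
public interface ICustomerRepository
{
    Customer GetById(int id);
}

// PriceCalculator lives in the same layer as its caller,
// so it stays a plain concrete class.
public class PriceCalculator
{
    public decimal ApplyDiscount(decimal price, decimal rate)
    {
        return price * (1m - rate);
    }
}

public class QuoteService
{
    private readonly ICustomerRepository _customers;                // abstracted boundary
    private readonly PriceCalculator _calc = new PriceCalculator(); // concrete collaborator

    public QuoteService(ICustomerRepository customers)
    {
        _customers = customers;
    }

    public decimal QuoteFor(int customerId, decimal listPrice)
    {
        Customer customer = _customers.GetById(customerId);
        return _calc.ApplyDiscount(listPrice, customer.DiscountRate);
    }
}
```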

I say abstract the important interactions, not all interactions. Put your architecture hat on, identify the areas of concern that should be isolated and abstracted, isolate and abstract them, and don't pollute your codebase by forcing those concerns upon every class. But if you do take the golden hammer approach with regard to isolation and dependency inversion, don't try to say that anything less is not SOLID conformant, because you'd be wrong.

Agarwal

I'm confused about the intent of your article. Is it about principles of good design, or is it about testing legacy code?

If one is talking about testing legacy code then why speak about IOC/DI to begin with?
