Tuesday, 28 April 2009

Unit Testing CSLA with Type Mock Isolator

I am a huge fan of CSLA and try to use it in projects wherever I can. It makes life so much simpler by taking some of the major decision points out of the development cycle, allowing you to focus on the real problem: defining and designing your business logic layer for whatever project you are working on.

As with any framework, CSLA has its benefits and its drawbacks. I am not going to go into all of these in this post, as plenty of people have had those discussions already, but one of the major drawbacks I found was trying to unit test my CSLA objects.

In a nutshell, your CSLA-based business objects inherit from a base class called BusinessBase&lt;T&gt;. This class implements a whole bunch of useful features, such as authorisation rule checking, validation rule checking, n-level undo and distributed business objects via the data portal, all of which can be utilised by any inheriting class.

Probably the key area is the Data Portal mechanism, which performs the following steps (please note this is oversimplified for clarity) when the Save() method is called on your business class:

  • Determine the state of your object: is it valid (all data is correct) and dirty (some data has changed)?
  • Determine what kind of operation to perform (if the object is new then insert, else update, etc.)
  • Serialize your business object
  • Transport the business object to your data access layer (which could be across a server boundary)
  • Deserialize the business object
  • Execute any required data access logic, i.e. insert, update or fetch operations
  • Perform any validation rules required
  • Serialize the object again
  • Transport the business object back to the application layer (which could be across a server boundary)
  • Deserialize the business object
  • Return a new instance of the business object
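To make that last step concrete, here is a minimal sketch of what calling Save() looks like from the application's point of view. It uses the hypothetical Employee class and CompanyId value that appear later in this post, so treat the names as illustrative rather than a complete program; the key point is that you get back a new instance:

```csharp
// Sketch only: Employee and companyId are illustrative names,
// not code from a complete, compilable project.
var employee = Employee.NewEmployee(companyId);
employee.FirstName = "Test";

// Save() drives the whole list of steps above and hands back a NEW
// instance of the business object, so re-assign the result rather
// than keep using the old, now-stale reference.
employee = employee.Save();
```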

As you can see, CSLA does a lot of work under the hood, involving a lot (and I do mean a lot) of reflection. This is all good for making your life as a developer great when building the business domain, but it becomes a problem when you come to unit test and mock out parts of the system.

CSLA encapsulates the data access logic methods, i.e. the DataPortal_XYZ methods, within the business object, which is probably its most controversial point. However, this does not restrict your choice of the actual data access mechanism you want to use. I am personally using LINQ to SQL as my data access layer and find it extremely quick and easy to:

  • Add new columns to the database
  • Expose them via LINQ to SQL
  • Expose them in the business object
  • Add any validation rules in the business object
  • Put a field on the UI

That said, any other type of data access could easily be used instead, such as LINQ to Entities, NHibernate, ADO.NET and so on.

So hopefully you are starting to see the problem we might have with unit testing. Essentially, as soon as you perform an action on an object, such as setting a property, CSLA does some behind-the-scenes work: it executes authorisation rules to determine whether the current user is permitted to perform the action, and then validation rules to determine whether the property is valid after its value has been set. It is this "behind the scenes" work that is absolutely awesome and makes CSLA a great framework, but it also makes mocking and unit testing almost impossible, as it becomes very difficult to isolate the various parts of the business object.
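To illustrate where that behind-the-scenes work is triggered, a typical CSLA-style property of this era wires the authorisation and validation checks into its getter and setter. The helper names below follow the common BusinessBase pattern; this is a sketch, not code lifted from my actual Employee class:

```csharp
// Illustrative CSLA-style property: the base-class calls are where the
// "behind the scenes" work described above happens.
private string _emailAddress = string.Empty;

public string EmailAddress
{
    get
    {
        CanReadProperty("EmailAddress", true);   // authorisation check on read
        return _emailAddress;
    }
    set
    {
        CanWriteProperty("EmailAddress", true);  // authorisation check on write
        if (_emailAddress != value)
        {
            _emailAddress = value;
            PropertyHasChanged("EmailAddress");  // marks the object dirty and runs validation rules
        }
    }
}
```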

I have been working with Rhino Mocks for a while and have found it great, especially when compared to NMock, but when I tried mocking a CSLA object it let me down. The problem seems to be that Rhino Mocks relies on dependency injection to perform its mocking, and because CSLA objects are generally closed and do not expose ways of injecting mocks, it is impossible to replace the data access logic with a mock replacement - or so I thought, until I came across TypeMock.

TypeMock Isolator takes an aspect-oriented approach to mocking: it monitors the application's execution and, via the .NET profiler API, registers an interest in specific methods on an object. When one of those methods is called, the runtime notifies TypeMock and allows it to return mocked objects or values. What this means is that we don't need to change our objects and sacrifice our "good OO design" in order to perform unit testing of our objects.

Here is a very simple example of how I have used TypeMock to fake a couple of child objects that exist on my Employee class. I want to test the validation rules on the Employee object, but I do not want to load or create a real instance of either the Workgroup or Role properties, so I simply create a "fake" instance using the Isolate class:

private Employee CreateEmployee()
{
    var employee = Employee.NewEmployee(CompanyId);
    employee.FirstName = "Test";
    employee.LastName = "Employee";
    employee.EmailAddress = "test.employee@testcompany.com";
    employee.Workgroup = Isolate.Fake.Instance<Workgroup>();
    employee.Role = Isolate.Fake.Instance<Role>();
    return employee;
}



Now my Employee class has all of its properties set (two of them with fake objects) and will therefore be valid. I can independently test each validation rule to ensure that it fires when the relevant property on the object changes; in the following case I ensure that the "Email address is required" rule fires:



[TestMethod]
public void EmailAddressIsRequired()
{
    var employee = CreateEmployee();
    employee.EmailAddress = null;

    Assert.AreEqual(1, employee.BrokenRulesCollection.Count());
    Assert.IsTrue(employee.BrokenRulesCollection[0].Property == "EmailAddress");
    Assert.IsFalse(employee.IsValid);
}



Now, admittedly, I could have achieved the same thing here using Rhino Mocks by creating a mock instance of each of the classes. But let's say, for example, that the Employee object has a custom validation method that uses a command object to determine whether an employee with the same email address already exists. The command object might look something like this:



class EmployeeExistsCommand : CommandBase
{
    public bool EmployeeExists { get; set; }
    public string EmailAddress { get; set; }

    private EmployeeExistsCommand(string emailAddress)
    {
        EmailAddress = emailAddress;
    }

    public static bool CheckIfEmployeeExists(string emailAddress)
    {
        var cmd = new EmployeeExistsCommand(emailAddress);
        cmd = DataPortal.Execute(cmd);
        return cmd.EmployeeExists;
    }

    protected override void DataPortal_Execute()
    {
        using (var ctx = Csla.Data.ContextManager<HolidayPlanrDataContext>
            .GetManager(HolidayPlanr.DataAccess.Database.HolidayPlanrDb))
        {
            var data = from e in ctx.DataContext.Employees
                       where e.Email == EmailAddress
                       select e;

            EmployeeExists = data.SingleOrDefault() != null;
        }
    }
}



The main thing to note here is that this command makes a trip to the database via LINQ to SQL in order to determine whether an employee with the same email address already exists. We can add a custom validation rule to our Employee class to execute this command like so...



protected override void AddBusinessRules()
{
    ValidationRules.AddRule(CommonRules.StringRequired, FirstNameProperty);
    ValidationRules.AddRule(CommonRules.StringRequired, LastNameProperty);
    ValidationRules.AddRule(CommonRules.RegExMatch,
        new CommonRules.RegExRuleArgs(EmailAddressProperty, @"\w+([-+.]\w+)*@\w+([-.]\w+)*\.\w+([-.]\w+)*"));
    ValidationRules.AddRule(Rules.ObjectRequired, FirstNameProperty);
    ValidationRules.AddRule(Rules.ObjectRequired, LastNameProperty);
    ValidationRules.AddRule(Rules.ObjectRequired, EmailAddressProperty);
    ValidationRules.AddRule(Rules.ObjectRequired, RoleProperty);
    ValidationRules.AddRule(Rules.ObjectRequired, WorkgroupProperty);
    ValidationRules.AddRule(EmployeeAlreadyExists, EmailAddressProperty);
}



And the static method that implements the logic looks like this:



private static bool EmployeeAlreadyExists(object target, RuleArgs args)
{
    if (target is Employee)
    {
        var employee = target as Employee;
        if (employee.IsNew)
        {
            if (EmployeeExistsCommand.CheckIfEmployeeExists(employee.EmailAddress))
            {
                args.Description = string.Format(
                    "An employee already exists with email address {0}", employee.EmailAddress);
                return false;
            }
        }
    }
    return true;
}



Now when we run our unit tests, the creation of our Employee object will execute the validation rules, which will call the static method and make a call to the database to determine whether the employee already exists. This is exactly what we do not want to happen, because this rule will now fire for all of my other tests.



This, as I see it, is where the power of TypeMock Isolator comes in. I can define an Isolate command to intercept any calls to the EmployeeAlreadyExists method and simply return the validation result I want, by declaring the following in my CreateEmployee method:



private Employee CreateEmployee()
{
    Isolate.NonPublic.WhenCalled(typeof(Employee), "EmployeeAlreadyExists").WillReturn(true);

    var employee = Employee.NewEmployee(CompanyId);
    employee.FirstName = "Test";
    employee.LastName = "Employee";
    employee.EmailAddress = "test.employee@testcompany.com";
    employee.Workgroup = Isolate.Fake.Instance<Workgroup>(Members.CallOriginal);
    employee.Role = Isolate.Fake.Instance<Role>(Members.CallOriginal);

    Isolate.Verify.NonPublic.WasCalled(typeof(Employee), "EmployeeAlreadyExists");

    return employee;
}



So now, prior to creating a new instance of the Employee object, I define an isolation of the non-public method "EmployeeAlreadyExists" on the Employee class and set its return value to true. This ensures that by default the validation rule always passes, allowing me to carry on and isolate my other validation rules.



The second statement I added is a verify statement that ensures a call was actually made to "EmployeeAlreadyExists", so it lets me know if there was a problem in the actual call to the method.



Conclusion



TypeMock Isolator allows areas of a system that were previously untestable and unmockable to be tested and mocked in an easy-to-understand way. I like it very much because it gives me the ability to write much more in-depth CSLA unit tests without breaking my OO design.



It is very powerful and could easily be misused or overused, but given the benefit of increased general unit test coverage it is probably worth it.



One major drawback is the lack of a community edition. The product comes with a 21-day enterprise license that reverts to the free features after the trial period, and the single-user license is priced at 89 euros, which I suppose could be well worth it considering the peace of mind gained from unit testing those hard-to-reach places. All in all I like it and would recommend others at least give it a try.

8 comments:

Dave.Erwin said...

Thanks for the post. It clarified a couple of things for me. As we've discussed, "drawing the line" is my difficulty. I've been stuck the last day or so trying to test down into the DAL. Trying to mock the DAL to return a SafeDataReader seems impossible, and until I read this I hadn't thought about the reflection aspect of CSLA preventing it from working. So the isolation wouldn't carry through from DataPortal.XXX to DataPortal_XXX. TypeMock can do a lot, but maybe not quite that much?

That being said it may be that I'm pushing the test too far down. I currently use a pretty standard DAL (no NHibernate etc.). It fills in a data transfer object which is returned to the business object. I'm setting it up so that I can swap out the DAL in the future. I am in the process of migrating my app to the DTO idea, most of it has the DAL directly in the BO. I'm thinking that the tests should go no further than mocking the call to the DAL and returning a DTO. Not sure what I'm really going to gain by testing into the DAL.

Based on the reflection issue it would seem that my older BOs that have the DAL in them cannot be tested. All the more reason to get them over to DTOs.

How far in are you taking your tests?

Richard Allen said...

I don't think I shall be testing into the DAL explicitly but I will be performing tests that will execute data access logic i.e. expect to set values in the database, and then use a Linq To SQL data context to test whether the values have been set as I expected.
I would recommend using this approach of a LINQ to SQL data context as part of your DAL validation tests, it is really easy to setup and allows you to quickly verify that the stored proc you are calling is actually setting the values you expect it to without the additional need to write plumbing logic for the tests.
I shall try and get a blog post written on how to get it configured.

Dave.Erwin said...

I managed to solve the problem 5 minutes after I posted my last comment (of course). Turns out I missed faking out a SQL command object and that was causing a NullReferenceException.

I am able to fake everything so that I can feed a SafeDataReader into my DTO. Your post and Phil Haack's StubDataReader (http://bit.ly/C8QLf) helped put the pieces together. I had to make a change to wrap _sdr = new SafeDataReader(Command.ExecuteReader()); in a method before I could get my fake SafeDataReader in place. Still working on why that was needed. Swap.NextInstance doesn't seem to work in that case.

Really just pushing it this far to learn more about using TypeMock in different situations. I'm still not sure that the test I'm writing accomplishes much.

hagashen said...

How do you then test that the command is actually working correctly?

Richard Allen said...

If you take a look at my next post on using Linq2SQL to validate data access: http://richallen.blogspot.com/2009/04/using-linq-to-sql-to-validate-data.html, this explains a way in which you could simply use Linq2SQL to insert a known row into your database in the above case we would insert an employee with the required test email address, then action your command object, then assert that the result of the command object is true or false depending on what your expectation of the test is.

AndrewH said...

Hi Rich,

Long time no speak; I hope you are well.

I too am continuing to struggle with the best way to unit test, because of data access. I think what Dave.Erwin said is relevant not (just?) because of testing the data access, but because of creating an object to test.

I think what is 'hidden' in your blog is that Employee.NewEmployee could perform data access. In this instance I am guessing it does not. But if you have to hit the database to create a new object, that's where you have to mock out the data access.

I really don't want to go to the trouble of complicating the architecture with interfaces, DTOs and multiple data access layers, it's just not worth the extra complexity in our environment. But I do want to deliver a good answer for our guys to use in all instances.

Is it only/mostly validation and authorisation rules that you ever test? Ironically, the thing that almost always fails for us after object changes is the data access - incorrectly named SP parameters or missing parameters! Er, hang on, isn't that the bit we are trying to avoid running ...!

Donny Brasco said...

Is there any other Mocking framework you can use besides Type Mock Isolator that is for free?

Richard Allen said...

Unfortunately I do not know of a free alternative to TypeMock - although I believe that Telerik's JustMock might provide similar behaviour but that is not free either. If you do find one then let me know :-)