Monday, 2 November 2009

CSM course in Bournemouth 23 & 24 November

Geoff Watts will be holding a CSM training course in Bournemouth on 23 and 24 November 2009. For full details see here

Monday, 17 August 2009

An ASP .NET Testing Time Ahead

Thursday 20th August sees NxtGenUG Southampton hosting a web application testing session that is not to be missed. Testing guru and all-round community good guy Ben Hall is coming down to pay us a visit and talk about all things web testing. Ben will be looking at how you can test different aspects of ASP .NET, from the core engine to the UI and databases. Topics will include test-driven development, mocking of external services and UI automation, but most importantly Ben will be describing the best way to get started with testing your web application.

All in all this looks like a great session and if you are in the area I would definitely recommend paying us a visit.

Wednesday, 20 May 2009

Unit Testing ASP.NET? ASP.NET unit testing has never been this easy.


Typemock is launching a new product for ASP.NET developers – the ASP.NET Bundle - and for the launch will be giving out FREE licenses to bloggers and their readers.

The ASP.NET Bundle is the ultimate ASP.NET unit testing solution, and offers both Typemock Isolator, a unit test tool and Ivonna, the Isolator add-on for ASP.NET unit testing, for a bargain price.
Typemock Isolator is a leading .NET unit testing tool (C# and VB.NET) for many ‘hard to test’ technologies such as SharePoint, ASP.NET, MVC, WCF, WPF, Silverlight and more. Note that for unit testing Silverlight there is an open source Isolator add-on called SilverUnit.

The first 60 bloggers who blog this text and tell us about it will get a free Isolator ASP.NET Bundle license (Typemock Isolator + Ivonna). If you post this in an ASP.NET dedicated blog, you'll get a license automatically (even if more than 60 submit) during the first week of this announcement.

Also 8 bloggers will get an additional 2 licenses (each) to give away to their readers / friends.
Go ahead, click the following link for more information on how to get your free license.

Wednesday, 29 April 2009

Using Linq to SQL to validate Data Access Logic

So Microsoft aren't going to be developing Linq to SQL (L2S) any further (see "Microsoft Kills Linq to SQL" for more info), but there are still situations where it can be really useful without ever going into a production build, and one of those is using L2S to test your data access logic.

This post aims to talk you through a way of using L2S to confirm that what you think your data access logic is doing is what your data access logic is actually doing.

A really useful tool that comes with L2S is SQLMetal. It automatically generates a data context based on the current schema of a given database, so instead of using the fancy designer front end you can generate the whole database schema, including stored procedures and views if you want (or leave them out simply by omitting the relevant parameter). To do this every time your application builds, amend your project file with the following, ensuring that you replace the text in [] with your specific values:

<Target Name="BeforeBuild">
  <Exec Command="&quot;C:\Program Files\Microsoft SDKs\Windows\v6.0A\bin\SqlMetal.exe&quot; /server:[ServerName] /database:[DatabaseName] /code:[MyDataContext].cs /context:[MyDataContext] /namespace:[MyCompany].[MyProject].DataAccess /views /sprocs /functions /pluralize" WorkingDirectory="$(ProjectDir)" />
</Target>



The parameters /views /sprocs /functions expose all of the views, stored procedures and functions respectively. The /pluralize parameter indicates that you want plural names for table collections, so a table called "Customer" will be exposed as "Customers" in the data context. For full details of the parameter list go here. Once you've modified the project file, reload it; the next time the project builds it will generate a data context for you with as much of your database exposed as you would like.
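To give a feel for what the generated context exposes, here is a minimal usage sketch; MyDataContext matches the /context value above, while the connection string and the Customer/Customers table are purely illustrative assumptions rather than anything generated for you:

using System;
using System.Linq;

class DataContextSmokeTest
{
    static void Main()
    {
        // Placeholder connection string - point this at the same database SqlMetal ran against.
        var connectionString = "Data Source=.;Initial Catalog=[DatabaseName];Integrated Security=True";

        // The generated context derives from DataContext and accepts a connection string.
        using (var context = new MyDataContext(connectionString))
        {
            // Because of /pluralize, the Customer table is exposed as the Customers collection.
            int customerCount = context.Customers.Count();
            Console.WriteLine("Customers in the database: {0}", customerCount);
        }
    }
}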



Now that we have our data context we can put it to good effect when testing our data access logic. For example, take the following data portal insert method from a CSLA business object, which will be called when our business object is new and the Save method is called:



protected override void DataPortal_Insert()
{
    using (var connection = ConnectionManager<SqlConnection>.GetManager("MyDatabase").Connection)
    {
        SqlCommand cmd = new SqlCommand();
        cmd.CommandType = CommandType.StoredProcedure;
        cmd.Connection = connection;
        cmd.CommandText = "usp_CreateEmployee";
        cmd.Parameters.AddWithValue("@FirstName", ReadProperty(FirstNameProperty));
        cmd.Parameters.AddWithValue("@LastName", ReadProperty(LastNameProperty));
        cmd.Parameters.AddWithValue("@EmailAddress", ReadProperty(EmailAddressProperty));
        cmd.ExecuteNonQuery();
    }
}


This method simply executes a stored procedure called "usp_CreateEmployee" and passes in the parameters from the business object. What we want to do here is validate that the parameters are correctly passed from our business object into our stored procedure, and that the stored procedure correctly inserts a new employee with the details set on the business object. What we don't want to do is write a whole bunch of code in order to check this, and that is where L2S steps in.



So now back in our unit test we can do the following:



[TestMethod]
public void EmployeeIsCreatedInDatabase()
{
    var TestDataContext = new MyTestDataContext(CreateConnection());

    int initialCount = TestDataContext.Employees.Count();

    const string FirstName = "TestFirstName";
    const string LastName = "TestLastName";
    const string EmailAddress = "test@test.com";

    // Create business object and set values
    var employee = Employee.NewEmployee(CompanyId);
    employee.FirstName = FirstName;
    employee.LastName = LastName;
    employee.EmailAddress = EmailAddress;

    // Call save on the business object; this will execute the data portal logic
    // and insert a new row in the database with the new values
    employee.Save();

    // Refresh our data context with data from the database
    TestDataContext.Refresh(RefreshMode.OverwriteCurrentValues);

    // Now get the new employee table count and assert it has increased by 1
    int newCount = TestDataContext.Employees.Count();
    Assert.AreEqual(initialCount + 1, newCount);

    // Now get a Linq Employee object directly from the data context and verify
    // that all of the fields have been populated as we expected them to be.
    LinqEmployee linqEmployee = TestDataContext.Employees.OrderByDescending(e => e.ID).FirstOrDefault();
    Assert.IsNotNull(linqEmployee);
    Assert.AreEqual(linqEmployee.FirstName, FirstName, "Expected first names to match");
    Assert.AreEqual(linqEmployee.LastName, LastName, "Expected last names to match");
    Assert.AreEqual(linqEmployee.Email, EmailAddress, "Expected email addresses to match");
}


As you can see from the above code, the test creates a TestDataContext L2S object, takes an initial count of the number of employees in the Employee table and defines our test values as constants.



Next we create a new employee business object and set the appropriate values with the test data. We then call the Save method on the business object; this is the method we are actually testing, and we want to ensure that it correctly passes the information set on the business object down to our data access layer, and that the data access layer inserts the appropriate data into the relevant tables.



We can then verify that the Save method has done what it needed to do by using our TestDataContext: we simply refresh the data with values from the database and then perform some assertions. First, we check that the employee count has incremented by 1, which ensures that an insert was performed on the table. Next, we sort the employees by Id in descending order so that we get the latest Id generated by the table, since our new employee should be the last row in the table (unfortunately the Last() operator is not supported by L2S, so we have to use this order-by-descending approach). We then take an instance of this L2S object and perform assertions on its properties, ensuring that the data we set on our business object has been written to the L2S object.
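To make the Last() limitation concrete, here is a small sketch of the workaround used in the test above (the same query shown in isolation; the ID column name matches the test code):

// Last() cannot be translated into SQL by Linq to SQL, so this line
// would throw NotSupportedException at runtime:
// var newest = TestDataContext.Employees.Last();

// Instead, sort by the identity column in descending order and take the first row:
var newest = TestDataContext.Employees
    .OrderByDescending(e => e.ID)
    .FirstOrDefault();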



Conclusion



Linq to SQL may no longer be actively developed by Microsoft, and as such might not be suitable for production-ready systems, but it can still be a really useful tool, especially when writing data tests to ensure our data access layers are doing what we expect. I hope you find this post useful and that it helps you get a bit more data access testing done. Let me know your comments.

Tuesday, 28 April 2009

Unit Testing CSLA with Type Mock Isolator

I am a huge fan of CSLA and try to use it in projects wherever I can, as it makes life so much simpler by taking some of the major decision points out of the development cycle, allowing you to focus on the real problem: defining and designing the business logic layer for whatever project you are working on.

As with any framework, CSLA has its benefits and its drawbacks. I am not going to go into all of these in this post, as plenty of people have already had those discussions, but one of the major drawbacks I found was when trying to unit test my CSLA objects.

In a nutshell, your CSLA-based business objects inherit from a base class called BusinessBase<T>. This class implements a whole bunch of useful features, such as authorisation rule checking, validation rule checking, n-level undo and distributed business objects via the data portal, that can be utilised by any inheriting class.
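If you haven't used CSLA before, a rough sketch of the shape of such a class may help when reading the code later in this post (CSLA 3.6-style syntax, heavily simplified; the real Employee object obviously has more properties and rules):

using System;
using Csla;

[Serializable]
public class Employee : BusinessBase<Employee>
{
    // Managed properties are registered with the framework so CSLA can
    // drive validation, authorisation, dirty tracking and serialisation.
    public static readonly PropertyInfo<string> FirstNameProperty =
        RegisterProperty<string>(p => p.FirstName);
    public string FirstName
    {
        get { return GetProperty(FirstNameProperty); }
        set { SetProperty(FirstNameProperty, value); }
    }

    // Objects are created via factory methods so the data portal can run
    // the framework's creation logic.
    public static Employee NewEmployee()
    {
        return DataPortal.Create<Employee>();
    }
}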

Probably the key area is the Data Portal mechanism, which performs the following (please note this is over-simplified for clarity) when the Save() method is called on your business class; a minimal call sketch follows the list:

  • Determine the state of your class: is it Valid (all data is correct) and Dirty (some data has changed)?
  • Determine what kind of operation to perform (if the object is new then Insert, else Update, etc.)
  • Serialize your business object
  • Transport the business object to your data access layer (which could be across a server boundary)
  • De-serialize the business object
  • Execute any required data access logic, i.e. Insert, Update or Fetch operations
  • Perform any validation rules required
  • Serialize the object again
  • Transport the business object back to the application layer (which could be across a server boundary)
  • De-serialize the business object
  • Return a new instance of the business object to the application logic
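
Here is the call sketch mentioned above: from the client's point of view all of that happens behind a single Save() call (simplified; the real factory method in this post takes a company id):

// Client code: one Save() call triggers the whole pipeline above.
var employee = Employee.NewEmployee();
employee.FirstName = "Test";

// For a new, valid, dirty object the data portal routes the call to
// DataPortal_Insert; for an existing dirty object it goes to
// DataPortal_Update. Save() returns the post-portal instance, so always
// keep hold of the returned reference.
employee = employee.Save();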

As you can see, CSLA does a lot of work under the hood, which involves a lot (and I do mean a lot) of reflection. This is all good for making your life as a developer great when developing the business domain, but it becomes a problem when you come to unit test and mock out parts of the system.

CSLA encapsulates the data access logic methods, i.e. DataPortal_XYZ, within the business object, which is probably its most controversial point, but this does not restrict your choice of the actual data access mechanism. I personally am using Linq to SQL as my data access layer and find that it is extremely quick and easy to:

  • Add new columns to the database
  • Expose them via Linq to SQL
  • Expose them in the business object
  • Add any validation rules in the business object
  • Put a field on the UI

That said, any other type of data access could easily be used instead, such as Linq to Entities, NHibernate, ADO .NET, etc.
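
As an illustration of how the two fit together, a Linq to SQL backed fetch inside the business object might look something like the sketch below (the data context name, criteria type and column names are assumptions for illustration, not lifted from my actual project):

private void DataPortal_Fetch(SingleCriteria<Employee, int> criteria)
{
    using (var ctx = Csla.Data.ContextManager<MyDataContext>.GetManager("MyDatabase"))
    {
        // Pull the Linq to SQL row for the requested employee.
        var data = ctx.DataContext.Employees.Single(e => e.ID == criteria.Value);

        // LoadProperty copies values into the managed CSLA properties
        // without firing validation or authorisation rules.
        LoadProperty(FirstNameProperty, data.FirstName);
        LoadProperty(LastNameProperty, data.LastName);
        LoadProperty(EmailAddressProperty, data.Email);
    }
}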

So hopefully you are starting to see the problem we might have with unit testing. Essentially, as soon as you perform an action on an object, such as setting a property, CSLA will perform some behind-the-scenes work to execute authorisation and validation rules, determining whether the current user is permitted to perform the action and then whether the property is valid after its value has been set. It is this "behind the scenes" work that is absolutely awesome and makes CSLA a great framework, but it also makes mocking and unit testing almost impossible, as it becomes very difficult to isolate the various parts of the business object.

I have been working with Rhino Mocks for a while and have found it great, especially when compared to NMock, but when I tried mocking a CSLA object it let me down. The problem is that Rhino Mocks relies on dependency injection to perform its mocking, and because CSLA objects are generally closed and do not expose ways of injecting mocks, it is impossible to replace the data access logic with a mock replacement. Or so I thought, until I came across TypeMock.

TypeMock Isolator takes an aspect-oriented approach to mocking: it monitors the application's execution via the .NET profiler API and registers an interest in specific methods on an object, so that when one of those methods is called TypeMock is notified and can return mocked objects or values instead. What this means is that we don't need to change our objects and sacrifice our "good OO design" in order to unit test them.

Here is a very simple example of how I have used TypeMock to fake a couple of child objects that exist on my Employee class. I want to test the validation rules on the Employee object, but I do not want to have to load or create a real instance of either the Workgroup or Role properties, so I simply create a "fake" instance using the Isolate class:

private Employee CreateEmployee()
{
    var employee = Employee.NewEmployee(CompanyId);
    employee.FirstName = "Test";
    employee.LastName = "Employee";
    employee.EmailAddress = "test.employee@testcompany.com";
    employee.Workgroup = Isolate.Fake.Instance<Workgroup>();
    employee.Role = Isolate.Fake.Instance<Role>();
    return employee;
}



Now my Employee class has all of its properties set (two of them with fake objects) and is therefore valid, so I can independently test each validation rule to ensure it fires correctly when the relevant property on the object changes. In the following case I ensure that the "Email address is required" rule is fired:



[TestMethod]
public void EmailAddressIsRequired()
{
    var employee = CreateEmployee();
    employee.EmailAddress = null;

    Assert.AreEqual(1, employee.BrokenRulesCollection.Count());
    Assert.IsTrue(employee.BrokenRulesCollection[0].Property == "EmailAddress");
    Assert.IsFalse(employee.IsValid);
}



Now admittedly I could have achieved the same thing here using Rhino Mocks and creating a mock instance of each of the classes, but let's say, for example, that the Employee object has a custom validation rule that uses a command object to determine whether an employee with the same email address already exists. The command object might look something like this:



[Serializable] // required so the command can travel through the data portal
class EmployeeExistsCommand : CommandBase
{
    public bool EmployeeExists { get; set; }
    public string EmailAddress { get; set; }

    private EmployeeExistsCommand(string emailAddress)
    {
        EmailAddress = emailAddress;
    }

    public static bool CheckIfEmployeeExists(string emailAddress)
    {
        var cmd = new EmployeeExistsCommand(emailAddress);
        cmd = DataPortal.Execute(cmd);
        return cmd.EmployeeExists;
    }

    protected override void DataPortal_Execute()
    {
        using (var ctx = Csla.Data.ContextManager<HolidayPlanrDataContext>.GetManager(HolidayPlanr.DataAccess.Database.HolidayPlanrDb))
        {
            var data = from e in ctx.DataContext.Employees
                       where e.Email == EmailAddress
                       select e;

            EmployeeExists = data.SingleOrDefault() != null;
        }
    }
}



The main thing to note here is that this command makes a trip to the database via Linq to SQL in order to determine whether an employee with the same email address already exists. If we add a custom validation rule to our Employee class to execute this command like so...



protected override void AddBusinessRules()
{
    ValidationRules.AddRule(CommonRules.StringRequired, FirstNameProperty);
    ValidationRules.AddRule(CommonRules.StringRequired, LastNameProperty);
    ValidationRules.AddRule(CommonRules.RegExMatch, new CommonRules.RegExRuleArgs(EmailAddressProperty, @"\w+([-+.]\w+)*@\w+([-.]\w+)*\.\w+([-.]\w+)*"));
    ValidationRules.AddRule(Rules.ObjectRequired, FirstNameProperty);
    ValidationRules.AddRule(Rules.ObjectRequired, LastNameProperty);
    ValidationRules.AddRule(Rules.ObjectRequired, EmailAddressProperty);
    ValidationRules.AddRule(Rules.ObjectRequired, RoleProperty);
    ValidationRules.AddRule(Rules.ObjectRequired, WorkgroupProperty);
    ValidationRules.AddRule(EmployeeAlreadyExists, EmailAddressProperty);
}



And the static method that implements the logic looks like this:



private static bool EmployeeAlreadyExists(object target, RuleArgs args)
{
    if (target is Employee)
    {
        var employee = target as Employee;
        if (employee.IsNew)
        {
            if (EmployeeExistsCommand.CheckIfEmployeeExists(employee.EmailAddress))
            {
                args.Description = string.Format("An employee already exists with email address {0}", employee.EmailAddress);
                return false;
            }
        }
    }
    return true;
}



Now when we run our unit tests, the creation of our Employee object will execute the validation rules, which in turn call the static method and make a call to the database to determine whether the employee already exists. This is exactly what we don't want to happen, because now this rule will fire for all of my other tests.



This is, as I see it, where the power of TypeMock Isolator comes in. What I can do now is define an Isolate statement to intercept any calls to the EmployeeAlreadyExists method and simply return the validation result I want. I can do this by declaring the following in my CreateEmployee method:



private Employee CreateEmployee()
{
    Isolate.NonPublic.WhenCalled(typeof(Employee), "EmployeeAlreadyExists").WillReturn(true);

    var employee = Employee.NewEmployee(CompanyId);
    employee.FirstName = "Test";
    employee.LastName = "Employee";
    employee.EmailAddress = "test.employee@testcompany.com";
    employee.Workgroup = Isolate.Fake.Instance<Workgroup>(Members.CallOriginal);
    employee.Role = Isolate.Fake.Instance<Role>(Members.CallOriginal);

    Isolate.Verify.NonPublic.WasCalled(typeof(Employee), "EmployeeAlreadyExists");

    return employee;
}



So now, prior to creating a new instance of the Employee object, I define an isolation of the non-public method "EmployeeAlreadyExists" on the Employee class and set its return value to true, ensuring that by default this validation rule always passes and allowing me to carry on and test my other validation rules in isolation.



The second statement I added is just a verify statement, which ensures that a call was actually made to the "EmployeeAlreadyExists" method, so it lets me know if there was a problem in the actual call to the method.



Conclusion



TypeMock Isolator allows areas of a system that were previously untestable and unmockable to be tested and mocked in a nice, easy-to-understand way. I like it very much because it gives me the ability to write much more in-depth CSLA unit tests without breaking my OO design.



It is very powerful and could easily be mis-used or over-used, but given the benefit of increased unit test coverage it is probably worth it.



One major drawback is the lack of a community edition; the product comes with a 21-day enterprise license that reverts to the free features after the trial period. The single-user license is priced at 89 euros, which I suppose could be well worth it considering the peace of mind gained from unit testing those hard-to-reach places. All in all I like it and would recommend others at least give it a try.

UK South Scrum User Group (UKSSUG)

It's been a while since I last blogged; I can't really think of a decent reason apart from this thing called Twitter taking up a fair bit of time recently. I've not been tweeting all that much myself, but I have been doing lots of reading of tweeple's tweets in the twittersphere.

Anyway, with that out of the way, last night I went to the second meeting of the UK South Scrum User Group to talk about "The Definition of Done", something that seems to plague the majority of people developing software.

It was a great evening led by Mike Williams. We discussed the various stages of "done", including Tasks, User Stories, Iterations and Releases, and everybody suggested ways in which it is possible to define "done" and talked about how any problems we were facing could be addressed.

All in all a great second night of the user group; it's great to be a part of a group that I am not running, so that I can actually participate rather than constantly trying to organise the next thing for NxtGenUG. If you fancy coming along to UKSSUG, it's an open invite and totally free; keep an eye on the LinkedIn group and we'll get the next meeting date posted up soon.

Tuesday, 31 March 2009

New South Coast Scrum User Group

The South Coast now has a new user group for anyone interested in Scrum and anything Agile. Plamen Balkanski and a couple of other Agile enthusiasts have set up this user group to help spread the word of Scrum within the local area, and anyone is welcome, whether you are new to Agile or an experienced Scrum Master.

The next session will be on 27th April and the topic is "Getting acceptable definition of done" - which seems to be a bit of a classic Agile development issue. 

The meeting runs from 19:00 to 21:00 and can be found at:

The Inn on the Furlong
Meeting House Lane
Ringwood
Hampshire
BH24 1EY

http://www.ringwoodbrewery.co.uk/iof.htm

To register for the event go to http://ukssug.eventbrite.com. It's free, but registration is needed so we know how many people are going to attend. You can also join the LinkedIn group here, where you will find discussions, notes and links with details from previous meetings.

Look forward to seeing you there.

Friday, 6 February 2009

I am on Twitter

For anyone interested, I am now on Twitter: http://www.twitter.com/rich_allen.

Subtle changes to Tortoise SVN increase machine performance

For the last I don't know how long, I've been complaining about the speed of my work desktop machine. It's a dual-core with 2GB RAM running XP, but it runs like an absolute pig. A colleague of mine recently did some digging and found that unless you make some subtle changes to Tortoise SVN, it does quite a bit of unnecessary work in the background.

Quote from colleague:

"I think a significant factor in our machines being slow is Tortoise performance as it is monitoring and caching all file modifications (it is the biggest user of CPU time on my box).

This article makes some recommendations - http://tortoisesvn.net/node/267.

I’ve followed a couple and it seems to have made some difference:

1. Right Click->Tortoise->Settings – Icon Overlays Tab

Add excludes to all drives.

Add an include to your working directory only.

[screenshot: TortoiseSVN Icon Overlays settings dialog]

2. Remove all currently unused source directories - > I have removed the branches etc and only have MF and MIS trunk checked out for instance."

Also, doing the following can help if you don't mind losing the pretty overlay icons on some of your files:

[screenshot: Icon Overlays status cache setting]

Select status cache = none.

This means you lose having meaningful icons on your source folders, but the rest of Tortoise works: commit/update/get modifications etc.

I can live without the icons….

Since making these changes I have seen a marked difference in how my machine responds...hooray!

Thursday, 8 January 2009

BHF - Bournemouth Bay 10K Run 2009

Now I am a little confused as to how I actually got myself into this, but in the whirlwind return to work after a great Christmas and New Year, a colleague at Fitness First decided to send a link around saying that he was taking part in the Bournemouth Bay 10K Run for the British Heart Foundation.

Before I knew it a couple of other colleagues had signed up, and I impulsively found myself saying something like "...yeah sounds good to me, where do I sign...". So I am firmly signed up to this huge challenge (I am a software developer, not a road runner by any means), the training has started and the plea for aid and donations has begun.

I've not really done anything like this before, but if anybody is interested in helping me raise money for the British Heart Foundation, or simply wants to bet that I cannot do it (that should encourage me some more), then please feel free to pop along to My Fundraising Page or use the widget in the sidebar and donate as much as you can afford.

Many thanks

The Tricky Design Experience with Dr. Neil Roodyn

Next week, on Thursday January 15th, we've got Tricky Basset coming to show us the 'Tricky Design Experience' at NxtGenUG Southampton. After seeing Bill Buxton's keynote at Remix '08 it became clear that Microsoft is really moving into the design space with considerable vigour; the concept of people not buying products but buying experiences is something that Microsoft seem to have missed over the years, whereas companies like Apple have excelled at it.

Tricky will be talking us through his team's iterative approach to design, including Branding, Ideation, Research, Sketching and Conceptualisation, and showing off some great WPF and Silverlight projects.

As a special treat, Tricky has invited Dr. Neil Roodyn (MVP) to travel all the way from the land of Oz to join him for this exclusive session at NxtGenUG Southampton; together they will be providing an interactive session with plenty of audience participation.

Design is something that I've always wanted to get a grip on (in that I have no design skills whatsoever), and I personally can't wait for this session; it is stacking up to be an awesome experience and one not to be missed.

See you there!