I’ve recently been spending some time working on removing the friction in project start-up, by producing a skeleton solution for an ASP.NET MVC application, based on all the goodness that we’ve thrown into Who Can Help Me?. I ran this by my colleague Simon Brown and asked him to pull it apart and ask any questions that came to mind.

The result is quite a good Q&A session that relates directly to the Who Can Help Me? solution. These are the kind of questions that people may not feel like asking in the forums, so I thought I’d post up the answers – kind of like an idiot’s guide to the reasoning behind a lot of the decisions we took on the journey to the Who Can Help Me? solution.

It’s a bit of a long one, but here goes…

How do you decide where to store interfaces? Should there be a separate interfaces project? Why are some interfaces in the Domain project and others in individual projects (e.g. Mapper interfaces are in the Web.Controllers project)?

We try to reinforce a convention over configuration approach throughout the entire solution: follow the conventions (in this case, keep things in the same place) and the IOC registration is frictionless.

The interfaces that are used cross-project (Repositories, Services, Tasks) are stored in the Domain project. The reason is that the Domain project is already referenced by the other projects in the solution, as the domain entities are used all the way from the repositories to the controller layer. The idea is to keep the solution as loosely coupled as possible, minimizing the hard references between projects. For example, the Tasks project does not need to reference the Repositories project even though it uses repositories heavily, because it only talks to repository interfaces, which live in the Domain project. Equally, the Controllers project does not need to reference the Tasks project. Ideally, we wouldn’t need these references anywhere, but the Web project does reference everything needed to run the application, to ensure that all assemblies are copied into the bin directory. Even that could be handled by a custom build task instead, making the solution completely loosely coupled.

The interfaces that are only used within a single project (e.g. mappers in the Controllers project) live alongside the implementations that they describe.

Once these conventions are defined, there are no decisions to make about where to put things. On a micro-scale, this removes the friction and time it takes to add new interfaces to the solution. The Castle Windsor fluent registration exploits these conventions, making IOC registration frictionless, i.e. adding new interfaces and implementations does not require any updates to registration code or configuration.
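For illustration, a convention-based Registrar using Windsor’s fluent API might look something like this (the assembly name and naming convention are invented for the example, not the exact Who Can Help Me? code):

using Castle.MicroKernel.Registration;
using Castle.Windsor;

public class TasksRegistrar
{
    // Registers every class whose name ends in "Tasks" against its first
    // interface; adding a new task class needs no further registration code.
    public void Register(IWindsorContainer container)
    {
        container.Register(
            AllTypes.FromAssemblyNamed("MyApp.Tasks")
                .Where(type => type.Name.EndsWith("Tasks"))
                .WithService.FirstInterface());
    }
}

Because the registration is driven purely by the naming convention, a new ICalendarTasks/CalendarTasks pair would be picked up automatically.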

What is the difference between a ViewModel and a FormModel? Is this really necessary? Why are they not all named ViewModel?

A viewmodel is an object used to bind to a view (i.e. for display purposes) and a formmodel is an object used to bind to a set of form parameters (i.e. when an HTML form is submitted). We’ve found that keeping display and update models separate is definitely preferable, as they’re rarely the same shape and serve different purposes. For example, a formmodel is likely to have validation requirements that a viewmodel won’t. This naming convention works for us – the use of View or Form is explicit enough to describe the purpose of the class. Some people take this a step further and use ViewModel, EditModel, CreateModel and DeleteModel; however, we’ve found this to be overkill in the applications we have built. In a heavy CRUD-style application, it may be preferable.
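To make the split concrete, here is a minimal, hypothetical pair (names invented). The viewmodel carries pre-formatted data out to the view; the formmodel carries validated input back in:

using System.ComponentModel.DataAnnotations;

// Display only: no validation, just what the view needs to render.
public class ProfileViewModel
{
    public string DisplayName { get; set; }
    public string MemberSince { get; set; }  // already formatted for display
}

// Input only: carries the validation rules for the posted form.
public class UpdateProfileFormModel
{
    [Required, StringLength(50)]
    public string DisplayName { get; set; }

    [Required]
    public string Email { get; set; }
}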

Why is MEF being used to determine Castle Windsor registrations?

This is also to keep the solution as loosely coupled as possible and to reduce the friction in adding new components, or groups of components, that need registering in the IOC container. The solution contains a number of Registrars and Initialisers. The Registrars register components in the IOC container (using the convention-based fluent registration) and the Initialisers set something up for the solution (e.g. replace the MVC view engine, configure the validation framework etc.). All of these activities need to happen at application startup, so we need something to orchestrate them.

We could have a component that does this by calling directly into all the Registrars and Initialisers; however, this would tightly couple the Web project to every other layer in the solution, as these components are present in all projects. It also adds a maintenance overhead: every new Registrar or Initialiser requires the central orchestrator to be updated.

The use of MEF avoids this as it treats Registrars and Initialisers as “plugins” that just need to be executed when the application starts. MEF is not performing the IOC registration; it simply finds all the Registrar and Initialiser components that need to do something and calls them. This means there are no hard-coded references from the Web project across all layers of the solution, and adding a new Initialiser is simply a case of adding the new class. If it implements the correct interface and has the [Export] attribute then it will be found and executed. So, by looking at another Initialiser, you have everything you need to know to create a new one – you do not need to trawl the solution trying to find out how they are all wired up.
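A rough sketch of the pattern (the interface and class names are illustrative): each Registrar exports itself via MEF, and a small bootstrapper discovers and executes them at startup:

using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;
using Castle.Windsor;

public interface IComponentRegistrar
{
    void Register(IWindsorContainer container);
}

[Export(typeof(IComponentRegistrar))]
public class MapperRegistrar : IComponentRegistrar
{
    public void Register(IWindsorContainer container)
    {
        // convention-based fluent registration for this project's mappers
    }
}

public static class Bootstrapper
{
    // Scans the bin directory for exported Registrars and runs each one;
    // the Web project never needs a hard reference to the projects they live in.
    public static void RegisterAll(IWindsorContainer container, string binPath)
    {
        var catalog = new DirectoryCatalog(binPath);
        var composition = new CompositionContainer(catalog);

        foreach (var registrar in composition.GetExportedValues<IComponentRegistrar>())
        {
            registrar.Register(container);
        }
    }
}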

All of your Castle registrations are dependent on the use of conventions and assume only one implementation of each interface is used by each component. How do you deal with the case where, for example, two services consume different implementations of an IRepository?

We’ve found that the main benefit of dependency injection and IOC is not the ability to swap out components at runtime, but that it aids the development process by producing a loosely coupled solution that is really easy to test. It is very rare that a registration needs to be made conditionally, or that two implementations of one interface need to coexist. The conventions used here do rely on a one-to-one mapping between contract and implementation; however, this can be changed easily if required.

The fluent registrations can be overridden using a Castle configuration file. When creating the WindsorContainer, passing in a configuration file reference ensures that it overrides any fluent registration.

Alternatively, IHandlerSelectors in Castle Windsor allow for dynamic selection of components at runtime. For more information, see this article from Ayende: http://ayende.com/Blog/archive/2008/10/05/windsor-ihandlerselector.aspx
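As a sketch of the mechanism (the selection rule here is invented), a handler selector gets a veto over which component Windsor resolves:

using System;
using Castle.MicroKernel;

public interface IRepository { }  // hypothetical service interface

public class RepositorySelector : IHandlerSelector
{
    // Only voice an opinion when IRepository is being resolved.
    public bool HasOpinionAbout(string key, Type service)
    {
        return service == typeof(IRepository);
    }

    // Choose between the registered handlers at resolution time,
    // e.g. based on configuration or the current request.
    public IHandler SelectHandler(string key, Type service, IHandler[] handlers)
    {
        return handlers[0];
    }
}

The selector is attached with container.Kernel.AddHandlerSelector(new RepositorySelector());.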

Why are you using the Spark view engine rather than the standard ASP.NET MVC one? What benefits does it bring?

The Spark view engine could be described as a DSL for rendering view markup, whereas the out-the-box WebForms view engine feels more like reusing what’s already available to do the job. The WebForms syntax feels clunky when you require looping or conditional logic, as the angle-bracket syntax breaks up the flow of the HTML, and there is very little support for common view-specific scenarios.

Spark, on the other hand, embeds the logic directly into the HTML markup, meaning the views still look and feel like HTML. As well as the aesthetic benefits, this makes it easier for non-.NET iDevs to grok what’s going on in the view code. The view files are simpler and terser, and rely on a convention-based approach for defaults, so they are quicker to work with. Team development has less friction as iDevs rely less on .NET devs to integrate markup into the solution. There are also a lot of really nice language features, e.g. automatically creating and exposing Index, IsFirst and IsLast variables for use within a loop, which makes formatting lists very easy. It supports automatic HTML encoding of strings and has cached-region support. The rendering logic is super fast, as it is simply writing to a StringBuilder. This also means that Spark can be used as a generic view renderer outside the scope of a web application with an HTTP context, e.g. for rendering tokenized HTML views for an email.
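For a flavour of the syntax (model and property names invented), a list in Spark reads like plain HTML, with the auto-generated loop variables available alongside the iteration variable:

<ul>
  <li each="var person in Model.People" class="first?{personIsFirst}">
    ${personIndex + 1}: ${person.Name}
  </li>
</ul>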

The downsides are that you lose some tooling support – IntelliSense is limited, though support can be added via an installer available from the Spark website, and ReSharper ignores .spark files, so refactoring is more manual. However, the benefits far outweigh these downsides.

http://sparkviewengine.com/

How does the validation work? Looks like there is some client side validation (xVal) and some server side validation (DataAnnotations?). How does this all fit together?

MVC 1.0 provided no out-the-box UI validation mechanism. The only validation support available was the System.ComponentModel.DataAnnotations namespace, which provides some basic validation attributes that can be added to a class. However, there was no validation runner – e.g. something to check whether a class is valid based on its attributes – and no way to surface the rules in the UI (i.e. through JavaScript), so validation was server-side only.

Steve Sanderson’s xVal framework was built to solve this problem, acting as a middleman between server side and client side validation. xVal is not a validation framework itself, but acts as a broker – translating server side validation rules from a number of supported frameworks (Data Annotations, NHibernate Validator…) into a canonical format and then outputting these rules in client side JavaScript for interpretation by one of the supported UI validation frameworks (jQuery.Validator, ASP.NET Ajax Validation).

xVal is initialized in the ValidationInitialiser, which sets up the server-side framework provider to use. An HTML helper method adds the client-side validation rules for a particular class (see the earlier question on FormModels), and referencing the appropriate JavaScript files wires up the UI validation.
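A hypothetical sketch of the server side (reusing the UpdateProfileFormModel from earlier; the validation runner is something you supply – Sanderson’s posts include one for DataAnnotations): a rules failure is caught and copied into ModelState, while one helper call in the view emits the same rules as client-side JavaScript:

using System.Web.Mvc;
using xVal.ServerSide;

public class ProfileController : Controller
{
    [AcceptVerbs(HttpVerbs.Post)]
    public ActionResult Update(UpdateProfileFormModel form)
    {
        try
        {
            // Hypothetical runner that checks the DataAnnotations rules.
            DataAnnotationsValidationRunner.Validate(form);
        }
        catch (RulesException ex)
        {
            // Copy rule violations into ModelState for redisplay.
            ex.AddModelStateErrors(ModelState, "profile");
            return View(form);
        }

        return RedirectToAction("Index");
    }
}

// In the view: <%= Html.ClientSideValidation<UpdateProfileFormModel>("profile") %>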

There are a few moving parts in this solution, but once it’s set up, adding client and server validation to forms is easy – just follow the conventions.

N.B. In MVC 2.0, equivalent functionality to xVal has been rolled into the framework so this becomes largely redundant. xVal would still be useful in scenarios where an alternative validation framework is required e.g. NHibernate.Validator.

http://xval.codeplex.com/

How does the error handling work? What is ELMAH and why are we using it?

ELMAH stands for Error Logging Modules and Handlers and is a really easy way to set up logging for unhandled application errors. It is not a replacement for a logging framework such as Log4Net, as it only deals with unhandled exceptions, but should be used alongside your standard logging and instrumentation for diagnostic purposes. It can be added to a solution without changing a single line of code – simply by referencing the assembly and adding the relevant web.config entries. More advanced configuration is available, and listeners can be added to log to email, file, database, Twitter etc.
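For reference, the core web.config wiring is just one module and one handler (classic-mode entries shown; the exact sections vary by IIS version):

<httpModules>
  <add name="ErrorLog" type="Elmah.ErrorLogModule, Elmah" />
</httpModules>
<httpHandlers>
  <add verb="POST,GET,HEAD" path="elmah.axd" type="Elmah.ErrorLogPageFactory, Elmah" />
</httpHandlers>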

It’s a great tool to have in a solution from day one, as ELMAH can really help during the development process, when bugs are often found. It will log the “yellow screen of death” that was originally seen when the exception was thrown, along with the stack trace, which is invaluable when a system tester raises a bug – they can link to the ELMAH reference in order to diagnose what happened. ELMAH can also be used in a production environment to log exceptions whilst the live application is running, which can help Ops teams diagnose badly performing applications. Viewing the ELMAH logs is easy and can be done directly in the browser via the elmah.axd HTTP handler. This can be locked down using any standard authorization mechanism, restricting access to the error logs if necessary.

http://code.google.com/p/elmah/

Looks like you are only creating mappers to map between single entities (i.e. a Person to a PersonViewModel) and not for mapping collections (i.e. List<Person> to List<PersonViewModel>). The lists seem to be mapped using extension methods (e.g. MapAllUsing). What would you do in the instance where a Trade is mapped to a List<FinancialPosting> etc?

A mapper always returns one output, which could be of type List<T>. In this case, it depends on how the FinancialPostings are created from the Trade. If the Trade contains a list of information that is used to create the list of FinancialPostings, then the mapping may actually be from a child property of the Trade, e.g.

var financialPostings = trade.Transactions.MapAllUsing(mapper);

However, if the Trade object is needed for each FinancialPosting then I would simply have the mapper return a List<T>. This is completely acceptable.

Where one collection is mapped to another, I would recommend using the MapAllUsing() extension method and creating a mapper that maps the single entities. This allows for greater reuse, easier testing and a separation of concerns between the mapping logic and the iteration logic.
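For context, MapAllUsing is just a small extension over IEnumerable<T>; a sketch of the shape (the actual Who Can Help Me? signature may differ slightly):

using System.Collections.Generic;
using System.Linq;

public interface IMapper<TInput, TOutput>
{
    TOutput MapFrom(TInput input);
}

public static class MapperExtensions
{
    // Applies a single-entity mapper across a collection, keeping the
    // iteration logic out of the mapper itself.
    public static IList<TOutput> MapAllUsing<TInput, TOutput>(
        this IEnumerable<TInput> input, IMapper<TInput, TOutput> mapper)
    {
        return input.Select(mapper.MapFrom).ToList();
    }
}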

There seem to be many overloaded IMapper interfaces (inputs 1-5). Why is this? How are these overloads used? What happens if I’m mapping 6 inputs?

This is simply because the calling code has required that many inputs to map to one output. A controller could, for example, gather data from a number of sources in order to display a complex view, so the mapper interface has simply been extended to suit that requirement. Adding more and more inputs to the mapper interface could highlight a code smell, but without context it’s hard to say whether it is a justified trade-off. The benefits of using the IMapper interface are that the mappers are automatically registered in the IOC container and that the MapAllUsing extension method can be used to chain mappers together.
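The overloads just add type parameters; assuming the MapFrom naming from the sketch above, a two-input version looks like this, and mapping six inputs would mean adding the equivalent six-parameter overload (or introducing an intermediate type to group the inputs):

public interface IMapper<TInput1, TInput2, TOutput>
{
    TOutput MapFrom(TInput1 input1, TInput2 input2);
}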

How does AutoMapper work? How do I configure AutoMapper to deal with complex mappings?

AutoMapper is great at mapping one object to another when property names match, or when object hierarchies are to be flattened. This really suits entity -> viewmodel and formmodel -> entity mapping in MVC controllers. It works via reflection, but mapping profiles are cached and a lot of effort has gone into avoiding performance problems.

AutoMapper relies on a convention-based approach to mapping – when property names match, it knows what to do – but these conventions can be overridden in mapping profiles to map non-standard names, ignore specific properties, or provide custom mapping logic for particular properties. In short, AutoMapper is very flexible and can be configured to map however you want; however, its real benefit comes when you stick to the conventions, as mapping objects then becomes frictionless.
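A sketch using the AutoMapper API of the era (type and property names invented): the conventions handle matching names, and ForMember covers the exceptions:

using AutoMapper;

public class Person
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

public class PersonViewModel
{
    public string FullName { get; set; }
    public string AvatarUrl { get; set; }
}

public class PersonMappingProfile : Profile
{
    protected override void Configure()
    {
        // Only the non-conventional members need to be spelled out.
        CreateMap<Person, PersonViewModel>()
            .ForMember(dest => dest.FullName,
                       opt => opt.MapFrom(src => src.FirstName + " " + src.LastName))
            .ForMember(dest => dest.AvatarUrl, opt => opt.Ignore());
    }
}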

For more info see http://automapper.codeplex.com/

Would you ever create a mapper which doesn’t use AutoMapper?

Yes – if mapping one object to another results in a complex mapping profile, I would definitely consider whether using AutoMapper is beneficial. For objects that don’t map by convention, it is often easier to map the properties manually.
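A hand-rolled mapper still fits the same IMapper contract, so it registers and chains exactly like an AutoMapper-backed one (names invented; IMapper as sketched earlier):

using System.Collections.Generic;

public class Trade
{
    public string Reference { get; set; }
    public string Counterparty { get; set; }
    public IList<string> Transactions { get; set; }
}

public class TradeSummaryViewModel
{
    public string Reference { get; set; }
    public string Description { get; set; }
}

public class TradeSummaryMapper : IMapper<Trade, TradeSummaryViewModel>
{
    public TradeSummaryViewModel MapFrom(Trade input)
    {
        // Mapping that would need a fiddly AutoMapper profile is often
        // clearer written by hand.
        return new TradeSummaryViewModel
        {
            Reference = input.Reference,
            Description = input.Counterparty + " (" + input.Transactions.Count + " legs)"
        };
    }
}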

There seem to be a lot more classes than specs. Is this because the code is difficult to test or are the specs for these elsewhere?

Whilst I am a big advocate of test-first development, I’m also pragmatic about what to test. A lot of the code in the Framework project has come out of other projects and has been in use for a long time, so it didn’t feel necessary to retrofit test code around it. Specs were written for nearly all new code created for the purposes of this solution. There are a number of supporting classes, e.g. Registrars and Initialisers, that are not tested, and the Domain entities, ViewModels and FormModels are dumb objects that provide no logic to test.

Why are we using the term Tasks? What is a Task?

The Tasks layer is a logical grouping of business logic that provides an application boundary in the solution. Anything above the Tasks layer is deemed presentation logic and anything below is deemed reusable application logic. The Sharp Architecture framework originally called this layer the Application Services layer; however, this led to confusion in teams over the use of the word Service. We found that as soon as we started talking about our Application Services layer as a Tasks layer, things started to make more sense.

What is the granularity of each layer?

Web

The web application project contains application initialization and pure presentation assets: view files, images, CSS etc. The only code at this layer is HTML-helper-style code for use in view markup, plus any necessary Initialisers.

Web.Controllers

Controllers are deemed presentation logic: they are MVC-specific and govern the “flow” of the application, i.e. they handle user inputs (the HTTP GET and POST requests made to the application) and return the correct response (HTML, JavaScript, HTTP redirects, HTTP error codes etc.). Controllers should hand off to the Tasks layer as soon as possible to “do stuff” and should contain as little logic as possible (see the sketch after this list). This project is as high as the Domain entities go, so Controllers contains the Mappers needed to convert Entities to ViewModels and FormModels to Entities.

Tasks

The tasks layer contains the application business logic. It talks to repositories for retrieving and updating entities and to external services where necessary. It performs server side validation on entities yet is still persistence ignorant.

Infrastructure

The infrastructure project is where data access happens and real services are called. This could be via NHibernate, a web service or another specific implementation. For external services, mapping may be needed between entities and third party types, so there may be mappers present for this purpose.

Domain

Contains all domain entities, value types etc – the business model. Also contains the interfaces for the various other cross-project layers e.g. Tasks, Repositories and Infrastructure. The Domain project is used all the way from the Infrastructure layer up to the Controllers.

Framework

Supporting low level and utility code e.g. logging, caching, extension methods etc. Can be used throughout the solution.

Specs

BDD style tests for the entire application logic.
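To make the flow concrete, here is a hypothetical slice through the layers (all names invented; IMapper and the models as sketched earlier). The controller depends only on a task interface from Domain, hands off immediately, maps the result and returns a viewmodel:

using System.Web.Mvc;

// Lives in Domain: the cross-project contract for the Tasks layer.
public interface IPersonTasks
{
    Person GetProfile(string userName);
}

// Lives in Web.Controllers: pure flow – delegate, map, respond.
public class PersonController : Controller
{
    private readonly IPersonTasks personTasks;
    private readonly IMapper<Person, PersonViewModel> mapper;

    public PersonController(IPersonTasks personTasks,
                            IMapper<Person, PersonViewModel> mapper)
    {
        this.personTasks = personTasks;
        this.mapper = mapper;
    }

    public ActionResult Show(string userName)
    {
        var person = personTasks.GetProfile(userName);
        return View(mapper.MapFrom(person));
    }
}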

What BDD/Test frameworks are being used? (MSpec?) Why? Is this going to be the standard?

MSpec is being used as the BDD framework of choice. This has come down to the personal preference of the teams that have worked with it; however, here are my personal views on why I would pick MSpec:

There seem to be two styles of BDD testing currently in use within the software development community:

The first is the set of frameworks that aim to produce unit tests in a BDD-style syntax in C#. These could be described as an “internal DSL” for BDD testing. They tend to have unit test runner support from within Visual Studio, build tasks for integrating into CI processes, and produce some form of human-readable output, either in HTML format or within Visual Studio.

The second is the set of frameworks aimed more at QAs or BAs, providing a non-technical, scenario-based language for defining BDD-style specifications. These could be described as an “external DSL” for BDD testing. These specs tend to be written in text files and suit high-level, user-story-style specifications.

On a very crude level, the first set of frameworks suits how developers work and the second suits QA and business people. A project would benefit from both styles of testing – at a unit level within an automated CI process, and at a UI level via an automated UI testing tool, e.g. Selenium running the scripts. For the first scenario, I believe that MSpec is the most mature, fully-featured framework, with the best integration into Visual Studio and the biggest community using it – i.e. it’s the best choice for developers.

This solution does not deal with the latter scenario, purely because we have not had that kind of input into the project. I would see Cucumber-esque BDD-style testing as an addition to the MSpec coverage, rather than a replacement.
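For a flavour of the first style, a minimal MSpec spec (names invented) reads almost like prose, and the class and field names become the human-readable output:

using Machine.Specifications;

[Subject("Profile display")]
public class when_a_profile_page_is_requested
{
    Establish context = () =>
    {
        // arrange: stub the task layer, construct the controller
    };

    Because of = () =>
    {
        // act: invoke the controller action
    };

    It should_return_the_profile_view = () =>
    {
        // assert on the ActionResult
    };
}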

What is the ServiceLocatorHelper and how/when is it used? Tests only?

Yes – the ServiceLocatorHelper exists to support testing, providing an easy way to register the components that the code under test will resolve from the Service Locator.
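The kind of thing it wraps, as a sketch (the helper’s actual API may differ): point the Common Service Locator at a throwaway Windsor container holding whatever stubs the code under test will resolve:

using Castle.MicroKernel.Registration;
using Castle.Windsor;
using CommonServiceLocator.WindsorAdapter;
using Microsoft.Practices.ServiceLocation;

public static class TestServiceLocator
{
    // Registers a stub in a fresh container and makes it the ambient
    // ServiceLocator for the duration of a spec.
    public static void RegisterStub<TService>(TService stub) where TService : class
    {
        var container = new WindsorContainer();
        container.Register(Component.For<TService>().Instance(stub));
        ServiceLocator.SetLocatorProvider(() => new WindsorServiceLocator(container));
    }
}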

Why is IWindsorContainer being passed around the application? Isn’t the point of having a ServiceLocator that we can easily switch out IoC implementations?

No, the point isn’t really that we can switch out IOC implementations, but that we can share the container with anything else that uses the ServiceLocator. So, if a third-party product is introduced into the solution and it uses the ServiceLocator to add and retrieve components, we could potentially add to, remove from and access that list of components via the ServiceLocator too – i.e. the container is abstracted from the specific implementation. We’re not really exploiting this behavior yet, and you’re right that we’re somewhat hard-coding the use of WindsorContainer in the solution. If and when we meet this requirement, this area will probably need to be revisited.