Integrating N2CMS into Who Can Help Me? Part 5


This is part 5 in my series of posts about integrating the .Net open source content management system N2CMS into the Sharp Architecture demo application Who Can Help Me?.

So far, I’ve

  • Added the N2 assemblies
  • Added the edit site to the web project
  • Created the initial content definitions
  • Created the database schema
  • Added root nodes to the CMS database
  • Updated the Web.config
  • Set up the N2 Content Routes and initialised the CMS engine
  • Added a base CMS controller
  • Updated the HomeController to be a CMS controller
  • Tackled authentication with Open ID and the N2 membership providers
  • Built up the HomePage content definition
  • Updated the HomeController, the view model, mappers and the view

We’ve now got content from inside N2 displaying on the Home page of the site, which is excellent, and I could stop there. However, there are just a couple more things I want to cover off before I wrap this series up.

N.B. A lot of this code is going to be out of date very soon! WCHM is still running against MVC 1.0 and my branch is running off an old version of N2CMS. Both the Sharp Architecture project and N2CMS are currently being updated for MVC 2.0 support, and I guess I will have to do the same as soon as I find the time, but to keep things simple I’m going to stick with the codebase as it is for now.

I want to focus on adding more CMS definitions this time and creating a dynamic navigation that is populated from the CMS tree.

Step 18 – Reworking the About page

Our site only has one content-managed page – the Home page. That’s nice, but what if we wanted to add a new page to the site through N2? That’s the point of content management, after all! Well, at the moment, nothing much would happen, as all our site navigation is hard-coded in the Menu.spark view file. I want to be able to add new pages as I like and have them appear in the navigation dynamically. The WCHM site already has an About page, which is essentially a generic text page, so I’m going to rework it into an N2 TextPage definition.

I actually added a TextPage definition way back in Changeset 54950, which we can use for a generic page in the site. To recap, here’s the definition:

[PageDefinition("Text Page",
    Description = "A text page.",
    SortOrder = 700,
    IconUrl = "~/edit/img/ico/png/page.png")]
[WithEditableTitle("Page Title", 5, Focus = true, ContainerName = Tabs.Content)]
[WithEditableName(ContainerName = Tabs.Content)]
[RestrictParents(typeof(HomePage), typeof(TextPage))]
public class TextPage : AbstractPage
{
    /// <summary>
    /// Gets or sets BodyText.
    /// </summary>
    /// <value>
    /// The body text.
    /// </value>
    [EditableFreeTextAreaAttribute("Body Text", 100, ContainerName = Tabs.Content,
        HelpText = "Set the body text for the page")]
    public string BodyText
    {
        get { return (string)GetDetail("BodyText"); }
        set { SetDetail("BodyText", value); }
    }
}

We’re restricting this definition so it’s only available under the Home page or another text page, and giving the user the ability to edit one property – the body text, a string property that will be edited using a WYSIWYG HTML editor.

As this is already part of the solution, when I right click on the HomePage in the N2 site tree from within the admin interface and select New, I have the option to create a new TextPage:

[Screenshot: creating a new TextPage from the N2 site tree]

I can populate some data into the new about page:

[Screenshot: populating the new About page in the N2 edit interface]

And save the page, which means it will now appear in the content tree:

[Screenshot: the About page appearing in the content tree]

As yet, we have no controller that handles the TextPage definition, and no view to render it. However, because we have an existing non-CMS controller called AboutController, selecting the About page in the CMS tree requests the URL /about, which invokes that controller, so we still see the existing content.

If we add a new TextController to handle the TextPage definition, along with an associated view (in a similar fashion to the HomeController and Index view for the home page), the /about route will be handled by N2 instead, displaying our new CMS content.

[Controls(typeof(TextPage))]
public class TextController : N2Controller<TextPage>
{
    private readonly ITextPageViewModelMapper textPageViewModelMapper;

    public TextController(ITextPageViewModelMapper textPageViewModelMapper)
    {
        this.textPageViewModelMapper = textPageViewModelMapper;
    }

    public override ActionResult Index()
    {
        return View(textPageViewModelMapper.MapFrom(CurrentItem));
    }
}

Here’s the associated mapper class – note that as it’s inheriting from the base page mapper class, which uses AutoMapper, we don’t actually need to add any mapping logic:

public class TextPageViewModelMapper : BasePageViewModelMapper<TextPage, TextPageViewModel>,
                                       ITextPageViewModelMapper
{
    public TextPageViewModelMapper(IPageViewModelBuilder pageViewModelBuilder) : base(pageViewModelBuilder)
    {
    }
}

This is because the TextPageViewModel has properties with the same names as the TextPage definition, so AutoMapper can just wire them up:

public class TextPageViewModel : PageViewModel
{
    public string BodyText { get; set; }

    public string Title { get; set; }
}
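As an aside, the BasePageViewModelMapper itself isn’t shown in this post. Here’s a rough sketch of what it plausibly does – treat the IPageViewModelBuilder method name (Update) as hypothetical, but the core idea is that the generic type parameters drive a convention-based AutoMapper map:

public abstract class BasePageViewModelMapper<TInput, TOutput> : IMapper<TInput, TOutput>
    where TInput : AbstractPage
    where TOutput : PageViewModel
{
    private readonly IPageViewModelBuilder pageViewModelBuilder;

    protected BasePageViewModelMapper(IPageViewModelBuilder pageViewModelBuilder)
    {
        this.pageViewModelBuilder = pageViewModelBuilder;

        // Register the convention-based map for this closed generic type.
        Mapper.CreateMap<TInput, TOutput>();
    }

    public virtual TOutput MapFrom(TInput input)
    {
        // AutoMapper copies the like-named properties across...
        var viewModel = Mapper.Map<TInput, TOutput>(input);

        // ...and the builder fills in the shared page-level properties
        // (hypothetical method name).
        this.pageViewModelBuilder.Update(viewModel, input);

        return viewModel;
    }
}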

And finally, here’s the Spark view:

<viewdata model="Text.Model.TextPageViewModel"/>

<content name="title">
 ${Model.Title}
</content>

!{Model.BodyText}

(pretty terse don’t you think!?)

So, now when we navigate to the /about URL, we see our CMS content:

[Screenshot: the CMS content displayed at /about]

Changeset reference 57978

Step 19 – Updating the Navigation

OK, so now we can add as many pages of type TextPage in the CMS as we like. However, our navigation menu is still hard-coded to write out the original links. We need to make this dynamic so that as we add new pages, our navigation menu stays in sync.

We already have a NavigationController that is being called via the RenderAction method to render out the menu, so we need to update this and the associated view model to get the links from the CMS.

I start by updating the specification for the NavigationController by adding the following specs:

It should_ask_the_cms_tasks_for_the_navigation_items = () =>
    cms_tasks.AssertWasCalled(x => x.GetNavigationItems());

It should_set_the_list_of_cms_links_correctly = () =>
    result.Model<MenuViewModel>().CmsLinks.Count.ShouldEqual(3);

This leads me to create a new ICmsTasks interface and to update the MenuViewModel with a list of CmsLinks, which I created as a new LinkViewModel type, as it’s something I might re-use elsewhere.

public interface ICmsTasks
{
    IList<ContentItem> GetNavigationItems();
}

public class MenuViewModel
{
    public MenuViewModel()
    {
        CmsLinks = new List<LinkViewModel>();
    }

    public bool IsLoggedIn { get; set; }

    public List<LinkViewModel> CmsLinks { get; set; }
}
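The LinkViewModel itself isn’t shown here, but given how it’s consumed in the Menu.spark view later on (cmsLink.Url and cmsLink.Title), a minimal version is just:

public class LinkViewModel
{
    public string Title { get; set; }

    public string Url { get; set; }
}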

My new specifications fail until I update the NavigationController to call into the new ICmsTasks:

public class NavigationController : BaseController
{
    private readonly IIdentityTasks identityTasks;
    private readonly ICmsTasks cmsTasks;
    private readonly ILinkViewModelMapper linkViewModelMapper;

    public NavigationController(IIdentityTasks identityTasks, ICmsTasks cmsTasks, ILinkViewModelMapper linkViewModelMapper)
    {
        this.identityTasks = identityTasks;
        this.cmsTasks = cmsTasks;
        this.linkViewModelMapper = linkViewModelMapper;
    }

    public ActionResult Menu()
    {
        return View(
            string.Empty,
            string.Empty,
            new MenuViewModel
            {
                IsLoggedIn = this.identityTasks.IsSignedIn(),
                CmsLinks = this.cmsTasks.GetNavigationItems().MapAllUsing(this.linkViewModelMapper)
            });
    }
}

So that’s all great, but now we need to create and implement our real CmsTasks layer. There are various ways we could implement the GetNavigationItems() functionality – we could add an extra property such as ShowInNavigation to the base page class and retrieve only the items that have it set, or we could dream up something more elaborate. However, for the purposes of this demo, I’m just going to return all the top-level items, that is, all the pages under the Home page. This logic would need updating if you want to show nested items or a more context-specific navigation menu. There’s loads of help in the N2 samples and forums about all of this.
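For illustration only – this isn’t something I’m adding in this changeset – such a flag could be a single editable property on the base page using N2’s EditableCheckBox attribute (the exact attribute arguments may differ between N2 versions):

// Illustrative sketch only - an opt-in navigation flag on the base page.
[EditableCheckBox("Show in navigation", 120)]
public virtual bool ShowInNavigation
{
    get { return (bool)(GetDetail("ShowInNavigation") ?? true); }
    set { SetDetail("ShowInNavigation", value); }
}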

What’s great about integrating N2 and WCHM is that both projects use Castle Windsor as the IOC container. When we initialise the N2 engine, we pass in our existing WindsorContainer, which means that at run time we can pull any N2 dependencies we need out of the container. Because of this, our CmsTasks can treat N2 as just a repository of data, depending on any of the N2 interfaces for accessing content. These dependencies will be injected automatically at runtime, meaning our solution is loosely coupled and we can test our tasks easily. Here are my specs for the CmsTasks:

public abstract class specification_for_cms_tasks : Specification<ICmsTasks, CmsTasks>
{
    protected static IUrlParser n2Repository;

    Establish context = () =>
    {
        n2Repository = DependencyOf<IUrlParser>();
    };
}

public class when_the_cms_tasks_is_asked_for_the_navigation_items : specification_for_cms_tasks
{
    static IList<ContentItem> result;

    Establish context = () =>
        {
            n2Repository.Stub(x => x.StartPage).Return(An<HomePage>());
            n2Repository.StartPage.Children = new List<ContentItem>{new TextPage(), new TextPage()};
        };

    Because of = () => result = subject.GetNavigationItems();

    It should_ask_the_cms_repository_for_all_the_top_level_content_items = () =>
        result.Count.ShouldEqual(2);
}

In this case, I’m depending on N2’s IUrlParser, which gives me direct access to the StartPage content item – my site’s Home page. From here, I can easily get the list of child content items.

Another good N2 interface is IItemFinder, which gives you access to the fluent, LINQ-style N2 syntax for finding content items using filters etc. Take a while to look at the N2 codebase and you’ll see that you don’t need to tightly couple your app to the static N2.Find… API, which would force you to create fake contexts etc. for testing purposes.

Our actual CmsTasks implementation is then as follows:

public class CmsTasks : ICmsTasks
{
    private readonly IUrlParser n2UrlParser;

    public CmsTasks(IUrlParser n2UrlParser)
    {
        this.n2UrlParser = n2UrlParser;
    }

    public IList<ContentItem> GetNavigationItems()
    {
        return n2UrlParser.StartPage.Children;
    }
}
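One piece the post glosses over is the ILinkViewModelMapper that the NavigationController depends on. Since N2’s ContentItem exposes Title and Url properties directly, a minimal sketch is a straight property copy:

public class LinkViewModelMapper : ILinkViewModelMapper
{
    public LinkViewModel MapFrom(ContentItem input)
    {
        return new LinkViewModel
                   {
                       Title = input.Title,
                       Url = input.Url
                   };
    }
}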

All that’s left now is to update the Menu.spark view. I’ve removed the hard-coded reference to the About page and added a loop of <li> tags over the list of CmsLinks:

<ul>
    <li class="leaf first">!{Html.ActionLink<WhoCanHelpMe.Web.Controllers.Home.HomeController>(x => x.Index(), "Home")}</li>
    <li>!{Html.ActionLink<WhoCanHelpMe.Web.Controllers.Profile.ProfileController>(x => x.Update(), "You!")}</li>
    <li if="Model.IsLoggedIn" class="leaf last">!{Html.ActionLink<WhoCanHelpMe.Web.Controllers.User.UserController>(x => x.SignOut(), "Sign Out")}</li>
  <!-- Loop over the cms links-->
  <li each="var cmsLink in Model.CmsLinks">
    <a href="${cmsLink.Url}">${cmsLink.Title}</a>
  </li>
</ul>

Note that I’m not using the MVC HtmlHelper extension methods to build up the hyperlinks. This is simply because they’re not going to work in a CMS context. Remember that although we’re pointing to a TextController in this case, our pages are actually called something else (About) and it’s that name which will appear in the URL. So, this breaks away from the standard {controller}/{action} routing convention that MVC uses. The N2 ContentRoute handles all this for us, but it means we lose some of the out-of-the-box MVC support – e.g. the HtmlHelper methods. I think the N2 project has implemented a bunch of its own extension methods for this, but I don’t really like adding that to my views as it feels wrong – accessing entities from data stores in the view isn’t something I want to do. I prefer my approach of accessing the N2 data further down the stack and converting it to dumb view models before passing them to the view.

OK, so now we’re ready to add more pages. Every page that we add under the site root (the Home page) will get a new link in the navigation menu. I’ve added a few new pages here:

[Screenshot: the navigation menu showing the newly added pages]

Changeset reference 57979

Summary

Hopefully I’ve shown here how I’d go about adding CMS-driven navigation to my custom application. Remember – the point behind all of this is that what we’re building is not just a pure CMS site. If this site were pure content, entirely driven from a CMS, then we could make things a whole lot simpler. However, experience tells me that most applications I build are more of the “custom web app” variety, with content management as a supporting function. This is where N2 really comes into its own, as it’s really easy to integrate into any form of .NET application.

Next Time

So I’ve just got one more thing to cover off before I wrap this up, and that’s caching. This is important in a CMS-backed application, as there’s a lot of dynamic, changeable data and you need to find the right balance between performance and presenting the most up-to-date information. I’ll go through some of the approaches I’ve taken in the past.

Cheers

Who Can Help Me – Q & A


I’ve recently been spending some time working on removing the friction in project start-up, by producing a skeleton solution for an ASP.NET MVC application, based on all the goodness that we’ve thrown into Who Can Help Me?. I ran this by my colleague Simon Brown and asked him to pull it apart and ask any questions that came to mind.

The result is quite a good Q&A session that relates directly to the Who Can Help Me? solution. These are the kinds of questions that people may not feel like asking in the forums, so I thought I’d post up the answers – something like an idiot’s guide to the reasoning behind a lot of the decisions we took on the journey to the Who Can Help Me? solution.

It’s a bit of a long one, but here goes…

How do you decide where to store interfaces? Should there be a separate interfaces project? Why are some interfaces in the Domain project and others in individual projects (e.g. Mapper interfaces are in the Web.Controllers project)?

We try to reinforce a convention-over-configuration approach throughout the entire solution; following the conventions (in this case, keeping things in consistent places) makes the IOC registration frictionless.

The interfaces that are used cross-project (Repositories, Services, Tasks) are stored in the Domain project. The reason for this is that the Domain project is already referenced by the other projects in the solution, as the domain entities are used all the way from the repositories to the controller layer. The idea is to keep the solution as loosely coupled as possible, minimizing the hard references between projects. For example, the Tasks project does not need to reference the Repositories project even though it uses repositories heavily, as it just talks to repository interfaces, which live in the domain layer. Equally, the Controllers project does not need to reference the Tasks project. Ideally, we wouldn’t need these references anywhere, but the Web project does reference everything needed to run the application to ensure that everything is copied into the bin directory. This could instead be achieved using a custom build task, making the solution completely loosely coupled.
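To illustrate the dependency direction (all names here are hypothetical), the contract lives in Domain and the implementation lives in Infrastructure, so Tasks and Infrastructure never need to reference each other:

// Domain project - referenced by everything above and below.
namespace WhoCanHelpMe.Domain.Contracts.Repositories
{
    public interface IPersonRepository
    {
        Person GetByUserName(string userName);
    }
}

// Infrastructure project - references Domain, not the other way round.
namespace WhoCanHelpMe.Infrastructure.Repositories
{
    public class PersonRepository : IPersonRepository
    {
        public Person GetByUserName(string userName)
        {
            // NHibernate (or other data access) query goes here.
            throw new System.NotImplementedException();
        }
    }
}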

The interfaces that are used within a project (e.g. mappers in the controllers project) live with the implementations that they describe. These interfaces are not used cross-project and only referenced within a project.

Once these conventions are defined, they remove decisions about where to put things. On a micro scale, this cuts the friction and time it takes to add new interfaces to the solution. The Castle Windsor fluent registration exploits these conventions, making IOC registration frictionless: adding new interfaces and implementations does not require any updates to registration code or configuration.
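As a sketch of what that convention-based registration looks like with the Windsor fluent API of the time (the class and convention names here are indicative, not the actual WCHM registrar code):

public class TasksRegistrar
{
    public void Register(IWindsorContainer container)
    {
        // Register every concrete class whose name ends in "Tasks" against
        // its first interface - new task classes are picked up automatically.
        container.Register(
            AllTypes.FromAssembly(Assembly.GetExecutingAssembly())
                .Where(type => type.Name.EndsWith("Tasks"))
                .WithService.FirstInterface());
    }
}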

What is the difference between a ViewModel and a FormModel? Is this really necessary? Why are they not all named ViewModel?

A viewmodel is an object used to bind to a view (i.e. for display purposes) and a formmodel is an object used to bind to a set of form parameters (i.e. when an HTML form is submitted). We’ve found that keeping display and update models separate is definitely preferable, as they’re rarely the same and serve different purposes. For example, a formmodel is likely to have validation requirements that a viewmodel won’t. This naming convention works – the use of “view” or “form” is explicit enough to describe the purpose of the class. Some people take this a step further and use ViewModel, EditModel, CreateModel and DeleteModel; we’ve found this to be overkill in the applications we have built, though in a heavily CRUD-style application it may be preferred.
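A hypothetical pair of models illustrates the split – the viewmodel is shaped for display, while the formmodel is shaped for the incoming POST and carries the validation rules:

public class ProfileViewModel
{
    public string DisplayName { get; set; }

    // Pre-formatted for display - no validation needed here.
    public string MemberSince { get; set; }
}

public class UpdateProfileFormModel
{
    [Required(ErrorMessage = "Please enter your name")]
    public string Name { get; set; }

    [StringLength(500)]
    public string Biography { get; set; }
}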

Why is MEF being used to determine Castle Windsor registrations?

This is also to keep the solution as loosely coupled as possible and to reduce friction in adding new components or groups of components that need registering in the IOC container. The solution contains a number of Registrars and Initialisers. The Registrars are used for registering components in the IOC container (using the convention based fluent registration) and the Initialisers are used to set something up for the solution (e.g. replace the MVC view engine, configure the validation framework etc). All of these activities need to happen at application startup so we need something to orchestrate this activity.

We could have a component that does this by calling directly into all the Registrars and the Initialisers, however this tightly couples the Web project to all other layers in the solution as these components are present in all projects. Also, it adds an overhead in maintenance if a new Registrar or Initialiser is added as the central orchestrator needs to be updated.

The use of MEF avoids this, as it treats Registrars and Initialisers as “plugins” that just need to be executed when the application starts. MEF is not performing the IOC registration; it simply finds all the Registrar and Initialiser components that need to do something and calls them. This means there are no hard-coded references from the Web project across all layers of the solution, and adding a new Initialiser is simply a case of adding the new class. If it implements the correct interface and has the [Export] attribute then it will be found and executed. So, by looking at another Initialiser, you have everything you need to know about how to create a new one – you do not need to trawl the solution trying to find out how they are all wired up.
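A sketch of the pattern being described (the IComponentRegistrar interface name is indicative): MEF discovers anything carrying the [Export] attribute, and the startup code simply executes each one in turn.

[Export(typeof(IComponentRegistrar))]
public class MapperRegistrar : IComponentRegistrar
{
    public void Register(IWindsorContainer container)
    {
        // Convention-based registration for this component group only -
        // MEF found this class, Windsor does the actual registration.
        container.Register(
            AllTypes.FromAssembly(Assembly.GetExecutingAssembly())
                .Where(type => type.Name.EndsWith("Mapper"))
                .WithService.FirstInterface());
    }
}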

All of your Castle registrations are dependent on the use of conventions and assume only one implementation of each interface is used by each component. How do you deal with the case for example where two services consume a different implementation of an IRepository?

We’ve found the main benefit of using dependency injection and IOC is not the ability to swap out components at runtime, but that it aids the development process by producing a loosely coupled solution that is really easy to test. It is a very rare requirement that a registration needs to occur based on a condition, with two possible implementations of one interface. The conventions used here do rely on a one-to-one mapping between contract and implementation; however, this can be changed easily if required.

The fluent registrations can be overridden by using a Castle configuration file. When creating the WindsorContainer, passing in a configuration file reference will ensure that this overrides any registration.

Alternatively, the use of IHandlerSelectors in Castle Windsor can allow for dynamic selection of components at runtime. For more information, see this article from Ayende http://ayende.com/Blog/archive/2008/10/05/windsor-ihandlerselector.aspx

Why are you using the Spark view engine rather than the standard ASP.NET MVC one? What benefits does it bring?

The Spark view engine could be described as a DSL for rendering view markup, whereas the out-of-the-box WebForms view engine feels more like reusing what was already available to do the job. The WebForms syntax feels clunky when you require looping or conditional logic, as the angle-bracket syntax breaks up the flow of the HTML. There is also very little support for common view-specific scenarios.

Spark, on the other hand, embeds logic directly into the HTML markup, meaning the views still look and feel like HTML. As well as the purely aesthetic benefits, this makes it easier for non-.NET iDevs to grok what’s going on in the view code. The view files are simpler and terser, and rely on a convention-based approach for defaults, meaning they are quicker to work with. Team development has less friction as iDevs rely less on .NET devs to integrate markup into a solution. There are also a lot of really nice language features, e.g. automatically creating and exposing Index, IsFirst and IsLast variables for use within a loop, which makes formatting lists very easy. It supports automatic HTML encoding of strings and has cached-region support. The rendering logic is super fast as it is simply writing to a StringBuilder. This also means that Spark can be used as a generic view renderer outside the scope of a web application with an HTTP context, e.g. for rendering tokenized HTML views for an email.
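For example, the implicit loop variables in action – for each="var link in …", Spark exposes linkIndex, linkIsFirst and linkIsLast automatically (an illustrative fragment, with made-up model properties):

<ul>
  <li each="var link in Model.Links">
    <span if="linkIsFirst">» </span>
    ${linkIndex}: ${link.Title}
  </li>
</ul>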

The downsides are that you lose some tooling support – intellisense is limited but can be achieved through an installer available from the Spark website. ReSharper ignores .spark files so refactoring is more manual, however the benefits far outweigh these downsides.

http://sparkviewengine.com/

How does the validation work? Looks like there is some client side validation (xVal) and some server side validation (DataAnnotations?). How does this all fit together?

MVC 1.0 provided no out-of-the-box UI validation mechanism. The only validation support available was the System.ComponentModel.DataAnnotations namespace, which provides some basic validation attributes that can be added to a class. However, there is no validation runner (e.g. to check whether a class is valid based on its attributes) and no way to surface this validation in the UI through JavaScript, so validation is server-side only.

Steve Sanderson’s xVal framework was built to solve this problem, acting as a middleman between server side and client side validation. xVal is not a validation framework itself, but acts as a broker – translating server side validation rules from a number of supported frameworks (Data Annotations, NHibernate Validator…) into a canonical format and then outputting these rules in client side JavaScript for interpretation by one of the supported UI validation frameworks (jQuery.Validator, ASP.NET Ajax Validation).

xVal is initialized in the ValidationInitialiser, which sets up the server-side framework provider to use. An HTML helper method adds the client-side validation rules based on a particular class (see the earlier question re: FormModels), and referencing the appropriate JavaScript files wires up the UI validation.

There are a few moving parts in this solution, but once it’s set up, adding client- and server-side validation to forms is easy – by following the conventions.
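From memory, the view-side wiring is a single helper call per form – the exact xVal helper signature may vary by version, and the formmodel type here is the hypothetical one from earlier, so treat this as indicative:

<!-- Emits the client-side rules derived from the formmodel's attributes -->
!{Html.ClientSideValidation<UpdateProfileFormModel>("profile")}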

N.B. In MVC 2.0, equivalent functionality to xVal has been rolled into the framework so this becomes largely redundant. xVal would still be useful in scenarios where an alternative validation framework is required e.g. NHibernate.Validator.

http://xval.codeplex.com/

How does the error handling work? What is ELMAH and why are we using it?

ELMAH stands for Error Logging Modules and Handlers and is a really easy way to set up logging for unhandled application errors. It is not a replacement for a logging framework such as log4net, as it only deals with unhandled exceptions; it should be used alongside your standard logging and instrumentation for diagnostic purposes. It can be added to a solution without changing a single line of code – simply by referencing the assembly and adding the relevant web.config entries. More advanced configuration is available, and listeners can be added to log to email, file, database, Twitter etc.
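The minimal wiring looks roughly like this (module and handler types quoted from memory – check the ELMAH docs for your IIS hosting model):

<httpModules>
  <add name="ErrorLog" type="Elmah.ErrorLogModule, Elmah" />
</httpModules>

<httpHandlers>
  <add verb="POST,GET,HEAD" path="elmah.axd" type="Elmah.ErrorLogPageFactory, Elmah" />
</httpHandlers>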

It’s a great tool to have in a solution from day one, as ELMAH can really help during the development process when bugs are found. It logs the “yellow screen of death” originally seen when the exception was thrown, along with the stack trace, which is invaluable when a system tester raises a bug – they can link to the ELMAH reference in order to diagnose what happened. ELMAH can also be used in a production environment to log exceptions whilst the live application is running. This can help Ops teams diagnose badly performing applications. Viewing the ELMAH logs is easy and can be done directly in the browser via the elmah.axd HTTP handler. This can be locked down using any standard authorization mechanism, restricting access to the error logs if necessary.

http://code.google.com/p/elmah/

Looks like you are only creating mappers to map between single entities (i.e. a Person to a PersonViewModel) and not for mapping collections (i.e. List<Person> to List<PersonViewModel>). The lists seem to be mapped using extension methods (e.g. MapAllUsing). What would you do in the instance where a Trade is mapped to a List<FinancialPosting> etc?

A mapper always returns one output, which could be of type List<T>. In this case, it depends on how the FinancialPostings are created from the Trade. If the Trade contains a list of information that is used to create the list of FinancialPostings, then the mapping is actually from a child property of the Trade, e.g.

var financialPostings = trade.Transactions.MapAllUsing(mapper);

However, if the Trade object is needed for each FinancialPosting then I would simply have the mapper return a List<T>. This is completely acceptable.

Where one collection is mapped to another, I would recommend using the MapAllUsing() extension method and creating a mapper that maps the single entities. This allows for greater reuse, easier testability and a separation of concerns between the mapping logic and the iteration logic.
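MapAllUsing itself isn’t shown in this Q&A; it boils down to a tiny extension method along these lines (a sketch, assuming the IMapper interface exposes MapFrom as seen elsewhere in the solution):

public static class MapAllUsingExtensions
{
    // Applies a single-entity mapper across a collection, keeping the
    // iteration logic separate from the mapping logic.
    public static List<TOutput> MapAllUsing<TInput, TOutput>(
        this IEnumerable<TInput> items,
        IMapper<TInput, TOutput> mapper)
    {
        return items.Select(mapper.MapFrom).ToList();
    }
}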

There seem to be many overloaded IMapper interfaces (inputs 1-5). Why is this? How are these overloads used? What happens if I’m mapping 6 inputs?

This is simply because the calling code has required this many inputs to map to one output. A controller could for example get data from a number of sources in order to display a complex view so the mapper interface has just been updated to suit this requirement. Adding more and more inputs to the mapper interface could highlight a code smell, but without context it’s hard to say whether this is a justified trade off. The benefits of using the IMapper interface are that the mappers are automatically registered into the IOC container and that the MapAllUsing extension method can be used to chain mappers together.

How does auto-mapper work? How do I configure auto-mapper to deal with complex mappings?

AutoMapper works great at mapping one object to another when property names match, or when object hierarchies are to be flattened. This really suits entity -> viewmodel and formmodel -> entity mapping in MVC controllers. It works via reflection, but mapping profiles are cached and a lot of effort has been made to avoid performance problems.

AutoMapper relies on a convention-based approach to mapping – i.e. when property names match it knows what to do – but these conventions can be overridden in mapping profiles to map non-standard names, ignore specific properties, or provide custom mapping logic for particular properties. In short, AutoMapper is very flexible and can be configured to map however you want, but its real benefit comes when you stick to the conventions, as mapping objects then becomes frictionless.
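For example, overriding the conventions with the (then static) AutoMapper configuration API – the property names here are hypothetical:

// Everything else still maps by matching name; these two members are special-cased.
Mapper.CreateMap<Person, PersonViewModel>()
    .ForMember(dest => dest.FullName,
               opt => opt.MapFrom(src => src.FirstName + " " + src.LastName))
    .ForMember(dest => dest.AvatarUrl,
               opt => opt.Ignore());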

For more info see http://automapper.codeplex.com/

Would you ever create a mapper which doesn’t use auto-mapper?

Yes – if mapping one object to another results in setting up a complex mapping profile with AutoMapper, I would definitely consider whether using it is beneficial. For objects that don’t suit convention-based mapping, it is often easier to map the properties manually.

There seem to be a lot more classes than specs. Is this because the code is difficult to test or are the specs for these elsewhere?

Whilst I am a big advocate of test-first development, I’m also pragmatic when it comes to what to test. A lot of the code in the Framework project has come out of other projects and has been in use for a long time, so it didn’t feel necessary to retrofit test code around it. Specs were written for nearly all new code created for the purposes of this solution. There are a number of supporting classes, e.g. Registrars and Initialisers, that are not tested, and the Domain entities, ViewModels and FormModels are dumb and contain no logic.

Why are we using the term Tasks? What is a Task?

The Tasks layer is a logical grouping of business logic that provides an application boundary in the solution. Anything above the Tasks layer is deemed presentation logic and anything below is deemed reusable application logic. The Sharp Architecture framework originally called this layer the Application Services; however, this led to confusion in teams over the use of the word Service. We found that as soon as we started talking about our Application Services layer as a Tasks layer, things seemed to start making more sense.

What is the granularity of each layer?

Web

The web application project contains application initialization and pure presentation logic. View files, images, CSS etc. The only code at this layer is HTML helper style code for use in view markup and any necessary Initialisers.

Web.Controllers

Controllers are deemed presentation logic as they are MVC-specific and govern the “flow” of the application, i.e. they handle user inputs (the HTTP GET and POST requests made to the application) and return the correct response (HTML, JavaScript, HTTP redirects, HTTP error codes etc). Controllers should hand off to the Tasks layer as soon as possible to “do stuff” and should contain as little logic as possible. This project is as high up the stack as the Domain entities go, so it contains the necessary mappers to convert entities to ViewModels and FormModels to entities.

Tasks

The tasks layer contains the application business logic. It talks to repositories for retrieving and updating entities and to external services where necessary. It performs server side validation on entities yet is still persistence ignorant.

Infrastructure

The infrastructure project is where data access happens and real services are called. This could be via NHibernate, a web service or another specific implementation. For external services, mapping may be needed between entities and third party types, so there may be mappers present for this purpose.

Domain

Contains all domain entities, value types etc – the business model. Also contains the interfaces for the various other cross-project layers e.g. Tasks, Repositories and Infrastructure. The Domain project is used all the way from the Infrastructure layer up to the Controllers.

Framework

Supporting low level and utility code e.g. logging, caching, extension methods etc. Can be used throughout the solution.

Specs

BDD style tests for the entire application logic.

What BDD/Test frameworks are being used? (MSpec?) Why? Is this going to be the standard?

MSpec is being used as the BDD framework of choice. This has been down to the personal preference of the teams that have worked with it; however, here are my own views on why I would pick MSpec:

There seem to be two styles of BDD testing currently in use within the software development community:

The first are those frameworks that aim to produce unit tests in a BDD style syntax in C#. These could be described as an “internal DSL” for BDD testing. They tend to have unit test runner support from within Visual Studio, build tools for integrating into CI processes and produce some form of human readable output, either in HTML format, or within Visual Studio.

The second are those frameworks which are more aimed at QA or BA by providing a non-technical scenario based language for defining BDD style specifications. These could be described as an “external DSL” for BDD testing. These specs tend to be written in text files and suit high-level user story style specifications.

On a very crude level, the first set of frameworks suits how developers work and the second suits QA and business people. A project would benefit from both styles of testing – at a unit level within an automated CI process, and at a UI level via an automated UI testing tool, e.g. Selenium, running the scripts. Regarding the first style, I believe that MSpec is the most mature, fully featured framework, with the best integration into Visual Studio and the biggest community using it – i.e. it’s the best choice for developers.

This solution does not deal with the latter of these scenarios purely because we have not had that kind of input into the project. I would see more Cucumber-esque BDD style testing as an addition to the MSpec coverage, rather than a replacement.

What is the ServiceLocatorHelper and how/when is it used? Tests only?

Yes – the ServiceLocatorHelper exists to support testing by allowing an easy way to add components to the Service Locator that are required by executed code.

Why is IWindsorContainer being passed around the application? Isn’t the point of having a ServiceLocator that we can easily switch out IoC implementations?

No, the point isn’t really so we can switch out IOC implementations, but rather that we can share the container between anything else that uses the ServiceLocator. So, if a third party product is introduced into the solution and it uses the ServiceLocator to add and retrieve components, potentially we could also add to, remove from and access that list of components via the ServiceLocator too – i.e. the container is abstracted from the specific implementation. We’re not really exploiting this behavior and you’re right that we’re kind of hard coding the use of WindsorContainer in the solution. If/when we meet this requirement, this area will probably need to be revisited.

Integrating Postmark into ASP.NET MVC


I was excited when I first heard about Postmark as it answered a problem that I’ve faced on many projects in the past – how do you send “triggered” emails from your web application? By triggered emails, I mean one time, one hit emails sent to a specific user in response to a specific action e.g. user registration, order confirmation etc.

The answer in the .NET world has always been to “roll your own” email service using the System.Net.Mail namespace and the SMTP capabilities of IIS. Whilst writing this code is straightforward, the hard part is all the stuff that developers don’t actually think about. As Postmark describe:

If you’ve ever built or launched a web application, you know that setting up an SMTP server is pretty easy. The basic steps can have you up and running in minutes. What you may not know is that doing it correctly is complex. For instance:

  • Setting up authentication like SPF and DomainKeys
  • The importance of Reverse DNS
  • Managing connection and message rules for each ISP
  • Applying for ISP whitelisting and feedback loops
  • Accreditation with ISIPP and ReturnPath
  • Tracking bounces and spam complaints
  • Understanding volume over time

These problems have hit me in the past, which is why I’d always look to a third-party service for sending emails from my application. There are many services that provide campaign-style email sending (i.e. generic or targeted marketing emails sent to a list of users); however, finding a company that provides a triggered service (via an API) has always proved impossible. On a recent project we finally found one, only to learn that they were deprecating the service three weeks before our application went live. Nice.

So, like I said, I was very excited to hear about Postmark and signed up for the private Beta trial straight away. The service has recently gone live, so now seemed like a good time to knock up a little demo application…

The Demo App

I mainly develop in .NET, especially when working for clients at EMC Consulting, so I wanted to focus on this for the demo, as this is likely to be the way I’ll use Postmark in anger on a “real” project. There’s already a .NET API available on GitHub, so I set about hooking it up to an MVC application.

My application is just one screen with a text box for an email address. Clicking the submit button will send a test email via the Postmark service to the address entered.

[Screenshot: the demo application’s single screen]

Home Controller

As I only have one screen, I only need one controller. My HomeController handles both the GET and POST requests to the root URL – the first simply returns the view and the second calls my messageSender to send the email and redirects back to the view.

A couple of things to note – my controller is talking to an IMessageSender, injected into the constructor, to keep the controller simple. There is no mention yet of anything to do with Postmark, or any implementation of how this message is going to be sent – to put it simply, the controller shouldn’t know or care about this implementation detail – it just handles the flow of the application.

The second thing is that the controller redirects back to the Index action after the form is submitted – i.e. it issues an HTTP 302 response to redirect back to the root URL. This follows a pretty standard pattern in web applications called Post-Redirect-Get, which ensures that if a user hits refresh in the browser after submitting a form, they’re not going to re-submit it. So, whether the email is sent successfully or something goes wrong, refreshing the browser doesn’t initiate another POST request.

public class HomeController : Controller
{
    private readonly IMessageSender _messageSender;

    public HomeController(IMessageSender messageSender)
    {
        _messageSender = messageSender;
    }

    public ActionResult Index()
    {
        return View();
    }

    [AcceptVerbs(HttpVerbs.Post)]
    public ActionResult Index(string email)
    {
        var response = _messageSender.SendMessage(email);

        TempData["Message"] = response;

        return RedirectToAction("Index");
    }
}

Postmark Message Sender

Whilst the HomeController is working against an IMessageSender, we’re going to need a real implementation in order to actually send messages. This is where our Postmark integration comes in. I’m using the Postmark .NET API to call the Postmark service, which requires two configuration values to be set – the API key and the sender email address. These need to be valid values according to my Postmark account. For simplicity’s sake, I’m storing these in appSettings in my web.config:

<appSettings>
  <clear/>
  <!-- Set this to your email server's API token (Guid) -->
  <add key="ServerToken" value="XXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXX"/>

  <!-- Set this to a valid sender signature (email address) -->
  <add key="From" value="user@email.com" />

</appSettings>

I don’t like hitting configuration sources directly as it makes classes hard to test, so I like to encapsulate configuration as some kind of service and inject that into anything that needs it. Our PostmarkMessageSender therefore depends on an IConfigurationSource in order to access these values.

The IMessageSender has one method, SendMessage(), which takes an email address as its input parameter and returns a string response – either a success message or an error message, according to what happened when trying to send the email.
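Neither interface is shown explicitly here, but given the usage above and below they boil down to:

public interface IMessageSender
{
    string SendMessage(string email);
}

public interface IConfigurationSource
{
    string ServerToken { get; }

    string FromAddress { get; }
}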

Here’s my implementation:

public class PostmarkMessageSender : IMessageSender
{
    private readonly IConfigurationSource _configuration;

    public PostmarkMessageSender(IConfigurationSource configuration)
    {
        _configuration = configuration;
    }

    public string SendMessage(string email)
    {
        var client = new PostmarkClient(_configuration.ServerToken);

        try
        {
            var response = client.SendMessage(BuildMessage(email));

            if (response.Status != PostmarkStatus.Success)
            {
                return response.Message;
            }
        }
        catch (ValidationException ex)
        {
            return ex.Message;
        }

        return "Message sent successfully!";
    }

    private PostmarkMessage BuildMessage(string email)
    {
        return new PostmarkMessage
                   {
                       From =_configuration.FromAddress,
                       To = email,
                       Subject = "Postmark ASP.NET MVC Demo",
                       HtmlBody = "Hello!",
                       TextBody = "Hello!",
                   };
    }
}

Framework

My IConfigurationSource implementation calls directly into the web.config application settings:

public class WebConfigurationSource : IConfigurationSource
{
    public string ServerToken
    {
        get { return WebConfigurationManager.AppSettings["ServerToken"];  }
    }

    public string FromAddress
    {
        get { return WebConfigurationManager.AppSettings["From"]; }
    }
}

And the dependencies are wired up using a Ninject module:

public class WebModule : NinjectModule
{
    public override void Load()
    {
        Bind<IMessageSender>().To<PostmarkMessageSender>();
        Bind<IConfigurationSource>().To<WebConfigurationSource>();
    }
}

Which is configured for the MVC application in the global.asax.cs by inheriting NinjectHttpApplication and overriding the CreateKernel() method:

public class MvcApplication : NinjectHttpApplication
{
    public static void RegisterRoutes(RouteCollection routes)
    {
        routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

        routes.MapRoute(
            "Default", // Route name
            "{controller}/{action}/{id}", // URL with parameters
            new { controller = "Home", action = "Index", id = UrlParameter.Optional } // Parameter defaults
        );

    }

    protected override void OnApplicationStarted()
    {
        RegisterRoutes(RouteTable.Routes);
        RegisterAllControllersIn(Assembly.GetExecutingAssembly());
    }

    protected override IKernel CreateKernel()
    {
        return new StandardKernel(new WebModule());
    }
}

And that’s it – this whole app took me about 30 minutes to build (and another 30 to make it look pretty ;) ) and I now have a fully functioning email-sending application. This is a fairly trivial example, but it shows just how easy this is to do. A real-world application might have requirements around batching up emails, sending them asynchronously or sending to multiple addresses. However, hopefully this shows that whatever the requirements, you only have to focus on the code and let somebody else think about the actual task of sending the messages – and producing the stats:

[Screenshot: the Postmark statistics dashboard]

Source code

Source code for this demo application is available on GitHub at the following location. Enjoy…
