It’s no secret that I love the Spark view engine. I’ve blogged about it before and nearly two years down the line since I first used it (and a number of production projects later), I still think it’s the best view engine out there for ASP.NET MVC.

In a nutshell, there are two reasons why I think this.

1. The in-built view-specific syntax is so well thought out and makes building view files really easy.

For example, I love the ?{} conditional syntax: when placed inside an attribute, the attribute is only output if the condition evaluates to true:

<div id="errorMessage" style="display:none;?{Model.ErrorMessage == null}">
    Error message...
</div>

If you combine this inside a loop with the auto-defined xIsFirst and xIsLast variables (where x is the name of your loop variable), you can do something like this:

<ul>
    <li each="var product in Model.Products" class="first?{productIsFirst} last?{productIsLast}">
        ${product.Name}
    </li>
</ul>

This is really powerful stuff. Adding classes to the first or last element in a list is something I always get asked to do by my friendly interface developers, and before Spark (and deep in WebForms territory) this always meant helper methods or messy logic inside view files. Nasty business. (By the way, in the example above, if the item is neither the first nor the last element, then the entire class="" attribute is just ignored, i.e. no messy empty HTML attributes are rendered.)
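To make that concrete, suppose Model.Products contained three products named Alpha, Beta and Gamma (hypothetical names, purely for illustration). The loop above would render roughly as:

<ul>
    <li class="first">Alpha</li>
    <li>Beta</li>
    <li class="last">Gamma</li>
</ul>

The middle item picks up no class attribute at all, which is exactly the behaviour described above.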

2. It fits the way I work in a multi-functional team.

I tend to be lucky enough to have dedicated interface developers on a project who specialise in creating beautifully clean, standards-compliant, accessible HTML. I let them do their job and they let me do mine. Spark allows us to work side by side nicely without treading on each other’s toes. Integrating static HTML pages delivered by an iDev is really easy as the syntax is terse and succinct. Equally, an iDev looking at a Spark view file doesn’t run a mile screaming (which they used to do when faced with a WebForms .aspx page full of server controls). As Louis states:

The idea is to allow the HTML to dominate the flow and the code to fit seamlessly.

I think it does this beautifully. If I were more of a one-man-band, responsible for creating everything myself, or did not have experienced web developers to hand, I’d probably prefer NHaml, which seems to have a much more developer-focussed approach. I can definitely see the appeal there, but like I say, I tend to work with guys who know HTML and give me HTML to integrate into my applications.

Which brings me on to the subject of this post…

So things have moved on a bit since I started my last project and this time around I was given a lovely set of static HTML pages from a completely separate digital agency altogether. These people obviously know what they’re doing and have fully embraced HTML 5 and all its new syntax and features.

“Great,” I thought, “this should be easy. I just need to go through the views, binding up my data and adding in Spark logic wherever possible.” And then I got this:

[image: Spark view error caused by the <section> element]

It turns out that <section> is a new HTML 5 element which was being used to great effect in my HTML. Unfortunately, it’s also a keyword in the world of Spark, and the two don’t play together too nicely. A few others have run into this problem and there are a couple of suggestions:

Use the namespace feature in Spark. This involves adding a prefix attribute to your Spark configuration like so:

<spark>
  <pages prefix="s">
  </pages>
</spark>

Which then means you need to qualify all your Spark elements:

<s:use content="view" />

I didn’t like this approach and found that it broke a lot of the terseness of the Spark syntax. It meant that I couldn’t use the shorthand method for calling partial views simply by specifying the file name.
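To illustrate what is lost, imagine a partial view called _productSummary.spark (a hypothetical name). Without a prefix, the shorthand is simply to use the file name as an element; with the prefix configured you have to fall back, as far as I can tell, to the more verbose prefixed form:

<!-- shorthand partial call, no prefix configured -->
<productSummary />

<!-- with prefix="s" the shorthand is unavailable, so something like this instead -->
<s:use file="_productSummary" />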

The next suggestion was to wrap the <section> elements in the !{} syntax, effectively rendering them as non-encoded HTML literals:

!{"<section class='box error'>"}
    Error message...
!{"</section>"}

This approach worked the best: whilst making the <section> elements themselves a bit ugly, it left everything else Spark-related intact.

So, I got past that issue, thinking I was home free, only to be faced with:

[image: Spark recursive rendering error]

Oh dear. Turns out that <header> and <footer> are also new elements in HTML 5. I tend to create partial views for both my header and footer logic and up to now have named them (quite sensibly) _header.spark and _footer.spark. Using the shorthand syntax for rendering views, I was able to call them in my layout file like so:

<body>
    <header />
    <use content="view" />
    <footer />
</body>

Well, not any more. Spark now treats the <header> element as an instruction to render my partial view _header.spark; that partial contains my HTML 5 markup, including a <header> element of its own, so Spark tries to render the same partial again. Hence the recursive rendering error.

The only solution I found to this was to break from tradition and rename my partial view files to _headerNav.spark and _footerNav.spark, which avoids the naming conflicts altogether.
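With the partials renamed, the layout calls change to match. Something along these lines (a sketch based on the renamed files above):

<body>
    <headerNav />
    <use content="view" />
    <footerNav />
</body>

Because there is no longer a partial called _header.spark or _footer.spark, Spark leaves the literal <header> and <footer> elements inside those partials alone and simply renders them as HTML.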

Summary

HTML 5 brings some new markup syntax which conflicts with the inner workings of the Spark view engine. The most noticeable impact is the <section> element, which cannot be used as it stands with Spark without applying one of the workarounds detailed above.

Care should also be taken when naming partial views so as not to create naming conflicts with the new HTML elements available in HTML 5.

That said, I would still use Spark on projects as the benefits still massively outweigh these downsides. Hopefully the <section> issue will be resolved in a future release, but for now I’m prepared to live with my views being slightly less sparkly.