I read my colleague Simon Munro’s recent post about deferring costs to the future and how he thinks this should apply to the software development process. As a developer who is an advocate of all the things that Simon claims are simply improving future maintainability at a higher cost, I felt compelled to give a different opinion.
On my current project, the timescales are short and the budget is tight. The pace is fast and the requirements are evolving and changing on a weekly, if not daily basis. This is why we run Agile projects – to be able to adapt quickly and change direction, focus, or scope as the project evolves. As with any software project, what a client wanted at the outset is practically guaranteed to differ from what they want when the time comes to “go live”. So, we work in short iterations, allowing for continuous feedback and input from the client – checkpoints along the way to make sure we’ve understood the requirements correctly, that the requirements are still valid and the client is happy with what we’re doing.
The concepts that Simon refers to – Test Driven Development (TDD), software craftsmanship, and sound practices and principles – are, in my opinion, essential to the delivery of Agile software projects. Yes, TDD makes a solution more maintainable down the line because you quickly know if you’ve broken something, but the argument that its sole benefit is maintainability is a pretty naive one.
“Trendy software developers have recently been going on a lot about TDD, software craftsmanship, practices and principles that further the ability of software development teams to build good, reliable, fast, secure and maintainable software. A big part of the current themes is maintainability – with lots of statistics relating to the total cost of software over its lifetime indicating that the initial development costs are minimal (say 30%) compared to the cost of maintenance in future.”
I don’t consider myself a trendy software developer, but I do consider myself a good one. In my opinion, all of these engineering practices are about eliminating waste from the development process. If it’s costs that we’re worried about, then I would argue that following these principles will save you money – and not just in phases 2 and 3 of the project, but during the initial development phase. I want to take each of the points Simon mentions and give reasons, aside from future maintainability, why they are necessary in delivering better software faster.
“Tests are written, concerns are separated, control is inverted, things are decoupled, design is done by writing tests and refactoring is frequent – with one of the benefits being a system that can be easily maintained”
Test Driven Development
On my current project, we’re using Behaviour Driven Development (BDD) – a kind of mash-up between TDD and Domain Driven Design (DDD). The reason for this approach is exactly the same reason that Simon gives for not doing this kind of thing – short timescales, small budget. The real benefit is the middle D – “Driven”. The tests (or in our case the specifications) drive the development.
- Our specifications (tests) follow our requirements or user stories. If I work on a new task, writing out the specifications for this task first gives me a clear idea of when I’ve covered all the requirements. I don’t write specifications for requirements that aren’t there, therefore I don’t waste time over engineering the solution.
- I then make the solution compile by adding in the components needed to satisfy the specifications. I don’t add anything that the specifications don’t need and therefore don’t waste time over engineering the solution. I only add code that adds business value.
- I then make the specifications valid (make the tests pass) by implementing the code in the new components. I have only added what is needed to meet the requirements.
- If I hadn’t written my tests first, how would I know when I am done? How would I know what components I needed? Fair enough, I could have made a reasonable estimate of what I needed to do, but by committing the requirements to code in the form of specifications or tests first, I know exactly when I can move on and start the next feature.
- Another side effect of committing the specifications to code is that we can easily generate a report of all the specifications in the system. We use JP Boodhoo’s bdddoc tool to do this. This can give a non-technical user (the client, a project manager, a business analyst) a concrete view of what the code actually does at any point in time.
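To make the cycle above concrete, here is a hypothetical illustration (the basket, its requirements, and all names are invented for this post, not taken from our actual project): the specification is committed to code first, straight from the user story, and only then are the components added and implemented to make it pass.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Step 1: the specification is written first, directly from the user story
// "the basket total is the sum of its line prices". It won't even compile
// until the components it names exist - that tells me what I need to build.
public class When_totalling_a_basket
{
    public static void Should_sum_the_line_prices()
    {
        var basket = new Basket();
        basket.Add(new Line("Book", 7.99m));
        basket.Add(new Line("Pen", 1.01m));
        if (basket.Total != 9.00m)
            throw new Exception("Specification failed: expected 9.00");
    }
}

// Step 2: add only the components the specification needs - nothing more.
public class Line
{
    public Line(string name, decimal price) { Name = name; Price = price; }
    public string Name { get; }
    public decimal Price { get; }
}

public class Basket
{
    private readonly List<Line> lines = new List<Line>();
    public void Add(Line line) { lines.Add(line); }

    // Step 3: implement just enough code to make the specification pass.
    public decimal Total { get { return lines.Sum(l => l.Price); } }
}
```

When the specification passes, the requirement is met and I can move on; any code that no specification demands simply never gets written.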
Inversion Of Control
Because of the design principles of Dependency Injection and Inversion of Control, I can write my tests quickly and concentrate on the “system under test” – i.e. the specific thing I am developing. I can “mock out” external dependencies – other components, databases, external services and so on – and their expected behaviours, which allows me to simulate what happens in those situations. This means there are no unexpected surprises further down the line when everything is integrated together. This isn’t about maintainability – it’s about reducing the number of bugs that occur when components and functionality are integrated during the day-to-day delivery of the project.
During my last sprint review I demoed a checkout process that worked against a fake payment service. This was because the decision on the actual payment service had not been finalised, but because my solution is loosely coupled, this did not matter. I was able to demo working software to the client sooner because of this design principle. When the real payment service is implemented, the amount of refactoring will be minimal, as the service will conform to the contract that we have defined and are coding against.
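The shape of that loose coupling looks something like this (a simplified sketch with invented names, not our actual project code): the checkout depends only on a contract, and the fake satisfies that contract until the real provider is chosen.

```csharp
using System;

// The contract we code against. The real payment provider hasn't been
// chosen yet, but the rest of the system doesn't need to know that.
public interface IPaymentService
{
    PaymentResult Take(decimal amount);
}

public class PaymentResult
{
    public bool Succeeded { get; set; }
    public string Reference { get; set; }
}

// Fake implementation used for demos and specifications. It is injected
// wherever an IPaymentService is needed.
public class FakePaymentService : IPaymentService
{
    public PaymentResult Take(decimal amount)
    {
        return new PaymentResult { Succeeded = true, Reference = "FAKE-0001" };
    }
}

// The checkout depends only on the contract, so swapping in the real
// implementation later requires no change to this class at all.
public class CheckoutService
{
    private readonly IPaymentService payments;

    public CheckoutService(IPaymentService payments)
    {
        this.payments = payments;
    }

    public bool Checkout(decimal amount)
    {
        return payments.Take(amount).Succeeded;
    }
}
```

In the demo, the container wires `FakePaymentService` in; on go-live, only that one registration changes.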
Things change. Requirements change. Technology changes. What I do this week may be redundant next week if the client changes their mind. I need to be able to refactor. If I can’t refactor easily or without fear, then we have a problem. This isn’t about future maintainability; it’s about changing things now without immediately introducing bugs that our System Tester has to raise and re-test once fixed. More time wasted.
Nobody is perfect all the time. People have off-days, and people will write code that isn’t always as good as it could be. Because I have solid test coverage, I can refactor code in the solution (not just my own) mercilessly and without fear. I refactor code on a daily basis.
Separation Of Concerns
Our “concerns are separated”. We are using ASP.NET MVC and following Sharp Architecture, a solid architectural foundation for MVC applications. We are lucky enough to work on a project with both .Net developers and UI developers. These are different roles, with different skill sets. Because we are using ASP.NET MVC, we can work side by side on a feature without treading on each other’s toes. We deliver quicker and we have fewer breaking changes. I don’t put .Net code in the UI mark-up, and the UI developer doesn’t put CSS styles in .Net code. I don’t really know why I even have to argue this one.
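A minimal sketch of that split (simplified types for illustration – this is not the actual ASP.NET MVC API, and the names are invented): the controller prepares pure data for the view, so the UI developer can work on the mark-up independently, and neither of us touches the other’s files.

```csharp
using System.Globalization;

// The view model is the only thing the controller hands to the view:
// plain data, already formatted, with no mark-up or CSS in sight.
public class ProductViewModel
{
    public string Name { get; set; }
    public string DisplayPrice { get; set; }
}

// A simplified stand-in for an MVC controller. It prepares data;
// the UI developer's view template decides how that data is rendered.
public class ProductController
{
    public ProductViewModel Details(string name, decimal price)
    {
        return new ProductViewModel
        {
            Name = name,
            DisplayPrice = price.ToString("0.00", CultureInfo.InvariantCulture)
        };
    }
}
```

The controller can be specified and tested without any UI at all, and the view can be restyled without recompiling any .Net code – which is exactly why we can work in parallel.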
Our Continuous Integration process builds the solution and runs all the tests every time something is checked in. If a build fails for any reason, we are notified immediately. Our System Tester can only take and deploy successful builds, so no time is wasted deploying and testing broken ones.
“This advice, I believe, applies to organizations (obviously), teams and individuals – would you want to be branded in the office as the developer that always estimates that things will take three times longer than your peers? How is that going to impact your career when jobs are on the line?”
I don’t agree that using good engineering practices makes my estimates three times longer. I would argue that I work quicker than a lot of my colleagues because of all the things mentioned above. Projects that aren’t run test-first, with CI and a good solution architecture, are the ones that end up delayed and overrun due to massive backlogs of bugs and time-consuming change requests.
I also don’t see the point in under-estimating something to try and give a better impression of how good a developer you are. That’s just ridiculous. When jobs are on the line, companies are looking for the best quality people, not the people who give the lowest estimates. You can’t hide from the fact that something doesn’t work or isn’t finished, no matter how quickly you said you’d do it.
The point I think that Simon is trying to argue is that we shouldn’t over engineer solutions unnecessarily “just in case”, which I agree with. “You ain’t gonna need it” is another good development principle which is really enforced when working test-first (especially if using BDD). However, over-engineering a solution is very different from using good engineering practices to develop a solution. I am a consultant and have a responsibility to deliver the best possible solution for the client given the requirements, scope and budget. Without following the (widely accepted) solid engineering practices detailed above, I strongly believe that I am unable to do this.