
Thoughts on Embedded Test Driven Development (part 2 of 2)

This 2-part series covers my experience trying out TDD for my embedded system at work. Part 1 covers my thoughts prior to attending a 3-day training course on embedded TDD, and part 2 covers my attitude after taking the course and doing more extensive TDD exercises.

I just finished James Grenning's TDD for Embedded C course (offered through the Barr Group). After starting this whole TDD journey off being rather skeptical, I can now firmly say that I see the light and BRING ON THE KOOL-AID!

[Image: the Kool-Aid Man dressed as Sgt. Slaughter]

In all seriousness though, I really enjoyed getting to know James and seeing how TDD (and other agile practices) can legitimately work with embedded software. His insight into the embedded realm shone through again and again - particularly when we went over mocking an IO driver for a flash programmer. Additionally, it was very refreshing to hear him bring up:

  • integration events with hardware
  • getting work done long before hardware is ready
  • TESTING our work to ensure that the logic is correct and that it works according to what the datasheet says (because datasheets are ALWAYS RIGHT, right?)

The case for mocks

I mentioned in my first post that I wasn't completely sold on mock objects. I have come to think that they are as simple or as complex as you want to make them.

If a datasheet says that commands to the IC must be executed in a specific order, then I can easily set up a mock for the low-level interface to ensure that my code does in fact send those commands in that order. If I want to see what happens when my code gets told by the master of some shared resource to wait 10,000 times before it is granted access, I can easily do that too. Mocks give me this ability right out of the box.
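
To make that concrete, here is a rough sketch of a command-order test using CppUTest's mock support, CppUMock (the framework James's book builds on). Everything about the "hardware" here is invented for illustration - IO_Write, Flash_Program, and the register/command values don't come from any real datasheet:

    // Sketch only: a command-order test with CppUTest/CppUMock.
    // IO_Write, Flash_Program, and all register/command values are
    // hypothetical - substitute whatever your datasheet actually says.
    #include "CppUTest/TestHarness.h"
    #include "CppUTest/CommandLineTestRunner.h"
    #include "CppUTestExt/MockSupport.h"

    enum { CMD_REGISTER = 0x00, PROGRAM_COMMAND = 0x40 };

    extern "C" void IO_Write(unsigned int addr, unsigned int data);

    // Code under test (would live in flash.c in a real tree):
    extern "C" void Flash_Program(unsigned int addr, unsigned int data)
    {
        IO_Write(CMD_REGISTER, PROGRAM_COMMAND); // datasheet: command first...
        IO_Write(addr, data);                    // ...then the data write
    }

    // Mock IO driver: the test build links this instead of the real one.
    extern "C" void IO_Write(unsigned int addr, unsigned int data)
    {
        mock().actualCall("IO_Write")
              .withParameter("addr", addr)
              .withParameter("data", data);
    }

    TEST_GROUP(FlashProgram)
    {
        void teardown()
        {
            mock().checkExpectations();
            mock().clear();
        }
    };

    TEST(FlashProgram, SendsCommandThenDataInDatasheetOrder)
    {
        mock().strictOrder(); // out-of-order calls fail the test
        mock().expectOneCall("IO_Write")
              .withParameter("addr", (unsigned int)CMD_REGISTER)
              .withParameter("data", (unsigned int)PROGRAM_COMMAND);
        mock().expectOneCall("IO_Write")
              .withParameter("addr", 0x1000u)
              .withParameter("data", 0xBEu);

        Flash_Program(0x1000, 0xBE);
    }

    int main(int argc, char** argv)
    {
        return CommandLineTestRunner::RunAllTests(argc, argv);
    }

If Flash_Program swaps the two writes, or drops one, the test fails with a message pointing at the exact call that went wrong.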

Mocks also play into the case for having tests as documentation. The mock clearly calls out the order in which function calls need to be executed, and what arguments they must contain. Strict timing of the commands is a bit harder, but if you're running your unit tests on your target hardware every now and then, you can probably arrange to time things without a great deal of effort. And when the hardware arrives and it turns out the datasheet left out some small detail, you update the test that uses the mock and there you go. Maybe you put in a comment saying why the test almost matches the datasheet. Then it is retained forever in the vaults of your version control system. Along with your name. And the date. And a clever commit message.
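
The "wait until you're granted access" scenario is just as direct: you script however many "busy" responses the fake hardware gives before it yields. Again a hypothetical sketch (IO_ReadStatus, Bus_WaitForAccess, and the status values are all invented), reusing the test-runner boilerplate from the sketch above:

    // Sketch only: simulating a busy shared resource with CppUMock
    // return values. All names and status codes are hypothetical.
    enum { STATUS_BUSY = 0x01, STATUS_GRANTED = 0x02 };

    extern "C" int IO_ReadStatus(void);

    // Code under test: poll the arbiter until it grants access,
    // reporting how many times we had to wait.
    extern "C" int Bus_WaitForAccess(void)
    {
        int retries = 0;
        while (IO_ReadStatus() != STATUS_GRANTED)
            retries++;
        return retries;
    }

    // Mock status-register read - the test scripts its return values.
    extern "C" int IO_ReadStatus(void)
    {
        return mock().actualCall("IO_ReadStatus").returnIntValue();
    }

    TEST_GROUP(BusArbiter)
    {
        void teardown()
        {
            mock().checkExpectations();
            mock().clear();
        }
    };

    TEST(BusArbiter, WaitsUntilAccessIsGranted)
    {
        // A "why this doesn't quite match the datasheet" comment
        // would live right here, next to the expectations it explains.
        mock().expectNCalls(3, "IO_ReadStatus").andReturnValue(STATUS_BUSY);
        mock().expectOneCall("IO_ReadStatus").andReturnValue(STATUS_GRANTED);

        LONGS_EQUAL(3, Bus_WaitForAccess());
    }

Crank the 3 up to 10,000 and you have the pathological case from above, no hardware required.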

What does 'agile' even mean?

One of the surprises for me was the last day of the training, when James went over a broader view of agile and how it can be used effectively in embedded projects. He stressed again and again that agile was always about being pragmatic - the idea that "agile" means you go as fast as you can and never document anything was never the intent of the authors of the agile manifesto. You need to take a reasonable approach, and cut the effort wasted on activities and deliverables that don't add value.

A successful project that used some agile practices might make people think that agile just made the work go faster, but in reality agile made people NOT do the work that doesn't add value. With agile, you relentlessly focus on the work that furthers your project's goals. And how do you get the most out of that work? You get the most out of it when the PEOPLE doing the work decide that the most important thing they can do is to accept that change will happen, and respond to it as quickly, efficiently, and responsibly as possible.

In this mindset, you begin to see the work you do very differently. Some things (like testing) that you may have delayed or avoided in the past can become addicting when you see that approaching them a different way greatly improves the quality of your work.

Concluding thoughts

I'm sure my opinion will shift a bit as I try to implement these ideas in projects at work and at home. A few things that I don't think are going to change are my convictions that:

  • almost all code CAN be tested
  • almost all code SHOULD be tested (this does not mean you need to write every possible test case - there are diminishing returns with everything at some point)
  • writing code that facilitates testing leads to code that is easy to read / understand (the sketch after this list shows what I mean)
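
As a made-up illustration of that last bullet: code written for testability tends to take its dependencies as parameters instead of reaching straight for a port register, and that same seam is what makes the intent readable. Every name here is hypothetical:

    /* Hypothetical sketch: a heartbeat LED that is handed its output
     * function instead of writing a port register directly. The seam
     * exists so tests can substitute a fake, but it also means a reader
     * never has to guess which magic address the module is poking. */
    #include <stdbool.h>
    #include <stdint.h>

    typedef void (*LedWriteFn)(bool on);

    typedef struct
    {
        LedWriteFn write;  /* real driver on target, fake in tests */
        uint32_t period;   /* ticks between toggles */
        uint32_t ticks;
        bool state;
    } Heartbeat;

    void Heartbeat_Init(Heartbeat *hb, LedWriteFn write, uint32_t period)
    {
        hb->write = write;
        hb->period = period;
        hb->ticks = 0;
        hb->state = false;
        hb->write(false); /* known state at startup */
    }

    /* Call from the system tick; toggles the LED every 'period' ticks. */
    void Heartbeat_Tick(Heartbeat *hb)
    {
        if (++hb->ticks >= hb->period)
        {
            hb->ticks = 0;
            hb->state = !hb->state;
            hb->write(hb->state);
        }
    }

A test injects a two-line fake that records the last value written; the production build passes in the real GPIO write. Neither side needs an #ifdef, and the logic gets exercised long before there is an LED to blink.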

And code that is easy to read / understand is most likely easy to change. And with those 3 things (read, understand, change) I will have best positioned myself, the software team, and the business to accommodate whatever unexpected things come our way. Which is not to say that we will deal with them with grace or without a bit of whining, but the end result is going to be much better than if we had written a bunch of code and waited 6 months to get hardware before testing anything.