2012-05-03

Declaring Pass or Fail - Handling Broken Assumptions

When using TDD, it's a good practice to declare - aloud or in your mind - whether the next test run will pass or fail (and in what way it will fail). Then when your assumption about the outcome happens to be wrong, you'll be surprised and you can start looking more closely at why on earth the code is not behaving as you thought it would.

I had one such situation in my Let's Code screencasts where I missed a mistake - I had written code that was not needed to pass any test - and noticed it only five months later when analyzing the code with PIT mutation testing. You can see how that untested line of code was written in Let's Code Jumi #62 at 24:40, and how it was found in Let's Code Jumi #148 at 4:15 (the rest of that episode and the start of episode 149 go into fixing it).

I would be curious to figure out a discipline which would help to avoid problems like that.

Here is what happened:

I was developing my little actors library. I already had a multi-threaded version of it working, and now I was implementing a single-threaded version to make testing actors easier. I used contract tests to drive the implementation of the single-threaded version. Since the tests were originally written for the multi-threaded version, they required some tweaking to make them work for both implementations, with and without concurrency.
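
To make this concrete, here is a minimal sketch of what such a contract test can look like with JUnit 4. The Actors interface and all the names below are hypothetical stand-ins invented for this example; the real Jumi API differs.

    import static org.junit.Assert.assertTrue;

    import java.util.LinkedList;
    import java.util.Queue;
    import java.util.concurrent.atomic.AtomicBoolean;

    import org.junit.Test;

    // A toy actors API, just to keep the sketch self-contained:
    interface Actors {
        void sendMessage(Runnable message);
    }

    // The contract: every test is written against the factory method,
    // so the same tests run against each implementation.
    abstract class ActorsContract {

        // Each implementation's test class creates its own SUT here.
        protected abstract Actors newActors();

        // Hook that hides the concurrency difference: the multi-threaded
        // tests wait here, the single-threaded tests pump the queue.
        protected abstract void processMessages();

        @Test
        public void messages_sent_to_an_actor_get_processed() {
            final AtomicBoolean processed = new AtomicBoolean(false);
            Actors actors = newActors();

            actors.sendMessage(new Runnable() {
                public void run() {
                    processed.set(true);
                }
            });
            processMessages();

            assertTrue(processed.get());
        }
    }

    // A toy single-threaded implementation and its concrete test class:
    class SingleThreadedActors implements Actors {
        private final Queue<Runnable> queue = new LinkedList<Runnable>();

        public void sendMessage(Runnable message) {
            queue.add(message);
        }

        public void processMessages() {
            Runnable message;
            while ((message = queue.poll()) != null) {
                message.run();
            }
        }
    }

    public class SingleThreadedActorsTest extends ActorsContract {
        private final SingleThreadedActors actors = new SingleThreadedActors();

        protected Actors newActors() {
            return actors;
        }

        protected void processMessages() {
            actors.processMessages();
        }
    }

The multi-threaded implementation would get an analogous concrete test class, whose processMessages() waits for the background threads instead of pumping a queue synchronously.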

I had gotten so far that all but one of the contract tests were passing. Then I wrote the fateful line idle = false; and ran the tests - I had expected them to pass, but that one test was still failing. So I investigated why the test did not pass and found out that I had not yet updated the test to work with the single-threaded implementation. After fixing the test, it started failing for another reason (a missing try-catch), so I implemented that - but I did not notice that the line I had added earlier did not contribute to making the test pass. Only much later did I notice (thanks to PIT) that I was missing a test case to cover that one line.

So I've been thinking: how can I avoid mistakes like this in the future? I don't yet have an answer.

Maybe I need some sort of mental checklist to use when I have written some production code but it doesn't make the test pass because of a bug in the test. If I undid all changes to the production code before fixing the test, would that avoid the problem? Maybe the IDE could help by highlighting suspicious code: it could have two buttons for running tests, one where the assumption is that the tests will pass and another where they are expected to fail. Then, when an assumption is broken, it would highlight all code that was written since the last time the tests passed and/or the assumptions were correct, which might help in inspecting the code.

Or maybe all problems like this can be found automatically with mutation testing, and I won't need a special procedure to avoid introducing them?


UPDATE: In a follow-up blog post I experiment with a better way of doing this refactoring.

4 comments:

  1. Hello,

    thanks for mentioning the PIT testing, it seems like an interesting thing to look at.

    From what you say, it seems like you simply broke the red-green cycle by modifying your test and production code concurrently. Therefore you were following a practice different from TDD, and you are likely to create such 'anomalies' in your code that way. To adhere strictly to TDD, one should revert to the previous 'green' state and try again, though I see no danger in taking a shortcut - just remove all production code modifications before extending the tests.

    As for the IDE extension you suggest, in IntelliJ IDEA you can use Local History to revert to a state where all tests passed, even if you didn't make a VCS commit at that point. Making a button that would 'pass' when the tests fail seems complicated. It is more than that - you usually need to see that exactly one test failed, and for the 'right' reason. I don't see any simple way to make the IDE aware of such an assumption.

  2. Thanks for your comment. It made me think about the root cause of my mistake.

    I'm guessing my mistake was that when I started writing the second implementation, I should have taken smaller steps and been more systematic in refactoring the contract tests and making the new implementation pass them. I shouldn't have tackled more than one failing test at a time, and I should have refactored each test before trying to make it pass. I knew that I had to refactor them and I wanted to do it one test at a time, but then I forgot to do it for that one test, probably because I was not systematic enough.

    Maybe one way to do it better would be like this:

    1. Extract a contract test from the old implementation's tests by extracting factory methods for the SUT, creating an abstract base test class and moving the old implementation's tests there.

    2. Create a second concrete test class which extends the contract tests, but *override all methods from the contract test and mark them ignored* (see the sketch after this list). This would avoid lots of failing tests appearing at once.

    3. Unignore one contract test at a time and implement the feature in the new implementation. If the test requires refactoring, keep it ignored until it works for both implementations, and only after that try implementing the feature. This would give a systematic way of updating the contract tests.
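
    A sketch of what step 2 might look like with JUnit 4, reusing the hypothetical names from the earlier sketch, as if the single-threaded implementation were being started from scratch. Overriding an inherited test and marking the override with @Ignore is one way to mute it:

        import org.junit.Ignore;
        import org.junit.Test;

        public class SingleThreadedActorsTest extends ActorsContract {

            protected Actors newActors() {
                return new SingleThreadedActors(); // the implementation being test-driven
            }

            protected void processMessages() {
                // pump the single-threaded event queue
            }

            // One override like this per contract test; delete the override
            // (or just the @Ignore) when it is that test's turn:
            @Override
            @Test
            @Ignore("not yet implemented for the single-threaded version")
            public void messages_sent_to_an_actor_get_processed() {
                super.messages_sent_to_an_actor_get_processed();
            }
        }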

    I think I will retry creating that second implementation to see how this plan works out.

  3. Hello,

    I think what you suggest is on the right track, but I would find it more transparent to just copy/paste stuff instead of extending and ignoring. Implementation inheritance is just hidden copy/paste anyway. :)

    So maybe

    1) Copy/paste one test to a new class (see the sketch after this list).

    2) Implement code/modify the test a few times until satisfied.

    3) Dedup the test with respect to the old and new test classes (if you intend to keep them both; remove the old one otherwise).

    4) Rinse and repeat with next test.
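
    With the same hypothetical names as in the earlier sketches, step 1 might look like this - the copied test starts out in a plain standalone class that is free to drift from the original while the new implementation takes shape, and is only deduplicated in step 3:

        import static org.junit.Assert.assertTrue;

        import java.util.concurrent.atomic.AtomicBoolean;

        import org.junit.Test;

        public class SingleThreadedActorsTest { // no base class yet

            @Test
            public void messages_sent_to_an_actor_get_processed() {
                final AtomicBoolean processed = new AtomicBoolean(false);
                SingleThreadedActors actors = new SingleThreadedActors();

                actors.sendMessage(new Runnable() {
                    public void run() {
                        processed.set(true);
                    }
                });
                actors.processMessages(); // synchronous, so no waiting is needed

                assertTrue(processed.get());
            }
        }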

    PS. I just watched a couple of episodes from Jumi (about Actors) and I think what you do is very interesting. So I will watch some more. Thanks!

  4. I've posted the results of my experiment: http://blog.orfjackal.net/2012/05/passing-contract-tests-while.html
