When using TDD, it's a good practice to declare - aloud or in your mind - whether the next test run will pass or fail (and in what way it will fail). Then, when your prediction about the outcome turns out to be wrong, you'll be surprised and can start looking more closely at why on earth the code is not behaving as you thought it would.
I had one such situation in my Let's Code screencasts where I missed a mistake - I had written code that was not needed to pass any test - and noticed it only five months later, when analyzing the code with PIT mutation testing. You can see how that untested line of code was written in Let's Code Jumi #62 at 24:40, and how it was found in Let's Code Jumi #148 at 4:15 (the rest of that episode and the start of episode 149 go into fixing it).
I would be curious to find a discipline that would help avoid problems like that.
Here is what happened:
I was developing my little actors library. I already had a multi-threaded version of it working and now I was implementing a single-threaded version of it to make testing actors easier. I used contract tests to drive the implementation of the single-threaded version. Since the tests were originally written for the multi-threaded version, they required some tweaking to make them work for both implementations, with and without concurrency.
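The contract-test pattern can be sketched roughly like this (all names here are invented for illustration, not Jumi's actual API): an abstract test class states the behavior that every implementation must satisfy, and each implementation's test subclass provides only a factory method. The explicit `processMessages()` call is the kind of tweak needed to make the same test work with and without concurrency.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

// What the actors implementations have in common (hypothetical interface).
interface Actors {
    void send(Runnable message);
    void processMessages();
}

// The contract: behavior every implementation must satisfy.
abstract class ActorsContract {
    abstract Actors newActors();

    void messagesAreProcessedInOrder() {
        Actors actors = newActors();
        List<Integer> log = new ArrayList<>();
        actors.send(() -> log.add(1));
        actors.send(() -> log.add(2));
        // Explicit pump; a multi-threaded version would instead wait for workers.
        actors.processMessages();
        if (!log.equals(List.of(1, 2))) {
            throw new AssertionError("expected [1, 2] but was " + log);
        }
    }
}

// The single-threaded implementation: a plain message queue,
// processed only when the caller asks for it.
class SingleThreadedActors implements Actors {
    private final Queue<Runnable> mailbox = new ArrayDeque<>();
    public void send(Runnable message) { mailbox.add(message); }
    public void processMessages() {
        Runnable message;
        while ((message = mailbox.poll()) != null) {
            message.run();
        }
    }
}

// Each implementation gets a subclass that only overrides the factory method.
class SingleThreadedActorsContractTest extends ActorsContract {
    Actors newActors() { return new SingleThreadedActors(); }
}

class ContractTestDemo {
    public static void main(String[] args) {
        new SingleThreadedActorsContractTest().messagesAreProcessedInOrder();
        System.out.println("contract satisfied");
    }
}
```

The payoff of this structure is that the same contract methods run against both implementations, so any behavioral difference between them shows up as a failing test.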
I had already gotten so far that all but one of the contract tests were passing, when I wrote the fateful line `idle = false;` and ran the tests. I had expected them to pass, but that one test was still failing. So I investigated why the test did not pass and found out that I had not yet updated it to work with the single-threaded implementation. After fixing the test, it started failing for another reason (a missing try-catch), so I implemented that - but I did not notice that the line I had added earlier did not contribute to passing any test. Only much later did I notice (thanks to PIT) that I was missing a test case to cover that one line.
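To illustrate the kind of code involved - this is a hypothetical reconstruction, with invented class and method names; only the `idle = false;` line comes from the story - imagine a message queue whose `idle` flag guards against re-entrant processing:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

// Hypothetical reconstruction, not Jumi's actual code.
class EventPoller {
    private final Queue<Runnable> mailbox = new ArrayDeque<>();
    private boolean idle = true;

    void send(Runnable message) {
        mailbox.add(message);
        if (idle) {
            processMessages();
        }
    }

    private void processMessages() {
        idle = false; // the fateful line: no test fails if it is removed...
        Runnable message;
        while ((message = mailbox.poll()) != null) {
            message.run(); // ...unless some test sends a message from inside a handler
        }
        idle = true;
    }
}

class EventPollerDemo {
    public static void main(String[] args) {
        // The kind of test case that was missing: a handler that sends a new
        // message. Without `idle = false`, send() would recurse back into
        // processMessages() and run B in the middle of A, logging
        // [A-start, B, A-end] instead of [A-start, A-end, B].
        EventPoller poller = new EventPoller();
        List<String> log = new ArrayList<>();
        poller.send(() -> {
            log.add("A-start");
            poller.send(() -> log.add("B"));
            log.add("A-end");
        });
        if (!log.equals(List.of("A-start", "A-end", "B"))) {
            throw new AssertionError("messages ran re-entrantly: " + log);
        }
        System.out.println(log);
    }
}
```

The point is that the flag's effect is only observable under re-entrant sends, so a test suite without such a case passes equally well with or without the line.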
So I've been thinking: how can I avoid mistakes like this in the future? I don't yet have an answer.
Maybe some sort of mental checklist to use when I have written production code but it doesn't make the test pass because of a bug in the test. Maybe undoing all changes to production code before fixing the test would avoid the problem? Maybe the IDE could help by highlighting suspicious code: it could have two buttons for running tests, one where the assumption is that the tests will pass and another where they are expected to fail. Then, when an assumption is broken, it would highlight all code written since the last time the tests passed and/or the assumptions held, which might help in inspecting the code.
Or maybe all problems like this can be found automatically with mutation testing and I won't need a special procedure to avoid introducing them?
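The idea behind mutation testing can be sketched by hand (the `max` function, the mutants, and the deliberately weak test suite below are all made up for illustration): run the tests against small mechanical variations of the production code, and treat every surviving mutant as a sign of missing coverage.

```java
import java.util.function.IntBinaryOperator;

// Hand-rolled sketch of what a mutation testing tool like PIT does.
class MutationDemo {
    // The original production code: IntBinaryOperator max = (a, b) -> a > b ? a : b;
    // A deliberately weak test suite: it checks only one input pair.
    static boolean testSuitePasses(IntBinaryOperator max) {
        return max.applyAsInt(3, 5) == 5;
    }

    public static void main(String[] args) {
        IntBinaryOperator[] mutants = {
                (a, b) -> a >= b ? a : b, // conditionals-boundary mutant
                (a, b) -> b,              // return-second-argument mutant
                (a, b) -> a < b ? a : b   // negated-conditional mutant (computes min)
        };
        for (IntBinaryOperator mutant : mutants) {
            // PIT mutates bytecode automatically; here we swap the mutant in by hand.
            System.out.println(testSuitePasses(mutant) ? "SURVIVED" : "KILLED");
        }
        // Prints SURVIVED, SURVIVED, KILLED: the two survivors show that the
        // suite never exercises the case where the first argument is the maximum.
    }
}
```

With Maven, PIT does this automatically via `mvn org.pitest:pitest-maven:mutationCoverage`, though as the story above shows, it is a periodic analysis rather than a discipline that prevents the untested line from being written in the first place.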
UPDATE: In a follow-up blog post I experiment with a better way of doing this refactoring.