This is the main reason we have assertion libraries and don't just use boolean expressions.
My favorite test framework of all time, rspec-given[1] (by the late, great Jim Weirich) solved this a different way: it used AST introspection to extract both sides of a boolean expression and give a helpful message even with an assertion as bare as `Then { result == 7 }`.
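pytest's assertion rewriting works on the same principle. A toy sketch of the idea in Python using the stdlib `ast` module (the function name and message format are my own, purely illustrative):

```python
import ast

def explain_failure(expr_src: str, env: dict) -> str:
    """Evaluate a boolean comparison; on failure, report both sides.

    A toy version of the AST introspection that rspec-given and
    pytest use to turn a bare comparison into a helpful message.
    """
    tree = ast.parse(expr_src, mode="eval")
    node = tree.body
    if isinstance(node, ast.Compare) and len(node.comparators) == 1:
        # Compile and evaluate each side of the comparison separately
        # so the failure message can show both values.
        left = eval(compile(ast.Expression(node.left), "<expr>", "eval"), env)
        right = eval(compile(ast.Expression(node.comparators[0]), "<expr>", "eval"), env)
        if not eval(compile(tree, "<expr>", "eval"), env):
            return (f"expected: {expr_src}\n"
                    f"  left:  {left!r}\n"
                    f"  right: {right!r}")
    return "ok"

# With result = 5, `result == 7` fails and the report shows both sides.
print(explain_failure("result == 7", {"result": 5}))
```

A real implementation also handles chained comparisons, boolean operators, and attribute access, but the core trick is just this: keep the AST around instead of letting the expression collapse to a single `False`.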
With the rapidly increasing number of websites and applications that treat "an error occurred" as an acceptable error message, I'm glad someone is pointing out that failures should contain actionable information, but I'm also greatly saddened that this is needed. I truly do not understand how anyone, much less a developer, needs to be told that the content of an error message is important. It really boggles my mind. How is it not obvious to the point of pain that you need to know WHY something failed, not only that it failed?
Certainly it is preferable to have actionable test failures, but it should also be very easy to rerun a failed test with additional prints or debug code when they aren't. It is important that the way tests are run is not so heavy and inflexible that it takes more than 30 seconds or so to go from identifying a failed test to rerunning that same test and seeing additional output. In reality, the push to make every test failure actionable up front is probably a strategy for coping with terrible, inflexible, very high-latency cloud CI setups.
If you like this blog post I think you’d like this book https://www.artofunittesting.com/
The test naming convention defined there,
[UnitOfWork_StateUnderTest_ExpectedBehavior]
always resonated with me: from a name like that you can also distinguish bugs in the test code from the developer's intent.
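For a concrete (made-up) example of the convention, a withdrawal function and its tests might look like this in Python; the unit of work, the state under test, and the expected behavior are all readable straight from the test names:

```python
class InsufficientFunds(Exception):
    pass

def withdraw(balance: int, amount: int) -> int:
    """Toy unit of work: withdraw `amount` from `balance`."""
    if amount > balance:
        raise InsufficientFunds(f"requested {amount}, have only {balance}")
    return balance - amount

# UnitOfWork_StateUnderTest_ExpectedBehavior
def test_withdraw_amount_exceeds_balance_raises_insufficient_funds():
    try:
        withdraw(balance=100, amount=150)
    except InsufficientFunds:
        return
    raise AssertionError("expected InsufficientFunds to be raised")

def test_withdraw_amount_within_balance_reduces_balance():
    assert withdraw(balance=100, amount=30) == 70
```

If the second test fails, the name alone tells you whether the assertion or the intent is wrong, before you read a line of the test body.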
I feel this is pretty low on the list of things I want from a test.
Regressions should be rare; if you get a test failure when you break something's public API then that is already a success - we had the right test case!
That you then maybe need to spend a few minutes finding out exactly what is failing, I can live with that.
I'm often left scratching my head about what behaviour a test was even supposed to test! One thing I find very helpful is to add my expectation as string message to every assertion in the test. When the test fails, it will tell me which expectation was violated.
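A small Python illustration of the habit (function and values are invented for the example):

```python
def apply_discount(price: float, percent: float) -> float:
    """Toy function under test: discount a price, rounded to cents."""
    return round(price * (1 - percent / 100), 2)

def test_discount_rounds_to_cents():
    result = apply_discount(19.99, 15)
    # State the expectation, not just the comparison: on failure the
    # message says WHICH behaviour was violated, not just that 2 != 2.
    assert result == 16.99, (
        f"a 15% discount on 19.99 should round to whole cents, got {result}"
    )
```

The message costs one line to write and saves an archaeology session when the test fails a year later.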
This is why a powerful assertions library is so important.
(JS with Vitest etc. is good; Rust, not so much.)
This one is pretty basic, but it's still very important, and many tests break this rule all the time. Sometimes it's not really the test's fault: the fact that it's hard to make the test failure actionable actually reveals a design failure in the code under test. Protip: it very seldom makes sense for a function to return a bare boolean indicating success or failure. (If you're a C++ person who hates exceptions, or for other reasons can't use them, I hear you: my personal preference is something like expected<T> or StatusOr. I am a bit sad that C++ error handling still remains as fragmented as ever...)
TotT is one of those things that I really miss from Google. None of the TotTs were exactly groundbreaking, but almost all of them made for excellent rules of thumb that would rarely ever be bad advice, and often, even when they seemed simple and obvious, you could still find places to apply them in your actual codebase. (TotT definitely nudged me into making some test improvements when I worked at Google.)