

Re: "The design signals are weak"



On Fri, Jul 21, 2023 at 9:34 PM <groups-io.20191212@...> wrote:
I find this pretty provocative, so of course, that interests me immediately
Allow me to begin with a disclaimer: the thoughts that follow are still in something of a draft phase, and may indeed lead nowhere useful when pursued to their logical end. K

Excellent! That's what we're here for. :)
Which "survey literature" has drawn conclusions about the strength of design error signals?
None that I know of - as far as I can tell, the question doesn't even exist yet outside of this thread.

The conclusions (currently weakly held) are mine, based on what, twenty? twenty-five? years of reading without finding much of anything that I recognize as addressing the topic.

Got it.
What kind of "weak" is your "weak" here?
Consider the refactoring task in the TDD cycle: make a behavior preserving change, run the tests, loop until some stop condition is reached.

If we make a mistake, the change that we intended to be behavior preserving may not be. But when we run the tests, the tests detect the change, and give us a strong signal that an error has occurred. If we are running the tests frequently (as the discipline tells us to do) then the tests further tell us that the error was recent. Taken to an extreme, the mistake must have occurred in the last change you made (and the remedy is trivial: revert and try again).

I would call this a strong signal: tests fail, processes return error statuses, green pixels turn red, the cloister bell sounds. Really hard to argue that you are "doing TDD" if you miss the signals.

Now, consider instead a different sort of mistake; the intended change is behavior preserving, and the change is executed as intended by the programmer, but the change makes the design worse.

  1. What, if any, signals do we get to alert us to the fact that the design has been made worse?
  2. How much time passes before the programmer is aware of the signal?
  3. How does TDD improve these answers, in comparison to either
    1. just doing it
    2. the next best alternative

The answer certainly isn't the execution of the tests, because the tests will bless any behavior preserving change.
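A small sketch of that point (again with hypothetical names): a change that preserves behavior but arguably worsens the design by duplicating knowledge. Every test that passed before still passes, so the test run emits no signal at all.

```python
# Before: one place knows the discount rule.
DISCOUNT = 0.9

def discounted(price):
    return price * DISCOUNT

# After: the rule becomes a magic number duplicated in two places.
# Behavior is identical, so the tests bless the change.
def discounted_worse(price):
    return price * 0.9

def shipping_label_price(price):
    return price * 0.9  # duplicated knowledge, invisible to the tests

def test_discount():
    # Passes against both versions of the design.
    assert discounted_worse(100) == 90.0
```

Whatever signal exists here has to come from somewhere other than the test run: reading the code, reviewing the diff, or noticing friction later.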

I follow this so far. I haven't read the rest of this conversation yet, so it's possible that someone else will have asked the same questions or raised the same points and you've answered them.

# "Weak" compared to what?

When you call this a "weak signal", are you calling it "weak" only in comparison to the obviously "strong signal" of "tests should have passed, but they failed"? Or are you judging it "weak" by some other standard? I think I'd like to know about that standard.

# Other signals

I identify at least two kinds of signals coming from practising TDD:

- The result of running the tests, often reduced to "all pass" or "some fail". If it's not the result I expected, then that reminds me to investigate further.
- The annoyance of reading or writing the tests. If that feels strange, then that reminds me to reconsider my design choices.

You seem to classify the second kind of signal as "actually pretty weak", which I'd like to understand further. Certainly I agree that, on average, Signals of the Second Kind are weaker than Signals of the First Kind.

Regarding only Signals of the Second Kind, I tend to interpret them most often as "design risk signals". TDD taught me to adopt this principle: If the tests seem poorly designed, then assume that the cause lies in the design of the production code. I consider this a kind of primary principle that distinguishes TDD from test-first programming.

Some design risk signals seem quite strong to me, such as "Why do I need to parse text in my assertion to check arithmetic?!" and others feel quite weak to me, such as "These side-effects seem related to each other, so maybe it's time to introduce an intermediate abstraction that simplifies the interaction. Maybe they're 3 implementations of the same interface." I could imagine someone claiming that these signals are weak on average, but they don't feel weak on average to me. Some of them feel weak (I don't always react to them when I see them) and some of them feel strong (this is obviously a problem and I will fix it unless I have a very good reason not to). Do you mean "weak" and "strong" in similar ways to this?
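To illustrate the strong end of that spectrum, here is a hypothetical sketch of the "parsing text in my assertion to check arithmetic" signal, along with one possible response (the names are invented, not from the original discussion):

```python
# Production code fuses computation and presentation into one function.
def receipt_line(name, qty, unit_price):
    return f"{name} x{qty}: ${qty * unit_price:.2f}"

def test_line_total_awkward():
    # Smell: the test must parse a formatted string just to check
    # arithmetic. The awkwardness of the test points at the design
    # of the production code.
    text = receipt_line("tea", 3, 2.50)
    total = float(text.split("$")[1])
    assert total == 7.50

# One possible response: separate the arithmetic from the formatting,
# so the test can check the number directly.
def line_total(qty, unit_price):
    return qty * unit_price

def test_line_total_direct():
    assert line_total(3, 2.50) == 7.50
```

The first test still passes, but the contortion it requires is the design risk signal; the second test shows what the code looks like once the signal has been acted on.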

I could also imagine someone saying "I don't need TDD to notice these problems", which is obviously true. Do you mean "weak" in a way similar to this? I don't consider this to point to the strength of the signals, but rather whether the programmer already gets those signals "strongly enough" from somewhere else.

Am I anywhere close to what you mean by "weak" here? If not, then I'd like to know what it's like for you.

Thanks!
--
J. B. (Joe) Rainsberger

Replies from this account routinely take a few days, which allows me to reply thoughtfully. I reply more quickly to messages that clearly require answers urgently. If you need something from me and are on a deadline, then let me know how soon you need a reply so that I can better help you to get what you need. Thank you for your consideration.

--
J. B. (Joe) Rainsberger
Teaching evolutionary design and TDD since 2002
