Re: "The design signals are weak"
On Fri, Jul 21, 2023 at 9:34 PM <groups-io.20191212@...> wrote:

>> I find this pretty provocative, so of course, that interests me immediately.
>
> Allow me to begin with a disclaimer: the thoughts that follow are still in something of a draft phase, and may indeed lead nowhere useful when pursued to their logical end.

OK. Excellent! That's what we're here for. :)

>> Which "survey literature" has drawn conclusions about the strength of design error signals?
>
> None that I know of - as far as I can tell, the question doesn't even exist yet outside of this thread.

Got it.

>> What kind of "weak" is your "weak" here?
>
> Consider the refactoring task in the TDD cycle: make a behavior-preserving change, run the tests, loop until some stop condition is reached.

I follow this so far. I haven't read the rest of this conversation yet, so it's possible that someone else has already asked the same questions or raised the same points and you've answered them.

# "Weak" compared to what?

When you call this a "weak signal", are you calling it "weak" only in comparison to the obviously "strong signal" of "tests should have passed, but they failed"? Or are you judging them "weak" by some other standard? I think I'd like to know about that standard.

# Other signals

I identify at least two kinds of signals coming from practising TDD:

- The result of running the tests, often reduced to "all pass" or "some fail". If it's not the result I expected, then that reminds me to investigate further.
- The annoyance of reading or writing the tests. If that feels strange, then that reminds me to reconsider my design choices.

You seem to classify the second kind of signal as "actually pretty weak", which I'd like to understand further. Certainly I agree that, on average, Signals of the Second Kind are weaker than Signals of the First Kind.

Regarding only Signals of the Second Kind, I tend to interpret them most often as "design risk signals". TDD taught me to adopt this principle: if the tests seem poorly designed, then assume that the cause lies in the design of the production code. I consider this a kind of primary principle that distinguishes TDD from test-first programming.

Some design risk signals seem quite strong to me, such as "Why do I need to parse text in my assertion to check arithmetic?!", and others feel quite weak to me, such as "These side-effects seem related to each other, so maybe it's time to introduce an intermediate abstraction that simplifies the interaction. Maybe they're 3 implementations of the same interface."

I could imagine someone claiming that these signals are weak on average, but they don't feel weak on average to me. Some of them feel weak (I don't always react to them when I see them) and some of them feel strong (this is obviously a problem and I will fix it unless I have a very good reason not to). Do you mean "weak" and "strong" in similar ways to this?

I could also imagine someone saying "I don't need TDD to notice these problems", which must be obviously true. Do you mean "weak" in a way similar to this? I don't consider this to point to the strength of the signals, but rather to whether the programmer already gets those signals "strongly enough" from somewhere else.

Am I anywhere close to what you mean by "weak" here? If not, then I'd like to know what it's like for you.

Thanks!
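To make the stronger of those two design risk signals concrete, here is a minimal sketch, not taken from the original message: the names `render_invoice`, `invoice_total`, and the invoice example itself are hypothetical, chosen only for illustration. The first test has to parse text in its assertion just to check arithmetic; the second shows one way the production design might respond to that signal, by separating the arithmetic from the formatting.

```python
# A sketch of the "parse text in my assertion to check arithmetic" signal.
# All names here are hypothetical; this is not code from the discussion.

import re
import unittest


def render_invoice(line_item_amounts):
    """Format an invoice as text, burying the total inside the report."""
    total = sum(line_item_amounts)
    lines = [f"Item: {amount:.2f}" for amount in line_item_amounts]
    lines.append(f"Total: {total:.2f}")
    return "\n".join(lines)


class InvoiceTotalSmellTest(unittest.TestCase):
    def test_total_by_parsing_the_report(self):
        # The design risk signal: to check simple arithmetic, the test
        # has to dig the number back out of a formatted string.
        report = render_invoice([10.00, 2.50, 0.75])
        match = re.search(r"Total: (\d+\.\d{2})", report)
        self.assertIsNotNone(match)
        self.assertEqual(13.25, float(match.group(1)))


# One way the production design might answer that signal: expose the
# arithmetic separately from the formatting.
def invoice_total(line_item_amounts):
    return sum(line_item_amounts)


class InvoiceTotalAfterListeningTest(unittest.TestCase):
    def test_total_directly(self):
        self.assertEqual(13.25, invoice_total([10.00, 2.50, 0.75]))


if __name__ == "__main__":
    unittest.main()
```

The point of the sketch is only to show how the awkwardness of the first test reads as a signal about the design of the production code rather than as a complaint about the test itself.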
J. B. (Joe) Rainsberger :: :: ::
Replies from this account routinely take a few days, which allows me to reply thoughtfully. I reply more quickly to messages that clearly require answers urgently. If you need something from me and are on a deadline, then let me know how soon you need a reply so that I can better help you to get what you need. Thank you for your consideration.
--
J. B. (Joe) Rainsberger :: :: :: Teaching evolutionary design and TDD since 2002