There was a function named "updateSubscription" which set a "newSubscription" and then returned it. Inside this very large function was a line that read "validation = isValid(<long list of conditions>);" (this was untyped TypeScript). Later it read "if (validation) { isInvalid = false; return validation; }".
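A hypothetical reconstruction of the shape of that code (the types, bodies, and return values are assumed from the description, not taken from the actual codebase) shows how the idiom lies: `validation` reads like a boolean, but `isValid` actually returns an HTTP response object or undefined.

```typescript
// Hypothetical sketch of the described legacy code.
// The trap: `if (validation)` looks like a boolean check,
// but isValid() returns an HTTP response object (or undefined).

interface HttpResponse {
  status: number;
  body: unknown;
}

// Assumed behavior: returns a response object when validation
// FAILS, and undefined when everything is fine -- not a boolean.
function isValid(...conditions: unknown[]): HttpResponse | undefined {
  // (details elided; imagine the long list of conditions)
  return conditions.some((c) => !c)
    ? { status: 400, body: "invalid" }
    : undefined;
}

function updateSubscription(current: unknown): unknown {
  let isInvalid = true;
  const validation = isValid(current /* , <long list of conditions> */);
  if (validation) {
    isInvalid = false;   // misleading: a truthy `validation` means *failure*
    return validation;   // returns the HTTP response, not a subscription
  }
  // ... build and return newSubscription ...
  const newSubscription = current;
  return newSubscription;
}
```

Refactoring this as if `validation` were a boolean inverts the meaning of the branch, which is exactly the surprise described below.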
I had thought that validation was a boolean and refactored out the logic before calling the function. But validation was actually an HTTP response object.
On Mon, Feb 17, 2020, 01:40 A. Lester Buck III <buck@...> wrote: Well, that is just torturing us TDD newbies...
Can you share examples on the list? What does it mean for a function or idiom to lie about what it is doing in non-obvious ways?
Thanks!
Lester
On Sun, Feb 16, 2020, 23:46 Russell Gold <russ@...> wrote: Massively agree. With legacy tests, I usually try to start with discovery tests; that is, I take my best guess at what the code is doing and write tests to see if I am right.
Of course, that doesn't mean that it is what the authors *wanted* the code to do, but it's the best approximation I have. That way, at least I can minimize the chances of breaking current behavior unintentionally.
The most frustrating part is when this legacy code was written just six months prior...
----------------- Author, Getting Started with Apache Maven <> Author, HttpUnit <> and SimpleStub <> Now blogging at <>
Have you listened to Edict Zero <>? If not, you don't know what you're missing!
On Feb 16, 2020, at 1:12 PM, Avi Kessner <akessner@...> wrote:
Lately, while helping with some legacy code, I've been finding that my "safe refactorings" aren't quite as safe as I wanted them to be, and that if I had started with the complete test first, I would have saved myself 10 minutes of thinking in the wrong direction. Especially when functions or idioms lie about what they are doing in non-obvious ways.
Always good to test assumptions. :)
I had a live coding interview yesterday and I faced an interesting (at least for me) issue. A small bit of background: I was asked to implement a function to figure out whether the sum of any 2 elements in an input array is equal to a given number. Examples:

Input              Output
[1,2] and 3        true
[1,2,3] and 5      true
[1,2,3] and 4      true
[1,2,3] and 6      false
As I was constrained by time (~25 mins), I started with TDD but decided to skip most of the micro steps. In the end I implemented something pretty naive (with O(n^2) complexity: comparing the sums of all possible pairs), but it wasn't well received by the interviewer. Moreover, my interviewer wanted me to add extra test cases (beside the ones which brought me to my solution, as shown above) just in case "in the future you want to refactor the existing solution to something more sophisticated". I strongly refused, as these tests would not make any sense from a TDD point of view: they would all immediately pass.
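The naive O(n^2) solution described might look something like this (a sketch with assumed names; the actual interview code is not shown in the thread):

```typescript
// Naive approach: compare the sum of every possible pair
// against the target. O(n^2) time, O(1) space.
function hasPairSum(nums: number[], target: number): boolean {
  for (let i = 0; i < nums.length; i++) {
    for (let j = i + 1; j < nums.length; j++) {
      if (nums[i] + nums[j] === target) return true;
    }
  }
  return false;
}

// The examples from the thread:
// hasPairSum([1, 2], 3)    -> true
// hasPairSum([1, 2, 3], 5) -> true
// hasPairSum([1, 2, 3], 4) -> true
// hasPairSum([1, 2, 3], 6) -> false
```

Each of the four examples above drives a test; the point of contention was whether to add further cases that this implementation already passes.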
Do you believe that adding extra test cases "for future refactoring" makes sense? I can imagine that for a particular solution of this task (the algorithm being: sort the input list + use 2 pointers), if I go strictly with TDD (a new test case must first fail), a new solution would (but doesn't need to) require different test cases... What do you think? Is it possible that TDD is not a good fit for strongly "algorithmic tasks"?
Writing the tests first can help identify missing or unclear parts of the specification, which seems useful for these kinds of interview questions. You can demonstrate how you ask for clarification and what you do with the information.
Thinking clearly about inputs and expected outputs (a part of TDD) certainly fits algorithmic tasks well, mostly because it helps one detect the situation where their desired algorithm _almost_ meets the specification, but doesn't quite.
Incremental design doesn't always lead the programmer to discover a new algorithm that fits the problem well. I don't interpret this as "TDD doesn't fit" but rather that TDD mostly guides one's existing thinking and helps one notice when one needs to learn something to help with the problem. For example, if I don't know binary search, then I don't think incremental design would guide me from linear search to binary search, but the act of trying to build the search feature incrementally _might_ lead me to consider other ways to search. Seeing dozens of examples of searching might give me enough information (I'd see patterns) to have the insight that "with a sorted search space, I can jump around and certain useful invariants hold". Of course, seeing examples is just one way to gain that insight; different people have different ways of getting there. I encourage you to learn how to put your mind in the state that tends to lead it to insight more easily.
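For reference, the linear-to-binary-search jump described above, sketched as code (a standard textbook implementation, assuming a sorted ascending array, not something from the thread):

```typescript
// The insight: with a sorted search space, we can jump around,
// and the invariant "target, if present, lies in [lo, hi]" holds.
function binarySearch(sorted: number[], target: number): number {
  let lo = 0;
  let hi = sorted.length - 1;
  while (lo <= hi) {
    const mid = (lo + hi) >> 1;          // midpoint of the remaining range
    if (sorted[mid] === target) return mid;
    if (sorted[mid] < target) lo = mid + 1;  // discard the left half
    else hi = mid - 1;                       // discard the right half
  }
  return -1; // not found
}
```

No number of linear-search tests forces this shape; it comes from the insight about sortedness, which is the point being made.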
When I started practising TDD, I spent some months establishing the habit of thinking about tests first. This included choosing to write some code test-first and refactoring it incrementally, _even when I didn't need it_. Once I established more-helpful habits, I stopped approaching TDD so strictly, and instead trusted myself to use any tricks I knew to write code, confident that I would add tests and refactor safely when I found that useful. I don't think I would force myself to answer every interview question by using TDD, although I would probably apply the general principle of "make it run, make it work, make it fast" somehow. This might mean starting with the O(n^2) implementation, then spending the remaining time figuring out how to improve it. It depends significantly on what I believe the interviewer wants: do they prefer a slow-but-working solution or do they prefer to see more of my thinking on an incomplete solution that goes in the right general direction? If I don't know what to do, I just guess and hope that I'm right.
I will say this about your problem statement: given the examples you showed, I have one important question to ask: is the input array sorted or not? I would approach the problem very differently if it were sorted than if it weren't.
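The sorted-input version the question alludes to (sort + two pointers) could be sketched like this; the implementation is assumed, not quoted from anyone in the thread:

```typescript
// Two-pointer approach, assuming the input is sorted ascending.
// O(n) time after an O(n log n) sort of unsorted input.
function hasPairSumSorted(sorted: number[], target: number): boolean {
  let lo = 0;
  let hi = sorted.length - 1;
  while (lo < hi) {
    const sum = sorted[lo] + sorted[hi];
    if (sum === target) return true;
    if (sum < target) lo++;  // need a larger sum: move the left pointer right
    else hi--;               // need a smaller sum: move the right pointer left
  }
  return false;
}
```

Note that it passes exactly the same four example cases as the naive version, which is why strictly test-driven steps never demand the switch.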
Finally, regarding adding extra tests, I do that, but I spent several months practising _not_ doing that precisely in order to understand when I need it and when I don't. I see a pattern among the enthusiastic programmers learning TDD/test-first programming/evolutionary design: they practise, but they don't clearly enough identify when they're following a set of rules _for practice_ or _to perform_. The implied rule here, "I will only add tests when they force me to change the code", makes perfect sense in a context of deliberate practice, but I don't always follow it when writing code for pay. I followed it long enough to understand why it helped, and I follow it when I notice myself falling back into bad old habits.
J. B. Rainsberger