All of it very interesting indeed!
Wrt. integration tests -> I've recently split our IT suite into two so that we can easily run more of our ITs during build. Now we have:
- "Integration Tests – Local": all tests that reach outside of the process, but the access is local only, no network calls. Examples: everything using the filesystem (and we have a few layers building on top of each other) or calling OS APIs like time functions, locale, etc.
These can now run in our team build.
- "Integration Tests – Authenticated": tests that make authenticated network calls (like calling cloud APIs). These are not easy to run in our build: first, because we don't allow credentials of any kind to be checked in; second, the build process does not have access to the public internet. There's a way to allow the build to query for credentials, but it's not trivial and a "later" concern for today. Those we run locally using each user's locally stored authentication.
- (There would be a third one – Public Internet Unauthenticated – but we don't make such calls in our code.)
- I also sometimes wonder if it'd make sense to have partial UI tests as ITs. Today I consider those e2e tests.
UTs – anything that can run in-memory, minus slow (>30ms) ones, minus tests that span architectural boundaries (even if configured in memory).
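For what it's worth, that split can be read as a small decision rule. Here's a minimal sketch in Python; the function name, parameters, and especially the handling of slow or boundary-spanning in-memory tests (which the split above excludes from UTs without saying where they go) are my own assumptions, not anything stated above:

```python
# Sketch of the suite split described above. All names are hypothetical,
# and the fall-through for slow/boundary-spanning in-memory tests is an
# assumption: the taxonomy only says they are not UTs.

def categorize_test(in_memory: bool,
                    duration_ms: float,
                    spans_boundary: bool,
                    network_call: bool,
                    authenticated: bool) -> str:
    if network_call:
        # Third bucket is mentioned above but unused in this codebase.
        return ("Integration Tests - Authenticated" if authenticated
                else "Integration Tests - Public Internet Unauthenticated")
    if not in_memory:
        # Filesystem, OS APIs (time, locale, ...): local-only access.
        return "Integration Tests - Local"
    if duration_ms > 30 or spans_boundary:
        # Excluded from UTs by the >30ms / boundary rules above;
        # where these land instead is not specified.
        return "not a UT (category left open)"
    return "UTs"
```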
It's interesting to me.
Today I wrote a test confirming that a queue writes the correct changes to a DB.
I started by writing the test in our e2e folder, because I assumed the test would take multiple seconds. However, since we are using Docker containers for all the DBs and queue services, I was actually able to remove all my polling loops waiting for data to be updated, and the test looks like a typical unit test (it's 5 lines long), though it takes 700ms to execute in the IDE.
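The shape of such a test might look something like this. A minimal sketch where the queue and DB are in-memory stand-ins (in the real suite they would be containerized services) and every name is hypothetical; the point is only the structure: no polling loop, just act and assert:

```python
# Hypothetical sketch of a queue-to-DB test of the kind described above.
# FakeDb/FakeQueue are in-memory stand-ins for containerized services.

class FakeDb(dict):
    """Stand-in for the database container."""

class FakeQueue(list):
    """Stand-in for the queue-service container."""
    def drain_to(self, db: FakeDb) -> None:
        # Apply each queued (key, value) change to the DB, in order.
        while self:
            key, value = self.pop(0)
            db[key] = value

def test_queue_writes_correct_changes():
    db, queue = FakeDb(), FakeQueue()
    queue.append(("order-42", "shipped"))
    queue.drain_to(db)  # synchronous here, so no waiting/polling needed
    assert db["order-42"] == "shipped"

test_queue_writes_correct_changes()
```

With real containers the drain step wouldn't be synchronous, which is where the polling loops the author removed would otherwise come in.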
I used to be able to distinguish unit/integration/e2e by saying unit tests all run in memory, integration involves one network call, and e2e involves multiple network/frontend calls and most closely resembles what an end user would experience.
But now, my faith in such distinctions is fading.
On Fri, 13 May 2022, 17:14 J. B. Rainsberger, <me@...> wrote:
It's ALL jargon! In the first meaning (in most dictionaries I checked) of "language peculiar to a particular trade, profession, or group," rather than in the sense of something negative or not understood.
Yup. And since human nature is to let meanings wander but keep the words the same, if we want a self-regulating system, then we need some people to nudge us back in the direction of using words that convey more broadly the meanings we wish
to convey. I am one of those people. :) I have no illusion about fixing anything, but if I can make it easier for more people to understand us, I'll do that.
The "micro-test" movement has grown rather large, so I think you should critique it (if you want to) by using the definitions that
already exist, not just what the word "sounds like."
Well... "micro" suggests "small" and I happen to know the origin of the term as well as its originators, so I feel pretty confident in clarifying the original intention. :)
Our TDD, XP and Agile movements have been plagued by the search for terms that everyone will automatically understand (in the same way we understand them) upon first encountering them. There really are no terms like that. In the rather basic
dictionary I have by my desk, there are 14 definitions of "test" and 13 for "unit." ("Behavior" has only four, which will make some
proponents happy, but they are mostly pretty non-specific and don't include "Category for which Charlie used to get a low
grade on his report card!")
Generally, to understand the "jargon" of a group, you need at least a definition. Sometimes you may need to read an article,
have a conversation or even digest a book or two. While that may sound like a problem, I've never seen it arise within the
actual teams that do the work. Of course, when we get on the internet with people following different usage, we may not
immediately understand one another. I think that's just a fact of life. We should mitigate it by not calling "hot" things "cold"
or "fast" stuff "slow" or "small" tests "big". But non-self-explanatory terms are easily bested by people who work at understanding.
Yup. I have no illusions of success, but I see the value in nudging. Either it helps or it doesn't. We all have different pet projects. I don't mind.
I've encountered a person who believed that "refactor" means "add a feature" and there is much confusion about what "integration test" means. Sometimes big groups come to understand X to mean not X. If I can help reduce that confusion,
I'm happy to continue, at least in my spare time.
--
J. B. (Joe) Rainsberger
--
J. B. (Joe) Rainsberger
Teaching evolutionary design and TDD since 2002