
Re: Classifying tests: problem? solution? something else?


 

Totally with you, Maurício, there.

Having been responsible for (having inherited) a 40-minute test suite on an app worked on by hundreds of devs: determinism and cost were high on my priority list.

Fox
---

On Thu, 6 Jul 2023, 06:06 Mauricio Aniche, <mauricioaniche@...> wrote:
In the past two years, during which I have been working on a codebase with hundreds of thousands of tests and almost a hundred different teams touching it, I have started to "care less" about semantically classifying tests. That is, team members can come up with an agreement about what makes the most sense to them in their context. Do we really need a single company-wide classification?

Nowadays, I really care about classifying tests in terms of their infrastructure costs. This matters globally and must be defined at the company level, because although code isn't (well, sometimes it is) shared among teams, resources are. Reliability is another category I care about. You want the tests you run pre-merge to give you a 100% sound signal.

Do we allow multithreading in our unit test suite, or should those tests be somewhere else? Do we allow mock servers in it? When do we need to run all the tests, and when can we just run a subset of them? How can we bring (costly) integration tests into the pre-merge? What do we do with flaky tests: should we delete them, should we keep them there waiting for someone to fix them, should we move them to another place? These are the questions that have been on my mind when I talk about segregating tests.
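One way to make cost-based segregation concrete is to tag each test with its infrastructure cost and let the pre-merge run select only the cheap ones. The sketch below is a minimal, hand-rolled illustration of that idea in plain Java, using a hypothetical `@InfraCost` annotation and reflection; a real project would more likely use an existing mechanism such as JUnit 5's `@Tag` with build-tool filtering.

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;

public class CostFilteredSuite {
    // Hypothetical cost categories; the names are illustrative only.
    enum Cost { CHEAP, COSTLY }

    // Hypothetical marker annotation recording a test's infrastructure cost.
    @Retention(RetentionPolicy.RUNTIME)
    @interface InfraCost { Cost value(); }

    // A toy suite mixing cheap in-memory tests with a costly one.
    static class SampleTests {
        @InfraCost(Cost.CHEAP)  public void parsesInput() {}
        @InfraCost(Cost.CHEAP)  public void formatsOutput() {}
        @InfraCost(Cost.COSTLY) public void talksToMockServer() {}
    }

    // Select only the tests whose declared cost we allow in this run.
    static List<String> select(Class<?> suite, Cost allowed) {
        List<String> names = new ArrayList<>();
        for (Method m : suite.getDeclaredMethods()) {
            InfraCost c = m.getAnnotation(InfraCost.class);
            if (c != null && c.value() == allowed) names.add(m.getName());
        }
        return names;
    }

    public static void main(String[] args) {
        // Pre-merge: run only the cheap subset for a fast, sound signal.
        List<String> preMerge = select(SampleTests.class, Cost.CHEAP);
        System.out.println(preMerge.size() + " cheap tests selected for pre-merge");
    }
}
```

The point of the sketch is that the cost classification lives next to the test itself, so the "which subset runs when" decision can be made centrally, at the company level, without each team re-litigating it.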

Cheers,
Mauricio


On Tue, 4 Jul 2023 at 18:19, George Dinwiddie <lists@...> wrote:
I agree that the naming can be confusing because often the same name
means different things to different people. I don't get too hung up on
the naming of types of tests (though I love GeePaw's "microtests" because
it gets out of the "unit test" mire). Instead, I try to talk about the
meaning the other person has behind the name.

When I started doing TDD, I sorted my tests into three categories:
 - "unit tests" which tested in-memory code without any other dependencies
 - "database tests" which tested code dependent on the database. This
led me to using the Adapter Pattern so I could isolate my unit tests
from the database and test only the adapter against a real database.
 - "deployed tests" which required the system to be deployed in order
to run. These tended to be "story tests," though I found that by
delegating from the requirements of the app server (J2EE in those days)
to Plain Old Java Objects with the same API, I could implement most of
the story tests the same way as unit tests, so "story tests" became
another category.

Eventually I had need to call other systems beyond the database, so
those became another classification of tests, but done in the same
fashion as the database tests.
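The Adapter Pattern split described above can be sketched in a few lines of plain Java. The names (`UserStore`, `InMemoryUserStore`, `greet`) are hypothetical, not from the original post: the business code depends only on an interface, the unit tests exercise it against an in-memory implementation, and only the real database-backed adapter needs the separate "database tests".

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

public class AdapterSketch {
    // The "port": the rest of the code depends only on this interface.
    interface UserStore {
        void save(String id, String name);
        Optional<String> find(String id);
    }

    // In-memory implementation used by the "unit tests"; no database involved.
    static class InMemoryUserStore implements UserStore {
        private final Map<String, String> users = new HashMap<>();
        public void save(String id, String name) { users.put(id, name); }
        public Optional<String> find(String id) {
            return Optional.ofNullable(users.get(id));
        }
    }

    // A real JDBC-backed adapter would implement the same interface and be
    // exercised only by the separate "database tests", against a real database.

    // Business logic under unit test knows nothing about persistence details.
    static String greet(UserStore store, String id) {
        return store.find(id).map(name -> "Hello, " + name).orElse("Who?");
    }

    public static void main(String[] args) {
        UserStore store = new InMemoryUserStore();
        store.save("42", "George");
        System.out.println(greet(store, "42"));
    }
}
```

This is the same move as delegating from the app server to Plain Old Java Objects: once the dependency hides behind an interface, most tests can run fast and in memory, and only a thin adapter layer needs the expensive category.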

 - George

On 7/4/23 9:38 AM, J. B. Rainsberger wrote:
> > I think the test name separation by unit test/integration test/micro
> test/behaviour test doesn't work, but I'm not sure what's the "sensible"
> way to separate them yet. Like fast/slow/IO/non-IO?
> business/technical? structure vs behaviour?
>
> I'm curious about this, because I hear it from time to time.
>
> I have some questions, and we don't have to limit those to Tony,
> although I'm also interested in his replies:
>
> 1. What kind of "doesn't work"? It works for me, so maybe we have
> different ideas about how it could work or should work.
>
> 2. I classify tests in order to better understand all the different
> kinds of intent and purpose we have when we write them. This helps me
> decide how to choose which tests to write next. What challenges do you
> have with all these test classifications?
>
> 3. Some people report that there are too many test classifications to
> understand well. They become confused. I empathize. Why don't you simply
> ignore those classifications until you need them?
>
> Finally, as for the difference between business and technical tests,
> when I talk about TDD I tend to focus on technical tests, because that's
> my context for TDD: to focus on the code. I handle Customer Tests (or
> business tests) quite differently, and I only sometimes practise what
> we've called Story-TDD or Acceptance-TDD. I practise it most often when
> I play the role of Customer for myself, such as on my volunteer
> projects. I try _very hard_ to clarify this for the people I teach, but
> I always run the risk that the message doesn't make it through.
> --
> J. B. (Joe) Rainsberger :: tdd.training
>
> Replies from this account routinely take a few days, which allows me to
> reply thoughtfully. I reply more quickly to messages that clearly
> require answers urgently. If you need something from me and are on a
> deadline, then let me know how soon you need a reply so that I can
> better help you to get what you need. Thank you for your consideration.
>
> --
> J. B. (Joe) Rainsberger
> Teaching evolutionary design and TDD since 2002
>

--
  ----------------------------------------------------------------------
   * George Dinwiddie *
   Software Development
   Consultant and Coach
  ----------------------------------------------------------------------






--
Maurício Aniche
Author of
