Re: [TDD] Long running test suite
Empty out the integration and UI tests and sub in more microtests.

On Mon, May 23, 2016, 12:15 'Niklas Bergkvist' niklas.bergkvist@... [testdrivendevelopment] <testdrivendevelopment@...> wrote:
Re: [TDD] Long running test suite
A common approach is to split your tests by type (unit, integration, UI), or at least by speed (fast, slow). Then, always run the fast tests before check-in, but let the build/CI server run the slow tests (all the tests). This way, you're able to check in frequently but you still have the CI server there to catch anything your unit tests (the fast ones) miss. If you're not sure whether your tests are unit or integration tests, I wrote about the differences a while ago here:

Steve

On Sun, May 22, 2016 at 11:25 PM, Joselito D Moreno joenmoreno@... [testdrivendevelopment] <testdrivendevelopment@...> wrote:
--
Steve Smith
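Steve's fast/slow split can be sketched as follows. This is a minimal illustration assuming plain `unittest`; the `@slow` tag, the test class, and the loader helper are hypothetical names invented here (pytest users would typically use `@pytest.mark.slow` and run `pytest -m "not slow"` before check-in instead).

```python
# Minimal sketch: tag slow tests, run only the fast ones pre-commit,
# and let the CI server run everything.
import time
import unittest

def slow(test_func):
    """Tag a test as slow so the pre-commit run can skip it."""
    test_func._slow = True
    return test_func

class CalculatorTests(unittest.TestCase):
    def test_add_is_fast(self):          # a microtest: runs in microseconds
        self.assertEqual(1 + 2, 3)

    @slow
    def test_end_to_end_is_slow(self):   # stands in for an integration test
        time.sleep(0.01)
        self.assertTrue(True)

def load_fast_only(test_case_class):
    """Build a suite containing only the tests not tagged @slow."""
    suite = unittest.TestSuite()
    for name in unittest.defaultTestLoader.getTestCaseNames(test_case_class):
        if not getattr(getattr(test_case_class, name), "_slow", False):
            suite.addTest(test_case_class(name))
    return suite

if __name__ == "__main__":
    # Pre-commit: only the fast suite. CI would run the full loader instead.
    unittest.TextTestRunner().run(load_fast_only(CalculatorTests))
```

The same idea scales to any runner that supports tagging or filtering; the point is that the pre-commit command and the CI command differ only in which tags they include.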
Re: [TDD] Long running test suite
Niklas Bergkvist
Hi,
I'd say you have a good opportunity for refactoring! If your tests cover your functionality, go ahead and refactor into smaller functional units. Good luck!

/niklasb

<-----Original Message----->
Re: [TDD] Long running test suite
I don't know if it still works this way, but the Flex framework used to run a dedicated nightly build for the UI tests that could take a really long time to run. They had their own Jenkins build devoted to them and turned them off for other builds. The only way to handle long tests like that, IMO, is to accept that they are not tests for things which should be fixed within the day, and can be fixed or examined on the next day after the tests have run.

brought to you by the letters A, V, and I and the number 47

On Mon, May 23, 2016 at 1:25 PM, Russell Gold russ@... [testdrivendevelopment] <testdrivendevelopment@...> wrote:
Re: [TDD] Long running test suite
It might well depend on the size and complexity of the system. I agree that ten minutes is long, but on a system with several million lines of code and tens of thousands of unit tests, it could take quite a while, especially if your build tool runs each test class separately to prevent tests from affecting one another.

There are a few things you could look into. First, as Adam notes, you may have a lot of tests which are too slow. Unit tests should be very, very fast; if they are taking more than a few milliseconds each, they are too slow and need to be rewritten. Second, your system could be too large. There is no reason that a single releasable item should be millions of lines long. Ideally, you should be able to break it into smaller projects which integrate.

But, a bit of perspective. The ideal world isn't always the real one. I have worked on enormously large systems that had very poor unit test coverage, and used functional tests for everything. The full check-in suite on one ran over 8 hours, which was mitigated by building once (which took over an hour by itself) and then spawning each test to a separate computer, leading to check-in times of 2-3 hours. Very, very painful.

-----------------
Author, Getting Started with Apache Maven <>
Author, HttpUnit <> and SimpleStub <>
Come read my webnovel, Take a Lemon <>, and listen to the Misfile radio play <>!
Re: [TDD] Long running test suite
25 minutes is ridiculously long. The old XP rule was 10 minutes, and that was with 90s hardware. I start to get annoyed when it takes more than a few seconds. Likely the problem is too many of the wrong kinds of tests. What you want are what our friend Mike Hill calls microtests: small tests that exercise one behavior of one interface in isolation. What you don't want is things that emulate users poking at a real UI. Anything in between is diminishing returns. A couple of integration tests will give you some confidence. A bunch of them will get so annoying that you stop paying attention ("just run it again...").

On May 22, 2016, at 20:25, "Joselito D Moreno joenmoreno@... [testdrivendevelopment]" <testdrivendevelopment@...> wrote:
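A microtest in the sense above can be sketched like this; `ShoppingCart` and its test are hypothetical names invented purely for illustration:

```python
# Sketch of a microtest: one behavior of one interface, in isolation,
# with no UI, no network, and no database. It should run in microseconds.
import unittest

class ShoppingCart:
    def __init__(self):
        self._prices = []

    def add(self, price):
        if price < 0:
            raise ValueError("price must be non-negative")
        self._prices.append(price)

    def total(self):
        return sum(self._prices)

class ShoppingCartTotalTest(unittest.TestCase):
    """One behavior under test: total() sums the added prices."""
    def test_total_sums_added_prices(self):
        cart = ShoppingCart()
        cart.add(3)
        cart.add(4)
        self.assertEqual(cart.total(), 7)
```

A suite built almost entirely of tests this small stays fast even at tens of thousands of tests, which is what makes the pre-commit run bearable.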
Re: [TDD] Long running test suite
We use <>. Be sure not to run more parallel test processes than you have CPU cores available, or the tests won't run as fast as they can.

--
Albert Davidson Chou

On Sunday, May 22, 2016 8:25 PM, "Joselito D Moreno joenmoreno@... [testdrivendevelopment]" wrote:
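Albert's advice about not exceeding the core count can be sketched in Python; `run_one_test` is a hypothetical stand-in for launching a real test module or suite:

```python
# Sketch: run tests in parallel, but cap the worker count at the number of
# CPU cores so processes don't contend for the same core.
import os
from concurrent.futures import ProcessPoolExecutor

def run_one_test(name):
    # Placeholder: real code would shell out to the test runner or import
    # and execute the named test module, returning its result.
    return (name, "passed")

def run_suite_in_parallel(test_names):
    # Never start more worker processes than there are cores available.
    workers = min(len(test_names), os.cpu_count() or 1)
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(run_one_test, test_names))
```

Most parallel runners expose the same knob directly (e.g. a `-n`/`--workers` flag); the sketch just shows why its ceiling should be the core count.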
Long running test suite
Hello,

To those who have experienced test suites that run long, say ~25 minutes, what are some techniques you have used to mitigate it? We still would like to be able to check in often and run the test suites prior to checking our code in to our repository, but these long-running tests make us stretch what we mean by "often".

Joen
Re: [TDD] How to use TDD to test a process?
There's always Growing Object-Oriented Software, Guided by Tests by Steve Freeman and Nat Pryce if you want to know more about the mockist approach.

On 28 April 2016 at 04:05, sailor.gao@... [testdrivendevelopment] <testdrivendevelopment@...> wrote:
Re: [TDD] How to use TDD to test a process?
Keith Ray
If you're using C or C++ I recommend James Grenning's book Test Driven Development for Embedded C.

C. Keith Ray
twitter: @ckeithray

On Apr 27, 2016, at 2:56 AM, 'Niklas Bergkvist' niklas.bergkvist@... [testdrivendevelopment] <testdrivendevelopment@...> wrote:
Re: [TDD] How to use TDD to test a process?
On Tue, Apr 26, 2016 at 10:24 PM, sailor.gao@... [testdrivendevelopment] <testdrivendevelopment@...> wrote:
I recommend you read the following two books:

* TDD by Example - Kent Beck
* Growing Object-Oriented Software, Guided by Tests - Steve Freeman and Nat Pryce

Abraços,
Josué
Re: [TDD] How to use TDD to test a process?
Niklas Bergkvist
Sailor - read the book Test Driven Development: By Example by Kent Beck.
br, niklasb

<-----Original Message----->
Re: [TDD] How to use TDD to test a process?
Thanks for the reply. Nice of you.

Actually I don't know how to write the test cases (step by step using TDD). I know we should code to the interface, not the implementation, and we should consider the requirement, not how to implement it. But in TDD, does it also work the same way?

Thank you!

Best regards,
Sailor Gao
Re: [TDD] How to use TDD to test a process?
Donaldson, John
Sailor,
If you have a good mocking tool it's very little effort to create a mock object. Then you can make assertions such as which methods should be called, and in what order. (For example Mockito.)

Do you need to test the process? Are you worried about the conditional aspects of your process? Then you could also consider extracting the conditions out into a separately testable unit.

John D.

From: testdrivendevelopment@... [mailto:testdrivendevelopment@...]
Sent: 26 April 2016 03:31
To: testdrivendevelopment@...
Subject: RE: [TDD] How to use TDD to test a process?

Thanks for the reply. Yes, actually I don't want to know the side effect. I just want to test the logic. I had planned to use a stub or mock, but there are several branches, so I thought it would be a little duplicated: to get good code coverage, in every case I have to write the stub or mock for the methods called. Could you give me some advice? Thank you.
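John's mock-based assertions can be sketched with Python's `unittest.mock` standing in for Mockito; `OrderProcessor` and its gateway collaborator are hypothetical names invented for illustration:

```python
# Sketch: use a mock to assert which methods a process calls, with what
# arguments, and in what order, without executing any real side effects.
from unittest.mock import Mock, call

class OrderProcessor:
    def __init__(self, gateway):
        self._gateway = gateway

    def process(self, order_id):
        self._gateway.reserve(order_id)   # the process under test:
        self._gateway.charge(order_id)    # reserve first, then charge

def test_process_reserves_before_charging():
    gateway = Mock()                      # records every call made on it
    OrderProcessor(gateway).process(42)
    # Asserts the two calls happened, with these arguments, in this order.
    gateway.assert_has_calls([call.reserve(42), call.charge(42)])
```

In Mockito the equivalent would be an `InOrder` verifier; the idea is the same: the test pins down the interaction protocol rather than the side effects.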
Re: [TDD] How to use TDD to test a process?
Here is a list of possible tests for the UserService.Record method:

* Record__UserWithSameNameAlreadyExists__DeleteTheUser
  // Asserts only on calling the method _userDao.Delete with appropriate arguments - you have to use a mock to do this
* Record__UserWithSameNameAlreadyExists_DeleteFailed__ReturnsFalse
  // Asserts on the method Record returning false - you have to use a stub here to ensure that Delete returns false
* Record__UserWithSameNameAlreadyExists__SavesNewUserDetails
  // Asserts only on calling the method _userDao.Save with appropriate arguments (the passed user in this case) - again a mock is needed
* Record__UserWithSameNameAlreadyExists_SavingFailed__ReturnsFalse
  // Asserts again that the method Record should return false - a stub for the method Save to return false

(The test names are written using the notation __)

Please note that this is the minimum number of tests that the method Record can have (given its cyclomatic complexity of 4). There may be other cases, which evolve depending on the data complexity, which gives more dimensions to the above tests. For example, there may be null data, which is probably expected/not expected to be handled by the method.

It is also usual to be tempted to assert on more than one expected result in a single test, but best practice prescribes not to do so, as it may not be clear which expectation failed.

One way to check whether you have enough tests is to see if the list of tests gives you the functional specification of the method under test. This is an even more efficient and useful technique in a test-first approach, where the unit test spec can be checked before coding. Also, it may make it possible to sort out glitches such as whether the userDao actually expects a user name or an id.

Best Regards,
Surya
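The first two tests in the list might look like this in Python with `unittest.mock`; the shape of `UserService` and the DAO is an assumption reconstructed from the test names, not the original code:

```python
# Sketch of the first two tests from the list above. The mock test asserts
# on an outgoing call; the stub test forces Delete to fail and asserts on
# the return value.
from unittest.mock import Mock

class UserService:
    """Hypothetical reconstruction of the method under discussion."""
    def __init__(self, user_dao):
        self._user_dao = user_dao

    def record(self, user):
        existing = self._user_dao.find_by_name(user["name"])
        if existing is not None:
            if not self._user_dao.delete(existing):
                return False          # Delete failed -> Record returns False
        return self._user_dao.save(user)

def test_record__user_with_same_name_already_exists__delete_the_user():
    dao = Mock()                                    # mock role: verify a call
    dao.find_by_name.return_value = {"id": 1, "name": "ann"}
    dao.delete.return_value = True
    dao.save.return_value = True
    UserService(dao).record({"name": "ann"})
    dao.delete.assert_called_once_with({"id": 1, "name": "ann"})

def test_record__delete_failed__returns_false():
    dao = Mock()                                    # stub role: force a value
    dao.find_by_name.return_value = {"id": 1, "name": "ann"}
    dao.delete.return_value = False
    assert UserService(dao).record({"name": "ann"}) is False
```

Each test asserts exactly one expectation, matching the advice above about keeping failures unambiguous.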