Multiple Executable Unit-Tests #9

Open
bassoy opened this issue May 28, 2018 · 11 comments
Labels: discussion (Discussion for future refactoring), gsoc (Google Summer of Code)

Comments

bassoy commented May 28, 2018

Boost's unit-test framework allows us to define test suites and organize tests. Do we need to provide multiple executables for the different matrix and vector unit tests? We could select each test suite and each test conveniently using command-line arguments.

Note that with multiple executables the matrix and vector headers are included, and their class templates instantiated, multiple times!
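
A minimal sketch of the single-executable layout this suggests (the suite and test names here are made up for illustration, not taken from the actual test code):

```cpp
// test_main.cpp -- one test executable for all uBLAS tests
#define BOOST_TEST_MODULE ublas_test
#include <boost/test/included/unit_test.hpp>
#include <boost/numeric/ublas/matrix.hpp>
#include <boost/numeric/ublas/vector.hpp>

// One suite per component; suites organize the test tree.
BOOST_AUTO_TEST_SUITE(test_matrix)

BOOST_AUTO_TEST_CASE(default_construction)
{
    boost::numeric::ublas::matrix<double> m(3, 3);
    BOOST_CHECK_EQUAL(m.size1(), 3u);
    BOOST_CHECK_EQUAL(m.size2(), 3u);
}

BOOST_AUTO_TEST_SUITE_END()

BOOST_AUTO_TEST_SUITE(test_vector)

BOOST_AUTO_TEST_CASE(default_construction)
{
    boost::numeric::ublas::vector<double> v(3);
    BOOST_CHECK_EQUAL(v.size(), 3u);
}

BOOST_AUTO_TEST_SUITE_END()

// Select a suite or a single test from the command line:
//   ./ublas_test --run_test=test_matrix
//   ./ublas_test --run_test=test_vector/default_construction
```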

@bassoy bassoy added the discussion Discussion for future refactoring label May 28, 2018

bassoy commented May 28, 2018

For instance, the tensor-addition tests define BOOST_TEST_DYN_LINK to link the test module dynamically against the unit-test framework.
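
For reference, a sketch of the dynamic-linking setup (the module name and test body are illustrative):

```cpp
// Link against the shared unit-test framework instead of compiling
// it in; requires linking with -lboost_unit_test_framework.
#define BOOST_TEST_DYN_LINK
#define BOOST_TEST_MODULE tensor_addition
#include <boost/test/unit_test.hpp>

BOOST_AUTO_TEST_CASE(basic_check)
{
    BOOST_CHECK(true);  // placeholder test body
}
```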

@stefanseefeld

I'd prefer multiple test executables to isolate tests as much as possible. There are of course tradeoffs to be made: if most of the code is shared, we'd pay a performance penalty for compiling multiple test executables.

bassoy commented May 28, 2018

Tests are already well isolated through test_suite: you can run each suite and each test in isolation. I am not sure what the benefits of multiple test executables would be.

@stefanseefeld

A few reasons:

  • compilation time
  • compilation failures in a single test case will prevent all test cases from the same binary from running

bassoy commented May 28, 2018

compilation time

I would have thought just the contrary: with multiple test executables you include the matrix and vector headers multiple times; with one executable, only once.

compilation failures in a single test case will prevent all test cases from the same binary to run

No, it will continue with the remaining tests and report which ones failed and which succeeded.

@stefanseefeld

Right, so let me elaborate: Depending on the code layout, the total compilation time may be reduced if we only compile a single executable. However, total compilation time isn't all that counts. It might be more useful to get feedback from individual test cases as early as possible, so interleaving compilation and running test cases, rather than compile-all, then run-all, may be preferable.
If all the tests are to be performed through a single test executable, and the executable can't be built because code from one specific test case fails to compile, no test case will be able to run. Using multiple test executables solves that particular issue.

bassoy commented May 28, 2018

Okay, but this means we would be limiting ourselves to a subset of the Boost unit-test framework's features because we fear that someone might push broken code to the dev branch.

Would pull requests with broken (untested) code be accepted into the dev branch?
Is pushing to the development branch without unit testing allowed?
Isn't exactly this issue the reason for having a development and a master branch (with additional feature branches), each with their own unit tests and continuous integration?

@stefanseefeld

I don't think the usefulness of tests should be measured exclusively in a context in which all tests pass. Quite the opposite! :-)

Yes, ideally all tests always pass, in particular before a pull request is accepted and merged. But unit testing is an important tool to get there, not only once you have arrived. (And don't forget the scenario where initially all tests pass, then someone introduces a change that causes regressions. It's much harder to understand exactly which change causes the regression if all of a sudden the entire test suite fails, rather than just one test.)

What features are we missing if we split test suites into multiple executables?

bassoy commented May 28, 2018

It's much harder to understand exactly which change causes the regression if all of a sudden the entire test suite fails, rather than just one test.

Right, but if just one test fails, the other tests will still run. How quickly you find the problem depends on how you design your tests within the test suites. Note that I would still have multiple cpp files, where each cpp file may declare one test suite, logically categorizing my unit tests, but only one executable. If my test suite in test_tensor_strides.cpp fails, the suite in test_tensor_extents.cpp still runs.
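
A sketch of that layout, assuming the dynamic-linking setup from above (the file names test_tensor_strides.cpp and test_tensor_extents.cpp are taken from this comment; the test bodies are placeholders):

```cpp
// test_main.cpp -- defines the module, contains no tests itself
#define BOOST_TEST_DYN_LINK
#define BOOST_TEST_MODULE tensor_test
#include <boost/test/unit_test.hpp>

// test_tensor_extents.cpp -- one suite per translation unit
#define BOOST_TEST_DYN_LINK
#include <boost/test/unit_test.hpp>

BOOST_AUTO_TEST_SUITE(test_tensor_extents)
BOOST_AUTO_TEST_CASE(check_extents) { BOOST_CHECK(true); }
BOOST_AUTO_TEST_SUITE_END()

// test_tensor_strides.cpp -- compiled into the same executable
#define BOOST_TEST_DYN_LINK
#include <boost/test/unit_test.hpp>

BOOST_AUTO_TEST_SUITE(test_tensor_strides)
BOOST_AUTO_TEST_CASE(check_strides) { BOOST_CHECK(true); }
BOOST_AUTO_TEST_SUITE_END()

// If test_tensor_strides fails at runtime,
// test_tensor_extents still runs and is reported.
```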

What features are we missing if we split test suites into multiple executables?

  • With Boost unit test you can define multiple test suites with BOOST_AUTO_TEST_SUITE, which simplifies the construction of the test tree.
  • I can categorize unit tests with test suites and declare dependencies between them (see test dependencies).
  • I can enable and disable test suites and/or individual tests easily using command-line options. This is very convenient when a test does not pass: you can simply disable all other tests on the command line. Currently, we have to modify the Jamfile to disable tests, which to me is an awful approach.
  • I can control the logging output for the complete framework in one place. Just imagine you would like to change the verbosity of the output: with multiple separate test executables, you would need to do it for each of them. (See the sketch after this list.)
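
A sketch of the last three points, assuming Boost ≥ 1.59 (the suite and test names are illustrative):

```cpp
#define BOOST_TEST_DYN_LINK
#define BOOST_TEST_MODULE tensor_test
#include <boost/test/unit_test.hpp>

namespace utf = boost::unit_test;

BOOST_AUTO_TEST_SUITE(test_tensor_extents)
BOOST_AUTO_TEST_CASE(check_extents) { BOOST_CHECK(true); }
BOOST_AUTO_TEST_SUITE_END()

BOOST_AUTO_TEST_SUITE(test_tensor_strides)

// This test only runs if the extents test has passed.
BOOST_AUTO_TEST_CASE(check_strides,
    * utf::depends_on("test_tensor_extents/check_extents"))
{
    BOOST_CHECK(true);
}

BOOST_AUTO_TEST_SUITE_END()

// Enable/disable suites and control logging from the command line,
// without touching the Jamfile:
//   ./tensor_test --run_test=test_tensor_strides      # run one suite
//   ./tensor_test --run_test='!test_tensor_strides'   # disable one suite
//   ./tensor_test --log_level=all --report_level=detailed
```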

stefanseefeld commented May 28, 2018 via email

bassoy commented May 29, 2018 via email

@bassoy bassoy added the gsoc Google Summer of Code label Jul 25, 2018