Generate code coverage reports #157
I think it should be possible to generate code coverage using tox with:
However, that fails partway through the notebook. A theory: enabling code coverage changes the cell execution count numbering, so the attempt to instantiate a model fails due to the issue described in #156.
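The command referred to above is missing from this thread, but a minimal sketch of a coverage-enabled tox environment might look like the following (the package name `mumot` and the `tests/` path are assumptions, not taken from the repository):

```ini
# Hypothetical tox.ini fragment: run pytest with coverage measurement enabled.
# "mumot" and "tests/" are assumed names for the package and test directory.
[testenv]
deps =
    pytest
    pytest-cov
commands =
    pytest --cov=mumot --cov-report=term tests/
```

Running `tox` would then print a per-module coverage summary to the terminal after the test run.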
Tried enabling code coverage report generation and uploading to codecov.io in PR #178 and 57f9928. The Travis CI build logs show the code coverage report being sent to stdout, but attempts to upload the report to codecov.io fail:
Was the code coverage report actually written to disk?
A colleague has deployed this successfully on their GitHub repository, I think; shall I ask them for some further information?
@jarmarshall Thanks, that might expedite things.
Hi @willfurnass, I am that colleague! Our current project is C++-based so we're using gcov to generate our coverage data, meaning that things are a little different. However, you should definitely be writing your coverage data to a file rather than to stdout and, if the codecov script still isn't finding it, you can also manually point the script at your coverage data file. For example, we do this:
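The gcov-specific snippet is missing above. For a Python project, the equivalent Travis setup might look like the following sketch, which writes the coverage report to a file and passes it explicitly to the Codecov bash uploader via its `-f` option (the `coverage.xml` filename and the `mumot` package name are assumptions):

```yaml
# Hypothetical .travis.yml fragment: write coverage to disk, then upload
# that specific file to codecov.io rather than relying on auto-detection.
# "coverage.xml" and "mumot" are assumed names.
after_success:
  - pytest --cov=mumot --cov-report=xml tests/
  - bash <(curl -s https://codecov.io/bash) -f coverage.xml
```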
Thanks @neworderofjamie. Does that help, @willfurnass?
Thanks @neworderofjamie; I'm getting an error, though:
Code coverage for the (stub of) a unit test suite (tests/test_all.py) now measured and pushed to codecov.io: https://codecov.io/gh/DiODeProject/MuMoT. |
Can't generate code coverage with pytest+nbval, as when they are run with coverage support enabled the explicit references to cell numbers are incorrect. Addressing #156 may help with this.
Coverage is still fairly low but I think the reporting is working now. Closing. |
@willfurnass if we can get code coverage working for the notebooks as well (see issue #158?) this number should increase. In the short term that would be preferred, as we're short of time to write proper unit tests before publication...
At present we capture coverage data for (mostly stub) unit tests and from executing TestNotebooks/MuMoTtest.ipynb (without running regression tests on it); from tox.ini:
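The tox.ini snippet is missing here. A hedged reconstruction of what such commands could look like follows; the `--cov-append` and `--nbval-lax` flags and the `mumot` package name are assumptions, chosen because `--cov-append` accumulates coverage across pytest runs and `--nbval-lax` executes a notebook without comparing cell outputs (i.e. without regression checks):

```ini
# Hypothetical tox.ini commands: accumulate coverage across the stub unit
# tests and one notebook execution. --cov-append merges data from both runs;
# --nbval-lax runs the notebook without checking outputs against stored ones.
commands =
    pytest --cov=mumot --cov-append tests/test_all.py
    pytest --cov=mumot --cov-append --nbval-lax TestNotebooks/MuMoTtest.ipynb
```

Covering docs/MuMoTuserManual.ipynb as well would, under this sketch, just mean adding one more `pytest --cov=mumot --cov-append --nbval-lax` line.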
Should we expand on this to capture coverage data when running through docs/MuMoTuserManual.ipynb too? |
I think not; reopen if you disagree. |
In the first instance we can generate coverage reports interactively using pytest-cov and tox, then move on to automating the measurement of code coverage using Travis CI and Codecov.