
Commit

Update documenation
apcraig committed Apr 14, 2020
1 parent b94e2b5 commit 0944003
Showing 1 changed file with 33 additions and 21 deletions.
54 changes: 33 additions & 21 deletions doc/source/user_guide/ug_testing.rst
@@ -496,6 +496,39 @@ The reporting can also be automated in a test suite by adding ``--report`` to ``
With ``--report``, the suite will create all the tests, build and submit them,
wait for all runs to be complete, and run the results and report_results scripts.
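
A hypothetical submission combining a suite with automated reporting might look like
the following (the machine, suite, and testid values are illustrative) ::

   $ ./icepack.setup -m conrad -e gnu --suite quick_suite --testid rpt01 --report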

.. _codecoverage:

Code Coverage Testing
------------------------

The ``--codecov`` feature in **icepack.setup** provides a method to diagnose code coverage.
This argument turns on special compiler flags, including reduced optimization, and then
invokes the gcov tool.
This option is currently available only with the gnu compiler and on a few systems.
To use this feature, submit a full test suite with the gnu compiler and the ``--codecov``
argument, using a version of Icepack from the Consortium master.
The test suite will run and then a report will be generated and uploaded to
the `codecov.io site <https://codecov.io/gh/CICE-Consortium/Icepack>`_ by the
**report_codecov.csh** script.
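
For context, gcov-style instrumentation with the gnu compilers generally relies on flags
such as ``-fprofile-arcs -ftest-coverage`` combined with reduced optimization. A standalone
compile of a single file might look roughly like this (the file name is illustrative, and the
actual flags are defined in the Icepack Macros files, which may differ) ::

   $ gfortran -O0 -g -fprofile-arcs -ftest-coverage -c icepack_itd.F90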

This is a special diagnostic test and does not constitute proper model testing.
General use is not recommended; it is mainly intended as a diagnostic for periodically
assessing test coverage. The interaction with codecov.io is not always robust and
can be tricky to manage. One constraint is that the output generated at runtime
is copied into the directory where compilation took place, which means each
test should be compiled separately. Tests that invoke multiple runs
(such as exact restart) will only save coverage information
for the last run, so some coverage information may be lost. The gcov tool can
be a little slow to run on large test suites, and the codecov.io bash uploader
(that runs gcov and uploads the data to codecov.io) is constantly evolving.
Finally, gcov requires that the diagnostic output be copied into the git sandbox for
analysis. These constraints are handled by the current scripts, but may change
in the future.
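
As a rough sketch of what the scripts automate, gcov can be run by hand in the directory
where a test was compiled to produce annotated per-file coverage (the path and file name
below are placeholders) ::

   $ cd <compile directory of a single test>
   $ gcov icepack_itd.F90
   $ less icepack_itd.F90.gcov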

A sample job submission would look like ::

$ ./icepack.setup -m conrad -e gnu --suite base_suite,travis_suite,quick_suite --testid cc01 --codecov

.. _testplotting:

Test Plotting
@@ -550,24 +583,3 @@ This plotting script can be used to plot the following variables:
- snow-ice (m)
- initial energy change (:math:`W/m^2`)

.. _codecoverage:

Code Coverage Testing
------------------------

The ``--codecov`` feature in **icepack.setup** provides a method to diagnose code coverage.
This argument turns on special compiler flags including reduced optimization and then
invokes the gcov tool.
This option is currently only available with the gnu compiler and on a few systems.
To use, submit a full test suite using a version of Icepack on the Consortium master
and the gnu compiler with the ``--codecov`` argument to **icepack.setup**.
The test suite will run and then a report will be generated and uploaded to
the `codecov.io site <https://codecov.io/gh/CICE-Consortium/Icepack>`_ by the
**report_codecov.csh** script.
This is a special diagnostic test and does not constitute proper model testing.
General use is not recommended, this is mainly used as a diagnostic to periodically
assess test coverage. In addition, the interaction with codecov.io is not always robust.
A sample job submission would look like ::

$ ./icepack.setup -m conrad -e gnu --suite base_suite,travis_suite,quick_suite --testid cc01 --codecov

3 comments on commit 0944003

@rallard77

I was able to run on gordon using the gnu compiler with the --codecov option. All the tests passed. However, I am not sure if the code coverage report was uploaded.

@phil-blain
Member

nitpick:
s/documenation/documentation

(in the commit message)

@apcraig
Contributor Author

I'm not sure, but I suspect it wasn't. As indicated in the documentation, this will need to be done off the CICE-Consortium master version, and it still needs to be tested after the PR is complete. Because of the interaction between the codecov.io tool and the setup on the CICE/Icepack side, there are constraints on how testing can be done. In addition, we need to test the ability of various users to upload to the codecov tool. I initialized the dashboard in codecov, and I have been the only one testing so far. There is still a lot to verify and understand about how this is best used.

My idea is that we run the codecov option as needed, but maybe just a few times a year. This does not need to be part of our weekly testing or our PR requirements at this point. It could be, but I don't see a lot of benefit. I think we test the coverage a few times a year and make efforts to close gaps in our test coverage at that frequency. We certainly want to make sure multiple people can do the testing, but we can work on that once I have verified the basic system is working on the Consortium master.
