diff --git a/doc/source/developer_guide/dg_scripts.rst b/doc/source/developer_guide/dg_scripts.rst index cdb066e6a..da5ef7d24 100755 --- a/doc/source/developer_guide/dg_scripts.rst +++ b/doc/source/developer_guide/dg_scripts.rst @@ -83,7 +83,7 @@ examples of the syntax. .. _build: Build Scripts ------------ +-------------- CICE uses GNU Make to build the model. There is a common **Makefile** for all machines. Each machine provides a Macros file to define some Makefile variables diff --git a/doc/source/user_guide/ug_implementation.rst b/doc/source/user_guide/ug_implementation.rst index e2a144ab3..eea7d8310 100644 --- a/doc/source/user_guide/ug_implementation.rst +++ b/doc/source/user_guide/ug_implementation.rst @@ -106,15 +106,15 @@ Big Endian files. In CESM, the sea ice model may exchange coupling fluxes using a different grid than the computational grid. This functionality is -activated using the namelist variable ``gridcpl\_file``. +activated using the namelist variable ``gridcpl_file``. *********************** Grid domains and blocks *********************** In general, the global gridded domain is -``nx\_global`` :math:`\times`\ ``ny\_global``, while the subdomains used in the -block distribution are ``nx\_block`` :math:`\times`\ ``ny\_block``. The +``nx_global`` :math:`\times`\ ``ny_global``, while the subdomains used in the +block distribution are ``nx_block`` :math:`\times`\ ``ny_block``. The physical portion of a subdomain is indexed as [``ilo:ihi``, ``jlo:jhi``], with nghost “ghost” or “halo" cells outside the domain used for boundary conditions. These parameters are illustrated in :ref:`fig-grid` in one @@ -141,14 +141,14 @@ Figure :ref:`fig-grid` shows the grid parameters for a sample one-dimensional, 2 global domain decomposed into four local subdomains. Each local domain has one ghost (halo) cell on each side, and the physical portion of the local domains are labeled ``ilo:ihi``. The parameter -``nx\_block`` is the total number of cells in the local domain, including +``nx_block`` is the total number of cells in the local domain, including ghost cells, and the same numbering system is applied to each of the four subdomains. The user sets the ``NTASKS`` and ``NTHRDS`` settings in **cice.settings** -and chooses a block size ``block\_size\_x`` :math:`\times`\ ``block\_size\_y``, -``max\_blocks``, and decomposition information ``distribution\_type``, ``processor\_shape``, -and ``distribution\_type`` in **ice\_in**. That information is used to +and chooses a block size ``block_size_x`` :math:`\times`\ ``block_size_y``, +``max_blocks``, and decomposition information ``distribution_type``, ``processor_shape``, +and ``distribution_type`` in **ice\_in**. That information is used to determine how the blocks are distributed across the processors, and how the processors are distributed across the grid domain. Recommended combinations of these @@ -159,8 +159,8 @@ but the user can overwrite the defaults by manually changing the values in information to the log file, and if the block size or max blocks is inconsistent with the task and thread size, the model will abort. The code will also print a warning if the maximum number of blocks is too large. -Although this is not fatal, it does use extra memory. If ``max\_blocks`` is -set to -1, the code will compute a ``max\_blocks`` on the fly. +Although this is not fatal, it does use extra memory. If ``max_blocks`` is +set to -1, the code will compute a ``max_blocks`` on the fly. 
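+
+As an illustration of how these settings fit together, consider a hypothetical
+320 :math:`\times` 384 global grid. The *domain_nml* sketch below is not a
+working configuration; the values are purely illustrative, and the full set of
+options and the namelist group of each variable should be taken from
+:ref:`tabnamelist`::
+
+  &domain_nml
+    nx_global         = 320
+    ny_global         = 384
+    block_size_x      = 40
+    block_size_y      = 48
+    max_blocks        = -1            ! computed on the fly
+    processor_shape   = 'slenderX2'
+    distribution_type = 'cartesian'
+    distribution_wght = 'latitude'
+  /
+
+With these values the global grid is covered by
+:math:`(320/40) \times (384/48) = 64` blocks, so setting ``NTASKS`` = 16 in
+**cice.settings** places an average of four blocks on each processor.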
A loop at the end of routine *create\_blocks* in module **ice\_blocks.F90** will print the locations for all of the blocks on @@ -173,10 +173,10 @@ manually set in the code in each case (independently of the dbug flag in **ice\_in**), as there may be hundreds or thousands of blocks to print and this information should be needed only rarely. This information is much easier to look at using a debugger such as Totalview. There is also -an output field that can be activated in `icefields\_nml`, ``f\_blkmask``, +an output field that can be activated in `icefields\_nml`, ``f_blkmask``, that prints out the variable ``blkmask`` to the history file and which labels the blocks in the grid decomposition according to ``blkmask = -my\_task + iblk/100``. +my_task + iblk/100``. ************* Tripole grids @@ -198,11 +198,11 @@ poles and the cells between them can be grid T cells, making a “T-fold.” Both of these options are also supported by the OPA/NEMO ocean model, which calls the U-fold an “f-fold” (because it uses the Arakawa C-grid in which U cells are on T-rows). The choice of tripole grid is given by -the namelist variable ``ns\_boundary\_type``, ‘tripole’ for the U-fold and +the namelist variable ``ns_boundary_type``, ‘tripole’ for the U-fold and ‘tripoleT’ for the T-fold grid. In the U-fold tripole grid, the poles have U-index -:math:`{\tt nx\_global}/2` and ``nx\_global`` on the top U-row of the +:math:`{\tt nx\_global}/2` and ``nx_global`` on the top U-row of the physical grid, and points with U-index i and :math:`{\tt nx\_global-i}` are coincident. Let the fold have U-row index :math:`n` on the global grid; this will also be the T-row index of the T-row to the south of the @@ -267,7 +267,7 @@ masked by land, periodic conditions wrap the domain around the globe. CICE can be run on regional grids with open boundary conditions; except for variables describing grid lengths, non-land halo cells along the grid edge must be filled by restoring them to specified values. The -namelist variable ``restore\_ice`` turns this functionality on and off; the +namelist variable ``restore_ice`` turns this functionality on and off; the restoring timescale ``trestore`` may be used (it is also used for restoring ocean sea surface temperature in stand-alone ice runs). This implementation is only intended to provide the “hooks" for a more @@ -279,7 +279,7 @@ allow Neumann boundary conditions, which must be set explicitly. This has been done in an unreleased branch of the code; contact Elizabeth for more information. -For exact restarts using restoring, set ``restart\_ext`` = true in namelist +For exact restarts using restoring, set ``restart_ext`` = true in namelist to use the extended-grid subroutines. On tripole grids, the order of operations used for calculating elements @@ -308,27 +308,27 @@ The logical masks ``tmask`` and ``umask`` (which correspond to the real masks In addition to the land masks, two other masks are implemented in *dyn\_prep* in order to reduce the dynamics component’s work on a global -grid. At each time step the logical masks ``ice\_tmask`` and ``ice\_umask`` are +grid. At each time step the logical masks ``ice_tmask`` and ``ice_umask`` are determined from the current ice extent, such that they have the value “true” wherever ice exists. They also include a border of cells around the ice pack for numerical purposes. These masks are used in the dynamics component to prevent unnecessary calculations on grid points where there is no ice. 
They are not used in the thermodynamics component, so that ice may form in previously ice-free cells. Like the -land masks ``hm`` and ``uvm``, the ice extent masks ``ice\_tmask`` and ``ice\_umask`` +land masks ``hm`` and ``uvm``, the ice extent masks ``ice_tmask`` and ``ice_umask`` are for T cells and U cells, respectively. Improved parallel performance may result from utilizing halo masks for boundary updates of the full ice state, incremental remapping transport, or for EVP or EAP dynamics. These options are accessed through the -logical namelist flags ``maskhalo\_bound``, ``maskhalo\_remap``, and -``maskhalo\_dyn``, respectively. Only the halo cells containing needed +logical namelist flags ``maskhalo_bound``, ``maskhalo_remap``, and +``maskhalo_dyn``, respectively. Only the halo cells containing needed information are communicated. -Two additional masks are created for the user’s convenience: ``lmask\_n`` -and ``lmask\_s`` can be used to compute or write data only for the northern +Two additional masks are created for the user’s convenience: ``lmask_n`` +and ``lmask_s`` can be used to compute or write data only for the northern or southern hemispheres, respectively. Special constants (``spval`` and -``spval\_dbl``, each equal to :math:`10^{30}`) are used to indicate land +``spval_dbl``, each equal to :math:`10^{30}`) are used to indicate land points in the history files and diagnostics. @@ -338,13 +338,13 @@ points in the history files and diagnostics. Performance *************** -Namelist options (*domain\_nml*) provide considerable flexibility for +Namelist options (*domain_nml*) provide considerable flexibility for finding efficient processor and block configuration. Some of these choices are illustrated in :ref:`fig-distrb`. Users have control -of many aspects of the decomposition such as the block size (``block\_size\_x``, -``block\_size\_y``), the ``distribution\_type``, the ``distribution\_wght``, -the ``distribution\_wght\_file`` (when ``distribution\_type`` = ``wghtfile``), -and the ``processor\_shape`` (when ``distribution\_type`` = ``cartesian``). +of many aspects of the decomposition such as the block size (``block_size_x``, +``block_size_y``), the ``distribution_type``, the ``distribution_wght``, +the ``distribution_wght_file`` (when ``distribution_type`` = ``wghtfile``), +and the ``processor_shape`` (when ``distribution_type`` = ``cartesian``). The user specifies the total number of tasks and threads in **cice.settings** and the block size and decompostion in the namelist file. The main trades @@ -361,7 +361,7 @@ volume-to-surface ratio important for communication cost. Often 3 to 8 blocks per processor provide the decompositions flexiblity to create reasonable load balance configurations. -The ``distribution\_type`` options allow standard cartesian distributions +The ``distribution_type`` options allow standard cartesian distributions of blocks, redistribution via a ‘rake’ algorithm for improved load balancing across processors, and redistribution based on space-filling curves. There are also additional distribution types @@ -395,7 +395,7 @@ Figure :ref:`fig-distrbB` shows sample decompositions for (a) spiral center and (b) wghtfile for an Arctic polar grid. (c) is the weight field in the input file use to drive the decompostion in (b). 
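+
+As a sketch only, a weight-file driven decomposition of this kind might be
+requested with namelist settings along the following lines, where the file
+name is a hypothetical placeholder and the exact option spellings should be
+checked against :ref:`tabnamelist`::
+
+  &domain_nml
+    distribution_type      = 'wghtfile'
+    distribution_wght_file = 'arctic_block_weights.nc'
+  /
+
+The weight field in the file (panel (c) of Figure :ref:`fig-distrbB`) then
+drives the block distribution shown in panel (b).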
-``processor\_shape`` is used with the ``distribution\_type`` cartesian option, +``processor_shape`` is used with the ``distribution_type`` cartesian option, and it allocates blocks to processors in various groupings such as tall, thin processor domains (``slenderX1`` or ``slenderX2``, often better for sea ice simulations on global grids where nearly all of @@ -405,14 +405,14 @@ which maximize the volume to surface ratio (and therefore on-processor computations to message passing, if there were ice in every grid cell). In cases where the number of processors is not a perfect square (4, 9, 16...), the -``processor\_shape`` namelist variable allows the user to choose how the +``processor_shape`` namelist variable allows the user to choose how the processors are arranged. Here again, it is better in the sea ice model to have more processors in x than in y, for example, 8 processors arranged 4x2 (``square-ice``) rather than 2x4 (``square-pop``). The latter option is offered for direct-communication compatibility with POP, in which this is the default. -``distribution\_wght`` chooses how the work-per-block estimates are +``distribution_wght`` chooses how the work-per-block estimates are weighted. The ‘block’ option is the default in POP and it weights each block equally. This is useful in POP which always has work in each block and is written with a lot of @@ -422,7 +422,7 @@ direct-communication compatibility with POP. The ‘latitude’ option weights the blocks based on latitude and the number of ocean grid cells they contain. Many of the non-cartesian decompositions support automatic land block elimination and provide alternative ways to -decompose blocks without needing the ``distribution\_wght``. +decompose blocks without needing the ``distribution_wght``. The rake distribution type is initialized as a standard, Cartesian distribution. Using the work-per-block estimates, blocks are “raked" @@ -549,7 +549,7 @@ layers and the ice thickness distribution defined by ``kcatbound`` = 0. Restart information for some tracers is also included in the netCDF restart files. -Three namelist variables control model initialization, ``ice\_ic``, ``runtype``, +Three namelist variables control model initialization, ``ice_ic``, ``runtype``, and ``restart``, as described in :ref:`tab-ic`. It is possible to do an initial run from a file **filename** in two ways: (1) set runtype = ‘initial’, restart = true and ice\_ic = **filename**, or (2) runtype = @@ -562,7 +562,7 @@ true or false, depending on whether the tracer restart data exist. With the second option, tracer restart flags are set to ‘continue’ for all active tracers. -An additional namelist option, ``restart\_ext`` specifies whether halo cells +An additional namelist option, ``restart_ext`` specifies whether halo cells are included in the restart files. This option is useful for tripole and regional grids, but can not be used with PIO. @@ -577,8 +577,8 @@ her own routines. Whether the code is to be run in stand-alone or coupled mode is determined at compile time, as described below. Table :ref:`tab-ic` shows ice initial state resulting from combinations of -``ice\_ic``, ``runtype`` and ``restart``. :math:`^a`\ If false, restart is reset to -true. :math:`^b`\ restart is reset to false. :math:`^c`\ ice\_ic is +``ice_ic``, ``runtype`` and ``restart``. :math:`^a`\ If false, restart is reset to +true. :math:`^b`\ restart is reset to false. :math:`^c`\ ice_ic is reset to ‘none.’ .. 
_tab-ic: @@ -607,9 +607,9 @@ The time step is chosen based on stability of the transport component (both horizontal and in thickness space) and on resolution of the physical forcing. CICE allows the dynamics, advection and ridging portion of the code to be run with a shorter timestep, -:math:`\Delta t_{dyn}` (``dt\_dyn``), than the thermodynamics timestep +:math:`\Delta t_{dyn}` (``dt_dyn``), than the thermodynamics timestep :math:`\Delta t` (``dt``). In this case, ``dt`` and the integer ndtd are -specified, and ``dt\_dyn`` = ``dt/ndtd``. +specified, and ``dt_dyn`` = ``dt/ndtd``. A conservative estimate of the horizontal transport time step bound, or CFL condition, under remapping yields @@ -632,8 +632,8 @@ As discussed in :cite:`Lipscomb07`, the maximum time step in practice is usually determined by the time scale for large changes in the ice strength (which depends in part on wind strength). Using the strength parameterization of :cite:`Rothrock75`, limits the time step to :math:`\sim`\ 30 -minutes for the old ridging scheme (``krdg\_partic`` = 0), and to -:math:`\sim`\ 2 hours for the new scheme (``krdg\_partic`` = 1), assuming +minutes for the old ridging scheme (``krdg_partic`` = 0), and to +:math:`\sim`\ 2 hours for the new scheme (``krdg_partic`` = 1), assuming :math:`\Delta x` = 10 km. Practical limits may be somewhat less, depending on the strength of the atmospheric winds. @@ -646,7 +646,7 @@ growth rate. For the 5-category ice thickness distribution used as the default in this distribution, this is not a stringent limitation: :math:`\Delta t < 19.4` hr, assuming :math:`\max(f) = 40` cm/day. -In the classic EVP or EAP approach (``kdyn`` = 1 or 2, ``revised\_evp`` = false), +In the classic EVP or EAP approach (``kdyn`` = 1 or 2, ``revised_evp`` = false), the dynamics component is subcycled ndte (:math:`N`) times per dynamics time step so that the elastic waves essentially disappear before the next time step. The subcycling time step (:math:`\Delta @@ -657,7 +657,7 @@ t_e`) is thus A second parameter, :math:`E_\circ` (``eyc``), defines the elastic wave damping timescale :math:`T`, described in Section :ref:`dynam`, as -``eyc\ * dt\_dyn``. The forcing terms are not updated during the subcycling. +``eyc * dt_dyn``. The forcing terms are not updated during the subcycling. Given the small step (``dte``) at which the EVP dynamics model is subcycled, the elastic parameter :math:`E` is also limited by stability constraints, as discussed in :cite:`Hunke97`. Linear stability @@ -683,7 +683,7 @@ temperature :math:`T_{sfc}` is computed internally. The numerical constraint on the thermodynamics time step is associated with the transport scheme rather than the thermodynamic solver. -For the revised EVP approach (``kdyn`` = 1, ``revised\_evp`` = true), the +For the revised EVP approach (``kdyn`` = 1, ``revised_evp`` = true), the relaxation parameter ``arlx1i`` effectively sets the damping timescale in the problem, and ``brlx`` represents the effective subcycling :cite:`Bouillon13` (see Section :ref:`revp`). @@ -699,16 +699,16 @@ History files ************* Model output data is averaged over the period(s) given by ``histfreq`` and -``histfreq\_n``, and written to binary or netCDF files prepended by ``history\_file`` -in **ice\_in**. These settings for history files are set in the -**setup\_nml** section of **ice\_in** (see :ref:`tabnamelist`). -If ``history\_file`` = ‘iceh’ then the +``histfreq_n``, and written to binary or netCDF files prepended by ``history_file`` +in **ice_in**. 
These settings for history files are set in the +**setup_nml** section of **ice_in** (see :ref:`tabnamelist`). +If ``history_file`` = ‘iceh’ then the filenames will have the form **iceh.[timeID].nc** or **iceh.[timeID].da**, depending on the output file format chosen in **cice.settings** (set -``ICE\_IOTYPE``). The netCDF history files are CF-compliant; header information for +``ICE_IOTYPE``). The netCDF history files are CF-compliant; header information for data contained in the netCDF files is displayed with the command ``ncdump -h filename.nc``. Parallel netCDF output is available using the PIO library; the -attribute ``io\_flavor`` distinguishes output files written with PIO from +attribute ``io_flavor`` distinguishes output files written with PIO from those written with standard netCDF. With binary files, a separate header file is written with equivalent information. Standard fields are output according to settings in the **icefields\_nml** section of **ice\_in** @@ -735,7 +735,7 @@ monthly) via its namelist flag, `f\_` :math:`\left<{var}\right>`, which is now a character string corresponding to ``histfreq`` or ‘x’ for none. (Grid variable flags are still logicals, since they are written to all files, no matter what the frequency is.) If there are no namelist flags -with a given ``histfreq`` value, or if an element of ``histfreq\_n`` is 0, then +with a given ``histfreq`` value, or if an element of ``histfreq_n`` is 0, then no file will be written at that frequency. The output period can be discerned from the filenames. @@ -766,7 +766,7 @@ The history variable names must be unique for netCDF, so in cases where a variable is written at more than one frequency, the variable name is appended with the frequency in files after the first one. In the example above, ``meltb`` is called ``meltb`` in the monthly file (for backward -compatibility with the default configuration) and ``meltb\_h`` in the +compatibility with the default configuration) and ``meltb_h`` in the 6-hourly file. Using the same frequency twice in ``histfreq`` will have unexpected @@ -774,7 +774,7 @@ consequences and currently will cause the code to abort. It is not possible at the moment to output averages once a month and also once every 3 months, for example. -If ``write\_ic`` is set to true in **ice\_in**, a snapshot of the same set +If ``write_ic`` is set to true in **ice\_in**, a snapshot of the same set of history fields at the start of the run will be written to the history directory in **iceh\_ic.[timeID].nc(da)**. Several history variables are hard-coded for instantaneous output regardless of the averaging flag, at @@ -811,16 +811,16 @@ Diagnostic files Like ``histfreq``, the parameter ``diagfreq`` can be used to regulate how often output is written to a log file. The log file unit to which diagnostic -output is written is set in **ice\_fileunits.F90**. If ``diag\_type`` = +output is written is set in **ice\_fileunits.F90**. If ``diag_type`` = ‘stdout’, then it is written to standard out (or to **ice.log.[ID]** if you redirect standard out as in **cice.run**); otherwise it is written -to the file given by ``diag\_file``. In addition to the standard diagnostic +to the file given by ``diag_file``. In addition to the standard diagnostic output (maximum area-averaged thickness, velocity, average albedo, total ice area, and total ice and snow volumes), the namelist options -``print\_points`` and ``print\_global`` cause additional diagnostic information -to be computed and written. 
``print\_global`` outputs global sums that are +``print_points`` and ``print_global`` cause additional diagnostic information +to be computed and written. ``print_global`` outputs global sums that are useful for checking global conservation of mass and energy. -``print\_points`` writes data for two specific grid points. Currently, one +``print_points`` writes data for two specific grid points. Currently, one point is near the North Pole and the other is in the Weddell Sea; these may be changed in **ice\_in**. @@ -888,27 +888,27 @@ Restart files ************* CICE provides restart data in binary unformatted or netCDF formats, via -the ``ICE\_IOTYPE`` flag in **cice.settings** and namelist variable -``restart\_format``. Restart and history files must use the same format. As +the ``ICE_IOTYPE`` flag in **cice.settings** and namelist variable +``restart_format``. Restart and history files must use the same format. As with the history output, there is also an option for writing parallel netCDF restart files using PIO. The restart files created by CICE contain all of the variables needed for a full, exact restart. The filename begins with the character string ‘iced.’, and the restart dump frequency is given by the namelist -variables ``dumpfreq`` and ``dumpfreq\_n``. The pointer to the filename from +variables ``dumpfreq`` and ``dumpfreq_n``. The pointer to the filename from which the restart data is to be read for a continuation run is set in -``pointer\_file``. The code assumes that auxiliary binary tracer restart +``pointer_file``. The code assumes that auxiliary binary tracer restart files will be identified using the same pointer and file name prefix, but with an additional character string in the file name that is associated with each tracer set. All variables are included in netCDF restart files. Additional namelist flags provide further control of restart behavior. -``dump\_last`` = true causes a set of restart files to be written at the end +``dump_last`` = true causes a set of restart files to be written at the end of a run when it is otherwise not scheduled to occur. The flag -``use\_restart\_time`` enables the user to choose to use the model date -provided in the restart files. If ``use\_restart\_time`` = false then the +``use_restart_time`` enables the user to choose to use the model date +provided in the restart files. If ``use_restart_time`` = false then the initial model date stamp is determined from the namelist parameters. lcdf64 = true sets 64-bit netCDF output, allowing larger file sizes. @@ -917,7 +917,7 @@ of the “extended" global grid, including the physical domain and ghost (halo) cells around the outer edges, allow exact restarts on regional grids with open boundary conditions, and they will also simplify restarts on the various tripole grids. They are accessed by setting -``restart\_ext`` = true in namelist. Extended grid restarts are not +``restart_ext`` = true in namelist. Extended grid restarts are not available when using PIO; in this case extra halo update calls fill ghost cells for tripole grids (do not use PIO for regional grids). @@ -929,5 +929,5 @@ initialized with no ice. The gx3 case was run for 1 year using the 1997 forcing data provided with the code. The gx1 case was run for 20 years, so that the date of restart in the file is 1978-01-01. Note that the restart dates provided in the restart files can be overridden using the -namelist variables ``use\_restart\_time``, ``year\_init`` and ``istep0``. 
The -forcing time can also be overridden using ``fyear\_init``. +namelist variables ``use_restart_time``, ``year_init`` and ``istep0``. The +forcing time can also be overridden using ``fyear_init``. diff --git a/doc/source/user_guide/ug_running.rst b/doc/source/user_guide/ug_running.rst index 69b82d93e..4abd52d0a 100644 --- a/doc/source/user_guide/ug_running.rst +++ b/doc/source/user_guide/ug_running.rst @@ -10,7 +10,7 @@ Quick-start instructions are provided in the :ref:`quickstart` section. .. _software: Software Requirements -------- +---------------------- To run stand-alone, CICE requires @@ -162,7 +162,7 @@ case directory, NOT the run directory. .. _case_options: **cice.setup** Command Line Options -~~~~~~~~~~~~~~~~~~~~ +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ``cice.setup -h`` provides a summary of the command line options. There are three different modes, ``--case``, ``--test``, and ``--suite``. This section provides details about the relevant options for setting up cases with examples. Testing will be described in greater detail in the :ref:`testing` section. diff --git a/doc/source/user_guide/ug_testing.rst b/doc/source/user_guide/ug_testing.rst index c5f5a4e29..3c0f0386f 100644 --- a/doc/source/user_guide/ug_testing.rst +++ b/doc/source/user_guide/ug_testing.rst @@ -867,10 +867,6 @@ The script will produce output similar to: Additionally, the exit code from the test (``echo $?``) will be 0 if the test passed, and 1 if the test failed. -Implementation notes: 1) Provide a pass/fail on each of the confidence -intervals, 2) Facilitate output of a bitmap for each test so that -locations of failures can be identified. - The ``cice.t-test.py`` requires memory to store multiple two-dimensional fields spanning 1825 unique timesteps, a total of several GB. An appropriate resource is needed to run the script. If the script runs out of memory on an interactive resource, try @@ -907,6 +903,8 @@ Below is an example of a step-by-step procedure for testing a code change that m cd testsuite.test0 ./results.csh + # Note which tests failed and determine which namelist options are responsible for the failures + .. If the regression comparisons fail, then you may want to run the QC test, @@ -920,6 +918,7 @@ If the regression comparisons fail, then you may want to run the QC test, ./cice.setup -m onyx -e intel --test smoke -g gx1 -p 44x1 --testid qc_base -s qc,medium cd onyx_intel_smoke_gx1_44x1_medium_qc.qc_base + # modify ice_in to activate the namelist options that were determined above ./cice.build ./cice.submit @@ -928,6 +927,7 @@ If the regression comparisons fail, then you may want to run the QC test, ./cice.setup -m onyx -e intel --test smoke -g gx1 -p 44x1 -testid qc_test -s qc,medium cd onyx_intel_smoke_gx1_44x1_medium_qc.qc_test + # modify ice_in to activate the namelist options that were determined above ./cice.build ./cice.submit