From 7d2cbfd17a5280236f68d9091f52647c70925bf3 Mon Sep 17 00:00:00 2001 From: Gillian Petro <96886803+gspetro-NOAA@users.noreply.github.com> Date: Thu, 5 Oct 2023 09:20:40 -0400 Subject: [PATCH 1/3] [develop]: Update ConfigWorkflow chapter to HEAD of develop (#915) * This PR updates the ConfigWorkflow.rst file to align with the current state of config_defaults.yaml in develop. * This PR also adds in-code documentation for undocumented variables in config_defaults.yaml. * There are a few odds & ends in other parts of the documentation to fix errors or make updates that came into develop very recently. --------- Co-authored-by: Michael J. Kavulich, Jr Co-authored-by: Michael Lueken <63728921+MichaelLueken@users.noreply.github.com> Co-authored-by: Christina Holt <56881914+christinaholtNOAA@users.noreply.github.com> --- .../BuildingRunningTesting/BuildSRW.rst | 4 +- .../ContainerQuickstart.rst | 20 +- .../DefaultVarsTable.rst | 44 +- .../source/BuildingRunningTesting/RunSRW.rst | 6 +- .../BuildingRunningTesting/WE2Etests.rst | 24 +- .../CustomizingTheWorkflow/ConfigWorkflow.rst | 1160 ++++++++++++---- docs/UsersGuide/source/Reference/Glossary.rst | 3 + ush/config_defaults.yaml | 1165 +++++++++++------ 8 files changed, 1692 insertions(+), 734 deletions(-) diff --git a/docs/UsersGuide/source/BuildingRunningTesting/BuildSRW.rst b/docs/UsersGuide/source/BuildingRunningTesting/BuildSRW.rst index 4ba8073df5..5b871fe87f 100644 --- a/docs/UsersGuide/source/BuildingRunningTesting/BuildSRW.rst +++ b/docs/UsersGuide/source/BuildingRunningTesting/BuildSRW.rst @@ -294,8 +294,8 @@ If the ``devbuild.sh`` approach failed, users need to set up their environment t .. code-block:: console - source etc/lmod-setup.sh gaea - source etc/lmod-setup.csh gaea + source /path/to/ufs-srweather-app/etc/lmod-setup.sh gaea + source /path/to/ufs-srweather-app/etc/lmod-setup.csh gaea .. note:: diff --git a/docs/UsersGuide/source/BuildingRunningTesting/ContainerQuickstart.rst b/docs/UsersGuide/source/BuildingRunningTesting/ContainerQuickstart.rst index d2ccb82a8c..bcc17e73ce 100644 --- a/docs/UsersGuide/source/BuildingRunningTesting/ContainerQuickstart.rst +++ b/docs/UsersGuide/source/BuildingRunningTesting/ContainerQuickstart.rst @@ -69,12 +69,12 @@ Build the Container ------------------------ .. hint:: - If a ``singularity: command not found`` error message appears when working on Level 1 platforms, try running: ``module load singularity``. + If a ``singularity: command not found`` error message appears when working on Level 1 platforms, try running: ``module load singularity`` or (on Derecho) ``module load apptainer``. Level 1 Systems ^^^^^^^^^^^^^^^^^^ -On most Level 1 systems, a container named ``ubuntu20.04-intel-srwapp-develop.img`` has already been built at the following locations: +On most Level 1 systems, a container named ``ubuntu20.04-intel-ue-1.4.1-srw-dev.img`` has already been built at the following locations: .. table:: Locations of pre-built containers @@ -95,14 +95,14 @@ On most Level 1 systems, a container named ``ubuntu20.04-intel-srwapp-develop.im +--------------+--------------------------------------------------------+ .. note:: - * Singularity is only available on the Gaea C5 partition, and therefore container use is only supported on Gaea C5. + * On Gaea, Singularity/Apptainer is only available on the C5 partition, and therefore container use is only supported on Gaea C5. * The NOAA Cloud containers are accessible only to those with EPIC resources. 
Users can simply set an environment variable to point to the container:

 .. code-block:: console

-   export img=/path/to/ubuntu20.04-intel-srwapp-develop.img
+   export img=/path/to/ubuntu20.04-intel-ue-1.4.1-srw-dev.img

 Users may convert the container ``.img`` file to a writable sandbox. This step is required when running on Cheyenne but is optional on other systems:

    singularity build --sandbox ubuntu20.04-intel-srwapp $img

-.. COMMENT: What about on Derecho?
-
 When making a writable sandbox on Level 1 systems, the following warnings commonly appear and can be ignored:

 .. code-block:: console

    INFO:    Starting build...
-   INFO:    Verifying bootstrap image ubuntu20.04-intel-srwapp-develop.img
+   INFO:    Verifying bootstrap image ubuntu20.04-intel-ue-1.4.1-srw-dev.img
    WARNING: integrity: signature not found for object group 1
    WARNING: Bootstrap image could not be verified, but build will continue.

@@ -241,7 +239,7 @@ To generate the forecast experiment, users must:
 #. :ref:`Set experiment parameters `
 #. :ref:`Run a script to generate the experiment workflow `

-The first two steps depend on the platform being used and are described here for Level 1 platforms. Users will need to adjust the instructions to their machine if their local machine is a Level 2-4 platform.
+The first two steps depend on the platform being used and are described here for Level 1 platforms. Users will need to adjust the instructions to match their machine configuration if their local machine is a Level 2-4 platform.

 .. _SetUpPythonEnvC:

@@ -277,8 +275,6 @@ The ``wflow_<platform>`` modulefile will then output instructions to activate th
 then the user should run ``conda activate workflow_tools``. This will activate the ``workflow_tools`` conda environment. The command(s) will vary from system to system, but the user should see ``(workflow_tools)`` in front of the Terminal prompt at this point.

-.. COMMENT: Containers are old and still say regional_workflow...
-
 .. _SetUpConfigFileC:

 Configure the Workflow
@@ -295,13 +291,13 @@ where:
 * ``-c`` indicates the compiler on the user's local machine (e.g., ``intel/2022.1.2``)
 * ``-m`` indicates the :term:`MPI` on the user's local machine (e.g., ``impi/2022.1.2``)
 * ``<platform>`` refers to the local machine (e.g., ``hera``, ``jet``, ``noaacloud``, ``mac``). See ``MACHINE`` in :numref:`Section %s ` for a full list of options.
-* ``-i`` indicates the container image that was built in :numref:`Step %s ` (``ubuntu20.04-intel-srwapp`` or ``ubuntu20.04-intel-srwapp-develop.img`` by default).
+* ``-i`` indicates the container image that was built in :numref:`Step %s ` (``ubuntu20.04-intel-srwapp`` or ``ubuntu20.04-intel-ue-1.4.1-srw-dev.img`` by default).

 For example, on Hera, the command would be:

 .. code-block:: console

-   ./stage-srw.sh -c=intel/2022.1.2 -m=impi/2022.1.2 -p=hera -i=ubuntu20.04-intel-srwapp-develop.img
+   ./stage-srw.sh -c=intel/2022.1.2 -m=impi/2022.1.2 -p=hera -i=ubuntu20.04-intel-ue-1.4.1-srw-dev.img

 ..
attention:: diff --git a/docs/UsersGuide/source/BuildingRunningTesting/DefaultVarsTable.rst b/docs/UsersGuide/source/BuildingRunningTesting/DefaultVarsTable.rst index 0b07f204ef..faaf9129c2 100644 --- a/docs/UsersGuide/source/BuildingRunningTesting/DefaultVarsTable.rst +++ b/docs/UsersGuide/source/BuildingRunningTesting/DefaultVarsTable.rst @@ -11,36 +11,28 @@ Table of Variables in ``config_defaults.yaml`` * - Group Name - Configuration variables * - User - - RUN_ENVIR, MACHINE, ACCOUNT, HOMEdir, USHdir, SCRIPTSdir, JOBSdir, SORCdir, PARMdir, MODULESdir, EXECdir, VX_CONFIG_DIR, METPLUS_CONF, MET_CONFIG, UFS_WTHR_MDL_DIR, ARL_NEXUS_DIR + - RUN_ENVIR, MACHINE, ACCOUNT, HOMEdir, USHdir, SCRIPTSdir, JOBSdir, SORCdir, PARMdir, MODULESdir, EXECdir, METPLUS_CONF, UFS_WTHR_MDL_DIR, ARL_NEXUS_DIR * - Platform - WORKFLOW_MANAGER, NCORES_PER_NODE, TASKTHROTTLE, BUILD_MOD_FN, WFLOW_MOD_FN, BUILD_VER_FN, RUN_VER_FN, SCHED,PARTITION_DEFAULT, QUEUE_DEFAULT, PARTITION_HPSS, QUEUE_HPSS, PARTITION_FCST, QUEUE_FCST, REMOVE_MEMORY, RUN_CMD_SERIAL, RUN_CMD_UTILS, RUN_CMD_FCST, RUN_CMD_POST, RUN_CMD_PRDGEN, RUN_CMD_AQM, - RUN_CMD_AQMLBC, SCHED_NATIVE_CMD, CCPA_OBS_DIR, MRMS_OBS_DIR, NDAS_OBS_DIR, NOHRSC_OBS_DIR, DOMAIN_PREGEN_BASEDIR, PRE_TASK_CMDS, + RUN_CMD_AQMLBC, SCHED_NATIVE_CMD, PRE_TASK_CMDS, CCPA_OBS_DIR, NOHRSC_OBS_DIR, MRMS_OBS_DIR, NDAS_OBS_DIR, DOMAIN_PREGEN_BASEDIR, TEST_EXTRN_MDL_SOURCE_BASEDIR, TEST_AQM_INPUT_BASEDIR, TEST_PREGEN_BASEDIR, TEST_ALT_EXTRN_MDL_SYSBASEDIR_ICS, TEST_ALT_EXTRN_MDL_SYSBASEDIR_LBCS, - TEST_VX_FCST_INPUT_BASEDIR, FIXgsm, FIXaer, FIXlut, FIXorg, FIXsfc, FIXshp, FIXgsi, FIXcrtm, FIXcrtmupp, EXTRN_MDL_DATA_STORES + TEST_VX_FCST_INPUT_BASEDIR, FIXgsm, FIXaer, FIXlut, FIXorg, FIXsfc, FIXshp, FIXcrtm, FIXcrtmupp, EXTRN_MDL_DATA_STORES * - Workflow - WORKFLOW_ID, RELATIVE_LINK_FLAG, USE_CRON_TO_RELAUNCH, CRON_RELAUNCH_INTVL_MNTS, CRONTAB_LINE, LOAD_MODULES_RUN_TASK_FP, EXPT_BASEDIR, EXPT_SUBDIR, EXEC_SUBDIR, EXPTDIR, DOT_OR_USCORE, EXPT_CONFIG_FN, CONSTANTS_FN, RGNL_GRID_NML_FN, FV3_NML_FN, FV3_NML_BASE_SUITE_FN, FV3_NML_YAML_CONFIG_FN, FV3_NML_BASE_ENS_FN, FV3_EXEC_FN, DATA_TABLE_FN, DIAG_TABLE_FN, FIELD_TABLE_FN, DIAG_TABLE_TMPL_FN, FIELD_TABLE_TMPL_FN, MODEL_CONFIG_FN, NEMS_CONFIG_FN, AQM_RC_FN, AQM_RC_TMPL_FN, FV3_NML_BASE_SUITE_FP, FV3_NML_YAML_CONFIG_FP, FV3_NML_BASE_ENS_FP, DATA_TABLE_TMPL_FP, DIAG_TABLE_TMPL_FP, FIELD_TABLE_TMPL_FP, - MODEL_CONFIG_TMPL_FP, NEMS_CONFIG_TMPL_FP, AQM_RC_TMPL_FP, DATA_TABLE_FP, FIELD_TABLE_FP, NEMS_CONFIG_FP, FV3_NML_FP, FV3_NML_CYCSFC_FP, - FV3_NML_RESTART_FP, FV3_NML_STOCH_FP, FV3_NML_RESTART_STOCH_FP, FCST_MODEL, WFLOW_XML_FN, GLOBAL_VAR_DEFNS_FN, ROCOTO_YAML_FN, EXTRN_MDL_VAR_DEFNS_FN, + MODEL_CONFIG_TMPL_FP, NEMS_CONFIG_TMPL_FP, AQM_RC_TMPL_FP, DATA_TABLE_FP, FIELD_TABLE_FP, NEMS_CONFIG_FP, FV3_NML_FP, + FV3_NML_STOCH_FP, FCST_MODEL, WFLOW_XML_FN, GLOBAL_VAR_DEFNS_FN, ROCOTO_YAML_FN, EXTRN_MDL_VAR_DEFNS_FN, WFLOW_LAUNCH_SCRIPT_FN, WFLOW_LAUNCH_LOG_FN, GLOBAL_VAR_DEFNS_FP, ROCOTO_YAML_FP, WFLOW_LAUNCH_SCRIPT_FP, WFLOW_LAUNCH_LOG_FP, FIXdir, FIXam, FIXclim, FIXlam, THOMPSON_MP_CLIMO_FN, THOMPSON_MP_CLIMO_FP, CCPP_PHYS_SUITE, CCPP_PHYS_SUITE_FN, CCPP_PHYS_SUITE_IN_CCPP_FP, CCPP_PHYS_SUITE_FP, CCPP_PHYS_DIR, FIELD_DICT_FN, FIELD_DICT_IN_UWM_FP, FIELD_DICT_FP, GRID_GEN_METHOD, PREDEF_GRID_NAME, DATE_FIRST_CYCL, DATE_LAST_CYCL, INCR_CYCL_FREQ, FCST_LEN_HRS, - FCST_LEN_CYCL, LONG_FCST_LEN, CYCL_HRS_SPINSTART, CYCL_HRS_PRODSTART, BOUNDARY_LEN_HRS, BOUNDARY_LONG_LEN_HRS, BOUNDARY_PROC_GROUP_NUM, - PREEXISTING_DIR_METHOD, VERBOSE, DEBUG, 
COMPILER, SYMLINK_FIX_FILES, DO_REAL_TIME, COLDSTART, WARMSTART_CYCLE_DIR, + FCST_LEN_CYCL, LONG_FCST_LEN, PREEXISTING_DIR_METHOD, VERBOSE, DEBUG, COMPILER, SYMLINK_FIX_FILES, DO_REAL_TIME, COLDSTART, WARMSTART_CYCLE_DIR, * - NCO - envir_default, NET_default, RUN_default, model_ver_default, OPSROOT_default, COMROOT_default, DATAROOT_default, DCOMROOT_default, LOGBASEDIR_default, - COMIN_BASEDIR, COMOUT_BASEDIR, NWGES, NWGES_BASEDIR, DBNROOT_default, SENDECF_default, SENDDBN_default, SENDDBN_NTC_default, SENDCOM_default, + COMIN_BASEDIR, COMOUT_BASEDIR, DBNROOT_default, SENDECF_default, SENDDBN_default, SENDDBN_NTC_default, SENDCOM_default, SENDWEB_default, KEEPDATA_default, MAILTO_default, MAILCC_default - * - gsi - - niter1, niter2, l_obsprvdiag, diag_radardbz, write_diag_2, bkgerr_vs, bkgerr_hzscl, usenewgfsberror, netcdf_diag, binary_diag, readin_localization, - beta1_inv, ens_h, ens_v, regional_ensemble_option, grid_ratio_fv3, grid_ratio_ens, i_en_perts_io, q_hyb_ens, ens_fast_read, l_PBL_pseudo_SurfobsT, - l_PBL_pseudo_SurfobsQ, i_use_2mQ4B, i_use_2mT4B, i_T_Q_adjust, l_rtma3d, i_precip_vertical_check, HYBENSMEM_NMIN, ANAVINFO_FN, ANAVINFO_DBZ_FN, - ENKF_ANAVINFO_FN, ENKF_ANAVINFO_DBZ_FN, CONVINFO_FN, BERROR_FN, OBERROR_FN, HYBENSINFO_FN, cld_bld_hgt, l_precip_clear_only, l_qnr_from_qr, beta_recenter - * - rrfs - - DO_RRFS_DEV, DO_NLDN_LGHT, DO_ENKFUPDATE, DO_DACYCLE, DO_SURFACE_CYCLE * - task_make_grid - GRID_DIR, ESGgrid_LON_CTR, ESGgrid_LAT_CTR, ESGgrid_DELX, ESGgrid_DELY, ESGgrid_NX, ESGgrid_NY, ESGgrid_WIDE_HALO_WIDTH, ESGgrid_PAZI, GFDLgrid_LON_T6_CTR, GFDLgrid_LAT_T6_CTR, GFDLgrid_NUM_CELLS, GFDLgrid_STRETCH_FAC, GFDLgrid_REFINE_RATIO, GFDLgrid_ISTART_OF_RGNL_DOM_ON_T6G, @@ -53,7 +45,7 @@ Table of Variables in ``config_defaults.yaml`` - EXTRN_MDL_NAME_ICS, EXTRN_MDL_ICS_OFFSET_HRS, FV3GFS_FILE_FMT_ICS, EXTRN_MDL_SYSBASEDIR_ICS, USE_USER_STAGED_EXTRN_FILES, EXTRN_MDL_SOURCE_BASEDIR_ICS, EXTRN_MDL_FILES_ICS * - task_get_extrn_lbcs - - EXTRN_MDL_NAME_LBCS, LBC_SPEC_INTVL_HRS, EXTRN_MDL_LBCS_OFFSET_HRS, FV3GFS_FILE_FMT_LBCS, LBCS_SEARCH_HRS, EXTRN_MDL_LBCS_SEARCH_OFFSET_HRS, EXTRN_MDL_SYSBASEDIR_LBCS, + - EXTRN_MDL_NAME_LBCS, LBC_SPEC_INTVL_HRS, EXTRN_MDL_LBCS_OFFSET_HRS, FV3GFS_FILE_FMT_LBCS, EXTRN_MDL_SYSBASEDIR_LBCS, USE_USER_STAGED_EXTRN_FILES,EXTRN_MDL_SOURCE_BASEDIR_LBCS, EXTRN_MDL_FILES_LBCS * - task_make_ics - KMP_AFFINITY_MAKE_ICS, OMP_NUM_THREADS_MAKE_ICS, OMP_STACKSIZE_MAKE_ICS, USE_FVCOM, FVCOM_WCSTART, FVCOM_DIR, FVCOM_FILE, VCOORD_FILE @@ -71,14 +63,6 @@ Table of Variables in ``config_defaults.yaml`` - KMP_AFFINITY_RUN_PRDGEN, OMP_NUM_THREADS_RUN_PRDGEN, OMP_STACKSIZE_RUN_PRDGEN, DO_PARALLEL_PRDGEN, ADDNL_OUTPUT_GRIDS: [] * - task_plot_allvars: - COMOUT_REF, PLOT_FCST_START, PLOT_FCST_INC, PLOT_FCST_END, PLOT_DOMAINS - * - task_analysis_gsi - - TN_ANALYSIS_GSI, TN_OBSERVER_GSI, TN_OBSERVER_GSI_ENSMEAN, KMP_AFFINITY_ANALYSIS, OMP_NUM_THREADS_ANALYSIS, OMP_STACKSIZE_ANALYSIS, OBSPATH_TEMPLATE - * - task_process_radarref - - RADAR_REF_THINNING, RADARREFL_MINS, RADARREFL_TIMELEVEL, OBS_SUFFIX - * - task_get_da_obs - - NLDN_NEEDED, NLDN_LIGHTNING, NSSLMOSAIC, RAP_OBS_BUFR - * - task_process_bufrobs - - OBSPATH_TEMPLATE * - task_nexus_emission - PPN_NEXUS_EMISSION, KMP_AFFINITY_NEXUS_EMISSION, OMP_NUM_THREADS_NEXUS_EMISSION, OMP_STACKSIZE_NEXUS_EMISSION * - task_bias_correction_o3 @@ -88,12 +72,12 @@ Table of Variables in ``config_defaults.yaml`` * - Global - USE_CRTM, CRTM_DIR, DO_ENSEMBLE, NUM_ENS_MEMBERS, ENSMEM_NAMES, FV3_NML_ENSMEM_FPS, ENS_TIME_LAG_HRS, DO_SHUM, 
DO_SPPT, DO_SKEB, ISEED_SHUM, ISEED_SPPT, ISEED_SKEB, NEW_LSCALE, SHUM_MAG, SHUM_LSCALE, SHUM_TSCALE, SHUM_INT, SPPT_MAG, SPPT_LOGIT, SPPT_LSCALE, SPPT_TSCALE, SPPT_INT, SPPT_SFCLIMIT,
-       SKEB_MAG, SKEB_LSCALE, SKEP_TSCALE, SKEB_INT, SKEBNORM, SKEB_VDOF, USE_ZMTNBLCK, DO_SPP, ISEED_SPP, SPP_VAR_LIST, SPP_MAG_LIST, SPP_LSCALE,
-       SPP_TSCALE, SPP_SIGTOP1, SPP_SIGTOP2, SPP_STDDEV_CUTOFF, DO_LSM_SPP, LSM_SPP_TSCALE, LSM_SPP_LSCALE, ISEED_LSM_SPP, LSM_SPP_VAR_LIST,
+       SKEB_MAG, SKEB_LSCALE, SKEP_TSCALE, SKEB_INT, SKEBNORM, SKEB_VDOF, USE_ZMTNBLCK, DO_SPP, SPP_VAR_LIST, SPP_MAG_LIST, SPP_LSCALE,
+       SPP_TSCALE, SPP_SIGTOP1, SPP_SIGTOP2, SPP_STDDEV_CUTOFF, ISEED_SPP, DO_LSM_SPP, LSM_SPP_TSCALE, LSM_SPP_LSCALE, ISEED_LSM_SPP, LSM_SPP_VAR_LIST,
        LSM_SPP_MAG_LIST, HALO_BLEND, PRINT_DIFF_PGR
    * - Verification
-     - OBS_CCPA_APCP01h_FN_TEMPLATE, OBS_CCPA_APCPgt01h_FN_TEMPLATE, OBS_MRMS_REFC_FN_TEMPLATE, OBS_MRMS_RETOP_FN_TEMPLATE,
-       OBS_NDAS_SFCorUPA_FN_TEMPLATE, OBS_NDAS_SFCorUPA_FN_METPROC_TEMPLATE, VX_FCST_MODEL_NAME, VX_FIELDS, VX_APCP_ACCUMS_HRS, VX_FCST_INPUT_BASEDIR,
+     - OBS_CCPA_APCP01h_FN_TEMPLATE, OBS_CCPA_APCPgt01h_FN_TEMPLATE, OBS_NOHRSC_ASNOW_FN_TEMPLATE, OBS_MRMS_REFC_FN_TEMPLATE, OBS_MRMS_RETOP_FN_TEMPLATE,
+       OBS_NDAS_SFCorUPA_FN_TEMPLATE, OBS_NDAS_SFCorUPA_FN_METPROC_TEMPLATE, VX_FCST_MODEL_NAME, VX_FIELDS, VX_APCP_ACCUMS_HRS, VX_ASNOW_ACCUMS_HRS, VX_FCST_INPUT_BASEDIR,
       VX_OUTPUT_BASEDIR, VX_NDIGITS_ENSMEM_NAMES, FCST_SUBDIR_TEMPLATE, FCST_FN_TEMPLATE, FCST_FN_METPROC_TEMPLATE, NUM_MISSING_OBS_FILES_MAX, NUM_MISSING_FCST_FILES_MAX
    * - cpl_aqm_parm
      - CPL_AQM, DO_AQM_DUST, DO_AQM_CANOPY, DO_AQM_PRODUCT, DO_AQM_CHEM_LBCS, DO_AQM_GEFS_LBCS, DO_AQM_SAVE_AIRNOW_HIST, DO_AQM_SAVE_FIRE, DCOMINbio_default,
@@ -101,6 +85,6 @@
       DCOMINairnow_default, COMINbicor, COMOUTbicor, AQM_CONFIG_DIR, AQM_BIO_FILE, AQM_DUST_FILE_PREFIX, AQM_DUST_FILE_SUFFIX, AQM_CANOPY_FILE_PREFIX,
       AQM_CANOPY_FILE_SUFFIX, AQM_FIRE_FILE_PREFIX, AQM_FIRE_FILE_SUFFIX, AQM_FIRE_FILE_OFFSET_HRS, AQM_FIRE_ARCHV_DIR, AQM_RC_FIRE_FREQUENCY,
       AQM_RC_PRODUCT_FN, AQM_RC_PRODUCT_FREQUENCY, AQM_LBCS_FILES, AQM_GEFS_FILE_PREFIX, AQM_GEFS_FILE_CYC, NEXUS_INPUT_DIR, NEXUS_FIX_DIR,
-      NEXUS_GRID_FN, NUM_SPLIT_NEXUS: 3NEXUS_GFS_SFC_OFFSET_HRS, NEXUS_GFS_SFC_DIR, NEXUS_GFS_SFC_ARCHV_DIR
+      NEXUS_GRID_FN, NUM_SPLIT_NEXUS, NEXUS_GFS_SFC_OFFSET_HRS, NEXUS_GFS_SFC_DIR, NEXUS_GFS_SFC_ARCHV_DIR
    * - Rocoto
-     - attrs, cycledefs, entities, log, tasks: taskgroups
+     - attrs, cycledefs, entities, log, tasks, taskgroups
diff --git a/docs/UsersGuide/source/BuildingRunningTesting/RunSRW.rst b/docs/UsersGuide/source/BuildingRunningTesting/RunSRW.rst
index 68b5f55eed..c7ea16d6b0 100644
--- a/docs/UsersGuide/source/BuildingRunningTesting/RunSRW.rst
+++ b/docs/UsersGuide/source/BuildingRunningTesting/RunSRW.rst
@@ -206,7 +206,7 @@ In future shells, you can activate and use this environment with:
     source ~/conda/etc/profile.d/conda.sh
     conda activate uwtools

-See the `workflow-tools respository `__ for additional documentation.
+See the `workflow-tools repository `__ for additional documentation.

 Modify a ``wflow_<platform>`` File
 ``````````````````````````````````````
@@ -561,8 +561,6 @@ output over the :term:`CONUS`. It generates graphics plots for a number of varia
 * Max/Min 2-5 km updraft helicity
 * Sea level pressure (SLP)

-.. COMMENT: * 500 hPa heights, winds, and vorticity --> seems to be omitted? Why?
- This workflow task can produce both plots from a single experiment and difference plots that compare the same cycle from two experiments. When plotting the difference, the two experiments must be on the same domain and available for the same cycle starting date/time and forecast hours. Other parameters may differ (e.g., the experiments may use different physics suites). @@ -702,7 +700,7 @@ Run the following command from the ``ufs-srweather-app/ush`` directory to genera The last line of output from this script, starting with ``*/1 * * * *`` or ``*/3 * * * *``, can be saved and used later to automatically run portions of the workflow if users have the Rocoto workflow manager installed on their system. -This workflow generation script creates an experiment directory and populates it with all the data needed to run through the workflow. The flowchart in :numref:`Figure %s ` describes the experiment generation process. The ``generate_FV3LAM_wflow.py``: +This workflow generation script creates an experiment directory and populates it with all the data needed to run through the workflow. The flowchart in :numref:`Figure %s ` describes the experiment generation process. The ``generate_FV3LAM_wflow.py`` script: #. Runs the ``setup.py`` script to set the configuration parameters. This script reads three other configuration scripts in order: diff --git a/docs/UsersGuide/source/BuildingRunningTesting/WE2Etests.rst b/docs/UsersGuide/source/BuildingRunningTesting/WE2Etests.rst index 9e03b451b2..3c20fc8120 100644 --- a/docs/UsersGuide/source/BuildingRunningTesting/WE2Etests.rst +++ b/docs/UsersGuide/source/BuildingRunningTesting/WE2Etests.rst @@ -26,6 +26,9 @@ The list of fundamental and comprehensive tests can be viewed in the ``ufs-srwea For convenience, the WE2E tests are currently grouped into the following categories (under ``ufs-srweather-app/tests/WE2E/test_configs/``): +* ``aqm`` + This category tests the :term:`AQM` configuration of the SRW App. + * ``custom_grids`` This category tests custom grids aside from those specified in ``ufs-srweather-app/ush/predef_grid_params.yaml``. These tests help ensure a wide range of domain sizes, resolutions, and locations will work as expected. These test files can also serve as examples for how to set your own custom domain. @@ -38,15 +41,30 @@ For convenience, the WE2E tests are currently grouped into the following categor * ``grids_extrn_mdls_suites_nco`` This category of tests ensures that the workflow running in **NCO mode** (i.e., with ``RUN_ENVIR`` set to ``"nco"``) completes successfully for various combinations of predefined grids, physics suites, and input data from different external models. Note that in NCO mode, an operational run environment is used. This involves a specific directory structure and variable names (see :numref:`Section %s `). +* ``ufs_case_studies`` + This category tests that the workflow running in community mode completes successfully when running cases derived from the `ufs-case-studies repository `__. + * ``verification`` This category specifically tests the various combinations of verification capabilities using METPlus. -* ``release_SRW_v1`` - This category was reserved for the official "Graduate Student Test" case for the Version 1 SRW code release. - * ``wflow_features`` This category of tests ensures that the workflow completes successfully with particular features/capabilities activated. +.. note:: + + Users should be aware that some tests assume :term:`HPSS` access. 
+ + * ``custom_ESGgrid_Great_Lakes_snow_8km`` and ``MET_verification_only_vx_time_lag`` require HPSS access, as well as ``rstprod`` access on both :term:`RDHPCS` and HPSS. + * On certain machines, the *community* test assumes HPSS access. If the ``ush/machine/*.yaml`` file contains the following lines, and these paths are different from what is provided in ``TEST_EXTRN_MDL_SOURCE_BASEDIR``, users will need to have HPSS access or modify the tests to point to another data source: + + .. code-block:: console + + data: + ics_lbcs: + FV3GFS: + RAP: + HRRR: + Some tests are duplicated among the above categories via symbolic links, both for legacy reasons (when tests for different capabilities were consolidated) and for convenience when a user would like to run all tests for a specific category (e.g., verification tests). Running the WE2E Tests diff --git a/docs/UsersGuide/source/CustomizingTheWorkflow/ConfigWorkflow.rst b/docs/UsersGuide/source/CustomizingTheWorkflow/ConfigWorkflow.rst index 247a2fdd03..11d882c4f9 100644 --- a/docs/UsersGuide/source/CustomizingTheWorkflow/ConfigWorkflow.rst +++ b/docs/UsersGuide/source/CustomizingTheWorkflow/ConfigWorkflow.rst @@ -3,14 +3,14 @@ ================================================================================================ Workflow Parameters: Configuring the Workflow in ``config.yaml`` and ``config_defaults.yaml`` ================================================================================================ -To create the experiment directory and workflow when running the SRW Application, the user must create an experiment configuration file (usually named ``config.yaml``). This file contains experiment-specific information, such as forecast dates, grid and physics suite choices, data directories, and other relevant settings. To help the user, two sample configuration files have been included in the ``ush`` directory: ``config.community.yaml`` and ``config.nco.yaml``. The first is for running experiments in *community* mode (``RUN_ENVIR`` set to "community"), and the second is for running experiments in *nco* mode (``RUN_ENVIR`` set to "nco"). The content of these files can be copied into ``config.yaml`` and used as the starting point from which to generate a variety of experiment configurations for the SRW App. Note that for this release, only *community* mode is supported. +To create the experiment directory and workflow when running the SRW Application, the user must create an experiment configuration file (named ``config.yaml`` by default). This file contains experiment-specific information, such as forecast dates, grid and physics suite choices, data directories, and other relevant settings. To help the user, two sample configuration files have been included in the ``ush`` directory: ``config.community.yaml`` and ``config.nco.yaml``. The first is for running experiments in *community* mode (``RUN_ENVIR`` set to "community"), and the second is for running experiments in *nco* mode (``RUN_ENVIR`` set to "nco"). The content of these files can be copied into ``config.yaml`` and used as the starting point from which to generate a variety of experiment configurations for the SRW App. Note that for public releases, only *community* mode is supported. There is an extensive list of experiment parameters that a user can set when configuring the experiment. Not all of these parameters need to be set explicitly by the user in ``config.yaml``. 
If a user does not define a variable in the ``config.yaml`` script, its value in ``config_defaults.yaml`` will be used, or the value will be reset depending on other parameters, such as the platform (``MACHINE``) selected for the experiment. .. note:: - The ``config_defaults.yaml`` file contains the full list of experiment parameters that a user may set in ``config.yaml``. The user cannot set parameters in ``config.yaml`` that are not initialized in ``config_defaults.yaml``, with the notable exception in the ``rocoto`` section, described in :numref:`Chapter %s `. + The ``config_defaults.yaml`` file contains the full list of experiment parameters that a user may set in ``config.yaml``. The user cannot set parameters in ``config.yaml`` that are not initialized in ``config_defaults.yaml``, with the notable exception of the ``rocoto`` section, described in :numref:`Chapter %s `. -The following is a list of the parameters in the ``config_defaults.yaml`` file. For each parameter, the default value and a brief description is provided. +The following is a list of the parameters in the ``config_defaults.yaml`` file. For each parameter, the default value and a brief description are provided. .. _user: @@ -27,20 +27,53 @@ If non-default parameters are selected for the variables in this section, they s | January 19, 2022 | Version 11.0.0 - Setting ``RUN_ENVIR`` to "community" is recommended in most cases for users who are not planning to implement their code into operations at NCO. Valid values: ``"nco"`` | ``"community"`` + Setting ``RUN_ENVIR`` to "community" is recommended in most cases for users who are not running in NCO's production environment. Valid values: ``"nco"`` | ``"community"`` ``MACHINE``: (Default: "BIG_COMPUTER") - The machine (a.k.a. platform or system) on which the workflow will run. Currently supported platforms are listed on the `SRW App Wiki page `__. When running the SRW App on any ParellelWorks/NOAA Cloud system, use "NOAACLOUD" regardless of the underlying system (AWS, GCP, or Azure). Valid values: ``"HERA"`` | ``"ORION"`` | ``"JET"`` | ``"CHEYENNE"`` | ``"GAEA"`` | ``"NOAACLOUD"`` | ``"STAMPEDE"`` | ``"ODIN"`` | ``"MACOS"`` | ``"LINUX"`` | ``"SINGULARITY"`` | ``"WCOSS2"`` + The machine (a.k.a. platform or system) on which the workflow will run. Currently supported platforms are listed on the `SRW App Wiki page `__. When running the SRW App on any ParallelWorks/NOAA Cloud system, use "NOAACLOUD" regardless of the underlying system (AWS, GCP, or Azure). Valid values: ``"HERA"`` | ``"ORION"`` | ``"JET"`` | ``"CHEYENNE"`` | ``"DERECHO"`` | ``"GAEA"`` | ``"NOAACLOUD"`` | ``"STAMPEDE"`` | ``"ODIN"`` | ``"MACOS"`` | ``"LINUX"`` | ``"SINGULARITY"`` | ``"WCOSS2"`` (Check ``ufs-srweather-app/ush/valid_param_vals.yaml`` for the most up-to-date list of supported platforms.) .. hint:: - Users who are NOT on a named, supported Level 1 or 2 platform will need to set the ``MACHINE`` variable to ``LINUX`` or ``MACOS``; to combine use of a Linux or MacOS platform with the Rocoto workflow manager, users will also need to set ``WORKFLOW_MANAGER: "rocoto"`` in the ``platform:`` section of ``config.yaml``. This combination will assume a Slurm batch manager when generating the XML. + Users who are NOT on a named, supported Level 1 or 2 platform will need to set the ``MACHINE`` variable to ``LINUX`` or ``MACOS``. To combine use of a Linux or MacOS platform with the Rocoto workflow manager, users will also need to set ``WORKFLOW_MANAGER: "rocoto"`` in the ``platform:`` section of ``config.yaml``. 
This combination will assume a Slurm batch manager when generating the XML.

-``MACHINE_FILE``: (Default: "")
-   Path to a configuration file with machine-specific settings. If none is provided, ``setup.py`` will attempt to set the path to a configuration file for a supported platform.
-
-``ACCOUNT``: (Default: "project_name")
+``ACCOUNT``: (Default: "")
    The account under which users submit jobs to the queue on the specified ``MACHINE``. To determine an appropriate ``ACCOUNT`` field for `Level 1 `__ systems, users may run the ``groups`` command, which will return a list of projects that the user has permissions for. Not all of the listed projects/groups have an HPC allocation, but those that do are potentially valid account names. On some systems, the ``saccount_params`` command will display additional account details.

+Application Directories
+--------------------------
+
+``HOMEdir``: (Default: ``'{{ user.HOMEdir }}'``)
+   The path to the user's ``ufs-srweather-app`` clone. This path is set in ``ush/setup.py`` as the parent directory to ``USHdir``.
+
+``USHdir``: (Default: ``'{{ user.USHdir }}'``)
+   The path to the user's ``ush`` directory in their ``ufs-srweather-app`` clone. This path is set automatically in the main function of ``setup.py`` and corresponds to the location of ``setup.py`` (i.e., the ``ush`` directory).
+
+``SCRIPTSdir``: (Default: ``'{{ [HOMEdir, "scripts"]|path_join }}'``)
+   The path to the user's ``scripts`` directory in their ``ufs-srweather-app`` clone.
+
+``JOBSdir``: (Default: ``'{{ [HOMEdir, "jobs"]|path_join }}'``)
+   The path to the user's ``jobs`` directory in their ``ufs-srweather-app`` clone.
+
+``SORCdir``: (Default: ``'{{ [HOMEdir, "sorc"]|path_join }}'``)
+   The path to the user's ``sorc`` directory in their ``ufs-srweather-app`` clone.
+
+``PARMdir``: (Default: ``'{{ [HOMEdir, "parm"]|path_join }}'``)
+   The path to the user's ``parm`` directory in their ``ufs-srweather-app`` clone.
+
+``MODULESdir``: (Default: ``'{{ [HOMEdir, "modulefiles"]|path_join }}'``)
+   The path to the user's ``modulefiles`` directory in their ``ufs-srweather-app`` clone.
+
+``EXECdir``: (Default: ``'{{ [HOMEdir, workflow.EXEC_SUBDIR]|path_join }}'``)
+   The path to the user's ``exec`` directory in their ``ufs-srweather-app`` clone.
+
+``METPLUS_CONF``: (Default: ``'{{ [PARMdir, "metplus"]|path_join }}'``)
+   The path to the directory where the user's final METplus configuration file resides. By default, METplus configuration files reside in ``ufs-srweather-app/parm/metplus``.
+
+``UFS_WTHR_MDL_DIR``: (Default: ``'{{ user.UFS_WTHR_MDL_DIR }}'``)
+   The path to the location where the UFS Weather Model code is located within the ``ufs-srweather-app`` clone. This parameter is set in ``setup.py`` and uses information from the ``Externals.cfg`` file to build the correct path. It is built with knowledge of ``HOMEdir`` and often corresponds to ``ufs-srweather-app/sorc/ufs-weather-model``.
+
+``ARL_NEXUS_DIR``: (Default: ``'{{ [SORCdir, "arl_nexus"]|path_join }}'``)
+   The path to the user's NEXUS directory. By default, NEXUS source code resides in ``ufs-srweather-app/sorc/arl_nexus``.
+
 .. _PlatformConfig:

 PLATFORM Configuration Parameters
 =====================================
@@ -54,44 +87,32 @@ If non-default parameters are selected for the variables in this section, they s
 ``NCORES_PER_NODE``: (Default: "")
    The number of cores available per node on the compute platform. Set for supported platforms in ``setup.py``, but it is now also configurable for all platforms.
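For example, on a hypothetical unsupported Linux machine with eight cores per node and a Slurm scheduler, these platform settings might be combined in the ``platform:`` section of ``config.yaml`` as follows (a sketch; the values shown are illustrative, not defaults):

.. code-block:: yaml

   platform:
     # Assumed values for a hypothetical Linux system:
     WORKFLOW_MANAGER: rocoto   # paired with MACHINE: LINUX (see the hint above)
     NCORES_PER_NODE: 8
     SCHED: slurm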
-``BUILD_MOD_FN``: (Default: "")
-   Name of an alternative build module file to use if running on an unsupported platform. It is set automatically for supported machines.
+``TASKTHROTTLE``: (Default: 1000)
+   The number of active tasks that can be run simultaneously. For Linux/MacOS systems, it makes sense to set this to 1 because these systems often have a small number of available cores/CPUs and therefore less capacity to run multiple tasks simultaneously.

-``WFLOW_MOD_FN``: (Default: "")
-   Name of an alternative workflow module file to use if running on an unsupported platform. It is set automatically for supported machines.
+``BUILD_MOD_FN``: (Default: ``'build_{{ user.MACHINE|lower() }}_{{ workflow.COMPILER }}'``)
+   Name of an alternative build modulefile to use if running on an unsupported platform. It is set automatically for supported machines.

-``BUILD_VER_FN``: (Default: "")
-   File name containing the version of the modules used for building the app. Currently, WCOSS2 only uses this file.
+``WFLOW_MOD_FN``: (Default: ``'wflow_{{ user.MACHINE|lower() }}'``)
+   Name of an alternative workflow modulefile to use if running on an unsupported platform. It is set automatically for supported machines.

-``RUN_VER_FN``: (Default: "")
-   File name containing the version of the modules used for running the app. Currently, WCOSS2 only uses this file.
+``BUILD_VER_FN``: (Default: ``'build.ver.{{ user.MACHINE|lower() }}'``)
+   File name containing the version of the modules used for building the App. Currently, only WCOSS2 uses this file.
+
+``RUN_VER_FN``: (Default: ``'run.ver.{{ user.MACHINE|lower() }}'``)
+   File name containing the version of the modules used for running the App. Currently, only WCOSS2 uses this file.

 .. _sched:

 ``SCHED``: (Default: "")
    The job scheduler to use (e.g., Slurm) on the specified ``MACHINE``. Leaving this an empty string allows the experiment generation script to set it automatically depending on the machine the workflow is running on. Valid values: ``"slurm"`` | ``"pbspro"`` | ``"lsf"`` | ``"lsfcray"`` | ``"none"``

-``SCHED_NATIVE_CMD``: (Default: "")
-   Allows an extra parameter to be passed to the job scheduler (Slurm or PBSPRO) via XML Native command.
-
-``DOMAIN_PREGEN_BASEDIR``: (Default: "")
-   For use in NCO mode only (``RUN_ENVIR: "nco"``). The base directory containing pregenerated grid, orography, and surface climatology files. This is an alternative for setting ``GRID_DIR``, ``OROG_DIR``, and ``SFC_CLIMO_DIR`` individually. For the pregenerated grid specified by ``PREDEF_GRID_NAME``, these "fixed" files are located in:
-
-   .. code-block:: console
-
-      ${DOMAIN_PREGEN_BASEDIR}/${PREDEF_GRID_NAME}
-
-   The workflow scripts will create a symlink in the experiment directory that will point to a subdirectory (having the same name as the experiment grid) under this directory. This variable should be set to a null string in ``config_defaults.yaml``, but it can be changed in the user-specified workflow configuration file set by ``EXPT_CONFIG_FN`` (usually ``config.yaml``).
-
-``PRE_TASK_CMDS``: (Default: "")
-   Pre-task commands such as ``ulimit`` needed by tasks. For example: ``'{ ulimit -s unlimited; ulimit -a; }'``
-
 Machine-Dependent Parameters
 -------------------------------
 These parameters vary depending on machine. On `Level 1 and 2 `__ systems, the appropriate values for each machine can be viewed in the ``ush/machine/<platform>.sh`` scripts.
To specify a value other than the default, add these variables and the desired value in the ``config.yaml`` file so that they override the ``config_defaults.yaml`` and machine default values. ``PARTITION_DEFAULT``: (Default: "") - This variable is only used with the Slurm job scheduler (i.e., when ``SCHED: "slurm"``). This is the default partition to which Slurm submits workflow tasks. When a variable that designates the partition (e.g., ``PARTITION_HPSS``, ``PARTITION_FCST``; see below) is **not** specified, the task will be submitted to the default partition indicated in the ``PARTITION_DEFAULT`` variable. If this value is not set or is set to an empty string, it will be (re)set to a machine-dependent value. Options are machine-dependent and include: ``""`` | ``"hera"`` | ``"normal"`` | ``"orion"`` | ``"sjet"`` | ``"vjet"`` | ``"kjet"`` | ``"xjet"`` | ``"workq"`` + This variable is only used with the Slurm job scheduler (i.e., when ``SCHED: "slurm"``). This is the default partition to which Slurm submits workflow tasks. If the task's ``PARTITION_HPSS`` or ``PARTITION_FCST`` (see below) parameters are **not** specified, the task will be submitted to the default partition indicated in the ``PARTITION_DEFAULT`` variable. If this value is not set or is set to an empty string, it will be (re)set to a machine-dependent value. Options are machine-dependent and include: ``""`` | ``"hera"`` | ``"normal"`` | ``"orion"`` | ``"sjet"`` | ``"vjet"`` | ``"kjet"`` | ``"xjet"`` | ``"workq"`` ``QUEUE_DEFAULT``: (Default: "") The default queue or QOS to which workflow tasks are submitted (QOS is Slurm's term for queue; it stands for "Quality of Service"). If the task's ``QUEUE_HPSS`` or ``QUEUE_FCST`` parameters (see below) are not specified, the task will be submitted to the queue indicated by this variable. If this value is not set or is set to an empty string, it will be (re)set to a machine-dependent value. Options are machine-dependent and include: ``""`` | ``"batch"`` | ``"dev"`` | ``"normal"`` | ``"regular"`` | ``"workq"`` @@ -108,19 +129,39 @@ These parameters vary depending on machine. On `Level 1 and 2 `. When pulling observations from NOAA HPSS, the data retrieved will be placed in the ``CCPA_OBS_DIR`` directory. METplus is configured to verify 01-, 03-, 06-, and 24-h accumulated precipitation using hourly CCPA files. +``NOHRSC_OBS_DIR``: (Default: ``"{{ workflow.EXPTDIR }}/obs_data/nohrsc/proc"``) + User-specified location of top-level directory where NOHRSC 6- and 24-hour snowfall accumulation files used by METplus are located (or, if retrieved by the workflow, where they will be placed). See comments in file scripts/exregional_get_verif_obs.sh for more details about files and directory structure + + .. attention:: + Do not set this to the same path as other ``*_OBS_DIR`` variables; otherwise unexpected results and data loss may occur. .. note:: - There is a problem with the valid time in the metadata for files valid from 19 - 00 UTC (i.e., files under the "00" directory) for dates up until 2021-05-04. The script to pull the CCPA data from the NOAA HPSS (``scripts/exregional_get_verif_obs.sh``) has an example of how to account for this and organize the data into a more intuitive format. - -``NOHRSC_OBS_DIR``: (Default: "") - User-specified location of top-level directory where NOHRSC 06- and 24-hour snowfall accumulation files (available every 6 and 12 hours respectively) used by METplus are located. 
This parameter needs to be set for both user-provided observations and for observations that are retrieved from the NOAA :term:`HPSS` (if the user has access) via the ``get_obs_nohrsc`` task. (This task is activated in the workflow by using the taskgroup file ``parm/wflow/verify_pre.yaml``). + Due to limited availability of NOHRSC observation data on NOAA :term:`HPSS` and the likelihood that snowfall accumulation verification will not be desired outside of winter cases, this verification option is currently not present in the workflow by default. In order to use it, the verification environment variable ``VX_FIELDS`` should be updated to include ``ASNOW``. This will allow the related workflow tasks to be run. - METplus configuration files require the use of a predetermined directory structure and file names. If the NOHRSC files are user-provided, they need to follow the anticipated naming structure: ``{YYYYMMDD}/sfav2_CONUS_{AA}h_{YYYYMMDD}{HH}_grid184.grb2``, where AA is the 2-digit accumulation duration, and YYYYMMDD and HH are as described in the note :ref:`above `. When pulling observations from NOAA HPSS, the data retrieved will be placed in the ``NOHRSC_OBS_DIR`` directory. METplus is configured to verify 06-, and 24-h accumulated precipitation using NOHRSC files. +``MRMS_OBS_DIR``: (Default: ``"{{ workflow.EXPTDIR }}/obs_data/mrms/proc"``) + User-specified location of the directory where :term:`MRMS` composite reflectivity and echo top files used by METplus are located (or, if retrieved by the workflow, where they will be placed). See comments in the ``scripts/exregional_get_verif_obs.sh`` for more details about files and directory structure. + + .. attention:: + Do not set this to the same path as other ``*_OBS_DIR`` variables; otherwise unexpected results and data loss may occur. - .. note:: - Due to limited availability of NOHRSC observation data on NOAA HPSS, and the likelihood that snowfall acumulation verification will not be desired outside of winter cases, this verification option is currently not present in the workflow by default. In order to use it, the verification environment variable VX_FIELDS should be updated to include ``ASNOW``. This will allow the related workflow tasks to be run. +``NDAS_OBS_DIR``: (Default: ``"{{ workflow.EXPTDIR }}/obs_data/ndas/proc"``) + User-specified location of top-level directory where :term:`NDAS` prepbufr files used by METplus are located (or, if retrieved by the workflow, where they will be placed). See comments in file ``scripts/exregional_get_verif_obs.sh`` for more details about files and directory structure. + + .. attention:: + Do not set this to the same path as other ``*_OBS_DIR`` variables; otherwise unexpected results and data loss may occur. -``MRMS_OBS_DIR``: (Default: "") - User-specified location of top-level directory where MRMS composite reflectivity files used by METplus are located. This parameter needs to be set for both user-provided observations and for observations that are retrieved from the NOAA :term:`HPSS` (if the user has access) via the ``get_obs_mrms`` task (activated in the workflow automatically when using the taskgroup file ``parm/wflow/verify_pre.yaml``). When pulling observations directly from NOAA HPSS, the data retrieved will be placed in this directory. +Other Platform-Specific Directories +-------------------------------------- - METplus configuration files require the use of a predetermined directory structure and file names. 
Therefore, if the MRMS files are user-provided, they need to follow the anticipated naming structure: ``{YYYYMMDD}/MergedReflectivityQCComposite_00.50_{YYYYMMDD}-{HH}{mm}{SS}.grib2``, where YYYYMMDD and {HH}{mm}{SS} are as described in the note :ref:`above `. +``DOMAIN_PREGEN_BASEDIR``: (Default: "") + For use in NCO mode only (``RUN_ENVIR: "nco"``). The base directory containing pregenerated grid, orography, and surface climatology files. This is an alternative for setting ``GRID_DIR``, ``OROG_DIR``, and ``SFC_CLIMO_DIR`` individually. For the pregenerated grid specified by ``PREDEF_GRID_NAME``, these "fixed" files are located in: -.. note:: - METplus is configured to look for a MRMS composite reflectivity file for the valid time of the forecast being verified, which is why the minutes and seconds of the filename are hard-coded as "0000". Because MRMS composite reflectivity files do not typically match the valid time exactly, a script (``ush/mrms_pull_topofhour.py``) is called from within the MRMS task that identifies and renames the MRMS file nearest to the valid time to match the valid time of the forecast. This script can also be called separately for staging data independently of the workflow. + .. code-block:: console -``NDAS_OBS_DIR``: (Default: "") - User-specified location of the top-level directory where NDAS prepbufr files used by METplus are located. This parameter needs to be set for both user-provided observations and for observations that are retrieved from the NOAA :term:`HPSS` (if the user has access) via the ``get_obs_ndas`` task (activated in the workflow automatically when using the taskgroup file ``parm/wflow/verify_pre.yaml``). When pulling observations directly from NOAA HPSS, the data retrieved will be placed in this directory. METplus is configured to verify near-surface variables hourly and upper-air variables at 00 and 12 UTC with NDAS prepbufr files. + ${DOMAIN_PREGEN_BASEDIR}/${PREDEF_GRID_NAME} - METplus configuration files require the use of predetermined file names. Therefore, if the NDAS files are user-provided, they need to follow the anticipated naming structure: ``prepbufr.ndas.{YYYYMMDDHH}``, where YYYYMMDDHH is as described in the note :ref:`above `. The script to pull the NDAS data from the NOAA HPSS (``scripts/exregional_get_verif_obs.sh``) has an example of how to rename the NDAS data into a more intuitive format with the valid time listed in the file name. + The workflow scripts will create a symlink in the experiment directory that will point to a subdirectory (having the same name as the experiment grid) under this directory. Test Directories ---------------------- -These directories are used only by the ``run_WE2E_tests.py`` script, so they are not used unless the user runs a Workflow End-to-End (WE2E) test (see :numref:`Chapter %s `). Their function corresponds to the same variables without the ``TEST_`` prefix. Users typically should not modify these variables. For any alterations, the logic in the ``run_WE2E_tests.py`` script would need to be adjusted accordingly. +These directories are used only by the ``run_WE2E_tests.py`` script, so they are not used unless the user runs a Workflow End-to-End (WE2E) test (see :numref:`Section %s `). Their function corresponds to the same variables without the ``TEST_`` prefix. Users typically should not modify these variables. For any alterations, the logic in the ``run_WE2E_tests.py`` script would need to be adjusted accordingly. 
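For reference, these test paths are normally staged per machine; a hypothetical machine-file entry for the variables described below might look like this (the paths are illustrative placeholders and vary by platform):

.. code-block:: yaml

   platform:
     # Illustrative staging locations; actual paths vary by platform.
     TEST_EXTRN_MDL_SOURCE_BASEDIR: /path/to/UFS_SRW_App/develop/input_model_data
     TEST_PREGEN_BASEDIR: /path/to/UFS_SRW_App/develop/FV3LAM_pregen
     TEST_CCPA_OBS_DIR: /path/to/UFS_SRW_App/develop/obs_data/ccpa/proc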
``TEST_EXTRN_MDL_SOURCE_BASEDIR``: (Default: "")
    This parameter allows testing of user-staged files in a known location on a given platform. This path contains a limited dataset and likely will not be useful for most user experiments.

+``TEST_AQM_INPUT_BASEDIR``: (Default: "")
+   The path to user-staged AQM fire emission data for WE2E testing.
+
 ``TEST_PREGEN_BASEDIR``: (Default: "")
    Similar to ``DOMAIN_PREGEN_BASEDIR``, this variable sets the base directory containing pregenerated grid, orography, and surface climatology files for WE2E tests. This is an alternative for setting ``GRID_DIR``, ``OROG_DIR``, and ``SFC_CLIMO_DIR`` individually.

@@ -184,6 +238,43 @@ These directories are used only by the ``run_WE2E_tests.py`` script, so they are
 ``TEST_CCPA_OBS_DIR``, ``TEST_MRMS_OBS_DIR``, ``TEST_NDAS_OBS_DIR``: (Default: "")
    These parameters are used by the testing script to test the mechanism that allows users to point to data streams on disk for observation data for verification tasks. They test the ability for users to set ``CCPA_OBS_DIR``, ``MRMS_OBS_DIR``, and ``NDAS_OBS_DIR`` respectively.

+``TEST_VX_FCST_INPUT_BASEDIR``: (Default: "")
+   The path to user-staged forecast files in a known location on a given platform, used for WE2E testing of verification.
+
+.. _SystemFixedFiles:
+
+Fixed File Directory Parameters
+----------------------------------
+
+These parameters are associated with the fixed (i.e., static) files. On `Level 1 & 2 `__ systems, fixed files are pre-staged with paths defined in the ``setup.py`` script. Because the default values are platform-dependent, they are set to a null string in ``config_defaults.yaml``. Then these null values are overwritten in ``setup.py`` with machine-specific values or with a user-specified value from ``config.yaml``.
+
+``FIXgsm``: (Default: "")
+   Path to the system directory containing the majority of fixed (i.e., time-independent) files that are needed to run the FV3-LAM model.
+
+``FIXaer``: (Default: "")
+   Path to the system directory containing :term:`MERRA2` aerosol climatology files. Only used if running with a physics suite that uses Thompson microphysics.
+
+``FIXlut``: (Default: "")
+   Path to the system directory containing the lookup tables for optics properties. Only used if running with a physics suite that uses Thompson microphysics.
+
+``FIXorg``: (Default: "")
+   Path to the system directory containing static orography data (used by the ``make_orog`` task). Can be the same as ``FIXgsm``.
+
+``FIXsfc``: (Default: "")
+   Path to the system directory containing the static surface climatology input fields, used by ``sfc_climo_gen``. These files are only used if the ``MAKE_SFC_CLIMO`` task is meant to run.
+
+``FIXshp``: (Default: "")
+   System directory containing the graphics shapefiles. On Level 1 systems, these are set within the machine files. Users on other systems will need to provide the path to the directory that contains the *Natural Earth* shapefiles.
+
+``FIXcrtm``: (Default: "")
+   Path to system directory containing CRTM fixed files.
+
+``FIXcrtmupp``: (Default: "")
+   Path to system directory containing CRTM fixed files specifically for UPP.
+
+``EXTRN_MDL_DATA_STORES``: (Default: "")
+   A list of data stores where the scripts should look for external model data. The list is in priority order. If disk information is provided via ``USE_USER_STAGED_EXTRN_FILES`` or a known location on the platform, the disk location will be highest priority.
Valid values (in priority order): ``disk`` | ``hpss`` | ``aws`` | ``nomads``. + .. _workflow: WORKFLOW Configuration Parameters @@ -191,19 +282,31 @@ WORKFLOW Configuration Parameters If non-default parameters are selected for the variables in this section, they should be added to the ``workflow:`` section of the ``config.yaml`` file. +``WORKFLOW_ID``: (Default: ``!nowtimestamp ''``) + Unique ID for the workflow run that will be set in ``setup.py``. + +``RELATIVE_LINK_FLAG``: (Default: "--relative") + How to make links. The default is relative links; users may set an empty string for absolute paths in links. + .. _Cron: Cron-Associated Parameters ------------------------------ -Cron is a job scheduler accessed through the command-line on UNIX-like operating systems. It is useful for automating tasks such as the ``rocotorun`` command, which launches each workflow task in the SRW App. Cron periodically checks a cron table (aka crontab) to see if any tasks are are ready to execute. If so, it runs them. +Cron is a job scheduler accessed through the command-line on UNIX-like operating systems. It is useful for automating tasks such as the ``rocotorun`` command, which launches each workflow task in the SRW App. Cron periodically checks a cron table (aka crontab) to see if any tasks are ready to execute. If so, it runs them. ``USE_CRON_TO_RELAUNCH``: (Default: false) - Flag that determines whether or not a line is added to the user's cron table, which calls the experiment launch script every ``CRON_RELAUNCH_INTVL_MNTS`` minutes. Valid values: ``True`` | ``False`` + Flag that determines whether to add a line to the user's cron table, which calls the experiment launch script every ``CRON_RELAUNCH_INTVL_MNTS`` minutes. Valid values: ``True`` | ``False`` ``CRON_RELAUNCH_INTVL_MNTS``: (Default: 3) The interval (in minutes) between successive calls of the experiment launch script by a cron job to (re)launch the experiment (so that the workflow for the experiment kicks off where it left off). This is used only if ``USE_CRON_TO_RELAUNCH`` is set to true. +``CRONTAB_LINE``: (Default: "") + The launch command that will appear in the crontab (e.g., ``*/3 * * * * cd && ./launch_FV3LAM_wflow.sh called_from_cron="TRUE"``). + +``LOAD_MODULES_RUN_TASK_FP``: (Default: ``'{{ [user.USHdir, "load_modules_run_task.sh"]|path_join }}'``) + Path to the ``load_modules_run_task.sh`` file. + .. _DirParams: Directory Parameters @@ -219,18 +322,20 @@ Directory Parameters EXPTDIR="${EXPT_BASEDIR}/${EXPT_SUBDIR}" - This parameter cannot be left as a null string. It must be set to a non-null value in the user-defined experiment configuration file (i.e., ``config.yaml``). + This parameter must be set to a non-null value in the user-defined experiment configuration file (i.e., ``config.yaml``). ``EXEC_SUBDIR``: (Default: "exec") The name of the subdirectory of ``ufs-srweather-app`` where executables are installed. +``EXPTDIR``: (Default: ``'{{ [workflow.EXPT_BASEDIR, workflow.EXPT_SUBDIR]|path_join }}'``) + The full path to the experiment directory. By default, this value is equivalent to ``"${EXPT_BASEDIR}/${EXPT_SUBDIR}"``, but the user can define it differently in the configuration file if desired. + Pre-Processing File Separator Parameters -------------------------------------------- ``DOT_OR_USCORE``: (Default: "_") This variable sets the separator character(s) to use in the names of the grid, mosaic, and orography fixed files. 
Ideally, the same separator should be used in the names of these fixed files as in the surface climatology fixed files. Valid values: ``"_"`` | ``"."``

-
 Set File Name Parameters
 ----------------------------

 ``EXPT_CONFIG_FN``: (Default: "config.yaml")
    Name of the user-specified configuration file for the forecast experiment.

 ``CONSTANTS_FN``: (Default: "constants.yaml")
-   Name of the file containing definitions of various mathematical, physical, and SRW App contants.
+   Name of the file containing definitions of various mathematical, physical, and SRW App constants.

 ``RGNL_GRID_NML_FN``: (Default: "regional_grid.nml")
    Name of the file containing namelist settings for the code that generates an "ESGgrid" regional grid.

-``FV3_NML_BASE_SUITE_FN``: (Default: "input.nml.FV3")
-   Name of the Fortran file containing the forecast model's base suite namelist (i.e., the portion of the namelist that is common to all physics suites).
+``FV3_NML_FN``: (Default: "input.nml")
+   Name of the forecast model's namelist file. It includes the information in ``FV3_NML_BASE_SUITE_FN`` (i.e., ``input.nml.FV3``), ``FV3_NML_YAML_CONFIG_FN`` (i.e., ``FV3.input.yml``), and the user configuration file (i.e., ``config.yaml``).

-``FV3_NML_YAML_CONFIG_FN``: (Default: "FV3.input.yml")
-   Name of YAML configuration file containing the forecast model's namelist settings for various physics suites.
+``FV3_NML_BASE_SUITE_FN``: (Default: ``"{{ FV3_NML_FN }}.FV3"``)
+   Name of the Fortran file containing the forecast model's base suite namelist (i.e., the portion of the namelist that is common to all physics suites). By default, it will be named ``input.nml.FV3``.

-``FV3_NML_BASE_ENS_FN``: (Default: "input.nml.base_ens")
+``FV3_NML_YAML_CONFIG_FN``: (Default: ``"FV3.input.yml"``)
+   Name of the YAML configuration file containing the forecast model's namelist settings for various physics suites.
+
+``FV3_NML_BASE_ENS_FN``: (Default: ``"{{ FV3_NML_FN }}.base_ens"``)
    Name of the Fortran file containing the forecast model's base ensemble namelist (i.e., the original namelist file from which each of the ensemble members' namelist files is generated).

 ``FV3_EXEC_FN``: (Default: "ufs_model")
    Name to use for the forecast model executable.

-``DIAG_TABLE_TMPL_FN``: (Default: "")
-   Name of a template file that specifies the output fields of the forecast model. The selected physics suite is appended to this file name in ``setup.py``, taking the form ``{DIAG_TABLE_TMPL_FN}.{CCPP_PHYS_SUITE}``. Generally, the SRW App expects to read in the default value set in ``setup.py`` (i.e., ``diag_table.{CCPP_PHYS_SUITE}``), and users should **not** specify a value for ``DIAG_TABLE_TMPL_FN`` in their configuration file (i.e., ``config.yaml``) unless (1) the file name required by the model changes, and (2) they also change the names of the ``diag_table`` options in the ``ufs-srweather-app/parm`` directory.
+``DATA_TABLE_FN``: (Default: "data_table")
+   Name of the file that contains the data table read in by the forecast model.
+
+``DIAG_TABLE_FN``: (Default: "diag_table")
+   Prefix for the name of the file that specifies the output fields of the forecast model.
+
+``FIELD_TABLE_FN``: (Default: "field_table")
+   Prefix for the name of the file that specifies the :term:`tracers ` that the forecast model will read in from the :term:`IC/LBC ` files.
+
+.. _tmpl-fn-warning:
+
+..
attention:: + + For the file names below, the SRW App expects to read in the default value set in ``setup.py`` (e.g., ``diag_table.{CCPP_PHYS_SUITE}``), and users should **not** specify a value for these variables in their configuration file (i.e., ``config.yaml``) unless (1) the file name required by the model changes and (2) they also change the names of the corresponding files in the ``ufs-srweather-app/parm`` directory (e.g., change the names of ``diag_table`` options in ``parm`` when setting ``DIAG_TABLE_TMPL_FN``). + +``DIAG_TABLE_TMPL_FN``: (Default: ``'diag_table.{{ CCPP_PHYS_SUITE }}'``) + Name of a template file that specifies the output fields of the forecast model. The selected physics suite is appended to this file name in ``setup.py``, taking the form ``{DIAG_TABLE_TMPL_FN}.{CCPP_PHYS_SUITE}``. In general, users should not set this variable in their configuration file (see :ref:`note `). + +``FIELD_TABLE_TMPL_FN``: (Default: ``'field_table.{{ CCPP_PHYS_SUITE }}'``) + Name of a template file that specifies the :term:`tracers ` that the forecast model will read in from the :term:`IC/LBC ` files. The selected physics suite is appended to this file name in ``setup.py``, taking the form ``{FIELD_TABLE_TMPL_FN}.{CCPP_PHYS_SUITE}``. In general, users should not set this variable in their configuration file (see :ref:`note `). + +``MODEL_CONFIG_FN``: (Default: "model_configure") + Name of a file that contains settings and configurations for the :term:`NUOPC`/:term:`ESMF` main component. In general, users should not set this variable in their configuration file (see :ref:`note `). + +``NEMS_CONFIG_FN``: (Default: "nems.configure") + Name of a file that contains information about the various :term:`NEMS` components and their run sequence. In general, users should not set this variable in their configuration file (see :ref:`note `). + +``AQM_RC_FN``: (Default: "aqm.rc") + Name of resource file for NOAA Air Quality Model (AQM). + +``AQM_RC_TMPL_FN``: (Default: "aqm.rc") + Template file name of resource file for NOAA Air Quality Model (AQM). + +Set File Path Parameters +---------------------------- + +``FV3_NML_BASE_SUITE_FP``: (Default: ``'{{ [user.PARMdir, FV3_NML_BASE_SUITE_FN]|path_join }}'``) + Path to the ``FV3_NML_BASE_SUITE_FN`` file. + +``FV3_NML_YAML_CONFIG_FP``: (Default: ``'{{ [user.PARMdir, FV3_NML_YAML_CONFIG_FN]|path_join }}'``) + Path to the ``FV3_NML_YAML_CONFIG_FN`` file. + +``FV3_NML_BASE_ENS_FP``: (Default: ``'{{ [EXPTDIR, FV3_NML_BASE_ENS_FN]|path_join }}'``) + Path to the ``FV3_NML_BASE_ENS_FN`` file. + +``DATA_TABLE_TMPL_FP``: (Default: ``'{{ [user.PARMdir, DATA_TABLE_FN]|path_join }}'``) + Path to the ``DATA_TABLE_FN`` file. -``FIELD_TABLE_TMPL_FN``: (Default: "") - Name of a template file that specifies the :term:`tracers ` that the forecast model will read in from the :term:`IC/LBC ` files. The selected physics suite is appended to this file name in ``setup.py``, taking the form ``{FIELD_TABLE_TMPL_FN}.{CCPP_PHYS_SUITE}``. Generally, the SRW App expects to read in the default value set in ``setup.py`` (i.e., ``field_table.{CCPP_PHYS_SUITE}``), and users should **not** specify a different value for ``FIELD_TABLE_TMPL_FN`` in their configuration file (i.e., ``config.yaml``) unless (1) the file name required by the model changes, and (2) they also change the names of the ``field_table`` options in the ``ufs-srweather-app/parm`` directory. 
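To make the template naming above concrete: assuming the illustrative physics suite ``FV3_GFS_v16``, the template file names would render as follows (a sketch):

.. code-block:: yaml

   # Illustrative rendered values for CCPP_PHYS_SUITE: FV3_GFS_v16
   DIAG_TABLE_TMPL_FN: diag_table.FV3_GFS_v16
   FIELD_TABLE_TMPL_FN: field_table.FV3_GFS_v16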
+``DIAG_TABLE_TMPL_FP``: (Default: ``'{{ [user.PARMdir, DIAG_TABLE_TMPL_FN]|path_join }}'``) + Path to the ``DIAG_TABLE_TMPL_FN`` file. -``DATA_TABLE_TMPL_FN``: (Default: "") - Name of a template file that contains the data table read in by the forecast model. Generally, the SRW App expects to read in the default value set in ``setup.py`` (i.e., ``data_table``), and users should **not** specify a different value for ``DATA_TABLE_TMPL_FN`` in their configuration file (i.e., ``config.yaml``) unless (1) the file name required by the model changes, and (2) they also change the name of ``data_table`` in the ``ufs-srweather-app/parm`` directory. +``FIELD_TABLE_TMPL_FP``: (Default: ``'{{ [user.PARMdir, FIELD_TABLE_TMPL_FN]|path_join }}'``) + Path to the ``FIELD_TABLE_TMPL_FN`` file. -``MODEL_CONFIG_TMPL_FN``: (Default: "") - Name of a template file that contains settings and configurations for the :term:`NUOPC`/:term:`ESMF` main component. Generally, the SRW App expects to read in the default value set in ``setup.py`` (i.e., ``model_configure``), and users should **not** specify a different value for ``MODEL_CONFIG_TMPL_FN`` in their configuration file (i.e., ``config.yaml``) unless (1) the file name required by the model changes, and (2) they also change the name of ``model_configure`` in the ``ufs-srweather-app/parm`` directory. +``MODEL_CONFIG_TMPL_FP``: (Default: ``'{{ [user.PARMdir, MODEL_CONFIG_FN]|path_join }}'``) + Path to the ``MODEL_CONFIG_FN`` file. -``NEMS_CONFIG_TMPL_FN``: (Default: "") - Name of a template file that contains information about the various :term:`NEMS` components and their run sequence. Generally, the SRW App expects to read in the default value set in ``setup.py`` (i.e., ``nems.configure``), and users should **not** specify a different value for ``NEMS_CONFIG_TMPL_FN`` in their configuration file (i.e., ``config.yaml``) unless (1) the file name required by the model changes, and (2) they also change the name of ``nems.configure`` in the ``ufs-srweather-app/parm`` directory. +``NEMS_CONFIG_TMPL_FP``: (Default: ``'{{ [user.PARMdir, NEMS_CONFIG_FN]|path_join }}'``) + Path to the ``NEMS_CONFIG_FN`` file. + +``AQM_RC_TMPL_FP``: (Default: ``'{{ [user.PARMdir, AQM_RC_TMPL_FN]|path_join }}'``) + Path to the ``AQM_RC_TMPL_FN`` file. + + +*Experiment Directory* Files and Paths +------------------------------------------ + +This section contains files and paths to files that are staged in the experiment directory at configuration time. + +``DATA_TABLE_FP``: (Default: ``'{{ [EXPTDIR, DATA_TABLE_FN]|path_join }}'``) + Path to the data table in the experiment directory. + +``FIELD_TABLE_FP``: (Default: ``'{{ [EXPTDIR, FIELD_TABLE_FN]|path_join }}'``) + Path to the field table in the experiment directory. (The field table specifies tracers that the forecast model reads in.) + +``NEMS_CONFIG_FP``: (Default: ``'{{ [EXPTDIR, NEMS_CONFIG_FN]|path_join }}'``) + Path to the ``NEMS_CONFIG_FN`` file in the experiment directory. + +``FV3_NML_FP``: (Default: ``'{{ [EXPTDIR, FV3_NML_FN]|path_join }}'``) + Path to the ``FV3_NML_FN`` file in the experiment directory. + +``FV3_NML_STOCH_FP``: (Default: ``'{{ [EXPTDIR, [FV3_NML_FN, "_stoch"]|join ]|path_join }}'``) + Path to a namelist file that includes stochastic physics namelist parameters. ``FCST_MODEL``: (Default: "ufs-weather-model") Name of forecast model. Valid values: ``"ufs-weather-model"`` | ``"fv3gfs_aqm"`` @@ -277,7 +454,10 @@ Set File Name Parameters Name of the Rocoto workflow XML file that the experiment generation script creates. 
This file defines the workflow for the experiment. ``GLOBAL_VAR_DEFNS_FN``: (Default: "var_defns.sh") - Name of the file (a shell script) containing definitions of the primary and secondary experiment variables (parameters). This file is sourced by many scripts (e.g., the J-job scripts corresponding to each workflow task) in order to make all the experiment variables available in those scripts. The primary variables are defined in the default configuration script (``config_defaults.yaml``) and in ``config.yaml``. The secondary experiment variables are generated by the experiment generation script. + Name of the file (a shell script) containing definitions of the primary and secondary experiment variables (parameters). This file is sourced by many scripts (e.g., the J-job scripts corresponding to each workflow task) in order to make all the experiment variables available in those scripts. The primary variables are defined in the default configuration file (``config_defaults.yaml``) and in the user configuration file (``config.yaml``). The secondary experiment variables are generated by the experiment generation script. + +``ROCOTO_YAML_FN``: (Default: "rocoto_defns.yaml") + Name of the YAML file containing the workflow definition from which the Rocoto XML file is created. ``EXTRN_MDL_VAR_DEFNS_FN``: (Default: "extrn_mdl_var_defns") Name of the file (a shell script) containing the definitions of variables associated with the external model from which :term:`ICs` or :term:`LBCs` are generated. This file is created by the ``GET_EXTRN_*`` task because the values of the variables it contains are not known before this task runs. The file is then sourced by the ``MAKE_ICS`` and ``MAKE_LBCS`` tasks. @@ -288,10 +468,45 @@ Set File Name Parameters ``WFLOW_LAUNCH_LOG_FN``: (Default: "log.launch_FV3LAM_wflow") Name of the log file that contains the output from successive calls to the workflow launch script (``WFLOW_LAUNCH_SCRIPT_FN``). +``GLOBAL_VAR_DEFNS_FP``: (Default: ``'{{ [EXPTDIR, GLOBAL_VAR_DEFNS_FN] |path_join }}'``) + Path to the global variable definition file (``GLOBAL_VAR_DEFNS_FN``) in the experiment directory. + +``ROCOTO_YAML_FP``: (Default: ``'{{ [EXPTDIR, ROCOTO_YAML_FN] |path_join }}'``) + Path to the Rocoto YAML configuration file (``ROCOTO_YAML_FN``) in the experiment directory. + +``WFLOW_LAUNCH_SCRIPT_FP``: (Default: ``'{{ [user.USHdir, WFLOW_LAUNCH_SCRIPT_FN] |path_join }}'``) + Path to the workflow launch script (``WFLOW_LAUNCH_SCRIPT_FN``) in the ``ush`` directory. + +``WFLOW_LAUNCH_LOG_FP``: (Default: ``'{{ [EXPTDIR, WFLOW_LAUNCH_LOG_FN] |path_join }}'``) + Path to the log file (``WFLOW_LAUNCH_LOG_FN``) in the experiment directory that contains output from successive calls to the workflow launch script. + +Experiment Fix File Paths +--------------------------- + +These parameters are associated with the fixed (i.e., static) files. Unlike the file path parameters in :numref:`Section %s `, which pertain to the locations of system data, the parameters in this section indicate fix file paths within the experiment directory (``$EXPTDIR``). + +``FIXdir``: (Default: ``'{{ EXPTDIR if rocoto.tasks.get("task_make_grid") else [user.HOMEdir, "fix"]|path_join }}'``) + Location where fix files will be stored for a given experiment. + +``FIXam``: (Default: ``'{{ [FIXdir, "fix_am"]|path_join }}'``) + Directory containing the fixed files (or symlinks to fixed files) for various fields on global grids (which are usually much coarser than the native FV3-LAM grid).
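As a sketch of how these fix file paths resolve (the experiment and installation paths here are hypothetical), the ``FIXdir`` default places fix files under the experiment directory when the ``task_make_grid`` task is present in the Rocoto task list and under ``HOMEdir`` otherwise:

.. code-block:: console

   # task_make_grid present in rocoto.tasks:
   FIXdir: /path/to/expt_dirs/my_expt              # i.e., $EXPTDIR
   FIXam:  /path/to/expt_dirs/my_expt/fix_am
   # task_make_grid absent:
   FIXdir: /path/to/ufs-srweather-app/fix          # i.e., $HOMEdir/fix
   FIXam:  /path/to/ufs-srweather-app/fix/fix_am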
+ +``FIXclim``: (Default: ``'{{ [FIXdir, "fix_clim"]|path_join }}'``) + Directory containing the MERRA2 aerosol climatology data file and lookup tables for optics properties. + +``FIXlam``: (Default: ``'{{ [FIXdir, "fix_lam"]|path_join }}'``) + Directory containing the fixed files (or symlinks to fixed files) for the grid, orography, and surface climatology on the native FV3-LAM grid. + +``THOMPSON_MP_CLIMO_FN``: (Default: "Thompson_MP_MONTHLY_CLIMO.nc") + Name of file that contains aerosol climatology data. It can be used to generate approximate versions of the aerosol fields needed by Thompson microphysics. This file will be used to generate such approximate aerosol fields in the :term:`ICs` and :term:`LBCs` if Thompson MP is included in the physics suite and if the external model for ICs or LBCs does not already provide these fields. + +``THOMPSON_MP_CLIMO_FP``: (Default: ``'{{ [FIXam, THOMPSON_MP_CLIMO_FN]|path_join }}'``) + Path to the file that contains aerosol climatology data (i.e., path to ``THOMPSON_MP_CLIMO_FN``). + .. _CCPP_Params: CCPP Parameter ------------------- +----------------- ``CCPP_PHYS_SUITE``: (Default: "FV3_GFS_v16") This parameter indicates which :term:`CCPP` (Common Community Physics Package) physics suite to use for the forecast(s). The choice of physics suite determines the forecast model's namelist file, the diagnostics table file, the field table file, and the XML physics suite definition file, which are staged in the experiment directory or the :term:`cycle` directories under it. @@ -307,8 +522,31 @@ CCPP Parameter | ``"FV3_WoFS_v0"`` | ``"FV3_RAP"`` - Other valid values can be found in the ``ush/valid_param_vals.yaml`` file, but users can not expect full support for these schemes. + Other valid values can be found in the ``ush/valid_param_vals.yaml`` file, but users cannot expect full support for these schemes. + +``CCPP_PHYS_SUITE_FN``: (Default: ``'suite_{{ CCPP_PHYS_SUITE }}.xml'``) + The name of the suite definition file (SDF) used for the experiment. + +``CCPP_PHYS_SUITE_IN_CCPP_FP``: (Default: ``'{{ [user.UFS_WTHR_MDL_DIR, "FV3", "ccpp", "suites", CCPP_PHYS_SUITE_FN] |path_join }}'``) + The full path to the suite definition file (SDF) in the forecast model's directory structure (e.g., ``/path/to/ufs-srweather-app/sorc/ufs-weather-model/FV3/ccpp/suites/$CCPP_PHYS_SUITE_FN``). + +``CCPP_PHYS_SUITE_FP``: (Default: ``'{{ [workflow.EXPTDIR, CCPP_PHYS_SUITE_FN]|path_join }}'``) + The full path to the suite definition file (SDF) in the experiment directory. + +``CCPP_PHYS_DIR``: (Default: ``'{{ [user.UFS_WTHR_MDL_DIR, "FV3", "ccpp", "physics", "physics"] |path_join }}'``) + The directory containing the CCPP physics source code. This is needed to link table(s) contained in that repository. + +Field Dictionary Parameters +------------------------------ + +``FIELD_DICT_FN``: (Default: "fd_nems.yaml") + The name of the field dictionary file. This file is a community-based dictionary for shared coupling fields and is automatically generated by the :term:`NUOPC` Layer. + +``FIELD_DICT_IN_UWM_FP``: (Default: ``'{{ [user.UFS_WTHR_MDL_DIR, "tests", "parm", FIELD_DICT_FN]|path_join }}'``) + The full path to ``FIELD_DICT_FN`` within the forecast model's directory structure (e.g., ``/path/to/ufs-srweather-app/sorc/ufs-weather-model/tests/parm/$FIELD_DICT_FN``). +``FIELD_DICT_FP``: (Default: ``'{{ [workflow.EXPTDIR, FIELD_DICT_FN]|path_join }}'``) + The full path to ``FIELD_DICT_FN`` in the experiment directory. .. 
_GridGen: @@ -330,47 +568,124 @@ Grid Generation Parameters If the experiment uses a **user-defined grid** (i.e., if ``PREDEF_GRID_NAME`` is set to a null string), then ``GRID_GEN_METHOD`` must be set in the experiment configuration file. Otherwise, the experiment generation will fail because the generation scripts check to ensure that the grid name is set to a non-empty string before creating the experiment directory. +.. _PredefGrid: + +Predefined Grid Parameters +------------------------------ + +``PREDEF_GRID_NAME``: (Default: "") + This parameter indicates which (if any) predefined regional grid to use for the experiment. Setting ``PREDEF_GRID_NAME`` provides a convenient method of specifying a commonly used set of grid-dependent parameters. The predefined grid settings can be viewed in the file ``ush/set_predef_grid_params.yaml``. + + **Currently supported options:** + + | ``"RRFS_CONUS_25km"`` + | ``"RRFS_CONUS_13km"`` + | ``"RRFS_CONUS_3km"`` + | ``"SUBCONUS_Ind_3km"`` + + **Other valid values include:** + + | ``"AQM_NA_13km"`` + | ``"CONUS_25km_GFDLgrid"`` + | ``"CONUS_3km_GFDLgrid"`` + | ``"GSD_HRRR_25km"`` + | ``"RRFS_AK_13km"`` + | ``"RRFS_AK_3km"`` + | ``"RRFS_CONUScompact_25km"`` + | ``"RRFS_CONUScompact_13km"`` + | ``"RRFS_CONUScompact_3km"`` + | ``"RRFS_NA_13km"`` + | ``"RRFS_NA_3km"`` + | ``"WoFS_3km"`` + +.. note:: + + * If ``PREDEF_GRID_NAME`` is set to a valid predefined grid name, the grid generation method, the (native) grid parameters, and the write component grid parameters are set to predefined values for the specified grid, overwriting any settings of these parameters in the user-specified experiment configuration file (``config.yaml``). In addition, if the time step ``DT_ATMOS`` and the computational parameters (``LAYOUT_X``, ``LAYOUT_Y``, and ``BLOCKSIZE``) are not specified in that configuration file, they are also set to predefined values for the specified grid. + + * If ``PREDEF_GRID_NAME`` is set to an empty string, it implies that the user will provide the native grid parameters in the user-specified experiment configuration file (``config.yaml``). In this case, the grid generation method, the native grid parameters, the write component grid parameters, the main time step (``DT_ATMOS``), and the computational parameters (``LAYOUT_X``, ``LAYOUT_Y``, and ``BLOCKSIZE``) must be set in the configuration file. Otherwise, the values of the parameters in the default experiment configuration file (``config_defaults.yaml``) will be used. + Forecast Parameters ---------------------- ``DATE_FIRST_CYCL``: (Default: "YYYYMMDDHH") - Starting date of the first forecast in the set of forecasts to run. Format is "YYYYMMDDHH". + Starting cycle date of the first forecast in the set of forecasts to run. Format is "YYYYMMDDHH". ``DATE_LAST_CYCL``: (Default: "YYYYMMDDHH") - Starting date of the last forecast in the set of forecasts to run. Format is "YYYYMMDDHH". + Starting cycle date of the last forecast in the set of forecasts to run. Format is "YYYYMMDDHH". ``INCR_CYCL_FREQ``: (Default: 24) Increment in hours for Rocoto cycle frequency. The default is 24, which means cycl_freq=24:00:00. ``FCST_LEN_HRS``: (Default: 24) - The length of each forecast, in integer hours. + The length of each forecast in integer hours. (When an experiment runs forecasts of different lengths, this is the short forecast length.)
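For instance, an illustrative configuration that runs one 24-hour forecast per day from June 15-17, 2019 could include the following entries (typically in the ``workflow:`` section of ``config.yaml``; the dates are hypothetical):

.. code-block:: console

   DATE_FIRST_CYCL: '2019061500'
   DATE_LAST_CYCL: '2019061700'
   INCR_CYCL_FREQ: 24
   FCST_LEN_HRS: 24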
+ +``LONG_FCST_LEN_HRS``: (Default: ``'{% if FCST_LEN_HRS < 0 %}{{ FCST_LEN_CYCL|max }}{% else %}{{ FCST_LEN_HRS }}{% endif %}'``) + The length of the longer forecast in integer hours in a system that varies the length of the forecast by time of day. There is no need for the user to update this value directly, as it is derived from ``FCST_LEN_CYCL`` when ``FCST_LEN_HRS=-1``. + +.. note:: + + Shorter forecasts are often used to save resources. However, users may occasionally want insight into conditions further in the future. In such cases, they can periodically run a longer forecast. For example, a researcher might run 18-hour forecasts for most cycles but run a longer 48-hour forecast at "synoptic times" (e.g., 0, 6, 12, 18 UTC). This is particularly common with resource-intensive :term:`DA ` systems that cycle frequently. + +``FCST_LEN_CYCL``: (Default: ``- '{{ FCST_LEN_HRS }}'``) + The length of the forecast for each cycle in a given day (in integer hours). This is valid only when ``FCST_LEN_HRS = -1``; the pattern recurs for all cycle dates. The list must have the same number of entries as there are cycles per day (as defined by 24/``INCR_CYCL_FREQ``); if the experiment spans less than one day, the entries must include the length of each cycle to be run. By default, it is set to a 1-item list containing the standard forecast length. + +.. hint:: + The interaction of ``FCST_LEN_HRS``, ``LONG_FCST_LEN_HRS``, and ``FCST_LEN_CYCL`` can be confusing. As an example, take an experiment with cycling every three hours, a short forecast length of 18 hours, and a long forecast length of 48 hours. The long forecasts are run at 0 and 12 UTC. Users would put the following entry in their configuration file: + + .. code-block:: console + + FCST_LEN_HRS: -1 + FCST_LEN_CYCL: + - 48 + - 18 + - 18 + - 18 + - 48 + - 18 + - 18 + - 18 + + By setting ``FCST_LEN_HRS: -1``, the experiment will derive the values of ``FCST_LEN_HRS`` (18) and ``LONG_FCST_LEN_HRS`` (48) for each cycle date. Pre-Existing Directory Parameter ------------------------------------ ``PREEXISTING_DIR_METHOD``: (Default: "delete") - This variable determines how to deal with pre-existing directories (resulting from previous calls to the experiment generation script using the same experiment name [``EXPT_SUBDIR``] as the current experiment). This variable must be set to one of three valid values: ``"delete"``, ``"rename"``, or ``"quit"``. The behavior for each of these values is as follows: + This variable determines how to deal with pre-existing directories (resulting from previous calls to the experiment generation script using the same experiment name [``EXPT_SUBDIR``] as the current experiment). This variable must be set to one of four valid values: ``"delete"``, ``"rename"``, ``"reuse"``, or ``"quit"``. The behavior for each of these values is as follows: * **"delete":** The preexisting directory is deleted and a new directory (having the same name as the original preexisting directory) is created. * **"rename":** The preexisting directory is renamed and a new directory (having the same name as the original pre-existing directory) is created. The new name of the preexisting directory consists of its original name and the suffix "_old###", where ``###`` is a 3-digit integer chosen to make the new name unique. + * **"reuse":** This method will keep the preexisting directory intact. However, when the preexisting directory is ``$EXPDIR``, this method will save all old files to a subdirectory ``oldxxx/`` and then populate new files into the ``$EXPDIR`` directory.
This is useful for keeping ongoing runs uninterrupted: Rocoto ``*db`` files and previous cycles stay in place, so there is no need to manually copy or move them back, and no need to manually restart Rocoto tasks that failed during the workflow generation process. This method may be best suited for incremental reuse of an existing system. + * **"quit":** The preexisting directory is left unchanged, but execution of the currently running script is terminated. In this case, the preexisting directory must be dealt with manually before rerunning the script. -Verbose Parameter --------------------- +Detailed Output Messages +-------------------------- +These variables are flags that indicate whether to print more detailed messages. + ``VERBOSE``: (Default: true) Flag that determines whether the experiment generation and workflow task scripts print out extra informational messages. Valid values: ``True`` | ``False`` -Debug Parameter -------------------- ``DEBUG``: (Default: false) Flag that determines whether to print out very detailed debugging messages. Note that if DEBUG is set to true, then VERBOSE will also be reset to true if it isn't already. Valid values: ``True`` | ``False`` -Compiler ----------- +Other -------- ``COMPILER``: (Default: "intel") Type of compiler invoked during the build step. Currently, this must be set manually; it is not inherited from the build system in the ``ufs-srweather-app`` directory. Valid values: ``"intel"`` | ``"gnu"`` +``SYMLINK_FIX_FILES``: (Default: true) + Flag that indicates whether to symlink fix files to the experiment directory (if true) or copy them (if false). Valid values: ``True`` | ``False`` + +``DO_REAL_TIME``: (Default: false) + Switch for a real-time run. Valid values: ``True`` | ``False`` + +``COLDSTART``: (Default: true) + Flag for turning the cold start for the first cycle on or off. Valid values: ``True`` | ``False`` + +``WARMSTART_CYCLE_DIR``: (Default: "/path/to/warm/start/cycle/dir") + Path to the cycle directory where the RESTART subdirectory is located for a warm start. .. _NCOModeParms: @@ -380,26 +695,71 @@ NCO-Specific Variables A standard set of environment variables has been established for *nco* mode to simplify the production workflow and improve the troubleshooting process for operational and preoperational models. These variables are only used in *nco* mode (i.e., when ``RUN_ENVIR: "nco"``). When non-default parameters are selected for the variables in this section, they should be added to the ``nco:`` section of the ``config.yaml`` file. .. note:: - Only *community* mode is fully supported for this release. *nco* mode is used by those at the Environmental Modeling Center (EMC) and Global Systems Laboratory (GSL) who are working on pre-implementation operational testing. Other users should run the SRW App in *community* mode. + Only *community* mode is fully supported for releases. *nco* mode is used by those at the Environmental Modeling Center (EMC) and Global Systems Laboratory (GSL) who are working on pre-implementation operational testing. Other users should run the SRW App in *community* mode. -``envir, NET, model_ver, RUN``: +``envir_default, NET_default, model_ver_default, RUN_default``: Standard environment variables defined in the NCEP Central Operations WCOSS Implementation Standards document. These variables are used in forming the path to various directories containing input, output, and workflow files. The variables are defined in the `WCOSS Implementation Standards `__ document (pp.
4-5) as follows: - ``envir``: (Default: "para") + ``envir_default``: (Default: "para") Set to "test" during the initial testing phase, "para" when running in parallel (on a schedule), and "prod" in production. - ``NET``: (Default: "rrfs") + ``NET_default``: (Default: "srw") Model name (first level of ``com`` directory structure) - ``model_ver``: (Default: "v1.0.0") + ``model_ver_default``: (Default: "v1.0.0") Version number of package in three digits (second level of ``com`` directory) - ``RUN``: (Default: "rrfs") - Name of model run (third level of ``com`` directory structure). In general, same as ``$NET``. + ``RUN_default``: (Default: "srw") + Name of model run (third level of ``com`` directory structure). In general, same as ``${NET_default}``. -``OPSROOT``: (Default: "") +``OPSROOT_default``: (Default: ``'{{ workflow.EXPT_BASEDIR }}/../nco_dirs'``) The operations root directory in *nco* mode. +``COMROOT_default``: (Default: ``'{{ OPSROOT_default }}/com'``) + The ``com`` root directory for input/output data that is located on the current system (typically ``$OPSROOT_default/com``). + +``DATAROOT_default``: (Default: ``'{{OPSROOT_default }}/tmp'``) + Directory containing the (temporary) working directory for running jobs; typically named ``$OPSROOT_default/tmp`` in production. + +``DCOMROOT_default``: (Default: ``'{{OPSROOT_default }}/dcom'``) + ``dcom`` root directory, typically ``$OPSROOT_default/dcom``. This directory contains input/incoming data that is retrieved from outside WCOSS. + +``LOGBASEDIR_default``: (Default: ``'{% if user.RUN_ENVIR == "nco" %}{{ [OPSROOT_default, "output"]|path_join }}{% else %}{{ [workflow.EXPTDIR, "log"]|path_join }}{% endif %}'``) + Directory in which the log files from the workflow tasks will be placed. + +``COMIN_BASEDIR``: (Default: ``'{{ COMROOT_default }}/{{ NET_default }}/{{ model_ver_default }}'``) + ``com`` directory for current model's input data, typically ``$COMROOT/$NET/$model_ver/$RUN.$PDY``. + +``COMOUT_BASEDIR``: (Default: ``'{{ COMROOT_default }}/{{ NET_default }}/{{ model_ver_default }}'``) + ``com`` directory for current model's output data, typically ``$COMROOT/$NET/$model_ver/$RUN.$PDY``. + +``DBNROOT_default``: (Default: "") + Root directory for the data-alerting utilities. + +``SENDECF_default``: (Default: false) + Boolean variable used to control ``ecflow_client`` child commands. + +``SENDDBN_default``: (Default: false) + Boolean variable used to control sending products off WCOSS2. + +``SENDDBN_NTC_default``: (Default: false) + Boolean variable used to control sending products with WMO headers off WCOSS2. + +``SENDCOM_default``: (Default: false) + Boolean variable to control data copies to ``$COMOUT``. + +``SENDWEB_default``: (Default: false) + Boolean variable used to control sending products to a web server, often ``ncorzdm``. + +``KEEPDATA_default``: (Default: true) + Boolean variable used to specify whether or not the working directory should be kept upon successful job completion. + +``MAILTO_default``: (Default: "") + List of email addresses to send email to. + +``MAILCC_default``: (Default: "") + List of email addresses to CC on email. + .. _make-grid: MAKE_GRID Configuration Parameters @@ -412,7 +772,7 @@ Basic Task Parameters For each workflow task, certain parameter values must be passed to the job scheduler (e.g., Slurm), which submits a job for the task. Typically, users do not need to adjust the default values. 
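For example, a user who already has pre-generated grid files could skip grid generation and point the workflow at them in ``config.yaml`` (the ``task_make_grid:`` section name follows the pattern used for the other tasks in this chapter, and the path is hypothetical):

.. code-block:: console

   task_make_grid:
     GRID_DIR: /path/to/pregen/grid_files

Here, ``GRID_DIR`` (described below) points to the directory containing the pre-generated grid files.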
- ``GRID_DIR``: (Default: "") + ``GRID_DIR``: (Default: ``'{{ [workflow.EXPTDIR, "grid"]|path_join if rocoto.tasks.get("task_make_grid") else "" }}'``) The directory containing pre-generated grid files when the ``MAKE_GRID`` task is not meant to run. .. _ESGgrid: @@ -440,17 +800,17 @@ The following parameters must be set if using the "ESGgrid" method to generate a ``ESGgrid_NY``: (Default: "") The number of cells in the meridional direction on the regional grid. -``ESGgrid_PAZI``: (Default: "") - The rotational parameter for the "ESGgrid" (in degrees). - ``ESGgrid_WIDE_HALO_WIDTH``: (Default: "") - The width (in number of grid cells) of the :term:`halo` to add around the regional grid before shaving the halo down to the width(s) expected by the forecast model. The user need not specify this variable since it is set automatically in ``set_gridparams_ESGgrid.py`` + The width (in number of grid cells) of the :term:`halo` to add around the regional grid before shaving the halo down to the width(s) expected by the forecast model. The user need not specify this variable since it is set automatically in ``set_gridparams_ESGgrid.py``. .. _WideHalo: .. note:: A :term:`halo` is the strip of cells surrounding the regional grid; the halo is used to feed in the lateral boundary conditions to the grid. The forecast model requires **grid** files containing 3-cell- and 4-cell-wide halos and **orography** files with 0-cell- and 3-cell-wide halos. In order to generate grid and orography files with appropriately-sized halos, the grid and orography tasks create preliminary files with halos around the regional domain of width ``ESGgrid_WIDE_HALO_WIDTH`` cells. The files are then read in and "shaved" down to obtain grid files with 3-cell-wide and 4-cell-wide halos and orography files with 0-cell-wide and 3-cell-wide halos. The original halo that gets shaved down is referred to as the "wide" halo because it is wider than the 0-cell-wide, 3-cell-wide, and 4-cell-wide halos that users eventually end up with. Note that the grid and orography files with the wide halo are only needed as intermediates in generating the files with 0-cell-, 3-cell-, and 4-cell-wide halos; they are not needed by the forecast model. +``ESGgrid_PAZI``: (Default: "") + The rotational parameter for the "ESGgrid" (in degrees). + GFDLgrid Settings --------------------- @@ -508,7 +868,7 @@ Note that the regional grid is defined with respect to a "parent" global cubed-s j-index on tile 6 at which the regional grid (tile 7) ends. ``GFDLgrid_USE_NUM_CELLS_IN_FILENAMES``: (Default: "") - Flag that determines the file naming convention to use for grid, orography, and surface climatology files (or, if using pregenerated files, the naming convention that was used to name these files). These files usually start with the string ``"C${RES}_"``, where ``RES`` is an integer. In the global forecast model, ``RES`` is the number of points in each of the two horizontal directions (x and y) on each tile of the global grid (defined here as ``GFDLgrid_NUM_CELLS``). If this flag is set to true, ``RES`` will be set to ``GFDLgrid_NUM_CELLS`` just as in the global forecast model. If it is set to false, we calculate (in the grid generation task) an "equivalent global uniform cubed-sphere resolution" --- call it ``RES_EQUIV`` --- and then set ``RES`` equal to it. 
``RES_EQUIV`` is the number of grid points in each of the x and y directions on each tile that a global UNIFORM (i.e., stretch factor of 1) cubed-sphere grid would need to have in order to have the same average grid size as the regional grid. This is a more useful indicator of the grid size because it takes into account the effects of ``GFDLgrid_NUM_CELLS``, ``GFDLgrid_STRETCH_FAC``, and ``GFDLgrid_REFINE_RATIO`` in determining the regional grid's typical grid size, whereas simply setting ``RES`` to ``GFDLgrid_NUM_CELLS`` doesn't take into account the effects of ``GFDLgrid_STRETCH_FAC`` and ``GFDLgrid_REFINE_RATIO`` on the regional grid's resolution. Nevertheless, some users still prefer to use ``GFDLgrid_NUM_CELLS`` in the file names, so we allow for that here by setting this flag to true. + Flag that determines the file naming convention to use for grid, orography, and surface climatology files (or, if using pregenerated files, the naming convention that was used to name these files). These files usually start with the string ``"C${RES}_"``, where ``RES`` is an integer. In the global forecast model, ``RES`` is the number of points in each of the two horizontal directions (x and y) on each tile of the global grid (defined here as ``GFDLgrid_NUM_CELLS``). If this flag is set to true, ``RES`` will be set to ``GFDLgrid_NUM_CELLS`` just as in the global forecast model. If it is set to false, we calculate (in the grid generation task) an "equivalent global uniform cubed-sphere resolution" --- call it ``RES_EQUIV`` --- and then set ``RES`` equal to it. ``RES_EQUIV`` is the number of grid points in each of the x and y directions on each tile that a global UNIFORM (i.e., stretch factor of 1) cubed-sphere grid would need to have in order to have the same average grid size as the regional grid. This is a more useful indicator of the grid size because it takes into account the effects of ``GFDLgrid_NUM_CELLS``, ``GFDLgrid_STRETCH_FAC``, and ``GFDLgrid_REFINE_RATIO`` in determining the regional grid's typical grid size, whereas simply setting ``RES`` to ``GFDLgrid_NUM_CELLS`` doesn't take into account the effects of ``GFDLgrid_STRETCH_FAC`` and ``GFDLgrid_REFINE_RATIO`` on the regional grid's resolution. Nevertheless, some users still prefer to use ``GFDLgrid_NUM_CELLS`` in the file names, so we allow for that here by setting this flag to true. .. _make-orog: @@ -518,17 +878,15 @@ MAKE_OROG Configuration Parameters Non-default parameters for the ``make_orog`` task are set in the ``task_make_orog:`` section of the ``config.yaml`` file. ``KMP_AFFINITY_MAKE_OROG``: (Default: "disabled") - Intel Thread Affinity Interface for the ``make_orog`` task. See :ref:`this note ` for more information on thread affinity. Settings for the ``make_orog`` task is disabled because this task does not use parallelized code. + Intel Thread Affinity Interface for the ``make_orog`` task. See :ref:`this note ` for more information on thread affinity. Settings for the ``make_orog`` task are disabled because this task does not use parallelized code. ``OMP_NUM_THREADS_MAKE_OROG``: (Default: 6) The number of OpenMP threads to use for parallel regions. - - ``OMP_STACKSIZE_MAKE_OROG``: (Default: "2048m") Controls the size of the stack for threads created by the OpenMP implementation. 
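For instance, to change the threading defaults shown above, users could add the following to the ``task_make_orog:`` section of ``config.yaml`` (the values are illustrative):

.. code-block:: console

   task_make_orog:
     OMP_NUM_THREADS_MAKE_OROG: 6
     OMP_STACKSIZE_MAKE_OROG: "2048m"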
-``OROG_DIR``: (Default: "") +``OROG_DIR``: (Default: ``'{{ [workflow.EXPTDIR, "orog"]|path_join if rocoto.tasks.get("task_make_orog") else "" }}'``) The directory containing pre-generated orography files to use when the ``MAKE_OROG`` task is not meant to run. .. _make-sfc-climo: @@ -547,7 +905,7 @@ Non-default parameters for the ``make_sfc_climo`` task are set in the ``task_mak ``OMP_STACKSIZE_MAKE_SFC_CLIMO``: (Default: "1024m") Controls the size of the stack for threads created by the OpenMP implementation. -``SFC_CLIMO_DIR``: (Default: "") +``SFC_CLIMO_DIR``: (Default: ``'{{ [workflow.EXPTDIR, "sfc_climo"]|path_join if rocoto.tasks.get("task_make_sfc_climo") else "" }}'``) The directory containing pre-generated surface climatology files to use when the ``MAKE_SFC_CLIMO`` task is not meant to run. .. _task_get_extrn_ics: @@ -576,18 +934,18 @@ For each workflow task, certain parameter values must be passed to the job sched File and Directory Parameters -------------------------------- -``USE_USER_STAGED_EXTRN_FILES``: (Default: false) - Flag that determines whether the workflow will look for the external model files needed for generating :term:`ICs` in user-specified directories (rather than fetching them from mass storage like NOAA :term:`HPSS`). Valid values: ``True`` | ``False`` - -``EXTRN_MDL_SOURCE_BASEDIR_ICS``: (Default: "") - Directory containing external model files for generating ICs. If ``USE_USER_STAGED_EXTRN_FILES`` is set to true, the workflow looks within this directory for a subdirectory named "YYYYMMDDHH", which contains the external model files specified by the array ``EXTRN_MDL_FILES_ICS``. This "YYYYMMDDHH" subdirectory corresponds to the start date and cycle hour of the forecast (see :ref:`above `). These files will be used to generate the :term:`ICs` on the native FV3-LAM grid. This variable is not used if ``USE_USER_STAGED_EXTRN_FILES`` is set to false. - ``EXTRN_MDL_SYSBASEDIR_ICS``: (Default: '') A known location of a real data stream on a given platform. This is typically a real-time data stream as on Hera, Jet, or WCOSS. External model files for generating :term:`ICs` on the native grid should be accessible via this data stream. The way the full path containing these files is constructed depends on the user-specified external model for ICs (defined above in :numref:`Section %s ` ``EXTRN_MDL_NAME_ICS``). .. note:: This variable must be defined as a null string in ``config_defaults.yaml`` so that if it is specified by the user in the experiment configuration file (``config.yaml``), it remains set to those values, and if not, it gets set to machine-dependent values. +``USE_USER_STAGED_EXTRN_FILES``: (Default: false) + Flag that determines whether the workflow will look for the external model files needed for generating :term:`ICs` in user-specified directories (rather than fetching them from mass storage like NOAA :term:`HPSS`). Valid values: ``True`` | ``False`` + +``EXTRN_MDL_SOURCE_BASEDIR_ICS``: (Default: "") + Directory containing external model files for generating ICs. If ``USE_USER_STAGED_EXTRN_FILES`` is set to true, the workflow looks within this directory for a subdirectory named "YYYYMMDDHH", which contains the external model files specified by the array ``EXTRN_MDL_FILES_ICS``. This "YYYYMMDDHH" subdirectory corresponds to the start date and cycle hour of the forecast (see :ref:`above `). These files will be used to generate the :term:`ICs` on the native FV3-LAM grid. This variable is not used if ``USE_USER_STAGED_EXTRN_FILES`` is set to false. 
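As an example, to have the workflow read user-staged external model files for ICs rather than fetching them from mass storage, a configuration might include (the path is hypothetical):

.. code-block:: console

   task_get_extrn_ics:
     USE_USER_STAGED_EXTRN_FILES: true
     EXTRN_MDL_SOURCE_BASEDIR_ICS: /path/to/staged/extrn_files

With these settings, files for a forecast cycle starting at 00z on June 15, 2019 would be expected under ``/path/to/staged/extrn_files/2019061500``.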
+ ``EXTRN_MDL_FILES_ICS``: (Default: "") Array containing templates of the file names to search for in the ``EXTRN_MDL_SOURCE_BASEDIR_ICS`` directory. This variable is not used if ``USE_USER_STAGED_EXTRN_FILES`` is set to false. A single template should be used for each model file type that is used. Users may use any of the Python-style templates allowed in the ``ush/retrieve_data.py`` script. To see the full list of supported templates, run that script with the ``-h`` option. @@ -604,20 +962,6 @@ File and Directory Parameters EXTRN_MDL_FILES_ICS=[ gfs.t{hh}z.pgrb2.0p25.f{fcst_hr:03d} ] -``EXTRN_MDL_DATA_STORES``: (Default: "") - A list of data stores where the scripts should look to find external model data. The list is in priority order. If disk information is provided via ``USE_USER_STAGED_EXTRN_FILES`` or a known location on the platform, the disk location will receive highest priority. Valid values: ``disk`` | ``hpss`` | ``aws`` | ``nomads`` - -NOMADS Parameters ---------------------- - -Set parameters associated with NOMADS online data. - -``NOMADS``: (Default: false) - Flag controlling whether to use NOMADS online data. Valid values: ``True`` | ``False`` - -``NOMADS_file_type``: (Default: "nemsio") - Flag controlling the format of the data. Valid values: ``"GRIB2"`` | ``"grib2"`` | ``"NEMSIO"`` | ``"nemsio"`` - .. _task_get_extrn_lbcs: GET_EXTRN_LBCS Configuration Parameters @@ -635,19 +979,24 @@ For each workflow task, certain parameter values must be passed to the job sched ``EXTRN_MDL_NAME_LBCS``: (Default: "FV3GFS") The name of the external model that will provide fields from which lateral boundary condition (LBC) files (except for the 0-th hour LBC file) will be generated for input into the forecast model. Valid values: ``"GSMGFS"`` | ``"FV3GFS"`` | ``"GEFS"`` | ``"GDAS"`` | ``"RAP"`` | ``"HRRR"`` | ``"NAM"`` -``LBC_SPEC_INTVL_HRS``: (Default: "6") +``LBC_SPEC_INTVL_HRS``: (Default: 6) The interval (in integer hours) at which LBC files will be generated. This is also referred to as the *boundary update interval*. Note that the model selected in ``EXTRN_MDL_NAME_LBCS`` must have data available at a frequency greater than or equal to that implied by ``LBC_SPEC_INTVL_HRS``. For example, if ``LBC_SPEC_INTVL_HRS`` is set to "6", then the model must have data available at least every 6 hours. It is up to the user to ensure that this is the case. -``EXTRN_MDL_LBCS_OFFSET_HRS``: (Default: "") - Users may wish to use lateral boundary conditions from a forecast that was started earlier than the start of the forecast configured here. This variable indicates how many hours earlier the external model started than the FV3 forecast configured here. For example, if the forecast should use lateral boundary conditions from the GFS started 6 hours earlier, then ``EXTRN_MDL_LBCS_OFFSET_HRS: "6"``. Note: the default value is model-dependent and is set in ``ush/set_extrn_mdl_params.py``. +``EXTRN_MDL_LBCS_OFFSET_HRS``: (Default: ``'{{ 3 if EXTRN_MDL_NAME_LBCS == "RAP" else 0 }}'``) + Users may wish to use lateral boundary conditions from a forecast that was started earlier than the start of the forecast configured here. This variable indicates how many hours earlier the external model started than the forecast configured here. For example, if the forecast should use lateral boundary conditions from a GFS forecast started six hours earlier, then ``EXTRN_MDL_LBCS_OFFSET_HRS: 6``. Note: the default value is model-dependent and is set in ``ush/set_extrn_mdl_params.py``. 
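To make the offset concrete, a hypothetical configuration using 3-hourly RAP data for lateral boundaries might look like the following; with an offset of 3, LBCs for a forecast cycle starting at 12z would come from the RAP run initialized at 09z:

.. code-block:: console

   task_get_extrn_lbcs:
     EXTRN_MDL_NAME_LBCS: RAP
     LBC_SPEC_INTVL_HRS: 3
     EXTRN_MDL_LBCS_OFFSET_HRS: 3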
``FV3GFS_FILE_FMT_LBCS``: (Default: "nemsio") If using the FV3GFS model as the source of the :term:`LBCs` (i.e., if ``EXTRN_MDL_NAME_LBCS: "FV3GFS"``), this variable specifies the format of the model files to use when generating the LBCs. Valid values: ``"nemsio"`` | ``"grib2"`` | ``"netcdf"`` - File and Directory Parameters -------------------------------- +``EXTRN_MDL_SYSBASEDIR_LBCS``: (Default: '') + Same as ``EXTRN_MDL_SYSBASEDIR_ICS`` but for :term:`LBCs`. A known location of a real data stream on a given platform. This is typically a real-time data stream as on Hera, Jet, or WCOSS. External model files for generating :term:`LBCs` on the native grid should be accessible via this data stream. The way the full path containing these files is constructed depends on the user-specified external model for LBCs (defined above in :numref:`Section %s ` ``EXTRN_MDL_NAME_LBCS``). + + .. note:: + This variable must be defined as a null string in ``config_defaults.yaml`` so that if it is specified by the user in the experiment configuration file (``config.yaml``), it remains set to those values, and if not, it gets set to machine-dependent values. + ``USE_USER_STAGED_EXTRN_FILES``: (Default: false) Analogous to ``USE_USER_STAGED_EXTRN_FILES`` in :term:`ICs` but for :term:`LBCs`. Flag that determines whether the workflow will look for the external model files needed for generating :term:`LBCs` in user-specified directories (rather than fetching them from mass storage like NOAA :term:`HPSS`). Valid values: ``True`` | ``False`` @@ -655,28 +1004,8 @@ File and Directory Parameters Analogous to ``EXTRN_MDL_SOURCE_BASEDIR_ICS`` but for :term:`LBCs` instead of :term:`ICs`. Directory containing external model files for generating LBCs. If ``USE_USER_STAGED_EXTRN_FILES`` is set to true, the workflow looks within this directory for a subdirectory named "YYYYMMDDHH", which contains the external model files specified by the array ``EXTRN_MDL_FILES_LBCS``. This "YYYYMMDDHH" subdirectory corresponds to the start date and cycle hour of the forecast (see :ref:`above `). These files will be used to generate the :term:`LBCs` on the native FV3-LAM grid. This variable is not used if ``USE_USER_STAGED_EXTRN_FILES`` is set to false. -``EXTRN_MDL_SYSBASEDIR_LBCS``: (Default: '') - Same as ``EXTRN_MDL_SYSBASEDIR_ICS`` but for :term:`LBCs`. A known location of a real data stream on a given platform. This is typically a real-time data stream as on Hera, Jet, or WCOSS. External model files for generating :term:`LBCs` on the native grid should be accessible via this data stream. The way the full path containing these files is constructed depends on the user-specified external model for LBCs (defined above in :numref:`Section %s ` ``EXTRN_MDL_NAME_LBCS`` above). - - .. note:: - This variable must be defined as a null string in ``config_defaults.yaml`` so that if it is specified by the user in the experiment configuration file (``config.yaml``), it remains set to those values, and if not, it gets set to machine-dependent values. - ``EXTRN_MDL_FILES_LBCS``: (Default: "") Analogous to ``EXTRN_MDL_FILES_ICS`` but for :term:`LBCs` instead of :term:`ICs`. Array containing templates of the file names to search for in the ``EXTRN_MDL_SOURCE_BASEDIR_LBCS`` directory. This variable is not used if ``USE_USER_STAGED_EXTRN_FILES`` is set to false. A single template should be used for each model file type that is used. Users may use any of the Python-style templates allowed in the ``ush/retrieve_data.py`` script.
To see the full list of supported templates, run that script with the ``-h`` option. For examples, see the ``EXTRN_MDL_FILES_ICS`` variable above. - -``EXTRN_MDL_DATA_STORES``: (Default: "") - Analogous to ``EXTRN_MDL_DATA_STORES`` in :term:`ICs` but for :term:`LBCs`. A list of data stores where the scripts should look to find external model data. The list is in priority order. If disk information is provided via ``USE_USER_STAGED_EXTRN_FILES`` or a known location on the platform, the disk location will receive highest priority. Valid values: ``disk`` | ``hpss`` | ``aws`` | ``nomads`` - -NOMADS Parameters ---------------------- - -Set parameters associated with NOMADS online data. Analogus to :term:`ICs` NOMADS Parameters. - -``NOMADS``: (Default: false) - Flag controlling whether to use NOMADS online data. - -``NOMADS_file_type``: (Default: "nemsio") - Flag controlling the format of the data. Valid values: ``"GRIB2"`` | ``"grib2"`` | ``"NEMSIO"`` | ``"nemsio"`` MAKE_ICS Configuration Parameters ====================================== @@ -697,13 +1026,13 @@ For each workflow task, certain parameter values must be passed to the job sched ``OMP_STACKSIZE_MAKE_ICS``: (Default: "1024m") Controls the size of the stack for threads created by the OpenMP implementation. -FVCOM Parameter +FVCOM Parameters ------------------- ``USE_FVCOM``: (Default: false) Flag that specifies whether to update surface conditions in FV3-:term:`LAM` with fields generated from the Finite Volume Community Ocean Model (:term:`FVCOM`). If set to true, lake/sea surface temperatures, ice surface temperatures, and ice placement will be overwritten using data provided by FVCOM. Setting ``USE_FVCOM`` to true causes the executable ``process_FVCOM.exe`` in the ``MAKE_ICS`` task to run. This, in turn, modifies the file ``sfc_data.nc`` generated by ``chgres_cube`` during the ``make_ics`` task. Note that the FVCOM data must already be interpolated to the desired FV3-LAM grid. Valid values: ``True`` | ``False`` ``FVCOM_WCSTART``: (Default: "cold") - Define if this is a "warm" start or a "cold" start. Setting this to "warm" will read in ``sfc_data.nc`` generated in a RESTART directory. Setting this to "cold" will read in the ``sfc_data.nc`` generated from ``chgres_cube`` in the ``make_ics`` portion of the workflow. Valid values: ``"cold"`` | ``"COLD"`` | ``"warm"`` | ``"WARM"`` + Define whether this is a "warm" start or a "cold" start. Setting this to "warm" will read in the ``sfc_data.nc`` file generated in a RESTART directory. Setting this to "cold" will read in the ``sfc_data.nc`` file generated from ``chgres_cube`` in the ``make_ics`` portion of the workflow. Valid values: ``"cold"`` | ``"COLD"`` | ``"warm"`` | ``"WARM"`` ``FVCOM_DIR``: (Default: "") User-defined directory where the ``fvcom.nc`` file containing :term:`FVCOM` data already interpolated to the FV3-LAM native grid is located. The file in this directory must be named ``fvcom.nc``. @@ -711,6 +1040,11 @@ FVCOM Parameter ``FVCOM_FILE``: (Default: "fvcom.nc") Name of the file located in ``FVCOM_DIR`` that has :term:`FVCOM` data interpolated to the FV3-LAM grid. This file will be copied later to a new location, and the name will be changed to ``fvcom.nc`` if a name other than ``fvcom.nc`` is selected. +Vertical Coordinate File Parameter +------------------------------------ + +``VCOORD_FILE``: (Default: ``"{{ workflow.FIXam }}/global_hyblev.l65.txt"``) + Full path to the file used to set the vertical coordinates in FV3. 
This file should be the same in both ``make_ics`` and ``make_lbcs``. MAKE_LBCS Configuration Parameters ====================================== @@ -726,6 +1060,12 @@ Non-default parameters for the ``make_lbcs`` task are set in the ``task_make_lbc ``OMP_STACKSIZE_MAKE_LBCS``: (Default: "1024m") Controls the size of the stack for threads created by the OpenMP implementation. +Vertical Coordinate File Parameter +------------------------------------ + +``VCOORD_FILE``: (Default: ``"{{ workflow.FIXam }}/global_hyblev.l65.txt"``) + Full path to the file used to set the vertical coordinates in FV3. This file should be the same in both ``make_ics`` and ``make_lbcs``. + .. _FcstConfigParams: FORECAST Configuration Parameters @@ -738,6 +1078,22 @@ Basic Task Parameters For each workflow task, certain parameter values must be passed to the job scheduler (e.g., Slurm), which submits a job for the task. +``NNODES_RUN_FCST``: (Default: ``'{{ (PE_MEMBER01 + PPN_RUN_FCST - 1) // PPN_RUN_FCST }}'``) + The number of nodes to request from the job scheduler for the forecast task. + +``PPN_RUN_FCST``: (Default: ``'{{ platform.NCORES_PER_NODE // OMP_NUM_THREADS_RUN_FCST }}'``) + Processes per node for the forecast task. + +``FV3_EXEC_FP``: (Default: ``'{{ [user.EXECdir, workflow.FV3_EXEC_FN]|path_join }}'``) + Full path to the forecast model executable. + +``IO_LAYOUT_Y``: (Default: 1) + Specifies how many MPI ranks to use in the Y direction for input/output (I/O). + + .. note:: + + ``IO_LAYOUT_X`` does not explicitly exist because its value is assumed to be 1. + ``KMP_AFFINITY_RUN_FCST``: (Default: "scatter") Intel Thread Affinity Interface for the ``run_fcst`` task. @@ -751,10 +1107,10 @@ For each workflow task, certain parameter values must be passed to the job sched For more information, see the `Intel Development Reference Guide `__. -``OMP_NUM_THREADS_RUN_FCST``: (Default: 2) - The number of OpenMP threads to use for parallel regions. Corresponds to the ``atmos_nthreads`` value in ``model_configure``. +``OMP_NUM_THREADS_RUN_FCST``: (Default: 1) + The number of OpenMP threads to use for parallel regions. Corresponds to the ``ATM_omp_num_threads`` value in ``nems.configure``. -``OMP_STACKSIZE_RUN_FCST``: (Default: "1024m") +``OMP_STACKSIZE_RUN_FCST``: (Default: "512m") Controls the size of the stack for threads created by the OpenMP implementation. .. _ModelConfigParams: @@ -765,31 +1121,34 @@ Model Configuration Parameters These parameters set values in the Weather Model's ``model_configure`` file. ``DT_ATMOS``: (Default: "") - Time step for the outermost atmospheric model loop in seconds. This corresponds to the frequency at which the physics routines and the top level dynamics routine are called. (Note that one call to the top-level dynamics routine results in multiple calls to the horizontal dynamics, :term:`tracer` transport, and vertical dynamics routines; see the `FV3 dycore scientific documentation `__ for details.) Must be set. Takes an integer value. In the SRW App, a default value for ``DT_ATMOS`` appears in the ``set_predef_grid_params.yaml`` script, but a different value can be set in ``config.yaml``. In general, the smaller the grid cell size is, the smaller this value needs to be in order to avoid numerical instabilities during the forecast. + The main forecast model integration time step (positive integer value). This is the time step for the outermost atmospheric model loop in seconds. 
It corresponds to the frequency at which the physics routines and the top level dynamics routine are called. (Note that one call to the top-level dynamics routine results in multiple calls to the horizontal dynamics, :term:`tracer` transport, and vertical dynamics routines; see the `FV3 dycore scientific documentation `__ for details.) In the SRW App, a default value for ``DT_ATMOS`` appears in the ``set_predef_grid_params.yaml`` script, but a different value can be set in ``config.yaml``. In general, the smaller the grid cell size is, the smaller this value needs to be in order to avoid numerical instabilities during the forecast. + +``FHROT``: (Default: 0) + Forecast hour at restart. ``RESTART_INTERVAL``: (Default: 0) - Frequency of the output restart files in hours. Using the default interval (0), restart files are produced at the end of a forecast run. When ``RESTART_INTERVAL: 1``, restart files are produced every hour with the prefix "YYYYMMDD.HHmmSS." in the ``RESTART`` directory. + Frequency of the output restart files in hours. + + * Using the default interval (0), restart files are produced at the end of a forecast run. + * When ``RESTART_INTERVAL: 1``, restart files are produced every hour with the prefix "YYYYMMDD.HHmmSS." in the ``RESTART`` directory. + * When ``RESTART_INTERVAL: 1 2 5``, restart files are produced only at forecast hours 1, 2, and 5. .. _InlinePost: ``WRITE_DOPOST``: (Default: false) - Flag that determines whether to use the inline post option. The default ``WRITE_DOPOST: false`` does not use the inline post functionality, and the ``run_post`` tasks are called from outside of the Weather Model. If ``WRITE_DOPOST: true``, the ``WRITE_DOPOST`` flag in the ``model_configure`` file will be set to true, and the post-processing (:term:`UPP`) tasks will be called from within the Weather Model. This means that the post-processed files (in :term:`grib2` format) are output by the Weather Model at the same time that it outputs the ``dynf###.nc`` and ``phyf###.nc`` files. Setting ``WRITE_DOPOST: true`` turns off the separate ``run_post`` task in ``setup.py`` to avoid unnecessary computations. Valid values: ``True`` | ``False`` + Flag that determines whether to use the inline post option, which calls the Unified Post Processor (:term:`UPP`) from within the UFS Weather Model. The default ``WRITE_DOPOST: false`` does not use the inline post functionality, and the ``run_post`` tasks are called from outside of the UFS Weather Model. If ``WRITE_DOPOST: true``, the ``WRITE_DOPOST`` flag in the ``model_configure`` file will be set to true, and the post-processing (:term:`UPP`) tasks will be called from within the Weather Model. This means that the post-processed files (in :term:`grib2` format) are output by the Weather Model at the same time that it outputs the ``dynf###.nc`` and ``phyf###.nc`` files. Setting ``WRITE_DOPOST: true`` turns off the separate ``run_post`` task in ``setup.py`` to avoid unnecessary computations. Valid values: ``True`` | ``False`` Computational Parameters ---------------------------- -``LAYOUT_X, LAYOUT_Y``: (Default: "") - The number of :term:`MPI` tasks (processes) to use in the two horizontal directions (x and y) of the regional grid when running the forecast model. - -``BLOCKSIZE``: (Default: "") - The amount of data that is passed into the cache at a time. - -.. note:: +``LAYOUT_X``: (Default: ``'{{ LAYOUT_X }}'``) + The number of :term:`MPI` tasks (processes) to use in the x direction of the regional grid when running the forecast model. 
- In ``config_defaults.yaml`` the computational parameters are set to null strings so that: +``LAYOUT_Y``: (Default: ``'{{ LAYOUT_Y }}'``) + The number of :term:`MPI` tasks (processes) to use in the y direction of the regional grid when running the forecast model. - #. If the experiment is using a predefined grid and the user sets the parameter in the user-specified experiment configuration file (i.e., ``config.yaml``), that value will be used in the forecast(s). Otherwise, the default value for that predefined grid will be used. - #. If the experiment is *not* using a predefined grid (i.e., it is using a custom grid whose parameters are specified in the experiment configuration file), then the user must specify a value for the parameter in that configuration file. Otherwise, the parameter will remain set to a null string, and the experiment generation will fail because the generation scripts check to ensure that all the parameters defined in this section are set to non-empty strings before creating the experiment directory. +``BLOCKSIZE``: (Default: ``'{{ BLOCKSIZE }}'``) + The amount of data that is passed into the cache at a time. .. _WriteComp: @@ -797,22 +1156,29 @@ Write-Component (Quilting) Parameters ----------------------------------------- .. note:: - The :term:`UPP` (called by the ``RUN_POST`` task) cannot process output on the native grid types ("GFDLgrid" and "ESGgrid"), so output fields are interpolated to a **write component grid** before writing them to an output file. The output files written by the UFS Weather Model use an Earth System Modeling Framework (:term:`ESMF`) component, referred to as the **write component**. This model component is configured with settings in the ``model_configure`` file, as described in `Section 4.2.3 `__ of the UFS Weather Model documentation. + The :term:`UPP` (called by the ``run_post`` task) cannot process output on the native grid types ("GFDLgrid" and "ESGgrid"), so output fields are interpolated to a **write component grid** before writing them to an output file. The output files written by the UFS Weather Model use an Earth System Modeling Framework (:term:`ESMF`) component, referred to as the **write component**. This model component is configured with settings in the ``model_configure`` file, as described in `Section 4.2.3 `__ of the UFS Weather Model documentation. ``QUILTING``: (Default: true) .. attention:: - The regional grid requires the use of the write component, so users generally should not need to change the default value for ``QUILTING``. + The regional grid requires the use of the write component, so users generally should not change the default value for ``QUILTING``. - Flag that determines whether to use the write component for writing forecast output files to disk. If set to true, the forecast model will output files named ``dynf$HHH.nc`` and ``phyf$HHH.nc`` (where ``HHH`` is the 3-digit forecast hour) containing dynamics and physics fields, respectively, on the write-component grid. For example, the output files for the 3rd hour of the forecast would be ``dynf$003.nc`` and ``phyf$003.nc``. (The regridding from the native FV3-LAM grid to the write-component grid is done by the forecast model.) If ``QUILTING`` is set to false, then the output file names are ``fv3_history.nc`` and ``fv3_history2d.nc``, and they contain fields on the native grid. Although the UFS Weather Model can run without quilting, the regional grid requires the use of the write component. 
Therefore, QUILTING should be set to true when running the SRW App. If ``QUILTING`` is set to false, the ``RUN_POST`` (meta)task cannot run because the :term:`UPP` code called by this task cannot process fields on the native grid. In that case, the ``RUN_POST`` (meta)task will be automatically removed from the Rocoto workflow XML. The :ref:`INLINE POST ` option also requires ``QUILTING`` to be set to true in the SRW App. Valid values: ``True`` | ``False`` + Flag that determines whether to use the write component for writing forecast output files to disk. When set to true, the forecast model will output files named ``dynf$HHH.nc`` and ``phyf$HHH.nc`` (where ``HHH`` is the 3-digit forecast hour) containing dynamics and physics fields, respectively, on the write-component grid. For example, the output files for the 3rd hour of the forecast would be ``dynf$003.nc`` and ``phyf$003.nc``. (The regridding from the native FV3-LAM grid to the write-component grid is done by the forecast model.) If ``QUILTING`` is set to false, then the output file names are ``fv3_history.nc`` and ``fv3_history2d.nc``, and they contain fields on the native grid. Although the UFS Weather Model can run without quilting, the regional grid requires the use of the write component. Therefore, ``QUILTING`` should be set to true when running the SRW App. If ``QUILTING`` is set to false, the ``run_post`` (meta)task cannot run because the :term:`UPP` code called by this task cannot process fields on the native grid. In that case, the ``run_post`` (meta)task will be automatically removed from the Rocoto workflow XML. The :ref:`INLINE POST ` option also requires ``QUILTING`` to be set to true in the SRW App. Valid values: ``True`` | ``False`` ``PRINT_ESMF``: (Default: false) Flag that determines whether to output extra (debugging) information from :term:`ESMF` routines. Note that the write component uses ESMF library routines to interpolate from the native forecast model grid to the user-specified output grid (which is defined in the model configuration file ``model_configure`` in the forecast run directory). Valid values: ``True`` | ``False`` +``PE_MEMBER01``: (Default: ``'{{ LAYOUT_Y * LAYOUT_X + WRTCMP_write_groups * WRTCMP_write_tasks_per_group if QUILTING else LAYOUT_Y * LAYOUT_X}}'``) + The number of MPI processes required by the forecast. When ``QUILTING`` is true, it is calculated as: + + .. math:: + + LAYOUT\_X \times LAYOUT\_Y + WRTCMP\_write\_groups \times WRTCMP\_write\_tasks\_per\_group + -``WRTCMP_write_groups``: (Default: 1) +``WRTCMP_write_groups``: (Default: "") The number of write groups (i.e., groups of :term:`MPI` tasks) to use in the write component. Each write group will write to one set of output files (a ``dynf${fhr}.nc`` and a ``phyf${fhr}.nc`` file, where ``${fhr}`` is the forecast hour). Each write group contains ``WRTCMP_write_tasks_per_group`` tasks. Usually, one write group is sufficient. This may need to be increased if the forecast is proceeding so quickly that a single write group cannot complete writing to its set of files before there is a need/request to start writing the next set of files at the next output time. -``WRTCMP_write_tasks_per_group``: (Default: 20) +``WRTCMP_write_tasks_per_group``: (Default: "") The number of MPI tasks to allocate for each write group. ``WRTCMP_output_grid``: (Default: "''") @@ -864,82 +1230,17 @@ Write-Component (Quilting) Parameters ``WRTCMP_dy``: (Default: "") Grid cell size (in meters) along the y-axis of the Lambert conformal projection. -.. 
_PredefGrid: - -Predefined Grid Parameters ------------------------------- - -``PREDEF_GRID_NAME``: (Default: "") - This parameter indicates which (if any) predefined regional grid to use for the experiment. Setting ``PREDEF_GRID_NAME`` provides a convenient method of specifying a commonly used set of grid-dependent parameters. The predefined grid settings can be viewed in the script ``ush/set_predef_grid_params.yaml``. - - **Currently supported options:** - - | ``"RRFS_CONUS_25km"`` - | ``"RRFS_CONUS_13km"`` - | ``"RRFS_CONUS_3km"`` - | ``"SUBCONUS_Ind_3km"`` - - **Other valid values include:** - - | ``"CONUS_25km_GFDLgrid"`` - | ``"CONUS_3km_GFDLgrid"`` - | ``"EMC_AK"`` - | ``"EMC_HI"`` - | ``"EMC_PR"`` - | ``"EMC_GU"`` - | ``"GSL_HAFSV0.A_25km"`` - | ``"GSL_HAFSV0.A_13km"`` - | ``"GSL_HAFSV0.A_3km"`` - | ``"GSD_HRRR_AK_50km"`` - | ``"RRFS_AK_13km"`` - | ``"RRFS_AK_3km"`` - | ``"RRFS_CONUScompact_25km"`` - | ``"RRFS_CONUScompact_13km"`` - | ``"RRFS_CONUScompact_3km"`` - | ``"RRFS_NA_13km"`` - | ``"RRFS_NA_3km"`` - | ``"RRFS_SUBCONUS_3km"`` - | ``"WoFS_3km"`` - -.. note:: - - * If ``PREDEF_GRID_NAME`` is set to a valid predefined grid name, the grid generation method, the (native) grid parameters, and the write component grid parameters are set to predefined values for the specified grid, overwriting any settings of these parameters in the user-specified experiment configuration file (``config.yaml``). In addition, if the time step ``DT_ATMOS`` and the computational parameters (``LAYOUT_X``, ``LAYOUT_Y``, and ``BLOCKSIZE``) are not specified in that configuration file, they are also set to predefined values for the specified grid. - - * If ``PREDEF_GRID_NAME`` is set to an empty string, it implies that the user will provide the native grid parameters in the user-specified experiment configuration file (``config.yaml``). In this case, the grid generation method, the native grid parameters, the write component grid parameters, the main time step (``DT_ATMOS``), and the computational parameters (``LAYOUT_X``, ``LAYOUT_Y``, and ``BLOCKSIZE``) must be set in the configuration file. Otherwise, the values of the parameters in the default experiment configuration file (``config_defaults.yaml``) will be used. - Aerosol Climatology Parameter --------------------------------- -``USE_MERRA_CLIMO``: (Default: false) - Flag that determines whether :term:`MERRA2` aerosol climatology data and lookup tables for optics properties are obtained. Valid values: ``True`` | ``False`` - - .. COMMENT: When would it be appropriate to obtain these files? - -Fixed File Parameters -------------------------- +``USE_MERRA_CLIMO``: (Default: ``'{{ workflow.CCPP_PHYS_SUITE == "FV3_GFS_v15_thompson_mynn_lam3km" or workflow.CCPP_PHYS_SUITE == "FV3_GFS_v17_p8" }}'``) + Flag that determines whether :term:`MERRA2` aerosol climatology data and lookup tables for optics properties are obtained. This value should be set to false until MERRA2 climatology and Thompson microphysics are fully implemented in supported physics suites. Valid values: ``True`` | ``False`` -These parameters are associated with the fixed (i.e., static) files. On `Level 1 & 2 `__ systems, fixed files are pre-staged with paths defined in the ``setup.py`` script. Because the default values are platform-dependent, they are set to a null string in ``config_defaults.yaml``. Then these null values are overwritten in ``setup.py`` with machine-specific values or with a user-specified value from ``config.yaml``. 
- -``FIXgsm``: (Default: "") - System directory in which the majority of fixed (i.e., time-independent) files that are needed to run the FV3-LAM model are located. - -``FIXaer``: (Default: "") - System directory where :term:`MERRA2` aerosol climatology files are located. - -``FIXlut``: (Default: "") - System directory where the lookup tables for optics properties are located. - -``FIXshp``: (Default: "") - System directory where the graphics shapefiles are located. On Level 1 systems, these are set within the machine files. Users on other systems will need to provide the path to the directory that contains the *Natural Earth* shapefiles. - -``FIXorg``: (Default: "") - The location on disk of the static input files used by the ``make_orog`` task (i.e., ``orog.x`` and ``shave.x``). Can be the same as ``FIXgsm``. - -``FIXsfc``: (Default: "") - The location on disk of the static surface climatology input fields, used by ``sfc_climo_gen``. These files are only used if the ``MAKE_SFC_CLIMO`` task is meant to run. +Restart Parameters +-------------------- +``DO_FCST_RESTART``: (Default: false) + Flag to turn on/off restart capability of forecast task. -``SYMLINK_FIX_FILES``: (Default: true) - Flag that indicates whether to symlink or copy fix files to the experiment directory. RUN_POST Configuration Parameters ===================================== @@ -969,7 +1270,7 @@ Set parameters associated with subhourly forecast model output and post-processi Flag that indicates whether the forecast model will generate output files on a sub-hourly time interval (e.g., 10 minutes, 15 minutes). This will also cause the post-processor to process these sub-hourly files. If this variable is set to true, then ``DT_SUBHOURLY_POST_MNTS`` should be set to a valid value between 1 and 59. Valid values: ``True`` | ``False`` ``DT_SUB_HOURLY_POST_MNTS``: (Default: 0) - Time interval in minutes between the forecast model output files (only used if ``SUB_HOURLY_POST`` is set to true). If ``SUB_HOURLY_POST`` is set to true, this needs to be set to a valid two-digit integer between 1 and 59. Note that if ``SUB_HOURLY_POST`` is set to true but ``DT_SUB_HOURLY_POST_MNTS`` is set to 0, ``SUB_HOURLY_POST`` will get reset to false in the experiment generation scripts (there will be an informational message in the log file to emphasize this). Valid values: ``0`` | ``1`` | ``2`` | ``3`` | ``4`` | ``5`` | ``6`` | ``10`` | ``12`` | ``15`` | ``20`` | ``30`` + Time interval in minutes between the forecast model output files (only used if ``SUB_HOURLY_POST`` is set to true). If ``SUB_HOURLY_POST`` is set to true, this needs to be set to a valid integer between 1 and 59. Note that if ``SUB_HOURLY_POST`` is set to true, but ``DT_SUB_HOURLY_POST_MNTS`` is set to 0, ``SUB_HOURLY_POST`` will be reset to false in the experiment generation scripts (there will be an informational message in the log file to emphasize this). Valid values: ``0`` | ``1`` | ``2`` | ``3`` | ``4`` | ``5`` | ``6`` | ``10`` | ``12`` | ``15`` | ``20`` | ``30`` Customized Post Configuration Parameters -------------------------------------------- @@ -982,14 +1283,17 @@ Set parameters for customizing the :term:`UPP`. ``CUSTOM_POST_CONFIG_FP``: (Default: "") The full path to the custom post flat file, including filename, to be used for post-processing. This is only used if ``CUSTOM_POST_CONFIG_FILE`` is set to true. -``POST_OUTPUT_DOMAIN_NAME``: (Default: "") - Domain name (in lowercase) used to construct the names of the output files generated by the :term:`UPP`. 
If using a predefined grid, ``POST_OUTPUT_DOMAIN_NAME`` defaults to ``PREDEF_GRID_NAME``. If using a custom grid, ``POST_OUTPUT_DOMAIN_NAME`` must be specified by the user. The post output files are named as follows: +``POST_OUTPUT_DOMAIN_NAME``: (Default: ``'{{ workflow.PREDEF_GRID_NAME }}'``) + Domain name (in lowercase) used to construct the names of the output files generated by the :term:`UPP`. If using a predefined grid, ``POST_OUTPUT_DOMAIN_NAME`` defaults to ``PREDEF_GRID_NAME``. If using a custom grid, ``POST_OUTPUT_DOMAIN_NAME`` must be specified by the user. Note that this variable is first changed to lower case before being used to construct the file names. + + The post output files are named as follows: .. code-block:: console - $NET.tHHz.[var_name].f###.${POST_OUTPUT_DOMAIN_NAME}.grib2 - - Note that this variable is first changed to lower case before being used to construct the file names. + ${NET_default}.tHHz.[var_name].f###.${POST_OUTPUT_DOMAIN_NAME}.grib2 + +``TESTBED_FIELDS_FN``: (Default: "") + The file that lists grib2 fields to be extracted for testbed files. An empty string means no need to generate testbed files. RUN_PRDGEN Configuration Parameters ===================================== @@ -1010,23 +1314,31 @@ For each workflow task, certain parameter values must be passed to the job sched Controls the size of the stack for threads created by the OpenMP implementation. ``DO_PARALLEL_PRDGEN``: (Default: false) - Flag that determines whether to use CFP to run the product generation job in parallel. CFP is a utility that allows the user to launch a number of small jobs across nodes/cpus in one batch command. This option should be used with the ``RRFS_NA_3km`` grid and ``PPN_RUN_PRDGEN`` should be set to 22. + Flag that determines whether to use CFP to run the product generation job in parallel. CFP is a utility that allows the user to launch a number of small jobs across nodes/CPUs in one batch command. This option should be used with the ``RRFS_NA_3km`` grid, and ``PPN_RUN_PRDGEN`` should be set to 22. ``ADDNL_OUTPUT_GRIDS``: (Default: []) - Set additional output grids for wgrib2 remapping, if any. Space-separated list of strings, e.g., ( "130" "242" "clue"). Default is no additional grids. - -``TESTBED_FIELDS_FN``: (Default: "") - The file which lists grib2 fields to be extracted for testbed files. Empty string means no need to generate testbed files. + Set additional output grids for wgrib2 remapping, if any. Space-separated list of strings, e.g., ( "130" "242" "clue"). Default is no additional grids. Current options as of 23 Apr 2021: + + * "130" (CONUS 13.5 km) + * "200" (Puerto Rico 16 km) + * "221" (North America 32 km) + * "242" (Alaska 11.25 km) + * "243" (Pacific 0.4-deg) + * "clue" (NSSL/SPC 3-km CLUE grid for 2020/2021) + * "hrrr" (HRRR 3-km CONUS grid) + * "hrrre" (HRRRE 3-km CONUS grid) + * "rrfsak" (RRFS 3-km Alaska grid) + * "hrrrak" (HRRR 3-km Alaska grid) .. _PlotVars: PLOT_ALLVARS Configuration Parameters ======================================== -Typically, the following parameters must be set explicitly by the user in the configuration file (``config.yaml``) when executing the plotting tasks. +Typically, the following parameters must be set explicitly by the user in the ``task_plot_allvars:`` section of the configuration file (``config.yaml``) when executing the plotting tasks. ``COMOUT_REF``: (Default: "") - The directory where the GRIB2 files from post-processing are located. 
In *community* mode (i.e., when ``RUN_ENVIR: "community"``), this directory will correspond to the location in the experiment directory where the post-processed output can be found (e.g., ``$EXPTDIR/$DATE_FIRST_CYCL/postprd``). In *nco* mode, this directory should be set to the location of the ``COMOUT`` directory and end with ``$PDY/$cyc``. For more detail on *nco* standards and directory naming conventions, see `WCOSS Implementation Standards `__ (particularly pp. 4-5). + Path to the reference experiment's COMOUT directory. This is the directory where the GRIB2 files from post-processing are located. In *community* mode (i.e., when ``RUN_ENVIR: "community"``), this directory will correspond to the location in the experiment directory where the post-processed output can be found (e.g., ``$EXPTDIR/$DATE_FIRST_CYCL/postprd``). In *nco* mode, this directory should be set to the location of the ``COMOUT`` directory and end with ``$PDY/$cyc``. For more detail on *nco* standards and directory naming conventions, see `WCOSS Implementation Standards `__ (particularly pp. 4-5). ``PLOT_FCST_START``: (Default: 0) The starting forecast hour for the plotting task. For example, if a forecast starts at 18h/18z, this is considered the 0th forecast hour, so "starting forecast hour" should be 0, not 18. If a forecast starts at 18h/18z, but the user only wants plots from the 6th forecast hour on, "starting forecast hour" should be 6. @@ -1035,11 +1347,61 @@ Typically, the following parameters must be set explicitly by the user in the co Forecast hour increment for the plotting task. For example, if the user wants plots for each forecast hour, they should set ``PLOT_FCST_INC: 1``. If the user only wants plots for some of the output (e.g., every 6 hours), they should set ``PLOT_FCST_INC: 6``. ``PLOT_FCST_END``: (Default: "") - The last forecast hour for the plotting task. For example, if a forecast run for 24 hours, and the user wants plots for each available hour of forecast output, they should set ``PLOT_FCST_END: 24``. If the user only wants plots from the first 12 hours of the forecast, the "last forecast hour" should be 12. + The last forecast hour for the plotting task. By default, ``PLOT_FCST_END`` is set to the same value as ``FCST_LEN_HRS``. For example, if a forecast runs for 24 hours, and the user wants plots for each available hour of forecast output, they do not need to set ``PLOT_FCST_END``, and it will automatically be set to 24. If the user only wants plots from the first 12 hours of the forecast, the "last forecast hour" should be 12 (i.e., ``PLOT_FCST_END: 12``). ``PLOT_DOMAINS``: (Default: ["conus"]) Domains to plot. Currently supported options are ["conus"], ["regional"], or both (i.e., ["conus", "regional"]). +Air Quality Modeling (AQM) Parameters +====================================== + +This section includes parameters related to Air Quality Modeling (AQM) tasks. Note that AQM features are not currently supported for community use. + +NEXUS_EMISSION Configuration Parameters +------------------------------------------------- + +Non-default parameters for the ``nexus_emission_*`` tasks are set in the ``task_nexus_emission:`` section of the ``config.yaml`` file. + +``PPN_NEXUS_EMISSION``: (Default: ``'{{ platform.NCORES_PER_NODE // OMP_NUM_THREADS_NEXUS_EMISSION }}'``) + Processes per node for the ``nexus_emission_*`` tasks. + +``KMP_AFFINITY_NEXUS_EMISSION``: (Default: "scatter") + Intel Thread Affinity Interface for the ``nexus_emission_*`` tasks. 
See :ref:`this note ` for more information on thread affinity.
+
+``OMP_NUM_THREADS_NEXUS_EMISSION``: (Default: 2)
+ The number of OpenMP threads to use for parallel regions.
+
+``OMP_STACKSIZE_NEXUS_EMISSION``: (Default: "1024m")
+ Controls the size of the stack for threads created by the OpenMP implementation.
+
+BIAS_CORRECTION_O3 Configuration Parameters
+-------------------------------------------------
+
+Non-default parameters for the ``bias_correction_o3`` task are set in the ``task_bias_correction_o3:`` section of the ``config.yaml`` file.
+
+``KMP_AFFINITY_BIAS_CORRECTION_O3``: (Default: "scatter")
+ Intel Thread Affinity Interface for the ``bias_correction_o3`` task. See :ref:`this note ` for more information on thread affinity.
+
+``OMP_NUM_THREADS_BIAS_CORRECTION_O3``: (Default: 32)
+ The number of OpenMP threads to use for parallel regions.
+
+``OMP_STACKSIZE_BIAS_CORRECTION_O3``: (Default: "2056M")
+ Controls the size of the stack for threads created by the OpenMP implementation.
+
+BIAS_CORRECTION_PM25 Configuration Parameters
+-------------------------------------------------
+
+Non-default parameters for the ``bias_correction_pm25`` task are set in the ``task_bias_correction_pm25:`` section of the ``config.yaml`` file.
+
+``KMP_AFFINITY_BIAS_CORRECTION_PM25``: (Default: "scatter")
+ Intel Thread Affinity Interface for the ``bias_correction_pm25`` task. See :ref:`this note ` for more information on thread affinity.
+
+``OMP_NUM_THREADS_BIAS_CORRECTION_PM25``: (Default: 32)
+ The number of OpenMP threads to use for parallel regions.
+
+``OMP_STACKSIZE_BIAS_CORRECTION_PM25``: (Default: "2056M")
+ Controls the size of the stack for threads created by the OpenMP implementation.
+
 Global Configuration Parameters
 ===================================
@@ -1065,9 +1427,19 @@ Set parameters associated with running ensembles.
 ``DO_ENSEMBLE``: (Default: false)
 Flag that determines whether to run a set of ensemble forecasts (for each set of specified cycles). If this is set to true, ``NUM_ENS_MEMBERS`` forecasts are run for each cycle, each with a different set of stochastic seed values. When false, a single forecast is run for each cycle. Valid values: ``True`` | ``False``
 
-``NUM_ENS_MEMBERS``: (Default: 1)
+``NUM_ENS_MEMBERS``: (Default: 0)
 The number of ensemble members to run if ``DO_ENSEMBLE`` is set to true. This variable also controls the naming of the ensemble member directories. For example, if ``NUM_ENS_MEMBERS`` is set to 8, the member directories will be named *mem1, mem2, ..., mem8*. This variable is not used unless ``DO_ENSEMBLE`` is set to true.
 
+``ENSMEM_NAMES``: (Default: ``'{% for m in range(NUM_ENS_MEMBERS) %}{{ "mem%03d, " % m }}{% endfor %}'``)
+ A list of ensemble member names following the format ``mem001``, ``mem002``, etc.
+
+``FV3_NML_ENSMEM_FPS``: (Default: ``'{% for mem in ENSMEM_NAMES %}{{ [EXPTDIR, "%s_%s" % FV3_NML_FN, mem]|path_join }}{% endfor %}'``)
+ Paths to the corresponding ensemble member namelists in the experiment directory.
+
+``ENS_TIME_LAG_HRS``: (Default: ``'[ {% for m in range([1,NUM_ENS_MEMBERS]|max) %} 0, {% endfor %} ]'``)
+ Time lag (in hours) to use for each ensemble member. For a deterministic forecast, this is a one-element array. Default values of array elements are zero.
+
+
 .. _stochastic-physics:
 
 Stochastic Physics Parameters
 ==================================
 
 Set default ad-hoc stochastic physics options. 
For the most updated and detailed documentation of these parameters, see the `UFS Stochastic Physics Documentation `__. ``NEW_LSCALE``: (Default: true) - Use correct formula for converting a spatial legnth scale into spectral space. + Use correct formula for converting a spatial length scale into spectral space. Specific Humidity (SHUM) Perturbation Parameters ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ @@ -1102,7 +1474,7 @@ Specific Humidity (SHUM) Perturbation Parameters .. _SPPT: Stochastically Perturbed Physics Tendencies (SPPT) Parameters -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ SPPT perturbs full physics tendencies *after* the call to the physics suite, unlike :ref:`SPP ` (below), which perturbs specific tuning parameters within a physics scheme. @@ -1164,7 +1536,6 @@ Stochastic Kinetic Energy Backscatter (SKEB) Parameters ``SKEB_VDOF``: (Default: 10) The number of degrees of freedom in the vertical direction for the SKEB random pattern. - .. _SPP: Parameters for Stochastically Perturbed Parameterizations (SPP) @@ -1173,14 +1544,11 @@ Parameters for Stochastically Perturbed Parameterizations (SPP) SPP perturbs specific tuning parameters within a physics :term:`parameterization ` (unlike :ref:`SPPT `, which multiplies overall physics tendencies by a random perturbation field *after* the call to the physics suite). Patterns evolve and are applied at each time step. Each SPP option is an array, applicable (in order) to the :term:`RAP`/:term:`HRRR`-based parameterization listed in ``SPP_VAR_LIST``. Enter each value of the array in ``config.yaml`` as shown below without commas or single quotes (e.g., ``SPP_VAR_LIST: [ "pbl" "sfc" "mp" "rad" "gwd" ]`` ). Both commas and single quotes will be added by Jinja when creating the namelist. .. note:: - SPP is currently only available for specific physics schemes used in the RAP/HRRR physics suite. Users need to be aware of which :term:`SDF` is chosen when turning this option on. Of the four supported physics suites, the full set of parameterizations can only be used with the ``FV3_HRRR`` option for ``CCPP_PHYS_SUITE``. + SPP is currently only available for specific physics schemes used in the RAP/HRRR physics suite. Users need to be aware of which :term:`SDF` is chosen when turning this option on. Of the five supported physics suites, the full set of parameterizations can only be used with the ``FV3_HRRR`` option for ``CCPP_PHYS_SUITE``. ``DO_SPP``: (Default: false) Flag to turn SPP on or off. SPP perturbs parameters or variables with unknown or uncertain magnitudes within the physics code based on ranges provided by physics experts. Valid values: ``True`` | ``False`` -``ISEED_SPP``: (Default: [ 4, 5, 6, 7, 8 ] ) - Seed for setting the random number sequence for the perturbation pattern. - ``SPP_VAR_LIST``: (Default: [ "pbl", "sfc", "mp", "rad", "gwd" ] ) The list of parameterizations to perturb: planetary boundary layer (PBL), surface physics (SFC), microphysics (MP), radiation (RAD), gravity wave drag (GWD). Valid values: ``"pbl"`` | ``"sfc"`` | ``"rad"`` | ``"gwd"`` | ``"mp"`` @@ -1202,6 +1570,8 @@ SPP perturbs specific tuning parameters within a physics :term:`parameterization ``SPP_STDDEV_CUTOFF``: (Default: [ 1.5, 1.5, 2.5, 1.5, 1.5 ] ) Limit for possible perturbation values in standard deviations from the mean. +``ISEED_SPP``: (Default: [ 4, 5, 6, 7, 8 ] ) + Seed for setting the random number sequence for the perturbation pattern. 
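+As a concrete illustration, a minimal ``config.yaml`` sketch that turns SPP on for all five supported parameterizations under the ``FV3_HRRR`` suite might look like the following. This assumes, as the Jinja templates above (e.g., ``global.DO_ENSEMBLE``) suggest, that the stochastic physics variables live in the ``global:`` section; the values shown simply restate the defaults documented above and are illustrative rather than recommendations:
+
+.. code-block:: console
+
+   workflow:
+     CCPP_PHYS_SUITE: "FV3_HRRR"
+   global:
+     DO_SPP: true
+     SPP_VAR_LIST: [ "pbl" "sfc" "mp" "rad" "gwd" ]
+     ISEED_SPP: [ 4 5 6 7 8 ]
+
+Because ``SPP_VAR_LIST`` and ``ISEED_SPP`` are shown at their default values, only ``DO_SPP`` strictly needs to be set; the arrays are included to make the per-parameterization ordering explicit.
+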
Land Surface Model (LSM) SPP ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ @@ -1211,7 +1581,11 @@ Land surface perturbations can be applied to land model parameters and land mode The parameters below turn on SPP in Noah or RUC LSM (support for Noah MP is in progress). Please be aware of the :term:`SDF` that you choose if you wish to turn on Land Surface Model (LSM) SPP. SPP in LSM schemes is handled in the ``&nam_sfcperts`` namelist block instead of in ``&nam_sppperts``, where all other SPP is implemented. ``DO_LSM_SPP``: (Default: false) - Turns on Land Surface Model (LSM) Stochastic Physics Parameterizations (SPP). When true, sets ``lndp_type=2``, which applies land perturbations to the selected paramaters using a newer scheme designed for data assimilation (DA) ensemble spread. LSM SPP perturbs uncertain land surface fields ("smc" "vgf" "alb" "sal" "emi" "zol" "stc") based on recommendations from physics experts. Valid values: ``True`` | ``False`` + Turns on Land Surface Model (LSM) Stochastic Physics Parameterizations (SPP). When true, sets ``lndp_type=2``, which applies land perturbations to the selected parameters using a newer scheme designed for data assimilation (DA) ensemble spread. LSM SPP perturbs uncertain land surface fields ("smc" "vgf" "alb" "sal" "emi" "zol" "stc") based on recommendations from physics experts. Valid values: ``True`` | ``False`` + +.. attention:: + + Only five perturbations at a time can be applied currently, but all seven are shown in the ``LSM_SPP_*`` variables below. ``LSM_SPP_TSCALE``: (Default: [ 21600, 21600, 21600, 21600, 21600, 21600, 21600 ] ) Decorrelation timescales in seconds. @@ -1233,5 +1607,245 @@ The parameters below turn on SPP in Noah or RUC LSM (support for Noah MP is in p Halo Blend Parameter ------------------------ ``HALO_BLEND``: (Default: 10) - Number of cells to use for "blending" the external solution (obtained from the :term:`LBCs`) with the internal solution from the FV3LAM :term:`dycore`. Specifically, it refers to the number of rows into the computational domain that should be blended with the LBCs. Cells at which blending occurs are all within the boundary of the native grid; they don't involve the 4 cells outside the boundary where the LBCs are specified (which is a different :term:`halo`). Blending is necessary to smooth out waves generated due to mismatch between the external and internal solutions. To shut :term:`halo` blending off, set this to zero. + Number of cells to use for "blending" the external solution (obtained from the :term:`LBCs`) with the internal solution from the FV3LAM :term:`dycore`. Specifically, it refers to the number of rows into the computational domain that should be blended with the LBCs. Cells at which blending occurs are all within the boundary of the native grid; they don't involve the 4 cells outside the boundary where the LBCs are specified (which is a different :term:`halo`). Blending is necessary to smooth out waves generated due to mismatch between the external and internal solutions. To shut :term:`halo` blending off, set this variable to zero. + +Pressure Tendency Diagnostic +------------------------------ +``PRINT_DIFF_PGR``: (Default: false) + Option to turn on/off the pressure tendency diagnostic. + +Verification Parameters +========================== + +Non-default parameters for verification tasks are set in the ``verification:`` section of the ``config.yaml`` file. 
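+As elsewhere in ``config.yaml``, users override only the variables they need. A minimal sketch of such an override, using variables described in the subsections below (both values here are illustrative, and ``my_srw_expt`` is a hypothetical name):
+
+.. code-block:: console
+
+   verification:
+     VX_FCST_MODEL_NAME: "my_srw_expt"
+     NUM_MISSING_OBS_FILES_MAX: 0
+
+With these settings, ``my_srw_expt`` replaces the default model name in the verification output files, and setting ``NUM_MISSING_OBS_FILES_MAX`` to 0 makes a verification task fail if any expected observation file is missing.
+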
+
+Templates for Observation Files
+---------------------------------
+
+This section includes template variables for :term:`CCPA`, :term:`MRMS`, :term:`NOHRSC`, and :term:`NDAS` observation files.
+
+``OBS_CCPA_APCP01h_FN_TEMPLATE``: (Default: ``'{valid?fmt=%Y%m%d}/ccpa.t{valid?fmt=%H}z.01h.hrap.conus.gb2'``)
+ File name template used to obtain the input observation files (in the ``PcpCombine_obs`` tasks) that contain the 1-hour accumulated precipitation (APCP) from which APCP for longer accumulations will be generated.
+
+``OBS_CCPA_APCPgt01h_FN_TEMPLATE``: (Default: ``'${OBS_CCPA_APCP01h_FN_TEMPLATE}_a${ACCUM_HH}h.nc'``)
+ File name template used to generate the observation files (in the ``PcpCombine_obs`` tasks) containing accumulated precipitation for accumulation periods longer than 1 hour.
+
+``OBS_NOHRSC_ASNOW_FN_TEMPLATE``: (Default: ``'{valid?fmt=%Y%m%d}/sfav2_CONUS_${ACCUM_HH}h_{valid?fmt=%Y%m%d%H}_grid184.grb2'``)
+ File name template for NOHRSC snow observations.
+
+``OBS_MRMS_REFC_FN_TEMPLATE``: (Default: ``'{valid?fmt=%Y%m%d}/MergedReflectivityQCComposite_00.50_{valid?fmt=%Y%m%d}-{valid?fmt=%H%M%S}.grib2'``)
+ File name template for :term:`MRMS` reflectivity observations.
+
+``OBS_MRMS_RETOP_FN_TEMPLATE``: (Default: ``'{valid?fmt=%Y%m%d}/EchoTop_18_00.50_{valid?fmt=%Y%m%d}-{valid?fmt=%H%M%S}.grib2'``)
+ File name template for MRMS echo top observations.
+
+``OBS_NDAS_SFCorUPA_FN_TEMPLATE``: (Default: ``'prepbufr.ndas.{valid?fmt=%Y%m%d%H}'``)
+ File name template for :term:`NDAS` surface and upper air observations.
+
+``OBS_NDAS_SFCorUPA_FN_METPROC_TEMPLATE``: (Default: ``'${OBS_NDAS_SFCorUPA_FN_TEMPLATE}.nc'``)
+ File name template for NDAS surface and upper air observations after processing by MET's ``pb2nc`` tool (to change the format to NetCDF).
+
+VX Forecast Model Name
+------------------------
+
+``VX_FCST_MODEL_NAME``: (Default: ``'{{ nco.NET_default }}.{{ task_run_post.POST_OUTPUT_DOMAIN_NAME }}'``)
+ String that specifies a descriptive name for the model being verified. This is used in forming the names of the verification output files as well as in the contents of those files.
+
+``VX_FIELDS``: (Default: [ "APCP", "REFC", "RETOP", "SFC", "UPA" ])
+ The fields or groups of fields for which verification tasks will run. Because ``ASNOW`` is often not of interest in cases outside of winter, and because the corresponding observation files are not available on NOAA HPSS for retrospective cases before March 2020, ``ASNOW`` is not included by default. ``"ASNOW"`` may be added to this list in order to include the related verification tasks in the workflow. Valid values: ``"APCP"`` | ``"REFC"`` | ``"RETOP"`` | ``"SFC"`` | ``"UPA"`` | ``"ASNOW"``
+
+``VX_APCP_ACCUMS_HRS``: (Default: [ 1, 3, 6, 24 ])
+ The accumulation periods (in hours) to consider for accumulated precipitation (APCP). If ``VX_FIELDS`` contains ``"APCP"``, then ``VX_APCP_ACCUMS_HRS`` must contain at least one element. If ``VX_FIELDS`` does not contain ``"APCP"``, ``VX_APCP_ACCUMS_HRS`` will be ignored.
+
+``VX_ASNOW_ACCUMS_HRS``: (Default: [ 6, 24 ])
+ The accumulation periods (in hours) to consider for ``ASNOW`` (accumulated snowfall). If ``VX_FIELDS`` contains ``"ASNOW"``, then ``VX_ASNOW_ACCUMS_HRS`` must contain at least one element. If ``VX_FIELDS`` does not contain ``"ASNOW"``, ``VX_ASNOW_ACCUMS_HRS`` will be ignored. 
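+For example, to add snowfall verification to the default set of fields, a user could extend ``VX_FIELDS`` in ``config.yaml`` as sketched below (the accumulation periods shown are the defaults; the NOAA HPSS availability caveat for ``ASNOW`` noted above still applies):
+
+.. code-block:: console
+
+   verification:
+     VX_FIELDS: [ "APCP", "REFC", "RETOP", "SFC", "UPA", "ASNOW" ]
+     VX_ASNOW_ACCUMS_HRS: [ 6, 24 ]
+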
+
+Verification (VX) Directories
+------------------------------
+
+``VX_FCST_INPUT_BASEDIR``: (Default: ``'{% if user.RUN_ENVIR == "nco" %}$COMOUT/../..{% else %}{{ workflow.EXPTDIR }}{% endif %}'``)
+ Template for top-level directory containing forecast (but not obs) files that will be used as input into METplus for verification.
+
+``VX_OUTPUT_BASEDIR``: (Default: ``'{% if user.RUN_ENVIR == "nco" %}$COMOUT/metout{% else %}{{ workflow.EXPTDIR }}{% endif %}'``)
+ Template for top-level directory in which METplus will place its output.
+
+``VX_NDIGITS_ENSMEM_NAMES``: (Default: 3)
+ Number of digits in the ensemble member names. This is a configurable variable to allow users to change its value (e.g., to go from "mem004" to "mem04") when using staged forecast files that do not use the same number of digits as the SRW App.
+
+Verification (VX) File Name and Path Templates
+------------------------------------------------
+
+This section contains file name and path templates used in the verification (VX) tasks.
+
+``FCST_SUBDIR_TEMPLATE``: (Default: ``'{% if user.RUN_ENVIR == "nco" %}${NET_default}.{init?fmt=%Y%m%d?shift=-${time_lag}}/{init?fmt=%H?shift=-${time_lag}}{% else %}{init?fmt=%Y%m%d%H?shift=-${time_lag}}{% if global.DO_ENSEMBLE %}/${ensmem_name}{% endif %}/postprd{% endif %}'``)
+ A template for the subdirectory containing input forecast files for VX tasks.
+
+``FCST_FN_TEMPLATE``: (Default: ``'${NET_default}.t{init?fmt=%H?shift=-${time_lag}}z{% if user.RUN_ENVIR == "nco" and global.DO_ENSEMBLE %}.${ensmem_name}{% endif %}.prslev.f{lead?fmt=%HHH?shift=${time_lag}}.${POST_OUTPUT_DOMAIN_NAME}.grib2'``)
+ A template for the forecast file names used as input to verification tasks.
+
+``FCST_FN_METPROC_TEMPLATE``: (Default: ``'${NET_default}.t{init?fmt=%H}z{% if user.RUN_ENVIR == "nco" and global.DO_ENSEMBLE %}.${ensmem_name}{% endif %}.prslev.f{lead?fmt=%HHH}.${POST_OUTPUT_DOMAIN_NAME}_${VAR}_a${ACCUM_HH}h.nc'``)
+ A template for how to name the forecast files for accumulated precipitation (APCP) with greater than 1-hour accumulation (i.e., 3-, 6-, and 24-hour accumulations) after processing by ``PcpCombine``.
+
+``NUM_MISSING_OBS_FILES_MAX``: (Default: 2)
+ For verification tasks that need observational data, this specifies the maximum number of observation files that may be missing. If more than this number are missing, the verification task will error out.
+ Note that this is a crude way of checking that there are enough observations to conduct verification since this number should probably depend on the field being verified, the time interval between observations, the length of the forecast, etc. An alternative may be to specify the maximum allowed fraction of observation files that can be missing (i.e., the number missing divided by the number that are expected to exist).
+
+``NUM_MISSING_FCST_FILES_MAX``: (Default: 0)
+ For verification tasks that need forecast data, this specifies the maximum number of post-processed forecast files that may be missing. If more than this number are missing, the verification task will not be run.
+
+Coupled AQM Configuration Parameters
+=====================================
+
+Non-default parameters for coupled Air Quality Modeling (AQM) tasks are set in the ``cpl_aqm_parm:`` section of the ``config.yaml`` file. Note that coupled AQM features are not currently supported for community use.
+
+``CPL_AQM``: (Default: false)
+ Coupling flag for air quality modeling.
+
+``DO_AQM_DUST``: (Default: true)
+ Flag turning on/off AQM dust option in AQM_RC. 
+
+``DO_AQM_CANOPY``: (Default: false)
+ Flag turning on/off AQM canopy option in AQM_RC.
+
+``DO_AQM_PRODUCT``: (Default: true)
+ Flag turning on/off AQM output products in AQM_RC.
+
+``DO_AQM_CHEM_LBCS``: (Default: true)
+ Flag turning on/off the addition of chemical species to the LBC files.
+
+``DO_AQM_GEFS_LBCS``: (Default: false)
+ Flag turning on/off the addition of GEFS aerosol data to the chemical LBCs.
+
+``DO_AQM_SAVE_AIRNOW_HIST``: (Default: false)
+ Flag that determines whether to save AirNow bias-correction training data.
+
+``DO_AQM_SAVE_FIRE``: (Default: false)
+ Flag that determines whether to archive the fire emission file to HPSS.
+
+``DCOMINbio_default``: (Default: "")
+ Path to the directory containing AQM bio files.
+
+``DCOMINdust_default``: (Default: "/path/to/dust/dir")
+ Path to the directory containing the AQM dust file.
+
+``DCOMINcanopy_default``: (Default: "/path/to/canopy/dir")
+ Path to the directory containing AQM canopy files.
+
+``DCOMINfire_default``: (Default: "")
+ Path to the directory containing AQM fire files.
+
+``DCOMINchem_lbcs_default``: (Default: "")
+ Path to the directory containing chemical LBC files.
+
+``DCOMINgefs_default``: (Default: "")
+ Path to the directory containing GEFS aerosol LBC files.
+
+``DCOMINpt_src_default``: (Default: "/path/to/point/source/base/directory")
+ Parent directory containing point source files.
+
+``DCOMINairnow_default``: (Default: "/path/to/airnow/obaservation/data")
+ Path to the directory containing AirNow observation data.
+
+``COMINbicor``: (Default: "/path/to/historical/airnow/data/dir")
+ Path from which to read historical training data for bias correction.
+
+``COMOUTbicor``: (Default: "/path/to/historical/airnow/data/dir")
+ Path to save the current cycle's model output and AirNow observations as training data for future use. ``$COMINbicor`` and ``$COMOUTbicor`` can be distinguished by the ``${yyyy}${mm}${dd}`` subdirectory under the same location.
+
+``AQM_CONFIG_DIR``: (Default: "")
+ Configuration directory for AQM.
+
+``AQM_BIO_FILE``: (Default: "BEIS_SARC401.ncf")
+ File name of the AQM BIO file.
+
+``AQM_DUST_FILE_PREFIX``: (Default: "FENGSHA_p8_10km_inputs")
+ Prefix of the AQM dust file.
+
+``AQM_DUST_FILE_SUFFIX``: (Default: ".nc")
+ Suffix and extension of the AQM dust file.
+
+``AQM_CANOPY_FILE_PREFIX``: (Default: "gfs.t12z.geo")
+ Prefix of the AQM canopy file.
+
+``AQM_CANOPY_FILE_SUFFIX``: (Default: ".canopy_regrid.nc")
+ Suffix and extension of the AQM canopy file.
+
+``AQM_FIRE_FILE_PREFIX``: (Default: "GBBEPx_C401GRID.emissions_v003")
+ Prefix of the AQM fire file.
+
+``AQM_FIRE_FILE_SUFFIX``: (Default: ".nc")
+ Suffix and extension of the AQM fire file.
+
+``AQM_FIRE_FILE_OFFSET_HRS``: (Default: 0)
+ Time offset when retrieving fire emission data files. In a real-time run, the data files for :term:`ICs/LBCs` are not ready for use until the case starts. To resolve this issue, a real-time run uses the input data files from the previous cycle. For example, if the experiment run cycle starts at 12z, and ``AQM_FIRE_FILE_OFFSET_HRS: 6``, the fire emission data file from the previous cycle (06z) is used.
+
+``AQM_FIRE_ARCHV_DIR``: (Default: "/path/to/archive/dir/for/RAVE/on/HPSS")
+ Path to the archive directory for RAVE emission files on :term:`HPSS`.
+
+``AQM_RC_FIRE_FREQUENCY``: (Default: "static")
+ Fire frequency in ``aqm.rc``.
+
+``AQM_RC_PRODUCT_FN``: (Default: "aqm.prod.nc")
+ File name of AQM output products.
+
+``AQM_RC_PRODUCT_FREQUENCY``: (Default: "hourly")
+ Frequency of AQM output products.
+
+``AQM_LBCS_FILES``: (Default: "gfs_bndy_chen_.tile7.000.nc")
+ File name of the chemical LBC files. 
+
+``AQM_GEFS_FILE_PREFIX``: (Default: "geaer")
+ Prefix of the AQM GEFS file ("geaer" or "gfs").
+
+``AQM_GEFS_FILE_CYC``: (Default: "")
+ Cycle of the GEFS aerosol LBC files; used only if the cycle is fixed.
+
+``NEXUS_INPUT_DIR``: (Default: "")
+ Same as ``GRID_DIR`` but for the air quality emission generation task. Should be blank for the default value specified in ``setup.sh``.
+
+``NEXUS_FIX_DIR``: (Default: "")
+ Directory containing the ``grid_spec`` files used as input to NEXUS.
+
+``NEXUS_GRID_FN``: (Default: "grid_spec_GSD_HRRR_25km.nc")
+ File name of the input ``grid_spec`` file of NEXUS.
+
+``NUM_SPLIT_NEXUS``: (Default: 3)
+ Number of split NEXUS emission tasks.
+
+``NEXUS_GFS_SFC_OFFSET_HRS``: (Default: 0)
+ Time offset when retrieving GFS surface data files.
+
+``NEXUS_GFS_SFC_DIR``: (Default: "")
+ Path to the directory containing GFS surface data files. This is set to ``COMINgfs`` when ``DO_REAL_TIME=TRUE``.
+
+``NEXUS_GFS_SFC_ARCHV_DIR``: (Default: "/NCEPPROD/hpssprod/runhistory")
+ Path to the archive directory for GFS surface files on HPSS.
+
+Rocoto Parameters
+===================
+
+Non-default Rocoto workflow parameters are set in the ``rocoto:`` section of the ``config.yaml`` file. This section is structured as follows:
+
+.. code-block:: console
+
+   rocoto:
+     attrs: ""
+     cycledefs: ""
+     entities: ""
+     log: ""
+     tasks:
+       taskgroups: ""
+
+Users are most likely to use the ``taskgroups:`` component of the ``rocoto:`` section to add or delete groups of tasks from the default list of tasks. For example, to add plotting tasks, users would add:
+
+.. code-block:: console
+
+   rocoto:
+     ...
+     tasks:
+       taskgroups: '{{ ["parm/wflow/prep.yaml", "parm/wflow/coldstart.yaml", "parm/wflow/post.yaml", "parm/wflow/plot.yaml"]|include }}'
+
+See :numref:`Section %s ` for more information on the components of the ``rocoto:`` section and how to define a Rocoto workflow.
+
+
diff --git a/docs/UsersGuide/source/Reference/Glossary.rst b/docs/UsersGuide/source/Reference/Glossary.rst
index 8a61939d18..ebeffa33e8 100644
--- a/docs/UsersGuide/source/Reference/Glossary.rst
+++ b/docs/UsersGuide/source/Reference/Glossary.rst
@@ -220,6 +220,9 @@ Glossary
    RAP
       `Rapid Refresh `__. The continental-scale NOAA hourly-updated assimilation/modeling system operational at :term:`NCEP`. RAP covers North America and is comprised primarily of a numerical forecast model and an analysis/assimilation system to initialize that model. RAP is complemented by the higher-resolution 3km High-Resolution Rapid Refresh (:term:`HRRR`) model.
 
+   RDHPCS
+      NOAA Research & Development High-Performance Computing Systems.
+
    Repository
       A central location in which files (e.g., data, code, documentation) are stored and managed.
 
diff --git a/ush/config_defaults.yaml b/ush/config_defaults.yaml
index 012e886530..09571845a9 100644
--- a/ush/config_defaults.yaml
+++ b/ush/config_defaults.yaml
@@ -50,6 +50,50 @@ user:
 # ACCOUNT:
 # The account under which to submit jobs to the queue.
 #
+ # HOMEdir:
+ # The path to the user's ufs-srweather-app clone. This path
+ # is set in ush/setup.py as the parent directory to USHdir.
+ #
+ # USHdir:
+ # The path to the user's ush directory in their ufs-srweather-app clone.
+ # This path is set automatically in the main function of setup.py and
+ # corresponds to the location of setup.py (i.e., the ush directory).
+ #
+ # SCRIPTSdir:
+ # The path to the user's scripts directory in their ufs-srweather-app clone.
+ #
+ # JOBSdir:
+ # The path to the user's jobs directory in their ufs-srweather-app clone. 
+ # + # SORCdir: + # The path to the user's sorc directory in their ufs-srweather-app clone. + # + # PARMdir: + # The path to the user's parm directory in their ufs-srweather-app clone. + # + # MODULESdir: + # The path to the user's modulefiles directory in their ufs-srweather-app + # clone. + # + # EXECdir: + # The path to the user's exec directory in their ufs-srweather-app clone. + # + # METPLUS_CONF: + # The path to the user's final METplus configuration file. By default, + # METplus configuration files reside in ufs-srweather-app/parm/metplus. + # + # UFS_WTHR_MDL_DIR: + # The path to the location where the weather + # model code is located within the ufs-srweather-app clone. This + # parameter is set in setup.py and uses information from the + # Externals.cfg file to build the correct path. It is built with + # knowledge of HOMEdir and often corresponds to ufs-srweather-app/ + # sorc/ufs-weather-model. + # + # ARL_NEXUS_DIR: + # The path to the user's NEXUS directory. By default, NEXUS source + # code resides in ufs-srweather-app/sorc/arl_nexus + # #----------------------------------------------------------------------- MACHINE: "BIG_COMPUTER" ACCOUNT: "" @@ -62,9 +106,7 @@ user: PARMdir: '{{ [HOMEdir, "parm"]|path_join }}' MODULESdir: '{{ [HOMEdir, "modulefiles"]|path_join }}' EXECdir: '{{ [HOMEdir, workflow.EXEC_SUBDIR]|path_join }}' - VX_CONFIG_DIR: '{{ [HOMEdir, "parm"]|path_join }}' METPLUS_CONF: '{{ [PARMdir, "metplus"]|path_join }}' - MET_CONFIG: '{{ [PARMdir, "met"]|path_join }}' UFS_WTHR_MDL_DIR: '{{ user.UFS_WTHR_MDL_DIR }}' ARL_NEXUS_DIR: '{{ [SORCdir, "arl_nexus"]|path_join }}' @@ -76,10 +118,10 @@ platform: #----------------------------------------------------------------------- # # WORKFLOW_MANAGER: - # The workflow manager to use (e.g. rocoto). This is set to "none" by + # The workflow manager to use (e.g., rocoto). This is set to "none" by # default, but if the machine name is set to a platform that supports - # rocoto, this will be overwritten and set to "rocoto". If set - # explicitly to rocoto along with the use of the MACHINE=linux target, + # Rocoto, this will be overwritten and set to "rocoto". If set + # explicitly to "rocoto" along with the use of the MACHINE: "linux" target, # the configuration layer assumes a Slurm batch manager when generating # the XML. Valid options: "rocoto" or "none" # @@ -92,23 +134,23 @@ platform: # to 1 makes sense # # BUILD_MOD_FN: - # Name of alternative build module file to use if using an + # Name of an alternative build modulefile to use if using an # unsupported platform. Is set automatically for supported machines. # # WFLOW_MOD_FN: - # Name of alternative workflow module file to use if using an + # Name of alternative workflow modulefile to use if using an # unsupported platform. Is set automatically for supported machines. # # BUILD_VER_FN: - # File name containing the version of the modules used for building the app. - # Currently, WCOSS2 only uses this file. + # File name containing the version of the modules used for building the App. + # Currently, only WCOSS2 uses this file. # # RUN_VER_FN: - # File name containing the version of the modules used for running the app. - # Currently, WCOSS2 only uses this file. + # File name containing the version of the modules used for running the App. + # Currently, only WCOSS2 uses this file. # # SCHED: - # The job scheduler to use (e.g. slurm). Set this to an empty string in + # The job scheduler to use (e.g., Slurm). 
Set this to an empty string in
 # order for the experiment generation script to set it depending on the
 # machine.
 #
@@ -133,28 +175,28 @@ platform:
 #
 # PARTITION_HPSS:
 # If using the slurm job scheduler (i.e. if SCHED is set to "slurm"),
- # the partition to which the tasks that get or create links to external
+ # tasks that get or create links to external
 # model files [which are needed to generate initial conditions (ICs) and
- # lateral boundary conditions (LBCs)] are submitted. If this is not set
+ # lateral boundary conditions (LBCs)] are submitted to this partition. If this is not set
 # or is set to an empty string, it will be (re)set to a machine-dependent
 # value. This is not used if SCHED is not set to "slurm".
 #
 # QUEUE_HPSS:
- # The queue or QOS to which the tasks that get or create links to external
+ # Tasks that get or create links to external
 # model files [which are needed to generate initial conditions (ICs) and
- # lateral boundary conditions (LBCs)] are submitted. If this is not set
+ # lateral boundary conditions (LBCs)] are submitted to this queue or QOS. If this is not set
 # or is set to an empty string, it will be (re)set to a machine-dependent
 # value.
 #
 # PARTITION_FCST:
 # If using the slurm job scheduler (i.e. if SCHED is set to "slurm"),
- # the partition to which the task that runs forecasts is submitted. If
+ # the task that runs forecasts is submitted to this partition. If
 # this is not set or set to an empty string, it will be (re)set to a
 # machine-dependent value. This is not used if SCHED is not set to
 # "slurm".
 #
 # QUEUE_FCST:
- # The queue or QOS to which the task that runs a forecast is submitted.
+ # The task that runs a forecast is submitted to this queue or QOS.
 # If this is not set or set to an empty string, it will be (re)set to a
 # machine-dependent value.
 #
@@ -186,7 +228,7 @@ platform:
 # will be ignored unless WORKFLOW_MANAGER: "none". Definitions:
 #
 # RUN_CMD_UTILS:
- # The run command for pre-processing utilities (shave, orog, sfc_climo_gen,
+ # The run command for MPI-enabled pre-processing utilities (shave, orog, sfc_climo_gen,
 # etc.) Can be left blank for smaller domains, in which case the executables
 # will run without MPI.
 #
@@ -226,7 +268,12 @@ platform:
 # XML Native command
 #
 SCHED_NATIVE_CMD: ""
-
+ #
+ #-----------------------------------------------------------------------
+ # Pre task commands such as "ulimit" needed by tasks
+ #-----------------------------------------------------------------------
+ #
+ PRE_TASK_CMDS: ""
 #
 #-----------------------------------------------------------------------
 #
@@ -239,8 +286,17 @@ platform:
 # in file scripts/exregional_get_verif_obs.sh for more details about
 # files and directory structure, as well as important caveats about
 # errors in the metadata and file names.
- # NOTE: Do not set this to the same path as other *_OBS_DIR variables,
- # otherwise unexpected results and data loss may occur
+ # NOTE: Do not set this to the same path as other *_OBS_DIR variables;
+ # otherwise unexpected results and data loss may occur.
+ #
+ # NOHRSC_OBS_DIR:
+ # User-specified location of top-level directory where NOHRSC 6- and
+ # 24-hour snowfall accumulation files used by METplus are located (or,
+ # if retrieved by the workflow, where they will be placed). See comments
+ # in file scripts/exregional_get_verif_obs.sh for more details about
+ # files and directory structure.
+ # NOTE: Do not set this to the same path as other *_OBS_DIR variables;
+ # otherwise unexpected results and data loss may occur. 
#
 # MRMS_OBS_DIR:
 # User-specified location of the directory where MRMS composite
 # retrieved by the workflow, where they will be placed). See comments
 # in the scripts/exregional_get_verif_obs.sh for more details about
 # files and directory structure.
- # NOTE: Do not set this to the same path as other *_OBS_DIR variables,
- # otherwise unexpected results and data loss may occur
+ # NOTE: Do not set this to the same path as other *_OBS_DIR variables;
+ # otherwise unexpected results and data loss may occur.
 #
 # NDAS_OBS_DIR:
 # User-specified location of top-level directory where NDAS prepbufr
 # where they will be placed). See comments in file
 # scripts/exregional_get_verif_obs.sh for more details about files
 # and directory structure.
- # NOTE: Do not set this to the same path as other *_OBS_DIR variables,
- # otherwise unexpected results and data loss may occur
- #
- # NOHRSC_OBS_DIR:
- # User-specified location of top-level directory where NOHRSC 6- and
- # 24-hour snowfall accumulation files used by METplus are located (or,
- # if retrieved by the workflow, where they will be placed). See comments
- # in file scripts/exregional_get_verif_obs.sh for more details about
- # files and directory structure
- # NOTE: Do not set this to the same path as other *_OBS_DIR variables,
- # otherwise unexpected results and data loss may occur
+ # NOTE: Do not set this to the same path as other *_OBS_DIR variables;
+ # otherwise unexpected results and data loss may occur.
 #
@@ -299,20 +346,54 @@ platform:
 DOMAIN_PREGEN_BASEDIR: ""
 #
 #-----------------------------------------------------------------------
- # Pre task commands such as "ulimit" needed by tasks
+ # Test directories used in run_WE2E script
 #-----------------------------------------------------------------------
- #
- PRE_TASK_CMDS: ""
+ # TEST_EXTRN_MDL_SOURCE_BASEDIR:
+ # This parameter allows testing of user-staged files in a known location
+ # on a given platform. This path contains a limited dataset and likely
+ # will not be useful for most user experiments.
+ #
+ # TEST_AQM_INPUT_BASEDIR:
+ # The path to user-staged AQM fire emission data for WE2E testing.
+ #
+ # TEST_PREGEN_BASEDIR:
+ # Similar to DOMAIN_PREGEN_BASEDIR, this variable sets the base
+ # directory containing pregenerated grid, orography, and surface
+ # climatology files for WE2E tests. This is an alternative for setting
+ # GRID_DIR, OROG_DIR, and SFC_CLIMO_DIR individually.
+ #
+ # TEST_ALT_EXTRN_MDL_SYSBASEDIR_ICS:
+ # This parameter is used to test the mechanism that allows users to
+ # point to a data stream on disk. It sets up a sandbox location that
+ # mimics the stream in a more controlled way and tests the ability
+ # to access ICS.
+ #
+ # TEST_ALT_EXTRN_MDL_SYSBASEDIR_LBCS:
+ # This parameter is used to test the mechanism that allows users to
+ # point to a data stream on disk. It sets up a sandbox location that
+ # mimics the stream in a more controlled way and tests the ability
+ # to access LBCS.
+ #
+ # TEST_CCPA_OBS_DIR, TEST_MRMS_OBS_DIR, TEST_NDAS_OBS_DIR:
+ # These parameters are used by the testing script to test the mechanism
+ # that allows users to point to data streams on disk for observation data
+ # for verification tasks. They test the ability for users to set
+ # CCPA_OBS_DIR, MRMS_OBS_DIR, and NDAS_OBS_DIR respectively. 
+
+ # TEST_VX_FCST_INPUT_BASEDIR:
+ # The path to user-staged forecast files in a known location on a given
+ # platform, used for WE2E testing of verification tasks.
 #
 #-----------------------------------------------------------------------
 #
 TEST_EXTRN_MDL_SOURCE_BASEDIR: ""
 TEST_AQM_INPUT_BASEDIR: ""
 TEST_PREGEN_BASEDIR: ""
 TEST_ALT_EXTRN_MDL_SYSBASEDIR_ICS: ""
 TEST_ALT_EXTRN_MDL_SYSBASEDIR_LBCS: ""
+ TEST_CCPA_OBS_DIR: ""
+ TEST_MRMS_OBS_DIR: ""
+ TEST_NDAS_OBS_DIR: ""
 TEST_VX_FCST_INPUT_BASEDIR: ""
 #
 #-----------------------------------------------------------------------
@@ -324,17 +405,22 @@ platform:
 # files that are needed to run the FV3-LAM model are located
 #
 # FIXaer:
- # System directory where MERRA2 aerosol climatology files are located
+ # System directory where MERRA2 aerosol climatology files are located.
+ # Only used if running with a physics suite that uses Thompson microphysics.
 #
 # FIXlut:
- # System directory where the lookup tables for optics properties are located
+ # System directory where the lookup tables for optics properties are located.
+ # Only used if running with a physics suite that uses Thompson microphysics.
 #
 # FIXorg:
- # System directory where orography data is located
+ # System directory where orography data are located
 #
 # FIXsfc:
- # System directory where surface climatology data is located
+ # System directory where surface climatology data are located
 #
+ # FIXshp:
+ # System directory where the graphics shapefiles are located.
+ #
 # FIXcrtm:
 # System directory where CRTM fixed files are located
 #
@@ -391,7 +477,7 @@ workflow:
 # Set cron-associated parameters. Definitions:
 #
 # USE_CRON_TO_RELAUNCH:
- # Flag that determines whether or not to add a line to the user's cron
+ # Flag that determines whether to add a line to the user's cron
 # table to call the experiment launch script every CRON_RELAUNCH_INTVL_MNTS
 # minutes.
 #
 # CRON_RELAUNCH_INTVL_MNTS:
 # The interval (in minutes) between successive calls of the experiment
 # launch script by a cron job to (re)launch the experiment (so that the
 # workflow for the experiment kicks off where it left off).
 #
+ # CRONTAB_LINE:
+ # The launch command that will appear in the crontab.
+ #
+ # LOAD_MODULES_RUN_TASK_FP:
+ # Path to load_modules_run_task.sh.
 #-----------------------------------------------------------------------
 #
 USE_CRON_TO_RELAUNCH: false
@@ -444,7 +535,7 @@ workflow:
 # and orography fixed files.
 #
 # Ideally, the same separator should be used in the names of these fixed
- # files as the surface climatology fixed files (which always use a "."
+ # files as in the surface climatology fixed files (which always use a "."
 # as the separator), i.e. ideally, DOT_OR_USCORE should be set to "."
 #
 #-----------------------------------------------------------------------
@@ -460,12 +551,16 @@ workflow:
 #
 # CONSTANTS_FN:
 # Name of the file containing definitions of various mathematical, physical,
- # and SRW App contants.
+ # and SRW App constants.
 #
 # RGNL_GRID_NML_FN:
 # Name of file containing the namelist settings for the code that generates
 # a "ESGgrid" type of regional grid.
 #
+ # FV3_NML_FN:
+ # Name of the forecast model's namelist file. It includes the information in
+ # FV3_NML_BASE_SUITE_FN (i.e., input.nml.FV3), FV3_NML_YAML_CONFIG_FN (i.e., FV3.input.yml), and the user configuration file (i.e., config.yaml). 
+ # # FV3_NML_BASE_SUITE_FN: # Name of Fortran namelist file containing the forecast model's base suite # namelist, i.e. the portion of the namelist that is common to all physics @@ -486,6 +581,17 @@ workflow: # the directory in which it is created in the build step to the executables # directory (EXECDIR; this is set during experiment generation). # + # DATA_TABLE_FN: + # Name of the file that contains the data table read in by the forecast model. + # + # DIAG_TABLE_FN: + # Prefix for the name of the file that specifies + # the output fields of the forecast model. + # + # FIELD_TABLE_FN: + # Prefix for the name of the file that specifies + # the tracers that the forecast model will read in from the IC/LBC files. + # # DIAG_TABLE_TMPL_FN: # Name of a template file that specifies the output fields of the # forecast model (ufs-weather-model: diag_table) followed by the name @@ -499,20 +605,104 @@ workflow: # Its default value is the name of the file that the ufs weather model expects # to read in. # - # MODEL_CONFIG_TMPL_FN: + # MODEL_CONFIG_FN: # Name of a template file that contains settings and configurations for the # NUOPC/ESMF main component (ufs-weather-model: model_config). Its default # value is the name of the file that the ufs weather model expects to read in. # - # NEMS_CONFIG_TMPL_FN: + # NEMS_CONFIG_FN: # Name of a template file that contains information about the various NEMS # components and their run sequence (ufs-weather-model: nems.configure). # Its default value is the name of the file that the ufs weather model expects # to read in. # + # AQM_RC_FN: + # Name of resource file for NOAA Air Quality Model (AQM). + # # AQM_RC_TMPL_FN: # Template file name of resource file for NOAA Air Quality Model (AQM) + # + #----------------------------------------------------------------------- + # + EXPT_CONFIG_FN: "config.yaml" + CONSTANTS_FN: "constants.yaml" + + RGNL_GRID_NML_FN: "regional_grid.nml" + + FV3_NML_FN: "input.nml" + FV3_NML_BASE_SUITE_FN: "{{ FV3_NML_FN }}.FV3" + FV3_NML_YAML_CONFIG_FN: "FV3.input.yml" + FV3_NML_BASE_ENS_FN: "{{ FV3_NML_FN }}.base_ens" + FV3_EXEC_FN: "ufs_model" + + DATA_TABLE_FN: "data_table" + DIAG_TABLE_FN: "diag_table" + FIELD_TABLE_FN: "field_table" + DIAG_TABLE_TMPL_FN: 'diag_table.{{ CCPP_PHYS_SUITE }}' + FIELD_TABLE_TMPL_FN: 'field_table.{{ CCPP_PHYS_SUITE }}' + MODEL_CONFIG_FN: "model_configure" + NEMS_CONFIG_FN: "nems.configure" + AQM_RC_FN: "aqm.rc" + AQM_RC_TMPL_FN: "aqm.rc" + + # + #----------------------------------------------------------------------- + # FV3_NML_BASE_SUITE_FP: + # Path to the FV3_NML_BASE_SUITE_FN file. + # + # FV3_NML_YAML_CONFIG_FP: + # Path to the FV3_NML_YAML_CONFIG_FN file. + # + # FV3_NML_BASE_ENS_FP: + # Path to the FV3_NML_BASE_ENS_FN file. + # + # DATA_TABLE_TMPL_FP: + # Path to the DATA_TABLE_FN file. + # + # DIAG_TABLE_TMPL_FP: + # Path to the DIAG_TABLE_TMPL_FN file. + # + # FIELD_TABLE_TMPL_FP: + # Path to the FIELD_TABLE_TMPL_FN file. + # + # MODEL_CONFIG_TMPL_FP: + # Path to the MODEL_CONFIG_FN file. + # + # NEMS_CONFIG_TMPL_FP: + # Path to the NEMS_CONFIG_FN file. + # + # AQM_RC_TMPL_FP: + # Path to the AQM_RC_TMPL_FN file. 
+ #
+ #-----------------------------------------------------------------------
 #
+
+ FV3_NML_BASE_SUITE_FP: '{{ [user.PARMdir, FV3_NML_BASE_SUITE_FN]|path_join }}'
+ FV3_NML_YAML_CONFIG_FP: '{{ [user.PARMdir, FV3_NML_YAML_CONFIG_FN]|path_join }}'
+ FV3_NML_BASE_ENS_FP: '{{ [EXPTDIR, FV3_NML_BASE_ENS_FN]|path_join }}'
+ DATA_TABLE_TMPL_FP: '{{ [user.PARMdir, DATA_TABLE_FN]|path_join }}'
+ DIAG_TABLE_TMPL_FP: '{{ [user.PARMdir, DIAG_TABLE_TMPL_FN]|path_join }}'
+ FIELD_TABLE_TMPL_FP: '{{ [user.PARMdir, FIELD_TABLE_TMPL_FN]|path_join }}'
+ MODEL_CONFIG_TMPL_FP: '{{ [user.PARMdir, MODEL_CONFIG_FN]|path_join }}'
+ NEMS_CONFIG_TMPL_FP: '{{ [user.PARMdir, NEMS_CONFIG_FN]|path_join }}'
+ AQM_RC_TMPL_FP: '{{ [user.PARMdir, AQM_RC_TMPL_FN]|path_join }}'
+
+ #
+ #-----------------------------------------------------------------------
+ # These are staged in the exptdir at configuration time
+ #
+ # DATA_TABLE_FP:
+ # Path to the data table in the experiment directory.
+ #
+ # FIELD_TABLE_FP:
+ # Path to the field table in the experiment directory.
+ #
+ # NEMS_CONFIG_FP:
+ # Path to the NEMS_CONFIG_FN file in the experiment directory.
+ #
+ # FV3_NML_FP:
+ # Path to the FV3_NML_FN file in the experiment directory.
+ #
 # FCST_MODEL:
 # Name of forecast model (default=ufs-weather-model)
 #
@@ -521,7 +711,7 @@
 # script creates and that defines the workflow for the experiment.
 #
 # GLOBAL_VAR_DEFNS_FN:
- # Name of file (a shell script) containing the defintions of the primary
+ # Name of file (a shell script) containing the definitions of the primary
 # experiment variables (parameters) defined in this default configuration
 # script and in the user-specified configuration as well as secondary
 # experiment variables generated by the experiment generation script.
@@ -548,40 +738,25 @@
 # Name of the log file that contains the output from successive calls to
 # the workflow launch script (WFLOW_LAUNCH_SCRIPT_FN).
 #
+ # GLOBAL_VAR_DEFNS_FP:
+ # Path to the global variable definition file
+ # (GLOBAL_VAR_DEFNS_FN) in the experiment directory.
+ #
+ # ROCOTO_YAML_FP:
+ # Path to the Rocoto YAML configuration file
+ # (ROCOTO_YAML_FN) in the experiment directory.
+ #
+ # WFLOW_LAUNCH_SCRIPT_FP:
+ # Path to the workflow launch script
+ # (WFLOW_LAUNCH_SCRIPT_FN) in the experiment directory.
+ #
+ # WFLOW_LAUNCH_LOG_FP:
+ # Path to the log file (WFLOW_LAUNCH_LOG_FN) in
+ # the experiment directory that contains output from successive
+ # calls to the workflow launch script. 
+ # #----------------------------------------------------------------------- # - EXPT_CONFIG_FN: "config.yaml" - CONSTANTS_FN: "constants.yaml" - - RGNL_GRID_NML_FN: "regional_grid.nml" - - FV3_NML_FN: "input.nml" - FV3_NML_BASE_SUITE_FN: "{{ FV3_NML_FN }}.FV3" - FV3_NML_YAML_CONFIG_FN: "FV3.input.yml" - FV3_NML_BASE_ENS_FN: "{{ FV3_NML_FN }}.base_ens" - FV3_EXEC_FN: "ufs_model" - - DATA_TABLE_FN: "data_table" - DIAG_TABLE_FN: "diag_table" - FIELD_TABLE_FN: "field_table" - DIAG_TABLE_TMPL_FN: 'diag_table.{{ CCPP_PHYS_SUITE }}' - FIELD_TABLE_TMPL_FN: 'field_table.{{ CCPP_PHYS_SUITE }}' - MODEL_CONFIG_FN: "model_configure" - NEMS_CONFIG_FN: "nems.configure" - AQM_RC_FN: "aqm.rc" - AQM_RC_TMPL_FN: "aqm.rc" - - FV3_NML_BASE_SUITE_FP: '{{ [user.PARMdir, FV3_NML_BASE_SUITE_FN]|path_join }}' - FV3_NML_YAML_CONFIG_FP: '{{ [user.PARMdir, FV3_NML_YAML_CONFIG_FN]|path_join }}' - FV3_NML_BASE_ENS_FP: '{{ [EXPTDIR, FV3_NML_BASE_ENS_FN]|path_join }}' - DATA_TABLE_TMPL_FP: '{{ [user.PARMdir, DATA_TABLE_FN]|path_join }}' - DIAG_TABLE_TMPL_FP: '{{ [user.PARMdir, DIAG_TABLE_TMPL_FN]|path_join }}' - FIELD_TABLE_TMPL_FP: '{{ [user.PARMdir, FIELD_TABLE_TMPL_FN]|path_join }}' - MODEL_CONFIG_TMPL_FP: '{{ [user.PARMdir, MODEL_CONFIG_FN]|path_join }}' - NEMS_CONFIG_TMPL_FP: '{{ [user.PARMdir, NEMS_CONFIG_FN]|path_join }}' - AQM_RC_TMPL_FP: '{{ [user.PARMdir, AQM_RC_TMPL_FN]|path_join }}' - - # These are staged in the exptdir at configuration time DATA_TABLE_FP: '{{ [EXPTDIR, DATA_TABLE_FN]|path_join }}' FIELD_TABLE_FP: '{{ [EXPTDIR, FIELD_TABLE_FN]|path_join }}' NEMS_CONFIG_FP: '{{ [EXPTDIR, NEMS_CONFIG_FN]|path_join }}' @@ -651,9 +826,19 @@ workflow: # # *_FN and *_FP variables set the name and paths to the suite # definition files used for the experiment + # + # CCPP_PHYS_SUITE_FN: The name of the suite definition file (SDF) + # used for the experiment. + # + # CCPP_PHYS_SUITE_IN_CCPP_FP: The full path to the suite definition + # file (SDF) in the forecast model's directory structure (e.g., + # /path/to/ufs-srweather-app/sorc/ufs-weather-model/FV3/ccpp/suites/$CCPP_PHYS_SUITE_FN) # + # CCPP_PHYS_SUITE_FP: The full path to the suite definition file + # (SDF) in the experiment directory. + # # CCPP_PHYS_DIR: - # The directory of the CCPP physics source code. This path is needed to + # The directory containing the CCPP physics source code. This path is needed to # be able to use table(s) contained in that repository. #----------------------------------------------------------------------- # @@ -667,6 +852,18 @@ workflow: # # Set the field dictionary file name and paths. # + # FIELD_DICT_FN: + # The name of the field dictionary file. This file is a community-based + # dictionary for shared coupling fields and is automatically generated + # by the NUOPC Layer. + # + # FIELD_DICT_IN_UWM_FP: + # The full path to FIELD_DICT_FN within the forecast model's directory structure + # (e.g., /path/to/ufs-srweather-app/sorc/ufs-weather-model/tests/parm/$FIELD_DICT_FN). + # + # FIELD_DICT_FP: + # The full path to FIELD_DICT_FN in the experiment directory. + # #----------------------------------------------------------------------- # FIELD_DICT_FN: "fd_nems.yaml" @@ -675,38 +872,37 @@ workflow: # #----------------------------------------------------------------------- # - # Set GRID_GEN_METHOD. This variable specifies the method to use to - # generate a regional grid in the horizontal. The values that it can + # Set GRID_GEN_METHOD. This variable specifies the method to use to + # generate a regional grid in the horizontal. 
The values that it can # take on are: # - # * "GFDLgrid": - # This setting will generate a regional grid by first generating a - # "parent" global cubed-sphere grid and then taking a portion of tile - # 6 of that global grid -- referred to in the grid generation scripts - # as "tile 7" even though it doesn't correspond to a complete tile -- - # and using it as the regional grid. Note that the forecast is run on - # only on the regional grid (i.e. tile 7, not tiles 1 through 6). - # # * "ESGgrid": # This will generate a regional grid using the map projection developed # by Jim Purser of EMC. # + # * "GFDLgrid": + # The "GFDLgrid" method first generates a "parent" global cubed-sphere grid. + # Then a portion from tile 6 of the global grid is used as the regional grid. + # This regional grid is referred to in the grid generation scripts + # as "tile 7," even though it does not correspond to a complete tile. + # The forecast is run only on the regional grid (i.e., on tile 7, not on tiles 1 through 6). + # # Note that: # - # 1) If the experiment is using one of the predefined grids (i.e. if - # PREDEF_GRID_NAME is set to the name of one of the valid predefined - # grids), then GRID_GEN_METHOD will be reset to the value of + # 1) If the experiment uses a predefined grid (i.e. if + # PREDEF_GRID_NAME is set to the name of a valid predefined + # grid), then GRID_GEN_METHOD will be reset to the value of # GRID_GEN_METHOD for that grid. This will happen regardless of - # whether or not GRID_GEN_METHOD is assigned a value in the user- - # specified experiment configuration file, i.e. any value it may be + # whether GRID_GEN_METHOD is assigned a value in the user- + # specified experiment configuration file. In other words, any value # assigned in the experiment configuration file will be overwritten. # # 2) If the experiment is not using one of the predefined grids (i.e. if # PREDEF_GRID_NAME is set to a null string), then GRID_GEN_METHOD must - # be set in the experiment configuration file. Otherwise, it will - # remain set to a null string, and the experiment generation will - # fail because the generation scripts check to ensure that it is set - # to a non-empty string before creating the experiment directory. + # be set in the experiment configuration file. Otherwise, + # the experiment generation will fail because the generation + # scripts check to ensure that the grid name is set to a non-empty + # string before creating the experiment directory. # #----------------------------------------------------------------------- # @@ -714,8 +910,14 @@ workflow: # #----------------------------------------------------------------------- # - # Set PREDEF_GRID_NAME. This parameter specifies a predefined regional - # grid, as follows: + # Set PREDEF_GRID_NAME. This parameter indicates which (if any) + # predefined regional grid to use for the experiment. + # + # Setting PREDEF_GRID_NAME provides a convenient method of specifying a + # commonly used set of grid-dependent parameters. The predefined grid + # parameters are specified in the script + # + # $HOMEdir/ush/set_predef_grid_params.py # # * If PREDEF_GRID_NAME is set to a valid predefined grid name, the grid # generation method GRID_GEN_METHOD, the (native) grid parameters, and @@ -727,21 +929,15 @@ workflow: # they are also set to predefined values for the specified grid.
# # * If PREDEF_GRID_NAME is set to an empty string, it implies the user - # is providing the native grid parameters in the user-specified - # experiment configuration file (EXPT_CONFIG_FN). In this case, the + # will provide the native grid parameters in the user-specified + # experiment configuration file (EXPT_CONFIG_FN). In this case, the # grid generation method GRID_GEN_METHOD, the native grid parameters, - # and the write-component grid parameters as well as the time step - # forecast model's main time step DT_ATMOS and the computational - # parameters LAYOUT_X, LAYOUT_Y, and BLOCKSIZE must be set in that + # the write-component grid parameters, the forecast model's + # main time step DT_ATMOS, and the computational + # parameters (LAYOUT_X, LAYOUT_Y, and BLOCKSIZE) must be set in that # configuration file; otherwise, the values of all of these parameters # in this default experiment configuration file will be used. # - # Setting PREDEF_GRID_NAME provides a convenient method of specifying a - # commonly used set of grid-dependent parameters. The predefined grid - # parameters are specified in the script - # - # $HOMEdir/ush/set_predef_grid_params.py - # #----------------------------------------------------------------------- # PREDEF_GRID_NAME: "" @@ -756,7 +952,7 @@ workflow: # include the first cycle hour. # # DATE_LAST_CYCL: - # Starting cylce date of the LAST forecast in the set of forecasts to run. + # Starting cycle date of the LAST forecast in the set of forecasts to run. # Format is "YYYYMMDDHH". Note: This has recently changed to include # the last cycle hour. # @@ -770,17 +966,16 @@ workflow: # # LONG_FCST_LEN_HRS: # The length of the longer forecast in integer hours in a system that - # varies the length of the forecast by time of day forecasts for a - # shorter period. There is no need for the user to update this value + # varies the length of the forecast by time of day. There is no need for the user to update this value # directly, as it is derived from FCST_LEN_CYCL when FCST_LEN_HRS=-1 # # FCST_LEN_CYCL: # The length of forecast for each cycle date in integer hours. # This is valid only when FCST_LEN_HRS = -1. - # This pattern is recurred for all cycle dates. + # This pattern recurs for all cycle dates. # Must have the same number of entries as cycles per day, or if less # than one day the entries must include the length of each cycle to be - # run. By default, set it to a 1-item list containing the standard + # run. By default, it is set to a 1-item list containing the standard # fcst length. # #----------------------------------------------------------------------- @@ -792,15 +987,14 @@ workflow: FCST_LEN_CYCL: - '{{ FCST_LEN_HRS }}' LONG_FCST_LEN: '{% if FCST_LEN_HRS < 0 %}{{ FCST_LEN_CYCL|max }}{% else %}{{ FCST_LEN_HRS }}{% endif %}' - # #----------------------------------------------------------------------- # - # Set PREEXISTING_DIR_METHOD. This variable determines the method to use - # use to deal with preexisting directories [e.g ones generated by previous + # Set PREEXISTING_DIR_METHOD. This variable determines how + # to deal with preexisting directories [e.g., ones generated by previous # calls to the experiment generation script using the same experiment name # (EXPT_SUBDIR) as the current experiment]. This variable must be set to - # one of "delete", "reuse", "rename", and "quit". The resulting behavior for each + # "delete", "reuse", "rename", or "quit". 
The resulting behavior for each # of these values is as follows: # # * "delete": # The preexisting directory is deleted and a new directory (having the # same name as the original preexisting directory) is created. # # * "rename": # The preexisting directory is renamed and a new directory (having the - # same name as the original preexisting directory) is created. The new + # same name as the original preexisting directory) is created. The new # name of the preexisting directory consists of its original name and # the suffix "_oldNNN", where NNN is a 3-digit integer chosen to make # the new name unique. # # * "reuse": - # If method is set to "reuse", - # keep preexisting directory intact except that + # This method will keep the preexisting directory intact, except that # when the preexisting directory is $EXPDIR, it will do the following: - # save all old files to a subdirecotry oldxxx/ and then + # save all old files to a subdirectory oldxxx/ and then # populate new files into the $EXPDIR directory. # This is useful to keep ongoing runs uninterrupted: # rocoto *db files and previous cycles will stay, and hence # 1. no need to manually cp/mv *db files and previous cycles back # 2. no need to manually restart related rocoto tasks that failed during # the workflow generation process - # This may best suit for incremental system reuses. - # - # Alternatively, one can always elect to use the "rename" option - # and then manually do the above aftermath + # This may be best suited for incremental system reuses. # # * "quit": # The preexisting directory is left unchanged, but execution of the @@ -841,16 +1031,16 @@ workflow: # #----------------------------------------------------------------------- # - # Set flags for more detailed messages. Defintitions: + # Set flags for more detailed messages. Definitions: # # VERBOSE: # This is a flag that determines whether or not the experiment generation - # and workflow task scripts tend to print out more informational messages. + # and workflow task scripts print out more informational messages. # # DEBUG: - # This is a flag that determines whether or not very detailed debugging - # messages are printed to out. Note that if DEBUG is set to TRUE, then - # VERBOSE will also get reset to TRUE if it isn't already. + # This is a flag that determines whether to print out very detailed debugging messages. + # Note that if DEBUG is set to TRUE, then + # VERBOSE will also be reset to TRUE if it isn't already. # #----------------------------------------------------------------------- # @@ -875,7 +1065,7 @@ workflow: #----------------------------------------------------------------------- # # DO_REAL_TIME: - # switch for real-time run + # Switch for real-time run # #----------------------------------------------------------------------- # @@ -884,10 +1074,10 @@ workflow: #----------------------------------------------------------------------- # # COLDSTART: - # Flag turning on/off warm start of the first cycle + # Flag turning on/off cold start for the first cycle # # WARMSTART_CYCLE_DIR: - # Path to the directory where RESTART dir is located for warm start + # Path to the cycle directory where the RESTART subdirectory is located for warm start # #----------------------------------------------------------------------- # @@ -929,12 +1119,58 @@ nco: # OPSROOT_default: # The operations root directory in NCO mode. # + # COMROOT_default: + # The com root directory for input/output data that is located on + # the current system. + # + # DATAROOT_default: + # Directory containing the (temporary) working directory for running + # jobs.
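For reference, a minimal sketch of overriding a few of the ``nco:`` variables documented in this section from a user ``config.yaml``; all paths are hypothetical:

.. code-block:: yaml

   nco:
     OPSROOT_default: "/path/to/opsroot"   # hypothetical operations root
     KEEPDATA_default: true                # keep working directories after successful jobs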
+ # + # DCOMROOT_default: + # dcom root directory, which contains input/incoming data that is + # retrieved from outside WCOSS. + # # LOGBASEDIR_default: # Directory in which the log files from the workflow tasks will be placed. - # - # For more information on NCO standards - # - # https://www.nco.ncep.noaa.gov/idsb/implementation_standards/ImplementationStandards.v11.0.0.pdf + # + # COMIN_BASEDIR: + # com directory for current model's input data, typically + # $COMROOT/$NET/$model_ver/$RUN.$PDY + # + # COMOUT_BASEDIR: + # com directory for current model's output data, typically + # $COMROOT/$NET/$model_ver/$RUN.$PDY + # + # DBNROOT_default: + # Root directory for the data-alerting utilities. + # + # SENDECF_default: + # Boolean variable used to control ecflow_client child commands. + # + # SENDDBN_default: + # Boolean variable used to control sending products off WCOSS2. + # + # SENDDBN_NTC_default: + # Boolean variable used to control sending products with WMO + # headers off WCOSS2. + # + # SENDCOM_default: + # Boolean variable to control data copies to $COMOUT. + # + # SENDWEB_default: + # Boolean variable used to control sending products to a web server, + # often ncorzdm. + # + # KEEPDATA_default: + # Boolean variable used to specify whether or not the working + # directory should be kept upon successful job completion. + # + # MAILTO_default: + # List of email addresses to send email to. + # + # MAILCC_default: + # List of email addresses to CC on email. # #----------------------------------------------------------------------- # @@ -969,7 +1205,7 @@ task_make_grid: #----------------------------------------------------------------------- # # GRID_DIR: - # The directory in which to look for pregenerated grid files if the + # The directory containing pregenerated grid files when the # make_grid task is not set to run. # #----------------------------------------------------------------------- @@ -979,7 +1215,7 @@ task_make_grid: #----------------------------------------------------------------------- # # Set parameters specific to the "ESGgrid" method of generating a regional - # grid (i.e. for GRID_GEN_METHOD set to "ESGgrid"). Definitions: + # grid (i.e. for GRID_GEN_METHOD set to "ESGgrid"). Definitions: # # ESGgrid_LON_CTR: # The longitude of the center of the grid (in degrees). @@ -1005,53 +1241,24 @@ task_make_grid: # the regional grid before shaving the halo down to the width(s) expected # by the forecast model. # - # ESGgrid_PAZI: - # The rotational parameter for the ESG grid (in degrees). - # - # In order to generate grid files containing halos that are 3-cell and - # 4-cell wide and orography files with halos that are 0-cell and 3-cell - # wide (all of which are required as inputs to the forecast model), the - # grid and orography tasks first create files with halos around the regional - # domain of width ESGgrid_WIDE_HALO_WIDTH cells. These are first stored - # in files. The files are then read in and "shaved" down to obtain grid + # The forecast model requires grid files containing 3-cell- and + # 4-cell-wide halos and orography files with 0-cell- and 3-cell-wide halos. + # In order to generate grid and orography files with appropriately-sized halos, + # the grid and orography tasks create preliminary files with halos around the regional domain + # of width ESGgrid_WIDE_HALO_WIDTH cells. + # The files are then read in and "shaved" down to obtain grid # files with 3-cell-wide and 4-cell-wide halos and orography files with - # 0-cell-wide (i.e. no halo) and 3-cell-wide halos. 
For this reason, we - # refer to the original halo that then gets shaved down as the "wide" - # halo, i.e. because it is wider than the 0-cell-wide, 3-cell-wide, and - # 4-cell-wide halos that we will eventually end up with. Note that the - # grid and orography files with the wide halo are only needed as intermediates + # 0-cell-wide (i.e. no halo) and 3-cell-wide halos. The original halo that + # gets shaved down is referred to as the "wide" halo because it is wider than the 0-cell-wide, + # 3-cell-wide, and 4-cell-wide halos that users eventually end up with. + # Note that the grid and orography files with the wide halo are only needed as intermediates # in generating the files with 0-cell-, 3-cell-, and 4-cell-wide halos; # they are not needed by the forecast model. # NOTE: Probably don't need to make ESGgrid_WIDE_HALO_WIDTH a user-specified # variable. Just set it in the function set_gridparams_ESGgrid.py. # - # Note that: - # - # 1) If the experiment is using one of the predefined grids (i.e. if - # PREDEF_GRID_NAME is set to the name of one of the valid predefined - # grids), then: - # - # a) If the value of GRID_GEN_METHOD for that grid is "GFDLgrid", then - # these parameters will not be used and thus do not need to be reset - # to non-empty strings. - # - # b) If the value of GRID_GEN_METHOD for that grid is "ESGgrid", then - # these parameters will get reset to the values for that grid. - # This will happen regardless of whether or not they are assigned - # values in the user-specified experiment configuration file, i.e. - # any values they may be assigned in the experiment configuration - # file will be overwritten. - # - # 2) If the experiment is not using one of the predefined grids (i.e. if - # PREDEF_GRID_NAME is set to a null string), then: - # - # a) If GRID_GEN_METHOD is set to "GFDLgrid" in the user-specified - # experiment configuration file, then these parameters will not be - # used and thus do not need to be reset to non-empty strings. - # - # b) If GRID_GEN_METHOD is set to "ESGgrid" in the user-specified - # experiment configuration file, then these parameters must be set - # in that configuration file. + # ESGgrid_PAZI: + # The rotational parameter for the ESG grid (in degrees). # #----------------------------------------------------------------------- # @@ -1069,103 +1276,80 @@ task_make_grid: # Set parameters specific to the "GFDLgrid" method of generating a regional # grid (i.e. for GRID_GEN_METHOD set to "GFDLgrid"). The following # parameters will be used only if GRID_GEN_METHOD is set to "GFDLgrid". + # # In this grid generation method: # # * The regional grid is defined with respect to a "parent" global cubed- - # sphere grid. Thus, all the parameters for a global cubed-sphere grid - # must be specified in order to define this parent global grid even - # though the model equations are not integrated on (they are integrated + # sphere grid. Thus, all the parameters for a global cubed-sphere grid + # must be specified even though the model equations are integrated # only on the regional grid). + # Tile 6 has arbitrarily been chosen as the tile to use to orient the + # global parent grid on the sphere (Earth). + # For convenience, the regional grid is denoted as "tile 7" even though it + # is embedded within tile 6 (i.e., it doesn't extend + # beyond the boundary of tile 6). 
Its exact location within tile 6 is + determined by specifying the starting and ending i and j indices + of the regional grid on tile 6, where i is the grid index in the x + direction and j is the grid index in the y direction. + All of this information is set in the variables below. + + # Definitions of parameters that need to be specified when GRID_GEN_METHOD + # is set to "GFDLgrid": # - # * GFDLgrid_NUM_CELLS is the number of grid cells in either one of the two - # horizontal directions x and y on any one of the 6 tiles of the parent - # global cubed-sphere grid. The mapping from GFDLgrid_NUM_CELLS to a nominal - # resolution (grid cell size) for a uniform global grid (i.e. Schmidt + # GFDLgrid_LON_T6_CTR: + # Longitude of the center of tile 6 (in degrees). + # + # GFDLgrid_LAT_T6_CTR: + # Latitude of the center of tile 6 (in degrees). + # + # GFDLgrid_NUM_CELLS: The number of grid cells in either one of the two + # horizontal directions x and y on any one of the 6 tiles of the parent + # global cubed-sphere grid. + # The mapping from GFDLgrid_NUM_CELLS to a nominal + # resolution (grid cell size) for a uniform global grid (i.e., a grid with Schmidt # stretch factor GFDLgrid_STRETCH_FAC set to 1) for several values of # GFDLgrid_NUM_CELLS is as follows: # # GFDLgrid_NUM_CELLS typical cell size # ------------ ----------------- + # 48 200 km + # 96 100 km # 192 50 km # 384 25 km # 768 13 km # 1152 8.5 km # 3072 3.2 km # - # Note that these are only typical cell sizes. The actual cell size on - # the global grid tiles varies somewhat as we move across a tile. - # - # * Tile 6 has arbitrarily been chosen as the tile to use to orient the - # global parent grid on the sphere (Earth). This is done by specifying - # GFDLgrid_LON_T6_CTR and GFDLgrid_LAT_T6_CTR, which are the longitude - # and latitude (in degrees) of the center of tile 6. - # - # * Setting the Schmidt stretching factor GFDLgrid_STRETCH_FAC to a value - # greater than 1 shrinks tile 6, while setting it to a value less than - # 1 (but still greater than 0) expands it. The remaining 5 tiles change + # * Note that these are only typical cell sizes. The actual cell size on + # the global grid tiles varies somewhat as we move across a tile and is dependent + # on both GFDLgrid_NUM_CELLS and on GFDLgrid_STRETCH_FAC, which + # modifies the shape and size of the tile. + # + # * Note that the term "resolution" is really a misnomer here because + # this parameter specifies the number of grid cells, not the grid + # size (e.g., in meters or kilometers). However, we keep this terminology in order + # to remain consistent with the usage of the word "resolution" in the + # global forecast model and other auxiliary code. + # + # GFDLgrid_STRETCH_FAC: + # Stretching factor used in the Schmidt transformation + # applied to the parent cubed-sphere grid. + # Setting the Schmidt stretching factor GFDLgrid_STRETCH_FAC to a value + # greater than 1 shrinks tile 6, while setting it to a value less than + # 1 (but still greater than 0) expands it. The remaining 5 tiles change # shape as necessary to maintain global coverage of the grid. # - # * The cell size on a given global tile depends on both GFDLgrid_NUM_CELLS and - # GFDLgrid_STRETCH_FAC (since changing GFDLgrid_NUM_CELLS changes the number - # of cells in the tile, and changing GFDLgrid_STRETCH_FAC modifies the - # shape and size of the tile). - # - # * The regional grid is embedded within tile 6 (i.e.
it doesn't extend - # beyond the boundary of tile 6). Its exact location within tile 6 is - # is determined by specifying the starting and ending i and j indices - # of the regional grid on tile 6, where i is the grid index in the x - # direction and j is the grid index in the y direction. These indices - # are stored in the variables - # - # GFDLgrid_ISTART_OF_RGNL_DOM_ON_T6G - # GFDLgrid_JSTART_OF_RGNL_DOM_ON_T6G - # GFDLgrid_IEND_OF_RGNL_DOM_ON_T6G - # GFDLgrid_JEND_OF_RGNL_DOM_ON_T6G - # - # * In the forecast model code and in the experiment generation and workflow - # scripts, for convenience the regional grid is denoted as "tile 7" even - # though it doesn't map back to one of the 6 faces of the cube from - # which the parent global grid is generated (it maps back to only a - # subregion on face 6 since it is wholly confined within tile 6). Tile - # 6 may be referred to as the "parent" tile of the regional grid. - # # * GFDLgrid_REFINE_RATIO is the refinement ratio of the regional grid - # (tile 7) with respect to the grid on its parent tile (tile 6), i.e. + # (tile 7) with respect to the grid on its parent tile (tile 6), i.e., # it is the number of grid cells along the boundary of the regional grid # that abut one cell on tile 6. Thus, the cell size on the regional # grid depends not only on GFDLgrid_NUM_CELLS and GFDLgrid_STRETCH_FAC (because # the cell size on tile 6 depends on these two parameters) but also on # GFDLgrid_REFINE_RATIO. Note that as on the tiles of the global grid, - # the cell size on the regional grid is not uniform but varies as we - # move across the grid. - # - # Definitions of parameters that need to be specified when GRID_GEN_METHOD - # is set to "GFDLgrid": - # - # GFDLgrid_LON_T6_CTR: - # Longitude of the center of tile 6 (in degrees). - # - # GFDLgrid_LAT_T6_CTR: - # Latitude of the center of tile 6 (in degrees). - # - # GFDLgrid_NUM_CELLS: - # Number of points in each of the two horizontal directions (x and y) on - # each tile of the parent global grid. Note that the name of this parameter - # is really a misnomer because although it has the string "RES" (for - # "resolution") in its name, it specifies number of grid cells, not grid - # size (in say meters or kilometers). However, we keep this name in order - # to remain consistent with the usage of the word "resolution" in the - # global forecast model and other auxiliary codes. - # - # GFDLgrid_STRETCH_FAC: - # Stretching factor used in the Schmidt transformation applied to the - # parent cubed-sphere grid. - # - # GFDLgrid_REFINE_RATIO: - # Cell refinement ratio for the regional grid, i.e. the number of cells - # in either the x or y direction on the regional grid (tile 7) that abut - # one cell on its parent tile (tile 6). + # the cell size on the regional grid is not uniform but varies as we move across the grid. # + # # GFDLgrid_ISTART_OF_RGNL_DOM_ON_T6G: # i-index on tile 6 at which the regional grid (tile 7) starts. # @@ -1190,7 +1374,7 @@ task_make_grid: # set to false, we calculate (in the grid generation task) an "equivalent # global uniform cubed-sphere resolution" -- call it RES_EQUIV -- and # then set RES equal to it. RES_EQUIV is the number of grid points in - # each of the x and y directions on each tile that a global UNIFORM (i.e. + # each of the x and y directions on each tile that a global UNIFORM (i.e., # stretch factor of 1) cubed-sphere grid would have to have in order to # have the same average grid size as the regional grid. 
This is a more # useful indicator of the grid size because it takes into account the @@ -1202,34 +1386,6 @@ task_make_grid: # in the file names, so we allow for that here by setting this flag to # true. # - # Note that: - # - # 1) If the experiment is using one of the predefined grids (i.e. if - # PREDEF_GRID_NAME is set to the name of one of the valid predefined - # grids), then: - # - # a) If the value of GRID_GEN_METHOD for that grid is "GFDLgrid", then - # these parameters will get reset to the values for that grid. - # This will happen regardless of whether or not they are assigned - # values in the user-specified experiment configuration file, i.e. - # any values they may be assigned in the experiment configuration - # file will be overwritten. - # - # b) If the value of GRID_GEN_METHOD for that grid is "ESGgrid", then - # these parameters will not be used and thus do not need to be reset - # to non-empty strings. - # - # 2) If the experiment is not using one of the predefined grids (i.e. if - # PREDEF_GRID_NAME is set to a null string), then: - # - # a) If GRID_GEN_METHOD is set to "GFDLgrid" in the user-specified - # experiment configuration file, then these parameters must be set - # in that configuration file. - # - # b) If GRID_GEN_METHOD is set to "ESGgrid" in the user-specified - # experiment configuration file, then these parameters will not be - # used and thus do not need to be reset to non-empty strings. - # #----------------------------------------------------------------------- # GFDLgrid_LON_T6_CTR: "" @@ -1246,6 +1402,23 @@ task_make_grid: #---------------------------- # MAKE OROG config parameters #----------------------------- +#----------------------------------------------------------------------- +# KMP_AFFINITY_MAKE_OROG: +# Intel Thread Affinity Interface for the make_orog task. +# Settings for this task are disabled because this task does not use +# parallelized code. +# +# OMP_NUM_THREADS_MAKE_OROG: +# The number of OpenMP threads to use for parallel regions. +# +# OMP_STACKSIZE_MAKE_OROG: +# Controls the size of the stack for threads created by the OpenMP +# implementation. +# +# OROG_DIR: +# The directory containing pre-generated orography files to use when +# the MAKE_OROG task is not meant to run. +#----------------------------------------------------------------------- task_make_orog: KMP_AFFINITY_MAKE_OROG: "disabled" OMP_NUM_THREADS_MAKE_OROG: 6 @@ -1255,6 +1428,21 @@ task_make_orog: #---------------------------- # MAKE SFC CLIMO config parameters #----------------------------- +#----------------------------------------------------------------------- +# KMP_AFFINITY_MAKE_SFC_CLIMO: +# Intel Thread Affinity Interface for the make_sfc_climo task. +# +# OMP_NUM_THREADS_MAKE_SFC_CLIMO: +# The number of OpenMP threads to use for parallel regions. +# +# OMP_STACKSIZE_MAKE_SFC_CLIMO: +# Controls the size of the stack for threads created by the OpenMP +# implementation. +# +# SFC_CLIMO_DIR: +# The directory containing pre-generated surface climatology files +# to use when the MAKE_SFC_CLIMO task is not meant to run. +#----------------------------------------------------------------------- task_make_sfc_climo: KMP_AFFINITY_MAKE_SFC_CLIMO: "scatter" OMP_NUM_THREADS_MAKE_SFC_CLIMO: 1 @@ -1272,19 +1460,19 @@ task_get_extrn_ics: # Definitions: # # EXTRN_MDL_NAME_ICS: - #`The name of the external model that will provide fields from which - # initial condition (including and surface) files will be generated for - # input into the forecast model. 
+ # The name of the external model that will provide fields from which initial + # condition files, surface files, and 0-th hour boundary condition files will + # be generated for input into the forecast model. # # EXTRN_MDL_ICS_OFFSET_HRS: - # Users may wish to start a forecast from a forecast of a previous cycle - # of an external model. This variable sets the number of hours earlier - # the external model started than when the FV3 forecast configured here - # should start. For example, the forecast should start from a 6 hour - # forecast of the GFS, then EXTRN_MDL_ICS_OFFSET_HRS=6. + # Users may wish to start a forecast using forecast data from a previous cycle + # of an external model. This variable indicates how many hours earlier + # the external model started than the FV3 forecast configured here. + # For example, if the forecast should start from a 6-hour + # forecast of the GFS, then EXTRN_MDL_ICS_OFFSET_HRS: "6". # # FV3GFS_FILE_FMT_ICS: - # If using the FV3GFS model as the source of the ICs (i.e. if EXTRN_MDL_NAME_ICS + # If using the FV3GFS model as the source of the ICs (i.e., if EXTRN_MDL_NAME_ICS # is set to "FV3GFS"), this variable specifies the format of the model # files to use when generating the ICs. # @@ -1299,10 +1487,11 @@ task_get_extrn_ics: # Base directories in which to search for external model files. # # EXTRN_MDL_SYSBASEDIR_ICS: - # Base directory on the local machine containing external model files for - # generating ICs on the native grid. The way the full path containing - # these files is constructed depends on the user-specified external model - # for ICs, i.e. EXTRN_MDL_NAME_ICS. + # A known location of a real data stream on a given platform. This is typically + # a real-time data stream as on Hera, Jet, or WCOSS. External model files + # for generating ICs on the native grid should be accessible via this data stream. + # The way the full path containing these files is constructed depends on the + # user-specified external model for ICs, i.e. EXTRN_MDL_NAME_ICS. # # Note that this must be defined as a null string here so that if it is # specified by the user in the experiment configuration file, it remains @@ -1325,13 +1514,13 @@ task_get_extrn_ics: # EXTRN_MDL_SOURCE_BASEDIR_ICS: # Directory in which to look for external model files for generating ICs. # If USE_USER_STAGED_EXTRN_FILES is set to true, the workflow looks in - # this directory (specifically, in a subdirectory under this directory - # named "YYYYMMDDHH" consisting of the starting date and cycle hour of + # this directory for a subdirectory named "YYYYMMDDHH" consisting of the + # starting date and cycle hour of # the forecast, where YYYY is the 4-digit year, MM the 2-digit month, DD # the 2-digit day of the month, and HH the 2-digit hour of the day) for # the external model files specified by the array EXTRN_MDL_FILES_ICS # (these files will be used to generate the ICs on the native FV3-LAM - # grid). This variable is not used if USE_USER_STAGED_EXTRN_FILES is + # grid). This variable is not used if USE_USER_STAGED_EXTRN_FILES is # set to false. # # EXTRN_MDL_FILES_ICS: @@ -1339,10 +1528,10 @@ task_get_extrn_ics: # the directory specified by EXTRN_MDL_SOURCE_BASEDIR_ICS. This # variable is not used if USE_USER_STAGED_EXTRN_FILES is set to false. # A single template should be used for each model file type that is - # meant to be used. You may use any of the Python-style templates + # used. You may use any of the Python-style templates # allowed in the ush/retrieve_data.py script. 
To see the full list of - supported templates, run that script with a -h option. Here is an example of - setting FV3GFS nemsio input files: + supported templates, run that script with a -h option. For example, to + set FV3GFS nemsio input files, add: # EXTRN_MDL_FILES_ICS=( gfs.t{hh}z.atmf{fcst_hr:03d}.nemsio \ # gfs.t{hh}z.sfcf{fcst_hr:03d}.nemsio ) # Or for FV3GFS grib files: @@ -1362,7 +1551,7 @@ task_get_extrn_lbcs: #----------------------------------------------------------------------- # # EXTRN_MDL_NAME_LBCS: - #`The name of the external model that will provide fields from which + # The name of the external model that will provide fields from which # lateral boundary condition (LBC) files (except for the 0-th hour LBC # file) will be generated for input into the forecast model. # @@ -1372,21 +1561,22 @@ task_get_extrn_lbcs: # model specified in EXTRN_MDL_NAME_LBCS must have data available at a # frequency greater than or equal to that implied by LBC_SPEC_INTVL_HRS. # For example, if LBC_SPEC_INTVL_HRS is set to 6, then the model must have - # data availble at least every 6 hours. It is up to the user to ensure + # data available at least every 6 hours. It is up to the user to ensure # that this is the case. # # EXTRN_MDL_LBCS_OFFSET_HRS: # Users may wish to use lateral boundary conditions from a forecast that - # was started earlier than the initial time for the FV3 forecast - # configured here. This variable sets the number of hours earlier - # the external model started than when the FV3 forecast configured here - # should start. For example, the forecast should use lateral boundary - # conditions from the GFS started 6 hours earlier, then - # EXTRN_MDL_LBCS_OFFSET_HRS=6. Defaults to 0 except for RAP, which - # uses a 3 hour offset. + # was started earlier than the initial time for the forecast + # configured here. This variable indicates how many hours earlier + # the external model started than the forecast configured here. + # For example, if the forecast should use lateral boundary + # conditions from a GFS forecast started 6 hours earlier, then + # EXTRN_MDL_LBCS_OFFSET_HRS: 6. The default value is model-dependent + # (0 for most models but 3 hours for RAP) and is set + # in ush/set_extrn_mdl_params.py. # # FV3GFS_FILE_FMT_LBCS: - # If using the FV3GFS model as the source of the LBCs (i.e. if + # If using the FV3GFS model as the source of the LBCs (i.e., if # EXTRN_MDL_NAME_LBCS is set to "FV3GFS"), this variable specifies the # format of the model files to use when generating the LBCs. # @@ -1396,12 +1586,13 @@ task_get_extrn_lbcs: LBC_SPEC_INTVL_HRS: 6 EXTRN_MDL_LBCS_OFFSET_HRS: '{{ 3 if EXTRN_MDL_NAME_LBCS == "RAP" else 0 }}' FV3GFS_FILE_FMT_LBCS: "nemsio" + # #----------------------------------------------------------------------- # # EXTRN_MDL_SYSBASEDIR_LBCS: # Same as EXTRN_MDL_SYSBASEDIR_ICS but for LBCs. # - # Note that this must be defined as a null string here so that if it is + # Note that this variable must be defined as a null string here so that if it is # specified by the user in the experiment configuration file, it remains # set to those values, and if not, it gets set to machine-dependent # values. @@ -1415,7 +1606,7 @@ task_get_extrn_lbcs: # User-staged external model directories and files. Definitions: # # USE_USER_STAGED_EXTRN_FILES: - # Analogous to USE_USER_STAGED_EXTRN_FILES in ICS but for LBCs + # Analogous to USE_USER_STAGED_EXTRN_FILES in ICS but for LBCs.
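Mirroring the ICs example above, a user staging FV3GFS nemsio files for LBCs might add something like the following sketch to ``config.yaml``; the base directory is hypothetical, and the LBCS-side variables are described just below:

.. code-block:: yaml

   task_get_extrn_lbcs:
     EXTRN_MDL_NAME_LBCS: "FV3GFS"
     FV3GFS_FILE_FMT_LBCS: "nemsio"
     USE_USER_STAGED_EXTRN_FILES: true
     EXTRN_MDL_SOURCE_BASEDIR_LBCS: "/path/to/staged/lbcs"   # hypothetical
     EXTRN_MDL_FILES_LBCS:
       - "gfs.t{hh}z.atmf{fcst_hr:03d}.nemsio"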
# # EXTRN_MDL_SOURCE_BASEDIR_LBCS: # Analogous to EXTRN_MDL_SOURCE_BASEDIR_ICS but for LBCs instead of ICs. # @@ -1433,6 +1624,17 @@ task_get_extrn_lbcs: # MAKE ICS config parameters #----------------------------- task_make_ics: + #----------------------------------------------------------------------- + # KMP_AFFINITY_MAKE_ICS: + # Intel Thread Affinity Interface for the make_ics task. + # + # OMP_NUM_THREADS_MAKE_ICS: + # The number of OpenMP threads to use for parallel regions. + # + # OMP_STACKSIZE_MAKE_ICS: + # Controls the size of the stack for threads created by the OpenMP + # implementation. + #----------------------------------------------------------------------- KMP_AFFINITY_MAKE_ICS: "scatter" OMP_NUM_THREADS_MAKE_ICS: 1 OMP_STACKSIZE_MAKE_ICS: "1024m" @@ -1443,19 +1645,19 @@ task_make_ics: # Flag set to update surface conditions in FV3-LAM with fields generated # from the Finite Volume Community Ocean Model (FVCOM). This will # replace lake/sea surface temperature, ice surface temperature, and ice - # placement. FVCOM data must already be interpolated to the desired - # FV3-LAM grid. This flag will be used in make_ics to modify sfc_data.nc - # after chgres_cube is run by running the routine process_FVCOM.exe + # placement. This flag will be used in make_ics to modify sfc_data.nc + # after chgres_cube is run, by running the routine process_FVCOM.exe. + # FVCOM data must already be interpolated to the desired FV3-LAM grid. # # FVCOM_WCSTART: - # Define if this is a "warm" start or a "cold" start. Setting this to - # "warm" will read in sfc_data.nc generated in a RESTART directory. - # Setting this to "cold" will read in the sfc_data.nc generated from + # Define whether this is a "warm" start or a "cold" start. Setting this to + # "warm" will read in the sfc_data.nc file generated in a RESTART directory. + # Setting this to "cold" will read in the sfc_data.nc file generated from # chgres_cube in the make_ics portion of the workflow. # # FVCOM_DIR: # User defined directory where FVCOM data already interpolated to FV3-LAM - # grid is located. File name in this path should be "fvcom.nc" to allow + # grid is located. The file in this directory must be named fvcom.nc. # # FVCOM_FILE: # Name of file located in FVCOM_DIR that has FVCOM data interpolated to @@ -1478,15 +1680,48 @@ task_make_ics: # MAKE LBCS config parameters #----------------------------- task_make_lbcs: + #------------------------------------------------------------------------ + # + # KMP_AFFINITY_MAKE_LBCS: + # Intel Thread Affinity Interface for the make_lbcs task. + # + # OMP_NUM_THREADS_MAKE_LBCS: + # The number of OpenMP threads to use for parallel regions. + # + # OMP_STACKSIZE_MAKE_LBCS: + # Controls the size of the stack for threads created by the OpenMP implementation. + # + # VCOORD_FILE: + # Full path to the file used to set the vertical coordinates in FV3. + # This file should be the same in both make_ics and make_lbcs. + # + #------------------------------------------------------------------------ KMP_AFFINITY_MAKE_LBCS: "scatter" OMP_NUM_THREADS_MAKE_LBCS: 1 OMP_STACKSIZE_MAKE_LBCS: "1024m" VCOORD_FILE: "{{ workflow.FIXam }}/global_hyblev.l65.txt" #---------------------------- # FORECAST config parameters #----------------------------- task_run_fcst: + #------------------------------------------------------------------------ + # + # NNODES_RUN_FCST: + # The number of nodes to request from the job scheduler + # for the forecast task.
+ # + # PPN_RUN_FCST: + # Processes per node for the forecast task. + # + # FV3_EXEC_FP: + # Full path to the forecast model executable. + # + # IO_LAYOUT_Y: + # Specifies how many MPI ranks to use in the Y direction for + # input/output (I/O). (X direction is assumed to be 1.) + # + #------------------------------------------------------------------------ NNODES_RUN_FCST: '{{ (PE_MEMBER01 + PPN_RUN_FCST - 1) // PPN_RUN_FCST }}' PPN_RUN_FCST: '{{ platform.NCORES_PER_NODE // OMP_NUM_THREADS_RUN_FCST }}' FV3_EXEC_FP: '{{ [user.EXECdir, workflow.FV3_EXEC_FN]|path_join }}' @@ -1512,16 +1747,13 @@ task_run_fcst: # guide/openmp-support/openmp-library-support/thread-affinity-interface- # linux-and-windows.html # - # OMP_NUM_THREADS_*: + # OMP_NUM_THREADS_RUN_FCST: # The number of OpenMP threads to use for parallel regions. # - # OMP_STACKSIZE_*: + # OMP_STACKSIZE_RUN_FCST: # Controls the size of the stack for threads created by the OpenMP # implementation. # - # Note that settings for the make_grid and make_orog tasks are not - # included below as they do not use parallelized code. - # #----------------------------------------------------------------------- # KMP_AFFINITY_RUN_FCST: "scatter" @@ -1568,8 +1800,7 @@ task_run_fcst: # # LAYOUT_X, LAYOUT_Y: # The number of MPI tasks (processes) to use in the two horizontal - # directions (x and y) of the regional grid when running the forecast - # model. + # directions (x and y) of the regional grid when running the forecast model. # # BLOCKSIZE: # The amount of data that is passed into the cache at a time. @@ -1600,15 +1831,28 @@ task_run_fcst: # #----------------------------------------------------------------------- # - # Set write-component (quilting) parameters. Definitions: + # Set write-component (quilting) parameters. + # + # NOTE: The UPP (called by the run_post task) cannot process output on the + # native grid types ("GFDLgrid" and "ESGgrid"), so output fields are interpolated + # to a write component grid before writing them to an output file. The output files + # written by the UFS Weather Model use an Earth System Modeling Framework (ESMF) + # component, referred to as the write component. This model component is + # configured with settings in the model_configure file, as described in the UFS + # Weather Model documentation. + # + # Definitions: # # QUILTING: # Flag that determines whether or not to use the write component for # writing output files to disk. The regional grid requires the use of # the write component, so users should not change the default value. - # + # When set to true, the forecast model will output files named dynf$HHH.nc + # and phyf$HHH.nc (where HHH is the 3-digit forecast hour) containing dynamics + # and physics fields, respectively, on the write-component grid. + # # PRINT_ESMF: - # Flag for whether or not to output extra (debugging) information from + # Flag that determines whether to output extra (debugging) information from # ESMF routines. Must be true or false. Note that the write # component uses ESMF library routines to interpolate from the native # forecast model grid to the user-specified output grid (which is defined @@ -1636,6 +1880,7 @@ task_run_fcst: # WRTCMP_cen_lat: # Latitude (in degrees) of the center of the write component grid. Can # usually be set to the corresponding value from the native grid. + # # WRTCMP_lon_lwr_left: # Longitude (in degrees) of the center of the lower-left (southwest) # cell on the write component grid. 
If using the "rotated_latlon" @@ -1649,7 +1894,8 @@ task_run_fcst: # manually when running an experiment with a user-defined grid. # # ----------------------------------------------------------------------- - # + # Parameters to set when WRTCMP_output_grid is set to "rotated_latlon": + # # WRTCMP_lon_upr_rght: # Longitude (in degrees) of the center of the upper-right (northeast) cell # on the write component grid (expressed in terms of the rotated longitude). @@ -1667,6 +1913,7 @@ task_run_fcst: # in terms of the rotated latitude). # # ----------------------------------------------------------------------- + # Parameters to set when WRTCMP_output_grid is set to "lambert_conformal" # # WRTCMP_stdlat1: # First standard latitude (in degrees) in definition of Lambert conformal @@ -1728,8 +1975,8 @@ task_run_fcst: # #----------------------------------------------------------------------- # - # Flag that determines whether MERRA2 aerosol climatology data and - # lookup tables for optics properties are obtained + # USE_MERRA_CLIMO: Flag that determines whether MERRA2 aerosol climatology + # data and lookup tables for optics properties are obtained # #----------------------------------------------------------------------- # @@ -1738,7 +1985,7 @@ task_run_fcst: #----------------------------------------------------------------------- # # DO_FCST_RESTART: - # Flag turning on/off restart capability of forecast task + # Flag to turn on/off restart capability of forecast task # #----------------------------------------------------------------------- # @@ -1748,6 +1995,16 @@ task_run_fcst: # POST config parameters #----------------------------- task_run_post: + #----------------------------------------------------------------------- + # KMP_AFFINITY_RUN_POST: + # Intel Thread Affinity Interface for the run_post task. + # + # OMP_NUM_THREADS_RUN_POST: + # The number of OpenMP threads to use for parallel regions. + # + # OMP_STACKSIZE_RUN_POST: + # Controls the size of the stack for threads created by the OpenMP implementation. + #----------------------------------------------------------------------- KMP_AFFINITY_RUN_POST: "scatter" OMP_NUM_THREADS_RUN_POST: 1 OMP_STACKSIZE_RUN_POST: "1024m" @@ -1759,17 +2016,17 @@ task_run_post: # # SUB_HOURLY_POST: # Flag that indicates whether the forecast model will generate output - # files on a sub-hourly time interval (e.g. 10 minutes, 15 minutes, etc). + # files on a sub-hourly time interval (e.g., 10 minutes, 15 minutes). # This will also cause the post-processor to process these sub-hourly # files. If ths is set to true, then DT_SUBHOURLY_POST_MNTS should be - # set to a value between "00" and "59". + # set to a value between 1 and 59. # # DT_SUB_HOURLY_POST_MNTS: - # Time interval in minutes between the forecast model output files. If - # SUB_HOURLY_POST is set to true, this needs to be set to a two-digit - # integer between "01" and "59". This is not used if SUB_HOURLY_POST is - # not set to true. Note that if SUB_HOURLY_POST is set to true but - # DT_SUB_HOURLY_POST_MNTS is set to "00", SUB_HOURLY_POST will get reset + # Time interval in minutes between the forecast model output files + # (only used if SUB_HOURLY_POST is set to true). If + # SUB_HOURLY_POST is set to true, this needs to be set to a valid + # integer between 1 and 59. 
Note that if SUB_HOURLY_POST is set to true, but + # DT_SUB_HOURLY_POST_MNTS is set to 0, SUB_HOURLY_POST will be reset # to false in the experiment generation scripts (there will be an # informational message in the log file to emphasize this). # @@ -1795,22 +2052,20 @@ task_run_post: # used for post-processing. This is only used if CUSTOM_POST_CONFIG_FILE # is set to true. # - # TESTBED_FIELDS_FN - # The file which lists grib2 fields to be extracted for testbed files - # Empty string means no need to generate testbed files - # # POST_OUTPUT_DOMAIN_NAME: - # Domain name (in lowercase) used in constructing the names of the output - # files generated by UPP [which is called either by running the run_post - # task or by activating the inline post feature (WRITE_DOPOST set to true)]. + # Domain name (in lowercase) used to construct the names of the output files + # generated by the UPP. If using a predefined grid, this variable defaults to PREDEF_GRID_NAME. + # + # If using a custom grid, POST_OUTPUT_DOMAIN_NAME must be specified by + # the user. Note that this variable is first changed to lowercase + # before being used to construct the file names. # The post output files are named as follows: # # ${NET_default}.tHHz.[var_name].f###.${POST_OUTPUT_DOMAIN_NAME}.grib2 - # - # If using a custom grid, POST_OUTPUT_DOMAIN_NAME must be specified by - # the user. If using a predefined grid, POST_OUTPUT_DOMAIN_NAME defaults - # to PREDEF_GRID_NAME. Note that this variable is first changed to lower - # case before being used to construct the file names. + # + # TESTBED_FIELDS_FN + # The file that lists grib2 fields to be extracted for testbed files. + # An empty string means no need to generate testbed files. # #----------------------------------------------------------------------- # @@ -1823,13 +2078,23 @@ task_run_post: # RUN PRDGEN config parameters #----------------------------- task_run_prdgen: + #----------------------------------------------------------------------- + # KMP_AFFINITY_RUN_PRDGEN: + # Intel Thread Affinity Interface for the run_prdgen task. + # + # OMP_NUM_THREADS_RUN_PRDGEN: + # The number of OpenMP threads to use for parallel regions. + # + # OMP_STACKSIZE_RUN_PRDGEN: + # Controls the size of the stack for threads created by the OpenMP implementation. + #----------------------------------------------------------------------- KMP_AFFINITY_RUN_PRDGEN: "scatter" OMP_NUM_THREADS_RUN_PRDGEN: 1 OMP_STACKSIZE_RUN_PRDGEN: "1024m" #----------------------------------------------------------------------- - # + # DO_PARALLEL_PRDGEN: # Flag that determines whether to use CFP to run the product generation - # job in parallel. This should be used with the RRFS_NA_3km grid. + # job in parallel. This should be used with the RRFS_NA_3km grid. # #----------------------------------------------------------------------- DO_PARALLEL_PRDGEN: false @@ -1837,7 +2102,7 @@ task_run_prdgen: # #----------------------------------------------------------------------- # - # Set additional output grids for wgrib2 remapping, if any + # Set additional output grids for wgrib2 remapping, if any. # Space-separated list of strings, e.g., ( "130" "242" "clue" ) # Default is no additional grids # @@ -1862,7 +2127,7 @@ task_run_prdgen: #----------------------------- task_plot_allvars: #------------------------------------------------------------------------- - # Reference experiment's COMOUT directory. This is where the GRIB2 files + # Path to the reference experiment's COMOUT directory. 
This is where the GRIB2 files # from postprocessing are located. Make this a template to compare # multiple cycle and dates. COMOUT_REF should end with: # nco mode: $PDY/$cyc @@ -1888,6 +2153,19 @@ task_plot_allvars: # NEXUS_EMISSION config parameters #----------------------------- task_nexus_emission: + #------------------------------------------------------------------------------- + # PPN_NEXUS_EMISSION: + # Processes per node for the nexus_emission_* tasks. + # + # KMP_AFFINITY_NEXUS_EMISSION: + # Intel Thread Affinity Interface for the nexus_emission_* tasks. + # + # OMP_NUM_THREADS_NEXUS_EMISSION: + # The number of OpenMP threads to use for parallel regions. + # + # OMP_STACKSIZE_NEXUS_EMISSION: + # Controls the size of the stack for threads created by the OpenMP implementation. + #------------------------------------------------------------------------------- PPN_NEXUS_EMISSION: '{{ platform.NCORES_PER_NODE // OMP_NUM_THREADS_NEXUS_EMISSION }}' KMP_AFFINITY_NEXUS_EMISSION: "scatter" OMP_NUM_THREADS_NEXUS_EMISSION: 2 @@ -1897,6 +2175,16 @@ task_nexus_emission: # BIAS_CORRECTION_O3 config parameters #----------------------------- task_bias_correction_o3: + #------------------------------------------------------------------------------- + # KMP_AFFINITY_BIAS_CORRECTION_O3: + # Intel Thread Affinity Interface for the bias_correction_o3 task. + # + # OMP_NUM_THREADS_BIAS_CORRECTION_O3: + # The number of OpenMP threads to use for parallel regions. + # + # OMP_STACKSIZE_BIAS_CORRECTION_O3: + # Controls the size of the stack for threads created by the OpenMP implementation. + #------------------------------------------------------------------------------- KMP_AFFINITY_BIAS_CORRECTION_O3: "scatter" OMP_NUM_THREADS_BIAS_CORRECTION_O3: 32 OMP_STACKSIZE_BIAS_CORRECTION_O3: "2056M" @@ -1905,6 +2193,16 @@ task_bias_correction_o3: # BIAS_CORRECTION_PM25 config parameters #----------------------------- task_bias_correction_pm25: + #------------------------------------------------------------------------------- + # KMP_AFFINITY_BIAS_CORRECTION_PM25: + # Intel Thread Affinity Interface for the ``bias_correction_pm25`` task. + # + # OMP_NUM_THREADS_BIAS_CORRECTION_PM25: + # The number of OpenMP threads to use for parallel regions. + # + # OMP_STACKSIZE_BIAS_CORRECTION_PM25: + # Controls the size of the stack for threads created by the OpenMP implementation. + #------------------------------------------------------------------------------- KMP_AFFINITY_BIAS_CORRECTION_PM25: "scatter" OMP_NUM_THREADS_BIAS_CORRECTION_PM25: 32 OMP_STACKSIZE_BIAS_CORRECTION_PM25: "2056M" @@ -1944,7 +2242,7 @@ global: # Flag that determines whether to run a set of ensemble forecasts (for # each set of specified cycles). If this is set to true, NUM_ENS_MEMBERS # forecasts are run for each cycle, each with a different set of stochastic - # seed values. Otherwise, a single forecast is run for each cycle. + # seed values. When false, a single forecast is run for each cycle. # # NUM_ENS_MEMBERS: # The number of ensemble members to run if DO_ENSEMBLE is set to true. @@ -1961,8 +2259,8 @@ global: # experiment directory # # ENS_TIME_LAG_HRS: - # Time lag (in hours) to use for each ensemble member. For a deterministic - # forecast, this is a one-element array. Default values of array elements + # Time lag (in hours) to use for each ensemble member. For a deterministic + # forecast, this is a one-element array. Default values of array elements # are zero. 
# #----------------------------------------------------------------------- @@ -2008,19 +2306,21 @@ global: # #----------------------------------------------------------------------- # - # Set default SPP stochastic physics options. Each SPP option is an array, - # applicable (in order) to the scheme/parameter listed in SPP_VAR_LIST. + # Set default SPP stochastic physics options. + # SPP perturbs specific tuning parameters within a physics parameterization + # (unlike SPPT, which multiplies overall physics tendencies by a random + # perturbation field *after* the call to the physics suite). Patterns evolve + # and are applied at each time step. Each SPP option is an array, + # applicable (in order) to the HRRR-based parameterization listed in SPP_VAR_LIST. # Enter each value of the array in config.yaml as shown below without commas - # or single quotes (e.g., SPP_VAR_LIST=( "pbl" "sfc" "mp" "rad" "gwd" ). + # or single quotes (e.g., SPP_VAR_LIST: [ "pbl" "sfc" "mp" "rad" "gwd" ] ). # Both commas and single quotes will be added by Jinja when creating the # namelist. # # Note that SPP is currently only available for specific physics schemes - # used in the RAP/HRRR physics suite. Users need to be aware of which SDF + # used in the RAP/HRRR physics suite. Users need to be aware of which SDF # is chosen when turning this option on. # - # Patterns evolve and are applied at each time step. - # #----------------------------------------------------------------------- # DO_SPP: false @@ -2063,8 +2363,10 @@ global: #----------------------------------------------------------------------- # # HALO_BLEND: - # Number of rows into the computational domain that should be blended - # with the LBCs. To shut halo blending off, this can be set to zero. + # The number of cells to use for "blending" the external solution (obtained from + # the LBCs) with the internal solution from the FV3LAM dycore. + # Specifically, it refers to the number of rows into the computational domain + # that should be blended with the LBCs. To shut halo blending off, this can be set to zero. # #----------------------------------------------------------------------- # HALO_BLEND: 10 @@ -2073,7 +2375,7 @@ global: #----------------------------------------------------------------------- # # PRINT_DIFF_PGR: - # Option to turn on/off pressure tendency diagnostic + # Option to turn on/off the pressure tendency diagnostic. # #----------------------------------------------------------------------- # @@ -2083,9 +2385,38 @@ global: # verification (vx) parameters #----------------------------- verification: + #----------------------------------------------------------------------- # # Templates for CCPA, MRMS, and NDAS observation files. + # + # OBS_CCPA_APCP01h_FN_TEMPLATE: + # File name template used to obtain the input observation files (in + # the PcpCombine_obs tasks) that contain the 1-hour accumulated + # precipitation (APCP) from which APCP for longer accumulations will + # be generated. + # + # OBS_CCPA_APCPgt01h_FN_TEMPLATE: + # File name template used to generate the observation files (in the + # PcpCombine_obs tasks) containing accumulated precipitation for + # accumulation periods longer than 1 hour. + # + # OBS_NOHRSC_ASNOW_FN_TEMPLATE: + # File name template for NOHRSC snow observations. + # + # OBS_MRMS_REFC_FN_TEMPLATE: + # File name template for MRMS reflectivity observations. + # + # OBS_MRMS_RETOP_FN_TEMPLATE: + # File name template for MRMS echo top observations.
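To make the METplus-style timestring codes in these templates concrete, the sketch below shows how the default CCPA template (set further down in this section) would resolve for an arbitrarily chosen valid time of 2023-06-15 12Z:

.. code-block:: yaml

   # Default template (shown below in this file):
   #   '{valid?fmt=%Y%m%d}/ccpa.t{valid?fmt=%H}z.01h.hrap.conus.gb2'
   # Resolved for a valid time of 2023-06-15 12Z (illustrative):
   #   20230615/ccpa.t12z.01h.hrap.conus.gb2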
+  #
+  # OBS_NDAS_SFCorUPA_FN_TEMPLATE:
+  # File name template for NDAS surface and upper air observations.
+  #
+  # OBS_NDAS_SFCorUPA_FN_METPROC_TEMPLATE:
+  # File name template for NDAS surface and upper air observations
+  # after processing by MET's pb2nc tool (to change format to NetCDF).
   #
+
   OBS_CCPA_APCP01h_FN_TEMPLATE: '{valid?fmt=%Y%m%d}/ccpa.t{valid?fmt=%H}z.01h.hrap.conus.gb2'
   OBS_CCPA_APCPgt01h_FN_TEMPLATE: '${OBS_CCPA_APCP01h_FN_TEMPLATE}_a${ACCUM_HH}h.nc'
   OBS_NOHRSC_ASNOW_FN_TEMPLATE: '{valid?fmt=%Y%m%d}/sfav2_CONUS_${ACCUM_HH}h_{valid?fmt=%Y%m%d%H}_grid184.grb2'
@@ -2135,15 +2466,26 @@ verification:
   VX_FCST_INPUT_BASEDIR: '{% if user.RUN_ENVIR == "nco" %}$COMOUT/../..{% else %}{{ workflow.EXPTDIR }}{% endif %}'
   VX_OUTPUT_BASEDIR: '{% if user.RUN_ENVIR == "nco" %}$COMOUT/metout{% else %}{{ workflow.EXPTDIR }}{% endif %}'
   #
-  # Number of digits in the ensember member names. This is a configurable
+  # Number of digits in the ensemble member names. This is a configurable
   # variable to allow users to change its value (e.g., to go from "mem004"
-  # to "mem02") when using staged forecast files that do not use the same
+  # to "mem04") when using staged forecast files that do not use the same
   # number of digits as the SRW App.
   #
   VX_NDIGITS_ENSMEM_NAMES: 3
   #
   # File name and path templates used in the verification tasks.
   #
+  # FCST_SUBDIR_TEMPLATE: A template for the subdirectory containing
+  # input forecast files for vx tasks.
+  #
+  # FCST_FN_TEMPLATE: A template for the forecast file names used as
+  # input to verification tasks.
+  #
+  # FCST_FN_METPROC_TEMPLATE: A template for how to name the forecast
+  # files for accumulated precipitation (APCP) with greater than 1-hour
+  # accumulation (i.e., 3-, 6-, and 24-hour accumulations) after
+  # processing by PcpCombine.
+  #
   FCST_SUBDIR_TEMPLATE: '{% if user.RUN_ENVIR == "nco" %}${NET_default}.{init?fmt=%Y%m%d?shift=-${time_lag}}/{init?fmt=%H?shift=-${time_lag}}{% else %}{init?fmt=%Y%m%d%H?shift=-${time_lag}}{% if global.DO_ENSEMBLE %}/${ensmem_name}{% endif %}/postprd{% endif %}'
   FCST_FN_TEMPLATE: '${NET_default}.t{init?fmt=%H?shift=-${time_lag}}z{% if user.RUN_ENVIR == "nco" and global.DO_ENSEMBLE %}.${ensmem_name}{% endif %}.prslev.f{lead?fmt=%HHH?shift=${time_lag}}.${POST_OUTPUT_DOMAIN_NAME}.grib2'
   FCST_FN_METPROC_TEMPLATE: '${NET_default}.t{init?fmt=%H}z{% if user.RUN_ENVIR == "nco" and global.DO_ENSEMBLE %}.${ensmem_name}{% endif %}.prslev.f{lead?fmt=%HHH}.${POST_OUTPUT_DOMAIN_NAME}_${VAR}_a${ACCUM_HH}h.nc'
@@ -2199,42 +2541,65 @@ cpl_aqm_parm:
   # DO_AQM_SAVE_FIRE:
   # Archive fire emission file to HPSS
   #
+  # DCOMINbio_default:
+  # Path to the directory containing AQM bio files
+  #
+  # DCOMINdust_default:
+  # Path to the directory containing AQM dust file
+  #
+  # DCOMINcanopy_default:
+  # Path to the directory containing AQM canopy files
+  #
+  # DCOMINfire_default:
+  # Path to the directory containing AQM fire files
+  #
+  # DCOMINchem_lbcs_default:
+  # Path to the directory containing chemical LBC files
+  #
+  # DCOMINgefs_default:
+  # Path to the directory containing GEFS aerosol LBC files
+  #
+  # DCOMINpt_src_default:
+  # Parent directory containing point source files
+  #
+  # DCOMINairnow_default:
+  # Path to the directory containing AIRNOW observation data
+  #
+  # COMINbicor:
+  # Path for reading in historical training data for bias correction
+  #
+  # COMOUTbicor:
+  # Path to save the current cycle's model output and AirNow obs as
+  # training data for future use. $COMINbicor and $COMOUTbicor can be
+  # distinguished by the ${yyyy}${mm}${dd} under the same location
+  #
   # AQM_CONFIG_DIR:
   # Configuration directory for AQM
   #
-  # DCOMINbio:
-  # Path to the directory containing AQM bio files
-  #
   # AQM_BIO_FILE:
   # File name of AQM BIO file
   #
-  # DCOMINdust:
-  # Path to the directory containing AQM dust file
-  #
   # AQM_DUST_FILE_PREFIX:
   # Prefix of AQM dust file
   #
   # AQM_DUST_FILE_SUFFIX:
   # Suffix and extension of AQM dust file
   #
-  # DCOMINcanopy:
-  # Path to the directory containing AQM canopy files
-  #
   # AQM_CANOPY_FILE_PREFIX:
   # File name of AQM canopy file
   #
   # AQM_CANOPY_FILE_SUFFIX:
   # Suffix and extension of AQM CANOPY file
   #
-  # DCOMINfire:
-  # Path to the directory containing AQM fire files
-  #
   # AQM_FIRE_FILE_PREFIX:
   # Prefix of AQM FIRE file
   #
   # AQM_FIRE_FILE_SUFFIX:
   # Suffix and extension of AQM FIRE file
   #
+  # AQM_FIRE_FILE_OFFSET_HRS:
+  # Time offset when retrieving fire emission data files.
+  #
   # AQM_FIRE_ARCHV_DIR:
   # Path to the archive directory for RAVE emission files on HPSS
   #
@@ -2247,15 +2612,9 @@ cpl_aqm_parm:
   # AQM_RC_PRODUCT_FREQUENCY:
   # Frequency of AQM output products
   #
-  # DCOMINchem_lbc:
-  # Path to the directory containing chemical LBC files
-  #
   # AQM_LBCS_FILES:
   # File name of chemical LBCs
   #
-  # DCOMINgefs:
-  # Path to the directory containing GEFS aerosol LBC files
-  #
   # AQM_GEFS_FILE_PREFIX:
   # Prefix of AQM GEFS file ("geaer" or "gfs")
   #
@@ -2285,20 +2644,6 @@ cpl_aqm_parm:
   # NEXUS_GFS_SFC_ARCHV_DIR:
   # Path to archive directory for gfs surface files on HPSS
   #
-  # DCOMINpt_src:
-  # Parent directory containing point source files
-  #
-  # DCOMINairnow:
-  # Path to the directory containing AIRNOW observation data
-  #
-  # COMINbicor:
-  # Path of reading in historical training data for biascorrection
-  #
-  # COMOUTbicor:
-  # Path to save the current cycle's model output and AirNow obs as
-  # training data for future use $COMINbicor and $COMOUTbicor can be
-  # distuigshed by the ${yyyy}${mm}$dd under the same location
-  #
 #-----------------------------------------------------------------------
 #
 CPL_AQM: false

From aeb6de6888eeff89b149cff918f5dbd085aed615 Mon Sep 17 00:00:00 2001
From: Gillian Petro <96886803+gspetro-NOAA@users.noreply.github.com>
Date: Fri, 6 Oct 2023 09:56:37 -0400
Subject: [PATCH 2/3] [develop] Update physics FAQ (#928)

Changes were requested for the "How can I add a physics scheme (e.g., YSU PBL) to the UFS SRW App?" section in the User's Guide FAQ. These are those changes.
---
 docs/UsersGuide/source/Reference/FAQ.rst | 18 ++++++++++++++----
 1 file changed, 14 insertions(+), 4 deletions(-)

diff --git a/docs/UsersGuide/source/Reference/FAQ.rst b/docs/UsersGuide/source/Reference/FAQ.rst
index 61e63edd63..237980789c 100644
--- a/docs/UsersGuide/source/Reference/FAQ.rst
+++ b/docs/UsersGuide/source/Reference/FAQ.rst
@@ -249,12 +249,22 @@ How can I add a physics scheme (e.g., YSU PBL) to the UFS SRW App?
 ====================================================================
 At this time, there are ten physics suites available in the SRW App, :ref:`five of which are fully supported `. However, several additional physics schemes are available in the UFS Weather Model (WM) and can be enabled in the SRW App. The CCPP Scientific Documentation details the various `namelist options `__ available in the UFS WM, including physics schemes, and also includes an `overview of schemes and suites `__.
-To enable an additional physics scheme, such as the YSU PBL scheme, in the SRW App, users must modify ``ufs-srweather-app/parm/FV3.input.yml`` and set the variable corresponding to the desired physics scheme to *True* under the physics suite they would like to use (e.g., ``do_ysu = True``).
-It may be necessary to disable another physics scheme, too. For example, when using the YSU PBL scheme, users should disable the default SATMEDMF PBL scheme (*satmedmfvdifq*) by setting the ``satmedmf`` variable to *False* in the ``FV3.input.yml`` file.
-Regardless, users will need to modify the suite definition file (SDF) and recompile the code. For example, to activate the YSU PBL scheme, users should replace the line ``satmedmfvdifq`` with ``ysuvdif`` and recompile the code.
+.. attention::
 
-Users must ensure that they are using the same physics suite in their ``config.yaml`` file as the one they modified in ``FV3.input.yml``. Then, the user can run the ``generate_FV3LAM_wflow.py`` script to generate an experiment and navigate to the experiment directory. They should see ``do_ysu = .true.`` in the namelist file (or a similar statement, depending on the physics scheme selected), which indicates that the YSU PBL scheme is enabled.
+   Note that when users enable new physics schemes in the SRW App, they are using untested and unverified combinations of physics, which can lead to unexpected and/or poor results. It is recommended that users run experiments only with the supported physics suites and physics schemes unless they have an excellent understanding of how these physics schemes work and a specific research purpose in mind for making such changes.
+
+To enable an additional physics scheme, such as the YSU PBL scheme, users may need to modify ``ufs-srweather-app/parm/FV3.input.yml``. This is necessary when the namelist has a logical variable corresponding to the desired physics scheme. In this case, it should be set to *True* for the physics scheme they would like to use (e.g., ``do_ysu = True``).
+
+It may be necessary to disable another physics scheme, too. For example, when using the YSU PBL scheme, users should disable the default SATMEDMF PBL scheme (*satmedmfvdifq*) by setting the ``satmedmf`` variable to *False* in the ``FV3.input.yml`` file.
+
+It may also be necessary to add or remove interstitial schemes so that communication between schemes, and between the schemes and the host model, works correctly. For example, the connections between cloud and radiation schemes must be correctly established.
+
+Regardless, users will need to modify the suite definition file (:term:`SDF`) and recompile the code. For example, to activate the YSU PBL scheme, users should replace the line ``satmedmfvdifq`` with ``ysuvdif`` and recompile the code.
+
+Depending on the scheme, additional changes to the SDF (e.g., to add, remove, or change interstitial schemes) and to the namelist (to include scheme-specific tuning parameters) may be required. Users are encouraged to reach out on GitHub Discussions to find out more from subject matter experts about recommendations for the specific scheme they want to implement. Users can post on the `SRW App Discussions page `__ or ask their questions directly to the developers of `ccpp-physics `__ and `ccpp-framework `__, which also handle support through GitHub Discussions.
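+
+As a minimal sketch only (the suite key and namelist section shown here are assumptions for illustration; the actual keys depend on the suite and scheme being modified), the corresponding entry in ``FV3.input.yml`` might look like:
+
+.. code-block:: yaml
+
+   FV3_RAP:
+     gfs_physics_nml:
+       satmedmf: False
+       do_ysu: True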
+
+After making appropriate changes to the SDF and namelist files, users must ensure that they are using the same physics suite in their ``config.yaml`` file as the one they modified in ``FV3.input.yml``. Then, the user can run the ``generate_FV3LAM_wflow.py`` script to generate an experiment and navigate to the experiment directory. They should see ``do_ysu = .true.`` in the namelist file (or a similar statement, depending on the physics scheme selected), which indicates that the YSU PBL scheme is enabled.
 
 .. _IC-LBC-gen-issue:

From 67b82073390c4954d85db86a343b324d4c1a9978 Mon Sep 17 00:00:00 2001
From: RatkoVasic-NOAA <37597874+RatkoVasic-NOAA@users.noreply.github.com>
Date: Fri, 6 Oct 2023 12:22:28 -0400
Subject: [PATCH 3/3] [develop] Replace hpc-stack with spack-stack (#913)

* Replaced use of hpc-stack with spack-stack version 1.4.1 for Gaea, Hera, Hercules, Jet, and Orion.
* In the modulefiles directory, the build* and wflow* scripts are updated. Also removed srw_common_spack.lua; all *.lua files now call the same srw_common.lua file.
* HPC-Stack will still be used for Derecho, Gaea C5, MacOS, and Linux.

---------

Co-authored-by: Natalie Perlin
Co-authored-by: EdwardSnyder-NOAA
---
 modulefiles/build_gaea_intel.lua             | 19 ++++++++-----
 modulefiles/build_hera_gnu.lua               | 24 +++++++----------
 modulefiles/build_hera_intel.lua             | 25 ++++++++---------
 modulefiles/build_hercules_intel.lua         | 22 +++++++--------
 modulefiles/build_jet_intel.lua              | 23 ++++++++--------
 modulefiles/build_noaacloud_intel.lua        |  2 +-
 modulefiles/build_orion_intel.lua            | 21 +++++++--------
 modulefiles/srw_common.lua                   | 18 ++++++------
 modulefiles/srw_common_spack.lua             | 28 --------------------
 modulefiles/tasks/cheyenne/run_vx.local.lua  | 23 ++++++++++++++--
 modulefiles/tasks/derecho/run_vx.local.lua   | 23 ++++++++++++++--
 modulefiles/tasks/gaea/run_vx.local.lua      | 23 ++++++++++++++--
 modulefiles/tasks/hera/run_vx.local.lua      | 23 ++++++++++++++--
 modulefiles/tasks/hercules/run_vx.local.lua  | 23 ++++++++++++++--
 modulefiles/tasks/jet/run_vx.local.lua       | 23 ++++++++++++++--
 modulefiles/tasks/noaacloud/python_srw.lua   |  5 +++-
 modulefiles/tasks/noaacloud/run_vx.local.lua | 23 ++++++++++++++--
 modulefiles/tasks/orion/run_vx.local.lua     | 23 ++++++++++++++--
 modulefiles/wflow_gaea.lua                   |  6 -----
 parm/wflow/verify_pre.yaml                   |  2 +-
 ush/machine/gaea.yaml                        |  2 +-
 ush/machine/orion.yaml                       | 10 +++++++
 22 files changed, 258 insertions(+), 133 deletions(-)
 delete mode 100644 modulefiles/srw_common_spack.lua

diff --git a/modulefiles/build_gaea_intel.lua b/modulefiles/build_gaea_intel.lua
index d8284ce9af..df93eb2f17 100644
--- a/modulefiles/build_gaea_intel.lua
+++ b/modulefiles/build_gaea_intel.lua
@@ -5,14 +5,21 @@ the NOAA RDHPC machine Gaea using Intel-2022.0.2
 
 whatis([===[Loads libraries needed for building the UFS SRW App on Gaea ]===])
 
-load(pathJoin("cmake", os.getenv("cmake_ver") or "3.20.1"))
+unload("intel")
+unload("cray-mpich")
+unload("cray-python")
+unload("darshan")
 
-prepend_path("MODULEPATH","/lustre/f2/dev/role.epic/contrib/hpc-stack/intel-classic-2023.1.0/modulefiles/stack")
-load(pathJoin("hpc", os.getenv("hpc_ver") or "1.2.0"))
-load(pathJoin("hpc-intel-classic", os.getenv("hpc_intel_classic_ver") or "2023.1.0"))
-load(pathJoin("hpc-cray-mpich", os.getenv("hpc_cray_mpich_ver") or "7.7.20"))
+prepend_path("MODULEPATH", "/lustre/f2/dev/wpo/role.epic/contrib/spack-stack/spack-stack-1.4.1-c4/envs/unified-env/install/modulefiles/Core")
+prepend_path("MODULEPATH", "/lustre/f2/pdata/esrl/gsd/spack-stack/modulefiles")
+
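+-- Descriptive note (comment only; paths are the ones given above): the
+-- first MODULEPATH entry exposes the spack-stack unified-environment Core
+-- metamodules (stack-intel, stack-cray-mpich, etc.), and the second appears
+-- to hold additional site-maintained modulefiles. Both must be on
+-- MODULEPATH before the stack-* loads below can resolve.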
+load("stack-intel/2022.0.2") +load("stack-cray-mpich/7.7.20") +load("stack-python/3.9.12") +load("cmake/3.23.1") load("srw_common") +load("ufs-pyenv") -- Need at runtime load("alps") @@ -26,5 +33,3 @@ setenv("CMAKE_Platform","gaea.intel") setenv("CFLAGS","-diag-disable=10441") setenv("FFLAGS","-diag-disable=10441 -fp-model source") - - diff --git a/modulefiles/build_hera_gnu.lua b/modulefiles/build_hera_gnu.lua index 2633fae842..578ed64cbb 100644 --- a/modulefiles/build_hera_gnu.lua +++ b/modulefiles/build_hera_gnu.lua @@ -5,27 +5,21 @@ the NOAA RDHPC machine Hera using GNU 9.2.0 whatis([===[Loads libraries needed for building the UFS SRW App on Hera using GNU 9.2.0 ]===]) -prepend_path("MODULEPATH","/contrib/sutils/modulefiles") -load("sutils") +prepend_path("MODULEPATH", "/scratch1/NCEPDEV/nems/role.epic/spack-stack/spack-stack-1.4.1/envs/unified-env/install/modulefiles/Core") +prepend_path("MODULEPATH", "/scratch1/NCEPDEV/jcsda/jedipara/spack-stack/modulefiles") -load(pathJoin("cmake", os.getenv("cmake_ver") or "3.20.1")) - -gnu_ver=os.getenv("gnu_ver") or "9.2.0" -load(pathJoin("gnu", gnu_ver)) - -prepend_path("MODULEPATH", "/scratch1/NCEPDEV/nems/role.epic/hpc-stack/libs/gnu-9.2_ncdf492/modulefiles/stack") - -load(pathJoin("hpc", os.getenv("hpc_ver") or "1.2.0")) -load(pathJoin("hpc-gnu", os.getenv("hpc-gnu_ver") or "9.2")) -load(pathJoin("hpc-mpich", os.getenv("hpc-mpich_ver") or "3.3.2")) +load("stack-gcc/9.2.0") +load("stack-openmpi/4.1.5") +load("stack-python/3.9.12") +load("cmake/3.23.1") load("srw_common") -load(pathJoin("nccmp", os.getenv("nccmp_ver") or "1.9.1.0")) +load(pathJoin("nccmp", os.getenv("nccmp_ver") or "1.9.0.1")) load(pathJoin("nco", os.getenv("nco_ver") or "5.0.6")) -load(pathJoin("openblas", os.getenv("openblas_ver") or "0.3.23")) +load(pathJoin("openblas", os.getenv("openblas_ver") or "0.3.19")) +load("ufs-pyenv") -unsetenv("MKLROOT") setenv("CMAKE_C_COMPILER","mpicc") setenv("CMAKE_CXX_COMPILER","mpicxx") setenv("CMAKE_Fortran_COMPILER","mpif90") diff --git a/modulefiles/build_hera_intel.lua b/modulefiles/build_hera_intel.lua index 7ec884eeba..f04f4d4df1 100644 --- a/modulefiles/build_hera_intel.lua +++ b/modulefiles/build_hera_intel.lua @@ -8,27 +8,28 @@ whatis([===[Loads libraries needed for building the UFS SRW App on Hera ]===]) prepend_path("MODULEPATH","/contrib/sutils/modulefiles") load("sutils") -load(pathJoin("cmake", os.getenv("cmake_ver") or "3.20.1")) +prepend_path("MODULEPATH", "/scratch1/NCEPDEV/nems/role.epic/spack-stack/spack-stack-1.4.1/envs/unified-env/install/modulefiles/Core") +prepend_path("MODULEPATH", "/scratch1/NCEPDEV/jcsda/jedipara/spack-stack/modulefiles") -intel_ver=os.getenv("intel_ver") or "2022.1.2" -load(pathJoin("intel", intel_ver)) +stack_intel_ver=os.getenv("stack_intel_ver") or "2021.5.0" +load(pathJoin("stack-intel", stack_intel_ver)) -impi_ver=os.getenv("impi_ver") or "2022.1.2" -load(pathJoin("impi", impi_ver)) +stack_impi_ver=os.getenv("stack_impi_ver") or "2021.5.1" +load(pathJoin("stack-intel-oneapi-mpi", stack_impi_ver)) -prepend_path("MODULEPATH","/scratch1/NCEPDEV/nems/role.epic/hpc-stack/libs/intel-2022.1.2_ncdf492/modulefiles/stack") +stack_python_ver=os.getenv("stack_python_ver") or "3.9.12" +load(pathJoin("stack-python", stack_python_ver)) -load(pathJoin("hpc", os.getenv("hpc_ver") or "1.2.0")) -load(pathJoin("hpc-intel", os.getenv("hpc_intel_ver") or "2022.1.2")) -load(pathJoin("hpc-impi", os.getenv("hpc_impi_ver") or "2022.1.2")) +cmake_ver=os.getenv("cmake_ver") or "3.20.1" +load(pathJoin("cmake", 
cmake_ver)) load("srw_common") -load(pathJoin("nccmp", os.getenv("nccmp_ver") or "1.8.9.0")) -load(pathJoin("nco", os.getenv("nco_ver") or "5.0.6")) +load(pathJoin("nccmp", os.getenv("nccmp_ver") or "1.9.0.1")) +load(pathJoin("nco", os.getenv("nco_ver") or "4.9.3")) +load("ufs-pyenv") setenv("CMAKE_C_COMPILER","mpiicc") setenv("CMAKE_CXX_COMPILER","mpiicpc") setenv("CMAKE_Fortran_COMPILER","mpiifort") setenv("CMAKE_Platform","hera.intel") - diff --git a/modulefiles/build_hercules_intel.lua b/modulefiles/build_hercules_intel.lua index 1073952dc5..1772460542 100644 --- a/modulefiles/build_hercules_intel.lua +++ b/modulefiles/build_hercules_intel.lua @@ -3,22 +3,21 @@ This module loads libraries for building the UFS SRW App on the MSU machine Hercules using intel-oneapi-compilers/2022.2.1 ]]) -whatis([===[Loads libraries needed for building the UFS SRW App on Orion ]===]) +whatis([===[Loads libraries needed for building the UFS SRW App on Hercules ]===]) -load("contrib") -load("noaatools") +prepend_path("MODULEPATH", "/work/noaa/epic/role-epic/spack-stack/hercules/spack-stack-1.4.1/envs/unified-env/install/modulefiles/Core") +prepend_path("MODULEPATH", "/work/noaa/da/role-da/spack-stack/modulefiles") -load(pathJoin("cmake", os.getenv("cmake_ver") or "3.26.3")) - -prepend_path("MODULEPATH","/work/noaa/epic/role-epic/contrib/hercules/hpc-stack/intel-2022.2.1/modulefiles/stack") -load(pathJoin("hpc", os.getenv("hpc_ver") or "1.2.0")) -load(pathJoin("hpc-intel-oneapi-compilers", os.getenv("hpc_intel_ver") or "2022.2.1")) -load(pathJoin("hpc-intel-oneapi-mpi", os.getenv("hpc_mpi_ver") or "2021.7.1")) +load("stack-intel/2021.7.1") +load("stack-intel-oneapi-mpi/2021.7.1") +load("stack-python/3.9.14") +load("cmake/3.26.3") load("srw_common") -load(pathJoin("nccmp", os.getenv("nccmp_ver") or "1.8.9.0")) -load(pathJoin("nco", os.getenv("nco_ver") or "5.0.6")) +load("nccmp/1.9.0.1") +load("nco/5.0.6") +load("ufs-pyenv") setenv("CFLAGS","-diag-disable=10441") setenv("FFLAGS","-diag-disable=10441") @@ -27,4 +26,3 @@ setenv("CMAKE_C_COMPILER","mpiicc") setenv("CMAKE_CXX_COMPILER","mpiicpc") setenv("CMAKE_Fortran_COMPILER","mpiifort") setenv("CMAKE_Platform","hercules.intel") - diff --git a/modulefiles/build_jet_intel.lua b/modulefiles/build_jet_intel.lua index 4387a1ad33..fd19234474 100644 --- a/modulefiles/build_jet_intel.lua +++ b/modulefiles/build_jet_intel.lua @@ -1,25 +1,24 @@ help([[ This module loads libraries for building the UFS SRW App on -the NOAA RDHPC machine Jet using Intel-2022.1.2 +the NOAA RDHPC machine Jet using Intel-2021.5.0 ]]) whatis([===[Loads libraries needed for building the UFS SRW App on Jet ]===]) -prepend_path("MODULEPATH","/contrib/sutils/modulefiles") -load("sutils") +prepend_path("MODULEPATH","/mnt/lfs4/HFIP/hfv3gfs/role.epic/spack-stack/spack-stack-1.4.1/envs/unified-env/install/modulefiles/Core") +prepend_path("MODULEPATH", "/lfs4/HFIP/hfv3gfs/spack-stack/modulefiles") -load(pathJoin("cmake", os.getenv("cmake_ver") or "3.20.1")) - -prepend_path("MODULEPATH","/mnt/lfs4/HFIP/hfv3gfs/role.epic/hpc-stack/libs/intel-2022.1.2_ncdf492/modulefiles/stack") -load(pathJoin("hpc", os.getenv("hpc_ver") or "1.2.0")) -load(pathJoin("hpc-intel", os.getenv("hpc_intel_ver") or "2022.1.2")) -load(pathJoin("hpc-impi", os.getenv("hpc_impi_ver") or "2022.1.2")) +load("stack-intel/2021.5.0") +load("stack-intel-oneapi-mpi/2021.5.1") +load("stack-python/3.9.12") +load("cmake/3.23.1") load("srw_common") -load(pathJoin("prod_util", os.getenv("prod_util_ver") or "1.2.2")) -load(pathJoin("nccmp", 
os.getenv("nccmp_ver") or "1.8.9.0")) -load(pathJoin("nco", os.getenv("nco_ver") or "4.9.3")) +load("prod-util/1.2.2") +load("nccmp/1.9.0.1") +load("nco/5.0.6") +load("ufs-pyenv") setenv("CMAKE_C_COMPILER","mpiicc") setenv("CMAKE_CXX_COMPILER","mpiicpc") diff --git a/modulefiles/build_noaacloud_intel.lua b/modulefiles/build_noaacloud_intel.lua index 6e83757cc8..82a432f822 100644 --- a/modulefiles/build_noaacloud_intel.lua +++ b/modulefiles/build_noaacloud_intel.lua @@ -12,4 +12,4 @@ load("stack-intel") load("stack-intel-oneapi-mpi") load("cmake/3.23.1") -load("srw_common_spack") +load("srw_common") diff --git a/modulefiles/build_orion_intel.lua b/modulefiles/build_orion_intel.lua index 87e35a0a35..0b07e8493e 100644 --- a/modulefiles/build_orion_intel.lua +++ b/modulefiles/build_orion_intel.lua @@ -5,24 +5,21 @@ the MSU machine Orion using Intel-2022.1.2 whatis([===[Loads libraries needed for building the UFS SRW App on Orion ]===]) -load("contrib") -load("noaatools") +prepend_path("MODULEPATH", "/work/noaa/epic/role-epic/spack-stack/spack-stack-1.4.1/envs/unified-env/install/modulefiles/Core") +prepend_path("MODULEPATH", "/work/noaa/da/role-da/spack-stack/modulefiles") -load(pathJoin("cmake", os.getenv("cmake_ver") or "3.22.1")) -load(pathJoin("python", os.getenv("python_ver") or "3.9.2")) - -prepend_path("MODULEPATH","/work/noaa/epic/role-epic/contrib/orion/hpc-stack/intel-2022.1.2_ncdf492/modulefiles/stack") -load(pathJoin("hpc", os.getenv("hpc_ver") or "1.2.0")) -load(pathJoin("hpc-intel", os.getenv("hpc_intel_ver") or "2022.1.2")) -load(pathJoin("hpc-impi", os.getenv("hpc_impi_ver") or "2022.1.2")) +load("stack-intel/2022.0.2") +load("stack-intel-oneapi-mpi/2021.5.1") +load("stack-python/3.9.7") +load("cmake/3.22.1") load("srw_common") -load(pathJoin("nccmp", os.getenv("nccmp_ver") or "1.8.9.0")) -load(pathJoin("nco", os.getenv("nco_ver") or "5.0.6")) +load("nccmp/1.9.0.1") +load("nco/5.0.6") +load("ufs-pyenv") setenv("CMAKE_C_COMPILER","mpiicc") setenv("CMAKE_CXX_COMPILER","mpiicpc") setenv("CMAKE_Fortran_COMPILER","mpiifort") setenv("CMAKE_Platform","orion.intel") - diff --git a/modulefiles/srw_common.lua b/modulefiles/srw_common.lua index f68b4891e0..6cd326a1f8 100644 --- a/modulefiles/srw_common.lua +++ b/modulefiles/srw_common.lua @@ -1,29 +1,29 @@ load_any("jasper/2.0.25","jasper/2.0.32") load_any("zlib/1.2.11","zlib/1.2.13") -load("libpng/1.6.37") +load_any("png/1.6.37","libpng/1.6.37") -load_any("netcdf/4.9.2", "netcdf-c/4.9.2") -load_any("netcdf/4.9.2", "netcdf-fortran/4.6.0") -load_any("pio/2.5.10", "parallelio/2.5.9") +load_any("netcdf/4.9.2","netcdf-c/4.9.2") +load_any("netcdf/4.9.2","netcdf-fortran/4.6.0") +load_any("pio/2.5.10","parallelio/2.5.9","parallelio/2.5.10") load("esmf/8.4.2") load("fms/2023.01") load("bacio/2.4.1") -load("g2/3.4.5") load("crtm/2.4.0") +load("g2/3.4.5") load("g2tmpl/1.10.2") load("ip/3.3.3") load("sp/2.3.3") load("w3emc/2.9.2") -load_any("gftl-shared/v1.5.0", "gftl-shared/1.5.0") -load_any("yafyaml/v0.5.1", "yafyaml/0.5.1") +load_any("gftl-shared/v1.5.0","gftl-shared/1.5.0") +load_any("yafyaml/v0.5.1","yafyaml/0.5.1") load("mapl/2.35.2-esmf-8.4.2") -load("nemsio/2.5.4") +load_any("nemsio/2.5.2","nemsio/2.5.4") load("sfcio/1.4.1") load("sigio/2.3.2") load("w3nco/2.4.1") load_any("wrf_io/1.2.0","wrf-io/1.2.0") -load("wgrib2/2.0.8") +load_any("wgrib2/2.0.8","wgrib2/3.1.1") diff --git a/modulefiles/srw_common_spack.lua b/modulefiles/srw_common_spack.lua deleted file mode 100644 index 5258e91cfe..0000000000 --- a/modulefiles/srw_common_spack.lua +++ 
/dev/null @@ -1,28 +0,0 @@ -load("jasper/2.0.32") -load("zlib/1.2.13") -load("libpng/1.6.37") -load("netcdf-c/4.9.2") -load("netcdf-fortran/4.6.0") -load("parallelio/2.5.9") -load("esmf/8.4.2") -load("fms/2023.01") - -load("bacio/2.4.1") -load("crtm/2.4.0") -load("g2/3.4.5") -load("g2tmpl/1.10.2") -load("ip/3.3.3") -load("sp/2.3.3") -load("w3emc/2.9.2") - -load("gftl-shared/1.5.0") -load("yafyaml/0.5.1") -load("mapl/2.35.2-esmf-8.4.2") - -load("nemsio/2.5.2") -load("sfcio/1.4.1") -load("sigio/2.3.2") -load("w3nco/2.4.1") -load("wrf-io/1.2.0") - -load("wgrib2/2.0.8") diff --git a/modulefiles/tasks/cheyenne/run_vx.local.lua b/modulefiles/tasks/cheyenne/run_vx.local.lua index 5979a8db96..54cc632c21 100644 --- a/modulefiles/tasks/cheyenne/run_vx.local.lua +++ b/modulefiles/tasks/cheyenne/run_vx.local.lua @@ -1,6 +1,25 @@ --[[ Compiler-specific modules are used for met and metplus libraries --]] -load(pathJoin("met", os.getenv("met_ver") or "10.1.2")) -load(pathJoin("metplus", os.getenv("metplus_ver") or "4.1.3")) +local met_ver = (os.getenv("met_ver") or "10.1.2") +local metplus_ver = (os.getenv("metplus_ver") or "4.1.3") +if (mode() == "load") then + load(pathJoin("met", met_ver)) + load(pathJoin("metplus",metplus_ver)) +end +local base_met = os.getenv("met_ROOT") or os.getenv("MET_ROOT") +local base_metplus = os.getenv("metplus_ROOT") or os.getenv("METPLUS_ROOT") + +setenv("MET_INSTALL_DIR", base_met) +setenv("MET_BIN_EXEC", pathJoin(base_met,"bin")) +setenv("MET_BASE", pathJoin(base_met,"share/met")) +setenv("MET_VERSION", met_ver) +setenv("METPLUS_VERSION", metplus_ver) +setenv("METPLUS_ROOT", base_metplus) +setenv("METPLUS_PATH", base_metplus) + +if (mode() == "unload") then + unload(pathJoin("met", met_ver)) + unload(pathJoin("metplus",metplus_ver)) +end load("python_srw") diff --git a/modulefiles/tasks/derecho/run_vx.local.lua b/modulefiles/tasks/derecho/run_vx.local.lua index 5979a8db96..54cc632c21 100644 --- a/modulefiles/tasks/derecho/run_vx.local.lua +++ b/modulefiles/tasks/derecho/run_vx.local.lua @@ -1,6 +1,25 @@ --[[ Compiler-specific modules are used for met and metplus libraries --]] -load(pathJoin("met", os.getenv("met_ver") or "10.1.2")) -load(pathJoin("metplus", os.getenv("metplus_ver") or "4.1.3")) +local met_ver = (os.getenv("met_ver") or "10.1.2") +local metplus_ver = (os.getenv("metplus_ver") or "4.1.3") +if (mode() == "load") then + load(pathJoin("met", met_ver)) + load(pathJoin("metplus",metplus_ver)) +end +local base_met = os.getenv("met_ROOT") or os.getenv("MET_ROOT") +local base_metplus = os.getenv("metplus_ROOT") or os.getenv("METPLUS_ROOT") + +setenv("MET_INSTALL_DIR", base_met) +setenv("MET_BIN_EXEC", pathJoin(base_met,"bin")) +setenv("MET_BASE", pathJoin(base_met,"share/met")) +setenv("MET_VERSION", met_ver) +setenv("METPLUS_VERSION", metplus_ver) +setenv("METPLUS_ROOT", base_metplus) +setenv("METPLUS_PATH", base_metplus) + +if (mode() == "unload") then + unload(pathJoin("met", met_ver)) + unload(pathJoin("metplus",metplus_ver)) +end load("python_srw") diff --git a/modulefiles/tasks/gaea/run_vx.local.lua b/modulefiles/tasks/gaea/run_vx.local.lua index 5979a8db96..8dccf5e63c 100644 --- a/modulefiles/tasks/gaea/run_vx.local.lua +++ b/modulefiles/tasks/gaea/run_vx.local.lua @@ -1,6 +1,25 @@ --[[ Compiler-specific modules are used for met and metplus libraries --]] -load(pathJoin("met", os.getenv("met_ver") or "10.1.2")) -load(pathJoin("metplus", os.getenv("metplus_ver") or "4.1.3")) +local met_ver = (os.getenv("met_ver") or "10.1.1") +local metplus_ver = 
(os.getenv("metplus_ver") or "4.1.1") +if (mode() == "load") then + load(pathJoin("met", met_ver)) + load(pathJoin("metplus",metplus_ver)) +end +local base_met = os.getenv("met_ROOT") or os.getenv("MET_ROOT") +local base_metplus = os.getenv("metplus_ROOT") or os.getenv("METPLUS_ROOT") + +setenv("MET_INSTALL_DIR", base_met) +setenv("MET_BIN_EXEC", pathJoin(base_met,"bin")) +setenv("MET_BASE", pathJoin(base_met,"share/met")) +setenv("MET_VERSION", met_ver) +setenv("METPLUS_VERSION", metplus_ver) +setenv("METPLUS_ROOT", base_metplus) +setenv("METPLUS_PATH", base_metplus) + +if (mode() == "unload") then + unload(pathJoin("met", met_ver)) + unload(pathJoin("metplus",metplus_ver)) +end load("python_srw") diff --git a/modulefiles/tasks/hera/run_vx.local.lua b/modulefiles/tasks/hera/run_vx.local.lua index 5979a8db96..8dccf5e63c 100644 --- a/modulefiles/tasks/hera/run_vx.local.lua +++ b/modulefiles/tasks/hera/run_vx.local.lua @@ -1,6 +1,25 @@ --[[ Compiler-specific modules are used for met and metplus libraries --]] -load(pathJoin("met", os.getenv("met_ver") or "10.1.2")) -load(pathJoin("metplus", os.getenv("metplus_ver") or "4.1.3")) +local met_ver = (os.getenv("met_ver") or "10.1.1") +local metplus_ver = (os.getenv("metplus_ver") or "4.1.1") +if (mode() == "load") then + load(pathJoin("met", met_ver)) + load(pathJoin("metplus",metplus_ver)) +end +local base_met = os.getenv("met_ROOT") or os.getenv("MET_ROOT") +local base_metplus = os.getenv("metplus_ROOT") or os.getenv("METPLUS_ROOT") + +setenv("MET_INSTALL_DIR", base_met) +setenv("MET_BIN_EXEC", pathJoin(base_met,"bin")) +setenv("MET_BASE", pathJoin(base_met,"share/met")) +setenv("MET_VERSION", met_ver) +setenv("METPLUS_VERSION", metplus_ver) +setenv("METPLUS_ROOT", base_metplus) +setenv("METPLUS_PATH", base_metplus) + +if (mode() == "unload") then + unload(pathJoin("met", met_ver)) + unload(pathJoin("metplus",metplus_ver)) +end load("python_srw") diff --git a/modulefiles/tasks/hercules/run_vx.local.lua b/modulefiles/tasks/hercules/run_vx.local.lua index 5979a8db96..8dccf5e63c 100644 --- a/modulefiles/tasks/hercules/run_vx.local.lua +++ b/modulefiles/tasks/hercules/run_vx.local.lua @@ -1,6 +1,25 @@ --[[ Compiler-specific modules are used for met and metplus libraries --]] -load(pathJoin("met", os.getenv("met_ver") or "10.1.2")) -load(pathJoin("metplus", os.getenv("metplus_ver") or "4.1.3")) +local met_ver = (os.getenv("met_ver") or "10.1.1") +local metplus_ver = (os.getenv("metplus_ver") or "4.1.1") +if (mode() == "load") then + load(pathJoin("met", met_ver)) + load(pathJoin("metplus",metplus_ver)) +end +local base_met = os.getenv("met_ROOT") or os.getenv("MET_ROOT") +local base_metplus = os.getenv("metplus_ROOT") or os.getenv("METPLUS_ROOT") + +setenv("MET_INSTALL_DIR", base_met) +setenv("MET_BIN_EXEC", pathJoin(base_met,"bin")) +setenv("MET_BASE", pathJoin(base_met,"share/met")) +setenv("MET_VERSION", met_ver) +setenv("METPLUS_VERSION", metplus_ver) +setenv("METPLUS_ROOT", base_metplus) +setenv("METPLUS_PATH", base_metplus) + +if (mode() == "unload") then + unload(pathJoin("met", met_ver)) + unload(pathJoin("metplus",metplus_ver)) +end load("python_srw") diff --git a/modulefiles/tasks/jet/run_vx.local.lua b/modulefiles/tasks/jet/run_vx.local.lua index 5979a8db96..8dccf5e63c 100644 --- a/modulefiles/tasks/jet/run_vx.local.lua +++ b/modulefiles/tasks/jet/run_vx.local.lua @@ -1,6 +1,25 @@ --[[ Compiler-specific modules are used for met and metplus libraries --]] -load(pathJoin("met", os.getenv("met_ver") or "10.1.2")) -load(pathJoin("metplus", 
os.getenv("metplus_ver") or "4.1.3")) +local met_ver = (os.getenv("met_ver") or "10.1.1") +local metplus_ver = (os.getenv("metplus_ver") or "4.1.1") +if (mode() == "load") then + load(pathJoin("met", met_ver)) + load(pathJoin("metplus",metplus_ver)) +end +local base_met = os.getenv("met_ROOT") or os.getenv("MET_ROOT") +local base_metplus = os.getenv("metplus_ROOT") or os.getenv("METPLUS_ROOT") + +setenv("MET_INSTALL_DIR", base_met) +setenv("MET_BIN_EXEC", pathJoin(base_met,"bin")) +setenv("MET_BASE", pathJoin(base_met,"share/met")) +setenv("MET_VERSION", met_ver) +setenv("METPLUS_VERSION", metplus_ver) +setenv("METPLUS_ROOT", base_metplus) +setenv("METPLUS_PATH", base_metplus) + +if (mode() == "unload") then + unload(pathJoin("met", met_ver)) + unload(pathJoin("metplus",metplus_ver)) +end load("python_srw") diff --git a/modulefiles/tasks/noaacloud/python_srw.lua b/modulefiles/tasks/noaacloud/python_srw.lua index 602d60842f..ed1e785a4c 100644 --- a/modulefiles/tasks/noaacloud/python_srw.lua +++ b/modulefiles/tasks/noaacloud/python_srw.lua @@ -1 +1,4 @@ -prepend_path("PATH", "/contrib/EPIC/miniconda3/4.12.0/envs/regional_workflow/bin") +prepend_path("MODULEPATH","/contrib/EPIC/miniconda3/modulefiles") +load(pathJoin("miniconda3", os.getenv("miniconda3_ver") or "4.12.0")) + +setenv("SRW_ENV", "workflow_tools") diff --git a/modulefiles/tasks/noaacloud/run_vx.local.lua b/modulefiles/tasks/noaacloud/run_vx.local.lua index 5979a8db96..8dccf5e63c 100644 --- a/modulefiles/tasks/noaacloud/run_vx.local.lua +++ b/modulefiles/tasks/noaacloud/run_vx.local.lua @@ -1,6 +1,25 @@ --[[ Compiler-specific modules are used for met and metplus libraries --]] -load(pathJoin("met", os.getenv("met_ver") or "10.1.2")) -load(pathJoin("metplus", os.getenv("metplus_ver") or "4.1.3")) +local met_ver = (os.getenv("met_ver") or "10.1.1") +local metplus_ver = (os.getenv("metplus_ver") or "4.1.1") +if (mode() == "load") then + load(pathJoin("met", met_ver)) + load(pathJoin("metplus",metplus_ver)) +end +local base_met = os.getenv("met_ROOT") or os.getenv("MET_ROOT") +local base_metplus = os.getenv("metplus_ROOT") or os.getenv("METPLUS_ROOT") + +setenv("MET_INSTALL_DIR", base_met) +setenv("MET_BIN_EXEC", pathJoin(base_met,"bin")) +setenv("MET_BASE", pathJoin(base_met,"share/met")) +setenv("MET_VERSION", met_ver) +setenv("METPLUS_VERSION", metplus_ver) +setenv("METPLUS_ROOT", base_metplus) +setenv("METPLUS_PATH", base_metplus) + +if (mode() == "unload") then + unload(pathJoin("met", met_ver)) + unload(pathJoin("metplus",metplus_ver)) +end load("python_srw") diff --git a/modulefiles/tasks/orion/run_vx.local.lua b/modulefiles/tasks/orion/run_vx.local.lua index 5979a8db96..62646d0992 100644 --- a/modulefiles/tasks/orion/run_vx.local.lua +++ b/modulefiles/tasks/orion/run_vx.local.lua @@ -1,6 +1,25 @@ --[[ Compiler-specific modules are used for met and metplus libraries --]] -load(pathJoin("met", os.getenv("met_ver") or "10.1.2")) -load(pathJoin("metplus", os.getenv("metplus_ver") or "4.1.3")) +local met_ver = (os.getenv("met_ver") or "10.1.1") +local metplus_ver = (os.getenv("metplus_ver") or "4.1.1") +if (mode() == "load") then + load(pathJoin("met", met_ver)) + load(pathJoin("metplus",metplus_ver)) +end +local base_met = os.getenv("met_ROOT") or os.getenv("MET_ROOT") +local base_metplus = os.getenv("metplus_ROOT") or os.getenv("METPLUS_ROOT") + +setenv("MET_INSTALL_DIR", base_met) +setenv("MET_BIN_EXEC", pathJoin(base_met,"bin")) +setenv("MET_BASE", pathJoin(base_met,"share/met")) +setenv("MET_VERSION", met_ver) 
+setenv("METPLUS_VERSION", metplus_ver) +setenv("METPLUS_ROOT", base_metplus) +setenv("METPLUS_PATH", base_metplus) + +if (mode() == "unload") then + unload(pathJoin("met", met_ver)) + unload(pathJoin("metplus",metplus_ver)) +end load("python_srw") diff --git a/modulefiles/wflow_gaea.lua b/modulefiles/wflow_gaea.lua index 3e95511f80..e9e59a41ca 100644 --- a/modulefiles/wflow_gaea.lua +++ b/modulefiles/wflow_gaea.lua @@ -13,12 +13,6 @@ prepend_path("MODULEPATH","/lustre/f2/dev/role.epic/contrib/rocoto/modulefiles") load("rocoto") load("alps") -local MKLROOT="/opt/intel/oneapi/mkl/2023.1.0/" -prepend_path("LD_LIBRARY_PATH",pathJoin(MKLROOT,"lib/intel64")) -pushenv("MKLROOT", MKLROOT) -pushenv("GSI_BINARY_SOURCE_DIR", "/lustre/f2/dev/role.epic/contrib/GSI_data/fix/20230601") -setenv("PMI_NO_PREINITIALIZE","1") - if mode() == "load" then LmodMsgRaw([===[Please do the following to activate conda: > conda activate workflow_tools diff --git a/parm/wflow/verify_pre.yaml b/parm/wflow/verify_pre.yaml index ccc8e0d11e..1ae6894d59 100644 --- a/parm/wflow/verify_pre.yaml +++ b/parm/wflow/verify_pre.yaml @@ -261,4 +261,4 @@ metatask_PcpCombine_fcst_ASNOW_all_accums_all_mems: attrs: age: 00:00:00:30 text: !cycstr '{{ workflow.EXPTDIR }}/@Y@m@d@H/post_files_exist_mem#mem#.txt' - walltime: 00:05:00 + walltime: 00:10:00 diff --git a/ush/machine/gaea.yaml b/ush/machine/gaea.yaml index 8687fd9924..301fa5c5a1 100644 --- a/ush/machine/gaea.yaml +++ b/ush/machine/gaea.yaml @@ -15,7 +15,7 @@ platform: RUN_CMD_POST: srun --export=ALL -n $nprocs RUN_CMD_PRDGEN: srun --export=ALL -n $nprocs RUN_CMD_SERIAL: time - RUN_CMD_UTILS: srun --export=ALL --mpi=pmi2 -n $nprocs + RUN_CMD_UTILS: srun --export=ALL -n $nprocs SCHED_NATIVE_CMD: --clusters=c4 --export=NONE SCHED_NATIVE_CMD_HPSS: --clusters=es --export=NONE PRE_TASK_CMDS: '{ ulimit -s unlimited; ulimit -a; }' diff --git a/ush/machine/orion.yaml b/ush/machine/orion.yaml index 9e2ca454ee..38ccd279f9 100644 --- a/ush/machine/orion.yaml +++ b/ush/machine/orion.yaml @@ -32,3 +32,13 @@ platform: FIXsfc: /work/noaa/epic/role-epic/contrib/UFS_SRW_data/develop/fix/fix_sfc_climo FIXshp: /work/noaa/epic/role-epic/contrib/UFS_SRW_data/develop/NaturalEarth EXTRN_MDL_DATA_STORES: aws nomads +data: + ics_lbcs: + FV3GFS: + nemsio: /work/noaa/epic/role-epic/contrib/UFS_SRW_data/develop/input_model_data/FV3GFS/nemsio/${yyyymmdd}${hh} + grib2: /work/noaa/epic/role-epic/contrib/UFS_SRW_data/develop/input_model_data/FV3GFS/grib2/${yyyymmdd}${hh} + netcdf: /work/noaa/epic/role-epic/contrib/UFS_SRW_data/develop/input_model_data/FV3GFS/netcdf/${yyyymmdd}${hh} + NAM: /work/noaa/epic/role-epic/contrib/UFS_SRW_data/develop/input_model_data/NAM/${yyyymmdd}${hh} + HRRR: /work/noaa/epic/role-epic/contrib/UFS_SRW_data/develop/input_model_data/HRRR/${yyyymmdd}${hh} + RAP: /work/noaa/epic/role-epic/contrib/UFS_SRW_data/develop/input_model_data/RAP/${yyyymmdd}${hh} + GSMGFS: /work/noaa/epic/role-epic/contrib/UFS_SRW_data/develop/input_model_data/GSMGFS/${yyyymmdd}${hh}