From 2b09194e8052896145fb50d58427ad44d736c7c6 Mon Sep 17 00:00:00 2001
From: Christina Holt <56881914+christinaholtNOAA@users.noreply.github.com>
Date: Tue, 7 Jun 2022 14:38:19 -0600
Subject: [PATCH] rrfs_ci: Update to top of authoritative develop (#374)

* Add missing user-defined stochastic physics options; fix stochastic physics seed generation script (#704)

## DESCRIPTION OF CHANGES:
Add missing user-defined options for tendency-based stochastic physics and fix the ensemble-based seed generation script to work regardless of whether stochastic physics is turned on or not.

## TESTS CONDUCTED:
Tested on Hera using the following WE2E configurations with and without stochastic physics:
config.grid_RRFS_CONUS_3km_ics_HRRR_lbcs_RAP_suite_HRRR.sh
config.community_ensemble_2mems.sh

## ISSUE (optional):
[Issue #702](https://github.com/ufs-community/regional_workflow/issues/702)

## CONTRIBUTORS (optional):
Thanks to @mkavulich and @chan-hoo for finding this problem.

* Add namelist option for netCDF4 when running with the 3-km NA domain; update NAM HPSS settings and WE2E tests (#707)
* Change to netcdf4 when using the NA 3-km domain
* Update HPSS paths for NAM data
* Update NAM HPSS locations and dates for WE2E tests.
* Remove lines from merge.
* Tweaks to allow compiler and build_env_fn to be specified in the run_WE2E_tests.sh script (#711)
* Changed 20200304 to 20200303 in ush/mrms_pull_topofhour.py (#712)
* Remove unused rocoto directory in ush (#720)
* Fix bug for nco we2e tests on Orion; re-organize we2e input data and nco we2e tests (#713)
* Update machine script for orion
* Update machine script for wcoss_dell_p3
* Update we2e run script for wcoss and orion
* Reorganize nco we2e tests
* remove machine-based logic
* Add symlink for nco inline post test
* Added stand-alone verification scripts (feature/issue_683_standaloneVX) (#726)
* Grid-stat and point-stat run scripts.
* Stand-alone scripts for verification.
* Added comments to gridvx scripts.
* Added qsub_job.sh and added comments to provide context on running Vx.
* remove machine-based logic (#727)
* Allow user-defined file names for input template files (#717)
* Allow multiple template names
* parameterize file_TMPL_FN and add a we2e test
* Increase maxtries_task for make_grid/orog/sfc_climo
* Modify file name and description
* Changes to RRFS 3- and 13-km domains, setup.sh script bug fixes, make_ics task modification, and tweaks to stochastic physics namelist settings (#721)
* Modify RRFS North America 3- and 13-km domain configuration and WE2E test.
* Change sotyp_from_climo to "true" based on operational RAP grib2 files.
* Update for changes to stochastic physics namelist options.
* Check for DO_ENSEMBLE="TRUE" when running ensemble verification and turn off VX when running in NCO mode.
* Revert to 3-km domain.
* Remove commented-out GFDL grid for the RRFS_NA_13km domain
* Add RRFS_NA_13km WE2E test
* Changes to comments.
* Adding 25 km tests to Jet/Hera suites. (#718)
* Add a small 3km predefined grid over Indianapolis for testing (#725)
* Add 3km grid over Indianapolis. This is about 600km x 600km in extent (200 x 200 grid points). It is intended for use in the WE2E tests.
* Edit comments.
* Use Python tool for get_extrn_mdl_file tasks (#681)

These changes hook in the Python-based data ingest tool, replacing the previous scripts that handled this work as part of the get_extrn_mdl_file task.
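As a rough illustration of the new flow, the get_extrn_mdl_files task can now stage external-model data with a single call to the Python tool. The sketch below is illustrative only: the flag names are assumptions about the `ush/retrieve_data.py` command line (consult `python3 retrieve_data.py --help` for the actual interface), and the model, cycle, and paths are placeholders.

```
# Illustrative sketch only -- the flag names below are assumptions, not taken
# from the actual retrieve_data.py interface; consult its --help output.
# Stage FV3GFS output for one cycle from the first data store that has it,
# using the store definitions in data_locations.yml.
python3 ${USHDIR}/retrieve_data.py \
  --config ${USHDIR}/templates/data_locations.yml \
  --external_model FV3GFS \
  --cycle_date 2022060712 \
  --data_stores hpss aws \
  --fcst_hrs 0 6 3 \
  --output_path ${CYCLE_DIR}/FV3GFS/for_LBCS \
  --summary_file extrn_mdl_var_defns.sh
```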
No attempt was made in this PR to replace the NOMADS fetching script with the Python utility, but the NOMADS data location has been added to the data_locations.yml file. The functionality to write the data summary file has also been added to the Python tool to match the capabilities of the existing workflow tools.

* Increase size of RRFS CONUS grid (#724)

Co-authored-by: Benjamin.Blake EMC
Co-authored-by: Benjamin.Blake EMC
Co-authored-by: Benjamin.Blake EMC
Co-authored-by: chan-hoo

* add include-style quality mark options in metplus confs (#738)
* Add Gaea as a supported platform for the regional_workflow (#734)
* Updates to port regional workflow to gaea
* Temp change with -v as batch option
* new fixes for gaea/slurm
* Updated time for make lbcs
* added TEST data directory path
* Update gaea.sh
* fixes for PR
* Add more parameters to CSV file containing WE2E test info (#740)

## DESCRIPTION OF CHANGES:
The script/function `get_WE2Etest_names_subdirs_descs.sh` (which is called from `run_WE2E_tests.sh` if needed) creates a CSV (Comma-Separated Values) file named `WE2E_test_info.csv` that contains information about the WE2E tests. Currently, this CSV file contains only 3 columns: the test name, any alternate names for the test, and the test description. In order to have a more complete summary of the WE2E tests, this PR modifies `get_WE2Etest_names_subdirs_descs.sh` so that additional information is included in the CSV file. This additional information consists of the values of the following experiment variables for each test:
```
PREDEF_GRID_NAME
CCPP_PHYS_SUITE
EXTRN_MDL_NAME_ICS
EXTRN_MDL_NAME_LBCS
DATE_FIRST_CYCL
DATE_LAST_CYCL
CYCL_HRS
INCR_CYCL_FREQ
FCST_LEN_HRS
LBC_SPEC_INTVL_HRS
NUM_ENS_MEMBERS
```
In addition, the script uses this information to calculate the number of times each test calls the forecast model (e.g. if the test uses 3 different cycle dates, then the forecast model will be called 3 times; if it is an ensemble test for a single cycle, the test will call the forecast model as many times as the number of ensemble members).

## TESTS CONDUCTED:
The script `run_WE2E_tests.sh` was run, which in turn calls `get_WE2Etest_names_subdirs_descs.sh`. This created a new CSV file that contained the new fields (columns). The CSV file was imported into Google Sheets (using "|" as the field/column separator) and looked correct.

## DOCUMENTATION:
The documentation is for the most part already within `get_WE2Etest_names_subdirs_descs.sh` itself. This PR slightly modifies that documentation to update it.

* Update directory structure of NCO mode (#743)
* update vertical structure of NCO mode
* update sample script for nco
* Fix typo on write component of new RRFS CONUS
* Default CCPP physics option is FV3_GFS_v16 (#746)
* Updated the default CCPP physics option to FV3_GFS_v16
* Updated the default CCPP physics option to FV3_GFS_v16 in config_defaults.sh

Co-authored-by: Natalie Perlin

* Adds an alternative python workflow generation path (#698)
* Workflow in python starting to work.
* Use new python_utils package structure.
* Some bug fixes.
* Use uppercase TRUE/FALSE in var_dfns
* Use config.sh by default.
* Minor bug fixes.
* Remove config.yaml
* Update to the latest develop
* Remove quotes from numbers in predef grid.
* Minor bug fix.
* Move validity checker to the bottom of setup
* Add more unit tests.
* Update with python_utils changes.
* Update to latest develop additions (Need to re-run regression test)
* Use set_namelist and fill_jinja_template as python functions.
* Replace sed regex searches with python re.
* Use python realpath.
* Construct settings as dictionary before passing to fill_jinja and set_namelist
* Use yaml for setting predefined grid parameters.
* Use xml parser for ccpp phys suite definition file.
* Remove more run_command calls.
* Simplify some func argument processing.
* Move different config format parsers to same file.
* Use os.path.join for the sake of macosx
* Remove remaining func argument processing via os.environ.
* Minor bug fix in set_extrn_mdl_params.sh
* Add suite defn in test_data.
* Minor fixes on unittest on jet.
* Simplify boolean condition checks.
* Include old in renaming of old directories
* Fix conflicting yaml !join tag for paths and strings.
* Bug fix with setting sfcperst dict.
* Imitate "readlink -m" with os.path.realpath instead of os.readlink
* Don't use /tmp as that is shared by multiple users.
* Bug fix with cron line, maintain quotes around TRUE/FALSE.
* Update to latest develop (untested)
* Bug fix with existing cron line and quotes.
* Bug fix with case-sensitive MACHINE name, and empty EXPT_DIR.
* Update to latest develop
* More updates.
* Bug fix thanks to @willmayfield! Check both starting/ending characters are brackets for shell variable to be considered an array.
* Make empty EXPT_BASEDIR workable.
* Update to latest develop
* Update in predef grid.
* Check f90nml as well.

Co-authored-by: Daniel Abdi

* Fix typo and crontab issue on wcoss dell in workflow python scripts (#750)
* Fix typo and failure on wcoss
* fix new line issue on wcoss dell
* remove capture_output
* Get USER from environment

Co-authored-by: Daniel Abdi

* Add new WE2E configs (#748)

## DESCRIPTION OF CHANGES:
Added two new WE2E config files for the Sub-CONUS Indianapolis domain to support the upcoming SRW release. In addition, modified the external data used in the `config.specify_EXTRN_MDL_SYSBASEDIR_ICS_LBCS.sh` to match more common datasets used in the WE2E testing process.

## TESTS CONDUCTED:
Successfully ran the new WE2E tests (`config.SUBCONUS_Ind_3km_ics_HRRR_lbcs_RAP_suite_HRRR.sh`, `config.SUBCONUS_Ind_3km_ics_HRRR_lbcs_RAP_suite_RRFS_v1beta.sh`) and `config.specify_EXTRN_MDL_SYSBASEDIR_ICS_LBCS.sh` on NOAA Parallel Works AWS instance.

## DEPENDENCIES:
None.

## DOCUMENTATION:
No documentation changes are required.

* Added a fixed WoF grid and the python tool to determine the write component parameters (#733)
* Added a fixed WoF grid and the python tool to determine the write component parameters
* Update set_predef_grid_params.sh
* Renamed file as recommended and removed unused lines
* Modified comment

Co-authored-by: JeffBeck-NOAA <55201531+JeffBeck-NOAA@users.noreply.github.com>
Co-authored-by: WYH@MBP

* Replace env with modulefiles in scripts (#752)
* change env to mod
* update we2e script
* WE2E script improvements for usability (#745)

## DESCRIPTION OF CHANGES:
* Modifications to `run_WE2E_tests.sh`:
  * Add examples to help/usage statement
* Modifications to `check_expts_status.sh`:
  * Add arguments list that can be processed by `process_args`
  * Add new optional arguments: `num_log_lines`, `verbose`
  * Include a help/usage message

## TESTS CONDUCTED:
* Ran `run_WE2E_tests.sh --help` from the command line and got the expected help message.
* Ran `check_expts_status.sh --help` from the command line and got the expected help message.
* Used `run_WE2E_tests.sh` to run a set of 2 WE2E tests -- works as expected.
* Used `check_expts_status` to check on the status of the 2 tests run above and got the expected status message.
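For reference, typical invocations look like the following. Apart from `num_log_lines` and `verbose` (added by this PR), the argument names are assumptions based on the scripts' usual `name="value"` calling convention rather than their actual help output, and the file names and account are placeholders.

```
# Launch a set of WE2E tests listed in a text file (argument names here are
# illustrative assumptions; see "./run_WE2E_tests.sh --help" for the real ones).
./run_WE2E_tests.sh \
  tests_file="my_we2e_tests.txt" \
  machine="hera" \
  account="an_account"

# Check the status of the resulting experiments, showing the last 40 lines of
# each workflow log; num_log_lines and verbose are the new optional arguments
# described above.
./check_expts_status.sh \
  expts_basedir="/path/to/expt_dirs" \
  num_log_lines="40" \
  verbose="TRUE"
```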
## DEPENDENCIES:
PR #[241](https://github.com/ufs-community/ufs-srweather-app/pull/241)

## DOCUMENTATION:
A lot of this PR is documentation in the scripts. There is an accompanying documentation PR #[241](https://github.com/ufs-community/ufs-srweather-app/pull/241) into ufs-srweather-app.

* Standardize static data across Tier-1 platforms; fix and improve IC and LBC data retrieval (#744)
* Bug fixes (grid size + suppress screen output from module load) (#756)

## DESCRIPTION OF CHANGES:
1) Adjust the y-direction size of the write-component grid of the `SUBCONUS_Ind_3km` predefined grid from 195 to 197 (this was just an oversight in PR #725).
2) Redirect the output of the module load in the launch script (`launch_FV3LAM_wflow.sh`) to `/dev/null` to avoid unwanted screen output (this output was introduced in PR #[238](https://github.com/ufs-community/ufs-srweather-app/pull/238) in ufs-srweather-app; it describes how to load the `regional_workflow` environment and is not relevant in this context).

## TESTS CONDUCTED:
1) Plotted the `SUBCONUS_Ind_3km` grid to ensure it has the correct size (it does).
2) Manually ran `launch_FV3LAM_wflow.sh` from the command line to verify that screen output is suppressed (it is).

* Update default SPP ISEED array in config_defaults.sh to use unique values (#759)
* Modify RRFS North America 3- and 13-km domain configuration and WE2E test.
* Modify default ISEED values for SPP
* Fix grid in WE2E test
* Update workflow python scripts (#760)
* update python scripts
* Change output file name of run_post to meet NCO standards (#758)
* change output file name
* change variable name
* update python script
* remove duplicates
* add a check for empty variables
* move variable to common area
* clean up unnecessary comments
* update scripts
* remove duplicate
* update python scripts
* fix user-staged dir path issue in python script
* Add POST_OUTPUT_DOMAIN_NAME to WE2E tests for new grids (#763)
* Add new var to we2e tests for new grids
* rename we2e tests for custom grid
* remove unnecessary $
* Modifications to `CODEOWNERS` file (#757)
* Add @gspetro-NOAA, @natalie-perlin, and @EdwardSnyder-NOAA to CODEOWNERS so they are notified of all PRs and can review them.
* Remove duplicates in CODEOWNERS; remove users who will no longer be working with the repo.
* Adding a python utility for summarizing compute. (#769)

Adds a utility that summarizes Rocoto database computational usage information (see the sketch below).

* Add github actions for python unittests. (#747)
* Add github actions for python unittests.
* Include all python scripts in ush
* Skip defining QUILTING params when it is set to False
* Update py_workflow
* Update unittest for set_extrn_mdl_params.
* Updates from develop.

Co-authored-by: Daniel Shawul

* Update sample script for NCO mode (#771)
* update config.nco.sh
* Add comment
* Feature/noaacloud (#767)
* updates for noaacloud
* working version
* fixes for noaacloud
* added extra modules for post
* removed cheyenne-specific crontab editing section (#773)
* Pin down hera miniconda3 module file version. (#770)

Pin down the version of miniconda3 on Hera, and do not append to the module path.
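Returning to the compute-summary utility from #769 above: the general idea is to read the SQLite database that Rocoto maintains for each experiment and aggregate the recorded job usage. Below is a minimal sketch of that idea, assuming the database is named `FV3LAM_wflow.db` and that Rocoto's `jobs` table carries `cores` and `duration` columns; these are assumptions about the schema, not details taken from `create_WE2E_resource_summary.py`.

```
# Rough sketch: sum core-hours per task from a Rocoto database.  The table and
# column names (jobs, taskname, cores, duration) are assumptions about Rocoto's
# SQLite schema; tests/WE2E/create_WE2E_resource_summary.py is the supported
# way to produce this summary.
sqlite3 ${EXPT_DIR}/FV3LAM_wflow.db \
  "SELECT taskname, SUM(cores * duration) / 3600.0 AS core_hours
     FROM jobs
     GROUP BY taskname;"
```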
* update staged data dir (#774) Co-authored-by: JeffBeck-NOAA <55201531+JeffBeck-NOAA@users.noreply.github.com> Co-authored-by: Mark Potts <33099090+mark-a-potts@users.noreply.github.com> Co-authored-by: michelleharrold Co-authored-by: Chan-Hoo.Jeon-NOAA <60152248+chan-hoo@users.noreply.github.com> Co-authored-by: gsketefian <31046882+gsketefian@users.noreply.github.com> Co-authored-by: BenjaminBlake-NOAA <52074832+BenjaminBlake-NOAA@users.noreply.github.com> Co-authored-by: Benjamin.Blake EMC Co-authored-by: Benjamin.Blake EMC Co-authored-by: Benjamin.Blake EMC Co-authored-by: chan-hoo Co-authored-by: Will Mayfield <59745143+willmayfield@users.noreply.github.com> Co-authored-by: Natalie Perlin <68030316+natalie-perlin@users.noreply.github.com> Co-authored-by: Natalie Perlin Co-authored-by: danielabdi-noaa <52012304+danielabdi-noaa@users.noreply.github.com> Co-authored-by: Daniel Abdi Co-authored-by: Daniel Abdi Co-authored-by: EdwardSnyder-NOAA <96196752+EdwardSnyder-NOAA@users.noreply.github.com> Co-authored-by: Yunheng Wang <47898913+ywangwof@users.noreply.github.com> Co-authored-by: WYH@MBP Co-authored-by: Michael Kavulich Co-authored-by: Daniel Shawul --- .github/workflows/python_unittests.yaml | 27 + jobs/JREGIONAL_GET_EXTRN_MDL_FILES | 260 +- modulefiles/tasks/cheyenne/get_extrn_ics | 5 +- modulefiles/tasks/cheyenne/get_extrn_lbcs | 5 +- modulefiles/tasks/cheyenne/make_grid.local | 9 +- modulefiles/tasks/cheyenne/make_ics.local | 9 +- modulefiles/tasks/cheyenne/make_lbcs.local | 9 +- .../tasks/cheyenne/pylib_regional_workflow | 9 + modulefiles/tasks/cheyenne/run_fcst.local | 9 +- modulefiles/tasks/cheyenne/run_vx.local | 9 +- modulefiles/tasks/gaea/make_grid.local | 6 + modulefiles/tasks/gaea/make_ics.local | 6 + modulefiles/tasks/gaea/make_lbcs.local | 6 + modulefiles/tasks/gaea/run_fcst.local | 6 + modulefiles/tasks/gaea/run_vx.local | 7 + modulefiles/tasks/hera/get_extrn_ics.local | 1 + modulefiles/tasks/hera/get_extrn_lbcs.local | 1 + modulefiles/tasks/hera/get_obs.local | 4 +- modulefiles/tasks/hera/make_grid.local | 4 +- modulefiles/tasks/hera/make_ics.local | 4 +- modulefiles/tasks/hera/make_lbcs.local | 5 +- .../tasks/hera/miniconda_regional_workflow | 5 + modulefiles/tasks/hera/run_fcst.local | 5 +- modulefiles/tasks/jet/get_extrn_ics.local | 1 + modulefiles/tasks/jet/get_extrn_lbcs.local | 1 + modulefiles/tasks/jet/make_grid.local | 4 +- modulefiles/tasks/jet/make_ics.local | 7 +- modulefiles/tasks/jet/make_lbcs.local | 7 +- .../tasks/jet/miniconda_regional_workflow | 5 + modulefiles/tasks/jet/run_fcst.local | 5 +- .../tasks/noaacloud/get_extrn_ics.local | 7 + .../tasks/noaacloud/get_extrn_lbcs.local | 6 + modulefiles/tasks/noaacloud/make_grid.local | 12 + modulefiles/tasks/noaacloud/make_ics.local | 14 + modulefiles/tasks/noaacloud/make_lbcs.local | 14 + modulefiles/tasks/noaacloud/make_orog.local | 12 + .../tasks/noaacloud/make_sfc_climo.local | 12 + .../noaacloud/miniconda_regional_workflow | 5 + modulefiles/tasks/noaacloud/run_fcst.local | 14 + modulefiles/tasks/noaacloud/run_post.local | 14 + modulefiles/tasks/orion/get_extrn_ics.local | 1 + modulefiles/tasks/orion/get_extrn_lbcs.local | 1 + modulefiles/tasks/orion/make_grid.local | 5 +- modulefiles/tasks/orion/make_ics.local | 5 +- modulefiles/tasks/orion/make_lbcs.local | 5 +- .../tasks/orion/miniconda_regional_workflow | 5 + modulefiles/tasks/orion/run_fcst.local | 5 +- scripts/exregional_get_extrn_mdl_files.sh | 757 +----- scripts/exregional_make_grid.sh | 2 +- scripts/exregional_make_ics.sh | 4 +- 
scripts/exregional_make_lbcs.sh | 6 +- scripts/exregional_run_ensgridvx.sh | 1 + scripts/exregional_run_ensgridvx_mean.sh | 1 + scripts/exregional_run_ensgridvx_prob.sh | 1 + scripts/exregional_run_enspointvx.sh | 1 + scripts/exregional_run_enspointvx_mean.sh | 1 + scripts/exregional_run_enspointvx_prob.sh | 1 + scripts/exregional_run_fcst.sh | 12 +- scripts/exregional_run_gridstatvx.sh | 1 + scripts/exregional_run_pointstatvx.sh | 1 + scripts/exregional_run_post.sh | 15 +- tests/WE2E/create_WE2E_resource_summary.py | 187 ++ .../WE2E/get_WE2Etest_names_subdirs_descs.sh | 306 ++- tests/WE2E/get_expts_status.sh | 157 +- tests/WE2E/machine_suites/hera.txt | 14 +- tests/WE2E/machine_suites/jet.txt | 14 +- tests/WE2E/run_WE2E_tests.sh | 189 +- tests/WE2E/setup_WE2E_tests.sh | 7 +- ...NUS_25km_ics_FV3GFS_lbcs_RAP_suite_HRRR.sh | 1 + ..._CONUS_25km_ics_NAM_lbcs_NAM_suite_HRRR.sh | 8 +- ...25km_ics_NAM_lbcs_NAM_suite_RRFS_v1beta.sh | 8 +- ...km_ics_FV3GFS_lbcs_FV3GFS_suite_GFS_v16.sh | 25 + ...pact_13km_ics_HRRR_lbcs_RAP_suite_HRRR.sh} | 6 +- ...m_ics_HRRR_lbcs_RAP_suite_RRFS_v1alpha.sh} | 9 +- ...km_ics_HRRR_lbcs_RAP_suite_RRFS_v1beta.sh} | 9 +- ...km_ics_FV3GFS_lbcs_FV3GFS_suite_GFS_v16.sh | 25 + ...act_25km_ics_HRRR_lbcs_HRRR_suite_HRRR.sh} | 7 +- ...m_ics_HRRR_lbcs_HRRR_suite_RRFS_v1beta.sh} | 6 +- ...pact_25km_ics_HRRR_lbcs_RAP_suite_HRRR.sh} | 9 +- ...m_ics_HRRR_lbcs_RAP_suite_RRFS_v1alpha.sh} | 9 +- ...km_ics_HRRR_lbcs_RAP_suite_RRFS_v1beta.sh} | 9 +- ...km_ics_FV3GFS_lbcs_FV3GFS_suite_GFS_v16.sh | 25 + ..._3km_ics_HRRR_lbcs_RAP_suite_GFS_v15p2.sh} | 9 +- ...mpact_3km_ics_HRRR_lbcs_RAP_suite_HRRR.sh} | 10 +- ...m_ics_HRRR_lbcs_RAP_suite_RRFS_v1alpha.sh} | 10 +- ...km_ics_HRRR_lbcs_RAP_suite_RRFS_v1beta.sh} | 10 +- ...cs_FV3GFS_lbcs_FV3GFS_suite_RRFS_v1beta.sh | 54 + ...s_FV3GFS_lbcs_FV3GFS_suite_RRFS_v1alpha.sh | 2 +- ...S_3km_ics_HRRR_lbcs_RAP_suite_GFS_v15p2.sh | 2 + ...km_ics_FV3GFS_lbcs_FV3GFS_suite_GFS_v16.sh | 27 + ...US_Ind_3km_ics_HRRR_lbcs_RAP_suite_HRRR.sh | 27 + ...3km_ics_HRRR_lbcs_RAP_suite_RRFS_v1beta.sh | 27 + ...m_ics_FV3GFS_lbcs_FV3GFS_suite_GFS_v16.sh} | 4 +- ..._ics_FV3GFS_lbcs_FV3GFS_suite_GFS_v15p2.sh | 26 - ..._25km_ics_HRRR_lbcs_RAP_suite_GFS_v15p2.sh | 25 - ...CONUS_25km_ics_HRRR_lbcs_RAP_suite_HRRR.sh | 25 - ...GFS_suite_GFS_v15_thompson_mynn_lam3km.sh} | 8 +- ...km_ics_FV3GFS_lbcs_FV3GFS_suite_GFS_v16.sh | 26 - ...mpact_25km_ics_HRRR_lbcs_RAP_suite_HRRR.sh | 29 + ...ew_ESGgrid.sh => config.custom_ESGgrid.sh} | 6 +- ..._GFDLgrid.sh => config.custom_GFDLgrid.sh} | 2 + ...USE_GFDLgrid_RES_IN_FILENAMES_eq_FALSE.sh} | 2 + ..._USE_GFDLgrid_RES_IN_FILENAMES_eq_TRUE.sh} | 2 + .../config.get_from_HPSS_ics_HRRR_lbcs_RAP.sh | 4 +- .../wflow_features/config.nco_ensemble.sh | 10 +- .../wflow_features/config.nco_inline_post.sh | 29 +- ...ig.specify_DT_ATMOS_LAYOUT_XY_BLOCKSIZE.sh | 4 +- ...g.specify_EXTRN_MDL_SYSBASEDIR_ICS_LBCS.sh | 6 +- .../config.specify_template_filenames.sh | 30 + .../wflow_features/config.subhourly_post.sh | 4 +- .../config.subhourly_post_ensemble_2mems.sh | 4 +- ush/NOMADS_get_extrn_mdl_files.sh | 12 +- ush/check_expt_config_vars.sh | 2 +- ush/check_ruc_lsm.py | 30 + ush/config.community.sh | 4 +- ush/config.nco.sh | 41 +- ush/config_defaults.sh | 245 +- ush/config_defaults.yaml | 2022 ++++++++++++++ ush/constants.py | 27 + ush/create_diag_table_file.py | 79 + ush/create_model_configure_file.py | 243 ++ ush/fill_jinja_template.py | 18 +- ush/generate_FV3LAM_wflow.py | 1126 ++++++++ ush/generate_FV3LAM_wflow.sh | 78 +- ush/get_crontab_contents.py | 83 + 
ush/get_crontab_contents.sh | 9 - ush/get_extrn_mdl_file_dir_info.sh | 681 ----- ush/launch_FV3LAM_wflow.sh | 8 +- ush/link_fix.py | 391 +++ ush/load_modules_run_task.sh | 15 +- ush/machine/cheyenne.sh | 25 +- ush/machine/gaea.sh | 70 + ush/machine/hera.sh | 27 +- ush/machine/jet.sh | 21 +- ush/machine/noaacloud.sh | 37 +- ush/machine/odin.sh | 32 +- ush/machine/orion.sh | 28 +- ush/machine/singularity.sh | 4 +- ush/machine/stampede.sh | 34 +- ush/machine/wcoss_dell_p3.sh | 27 +- ush/mrms_pull_topofhour.py | 160 +- ush/predef_grid_params.yaml | 909 +++++++ ush/python_utils/__init__.py | 12 +- ush/python_utils/change_case.py | 25 - .../check_for_preexist_dir_file.py | 2 +- ush/python_utils/config_parser.py | 159 +- ush/python_utils/environment.py | 2 +- ush/python_utils/fv3write_parms_lambert.py | 91 + ush/python_utils/get_elem_inds.py | 2 +- .../get_manage_externals_config_property.py | 54 - ush/python_utils/misc.py | 57 + ush/python_utils/print_input_args.py | 2 +- ush/python_utils/test_python_utils.py | 45 +- ush/python_utils/xml_parser.py | 30 + ush/retrieve_data.py | 286 +- ush/rocoto/.gitignore | 47 - ush/rocoto/fv3gfs_workflow.sh | 52 - ush/rocoto/rocoto.py | 346 --- ush/rocoto/rocoto_viewer.py | 2383 ----------------- ush/rocoto/setup_expt.py | 199 -- ush/rocoto/setup_expt_fcstonly.py | 159 -- ush/rocoto/setup_workflow.py | 831 ------ ush/rocoto/setup_workflow_fcstonly.py | 378 --- ush/rocoto/workflow_utils.py | 341 --- ush/set_FV3nml_ens_stoch_seeds.py | 137 + ...arams.sh => set_FV3nml_ens_stoch_seeds.sh} | 81 +- ush/set_FV3nml_sfc_climo_filenames.py | 141 + ush/set_cycle_dates.py | 54 + ush/set_extrn_mdl_params.py | 56 + ush/set_extrn_mdl_params.sh | 19 +- ush/set_gridparams_ESGgrid.py | 110 + ush/set_gridparams_GFDLgrid.py | 467 ++++ ush/set_namelist.py | 17 +- ush/set_ozone_param.py | 231 ++ ush/set_predef_grid_params.py | 64 + ush/set_predef_grid_params.sh | 338 ++- ush/set_thompson_mp_fix_files.py | 176 ++ ush/setup.py | 2220 +++++++++++++++ ush/setup.sh | 107 +- ush/templates/FV3LAM_wflow.xml | 126 +- ush/templates/data_locations.yml | 77 +- .../parm/metplus/EnsembleStat_APCP01h.conf | 2 +- .../parm/metplus/EnsembleStat_APCP03h.conf | 6 +- .../parm/metplus/EnsembleStat_APCP06h.conf | 6 +- .../parm/metplus/EnsembleStat_APCP24h.conf | 6 +- .../parm/metplus/EnsembleStat_REFC.conf | 2 +- .../parm/metplus/EnsembleStat_RETOP.conf | 2 +- .../parm/metplus/EnsembleStat_conus_sfc.conf | 5 +- .../parm/metplus/EnsembleStat_upper_air.conf | 5 +- .../parm/metplus/GridStat_APCP01h.conf | 2 +- .../parm/metplus/GridStat_APCP03h.conf | 4 +- .../parm/metplus/GridStat_APCP06h.conf | 4 +- .../parm/metplus/GridStat_APCP24h.conf | 4 +- ush/templates/parm/metplus/GridStat_REFC.conf | 2 +- .../parm/metplus/GridStat_RETOP.conf | 2 +- .../parm/metplus/PointStat_conus_sfc.conf | 5 +- .../metplus/PointStat_conus_sfc_mean.conf | 3 +- .../metplus/PointStat_conus_sfc_prob.conf | 3 +- .../parm/metplus/PointStat_upper_air.conf | 5 +- .../metplus/PointStat_upper_air_mean.conf | 3 +- .../metplus/PointStat_upper_air_prob.conf | 3 +- .../RRFS_CONUS_3km/C3357.facsf.tile7.halo0.nc | 0 .../RRFS_CONUS_3km/C3357.facsf.tile7.halo4.nc | 0 .../C3357.maximum_snow_albedo.tile7.halo0.nc | 0 .../C3357.maximum_snow_albedo.tile7.halo4.nc | 0 .../C3357.slope_type.tile7.halo0.nc | 0 .../C3357.slope_type.tile7.halo4.nc | 0 .../C3357.snowfree_albedo.tile7.halo0.nc | 0 .../C3357.snowfree_albedo.tile7.halo4.nc | 0 .../C3357.soil_type.tile7.halo0.nc | 0 .../C3357.soil_type.tile7.halo4.nc | 0 
...C3357.substrate_temperature.tile7.halo0.nc | 0 ...C3357.substrate_temperature.tile7.halo4.nc | 0 .../C3357.vegetation_greenness.tile7.halo0.nc | 0 .../C3357.vegetation_greenness.tile7.halo4.nc | 0 .../C3357.vegetation_type.tile7.halo0.nc | 0 .../C3357.vegetation_type.tile7.halo4.nc | 0 .../RRFS_CONUS_3km/C3357_grid.tile7.halo3.nc | 0 .../RRFS_CONUS_3km/C3357_grid.tile7.halo4.nc | 0 .../RRFS_CONUS_3km/C3357_grid.tile7.halo6.nc | 0 .../RRFS_CONUS_3km/C3357_mosaic.halo3.nc | 0 .../RRFS_CONUS_3km/C3357_mosaic.halo4.nc | 0 .../RRFS_CONUS_3km/C3357_mosaic.halo6.nc | 0 .../C3357_oro_data.tile7.halo0.nc | 0 .../C3357_oro_data.tile7.halo4.nc | 0 .../C3357_oro_data_ls.tile7.halo0.nc | 0 .../C3357_oro_data_ss.tile7.halo0.nc | 0 ush/test_data/suite_FV3_GSD_SAR.xml | 85 + ush/valid_param_vals.sh | 9 +- ush/valid_param_vals.yaml | 85 + ush/wrappers/qsub_job.sh | 19 +- ush/wrappers/run_gridensvx.sh | 22 + ush/wrappers/run_gridvx.sh | 19 + ush/wrappers/run_pointensvx.sh | 20 + ush/wrappers/run_pointvx.sh | 17 + ush/wrappers/sq_job.sh | 21 +- 236 files changed, 12046 insertions(+), 7573 deletions(-) create mode 100644 .github/workflows/python_unittests.yaml create mode 100644 modulefiles/tasks/cheyenne/pylib_regional_workflow create mode 100644 modulefiles/tasks/gaea/make_grid.local create mode 100644 modulefiles/tasks/gaea/make_ics.local create mode 100644 modulefiles/tasks/gaea/make_lbcs.local create mode 100644 modulefiles/tasks/gaea/run_fcst.local create mode 100644 modulefiles/tasks/gaea/run_vx.local create mode 100644 modulefiles/tasks/hera/miniconda_regional_workflow create mode 100644 modulefiles/tasks/jet/miniconda_regional_workflow create mode 100644 modulefiles/tasks/noaacloud/get_extrn_ics.local create mode 100644 modulefiles/tasks/noaacloud/get_extrn_lbcs.local create mode 100644 modulefiles/tasks/noaacloud/make_grid.local create mode 100644 modulefiles/tasks/noaacloud/make_ics.local create mode 100644 modulefiles/tasks/noaacloud/make_lbcs.local create mode 100644 modulefiles/tasks/noaacloud/make_orog.local create mode 100644 modulefiles/tasks/noaacloud/make_sfc_climo.local create mode 100644 modulefiles/tasks/noaacloud/miniconda_regional_workflow create mode 100644 modulefiles/tasks/noaacloud/run_fcst.local create mode 100644 modulefiles/tasks/noaacloud/run_post.local create mode 100644 modulefiles/tasks/orion/miniconda_regional_workflow create mode 100644 tests/WE2E/create_WE2E_resource_summary.py create mode 100644 tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_CONUScompact_13km_ics_FV3GFS_lbcs_FV3GFS_suite_GFS_v16.sh rename tests/WE2E/test_configs/grids_extrn_mdls_suites_community/{config.grid_RRFS_CONUS_13km_ics_HRRR_lbcs_RAP_suite_HRRR.sh => config.grid_RRFS_CONUScompact_13km_ics_HRRR_lbcs_RAP_suite_HRRR.sh} (68%) rename tests/WE2E/test_configs/grids_extrn_mdls_suites_community/{config.grid_RRFS_CONUS_3km_ics_HRRR_lbcs_RAP_suite_RRFS_v1alpha.sh => config.grid_RRFS_CONUScompact_13km_ics_HRRR_lbcs_RAP_suite_RRFS_v1alpha.sh} (65%) rename tests/WE2E/test_configs/grids_extrn_mdls_suites_community/{config.grid_RRFS_CONUS_3km_ics_HRRR_lbcs_RAP_suite_RRFS_v1beta.sh => config.grid_RRFS_CONUScompact_13km_ics_HRRR_lbcs_RAP_suite_RRFS_v1beta.sh} (65%) create mode 100644 tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_CONUScompact_25km_ics_FV3GFS_lbcs_FV3GFS_suite_GFS_v16.sh rename tests/WE2E/test_configs/grids_extrn_mdls_suites_community/{config.grid_RRFS_CONUS_25km_ics_HRRR_lbcs_HRRR_suite_HRRR.sh => 
config.grid_RRFS_CONUScompact_25km_ics_HRRR_lbcs_HRRR_suite_HRRR.sh} (67%) rename tests/WE2E/test_configs/grids_extrn_mdls_suites_community/{config.grid_RRFS_CONUS_25km_ics_HRRR_lbcs_HRRR_suite_RRFS_v1beta.sh => config.grid_RRFS_CONUScompact_25km_ics_HRRR_lbcs_HRRR_suite_RRFS_v1beta.sh} (66%) rename tests/WE2E/test_configs/grids_extrn_mdls_suites_community/{config.grid_RRFS_CONUS_25km_ics_HRRR_lbcs_RAP_suite_HRRR.sh => config.grid_RRFS_CONUScompact_25km_ics_HRRR_lbcs_RAP_suite_HRRR.sh} (66%) rename tests/WE2E/test_configs/grids_extrn_mdls_suites_community/{config.grid_RRFS_CONUS_25km_ics_HRRR_lbcs_RAP_suite_RRFS_v1alpha.sh => config.grid_RRFS_CONUScompact_25km_ics_HRRR_lbcs_RAP_suite_RRFS_v1alpha.sh} (65%) rename tests/WE2E/test_configs/grids_extrn_mdls_suites_community/{config.grid_RRFS_CONUS_25km_ics_HRRR_lbcs_RAP_suite_RRFS_v1beta.sh => config.grid_RRFS_CONUScompact_25km_ics_HRRR_lbcs_RAP_suite_RRFS_v1beta.sh} (65%) create mode 100644 tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_CONUScompact_3km_ics_FV3GFS_lbcs_FV3GFS_suite_GFS_v16.sh rename tests/WE2E/test_configs/grids_extrn_mdls_suites_community/{config.grid_RRFS_CONUS_3km_ics_HRRR_lbcs_RAP_suite_GFS_v15p2.sh => config.grid_RRFS_CONUScompact_3km_ics_HRRR_lbcs_RAP_suite_GFS_v15p2.sh} (65%) rename tests/WE2E/test_configs/grids_extrn_mdls_suites_community/{config.grid_RRFS_CONUS_3km_ics_HRRR_lbcs_RAP_suite_HRRR.sh => config.grid_RRFS_CONUScompact_3km_ics_HRRR_lbcs_RAP_suite_HRRR.sh} (66%) rename tests/WE2E/test_configs/grids_extrn_mdls_suites_community/{config.grid_RRFS_CONUS_13km_ics_HRRR_lbcs_RAP_suite_RRFS_v1alpha.sh => config.grid_RRFS_CONUScompact_3km_ics_HRRR_lbcs_RAP_suite_RRFS_v1alpha.sh} (56%) rename tests/WE2E/test_configs/grids_extrn_mdls_suites_community/{config.grid_RRFS_CONUS_13km_ics_HRRR_lbcs_RAP_suite_RRFS_v1beta.sh => config.grid_RRFS_CONUScompact_3km_ics_HRRR_lbcs_RAP_suite_RRFS_v1beta.sh} (56%) create mode 100644 tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_NA_13km_ics_FV3GFS_lbcs_FV3GFS_suite_RRFS_v1beta.sh create mode 100644 tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_SUBCONUS_Ind_3km_ics_FV3GFS_lbcs_FV3GFS_suite_GFS_v16.sh create mode 100644 tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_SUBCONUS_Ind_3km_ics_HRRR_lbcs_RAP_suite_HRRR.sh create mode 100644 tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_SUBCONUS_Ind_3km_ics_HRRR_lbcs_RAP_suite_RRFS_v1beta.sh rename tests/WE2E/test_configs/grids_extrn_mdls_suites_nco/{config.nco_grid_RRFS_CONUS_13km_ics_FV3GFS_lbcs_FV3GFS_suite_GFS_v15p2.sh => config.nco_grid_RRFS_CONUS_13km_ics_FV3GFS_lbcs_FV3GFS_suite_GFS_v16.sh} (82%) delete mode 100644 tests/WE2E/test_configs/grids_extrn_mdls_suites_nco/config.nco_grid_RRFS_CONUS_25km_ics_FV3GFS_lbcs_FV3GFS_suite_GFS_v15p2.sh delete mode 100644 tests/WE2E/test_configs/grids_extrn_mdls_suites_nco/config.nco_grid_RRFS_CONUS_25km_ics_HRRR_lbcs_RAP_suite_GFS_v15p2.sh delete mode 100644 tests/WE2E/test_configs/grids_extrn_mdls_suites_nco/config.nco_grid_RRFS_CONUS_25km_ics_HRRR_lbcs_RAP_suite_HRRR.sh rename tests/WE2E/test_configs/grids_extrn_mdls_suites_nco/{config.nco_grid_RRFS_CONUS_3km_ics_FV3GFS_lbcs_FV3GFS_suite_GFS_v15p2.sh => config.nco_grid_RRFS_CONUS_3km_ics_FV3GFS_lbcs_FV3GFS_suite_GFS_v15_thompson_mynn_lam3km.sh} (67%) delete mode 100644 tests/WE2E/test_configs/grids_extrn_mdls_suites_nco/config.nco_grid_RRFS_CONUS_3km_ics_FV3GFS_lbcs_FV3GFS_suite_GFS_v16.sh create mode 100644 
tests/WE2E/test_configs/grids_extrn_mdls_suites_nco/config.nco_grid_RRFS_CONUScompact_25km_ics_HRRR_lbcs_RAP_suite_HRRR.sh rename tests/WE2E/test_configs/wflow_features/{config.new_ESGgrid.sh => config.custom_ESGgrid.sh} (92%) rename tests/WE2E/test_configs/wflow_features/{config.new_GFDLgrid.sh => config.custom_GFDLgrid.sh} (98%) rename tests/WE2E/test_configs/wflow_features/{config.new_GFDLgrid__GFDLgrid_USE_GFDLgrid_RES_IN_FILENAMES_eq_FALSE.sh => config.custom_GFDLgrid__GFDLgrid_USE_GFDLgrid_RES_IN_FILENAMES_eq_FALSE.sh} (97%) rename tests/WE2E/test_configs/wflow_features/{config.new_GFDLgrid__GFDLgrid_USE_GFDLgrid_RES_IN_FILENAMES_eq_TRUE.sh => config.custom_GFDLgrid__GFDLgrid_USE_GFDLgrid_RES_IN_FILENAMES_eq_TRUE.sh} (97%) mode change 100644 => 120000 tests/WE2E/test_configs/wflow_features/config.nco_inline_post.sh create mode 100644 tests/WE2E/test_configs/wflow_features/config.specify_template_filenames.sh create mode 100644 ush/check_ruc_lsm.py create mode 100644 ush/config_defaults.yaml create mode 100644 ush/constants.py create mode 100644 ush/create_diag_table_file.py create mode 100644 ush/create_model_configure_file.py create mode 100755 ush/generate_FV3LAM_wflow.py create mode 100644 ush/get_crontab_contents.py delete mode 100755 ush/get_extrn_mdl_file_dir_info.sh create mode 100644 ush/link_fix.py create mode 100755 ush/machine/gaea.sh create mode 100644 ush/predef_grid_params.yaml delete mode 100644 ush/python_utils/change_case.py create mode 100755 ush/python_utils/fv3write_parms_lambert.py delete mode 100644 ush/python_utils/get_manage_externals_config_property.py create mode 100644 ush/python_utils/misc.py create mode 100644 ush/python_utils/xml_parser.py mode change 100644 => 100755 ush/retrieve_data.py delete mode 100644 ush/rocoto/.gitignore delete mode 100755 ush/rocoto/fv3gfs_workflow.sh delete mode 100755 ush/rocoto/rocoto.py delete mode 100755 ush/rocoto/rocoto_viewer.py delete mode 100755 ush/rocoto/setup_expt.py delete mode 100755 ush/rocoto/setup_expt_fcstonly.py delete mode 100755 ush/rocoto/setup_workflow.py delete mode 100755 ush/rocoto/setup_workflow_fcstonly.py delete mode 100755 ush/rocoto/workflow_utils.py create mode 100644 ush/set_FV3nml_ens_stoch_seeds.py rename ush/{set_FV3nml_stoch_params.sh => set_FV3nml_ens_stoch_seeds.sh} (87%) create mode 100644 ush/set_FV3nml_sfc_climo_filenames.py create mode 100644 ush/set_cycle_dates.py create mode 100644 ush/set_extrn_mdl_params.py create mode 100644 ush/set_gridparams_ESGgrid.py create mode 100644 ush/set_gridparams_GFDLgrid.py create mode 100644 ush/set_ozone_param.py create mode 100644 ush/set_predef_grid_params.py create mode 100644 ush/set_thompson_mp_fix_files.py create mode 100644 ush/setup.py create mode 100644 ush/test_data/RRFS_CONUS_3km/C3357.facsf.tile7.halo0.nc create mode 100644 ush/test_data/RRFS_CONUS_3km/C3357.facsf.tile7.halo4.nc create mode 100644 ush/test_data/RRFS_CONUS_3km/C3357.maximum_snow_albedo.tile7.halo0.nc create mode 100644 ush/test_data/RRFS_CONUS_3km/C3357.maximum_snow_albedo.tile7.halo4.nc create mode 100644 ush/test_data/RRFS_CONUS_3km/C3357.slope_type.tile7.halo0.nc create mode 100644 ush/test_data/RRFS_CONUS_3km/C3357.slope_type.tile7.halo4.nc create mode 100644 ush/test_data/RRFS_CONUS_3km/C3357.snowfree_albedo.tile7.halo0.nc create mode 100644 ush/test_data/RRFS_CONUS_3km/C3357.snowfree_albedo.tile7.halo4.nc create mode 100644 ush/test_data/RRFS_CONUS_3km/C3357.soil_type.tile7.halo0.nc create mode 100644 ush/test_data/RRFS_CONUS_3km/C3357.soil_type.tile7.halo4.nc 
create mode 100644 ush/test_data/RRFS_CONUS_3km/C3357.substrate_temperature.tile7.halo0.nc create mode 100644 ush/test_data/RRFS_CONUS_3km/C3357.substrate_temperature.tile7.halo4.nc create mode 100644 ush/test_data/RRFS_CONUS_3km/C3357.vegetation_greenness.tile7.halo0.nc create mode 100644 ush/test_data/RRFS_CONUS_3km/C3357.vegetation_greenness.tile7.halo4.nc create mode 100644 ush/test_data/RRFS_CONUS_3km/C3357.vegetation_type.tile7.halo0.nc create mode 100644 ush/test_data/RRFS_CONUS_3km/C3357.vegetation_type.tile7.halo4.nc create mode 100644 ush/test_data/RRFS_CONUS_3km/C3357_grid.tile7.halo3.nc create mode 100644 ush/test_data/RRFS_CONUS_3km/C3357_grid.tile7.halo4.nc create mode 100644 ush/test_data/RRFS_CONUS_3km/C3357_grid.tile7.halo6.nc create mode 100644 ush/test_data/RRFS_CONUS_3km/C3357_mosaic.halo3.nc create mode 100644 ush/test_data/RRFS_CONUS_3km/C3357_mosaic.halo4.nc create mode 100644 ush/test_data/RRFS_CONUS_3km/C3357_mosaic.halo6.nc create mode 100644 ush/test_data/RRFS_CONUS_3km/C3357_oro_data.tile7.halo0.nc create mode 100644 ush/test_data/RRFS_CONUS_3km/C3357_oro_data.tile7.halo4.nc create mode 100644 ush/test_data/RRFS_CONUS_3km/C3357_oro_data_ls.tile7.halo0.nc create mode 100644 ush/test_data/RRFS_CONUS_3km/C3357_oro_data_ss.tile7.halo0.nc create mode 100644 ush/test_data/suite_FV3_GSD_SAR.xml create mode 100644 ush/valid_param_vals.yaml create mode 100755 ush/wrappers/run_gridensvx.sh create mode 100755 ush/wrappers/run_gridvx.sh create mode 100755 ush/wrappers/run_pointensvx.sh create mode 100755 ush/wrappers/run_pointvx.sh diff --git a/.github/workflows/python_unittests.yaml b/.github/workflows/python_unittests.yaml new file mode 100644 index 000000000..65688fd38 --- /dev/null +++ b/.github/workflows/python_unittests.yaml @@ -0,0 +1,27 @@ +name: Python unittests +on: [push, pull_request] +jobs: + + python_unittests: + name: Python unittests + runs-on: ubuntu-latest + + steps: + - name: Checkout repository + uses: actions/checkout@v2 + + # Install dependencies + - name: Install dependencies + run: | + sudo apt-get update + sudo apt-get install python3 python3-pip netcdf-bin + sudo pip3 install pyyaml jinja2 f90nml + sudo pip3 install numpy matplotlib basemap + + # Run python unittests + - name: Run python unittests + run: | + cd ush + python3 -m unittest -b python_utils/test_python_utils.py + python3 -m unittest -b *.py + diff --git a/jobs/JREGIONAL_GET_EXTRN_MDL_FILES b/jobs/JREGIONAL_GET_EXTRN_MDL_FILES index a452062c5..31c296386 100755 --- a/jobs/JREGIONAL_GET_EXTRN_MDL_FILES +++ b/jobs/JREGIONAL_GET_EXTRN_MDL_FILES @@ -33,15 +33,6 @@ # #----------------------------------------------------------------------- # -# Source the file defining the function that will be used to set various -# external-model-associated variables. -# -#----------------------------------------------------------------------- -# -. $USHDIR/get_extrn_mdl_file_dir_info.sh -# -#----------------------------------------------------------------------- -# # Save current shell options (in a global array). Then set new options # for this script/function. # @@ -72,33 +63,107 @@ print_info_msg " Entering script: \"${scrfunc_fn}\" In directory: \"${scrfunc_dir}\" -This is the J-job script for the task that copies/fetches to a local -directory (either from disk or HPSS) the external model files from which -initial or boundary condition files for the FV3 will be generated. 
+This is the J-job script for the task that copies or fetches external +model files from disk, HPSS, or URL, and stages them for downstream use +to generate initial or lateral boundary conditions for the FV3 model. ========================================================================" + + +# +#----------------------------------------------------------------------- +# +# Check whether the environment variable ICS_OR_LBCS is set to a valid +# value. This variable specifies whether we are getting the external +# model files for the purpose of generating initial conditions (ICs) or +# lateral boundary condtions (LBCs) for the forecast model. +# +#----------------------------------------------------------------------- +# +valid_vals_ICS_OR_LBCS=( "ICS" "LBCS" ) +check_var_valid_value "ICS_OR_LBCS" "valid_vals_ICS_OR_LBCS" +# +#----------------------------------------------------------------------- +# +# Set parameters for grabbing either the initial conditions from analysis or +# forecast files of external models, or the lateral boundary conditions +# from external models. This script has been called to do the work for +# one or the other. +# +#----------------------------------------------------------------------- +# +if [ "${ICS_OR_LBCS}" = "ICS" ]; then + time_offset_hrs=${EXTRN_MDL_ICS_OFFSET_HRS:-0} + extrn_mdl_name=${EXTRN_MDL_NAME_ICS} + +elif [ "${ICS_OR_LBCS}" = "LBCS" ]; then + time_offset_hrs=${EXTRN_MDL_LBCS_OFFSET_HRS:-0} + extrn_mdl_name=${EXTRN_MDL_NAME_LBCS} +fi + +# +#----------------------------------------------------------------------- +# +# Set the external model start time +# +#----------------------------------------------------------------------- +# + +hh=${CDATE:8:2} +yyyymmdd=${CDATE:0:8} +extrn_mdl_cdate=$( $DATE_UTIL --utc --date "${yyyymmdd} ${hh} UTC - ${time_offset_hrs} hours" "+%Y%m%d%H" ) + # #----------------------------------------------------------------------- # -# Check whether output files from the specified external model (EXTRN_MDL_NAME) -# are available on the specified cycle date and time (CDATE). +# Check whether output files from the specified external model +# (extrn_mdl_name) are available on the specified cycle date and time +# (extrn_mdl_cdate). # #----------------------------------------------------------------------- # -case ${EXTRN_MDL_NAME} in + +function data_unavailable() { + + local name cdate end_date min_max + + name=$1 + cdate=$2 + end_date=$3 + min_max=$4 + + if [ ${min_max} = max ]; then + msg="\ +Output from the specified external model (extrn_mdl_name) is not availa- +ble for the specified cycle date and time (extrn_mdl_cdate) because the latter is +later than the last forecast date and time (cdate_max) with this model: + extrn_mdl_name = \"${name}\" + CDATE_max = \"${end_date}\" + extrn_mdl_cdate = \"${cdate}\"" + + elif [ ${min_max} = min ]; then + msg="\ +Output from the specified external model (extrn_mdl_name) is not availa- +ble for the specified cycle date and time (extrn_mdl_cdate) because the latter is +earlier than the implementation date of this model: + extrn_mdl_name = \"${name}\" + CDATE_min = \"${end_date}\" + extrn_mdl_cdate = \"${cdate}\"" + fi + + echo ${msg} +} + + +case ${extrn_mdl_name} in "GSMGFS") # The transition date from the GSMGFS to the FV3GFS was 2019061212, i.e. # this was the first official forecast with the FV3GFS. So we set the # last CDATE for the GSMGFS to the one 6 hours before this. 
cdate_max="2019061206" - if [ "$CDATE" -gt "$cdate_max" ]; then + if [ "$extrn_mdl_cdate" -gt "$cdate_max" ]; then print_err_msg_exit "\ -Output from the specified external model (EXTRN_MDL_NAME) is not availa- -ble for the specified cycle date and time (CDATE) because the latter is -later than the last forecast date and time (cdate_max) with this model: - EXTRN_MDL_NAME = \"${EXTRN_MDL_NAME}\" - cdate_max = \"${cdate_max}\" - CDATE = \"${CDATE}\"" + $(data_unavailable $extrn_mdl_name $extrn_mdl_cdate $cdate_max max)" fi ;; @@ -106,17 +171,12 @@ later than the last forecast date and time (cdate_max) with this model: # The transition date from the GSMGFS to the FV3GFS was 2019061212, i.e. # this was the first official forecast with the FV3GFS. However, paral- # lel runs with the FV3GFS go back to 2018121500. So we set the first -# CDATE for the FV3GFS to this date and time. +# extrn_mdl_cdate for the FV3GFS to this date and time. # CDATE_min="2019061212" CDATE_min="2018121500" - if [ "$CDATE" -lt "$CDATE_min" ]; then + if [ "$extrn_mdl_cdate" -lt "$CDATE_min" ]; then print_err_msg_exit "\ -Output from the specified external model (EXTRN_MDL_NAME) is not availa- -ble for the specified cycle date and time (CDATE) because the latter is -earlier than the implementation date of this model: - EXTRN_MDL_NAME = \"${EXTRN_MDL_NAME}\" - CDATE_min = \"${CDATE_min}\" - CDATE = \"${CDATE}\"" + $(data_unavailable $extrn_mdl_name $extrn_mdl_cdate $cdate_min min)" fi ;; @@ -124,14 +184,9 @@ earlier than the implementation date of this model: # Examination of the HPSS archives shows that the RAPX data goes back to # July 01, 2015. CDATE_min="2015070100" - if [ "$CDATE" -lt "$CDATE_min" ]; then + if [ "$extrn_mdl_cdate" -lt "$CDATE_min" ]; then print_err_msg_exit "\ -Output from the specified external model (EXTRN_MDL_NAME) is not availa- -ble for the specified cycle date and time (CDATE) because the latter is -earlier than the implementation date of this model: - EXTRN_MDL_NAME = \"${EXTRN_MDL_NAME}\" - CDATE_min = \"${CDATE_min}\" - CDATE = \"${CDATE}\"" + $(data_unavailable $extrn_mdl_name $extrn_mdl_cdate $cdate_min min)" fi ;; @@ -140,14 +195,9 @@ earlier than the implementation date of this model: # implementation of the first version of the operational HRRR was # September 30, 2014. CDATE_min="2014103000" - if [ "$CDATE" -lt "$CDATE_min" ]; then + if [ "$extrn_mdl_cdate" -lt "$CDATE_min" ]; then print_err_msg_exit "\ -Output from the specified external model (EXTRN_MDL_NAME) is not availa- -ble for the specified cycle date and time (CDATE) because the latter is -earlier than the implementation date of this model: - EXTRN_MDL_NAME = \"${EXTRN_MDL_NAME}\" - CDATE_min = \"${CDATE_min}\" - CDATE = \"${CDATE}\"" + $(data_unavailable $extrn_mdl_name $extrn_mdl_cdate $cdate_min min)" fi ;; @@ -155,141 +205,25 @@ esac # #----------------------------------------------------------------------- # -# Check whether the environment variable ICS_OR_LBCS is set to a valid -# value. This variable specifies whether we are getting the external -# model files for the purpose of generating initial conditions (ICs) or -# lateral boundary condtions (LBCs) for the forecast model. 
-# -#----------------------------------------------------------------------- -# -valid_vals_ICS_OR_LBCS=( "ICS" "LBCS" ) -check_var_valid_value "ICS_OR_LBCS" "valid_vals_ICS_OR_LBCS" -# -#----------------------------------------------------------------------- -# -# Set parameters for grabbing either the initial conditions from analysis or -# forecast files of external models, or the lateral boundary conditions -# from external models. The script has been called to do the work for -# one or the other. -# -#----------------------------------------------------------------------- -# -if [ "${ICS_OR_LBCS}" = "ICS" ]; then - if [ ${EXTRN_MDL_ICS_OFFSET_HRS} -eq 0 ] ; then - anl_or_fcst="ANL" - time_offset_hrs=0 - else - anl_or_fcst="FCST" - time_offset_hrs=${EXTRN_MDL_ICS_OFFSET_HRS:-0} - fi -elif [ "${ICS_OR_LBCS}" = "LBCS" ]; then - anl_or_fcst="FCST" - time_offset_hrs=${EXTRN_MDL_LBCS_OFFSET_HRS:-0} -fi -# -#----------------------------------------------------------------------- -# # Create the directory where the exetrnal model files should be stored # #----------------------------------------------------------------------- # -extrn_mdl_staging_dir="${CYCLE_DIR}/${EXTRN_MDL_NAME}/for_${ICS_OR_LBCS}" +extrn_mdl_staging_dir="${CYCLE_DIR}/${extrn_mdl_name}/for_${ICS_OR_LBCS}" mkdir_vrfy -p "${extrn_mdl_staging_dir}" cd_vrfy "${extrn_mdl_staging_dir}" # #----------------------------------------------------------------------- # -# Call the function that sets various external-model-associated variables. -# See the function defintion file for the definitions of these variables. -# -#----------------------------------------------------------------------- -# -get_extrn_mdl_file_dir_info \ - extrn_mdl_name="${EXTRN_MDL_NAME}" \ - anl_or_fcst="${anl_or_fcst}" \ - cdate_FV3LAM="${CDATE}" \ - time_offset_hrs="${time_offset_hrs}" \ - varname_extrn_mdl_cdate="extrn_mdl_cdate" \ - varname_extrn_mdl_lbc_spec_fhrs="extrn_mdl_lbc_spec_fhrs" \ - varname_extrn_mdl_fns_on_disk="extrn_mdl_fns_on_disk" \ - varname_extrn_mdl_fns_in_arcv="extrn_mdl_fns_in_arcv" \ - varname_extrn_mdl_sysdir="extrn_mdl_sysdir" \ - varname_extrn_mdl_arcv_fmt="extrn_mdl_arcv_fmt" \ - varname_extrn_mdl_arcv_fns="extrn_mdl_arcv_fns" \ - varname_extrn_mdl_arcv_fps="extrn_mdl_arcv_fps" \ - varname_extrn_mdl_arcvrel_dir="extrn_mdl_arcvrel_dir" || \ -print_err_msg_exit "\ -Call to function get_extrn_mdl_file_dir_info failed." -# -#----------------------------------------------------------------------- -# -# Set the directory in which to check for the external model files (which -# we refer to here as the "source" directory) to the default one set above -# for the current machine and external model. -# -#----------------------------------------------------------------------- -# -extrn_mdl_source_dir="${extrn_mdl_sysdir}" -# -#----------------------------------------------------------------------- -# -# If the user has specified that the external model files to be used for -# generating ICs or LBCs are staged, then reset extrn_mdl_source_dir to -# the user-specified directory in which these files are staged, and reset -# extrn_mdl_fns_on_disk to the user-specified array containing the names -# of the files. 
-# -#----------------------------------------------------------------------- -# -if [ "${USE_USER_STAGED_EXTRN_FILES}" = "TRUE" ]; then - - if [ "${ICS_OR_LBCS}" = "ICS" ]; then - extrn_mdl_source_dir="${EXTRN_MDL_SOURCE_BASEDIR_ICS}/$CDATE" - extrn_mdl_fns_on_disk=( $( printf "%s " "${EXTRN_MDL_FILES_ICS[@]}" )) - elif [ "${ICS_OR_LBCS}" = "LBCS" ]; then - extrn_mdl_source_dir="${EXTRN_MDL_SOURCE_BASEDIR_LBCS}/$CDATE" - extrn_mdl_fns_on_disk=( $( printf "%s " "${EXTRN_MDL_FILES_LBCS[@]}" )) - fi - - if [ ! -d "${extrn_mdl_source_dir}" ]; then - print_err_msg_exit "\ -The directory extrn_mdl_source_dir containing the user-staged external -model files does not exist: - extrn_mdl_source_dir = \"${extrn_mdl_source_dir}\" -Please ensure that the directory specified by extrn_mdl_source_dir exists -and that all the files specified in the array extrn_mdl_fns_on_disk exist -within it: - extrn_mdl_source_dir = \"${extrn_mdl_source_dir}\" - extrn_mdl_fns_on_disk = ( $( printf "\"%s\" " "${extrn_mdl_fns_on_disk[@]}" ))" - fi - -fi -# -#----------------------------------------------------------------------- -# # Call the ex-script for this J-job and pass to it the necessary variables. # #----------------------------------------------------------------------- # -extrn_mdl_lbc_spec_fhrs_str="( "$( printf "\"%s\" " "${extrn_mdl_lbc_spec_fhrs[@]}" )")" -extrn_mdl_fns_on_disk_str="( "$( printf "\"%s\" " "${extrn_mdl_fns_on_disk[@]}" )")" -extrn_mdl_fns_in_arcv_str="( "$( printf "\"%s\" " "${extrn_mdl_fns_in_arcv[@]}" )")" -extrn_mdl_arcv_fns_str="( "$( printf "\"%s\" " "${extrn_mdl_arcv_fns[@]}" )")" -extrn_mdl_arcv_fps_str="( "$( printf "\"%s\" " "${extrn_mdl_arcv_fps[@]}" )")" - $SCRIPTSDIR/exregional_get_extrn_mdl_files.sh \ - ics_or_lbcs="${ICS_OR_LBCS}" \ - use_user_staged_extrn_files="${USE_USER_STAGED_EXTRN_FILES}" \ extrn_mdl_cdate="${extrn_mdl_cdate}" \ - extrn_mdl_lbc_spec_fhrs="${extrn_mdl_lbc_spec_fhrs_str}" \ - extrn_mdl_fns_on_disk="${extrn_mdl_fns_on_disk_str}" \ - extrn_mdl_fns_in_arcv="${extrn_mdl_fns_in_arcv_str}" \ - extrn_mdl_source_dir="${extrn_mdl_source_dir}" \ + extrn_mdl_name="${extrn_mdl_name}" \ extrn_mdl_staging_dir="${extrn_mdl_staging_dir}" \ - extrn_mdl_arcv_fmt="${extrn_mdl_arcv_fmt}" \ - extrn_mdl_arcv_fns="${extrn_mdl_arcv_fns_str}" \ - extrn_mdl_arcv_fps="${extrn_mdl_arcv_fps_str}" \ - extrn_mdl_arcvrel_dir="${extrn_mdl_arcvrel_dir}" || \ + time_offset_hrs=${time_offset_hrs} || print_err_msg_exit "\ Call to ex-script corresponding to J-job \"${scrfunc_fn}\" failed." 
# diff --git a/modulefiles/tasks/cheyenne/get_extrn_ics b/modulefiles/tasks/cheyenne/get_extrn_ics index 58f82dca1..9f7a4e4f7 100644 --- a/modulefiles/tasks/cheyenne/get_extrn_ics +++ b/modulefiles/tasks/cheyenne/get_extrn_ics @@ -1,5 +1,4 @@ -#%Module##################################################### -## Module file intentionally blank for Cheyenne -############################################################# +#%Module +module load pylib_regional_workflow diff --git a/modulefiles/tasks/cheyenne/get_extrn_lbcs b/modulefiles/tasks/cheyenne/get_extrn_lbcs index 58f82dca1..9f7a4e4f7 100644 --- a/modulefiles/tasks/cheyenne/get_extrn_lbcs +++ b/modulefiles/tasks/cheyenne/get_extrn_lbcs @@ -1,5 +1,4 @@ -#%Module##################################################### -## Module file intentionally blank for Cheyenne -############################################################# +#%Module +module load pylib_regional_workflow diff --git a/modulefiles/tasks/cheyenne/make_grid.local b/modulefiles/tasks/cheyenne/make_grid.local index 4232b8659..2f92a39e5 100644 --- a/modulefiles/tasks/cheyenne/make_grid.local +++ b/modulefiles/tasks/cheyenne/make_grid.local @@ -1,9 +1,2 @@ #%Module -if [module-info mode load] { - system "ncar_pylib /glade/p/ral/jntp/UFS_CAM/ncar_pylib_20200427" -} - -if [module-info mode remove] { - system "deactivate" -} - +module load pylib_regional_workflow diff --git a/modulefiles/tasks/cheyenne/make_ics.local b/modulefiles/tasks/cheyenne/make_ics.local index 4232b8659..2f92a39e5 100644 --- a/modulefiles/tasks/cheyenne/make_ics.local +++ b/modulefiles/tasks/cheyenne/make_ics.local @@ -1,9 +1,2 @@ #%Module -if [module-info mode load] { - system "ncar_pylib /glade/p/ral/jntp/UFS_CAM/ncar_pylib_20200427" -} - -if [module-info mode remove] { - system "deactivate" -} - +module load pylib_regional_workflow diff --git a/modulefiles/tasks/cheyenne/make_lbcs.local b/modulefiles/tasks/cheyenne/make_lbcs.local index 4232b8659..2f92a39e5 100644 --- a/modulefiles/tasks/cheyenne/make_lbcs.local +++ b/modulefiles/tasks/cheyenne/make_lbcs.local @@ -1,9 +1,2 @@ #%Module -if [module-info mode load] { - system "ncar_pylib /glade/p/ral/jntp/UFS_CAM/ncar_pylib_20200427" -} - -if [module-info mode remove] { - system "deactivate" -} - +module load pylib_regional_workflow diff --git a/modulefiles/tasks/cheyenne/pylib_regional_workflow b/modulefiles/tasks/cheyenne/pylib_regional_workflow new file mode 100644 index 000000000..4232b8659 --- /dev/null +++ b/modulefiles/tasks/cheyenne/pylib_regional_workflow @@ -0,0 +1,9 @@ +#%Module +if [module-info mode load] { + system "ncar_pylib /glade/p/ral/jntp/UFS_CAM/ncar_pylib_20200427" +} + +if [module-info mode remove] { + system "deactivate" +} + diff --git a/modulefiles/tasks/cheyenne/run_fcst.local b/modulefiles/tasks/cheyenne/run_fcst.local index 4232b8659..2f92a39e5 100644 --- a/modulefiles/tasks/cheyenne/run_fcst.local +++ b/modulefiles/tasks/cheyenne/run_fcst.local @@ -1,9 +1,2 @@ #%Module -if [module-info mode load] { - system "ncar_pylib /glade/p/ral/jntp/UFS_CAM/ncar_pylib_20200427" -} - -if [module-info mode remove] { - system "deactivate" -} - +module load pylib_regional_workflow diff --git a/modulefiles/tasks/cheyenne/run_vx.local b/modulefiles/tasks/cheyenne/run_vx.local index 234d79908..8fb3be36d 100644 --- a/modulefiles/tasks/cheyenne/run_vx.local +++ b/modulefiles/tasks/cheyenne/run_vx.local @@ -1,11 +1,4 @@ #%Module - -if [module-info mode load] { - system "ncar_pylib /glade/p/ral/jntp/UFS_SRW_app/ncar_pylib_20200427" -} - -if [module-info mode 
remove] { - system "deactivate" -} +module load pylib_regional_workflow module use /glade/p/ral/jntp/MET/MET_releases/modulefiles module load met/10.0.0 diff --git a/modulefiles/tasks/gaea/make_grid.local b/modulefiles/tasks/gaea/make_grid.local new file mode 100644 index 000000000..fbd52ffa0 --- /dev/null +++ b/modulefiles/tasks/gaea/make_grid.local @@ -0,0 +1,6 @@ +#%Module +module use /lustre/f2/pdata/esrl/gsd/contrib/modulefiles +module load rocoto +module load miniconda3 + +setenv SRW_ENV regional_workflow diff --git a/modulefiles/tasks/gaea/make_ics.local b/modulefiles/tasks/gaea/make_ics.local new file mode 100644 index 000000000..fbd52ffa0 --- /dev/null +++ b/modulefiles/tasks/gaea/make_ics.local @@ -0,0 +1,6 @@ +#%Module +module use /lustre/f2/pdata/esrl/gsd/contrib/modulefiles +module load rocoto +module load miniconda3 + +setenv SRW_ENV regional_workflow diff --git a/modulefiles/tasks/gaea/make_lbcs.local b/modulefiles/tasks/gaea/make_lbcs.local new file mode 100644 index 000000000..fbd52ffa0 --- /dev/null +++ b/modulefiles/tasks/gaea/make_lbcs.local @@ -0,0 +1,6 @@ +#%Module +module use /lustre/f2/pdata/esrl/gsd/contrib/modulefiles +module load rocoto +module load miniconda3 + +setenv SRW_ENV regional_workflow diff --git a/modulefiles/tasks/gaea/run_fcst.local b/modulefiles/tasks/gaea/run_fcst.local new file mode 100644 index 000000000..fbd52ffa0 --- /dev/null +++ b/modulefiles/tasks/gaea/run_fcst.local @@ -0,0 +1,6 @@ +#%Module +module use /lustre/f2/pdata/esrl/gsd/contrib/modulefiles +module load rocoto +module load miniconda3 + +setenv SRW_ENV regional_workflow diff --git a/modulefiles/tasks/gaea/run_vx.local b/modulefiles/tasks/gaea/run_vx.local new file mode 100644 index 000000000..929c2011f --- /dev/null +++ b/modulefiles/tasks/gaea/run_vx.local @@ -0,0 +1,7 @@ +#%Module + +module use -a /contrib/anaconda/modulefiles +module load intel/18.0.5.274 +module load anaconda/latest +module use -a /contrib/met/modulefiles/ +module load met/10.0.0 diff --git a/modulefiles/tasks/hera/get_extrn_ics.local b/modulefiles/tasks/hera/get_extrn_ics.local index 9935033fd..4b0b48cc0 100644 --- a/modulefiles/tasks/hera/get_extrn_ics.local +++ b/modulefiles/tasks/hera/get_extrn_ics.local @@ -6,3 +6,4 @@ module purge module load hpss +module load miniconda_regional_workflow diff --git a/modulefiles/tasks/hera/get_extrn_lbcs.local b/modulefiles/tasks/hera/get_extrn_lbcs.local index 1919f3355..477dfb2e4 100644 --- a/modulefiles/tasks/hera/get_extrn_lbcs.local +++ b/modulefiles/tasks/hera/get_extrn_lbcs.local @@ -6,3 +6,4 @@ module purge module load hpss +module load miniconda_regional_workflow diff --git a/modulefiles/tasks/hera/get_obs.local b/modulefiles/tasks/hera/get_obs.local index c41c17de8..d30a2a918 100644 --- a/modulefiles/tasks/hera/get_obs.local +++ b/modulefiles/tasks/hera/get_obs.local @@ -7,7 +7,7 @@ module purge module load hpss -module use -a /contrib/miniconda3/modulefiles -module load miniconda3 +module use /contrib/miniconda3/modulefiles +module load miniconda3/4.5.12 setenv SRW_ENV pygraf diff --git a/modulefiles/tasks/hera/make_grid.local b/modulefiles/tasks/hera/make_grid.local index 011a832c9..92505cf09 100644 --- a/modulefiles/tasks/hera/make_grid.local +++ b/modulefiles/tasks/hera/make_grid.local @@ -1,5 +1,3 @@ #%Module -module use -a /contrib/miniconda3/modulefiles -module load miniconda3 -setenv SRW_ENV regional_workflow +module load miniconda_regional_workflow diff --git a/modulefiles/tasks/hera/make_ics.local b/modulefiles/tasks/hera/make_ics.local index 
011a832c9..92505cf09 100644 --- a/modulefiles/tasks/hera/make_ics.local +++ b/modulefiles/tasks/hera/make_ics.local @@ -1,5 +1,3 @@ #%Module -module use -a /contrib/miniconda3/modulefiles -module load miniconda3 -setenv SRW_ENV regional_workflow +module load miniconda_regional_workflow diff --git a/modulefiles/tasks/hera/make_lbcs.local b/modulefiles/tasks/hera/make_lbcs.local index 011a832c9..61a3a7725 100644 --- a/modulefiles/tasks/hera/make_lbcs.local +++ b/modulefiles/tasks/hera/make_lbcs.local @@ -1,5 +1,2 @@ #%Module -module use -a /contrib/miniconda3/modulefiles -module load miniconda3 - -setenv SRW_ENV regional_workflow +module load miniconda_regional_workflow diff --git a/modulefiles/tasks/hera/miniconda_regional_workflow b/modulefiles/tasks/hera/miniconda_regional_workflow new file mode 100644 index 000000000..48de7a99b --- /dev/null +++ b/modulefiles/tasks/hera/miniconda_regional_workflow @@ -0,0 +1,5 @@ +#%Module +module use /contrib/miniconda3/modulefiles +module load miniconda3/4.5.12 + +setenv SRW_ENV regional_workflow diff --git a/modulefiles/tasks/hera/run_fcst.local b/modulefiles/tasks/hera/run_fcst.local index 011a832c9..61a3a7725 100644 --- a/modulefiles/tasks/hera/run_fcst.local +++ b/modulefiles/tasks/hera/run_fcst.local @@ -1,5 +1,2 @@ #%Module -module use -a /contrib/miniconda3/modulefiles -module load miniconda3 - -setenv SRW_ENV regional_workflow +module load miniconda_regional_workflow diff --git a/modulefiles/tasks/jet/get_extrn_ics.local b/modulefiles/tasks/jet/get_extrn_ics.local index 9935033fd..4b0b48cc0 100644 --- a/modulefiles/tasks/jet/get_extrn_ics.local +++ b/modulefiles/tasks/jet/get_extrn_ics.local @@ -6,3 +6,4 @@ module purge module load hpss +module load miniconda_regional_workflow diff --git a/modulefiles/tasks/jet/get_extrn_lbcs.local b/modulefiles/tasks/jet/get_extrn_lbcs.local index 1919f3355..477dfb2e4 100644 --- a/modulefiles/tasks/jet/get_extrn_lbcs.local +++ b/modulefiles/tasks/jet/get_extrn_lbcs.local @@ -6,3 +6,4 @@ module purge module load hpss +module load miniconda_regional_workflow diff --git a/modulefiles/tasks/jet/make_grid.local b/modulefiles/tasks/jet/make_grid.local index 011a832c9..92505cf09 100644 --- a/modulefiles/tasks/jet/make_grid.local +++ b/modulefiles/tasks/jet/make_grid.local @@ -1,5 +1,3 @@ #%Module -module use -a /contrib/miniconda3/modulefiles -module load miniconda3 -setenv SRW_ENV regional_workflow +module load miniconda_regional_workflow diff --git a/modulefiles/tasks/jet/make_ics.local b/modulefiles/tasks/jet/make_ics.local index c3c0f61e5..61a3a7725 100644 --- a/modulefiles/tasks/jet/make_ics.local +++ b/modulefiles/tasks/jet/make_ics.local @@ -1,7 +1,2 @@ #%Module -module load wgrib2/2.0.8 - -module use -a /contrib/miniconda3/modulefiles -module load miniconda3 - -setenv SRW_ENV regional_workflow +module load miniconda_regional_workflow diff --git a/modulefiles/tasks/jet/make_lbcs.local b/modulefiles/tasks/jet/make_lbcs.local index c3c0f61e5..61a3a7725 100644 --- a/modulefiles/tasks/jet/make_lbcs.local +++ b/modulefiles/tasks/jet/make_lbcs.local @@ -1,7 +1,2 @@ #%Module -module load wgrib2/2.0.8 - -module use -a /contrib/miniconda3/modulefiles -module load miniconda3 - -setenv SRW_ENV regional_workflow +module load miniconda_regional_workflow diff --git a/modulefiles/tasks/jet/miniconda_regional_workflow b/modulefiles/tasks/jet/miniconda_regional_workflow new file mode 100644 index 000000000..011a832c9 --- /dev/null +++ b/modulefiles/tasks/jet/miniconda_regional_workflow @@ -0,0 +1,5 @@ +#%Module +module use -a 
/contrib/miniconda3/modulefiles +module load miniconda3 + +setenv SRW_ENV regional_workflow diff --git a/modulefiles/tasks/jet/run_fcst.local b/modulefiles/tasks/jet/run_fcst.local index 011a832c9..61a3a7725 100644 --- a/modulefiles/tasks/jet/run_fcst.local +++ b/modulefiles/tasks/jet/run_fcst.local @@ -1,5 +1,2 @@ #%Module -module use -a /contrib/miniconda3/modulefiles -module load miniconda3 - -setenv SRW_ENV regional_workflow +module load miniconda_regional_workflow diff --git a/modulefiles/tasks/noaacloud/get_extrn_ics.local b/modulefiles/tasks/noaacloud/get_extrn_ics.local new file mode 100644 index 000000000..19f582fed --- /dev/null +++ b/modulefiles/tasks/noaacloud/get_extrn_ics.local @@ -0,0 +1,7 @@ +#%Module + +module load miniconda_regional_workflow +module load rocoto +prepend-path PATH /contrib/GST/miniconda/envs/regional_workflow/bin +setenv SRW_ENV regional_workflow + diff --git a/modulefiles/tasks/noaacloud/get_extrn_lbcs.local b/modulefiles/tasks/noaacloud/get_extrn_lbcs.local new file mode 100644 index 000000000..6b0d97468 --- /dev/null +++ b/modulefiles/tasks/noaacloud/get_extrn_lbcs.local @@ -0,0 +1,6 @@ +#%Module + +module load miniconda_regional_workflow +module load rocoto +prepend-path PATH /contrib/GST/miniconda/envs/regional_workflow/bin +setenv SRW_ENV regional_workflow diff --git a/modulefiles/tasks/noaacloud/make_grid.local b/modulefiles/tasks/noaacloud/make_grid.local new file mode 100644 index 000000000..a97b9e4e5 --- /dev/null +++ b/modulefiles/tasks/noaacloud/make_grid.local @@ -0,0 +1,12 @@ +#%Module + +module use /contrib/spack-stack/apps/srw-app-test/modulefiles/Core +module load stack-intel +module load stack-intel-oneapi-mpi +module load netcdf-c +module load netcdf-fortran + +module load miniconda_regional_workflow +module load rocoto +prepend-path PATH /contrib/GST/miniconda/envs/regional_workflow/bin +setenv SRW_ENV regional_workflow diff --git a/modulefiles/tasks/noaacloud/make_ics.local b/modulefiles/tasks/noaacloud/make_ics.local new file mode 100644 index 000000000..ea7d89b73 --- /dev/null +++ b/modulefiles/tasks/noaacloud/make_ics.local @@ -0,0 +1,14 @@ +#%Module + +module use /contrib/spack-stack/apps/srw-app-test/modulefiles/Core +module load stack-intel +module load stack-intel-oneapi-mpi +module load netcdf-c +module load netcdf-fortran +module load libpng +module load jasper + +module load miniconda_regional_workflow +module load rocoto +prepend-path PATH /contrib/GST/miniconda/envs/regional_workflow/bin +setenv SRW_ENV regional_workflow diff --git a/modulefiles/tasks/noaacloud/make_lbcs.local b/modulefiles/tasks/noaacloud/make_lbcs.local new file mode 100644 index 000000000..ea7d89b73 --- /dev/null +++ b/modulefiles/tasks/noaacloud/make_lbcs.local @@ -0,0 +1,14 @@ +#%Module + +module use /contrib/spack-stack/apps/srw-app-test/modulefiles/Core +module load stack-intel +module load stack-intel-oneapi-mpi +module load netcdf-c +module load netcdf-fortran +module load libpng +module load jasper + +module load miniconda_regional_workflow +module load rocoto +prepend-path PATH /contrib/GST/miniconda/envs/regional_workflow/bin +setenv SRW_ENV regional_workflow diff --git a/modulefiles/tasks/noaacloud/make_orog.local b/modulefiles/tasks/noaacloud/make_orog.local new file mode 100644 index 000000000..a97b9e4e5 --- /dev/null +++ b/modulefiles/tasks/noaacloud/make_orog.local @@ -0,0 +1,12 @@ +#%Module + +module use /contrib/spack-stack/apps/srw-app-test/modulefiles/Core +module load stack-intel +module load stack-intel-oneapi-mpi +module load 
netcdf-c +module load netcdf-fortran + +module load miniconda_regional_workflow +module load rocoto +prepend-path PATH /contrib/GST/miniconda/envs/regional_workflow/bin +setenv SRW_ENV regional_workflow diff --git a/modulefiles/tasks/noaacloud/make_sfc_climo.local b/modulefiles/tasks/noaacloud/make_sfc_climo.local new file mode 100644 index 000000000..a97b9e4e5 --- /dev/null +++ b/modulefiles/tasks/noaacloud/make_sfc_climo.local @@ -0,0 +1,12 @@ +#%Module + +module use /contrib/spack-stack/apps/srw-app-test/modulefiles/Core +module load stack-intel +module load stack-intel-oneapi-mpi +module load netcdf-c +module load netcdf-fortran + +module load miniconda_regional_workflow +module load rocoto +prepend-path PATH /contrib/GST/miniconda/envs/regional_workflow/bin +setenv SRW_ENV regional_workflow diff --git a/modulefiles/tasks/noaacloud/miniconda_regional_workflow b/modulefiles/tasks/noaacloud/miniconda_regional_workflow new file mode 100644 index 000000000..936a9d4c8 --- /dev/null +++ b/modulefiles/tasks/noaacloud/miniconda_regional_workflow @@ -0,0 +1,5 @@ +#%Module +module use -a /contrib/GST/miniconda3/modulefiles +module load miniconda3/4.10.3 + +setenv SRW_ENV regional_workflow diff --git a/modulefiles/tasks/noaacloud/run_fcst.local b/modulefiles/tasks/noaacloud/run_fcst.local new file mode 100644 index 000000000..ea7d89b73 --- /dev/null +++ b/modulefiles/tasks/noaacloud/run_fcst.local @@ -0,0 +1,14 @@ +#%Module + +module use /contrib/spack-stack/apps/srw-app-test/modulefiles/Core +module load stack-intel +module load stack-intel-oneapi-mpi +module load netcdf-c +module load netcdf-fortran +module load libpng +module load jasper + +module load miniconda_regional_workflow +module load rocoto +prepend-path PATH /contrib/GST/miniconda/envs/regional_workflow/bin +setenv SRW_ENV regional_workflow diff --git a/modulefiles/tasks/noaacloud/run_post.local b/modulefiles/tasks/noaacloud/run_post.local new file mode 100644 index 000000000..ea7d89b73 --- /dev/null +++ b/modulefiles/tasks/noaacloud/run_post.local @@ -0,0 +1,14 @@ +#%Module + +module use /contrib/spack-stack/apps/srw-app-test/modulefiles/Core +module load stack-intel +module load stack-intel-oneapi-mpi +module load netcdf-c +module load netcdf-fortran +module load libpng +module load jasper + +module load miniconda_regional_workflow +module load rocoto +prepend-path PATH /contrib/GST/miniconda/envs/regional_workflow/bin +setenv SRW_ENV regional_workflow diff --git a/modulefiles/tasks/orion/get_extrn_ics.local b/modulefiles/tasks/orion/get_extrn_ics.local index a9d5b4412..655aa30a2 100644 --- a/modulefiles/tasks/orion/get_extrn_ics.local +++ b/modulefiles/tasks/orion/get_extrn_ics.local @@ -3,4 +3,5 @@ ############################################################# module purge +module load miniconda_regional_workflow diff --git a/modulefiles/tasks/orion/get_extrn_lbcs.local b/modulefiles/tasks/orion/get_extrn_lbcs.local index 09f37151a..a05a15faa 100644 --- a/modulefiles/tasks/orion/get_extrn_lbcs.local +++ b/modulefiles/tasks/orion/get_extrn_lbcs.local @@ -4,3 +4,4 @@ module purge +module load miniconda_regional_workflow diff --git a/modulefiles/tasks/orion/make_grid.local b/modulefiles/tasks/orion/make_grid.local index 4de2a79ca..92505cf09 100644 --- a/modulefiles/tasks/orion/make_grid.local +++ b/modulefiles/tasks/orion/make_grid.local @@ -1,6 +1,3 @@ #%Module -module use -a /apps/contrib/miniconda3-noaa-gsl/modulefiles -module load miniconda3/3.8 - -setenv SRW_ENV regional_workflow +module load miniconda_regional_workflow diff 
--git a/modulefiles/tasks/orion/make_ics.local b/modulefiles/tasks/orion/make_ics.local index 3703a9ba1..61a3a7725 100644 --- a/modulefiles/tasks/orion/make_ics.local +++ b/modulefiles/tasks/orion/make_ics.local @@ -1,5 +1,2 @@ #%Module -module use -a /apps/contrib/miniconda3-noaa-gsl/modulefiles -module load miniconda3/3.8 - -setenv SRW_ENV regional_workflow +module load miniconda_regional_workflow diff --git a/modulefiles/tasks/orion/make_lbcs.local b/modulefiles/tasks/orion/make_lbcs.local index 3703a9ba1..61a3a7725 100644 --- a/modulefiles/tasks/orion/make_lbcs.local +++ b/modulefiles/tasks/orion/make_lbcs.local @@ -1,5 +1,2 @@ #%Module -module use -a /apps/contrib/miniconda3-noaa-gsl/modulefiles -module load miniconda3/3.8 - -setenv SRW_ENV regional_workflow +module load miniconda_regional_workflow diff --git a/modulefiles/tasks/orion/miniconda_regional_workflow b/modulefiles/tasks/orion/miniconda_regional_workflow new file mode 100644 index 000000000..3703a9ba1 --- /dev/null +++ b/modulefiles/tasks/orion/miniconda_regional_workflow @@ -0,0 +1,5 @@ +#%Module +module use -a /apps/contrib/miniconda3-noaa-gsl/modulefiles +module load miniconda3/3.8 + +setenv SRW_ENV regional_workflow diff --git a/modulefiles/tasks/orion/run_fcst.local b/modulefiles/tasks/orion/run_fcst.local index 3703a9ba1..61a3a7725 100644 --- a/modulefiles/tasks/orion/run_fcst.local +++ b/modulefiles/tasks/orion/run_fcst.local @@ -1,5 +1,2 @@ #%Module -module use -a /apps/contrib/miniconda3-noaa-gsl/modulefiles -module load miniconda3/3.8 - -setenv SRW_ENV regional_workflow +module load miniconda_regional_workflow diff --git a/scripts/exregional_get_extrn_mdl_files.sh b/scripts/exregional_get_extrn_mdl_files.sh index ac5127eb8..5a3fe04cf 100755 --- a/scripts/exregional_get_extrn_mdl_files.sh +++ b/scripts/exregional_get_extrn_mdl_files.sh @@ -42,9 +42,10 @@ print_info_msg " Entering script: \"${scrfunc_fn}\" In directory: \"${scrfunc_dir}\" -This is the ex-script for the task that copies/fetches to a local directory -either from disk or HPSS) the external model files from which initial or -boundary condition files for the FV3 will be generated. +This is the ex-script for the task that copies or fetches external model +input data from disk, HPSS, or a URL, and stages them to the +workflow-specified location so that they may be used to generate initial +or lateral boundary conditions for the FV3. ========================================================================" # #----------------------------------------------------------------------- @@ -56,18 +57,10 @@ boundary condition files for the FV3 will be generated. #----------------------------------------------------------------------- # valid_args=( \ -"ics_or_lbcs" \ -"use_user_staged_extrn_files" \ "extrn_mdl_cdate" \ -"extrn_mdl_lbc_spec_fhrs" \ -"extrn_mdl_fns_on_disk" \ -"extrn_mdl_fns_in_arcv" \ -"extrn_mdl_source_dir" \ +"extrn_mdl_name" \ "extrn_mdl_staging_dir" \ -"extrn_mdl_arcv_fmt" \ -"extrn_mdl_arcv_fns" \ -"extrn_mdl_arcv_fps" \ -"extrn_mdl_arcvrel_dir" \ +"time_offset_hrs" \ ) process_args valid_args "$@" # @@ -80,715 +73,105 @@ process_args valid_args "$@" #----------------------------------------------------------------------- # print_input_args valid_args -# -#----------------------------------------------------------------------- -# -# Set num_files_to_copy to the number of external model files that need -# to be copied or linked to from/at a location on disk. 
Then set -# extrn_mdl_fps_on_disk to the full paths of the external model files -# on disk. -# -#----------------------------------------------------------------------- -# -num_files_to_copy="${#extrn_mdl_fns_on_disk[@]}" -prefix="${extrn_mdl_source_dir}/" -extrn_mdl_fps_on_disk=( "${extrn_mdl_fns_on_disk[@]/#/$prefix}" ) -# -#----------------------------------------------------------------------- -# -# Loop through the list of external model files and check whether they -# all exist on disk. The counter num_files_found_on_disk keeps track of -# the number of external model files that were actually found on disk in -# the directory specified by extrn_mdl_source_dir. -# -# If the location extrn_mdl_source_dir is a user-specified directory -# (i.e. if use_user_staged_extrn_files is set to "TRUE"), then if/when we -# encounter the first file that does not exist, we exit the script with -# an error message. If extrn_mdl_source_dir is a system directory (i.e. -# if use_user_staged_extrn_files is not set to "TRUE"), then if/when we -# encounter the first file that does not exist or exists but is younger -# than a certain age, we break out of the loop and try to fetch all the -# necessary external model files from HPSS. The age cutoff is to ensure -# that files are not still being written to. -# -#----------------------------------------------------------------------- -# -num_files_found_on_disk="0" -min_age="5" # Minimum file age, in minutes. - -for fp in "${extrn_mdl_fps_on_disk[@]}"; do - # - # If the external model file exists, then... - # - if [ -f "$fp" ]; then - # - # Increment the counter that keeps track of the number of external - # model files found on disk and print out an informational message. - # - num_files_found_on_disk=$(( num_files_found_on_disk+1 )) - print_info_msg " -File fp exists on disk: - fp = \"$fp\"" - # - # If we are NOT searching for user-staged external model files, then - # we also check that the current file is at least min_age minutes old. - # If not, we try searching for all the external model files on HPSS. - # - if [ "${use_user_staged_extrn_files}" != "TRUE" ]; then - - if [ $( find "$fp" -mmin +${min_age} ) ]; then - - print_info_msg " -File fp is older than the minimum required age of min_age minutes: - fp = \"$fp\" - min_age = ${min_age} minutes" - - else - - print_info_msg " -File fp is NOT older than the minumum required age of min_age minutes: - fp = \"$fp\" - min_age = ${min_age} minutes -Will try fetching all external model files from HPSS. Not checking -presence and age of remaining external model files on disk." - break - - fi - - fi - # - # If the external model file does not exist, then... - # - else - # - # If an external model file is not found and we are searching for it - # in a user-specified directory, print out an error message and exit. - # - if [ "${use_user_staged_extrn_files}" = "TRUE" ]; then - - print_err_msg_exit "\ -File fp does NOT exist on disk: - fp = \"$fp\" -Please ensure that the directory specified by extrn_mdl_source_dir exists -and that all the files specified in the array extrn_mdl_fns_on_disk exist -within it: - extrn_mdl_source_dir = \"${extrn_mdl_source_dir}\" - extrn_mdl_fns_on_disk = ( $( printf "\"%s\" " "${extrn_mdl_fns_on_disk[@]}" ))" - # - # If an external model file is not found and we are searching for it - # in a system directory, give up on the system directory and try instead - # to get all the external model files from HPSS. 
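For readers skimming this hunk, the minimum-age guard that the removed comments describe reduces to a single find(1) test. A minimal standalone sketch, using the same five-minute threshold and a hypothetical file path:

    # Illustrative sketch only; the path is a placeholder.
    min_age=5   # minutes a file must be unmodified before it is trusted
    fp="/scratch/extrn_mdl/gfs.t00z.pgrb2.0p25.f000"   # hypothetical path
    if [ -n "$( find "$fp" -mmin +${min_age} 2>/dev/null )" ]; then
      echo "File is at least ${min_age} minutes old; safe to copy from disk."
    else
      echo "File is missing or may still be written; fall back to HPSS."
    fi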
- # - else - - print_info_msg " -File fp does NOT exist on disk: - fp = \"$fp\" -Will try fetching all external model files from HPSS. Not checking -presence and age of remaining external model files on disk." - break - - fi - fi - -done -# -#----------------------------------------------------------------------- -# -# Set the variable (data_src) that determines the source of the external -# model files (either disk or HPSS). -# -#----------------------------------------------------------------------- -# -if [ "${num_files_found_on_disk}" -eq "${num_files_to_copy}" ]; then - data_src="disk" -else - data_src="HPSS" -fi -if [ ${NOMADS} == "TRUE" ]; then - data_src="online" -fi # #----------------------------------------------------------------------- # -# If the source of the external model files is "disk", copy the files -# from the source directory on disk to a staging directory. +# Set up variables for call to retrieve_data.py # #----------------------------------------------------------------------- # -extrn_mdl_fns_on_disk_str="( "$( printf "\"%s\" " "${extrn_mdl_fns_on_disk[@]}" )")" - -if [ "${data_src}" = "disk" ]; then - - if [ "${RUN_ENVIR}" = "nco" ]; then - - print_info_msg " -Creating links in staging directory (extrn_mdl_staging_dir) to external -model files on disk (extrn_mdl_fns_on_disk) in the source directory -(extrn_mdl_source_dir): - extrn_mdl_staging_dir = \"${extrn_mdl_staging_dir}\" - extrn_mdl_source_dir = \"${extrn_mdl_source_dir}\" - extrn_mdl_fns_on_disk = ${extrn_mdl_fns_on_disk_str}" - - ln_vrfy -sf -t ${extrn_mdl_staging_dir} ${extrn_mdl_fps_on_disk[@]} - +set -x +if [ "${ICS_OR_LBCS}" = "ICS" ]; then + if [ ${time_offset_hrs} -eq 0 ] ; then + anl_or_fcst="anl" else - - # - # If the external model files are user-staged, then simply link to - # them. Otherwise, if they are on the system disk, copy them to the - # staging directory. - # - if [ "${use_user_staged_extrn_files}" = "TRUE" ]; then - print_info_msg " -Creating symlinks in the staging directory (extrn_mdl_staging_dir) to the -external model files on disk (extrn_mdl_fns_on_disk) in the source directory -(extrn_mdl_source_dir): - extrn_mdl_source_dir = \"${extrn_mdl_source_dir}\" - extrn_mdl_fns_on_disk = ${extrn_mdl_fns_on_disk_str} - extrn_mdl_staging_dir = \"${extrn_mdl_staging_dir}\"" - ln_vrfy -sf -t ${extrn_mdl_staging_dir} ${extrn_mdl_fps_on_disk[@]} - else - print_info_msg " -Copying external model files on disk (extrn_mdl_fns_on_disk) from source -directory (extrn_mdl_source_dir) to staging directory (extrn_mdl_staging_dir): - extrn_mdl_source_dir = \"${extrn_mdl_source_dir}\" - extrn_mdl_fns_on_disk = ${extrn_mdl_fns_on_disk_str} - extrn_mdl_staging_dir = \"${extrn_mdl_staging_dir}\"" - cp_vrfy ${extrn_mdl_fps_on_disk[@]} ${extrn_mdl_staging_dir} - fi - + anl_or_fcst="fcst" fi -# -#----------------------------------------------------------------------- -# -# Print message indicating successful completion of script. -# -#----------------------------------------------------------------------- -# - if [ "${ics_or_lbcs}" = "ICS" ]; then - - print_info_msg " -======================================================================== -Successfully copied or linked to external model files on disk needed for -generating initial conditions and surface fields for the FV3 forecast!!! 
- -Exiting script: \"${scrfunc_fn}\" -In directory: \"${scrfunc_dir}\" -========================================================================" - - elif [ "${ics_or_lbcs}" = "LBCS" ]; then - - print_info_msg " -======================================================================== -Successfully copied or linked to external model files on disk needed for -generating lateral boundary conditions for the FV3 forecast!!! - -Exiting script: \"${scrfunc_fn}\" -In directory: \"${scrfunc_dir}\" -========================================================================" - + fcst_hrs=${time_offset_hrs} + file_names=${EXTRN_MDL_FILES_ICS[@]} + if [ ${extrn_mdl_name} = FV3GFS ] ; then + file_type=$FV3GFS_FILE_FMT_ICS fi -# -#----------------------------------------------------------------------- -# -# If the source of the external model files is "HPSS", fetch them from -# HPSS. -# -#----------------------------------------------------------------------- -# -elif [ "${data_src}" = "HPSS" ]; then -# -#----------------------------------------------------------------------- -# -# Set extrn_mdl_fps_in_arcv to the full paths within the archive files of -# the external model files. -# -#----------------------------------------------------------------------- -# - prefix=${extrn_mdl_arcvrel_dir:+${extrn_mdl_arcvrel_dir}/} - extrn_mdl_fps_in_arcv=( "${extrn_mdl_fns_in_arcv[@]/#/$prefix}" ) - - extrn_mdl_fps_in_arcv_str="( "$( printf "\"%s\" " "${extrn_mdl_fps_in_arcv[@]}" )")" - extrn_mdl_arcv_fps_str="( "$( printf "\"%s\" " "${extrn_mdl_arcv_fps[@]}" )")" - - print_info_msg " -Fetching external model files from HPSS. The full paths to these files -in the archive file(s) (extrn_mdl_fps_in_arcv), the archive files on HPSS -in which these files are stored (extrn_mdl_arcv_fps), and the staging -directory to which they will be copied (extrn_mdl_staging_dir) are: - extrn_mdl_fps_in_arcv = ${extrn_mdl_fps_in_arcv_str} - extrn_mdl_arcv_fps = ${extrn_mdl_arcv_fps_str} - extrn_mdl_staging_dir = \"${extrn_mdl_staging_dir}\"" -# -#----------------------------------------------------------------------- -# -# Get the number of archive files to consider. -# -#----------------------------------------------------------------------- -# - num_arcv_files="${#extrn_mdl_arcv_fps[@]}" -# -#----------------------------------------------------------------------- -# -# Consider the case of the archive file to be fetched from HPSS being in -# tar format. -# -#----------------------------------------------------------------------- -# - if [ "${extrn_mdl_arcv_fmt}" = "tar" ]; then -# -#----------------------------------------------------------------------- -# -# Loop through the set of archive files specified in extrn_mdl_arcv_fps -# and extract a subset of the specified external model files from each. -# -#----------------------------------------------------------------------- -# - num_files_to_extract="${#extrn_mdl_fps_in_arcv[@]}" - - for (( narcv=0; narcv<${num_arcv_files}; narcv++ )); do - - narcv_formatted=$( printf "%02d" $narcv ) - arcv_fp="${extrn_mdl_arcv_fps[$narcv]}" -# -# Before trying to extract (a subset of) the external model files from -# the current tar archive file (which is on HPSS), create a list of those -# external model files that are stored in the current tar archive file. -# For this purpose, we first use the "htar -tvf" command to list all the -# external model files that are in the current archive file and store the -# result in a log file. 
(This command also indirectly checks whether the -# archive file exists on HPSS.) We then grep this log file for each -# external model file and create a list containing only those external -# model files that exist in the current archive. -# -# Note that the "htar -tvf" command will fail if the tar archive file -# itself doesn't exist on HPSS, but it won't fail if any of the external -# model file names passed to it don't exist in the archive file. In the -# latter case, the missing files' names simply won't appear in the log -# file. -# - htar_log_fn="log.htar_tvf.${narcv_formatted}" - htar -tvf ${arcv_fp} ${extrn_mdl_fps_in_arcv[@]} >& ${htar_log_fn} || \ - print_err_msg_exit "\ -htar file list operation (\"htar -tvf ...\") failed. Check the log file -htar_log_fn in the staging directory (extrn_mdl_staging_di)r for details: - extrn_mdl_staging_dir = \"${extrn_mdl_staging_dir}\" - htar_log_fn = \"${htar_log_fn}\"" - - i=0 - files_in_crnt_arcv=() - for (( nfile=0; nfile<${num_files_to_extract}; nfile++ )); do - extrn_mdl_fp="${extrn_mdl_fps_in_arcv[$nfile]}" -# grep -n ${extrn_mdl_fp} ${htar_log_fn} 2>&1 && { \ - grep -n ${extrn_mdl_fp} ${htar_log_fn} > /dev/null 2>&1 && { \ - files_in_crnt_arcv[$i]="${extrn_mdl_fp}"; \ - i=$((i+1)); \ - } - done -# -# If none of the external model files were found in the current archive -# file, print out an error message and exit. -# - num_files_in_crnt_arcv=${#files_in_crnt_arcv[@]} - if [ ${num_files_in_crnt_arcv} -eq 0 ]; then - extrn_mdl_fps_in_arcv_str="( "$( printf "\"%s\" " "${extrn_mdl_fps_in_arcv[@]}" )")" - print_err_msg_exit "\ -The current archive file (arcv_fp) does not contain any of the external -model files listed in extrn_mdl_fps_in_arcv: - arcv_fp = \"${arcv_fp}\" - extrn_mdl_fps_in_arcv = ${extrn_mdl_fps_in_arcv_str} -The archive file should contain at least one external model file; otherwise, -it would not be needed." - fi -# -# Extract from the current tar archive file on HPSS all the external model -# files that exist in that archive file. Also, save the output of the -# "htar -xvf" command in a log file for debugging (if necessary). -# - htar_log_fn="log.htar_xvf.${narcv_formatted}" - htar -xvf ${arcv_fp} ${files_in_crnt_arcv[@]} >& ${htar_log_fn} || \ - print_err_msg_exit "\ -htar file extract operation (\"htar -xvf ...\") failed. Check the log -file htar_log_fn in the staging directory (extrn_mdl_staging_dir) for -details: - extrn_mdl_staging_dir = \"${extrn_mdl_staging_dir}\" - htar_log_fn = \"${htar_log_fn}\"" -# -# Note that the htar file extract operation above may return with a 0 -# exit code (success) even if one or more (or all) external model files -# that it is supposed to contain were not extracted. The names of those -# files that were not extracted will not be listed in the log file. Thus, -# we now check whether the log file contains the name of each external -# model file that should have been extracted. If any are missing, we -# print out a message and exit the script because initial condition and -# surface field files needed by FV3 cannot be generated without all the -# external model files. -# - for fp in "${files_in_crnt_arcv[@]}"; do -# -# If the file path is absolute (i.e. starts with a "/"), then drop the -# leading "/" because htar strips it before writing the file path to the -# log file. 
-# - fp=${fp#/} - - grep -n "${fp}" "${htar_log_fn}" > /dev/null 2>&1 || \ - print_err_msg_exit "\ -External model file fp not extracted from tar archive file arcv_fp: - arcv_fp = \"${arcv_fp}\" - fp = \"$fp\" -Check the log file htar_log_fn in the staging directory (extrn_mdl_staging_dir) -for details: - extrn_mdl_staging_dir = \"${extrn_mdl_staging_dir}\" - htar_log_fn = \"${htar_log_fn}\"" - - done - - done -# -#----------------------------------------------------------------------- -# -# For each external model file that was supposed to have been extracted -# from the set of specified archive files, loop through the extraction -# log files and check that it appears exactly once in one of the log files. -# If it doesn't appear at all, then it means that file was not extracted, -# and if it appears more than once, then something else is wrong. In -# either case, print out an error message and exit. -# -#----------------------------------------------------------------------- -# - for (( nfile=0; nfile<${num_files_to_extract}; nfile++ )); do - extrn_mdl_fp="${extrn_mdl_fps_in_arcv[$nfile]}" -# -# If the file path is absolute (i.e. starts with a "/"), then drop the -# leading "/" because htar strips it before writing the file path to the -# log file. -# - extrn_mdl_fp=${extrn_mdl_fp#/} - - num_occurs=0 - for (( narcv=0; narcv<${num_arcv_files}; narcv++ )); do - narcv_formatted=$( printf "%02d" $narcv ) - htar_log_fn="log.htar_xvf.${narcv_formatted}" - grep -n ${extrn_mdl_fp} ${htar_log_fn} > /dev/null 2>&1 && { \ - num_occurs=$((num_occurs+1)); \ - } - done - - if [ ${num_occurs} -eq 0 ]; then - print_err_msg_exit "\ -The current external model file (extrn_mdl_fp) does not appear in any of -the archive extraction log files: - extrn_mdl_fp = \"${extrn_mdl_fp}\" -Thus, it was not extracted, likely because it doesn't exist in any of the -archive files." - elif [ ${num_occurs} -gt 1 ]; then - print_err_msg_exit "\ -The current external model file (extrn_mdl_fp) appears more than once in -the archive extraction log files: - extrn_mdl_fp = \"${extrn_mdl_fp}\" -The number of times it occurs in the log files is: - num_occurs = ${num_occurs} -Thus, it was extracted from more than one archive file, with the last one -that was extracted overwriting all previous ones. This should normally -not happen." - fi - - done -# -#----------------------------------------------------------------------- -# -# If extrn_mdl_arcvrel_dir is not set to the current directory (i.e. it -# is not equal to "."), then the htar command will have created the -# subdirectory "./${extrn_mdl_arcvrel_dir}" under the current directory -# and placed the extracted files there. In that case, we move these -# extracted files back to the current directory and then remove the -# subdirectory created by htar. -# -#----------------------------------------------------------------------- -# - if [ "${extrn_mdl_arcvrel_dir}" != "." ]; then -# -# The code below works if extrn_mdl_arcvrel_dir starts with a "/" or a -# "./", which are the only case encountered thus far. The code may have -# to be modified to accomodate other cases. -# - if [ "${extrn_mdl_arcvrel_dir:0:1}" = "/" ] || \ - [ "${extrn_mdl_arcvrel_dir:0:2}" = "./" ]; then -# -# Strip the "/" or "./" from the beginning of extrn_mdl_arcvrel_dir to -# obtain the relative directory from which to move the extracted files -# to the current directory. Then move the files. 
-# - rel_dir=$( printf "%s" "${extrn_mdl_arcvrel_dir}" | \ - $SED -r 's%^(\/|\.\/)([^/]*)(.*)%\2\3%' ) - mv_vrfy ${rel_dir}/* . -# -# Get the first subdirectory in rel_dir, i.e. the subdirectory before the -# first forward slash. This is the subdirectory that we want to remove -# since it no longer contains any files (only subdirectories). Then remove -# it. -# - subdir_to_remove=$( printf "%s" "${rel_dir}" | \ - $SED -r 's%^([^/]*)(.*)%\1%' ) - rm_vrfy -rf ./${subdir_to_remove} -# -# If extrn_mdl_arcvrel_dir does not start with a "/" (and it is not -# equal to "."), then print out an error message and exit. -# - else - - print_err_msg_exit "\ -The archive-relative directory specified by extrn_mdl_arcvrel_dir [i.e. -the directory \"within\" the tar file(s) listed in extrn_mdl_arcv_fps] is -not the current directory (i.e. it is not \".\"), and it does not start -with a \"/\" or a \"./\": - extrn_mdl_arcvrel_dir = \"${extrn_mdl_arcvrel_dir}\" - extrn_mdl_arcv_fps = ${extrn_mdl_arcv_fps_str} -This script must be modified to account for this case." - - fi - - fi -# -#----------------------------------------------------------------------- -# -# Consider the case of the archive file to be fetched from HPSS being in -# zip format. -# -#----------------------------------------------------------------------- -# - elif [ "${extrn_mdl_arcv_fmt}" = "zip" ]; then -# -#----------------------------------------------------------------------- -# -# For archive files that are in "zip" format files, the array extrn_mdl_arcv_fps -# containing the list of archive files should contain only one element, -# i.e. there should be only one archive file to consider. Check for this. -# If this ever changes (e.g. due to the way an external model that uses -# the "zip" format archives its output files on HPSS), the code below must -# be modified to loop over all archive files. -# -#----------------------------------------------------------------------- -# - if [ "${num_arcv_files}" -gt 1 ]; then - print_err_msg_exit "\ -Currently, this script is coded to handle only one archive file if the -archive file format is specified to be \"zip\", but the number of archive -files (num_arcv_files) passed to this script is greater than 1: - extrn_mdl_arcv_fmt = \"${extrn_mdl_arcv_fmt}\" - num_arcv_files = ${num_arcv_files} -Please modify the script to handle more than one \"zip\" archive file. -Note that code already exists in this script that can handle multiple -archive files if the archive file format is specified to be \"tar\", so -that can be used as a guide for the \"zip\" case." - else - arcv_fn="${extrn_mdl_arcv_fns[0]}" - arcv_fp="${extrn_mdl_arcv_fps[0]}" - fi -# -#----------------------------------------------------------------------- -# -# Fetch the zip archive file from HPSS. -# -#----------------------------------------------------------------------- -# - hsi_log_fn="log.hsi_get" - hsi get "${arcv_fp}" >& ${hsi_log_fn} || \ - print_err_msg_exit "\ -hsi file get operation (\"hsi get ...\") failed. Check the log file -hsi_log_fn in the staging directory (extrn_mdl_staging_dir) for details: - extrn_mdl_staging_dir = \"${extrn_mdl_staging_dir}\" - hsi_log_fn = \"${hsi_log_fn}\"" -# -#----------------------------------------------------------------------- -# -# List the contents of the zip archive file and save the result in a log -# file. 
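Before the zip-format handling below, a worked example of the archive-relative directory reduction performed just above; the value assigned to extrn_mdl_arcvrel_dir here is hypothetical and serves only to make the two sed substitutions concrete:

    # Illustrative sketch only (GNU sed assumed, matching the original $SED -r usage).
    extrn_mdl_arcvrel_dir="./gpfs/dell1/nco/ops"   # hypothetical value
    rel_dir=$( printf "%s" "${extrn_mdl_arcvrel_dir}" | \
               sed -r 's%^(\/|\.\/)([^/]*)(.*)%\2\3%' )       # -> gpfs/dell1/nco/ops
    subdir_to_remove=$( printf "%s" "${rel_dir}" | \
                        sed -r 's%^([^/]*)(.*)%\1%' )          # -> gpfs
    # The extracted files are then moved out of ${rel_dir} and the now-empty
    # ${subdir_to_remove} tree is removed.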
-# -#----------------------------------------------------------------------- -# - unzip_log_fn="log.unzip_lv" - unzip -l -v ${arcv_fn} >& ${unzip_log_fn} || \ - print_err_msg_exit "\ -unzip operation to list the contents of the zip archive file arcv_fn in -the staging directory (extrn_mdl_staging_dir) failed. Check the log -file unzip_log_fn in that directory for details: - arcv_fn = \"${arcv_fn}\" - extrn_mdl_staging_dir = \"${extrn_mdl_staging_dir}\" - unzip_log_fn = \"${unzip_log_fn}\"" -# -#----------------------------------------------------------------------- -# -# Check that the log file from the unzip command above contains the name -# of each external model file. If any are missing, then the corresponding -# files are not in the zip file and thus cannot be extracted. In that -# case, print out a message and exit the script because initial condition -# and surface field files for the FV3-LAM cannot be generated without all -# the external model files. -# -#----------------------------------------------------------------------- -# - for fp in "${extrn_mdl_fps_in_arcv[@]}"; do - grep -n "${fp}" "${unzip_log_fn}" > /dev/null 2>&1 || \ - print_err_msg_exit "\ -External model file fp does not exist in the zip archive file arcv_fn in -the staging directory (extrn_mdl_staging_dir). Check the log file -unzip_log_fn in that directory for the contents of the zip archive: - extrn_mdl_staging_dir = \"${extrn_mdl_staging_dir}\" - arcv_fn = \"${arcv_fn}\" - fp = \"$fp\" - unzip_log_fn = \"${unzip_log_fn}\"" - done -# -#----------------------------------------------------------------------- -# -# Extract the external model files from the zip file on HPSS. Note that -# the -o flag to unzip is needed to overwrite existing files. Otherwise, -# unzip will wait for user input as to whether the existing files should -# be overwritten. -# -#----------------------------------------------------------------------- -# - unzip_log_fn="log.unzip" - unzip -o "${arcv_fn}" ${extrn_mdl_fps_in_arcv[@]} >& ${unzip_log_fn} || \ - print_err_msg_exit "\ -unzip file extract operation (\"unzip -o ...\") failed. Check the log -file unzip_log_fn in the staging directory (extrn_mdl_staging_dir) for -details: - extrn_mdl_staging_dir = \"${extrn_mdl_staging_dir}\" - unzip_log_fn = \"${unzip_log_fn}\"" -# -# NOTE: -# If extrn_mdl_arcvrel_dir is not empty, the unzip command above will -# create a subdirectory under extrn_mdl_staging_dir and place the external -# model files there. We have not encountered this for the RAP and HRRR -# models, but it may happen for other models in the future. In that case, -# extra code must be included here to move the external model files from -# the subdirectory up to extrn_mdl_staging_dir and then the subdirectory -# (analogous to what is done above for the case of extrn_mdl_arcv_fmt set -# to "tar". -# - + input_file_path=${EXTRN_MDL_SOURCE_BASEDIR_ICS:-$EXTRN_MDL_SYSBASEDIR_ICS} + +elif [ "${ICS_OR_LBCS}" = "LBCS" ]; then + anl_or_fcst="fcst" + first_time=$((time_offset_hrs + LBC_SPEC_INTVL_HRS)) + last_time=$((time_offset_hrs + FCST_LEN_HRS)) + fcst_hrs="${first_time} ${last_time} ${LBC_SPEC_INTVL_HRS}" + file_names=${EXTRN_MDL_FILES_LBCS[@]} + if [ ${extrn_mdl_name} = FV3GFS ] ; then + file_type=$FV3GFS_FILE_FMT_LBCS fi -# -#----------------------------------------------------------------------- -# -# Print message indicating successful completion of script. 
-# -#----------------------------------------------------------------------- -# - if [ "${ics_or_lbcs}" = "ICS" ]; then - - print_info_msg " -======================================================================== -External model files needed for generating initial condition and surface -fields for the FV3-LAM successfully fetched from HPSS!!! - -Exiting script: \"${scrfunc_fn}\" -In directory: \"${scrfunc_dir}\" -========================================================================" - - elif [ "${ics_or_lbcs}" = "LBCS" ]; then + input_file_path=${EXTRN_MDL_SOURCE_BASEDIR_LBCS:-$EXTRN_MDL_SYSBASEDIR_LBCS} +fi - print_info_msg " -======================================================================== -External model files needed for generating lateral boundary conditions -on the halo of the FV3-LAM's regional grid successfully fetched from -HPSS!!! +data_stores="${EXTRN_MDL_DATA_STORES}" -Exiting script: \"${scrfunc_fn}\" -In directory: \"${scrfunc_dir}\" -========================================================================" +yyyymmddhh=${extrn_mdl_cdate:0:10} +yyyy=${yyyymmddhh:0:4} +yyyymm=${yyyymmddhh:0:6} +yyyymmdd=${yyyymmddhh:0:8} +mm=${yyyymmddhh:4:2} +dd=${yyyymmddhh:6:2} +hh=${yyyymmddhh:8:2} - fi - -elif [ "${data_src}" = "online" ]; then - print_info_msg " -======================================================================== -getting data from online nomads data sources -========================================================================" +input_file_path=$(eval echo ${input_file_path}) # #----------------------------------------------------------------------- # -# Set extrn_mdl_fps to the full paths within the archive files of the -# external model output files. +# Set up optional flags for calling retrieve_data.py # #----------------------------------------------------------------------- # - prefix=${extrn_mdl_arcvrel_dir:+${extrn_mdl_arcvrel_dir}/} - extrn_mdl_fps=( "${extrn_mdl_fns_on_disk[@]/#/$prefix}" ) +additional_flags="" - extrn_mdl_fps_str="( "$( printf "\"%s\" " "${extrn_mdl_fps[@]}" )")" - print_info_msg " -Getting external model files from nomads: - extrn_mdl_fps= ${extrn_mdl_fps_str}" - - num_files_to_extract="${#extrn_mdl_fps[@]}" - wget_LOG_FN="log.wget.txt" - for (( nfile=0; nfile<${num_files_to_extract}; nfile++ )); do - cp ../../../${extrn_mdl_fps[$nfile]} . || \ - print_err_msg_exit "\ - onlie file ${extrn_mdl_fps[$nfile]} not found." - done +if [ -n "${file_type:-}" ] ; then + additional_flags="$additional_flags \ + --file_type ${file_type}" +fi +if [ -n "${file_names:-}" ] ; then + additional_flags="$additional_flags \ + --file_templates ${file_names[@]}" +fi +if [ -n "${input_file_path:-}" ] ; then + data_stores="disk $data_stores" + additional_flags="$additional_flags \ + --input_file_path ${input_file_path}" fi + # #----------------------------------------------------------------------- # -# Create a variable definitions file (a shell script) and save in it the -# values of several external-model-associated variables generated in this -# script that will be needed by downstream workflow tasks. 
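For orientation, once the optional flags assembled above are expanded, the retrieve_data.py command constructed just below might look roughly like the following. Every value shown (cycle date, external model, paths, data stores, file type, summary file name) is illustrative, not taken from the patch:

    python3 -u $USHDIR/retrieve_data.py \
      --debug \
      --anl_or_fcst anl \
      --config $USHDIR/templates/data_locations.yml \
      --cycle_date 2022060700 \
      --data_stores disk hpss \
      --external_model FV3GFS \
      --fcst_hrs 0 \
      --output_path /path/to/expt/2022060700/FV3GFS/for_ICS \
      --summary_file extrn_mdl_var_defns.sh \
      --file_type grib2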
+# Call ush script to retrieve files # #----------------------------------------------------------------------- # -if [ "${ics_or_lbcs}" = "ICS" ]; then - extrn_mdl_var_defns_fn="${EXTRN_MDL_ICS_VAR_DEFNS_FN}" -elif [ "${ics_or_lbcs}" = "LBCS" ]; then - extrn_mdl_var_defns_fn="${EXTRN_MDL_LBCS_VAR_DEFNS_FN}" -fi -extrn_mdl_var_defns_fp="${extrn_mdl_staging_dir}/${extrn_mdl_var_defns_fn}" -check_for_preexist_dir_file "${extrn_mdl_var_defns_fp}" "delete" +cmd=" +python3 -u ${USHDIR}/retrieve_data.py \ + --debug \ + --anl_or_fcst ${anl_or_fcst} \ + --config ${USHDIR}/templates/data_locations.yml \ + --cycle_date ${extrn_mdl_cdate} \ + --data_stores ${data_stores} \ + --external_model ${extrn_mdl_name} \ + --fcst_hrs ${fcst_hrs[@]} \ + --output_path ${extrn_mdl_staging_dir} \ + --summary_file ${EXTRN_MDL_VAR_DEFNS_FN} \ + $additional_flags" -if [ "${data_src}" = "disk" ]; then - extrn_mdl_fns_str="( "$( printf "\"%s\" " "${extrn_mdl_fns_on_disk[@]}" )")" -elif [ "${data_src}" = "HPSS" ]; then - extrn_mdl_fns_str="( "$( printf "\"%s\" " "${extrn_mdl_fns_in_arcv[@]}" )")" -elif [ "${data_src}" = "online" ]; then - extrn_mdl_fns_str="( "$( printf "\"%s\" " "${extrn_mdl_fns_on_disk[@]}" )")" -fi - -settings="\ -DATA_SRC=\"${data_src}\" -EXTRN_MDL_CDATE=\"${extrn_mdl_cdate}\" -EXTRN_MDL_STAGING_DIR=\"${extrn_mdl_staging_dir}\" -EXTRN_MDL_FNS=${extrn_mdl_fns_str}" -# -# If the external model files obtained above were for generating LBCS (as -# opposed to ICs), then add to the external model variable definitions -# file the array variable EXTRN_MDL_LBC_SPEC_FHRS containing the forecast -# hours at which the lateral boundary conditions are specified. -# -if [ "${ics_or_lbcs}" = "LBCS" ]; then - extrn_mdl_lbc_spec_fhrs_str="( "$( printf "\"%s\" " "${extrn_mdl_lbc_spec_fhrs[@]}" )")" - settings="$settings -EXTRN_MDL_LBC_SPEC_FHRS=${extrn_mdl_lbc_spec_fhrs_str}" -fi +$cmd || print_err_msg_exit "\ +Call to retrieve_data.py failed with a non-zero exit status. -{ cat << EOM >> ${extrn_mdl_var_defns_fp} -$settings -EOM -} || print_err_msg_exit "\ -Heredoc (cat) command to create a variable definitions file associated -with the external model from which to generate ${ics_or_lbcs} returned with a -nonzero status. The full path to this variable definitions file is: - extrn_mdl_var_defns_fp = \"${extrn_mdl_var_defns_fp}\"" +The command was: +${cmd} +" # #----------------------------------------------------------------------- # diff --git a/scripts/exregional_make_grid.sh b/scripts/exregional_make_grid.sh index 742770630..3744638c8 100755 --- a/scripts/exregional_make_grid.sh +++ b/scripts/exregional_make_grid.sh @@ -410,7 +410,7 @@ if [ "${GRID_GEN_METHOD}" = "GFDLgrid" ]; then elif [ "${GRID_GEN_METHOD}" = "ESGgrid" ]; then CRES="C${res_equiv}" fi -set_file_param "${GLOBAL_VAR_DEFNS_FP}" "CRES" "\"$CRES\"" +set_file_param "${GLOBAL_VAR_DEFNS_FP}" "CRES" "'$CRES'" # #----------------------------------------------------------------------- # diff --git a/scripts/exregional_make_ics.sh b/scripts/exregional_make_ics.sh index 8a63b1978..fcfefd646 100755 --- a/scripts/exregional_make_ics.sh +++ b/scripts/exregional_make_ics.sh @@ -111,7 +111,7 @@ fi #----------------------------------------------------------------------- # extrn_mdl_staging_dir="${CYCLE_DIR}/${EXTRN_MDL_NAME_ICS}/for_ICS" -extrn_mdl_var_defns_fp="${extrn_mdl_staging_dir}/${EXTRN_MDL_ICS_VAR_DEFNS_FN}" +extrn_mdl_var_defns_fp="${extrn_mdl_staging_dir}/${EXTRN_MDL_VAR_DEFNS_FN}" . 
${extrn_mdl_var_defns_fp} # #----------------------------------------------------------------------- @@ -437,7 +437,7 @@ case "${EXTRN_MDL_NAME_ICS}" in # geogrid_file_input_grid="${FIXgsm}/geo_em.d01.nc_RAPX" vgtyp_from_climo=True - sotyp_from_climo=False + sotyp_from_climo=True vgfrc_from_climo=True minmax_vgfrc_from_climo=True lai_from_climo=True diff --git a/scripts/exregional_make_lbcs.sh b/scripts/exregional_make_lbcs.sh index e1d40fb88..70ba51176 100755 --- a/scripts/exregional_make_lbcs.sh +++ b/scripts/exregional_make_lbcs.sh @@ -109,7 +109,7 @@ fi #----------------------------------------------------------------------- # extrn_mdl_staging_dir="${CYCLE_DIR}/${EXTRN_MDL_NAME_LBCS}/for_LBCS" -extrn_mdl_var_defns_fp="${extrn_mdl_staging_dir}/${EXTRN_MDL_LBCS_VAR_DEFNS_FN}" +extrn_mdl_var_defns_fp="${extrn_mdl_staging_dir}/${EXTRN_MDL_VAR_DEFNS_FN}" . ${extrn_mdl_var_defns_fp} # #----------------------------------------------------------------------- @@ -358,12 +358,12 @@ fi # #----------------------------------------------------------------------- # -num_fhrs="${#EXTRN_MDL_LBC_SPEC_FHRS[@]}" +num_fhrs="${#EXTRN_MDL_FHRS[@]}" for (( i=0; i<${num_fhrs}; i++ )); do # # Get the forecast hour of the external model. # - fhr="${EXTRN_MDL_LBC_SPEC_FHRS[$i]}" + fhr="${EXTRN_MDL_FHRS[$i]}" # # Set external model output file name and file type/format. Note that # these are now inputs into chgres_cube. diff --git a/scripts/exregional_run_ensgridvx.sh b/scripts/exregional_run_ensgridvx.sh index df1b91cd4..6d2cc1e40 100755 --- a/scripts/exregional_run_ensgridvx.sh +++ b/scripts/exregional_run_ensgridvx.sh @@ -133,6 +133,7 @@ export METPLUS_CONF export MET_CONFIG export MODEL export NET +export POST_OUTPUT_DOMAIN_NAME export NUM_ENS_MEMBERS export NUM_PAD export LOG_SUFFIX diff --git a/scripts/exregional_run_ensgridvx_mean.sh b/scripts/exregional_run_ensgridvx_mean.sh index bd50e3cff..9a18823ef 100755 --- a/scripts/exregional_run_ensgridvx_mean.sh +++ b/scripts/exregional_run_ensgridvx_mean.sh @@ -143,6 +143,7 @@ export METPLUS_CONF export MET_CONFIG export MODEL export NET +export POST_OUTPUT_DOMAIN_NAME export INPUT_BASE export LOG_SUFFIX diff --git a/scripts/exregional_run_ensgridvx_prob.sh b/scripts/exregional_run_ensgridvx_prob.sh index 7345c1728..8f45a53cd 100755 --- a/scripts/exregional_run_ensgridvx_prob.sh +++ b/scripts/exregional_run_ensgridvx_prob.sh @@ -144,6 +144,7 @@ export METPLUS_CONF export MET_CONFIG export MODEL export NET +export POST_OUTPUT_DOMAIN_NAME export LOG_SUFFIX # diff --git a/scripts/exregional_run_enspointvx.sh b/scripts/exregional_run_enspointvx.sh index 544f0f006..ed1bd3581 100755 --- a/scripts/exregional_run_enspointvx.sh +++ b/scripts/exregional_run_enspointvx.sh @@ -135,6 +135,7 @@ export METPLUS_CONF export MET_CONFIG export MODEL export NET +export POST_OUTPUT_DOMAIN_NAME export NUM_ENS_MEMBERS ${METPLUS_PATH}/ush/run_metplus.py \ diff --git a/scripts/exregional_run_enspointvx_mean.sh b/scripts/exregional_run_enspointvx_mean.sh index b65c9bd34..b9a0bc0c8 100755 --- a/scripts/exregional_run_enspointvx_mean.sh +++ b/scripts/exregional_run_enspointvx_mean.sh @@ -137,6 +137,7 @@ export METPLUS_CONF export MET_CONFIG export MODEL export NET +export POST_OUTPUT_DOMAIN_NAME ${METPLUS_PATH}/ush/run_metplus.py \ -c ${METPLUS_CONF}/common.conf \ diff --git a/scripts/exregional_run_enspointvx_prob.sh b/scripts/exregional_run_enspointvx_prob.sh index b315faac3..4d2bf8464 100755 --- a/scripts/exregional_run_enspointvx_prob.sh +++ b/scripts/exregional_run_enspointvx_prob.sh 
@@ -137,6 +137,7 @@ export METPLUS_CONF export MET_CONFIG export MODEL export NET +export POST_OUTPUT_DOMAIN_NAME ${METPLUS_PATH}/ush/run_metplus.py \ -c ${METPLUS_CONF}/common.conf \ diff --git a/scripts/exregional_run_fcst.sh b/scripts/exregional_run_fcst.sh index d7c3d48f6..2f768a657 100755 --- a/scripts/exregional_run_fcst.sh +++ b/scripts/exregional_run_fcst.sh @@ -9,7 +9,7 @@ # . ${GLOBAL_VAR_DEFNS_FP} . $USHDIR/source_util_funcs.sh -. $USHDIR/set_FV3nml_stoch_params.sh +. $USHDIR/set_FV3nml_ens_stoch_seeds.sh # #----------------------------------------------------------------------- # @@ -444,8 +444,9 @@ if [ ${WRITE_DOPOST} = "TRUE" ]; then cp_vrfy ${UPP_DIR}/parm/params_grib2_tbl_new . fi -if [ "${DO_ENSEMBLE}" = TRUE ]; then - set_FV3nml_stoch_params cdate="$cdate" || print_err_msg_exit "\ +if [ "${DO_ENSEMBLE}" = TRUE ] && ([ "${DO_SPP}" = TRUE ] || [ "${DO_SPPT}" = TRUE ] || [ "${DO_SHUM}" = TRUE ] \ + [ "${DO_SKEB}" = TRUE ] || [ "${DO_LSM_SPP}" = TRUE ]); then + set_FV3nml_ens_stoch_seeds cdate="$cdate" || print_err_msg_exit "\ Call to function to create the ensemble-based namelist for the current cycle's (cdate) run directory (run_dir) failed: cdate = \"${cdate}\" @@ -513,7 +514,6 @@ if [ ${WRITE_DOPOST} = "TRUE" ]; then yyyymmdd=${cdate:0:8} hh=${cdate:8:2} cyc=$hh - tmmark="tm00" fmn="00" if [ "${RUN_ENVIR}" = "nco" ]; then @@ -538,7 +538,7 @@ if [ ${WRITE_DOPOST} = "TRUE" ]; then post_mn=${post_time:10:2} post_mn_or_null="" post_fn_suffix="GrbF${fhr_d}" - post_renamed_fn_suffix="f${fhr}${post_mn_or_null}.${tmmark}.grib2" + post_renamed_fn_suffix="f${fhr}${post_mn_or_null}.${POST_OUTPUT_DOMAIN_NAME}.grib2" basetime=$( $DATE_UTIL --date "$yyyymmdd $hh" +%y%j%H%M ) symlink_suffix="_${basetime}f${fhr}${post_mn}" @@ -546,7 +546,7 @@ if [ ${WRITE_DOPOST} = "TRUE" ]; then for fid in "${fids[@]}"; do FID=$(echo_uppercase $fid) post_orig_fn="${FID}.${post_fn_suffix}" - post_renamed_fn="${NET}.t${cyc}z.${fid}${post_renamed_fn_suffix}" + post_renamed_fn="${NET}.t${cyc}z.${fid}.${post_renamed_fn_suffix}" mv_vrfy ${run_dir}/${post_orig_fn} ${post_renamed_fn} ln_vrfy -fs ${post_renamed_fn} ${FID}${symlink_suffix} done diff --git a/scripts/exregional_run_gridstatvx.sh b/scripts/exregional_run_gridstatvx.sh index 63288c44e..ae7f960fc 100755 --- a/scripts/exregional_run_gridstatvx.sh +++ b/scripts/exregional_run_gridstatvx.sh @@ -150,6 +150,7 @@ export METPLUS_CONF export MET_CONFIG export MODEL export NET +export POST_OUTPUT_DOMAIN_NAME # #----------------------------------------------------------------------- diff --git a/scripts/exregional_run_pointstatvx.sh b/scripts/exregional_run_pointstatvx.sh index 458b87749..02a65ed2b 100755 --- a/scripts/exregional_run_pointstatvx.sh +++ b/scripts/exregional_run_pointstatvx.sh @@ -140,6 +140,7 @@ export METPLUS_CONF export MET_CONFIG export MODEL export NET +export POST_OUTPUT_DOMAIN_NAME ${METPLUS_PATH}/ush/run_metplus.py \ -c ${METPLUS_CONF}/common.conf \ diff --git a/scripts/exregional_run_post.sh b/scripts/exregional_run_post.sh index c7aafd399..cb8c3d5d5 100755 --- a/scripts/exregional_run_post.sh +++ b/scripts/exregional_run_post.sh @@ -172,17 +172,6 @@ cyc=$hh # #----------------------------------------------------------------------- # -# The tmmark is a reference value used in real-time, DA-enabled NCEP models. -# It represents the delay between the onset of the DA cycle and the free -# forecast. With no DA in the SRW App at the moment, it is hard-wired to -# tm00 for now. 
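As a quick sketch of the renaming change in these post-processing hunks: the hard-wired tm00 tag is replaced by POST_OUTPUT_DOMAIN_NAME, and a dot is inserted between the field ID and the suffix. All values below are hypothetical:

    NET="rrfs"; cyc="00"; fid="prslev"; fhr="006"       # hypothetical values
    POST_OUTPUT_DOMAIN_NAME="conus_3km"                  # hypothetical domain tag
    post_renamed_fn_suffix="f${fhr}.${POST_OUTPUT_DOMAIN_NAME}.grib2"   # post_mn_or_null empty here
    echo "${NET}.t${cyc}z.${fid}.${post_renamed_fn_suffix}"
    # old naming: rrfs.t00z.prslevf006.tm00.grib2
    # new naming: rrfs.t00z.prslev.f006.conus_3km.grib2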
-# -#----------------------------------------------------------------------- -# -tmmark="tm00" -# -#----------------------------------------------------------------------- -# # Create the namelist file (itag) containing arguments to pass to the post- # processor's executable. # @@ -300,7 +289,7 @@ if [ "${post_mn}" != "00" ]; then fi post_fn_suffix="GrbF${post_fhr}${dot_post_mn_or_null}" -post_renamed_fn_suffix="f${fhr}${post_mn_or_null}.${tmmark}.grib2" +post_renamed_fn_suffix="f${fhr}${post_mn_or_null}.${POST_OUTPUT_DOMAIN_NAME}.grib2" # # For convenience, change location to postprd_dir (where the final output # from UPP will be located). Then loop through the two files that UPP @@ -314,7 +303,7 @@ fids=( "prslev" "natlev" ) for fid in "${fids[@]}"; do FID=$(echo_uppercase $fid) post_orig_fn="${FID}.${post_fn_suffix}" - post_renamed_fn="${NET}.t${cyc}z.${fid}${post_renamed_fn_suffix}" + post_renamed_fn="${NET}.t${cyc}z.${fid}.${post_renamed_fn_suffix}" mv_vrfy ${tmp_dir}/${post_orig_fn} ${post_renamed_fn} create_symlink_to_file target="${post_renamed_fn}" \ symlink="${FID}${symlink_suffix}" \ diff --git a/tests/WE2E/create_WE2E_resource_summary.py b/tests/WE2E/create_WE2E_resource_summary.py new file mode 100644 index 000000000..5095a9fe6 --- /dev/null +++ b/tests/WE2E/create_WE2E_resource_summary.py @@ -0,0 +1,187 @@ +''' +Generate a summary of resources used for the WE2E test suite. + +Examples: + + To print usage + + python create_WE2E_resource_summary.py + python create_WE2E_resource_summary.py -h + + To print a report for all the experiments in an experiment directory + + python create_WE2E_resource_summary.py -e /path/to/expt_dir + + To print a report for all the grid_* and nco_* experiments. + + python create_WE2E_resource_summary.py -e /path/to/expt_dir \ + -n 'grid*' 'nco*' + + To compute a total estimated cost for all experiments on instances that are + $0.15 per core hour. + + python create_WE2E_resource_summary.py -e /path/to/expt_dir -c $0.15 + +Information about the output summary. + + - The core hours are an underestimate in many cases. + - Multiple tries are not captured. + - The use of a portion of a node or instance is not known. If the whole node + is used, but isn't reflected in the core count, the cores are not counted. + Partition information is not stored in the database, so mapping to a given + node type becomes ambiguous. + + For example, jobs that request 4 nodes with 2 processors per node with an + --exclusive flag will underestimate the total core hour usage by a factor + of 20 when using a 40 processor node. + + - When computing cost per job, it will also provide an underestimate for the + reasons listed above. + - Only one cost will be applied across all jobs. Rocoto jobs do not store + partition information in the job table, so was not included as an option here. + +''' + +import argparse +import glob +import os +import sys +import sqlite3 + +REPORT_WIDTH = 110 + +def parse_args(argv): + + + ''' + Function maintains the arguments accepted by this script. Please see + Python's argparse documenation for more information about settings of each + argument. + ''' + + parser = argparse.ArgumentParser( + description="Generate a usage report for a set of SRW experiments." + ) + + parser.add_argument( + '-e', '--expt_path', + help='The path to the directory containing the experiment \ + directories', + ) + parser.add_argument( + '-n', '--expt_names', + default=['*'], + help='A list of experiments to generate the report for. 
Wildcards \ + accepted by glob.glob may be used. If not provided, a report will be \ + generated for all experiments in the expt_path that have a Rocoto \ + database', + nargs='*', + ) + + # Optional + parser.add_argument( + '-c', '--cost_per_core_hour', + help='Provide the cost per core hour for the instance type used. \ + Only supports homogenous clusters.', + type=float, + ) + + return parser.parse_args(argv) + +def get_workflow_info(db_path): + + ''' Given the path to a Rocoto database, return the total number of tasks, + core hours and wall time for the workflow. ''' + + con = sqlite3.connect(db_path) + cur = con.cursor() + + # jobs schema is: + # (id INTEGER PRIMARY KEY, jobid VARCHAR(64), taskname VARCHAR(64), cycle + # DATETIME, cores INTEGER, state VARCHAR(64), native_state VARCHAR[64], + # exit_status INTEGER, tries INTEGER, nunknowns INTEGER, duration REAL) + # + # an example: + # 5|66993580|make_sfc_climo|1597017600|48|SUCCEEDED|COMPLETED|0|1|0|83.0 + try: + cur.execute('SELECT cores, duration from jobs') + except sqlite3.OperationalError: + return 0, 0, 0 + + workflow_info = cur.fetchall() + + core_hours = 0 + wall_time = 0 + ntasks = 0 + for cores, duration in workflow_info: + core_hours += cores * duration / 3600 + wall_time += duration / 60 + ntasks += 1 + + return ntasks, core_hours, wall_time + + +def fetch_expt_summaries(expts): + + ''' Get the important information from the database of each experiment, and + return a list, sorted by experiment name. ''' + + summaries = [] + for expt in expts: + test_name = expt.split('/')[-1] + db_path = os.path.join(expt, 'FV3LAM_wflow.db') + if not os.path.exists(db_path): + print(f'No FV3LAM_wflow.db exists for expt: {test_name}') + continue + ntasks, core_hours, wall_time = get_workflow_info(db_path) + summaries.append((test_name, ntasks, core_hours, wall_time)) + + return sorted(summaries) + +def generate_report(argv): + + ''' Given user arguments, print a summary of the requested experiments' + usage information, including cost (if requested). ''' + + cla = parse_args(argv) + + experiments = [] + for expt in cla.expt_names: + experiments.extend(glob.glob( + os.path.join(cla.expt_path, expt) + )) + + header = f'{" "*60} Core Hours | Run Time (mins)' + if cla.cost_per_core_hour: + header = f'{header} | Est. Cost ($) ' + + print('-'*REPORT_WIDTH) + print('-'*REPORT_WIDTH) + print(header) + print('-'*REPORT_WIDTH) + + total_ch = 0 + total_cost = 0 + for name, ntasks, ch, wt in fetch_expt_summaries(experiments): + line = f'{name[:60]:<60s} {ch:^12.2f} {wt:^20.1f}' + if cla.cost_per_core_hour: + cost = ch * cla.cost_per_core_hour + line = f'{line} ${cost:<.2f}' + total_cost += cost + total_ch += ch + print(line) + + print('-'*REPORT_WIDTH) + print(f'TOTAL CORE HOURS: {total_ch:6.2f}') + if cla.cost_per_core_hour: + print(f'TOTAL COST: ${cla.cost_per_core_hour * total_ch:6.2f}') + + print('*'*REPORT_WIDTH) + print('WARNING: This data reflects only the job information from the last', + 'logged try. It does not account for the use \n of an entire node, only', + 'the actual cores requested. 
It may provide an underestimate of true compute usage.') + print('*'*REPORT_WIDTH) + + +if __name__ == "__main__": + generate_report(sys.argv[1:]) diff --git a/tests/WE2E/get_WE2Etest_names_subdirs_descs.sh b/tests/WE2E/get_WE2Etest_names_subdirs_descs.sh index 2183ead89..d0db858b3 100755 --- a/tests/WE2E/get_WE2Etest_names_subdirs_descs.sh +++ b/tests/WE2E/get_WE2Etest_names_subdirs_descs.sh @@ -7,8 +7,10 @@ # the WE2E tests available in the WE2E testing system. This information # consists of the test names, the category subdirectories in which the # test configuration files are located (relative to a base directory), -# the test IDs, and the test descriptions. These are described in more -# detail below. +# the test IDs, and the test descriptions. This function optionally +# also creates a CSV (Comma-Separated Value) file containing various +# pieces of information about each of the workflow end-to-end (WE2E) +# tests. These are described in more detail below. # # The function takes as inputs the following arguments: # @@ -21,6 +23,10 @@ # Flag that specifies whether or not a CSV (Comma-Separated Value) file # containing information about the WE2E tests should be generated. # +# verbose: +# Optional verbosity flag. Should be set to "TRUE" or "FALSE". Default +# is "FALSE". +# # output_varname_test_configs_basedir: # Name of output variable in which to return the base directory of the # WE2E test configuration files. @@ -65,7 +71,7 @@ # in the local array category_subdirs. We refer to these as "category" # subdirectories because they are used for clarity to group the tests # into categories (instead of putting them all directly under the base -# directory). For example, one category of tests might be those that +# directory). For example, one category of tests might be those that # test workflow capabilities such as running multiple cycles and ensemble # forecasts, another might be those that run various combinations of # grids, physics suites, and external models for ICs/LBCs, etc. Note @@ -208,7 +214,11 @@ # the file will be placed in the main WE2E testing system directory # specified by the input argument WE2Edir. The CSV file can be read # into a spreadsheet in Google Sheets (or another similar tool) to get -# an overview of all the available WE2E tests. +# an overview of all the available WE2E tests. The rows of the CSV file +# correspond to the primary WE2E tests, and the columns correspond to +# the (primary) test name, alternate test names (if any), test description, +# number of times the test calls the forecast model, and values of various +# SRW App experiment variables for that test. # # A CSV file will be generated in the directory specified by WE2Edir if # one or more of the following conditions hold: @@ -228,7 +238,8 @@ # "FALSE" in the call to this function (regardless of whether or not a # CSV file already exists). If a CSV file is generated, it is placed in # the directory specified by the input argment WE2Edir, and it overwrites -# any existing copies of the file in that directory. +# any existing copies of the file in that directory. The contents of +# each column of the CSV file are described below. 
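For reference, the core-hour figures in the new create_WE2E_resource_summary.py script come straight from the cores and duration columns of Rocoto's jobs table. A minimal, self-contained sketch of the same aggregation against an in-memory SQLite database (the table layout mirrors the schema quoted in the script; the sample rows are made up):

```
import sqlite3

# Throwaway jobs table with only the columns the summary script reads.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE jobs (taskname TEXT, cores INTEGER, duration REAL)")
con.executemany(
    "INSERT INTO jobs VALUES (?, ?, ?)",
    [("make_sfc_climo", 48, 83.0),      # hypothetical sample rows
     ("run_fcst", 240, 3600.0)],
)

ntasks, core_hours, wall_time = 0, 0.0, 0.0
for cores, duration in con.execute("SELECT cores, duration FROM jobs"):
    core_hours += cores * duration / 3600   # seconds -> core hours
    wall_time += duration / 60              # seconds -> minutes
    ntasks += 1

print(f"{ntasks} tasks, {core_hours:.2f} core hours, {wall_time:.1f} minutes")
```

As in the script itself, this counts only the cores Rocoto recorded for the last try of each job, so it remains an underestimate whenever whole nodes are reserved.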
# #----------------------------------------------------------------------- # @@ -255,6 +266,7 @@ function get_WE2Etest_names_subdirs_descs() { local valid_args=( \ "WE2Edir" \ "generate_csv_file" \ + "verbose" \ "output_varname_test_configs_basedir" \ "output_varname_test_names" \ "output_varname_test_subdirs" \ @@ -275,6 +287,17 @@ function get_WE2Etest_names_subdirs_descs() { # #----------------------------------------------------------------------- # +# Make the default value of "verbose" "FALSE". Then make sure "verbose" +# is set to a valid value. +# +#----------------------------------------------------------------------- +# + verbose=${verbose:-"FALSE"} + check_var_valid_value "verbose" "valid_vals_BOOLEAN" + verbose=$(boolify $verbose) +# +#----------------------------------------------------------------------- +# # Declare local variables. # #----------------------------------------------------------------------- @@ -286,7 +309,10 @@ function get_WE2Etest_names_subdirs_descs() { alt_test_prim_test_names \ alt_test_subdir \ alt_test_subdirs \ + array_names_vars_to_extract \ + array_names_vars_to_extract_orig \ category_subdirs \ + cmd \ column_titles \ config_fn \ crnt_item \ @@ -294,24 +320,35 @@ function get_WE2Etest_names_subdirs_descs() { csv_fn \ csv_fp \ cwd \ + default_val \ hash_or_null \ i \ ii \ j \ jp1 \ + k \ line \ mod_time_csv \ mod_time_subdir \ + msg \ num_alt_tests \ num_category_subdirs \ + num_cdates \ + num_cycles_per_day \ + num_days \ + num_fcsts \ + num_fcsts_orig \ num_items \ num_occurrences \ num_prim_tests \ num_tests \ + num_vars_to_extract \ + prim_array_names_vars_to_extract \ prim_test_descs \ prim_test_ids \ prim_test_name_subdir \ prim_test_names \ + prim_test_num_fcsts \ prim_test_subdirs \ get_test_descs \ regex_search \ @@ -349,7 +386,11 @@ function get_WE2Etest_names_subdirs_descs() { test_subdirs_orig \ test_subdirs_str \ test_type \ - valid_vals_generate_csv_file + val \ + valid_vals_generate_csv_file \ + var_name \ + var_name_at \ + vars_to_extract # #----------------------------------------------------------------------- # @@ -408,6 +449,13 @@ function get_WE2Etest_names_subdirs_descs() { fi fi + + if [ "${generate_csv_file}" = "TRUE" ]; then + print_info_msg " +Will generate a CSV (Comma Separated Value) file (csv_fp) containing +information on all WE2E tests: + csv_fp = \"${csv_fp}\"" + fi # #----------------------------------------------------------------------- # @@ -472,6 +520,7 @@ function get_WE2Etest_names_subdirs_descs() { prim_test_names=() prim_test_ids=() prim_test_subdirs=() + prim_test_num_fcsts=() alt_test_names=() alt_test_subdirs=() @@ -839,17 +888,61 @@ they correspond to unique test names and rerun." #----------------------------------------------------------------------- # if [ "${get_test_descs}" = "TRUE" ]; then +# +# Specify in "vars_to_extract" the list of experiment variables to extract +# from each test configuration file (and later to place in the CSV file). +# Recall that the rows of the CSV file correspond to the various WE2E +# tests, and the columns correspond to the test name, description, and +# experiment variable values. The elements of "vars_to_extract" should +# be the names of SRW App experiment variables that are (or can be) +# specified in the App's configuration file. Note that if a variable is +# not specified in the test configuration file, in most cases its value +# is set to an empty string (and recorded as such in the CSV file). In +# some cases, it is set to some other value (e.g. 
for the number of +# ensemble members NUM_ENS_MEMBERS, it is set to 1). +# + vars_to_extract=( "PREDEF_GRID_NAME" \ + "CCPP_PHYS_SUITE" \ + "EXTRN_MDL_NAME_ICS" \ + "EXTRN_MDL_NAME_LBCS" \ + "DATE_FIRST_CYCL" \ + "DATE_LAST_CYCL" \ + "CYCL_HRS" \ + "INCR_CYCL_FREQ" \ + "FCST_LEN_HRS" \ + "LBC_SPEC_INTVL_HRS" \ + "NUM_ENS_MEMBERS" \ + ) + num_vars_to_extract="${#vars_to_extract[@]}" +# +# Create names of local arrays that will hold the value of the corresponding +# variable for each test. Then use these names to define them as empty +# arrays. [The arrays named "prim_..." are to hold values for only the +# primary tests, while other arrays are to hold values for all (primary +# plus alternate) tests.] +# + prim_array_names_vars_to_extract=( $( printf "prim_test_%s_vals " "${vars_to_extract[@]}" ) ) + array_names_vars_to_extract=( $( printf "%s_vals " "${vars_to_extract[@]}" ) ) + for (( k=0; k<=$((num_vars_to_extract-1)); k++ )); do + cmd="${prim_array_names_vars_to_extract[$k]}=()" + eval $cmd + cmd="${array_names_vars_to_extract[$k]}=()" + eval $cmd + done print_info_msg " -Gathering test descriptions from the configuration files of the primary -WE2E tests..." +Gathering test descriptions and experiment variable values from the +configuration files of the primary WE2E tests... +" prim_test_descs=() for (( i=0; i<=$((num_prim_tests-1)); i++ )); do test_name="${prim_test_names[$i]}" print_info_msg "\ - Reading in the test description for primary WE2E test: \"${test_name}\"" + Reading in the test description for primary WE2E test: \"${test_name}\" + In category (subdirectory): \"${subdir}\" +" subdir=("${prim_test_subdirs[$i]}") cd_vrfy "${test_configs_basedir}/$subdir" # @@ -931,16 +1024,121 @@ ${test_desc}${stripped_line} " # # First remove leading whitespace. # - test_desc="${test_desc#"${test_desc%%[![:space:]]*}"}" + test_desc="${test_desc#"${test_desc%%[![:space:]]*}"}" # # Now remove trailing whitespace. # - test_desc="${test_desc%"${test_desc##*[![:space:]]}"}" + test_desc="${test_desc%"${test_desc##*[![:space:]]}"}" # # Finally, save the description of the current test as the next element # of the array prim_test_descs. # prim_test_descs+=("${test_desc}") +# +# Get from the current test's configuration file the values of the +# variables specified in "vars_to_extract". Then save the value in the +# arrays specified by "prim_array_names_vars_to_extract". +# + for (( k=0; k<=$((num_vars_to_extract-1)); k++ )); do + + var_name="${vars_to_extract[$k]}" + cmd=$( grep "^[ ]*${var_name}=" "${config_fn}" ) + eval $cmd + + if [ -z "${!var_name+x}" ]; then + + msg=" + The variable \"${var_name}\" is not defined in the current test's + configuration file (config_fn): + config_fn = \"${config_fn}\" + Setting the element in the array \"${prim_array_names_vars_to_extract[$k]}\" + corresponding to this test to" + + case "${var_name}" in + + "NUM_ENS_MEMBERS") + default_val="1" + msg=$msg": + ${var_name} = \"${default_val}\"" + ;; + + "INCR_CYCL_FREQ") + default_val="24" + msg=$msg": + ${var_name} = \"${default_val}\"" + ;; + + *) + default_val="" + msg=$msg" an empty string." 
+ ;; + + esac + cmd="${var_name}=\"${default_val}\"" + eval $cmd + + print_info_msg "$verbose" "$msg" + cmd="${prim_array_names_vars_to_extract[$k]}+=(\"'${default_val}\")" + + else +# +# The following are important notes regarding how the variable "cmd" +# containing the command that will append an element to the array +# specified by ${prim_array_names_vars_to_extract[$k]} is formulated: +# +# 1) If all the experiment variables were scalars, then the more complex +# command below could be replaced with the following: +# +# cmd="${prim_array_names_vars_to_extract[$k]}+=(\"${!var_name}\")" +# +# But some variables are arrays, so we need the more complex approach +# to cover those cases. +# +# 2) The double quotes (which need to be escaped here, i.e. \") are needed +# so that for any experiment variables that are arrays, all the elements +# of the array are combined together and treated as a single element. +# If the experiment variable is CYCL_HRS (cycle hours) and is set to +# the array ("00" "12"), we want the value saved in the local array +# here to be a single element consisting of "00 12". Otherwise, "00" +# and "12" will be treated as separate elements, and more than one +# element would be added to the array (which would be incorrect here). +# +# 3) The single quote (which needs to be escaped here, i.e. \') is needed +# so that any numbers (e.g. a set of cycle hours such as "00 12") are +# treated as strings when the CSV file is opened in Google Sheets. +# If this is not done, Google Sheets will remove leading zeros. +# + var_name_at="${var_name}[@]" + cmd="${prim_array_names_vars_to_extract[$k]}+=(\'\"${!var_name_at}\")" + fi + eval $cmd + + done +# +# Calculate the number of forecasts that will be launched by the current +# test. The "10#" forces bash to treat the following number as a decimal +# (not hexadecimal, etc). +# + num_cycles_per_day=${#CYCL_HRS[@]} + num_days=$(( (${DATE_LAST_CYCL} - ${DATE_FIRST_CYCL} + 1)*24/10#${INCR_CYCL_FREQ} )) + num_cdates=$(( ${num_cycles_per_day}*${num_days} )) + nf=$(( ${num_cdates}*10#${NUM_ENS_MEMBERS} )) +# +# In the following, the single quote at the beginning forces Google Sheets +# to interpret this quantity as a string. This prevents any automatic +# number fomatting from being applied when the CSV file is imported into +# Google Sheets. +# + prim_test_num_fcsts+=( "'$nf" ) +# +# Unset the experiment variables defined for the current test so that +# they are not accidentally used for the next one. +# + for (( k=0; k<=$((num_vars_to_extract-1)); k++ )); do + var_name="${vars_to_extract[$k]}" + cmd="unset ${var_name}" + eval $cmd + done done @@ -950,13 +1148,20 @@ ${test_desc}${stripped_line} " # # Create the arrays test_ids and test_descs that initially contain the # test IDs and descriptions corresponding to the primary test names -# (those of the alternate test names will be appended below). +# (those of the alternate test names will be appended below). Then, in +# the for-loop, do same for the arrays containing the experiment variable +# values for each test. 
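The per-test forecast count stored in prim_test_num_fcsts above is plain arithmetic on the cycle settings. A hedged Python rendering of the same expressions, using hypothetical values for a single-day, two-cycle ensemble test (it deliberately mirrors the bash integer math, including the raw (DATE_LAST_CYCL - DATE_FIRST_CYCL + 1) difference, rather than a calendar-aware day count):

```
# Hypothetical experiment settings as they might appear in a test config.
CYCL_HRS = ["00", "12"]
DATE_FIRST_CYCL = 20200810
DATE_LAST_CYCL = 20200810
INCR_CYCL_FREQ = 24          # hours between cycles
NUM_ENS_MEMBERS = 2

num_cycles_per_day = len(CYCL_HRS)
# Same integer arithmetic as the bash code above (not calendar-aware).
num_days = (DATE_LAST_CYCL - DATE_FIRST_CYCL + 1) * 24 // INCR_CYCL_FREQ
num_cdates = num_cycles_per_day * num_days
num_fcsts = num_cdates * NUM_ENS_MEMBERS
print(num_fcsts)             # -> 4 forecast model runs for this test
```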
# #----------------------------------------------------------------------- # test_ids=("${prim_test_ids[@]}") if [ "${get_test_descs}" = "TRUE" ]; then test_descs=("${prim_test_descs[@]}") + num_fcsts=("${prim_test_num_fcsts[@]}") + for (( k=0; k<=$((num_vars_to_extract-1)); k++ )); do + cmd="${array_names_vars_to_extract[$k]}=(\"\${${prim_array_names_vars_to_extract[$k]}[@]}\")" + eval $cmd + done fi # #----------------------------------------------------------------------- @@ -964,7 +1169,8 @@ ${test_desc}${stripped_line} " # Append to the arrays test_ids and test_descs the test IDs and descriptions # of the alternate test names. We set the test ID and description of # each alternate test name to those of the corresponding primary test -# name. +# name. Then, in the inner for-loop, do the same for the arrays containing +# the experiment variable values. # #----------------------------------------------------------------------- # @@ -980,6 +1186,11 @@ ${test_desc}${stripped_line} " test_ids+=("${prim_test_ids[$j]}") if [ "${get_test_descs}" = "TRUE" ]; then test_descs+=("${prim_test_descs[$j]}") + num_fcsts+=("${prim_test_num_fcsts[$j]}") + for (( k=0; k<=$((num_vars_to_extract-1)); k++ )); do + cmd="${array_names_vars_to_extract[$k]}+=(\"\${${prim_array_names_vars_to_extract[$k]}[$j]}\")" + eval $cmd + done fi num_occurrences=$((num_occurrences+1)) fi @@ -1003,7 +1214,9 @@ Please correct and rerun." #----------------------------------------------------------------------- # # Sort in order of increasing test ID the arrays containing the names, -# IDs, category subdirectories, and descriptions of the WE2E tests. +# IDs, category subdirectories, and descriptions of the WE2E tests as +# well as the arrays containing the experiment variable values for each +# test. # # For this purpose, we first create an array (test_ids_and_inds) each # of whose elements consist of the test ID, the test type, and the index @@ -1029,8 +1242,8 @@ Please correct and rerun." # and the test type, which we no longer need), which is the original # array index before sorting, and save the results in the array sort_inds. # This array will contain the original indices in sorted order that we -# then use to sort the arrays containing the names, IDs, subdirectories, -# and descriptions of the WE2E tests. +# then use to sort the arrays containing the WE2E test names, IDs, +# subdirectories, descriptions, and experiment variable values. # #----------------------------------------------------------------------- # @@ -1064,11 +1277,24 @@ Please correct and rerun." done if [ "${get_test_descs}" = "TRUE" ]; then + test_descs_orig=( "${test_descs[@]}" ) + num_fcsts_orig=( "${num_fcsts[@]}" ) + for (( k=0; k<=$((num_vars_to_extract-1)); k++ )); do + cmd="${array_names_vars_to_extract[$k]}_orig=(\"\${${array_names_vars_to_extract[$k]}[@]}\")" + eval $cmd + done + for (( i=0; i<=$((num_tests-1)); i++ )); do ii="${sort_inds[$i]}" test_descs[$i]="${test_descs_orig[$ii]}" + num_fcsts[$i]="${num_fcsts_orig[$ii]}" + for (( k=0; k<=$((num_vars_to_extract-1)); k++ )); do + cmd="${array_names_vars_to_extract[$k]}[$i]=\"\${${array_names_vars_to_extract[$k]}_orig[$ii]}\"" + eval $cmd + done done + fi # #----------------------------------------------------------------------- @@ -1094,18 +1320,27 @@ Please correct and rerun." # csv_delimiter="|" # -# Set the titles of the three columns that will be in the file. Then -# write them to the file. The contents of the columns are described in -# more detail further below. 
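The sort above is indirect: the test IDs are sorted once, the resulting permutation is captured in sort_inds, and every parallel array (names, subdirectories, descriptions, extracted variable values) is reordered with that same permutation. A small Python sketch of the idea, with made-up data:

```
# Parallel arrays describing three hypothetical tests.
test_ids   = [20, 5, 12]
test_names = ["test_c", "test_a", "test_b"]
test_descs = ["third", "first", "second"]

# The analogue of sort_inds: indices that put test_ids in increasing order.
sort_inds = sorted(range(len(test_ids)), key=lambda i: test_ids[i])

# Apply the same permutation to every parallel array.
test_ids   = [test_ids[i]   for i in sort_inds]
test_names = [test_names[i] for i in sort_inds]
test_descs = [test_descs[i] for i in sort_inds]

print(test_ids)    # [5, 12, 20]
print(test_names)  # ['test_a', 'test_b', 'test_c']
```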
+# Set the titles of the columns that will be in the file. Then write +# them to the file. The contents of the columns are described in more +# detail further below. # column_titles="\ \"Test Name (Subdirectory)\" ${csv_delimiter} \ \"Alternate Test Names (Subdirectories)\" ${csv_delimiter} \ -\"Test Purpose/Description\"" +\"Test Purpose/Description\" ${csv_delimiter} \ +\"Number of Forecast Model Runs\"" + for (( k=0; k<=$((num_vars_to_extract-1)); k++ )); do + column_titles="\ +${column_titles} ${csv_delimiter} \ +\"${vars_to_extract[$k]}\"" + done printf "%s\n" "${column_titles}" >> "${csv_fp}" # # Loop through the arrays containing the WE2E test information. Extract # the necessary information and record it to the CSV file row-by-row. +# Note that each row corresponds to a primary test. When an alternate +# test is encountered, its information is stored in the row of the +# corresponding primary test (i.e. a new row is not created). # j=0 jp1=$((j+1)) @@ -1130,6 +1365,11 @@ Please correct and rerun." # test_desc=$( printf "%s" "${test_desc}" | sed -r -e "s/\"/\"\"/g" ) # +# Get the number of forecasts (number of times the forcast model is run, +# due to a unique starting date, an ensemble member, etc). +# + nf="${num_fcsts[$j]}" +# # In the following inner while-loop, we step through all alternate test # names (if any) that follow the current primary name and construct a # string (alt_test_names_subdirs) consisting of all the alternate test @@ -1167,11 +1407,30 @@ ${test_names[$jp1]} (${test_subdirs[$jp1]})" # # Column 3: # The test description. +# +# Column 4: +# The number of times the forecast model will be run by the test. This +# has been calculated above using the quantities that go in Columns 5, +# 6, .... +# +# Columns 5...: +# The values of the experiment variables specified in vars_to_extract. # row_content="\ \"${prim_test_name_subdir}\" ${csv_delimiter} \ \"${alt_test_names_subdirs}\" ${csv_delimiter} \ -\"${test_desc}\"" +\"${test_desc}\" ${csv_delimiter} \ +\"${nf}\"" + + for (( k=0; k<=$((num_vars_to_extract-1)); k++ )); do + unset "val" + cmd="val=\"\${${array_names_vars_to_extract[$k]}[$j]}\"" + eval $cmd + row_content="\ +${row_content} ${csv_delimiter} \ +\"${val}\"" + done + printf "%s\n" "${row_content}" >> "${csv_fp}" # # Update loop indices. @@ -1181,6 +1440,11 @@ ${test_names[$jp1]} (${test_subdirs[$jp1]})" done + print_info_msg "\ +Successfully generated a CSV (Comma Separated Value) file (csv_fp) +containing information on all WE2E tests: + csv_fp = \"${csv_fp}\"" + fi # #----------------------------------------------------------------------- diff --git a/tests/WE2E/get_expts_status.sh b/tests/WE2E/get_expts_status.sh index 01b127d9f..91de215d2 100755 --- a/tests/WE2E/get_expts_status.sh +++ b/tests/WE2E/get_expts_status.sh @@ -12,8 +12,8 @@ # directory represent active experiments (see below for how this is done). # For all such experiments, it calls the workflow (re)launch script to # update the status of the workflow and prints the status out to screen. -# It also generates a summary status file in the base directory that -# contains the last num_tail_lines lines (defined below) of each experiment's +# It also generates a status report file in the base directory that +# contains the last num_log_lines lines (defined below) of each experiment's # workflow log file [which is generated by the (re)launch script] and thus # has information on which tasks may have succeeded/failed]. 
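Because the CSV written by this function uses "|" as its delimiter and prefixes numeric-looking fields with a single quote (so Google Sheets keeps leading zeros in values such as cycle hours), a consumer has to undo both. A minimal sketch of reading WE2E_test_info.csv back in Python, assuming the file sits in the current directory and contains the column titles added above:

```
import csv

# Read the summary written by get_WE2Etest_names_subdirs_descs.sh.
with open("WE2E_test_info.csv", newline="") as f:
    reader = csv.reader(f, delimiter="|")
    header = [col.strip().strip('"') for col in next(reader)]
    for row in reader:
        # Strip surrounding spaces, the double quotes, and the protective
        # leading single quote added for Google Sheets.
        fields = [col.strip().strip('"').lstrip("'") for col in row]
        test = dict(zip(header, fields))
        print(test["Test Name (Subdirectory)"],
              test["Number of Forecast Model Runs"])
```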
# @@ -70,42 +70,112 @@ ushdir="$homerrfs/ush" # #----------------------------------------------------------------------- # -# Exactly one argument must be specified that consists of the full path -# to the experiments base directory (i.e. the directory containing the -# experiment subdirectories). Ensure that the number of arguments is -# one. +# Set the usage message. # #----------------------------------------------------------------------- # -num_args="$#" -if [ "${num_args}" -eq 1 ]; then - expts_basedir="$1" -else - print_err_msg_exit " -The number of arguments to this script must be exacty one, and that -argument must specify the experiments base directory, i.e. the directory -containing the experiment subdirectories. The acutal number of arguments -is: - num_args = ${num_args}" +usage_str="\ +Usage: + + ${scrfunc_fn} \\ + expts_basedir=\"...\" \\ + [verbose=\"...\"] + +The arguments in brackets are optional. The arguments are defined as +follows: + +expts_basedir: +Full path to the experiments base directory, i.e. the directory containing +the experiment subdirectories. + +num_log_lines: +Optional integer specifying the number of lines from the end of the +workflow launch log file (log.launch_FV3LAM_wflow) of each test to +include in the status report file that this script generates. + +verbose: +Optional verbosity flag. Should be set to \"TRUE\" or \"FALSE\". Default +is \"FALSE\". +" +# +#----------------------------------------------------------------------- +# +# Check to see if usage help for this script is being requested. If so, +# print it out and exit with a 0 exit code (success). +# +#----------------------------------------------------------------------- +# +help_flag="--help" +if [ "$#" -eq 1 ] && [ "$1" = "${help_flag}" ]; then + print_info_msg "${usage_str}" + exit 0 +fi +# +#----------------------------------------------------------------------- +# +# Specify the set of valid argument names for this script or function. +# Then process the arguments provided to it on the command line (which +# should consist of a set of name-value pairs of the form arg1="value1", +# arg2="value2", etc). +# +#----------------------------------------------------------------------- +# +valid_args=( \ + "expts_basedir" \ + "num_log_lines" \ + "verbose" \ + ) +process_args valid_args "$@" +# +#----------------------------------------------------------------------- +# +# Set the default value of "num_log_lines". +# +#----------------------------------------------------------------------- +# +num_log_lines=${num_log_lines:-"40"} +# +#----------------------------------------------------------------------- +# +# Make the default value of "verbose" "FALSE". Then make sure "verbose" +# is set to a valid value. +# +#----------------------------------------------------------------------- +# +verbose=${verbose:-"FALSE"} +check_var_valid_value "verbose" "valid_vals_BOOLEAN" +verbose=$(boolify $verbose) +# +#----------------------------------------------------------------------- +# +# Verify that the required arguments to this script have been specified. +# If not, print out an error message and exit. +# +#----------------------------------------------------------------------- +# +help_msg="\ +Use + ${scrfunc_fn} ${help_flag} +to get help on how to use this script." + +if [ -z "${expts_basedir}" ]; then + print_err_msg_exit "\ +The argument \"expts_basedir\" specifying the base directory containing +the experiment directories was not specified in the call to this script. 
\ +${help_msg}" fi # #----------------------------------------------------------------------- # # Check that the specified experiments base directory exists and is # actually a directory. If not, print out an error message and exit. -# If so, print out an informational message. # #----------------------------------------------------------------------- # if [ ! -d "${expts_basedir}" ]; then print_err_msg_exit " -The experiments base directory (expts_basedir) does not exit or is not -actually a directory: - expts_basedir = \"${expts_basedir}\"" -else - print_info_msg " -Checking the workflow status of all forecast experiments in the following -specified experiments base directory: +The specified experiments base directory (expts_basedir) does not exit +or is not actually a directory: expts_basedir = \"${expts_basedir}\"" fi # @@ -116,7 +186,7 @@ fi # #----------------------------------------------------------------------- # -cd "${expts_basedir}" +cd_vrfy "${expts_basedir}" # # Get a list of all subdirectories (but not files) in the experiment base # directory. Note that the ls command below will return a string containing @@ -175,6 +245,12 @@ var_defns_fn="var_defns.sh" j="0" expt_subdirs=() +print_info_msg "\ +Checking for active experiment directories in the specified experiments +base directory (expts_basedir): + expts_basedir = \"${expts_basedir}\" +..." + num_subdirs="${#subdirs_list[@]}" for (( i=0; i<=$((num_subdirs-1)); i++ )); do @@ -184,7 +260,7 @@ $separator Checking whether the subdirectory \"${subdir}\" contains an active experiment..." - print_info_msg "$msg" + print_info_msg "$verbose" "$msg" cd_vrfy "${subdir}" # @@ -193,7 +269,7 @@ contains an active experiment..." # if [ ! -f "${var_defns_fn}" ]; then - print_info_msg " + print_info_msg "$verbose" " The current subdirectory (subdir) under the experiments base directory (expts_basedir) does not contain an experiment variable defintions file (var_defns_fn): @@ -219,7 +295,7 @@ must be checked." # if [ "${EXPT_SUBDIR}" = "$subdir" ]; then - print_info_msg " + print_info_msg "$verbose" " The current subdirectory (subdir) under the experiments base directory (expts_basedir) contains an active experiment: expts_basedir = \"${expts_basedir}\" @@ -238,7 +314,7 @@ subdirectories whose workflow status must be checked." # else - print_info_msg " + print_info_msg "$verbose" " The current subdirectory (subdir) under the experiments base directory (expts_basedir) contains an experiment whose original name (EXPT_SUBDIR) does not match the name of the current subdirectory: @@ -254,7 +330,7 @@ status must be checked." fi - print_info_msg "\ + print_info_msg "$verbose" "\ $separator " # @@ -302,7 +378,7 @@ check_for_preexist_dir_file "${expts_status_fp}" "rename" # Loop through the elements of the array expt_subdirs. For each element # (i.e. for each active experiment), change location to the experiment # directory and call the script launch_FV3LAM_wflow.sh to update the log -# file log.launch_FV3LAM_wflow. Then take the last num_tail_lines of +# file log.launch_FV3LAM_wflow. Then take the last num_log_lines of # this log file (along with an appropriate message) and add it to the # status report file. # @@ -310,7 +386,6 @@ check_for_preexist_dir_file "${expts_status_fp}" "rename" # launch_wflow_fn="launch_FV3LAM_wflow.sh" launch_wflow_log_fn="log.launch_FV3LAM_wflow" -num_tail_lines="40" for (( i=0; i<=$((num_expts-1)); i++ )); do @@ -326,25 +401,28 @@ Checking workflow status of experiment \"${expt_subdir}\" ..." 
# cd_vrfy "${expt_subdir}" launch_msg=$( "${launch_wflow_fn}" 2>&1 ) - log_tail=$( tail -n ${num_tail_lines} "${launch_wflow_log_fn}" ) + log_tail=$( tail -n ${num_log_lines} "${launch_wflow_log_fn}" ) # # Print the workflow status to the screen. # - wflow_status=$( printf "${log_tail}" | grep "Workflow status:" ) -# wflow_status="${wflow_status## }" # Not sure why this doesn't work to strip leading spaces. - wflow_status=$( printf "${wflow_status}" "%s" | sed -r 's|^[ ]*||g' ) # Remove leading spaces. + # The "tail -1" is to get only the last occurrence of "Workflow status" + wflow_status=$( printf "${log_tail}" | grep "Workflow status:" | tail -1 ) + # Not sure why this doesn't work to strip leading spaces. +# wflow_status="${wflow_status## }" + # Remove leading spaces. + wflow_status=$( printf "${wflow_status}" "%s" | sed -r 's|^[ ]*||g' ) print_info_msg "${wflow_status}" print_info_msg "\ $separator " # -# Combine message above with the last num_tail_lines lines from the workflow +# Combine message above with the last num_log_lines lines from the workflow # launch log file and place the result in the status report file. # msg=$msg" ${wflow_status} -The last ${num_tail_lines} lines of this experiment's workflow launch log file +The last ${num_log_lines} lines of this experiment's workflow launch log file (\"${launch_wflow_log_fn}\") are: ${log_tail} @@ -360,4 +438,7 @@ ${log_tail} done print_info_msg "\ +A status report has been created in: + expts_status_fp = \"${expts_status_fp}\" + DONE." diff --git a/tests/WE2E/machine_suites/hera.txt b/tests/WE2E/machine_suites/hera.txt index 608af4192..a3f7f2850 100644 --- a/tests/WE2E/machine_suites/hera.txt +++ b/tests/WE2E/machine_suites/hera.txt @@ -2,11 +2,9 @@ grid_RRFS_CONUS_25km_ics_FV3GFS_lbcs_FV3GFS_suite_GFS_v15p2 grid_RRFS_CONUS_25km_ics_FV3GFS_lbcs_FV3GFS_suite_GFS_v16 grid_RRFS_CONUS_25km_ics_FV3GFS_lbcs_RAP_suite_HRRR grid_RRFS_CONUS_25km_ics_GSMGFS_lbcs_GSMGFS_suite_GFS_v15p2 -grid_RRFS_CONUS_25km_ics_HRRR_lbcs_HRRR_suite_HRRR -grid_RRFS_CONUS_25km_ics_HRRR_lbcs_HRRR_suite_RRFS_v1beta -grid_RRFS_CONUS_25km_ics_HRRR_lbcs_RAP_suite_HRRR -grid_RRFS_CONUS_25km_ics_HRRR_lbcs_RAP_suite_RRFS_v1alpha -grid_RRFS_CONUS_25km_ics_HRRR_lbcs_RAP_suite_RRFS_v1beta -nco_grid_RRFS_CONUS_25km_ics_FV3GFS_lbcs_FV3GFS_suite_GFS_v15p2 -nco_grid_RRFS_CONUS_25km_ics_HRRR_lbcs_RAP_suite_GFS_v15p2 -nco_grid_RRFS_CONUS_25km_ics_HRRR_lbcs_RAP_suite_HRRR +grid_RRFS_CONUScompact_25km_ics_HRRR_lbcs_HRRR_suite_HRRR +grid_RRFS_CONUScompact_25km_ics_HRRR_lbcs_HRRR_suite_RRFS_v1beta +grid_RRFS_CONUScompact_25km_ics_HRRR_lbcs_RAP_suite_HRRR +grid_RRFS_CONUScompact_25km_ics_HRRR_lbcs_RAP_suite_RRFS_v1alpha +grid_RRFS_CONUScompact_25km_ics_HRRR_lbcs_RAP_suite_RRFS_v1beta +nco_grid_RRFS_CONUScompact_25km_ics_HRRR_lbcs_RAP_suite_HRRR diff --git a/tests/WE2E/machine_suites/jet.txt b/tests/WE2E/machine_suites/jet.txt index 608af4192..a3f7f2850 100644 --- a/tests/WE2E/machine_suites/jet.txt +++ b/tests/WE2E/machine_suites/jet.txt @@ -2,11 +2,9 @@ grid_RRFS_CONUS_25km_ics_FV3GFS_lbcs_FV3GFS_suite_GFS_v15p2 grid_RRFS_CONUS_25km_ics_FV3GFS_lbcs_FV3GFS_suite_GFS_v16 grid_RRFS_CONUS_25km_ics_FV3GFS_lbcs_RAP_suite_HRRR grid_RRFS_CONUS_25km_ics_GSMGFS_lbcs_GSMGFS_suite_GFS_v15p2 -grid_RRFS_CONUS_25km_ics_HRRR_lbcs_HRRR_suite_HRRR -grid_RRFS_CONUS_25km_ics_HRRR_lbcs_HRRR_suite_RRFS_v1beta -grid_RRFS_CONUS_25km_ics_HRRR_lbcs_RAP_suite_HRRR -grid_RRFS_CONUS_25km_ics_HRRR_lbcs_RAP_suite_RRFS_v1alpha -grid_RRFS_CONUS_25km_ics_HRRR_lbcs_RAP_suite_RRFS_v1beta 
-nco_grid_RRFS_CONUS_25km_ics_FV3GFS_lbcs_FV3GFS_suite_GFS_v15p2 -nco_grid_RRFS_CONUS_25km_ics_HRRR_lbcs_RAP_suite_GFS_v15p2 -nco_grid_RRFS_CONUS_25km_ics_HRRR_lbcs_RAP_suite_HRRR +grid_RRFS_CONUScompact_25km_ics_HRRR_lbcs_HRRR_suite_HRRR +grid_RRFS_CONUScompact_25km_ics_HRRR_lbcs_HRRR_suite_RRFS_v1beta +grid_RRFS_CONUScompact_25km_ics_HRRR_lbcs_RAP_suite_HRRR +grid_RRFS_CONUScompact_25km_ics_HRRR_lbcs_RAP_suite_RRFS_v1alpha +grid_RRFS_CONUScompact_25km_ics_HRRR_lbcs_RAP_suite_RRFS_v1beta +nco_grid_RRFS_CONUScompact_25km_ics_HRRR_lbcs_RAP_suite_HRRR diff --git a/tests/WE2E/run_WE2E_tests.sh b/tests/WE2E/run_WE2E_tests.sh index bfab3128a..a11dfaba3 100755 --- a/tests/WE2E/run_WE2E_tests.sh +++ b/tests/WE2E/run_WE2E_tests.sh @@ -92,7 +92,9 @@ Usage: [verbose=\"...\"] \\ [machine_file=\"...\"] \\ [stmp=\"...\"] \\ - [ptmp=\"...\"] + [ptmp=\"...\"] \\ + [compiler=\"...\"] \\ + [build_mod_fn=\"...\"] The arguments in brackets are optional. The arguments are defined as follows: @@ -131,9 +133,9 @@ tests under subdirectory testset1, another set of tests under testset2, etc. exec_subdir: -Optional. Argument is used to set the EXEC_SUBDIR configuration -variable. Please see the ush/default_configs.sh file for a full -description. +Optional. Argument used to set the EXEC_SUBDIR experiment variable. +Please see the default experiment configuration file \"config_defaults.sh\" +for a full description of EXEC_SUBDIR. use_cron_to_relaunch: Argument used to explicitly set the experiment variable USE_CRON_TO_RELAUNCH @@ -204,6 +206,87 @@ argument is not used for any tests that are not in NCO mode. ptmp: Same as the argument \"stmp\" described above but for setting the experiment variable PTMP for all tests that will run in NCO mode. + +compiler: +Type of compiler to use for the workflow. Options are \"intel\" and \"gnu\". +Default is \"intel\". + +build_mod_fn: +Specify the build module files (see ufs-srweather-app/modulefiles) to +use for the workflow. (e.g. build_cheyenne_gnu). If a +\"gnu\" compiler is specified, it must also be specified with +the \"compiler\" option. + + +Usage Examples: +-------------- +Here, we give several common usage examples. In the following, assume +my_tests.txt is a text file in the same directory as this script containing +a list of test names that we want to run, e.g. + +> more my_tests.txt +new_ESGgrid +specify_DT_ATMOS_LAYOUT_XY_BLOCKSIZE + +Then: + +1) To run the tests listed in my_tests.txt on Hera and charge the core- + hours used to the \"rtrr\" account, use: + + > run_WE2E_tests.sh tests_file=\"my_tests.txt\" machine=\"hera\" account=\"rtrr\" + + This will create the experiment subdirectories for the two tests in + the directory + + \${SR_WX_APP_TOP_DIR}/../expt_dirs + + where SR_WX_APP_TOP_DIR is the directory in which the ufs-srweather-app + repository is cloned. Thus, the following two experiment directories + will be created: + + \${SR_WX_APP_TOP_DIR}/../expt_dirs/new_ESGgrid + \${SR_WX_APP_TOP_DIR}/../expt_dirs/specify_DT_ATMOS_LAYOUT_XY_BLOCKSIZE + + In addition, by default, cron jobs will be created in the user's cron + table to relaunch the workflows of these experiments every 2 minutes. 
+ +2) To change the frequency with which the cron relaunch jobs are submitted + from the default of 2 minutes to 1 minute, use: + + > run_WE2E_tests.sh tests_file=\"my_tests.txt\" machine=\"hera\" account=\"rtrr\" cron_relaunch_intvl_mnts=\"01\" + +3) To disable use of cron (which means the worfkow for each test will + have to be relaunched manually from within each experiment directory), + use: + + > run_WE2E_tests.sh tests_file=\"my_tests.txt\" machine=\"hera\" account=\"rtrr\" use_cron_to_relaunch=\"FALSE\" + +4) To place the experiment subdirectories in a subdirectory named \"test_set_01\" + under + + \${SR_WX_APP_TOP_DIR}/../expt_dirs + + (instead of immediately under the latter), use: + + > run_WE2E_tests.sh tests_file=\"my_tests.txt\" machine=\"hera\" account=\"rtrr\" expt_basedir=\"test_set_01\" + + In this case, the full paths to the experiment directories will be: + + \${SR_WX_APP_TOP_DIR}/../expt_dirs/test_set_01/new_ESGgrid + \${SR_WX_APP_TOP_DIR}/../expt_dirs/test_set_01/specify_DT_ATMOS_LAYOUT_XY_BLOCKSIZE + +5) To use a list of tests that is located in + + /path/to/custom/my_tests.txt + + instead of in the same directory as this script, and to have the + experiment directories be placed in an arbitrary location, say + + /path/to/custom/expt_dirs + + use: + + > run_WE2E_tests.sh tests_file=\"/path/to/custom/my_tests.txt\" machine=\"hera\" account=\"rtrr\" expt_basedir=\"/path/to/custom/expt_dirs\" " # #----------------------------------------------------------------------- @@ -240,6 +323,8 @@ valid_args=( \ "machine_file" \ "stmp" \ "ptmp" \ + "compiler" \ + "build_mod_fn" \ ) process_args valid_args "$@" # @@ -670,7 +755,8 @@ Please correct and rerun." # MACHINE="${machine^^}" ACCOUNT="${account}" - + COMPILER=${compiler:-"intel"} + BUILD_MOD_FN=${build_mod_fn:-"build_${machine}_${COMPILER}"} EXPT_BASEDIR="${expt_basedir}" EXPT_SUBDIR="${test_name}" USE_CRON_TO_RELAUNCH=${use_cron_to_relaunch:-"TRUE"} @@ -692,7 +778,10 @@ Please correct and rerun." # subdirectory. # MACHINE=\"${MACHINE}\" -ACCOUNT=\"${ACCOUNT}\"" +ACCOUNT=\"${ACCOUNT}\" + +COMPILER=\"${COMPILER}\" +BUILD_MOD_FN=\"${BUILD_MOD_FN}\"" if [ -n "${exec_subdir}" ]; then expt_config_str=${expt_config_str}" @@ -836,7 +925,7 @@ SFC_CLIMO_DIR=\"${SFC_CLIMO_DIR}\"" # 2) The directory in which the output files from the post-processor (UPP) # for a given cycle are stored. The path to this directory is # -# \$PTMP/com/\$NET/\$envir/\$RUN.\$yyyymmdd/\$hh +# \$PTMP/com/\$NET/\$model_ver/\$RUN.\$yyyymmdd/\$hh # # Here, we make the first directory listed above unique to a WE2E test # by setting RUN to the name of the current test. This will also make @@ -854,40 +943,32 @@ SFC_CLIMO_DIR=\"${SFC_CLIMO_DIR}\"" # envir to the same value as RUN (which is just EXPT_SUBDIR). Then, for # this test, the UPP output will be located in the directory # -# \$PTMP/com/\$NET/\$RUN/\$RUN.\$yyyymmdd/\$hh +# \$PTMP/com/\$NET/\we2e/\$RUN.\$yyyymmdd/\$hh # RUN=\"\${EXPT_SUBDIR}\" -envir=\"\${EXPT_SUBDIR}\"" +model_ver="we2e"" # -# Set COMINgfs if using the FV3GFS or the GSMGFS as the external model -# for ICs or LBCs. -# - if [ "${EXTRN_MDL_NAME_ICS}" = "FV3GFS" ] || \ - [ "${EXTRN_MDL_NAME_ICS}" = "GSMGFS" ] || \ - [ "${EXTRN_MDL_NAME_LBCS}" = "FV3GFS" ] || \ - [ "${EXTRN_MDL_NAME_LBCS}" = "GSMGFS" ]; then +# Set COMIN. - COMINgfs=${TEST_COMINgfs:-} + COMIN=${TEST_COMIN:-} - if [ ! -d "${COMINgfs:-}" ] ; then - print_err_msg_exit "\ -The directory (COMINgfs) that needs to be specified when running the + if [ ! 
-d "${COMIN:-}" ] ; then + print_err_msg_exit "\ +The directory (COMIN) that needs to be specified when running the workflow in NCO mode (RUN_ENVIR set to \"nco\") AND using the FV3GFS or the GSMGFS as the external model for ICs and/or LBCs has not been specified for this machine (MACHINE): MACHINE= \"${MACHINE}\"" - fi + fi - expt_config_str=${expt_config_str}" + expt_config_str=${expt_config_str}" # # Directory that needs to be specified when running the workflow in NCO -# mode (RUN_ENVIR set to \"nco\") AND using the FV3GFS or the GSMGFS as -# the external model for ICs and/or LBCs. +# mode (RUN_ENVIR set to \"nco\"). # -COMINgfs=\"${COMINgfs}\"" +COMIN=\"${COMIN}\"" - fi # # Set STMP and PTMP. # @@ -914,6 +995,9 @@ PTMP=\"${PTMP}\"" # if [ "${USE_USER_STAGED_EXTRN_FILES}" = "TRUE" ]; then + # Ensure we only check on disk for these files + data_stores="disk" + extrn_mdl_source_basedir=${TEST_EXTRN_MDL_SOURCE_BASEDIR:-} if [ ! -d "${extrn_mdl_source_basedir:-}" ] ; then print_err_msg_exit "\ @@ -923,26 +1007,17 @@ machine (MACHINE): MACHINE= \"${MACHINE}\"" fi EXTRN_MDL_SOURCE_BASEDIR_ICS="${extrn_mdl_source_basedir}/${EXTRN_MDL_NAME_ICS}" - if [ "${EXTRN_MDL_NAME_ICS}" = "FV3GFS" ] && [ "$MACHINE" = "HERA" ]; then - EXTRN_MDL_SOURCE_BASEDIR_ICS="${EXTRN_MDL_SOURCE_BASEDIR_ICS}/${FV3GFS_FILE_FMT_ICS}" - fi - if [ "${EXTRN_MDL_NAME_ICS}" = "FV3GFS" ] || \ - [ "${EXTRN_MDL_NAME_ICS}" = "GSMGFS" ]; then - if [ "${FV3GFS_FILE_FMT_ICS}" = "nemsio" ]; then - EXTRN_MDL_FILES_ICS=( "gfs.atmanl.nemsio" "gfs.sfcanl.nemsio" ) - elif [ "${FV3GFS_FILE_FMT_ICS}" = "grib2" ]; then - EXTRN_MDL_FILES_ICS=( "gfs.pgrb2.0p25.f000" ) - fi - elif [ "${EXTRN_MDL_NAME_ICS}" = "HRRR" ] || \ - [ "${EXTRN_MDL_NAME_ICS}" = "RAP" ]; then - EXTRN_MDL_FILES_ICS=( "${EXTRN_MDL_NAME_ICS,,}.out.for_f000" ) - elif [ "${EXTRN_MDL_NAME_ICS}" = "NAM" ]; then - EXTRN_MDL_FILES_ICS=( "${EXTRN_MDL_NAME_ICS,,}.out.for_f000" ) + if [ "${EXTRN_MDL_NAME_ICS}" = "FV3GFS" ] ; then + EXTRN_MDL_SOURCE_BASEDIR_ICS="${EXTRN_MDL_SOURCE_BASEDIR_ICS}/${FV3GFS_FILE_FMT_ICS}/\${yyyymmddhh}" + else + EXTRN_MDL_SOURCE_BASEDIR_ICS="${EXTRN_MDL_SOURCE_BASEDIR_ICS}/\${yyyymmddhh}" fi EXTRN_MDL_SOURCE_BASEDIR_LBCS="${extrn_mdl_source_basedir}/${EXTRN_MDL_NAME_LBCS}" - if [ "${EXTRN_MDL_NAME_LBCS}" = "FV3GFS" ] && [ "$MACHINE" = "HERA" ]; then - EXTRN_MDL_SOURCE_BASEDIR_LBCS="${EXTRN_MDL_SOURCE_BASEDIR_LBCS}/${FV3GFS_FILE_FMT_LBCS}" + if [ "${EXTRN_MDL_NAME_LBCS}" = "FV3GFS" ] ; then + EXTRN_MDL_SOURCE_BASEDIR_LBCS="${EXTRN_MDL_SOURCE_BASEDIR_LBCS}/${FV3GFS_FILE_FMT_LBCS}/\${yyyymmddhh}" + else + EXTRN_MDL_SOURCE_BASEDIR_LBCS="${EXTRN_MDL_SOURCE_BASEDIR_LBCS}/\${yyyymmddhh}" fi # # Make sure that the forecast length is evenly divisible by the interval @@ -958,32 +1033,16 @@ boundary conditions specification interval (LBC_SPEC_INTVL_HRS): LBC_SPEC_INTVL_HRS = ${LBC_SPEC_INTVL_HRS} rem = FCST_LEN_HRS%%LBC_SPEC_INTVL_HRS = $rem" fi - lbc_spec_times_hrs=( $( seq "${LBC_SPEC_INTVL_HRS}" "${LBC_SPEC_INTVL_HRS}" "${FCST_LEN_HRS}" ) ) - EXTRN_MDL_FILES_LBCS=( $( printf "%03d " "${lbc_spec_times_hrs[@]}" ) ) - if [ "${EXTRN_MDL_NAME_LBCS}" = "FV3GFS" ] || \ - [ "${EXTRN_MDL_NAME_LBCS}" = "GSMGFS" ]; then - if [ "${FV3GFS_FILE_FMT_LBCS}" = "nemsio" ]; then - EXTRN_MDL_FILES_LBCS=( "${EXTRN_MDL_FILES_LBCS[@]/#/gfs.atmf}" ) - EXTRN_MDL_FILES_LBCS=( "${EXTRN_MDL_FILES_LBCS[@]/%/.nemsio}" ) - elif [ "${FV3GFS_FILE_FMT_LBCS}" = "grib2" ]; then - EXTRN_MDL_FILES_LBCS=( "${EXTRN_MDL_FILES_LBCS[@]/#/gfs.pgrb2.0p25.f}" ) - fi - elif [ "${EXTRN_MDL_NAME_LBCS}" = "HRRR" 
] || \ - [ "${EXTRN_MDL_NAME_LBCS}" = "RAP" ]; then - EXTRN_MDL_FILES_LBCS=( "${EXTRN_MDL_FILES_LBCS[@]/#/${EXTRN_MDL_NAME_LBCS,,}.out.for_f}" ) - elif [ "${EXTRN_MDL_NAME_LBCS}" = "NAM" ]; then - EXTRN_MDL_FILES_LBCS=( "${EXTRN_MDL_FILES_LBCS[@]/#/${EXTRN_MDL_NAME_LBCS,,}.out.for_f}" ) - fi - - expt_config_str=${expt_config_str}" + expt_config_str="${expt_config_str} # # Locations and names of user-staged external model files for generating # ICs and LBCs. # -EXTRN_MDL_SOURCE_BASEDIR_ICS=\"${EXTRN_MDL_SOURCE_BASEDIR_ICS}\" -EXTRN_MDL_FILES_ICS=( $( printf "\"%s\" " "${EXTRN_MDL_FILES_ICS[@]}" )) -EXTRN_MDL_SOURCE_BASEDIR_LBCS=\"${EXTRN_MDL_SOURCE_BASEDIR_LBCS}\" -EXTRN_MDL_FILES_LBCS=( $( printf "\"%s\" " "${EXTRN_MDL_FILES_LBCS[@]}" ))" +EXTRN_MDL_SOURCE_BASEDIR_ICS='${EXTRN_MDL_SOURCE_BASEDIR_ICS}' +EXTRN_MDL_FILES_ICS=( ${EXTRN_MDL_FILES_ICS[@]} ) +EXTRN_MDL_SOURCE_BASEDIR_LBCS='${EXTRN_MDL_SOURCE_BASEDIR_LBCS}' +EXTRN_MDL_FILES_LBCS=( ${EXTRN_MDL_FILES_LBCS[@]} ) +EXTRN_MDL_DATA_STORES=\"$data_stores\"" fi # diff --git a/tests/WE2E/setup_WE2E_tests.sh b/tests/WE2E/setup_WE2E_tests.sh index 1aad10054..de6cdf0c2 100755 --- a/tests/WE2E/setup_WE2E_tests.sh +++ b/tests/WE2E/setup_WE2E_tests.sh @@ -76,9 +76,12 @@ exec_subdir='bin_intel/bin' #----------------------------------------------------------------------- # Load Python Modules -env_file="${SRW_APP_DIR}/env/wflow_${machine}.env" -source ${env_file} +env_path="${SRW_APP_DIR}/modulefiles" +env_file="wflow_${machine}" echo "-- Load environment =>" $env_file +module use ${env_path} +module load ${env_file} +conda activate regional_workflow module list diff --git a/tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_CONUS_25km_ics_FV3GFS_lbcs_RAP_suite_HRRR.sh b/tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_CONUS_25km_ics_FV3GFS_lbcs_RAP_suite_HRRR.sh index 492a16336..1465641a5 100644 --- a/tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_CONUS_25km_ics_FV3GFS_lbcs_RAP_suite_HRRR.sh +++ b/tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_CONUS_25km_ics_FV3GFS_lbcs_RAP_suite_HRRR.sh @@ -22,6 +22,7 @@ EXTRN_MDL_NAME_ICS="FV3GFS" FV3GFS_FILE_FMT_ICS="grib2" EXTRN_MDL_NAME_LBCS="RAP" USE_USER_STAGED_EXTRN_FILES="TRUE" +EXTRN_MDL_FILES_LBCS=( '{yy}{jjj}{hh}00{fcst_hr:02d}00' ) DATE_FIRST_CYCL="20200810" DATE_LAST_CYCL="20200810" diff --git a/tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_CONUS_25km_ics_NAM_lbcs_NAM_suite_HRRR.sh b/tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_CONUS_25km_ics_NAM_lbcs_NAM_suite_HRRR.sh index a91c10151..10be9fed9 100644 --- a/tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_CONUS_25km_ics_NAM_lbcs_NAM_suite_HRRR.sh +++ b/tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_CONUS_25km_ics_NAM_lbcs_NAM_suite_HRRR.sh @@ -17,9 +17,9 @@ EXTRN_MDL_NAME_ICS="NAM" EXTRN_MDL_NAME_LBCS="NAM" USE_USER_STAGED_EXTRN_FILES="TRUE" -DATE_FIRST_CYCL="20150602" -DATE_LAST_CYCL="20150602" -CYCL_HRS=( "12" ) +DATE_FIRST_CYCL="20210615" +DATE_LAST_CYCL="20210615" +CYCL_HRS=( "00" ) -FCST_LEN_HRS="24" +FCST_LEN_HRS="6" LBC_SPEC_INTVL_HRS="3" diff --git a/tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_CONUS_25km_ics_NAM_lbcs_NAM_suite_RRFS_v1beta.sh b/tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_CONUS_25km_ics_NAM_lbcs_NAM_suite_RRFS_v1beta.sh index a22466e5b..540e5a058 
100644 --- a/tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_CONUS_25km_ics_NAM_lbcs_NAM_suite_RRFS_v1beta.sh +++ b/tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_CONUS_25km_ics_NAM_lbcs_NAM_suite_RRFS_v1beta.sh @@ -17,9 +17,9 @@ EXTRN_MDL_NAME_ICS="NAM" EXTRN_MDL_NAME_LBCS="NAM" USE_USER_STAGED_EXTRN_FILES="TRUE" -DATE_FIRST_CYCL="20150602" -DATE_LAST_CYCL="20150602" -CYCL_HRS=( "12" ) +DATE_FIRST_CYCL="20210615" +DATE_LAST_CYCL="20210615" +CYCL_HRS=( "00" ) -FCST_LEN_HRS="24" +FCST_LEN_HRS="6" LBC_SPEC_INTVL_HRS="3" diff --git a/tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_CONUScompact_13km_ics_FV3GFS_lbcs_FV3GFS_suite_GFS_v16.sh b/tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_CONUScompact_13km_ics_FV3GFS_lbcs_FV3GFS_suite_GFS_v16.sh new file mode 100644 index 000000000..d7c402004 --- /dev/null +++ b/tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_CONUScompact_13km_ics_FV3GFS_lbcs_FV3GFS_suite_GFS_v16.sh @@ -0,0 +1,25 @@ +# +# TEST PURPOSE/DESCRIPTION: +# ------------------------ +# +# This test is to ensure that the workflow running in community mode +# completes successfully on the RRFS_CONUScompact_13km grid using the +# GFS_v16 physics suite with ICs and LBCs derived from the FV3GFS. +# + +RUN_ENVIR="community" +PREEXISTING_DIR_METHOD="rename" + +PREDEF_GRID_NAME="RRFS_CONUScompact_13km" +CCPP_PHYS_SUITE="FV3_GFS_v16" + +EXTRN_MDL_NAME_ICS="FV3GFS" +EXTRN_MDL_NAME_LBCS="FV3GFS" +USE_USER_STAGED_EXTRN_FILES="TRUE" + +DATE_FIRST_CYCL="20190701" +DATE_LAST_CYCL="20190701" +CYCL_HRS=( "00" ) + +FCST_LEN_HRS="6" +LBC_SPEC_INTVL_HRS="3" diff --git a/tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_CONUS_13km_ics_HRRR_lbcs_RAP_suite_HRRR.sh b/tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_CONUScompact_13km_ics_HRRR_lbcs_RAP_suite_HRRR.sh similarity index 68% rename from tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_CONUS_13km_ics_HRRR_lbcs_RAP_suite_HRRR.sh rename to tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_CONUScompact_13km_ics_HRRR_lbcs_RAP_suite_HRRR.sh index bec80f24a..40f5e4997 100644 --- a/tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_CONUS_13km_ics_HRRR_lbcs_RAP_suite_HRRR.sh +++ b/tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_CONUScompact_13km_ics_HRRR_lbcs_RAP_suite_HRRR.sh @@ -3,7 +3,7 @@ # ------------------------ # # This test is to ensure that the workflow running in community mode -# completes successfully on the RRFS_CONUS_13km grid using the HRRR +# completes successfully on the RRFS_CONUScompact_13km grid using the HRRR # physics suite with ICs derived from the HRRR and LBCs derived from the # RAP. 
# @@ -11,12 +11,14 @@ RUN_ENVIR="community" PREEXISTING_DIR_METHOD="rename" -PREDEF_GRID_NAME="RRFS_CONUS_13km" +PREDEF_GRID_NAME="RRFS_CONUScompact_13km" CCPP_PHYS_SUITE="FV3_HRRR" EXTRN_MDL_NAME_ICS="HRRR" EXTRN_MDL_NAME_LBCS="RAP" USE_USER_STAGED_EXTRN_FILES="TRUE" +EXTRN_MDL_FILES_ICS=( '{yy}{jjj}{hh}00{fcst_hr:02d}00' ) +EXTRN_MDL_FILES_LBCS=( '{yy}{jjj}{hh}00{fcst_hr:02d}00' ) DATE_FIRST_CYCL="20200810" DATE_LAST_CYCL="20200810" diff --git a/tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_CONUS_3km_ics_HRRR_lbcs_RAP_suite_RRFS_v1alpha.sh b/tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_CONUScompact_13km_ics_HRRR_lbcs_RAP_suite_RRFS_v1alpha.sh similarity index 65% rename from tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_CONUS_3km_ics_HRRR_lbcs_RAP_suite_RRFS_v1alpha.sh rename to tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_CONUScompact_13km_ics_HRRR_lbcs_RAP_suite_RRFS_v1alpha.sh index 5637d0b7a..f44afffcd 100644 --- a/tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_CONUS_3km_ics_HRRR_lbcs_RAP_suite_RRFS_v1alpha.sh +++ b/tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_CONUScompact_13km_ics_HRRR_lbcs_RAP_suite_RRFS_v1alpha.sh @@ -3,20 +3,21 @@ # ------------------------ # # This test is to ensure that the workflow running in community mode -# completes successfully on the RRFS_CONUS_3km grid using the RRFS_v1alpha -# physics suite with ICs derived from the HRRR and LBCs derived from the -# RAP. +# completes successfully on the RRFS_CONUScompact_13km grid using the RRFS_v1alpha +# physics suite with ICs derived from the HRRR and LBCs derived from the RAP. # RUN_ENVIR="community" PREEXISTING_DIR_METHOD="rename" -PREDEF_GRID_NAME="RRFS_CONUS_3km" +PREDEF_GRID_NAME="RRFS_CONUScompact_13km" CCPP_PHYS_SUITE="FV3_RRFS_v1alpha" EXTRN_MDL_NAME_ICS="HRRR" EXTRN_MDL_NAME_LBCS="RAP" USE_USER_STAGED_EXTRN_FILES="TRUE" +EXTRN_MDL_FILES_ICS=( '{yy}{jjj}{hh}00{fcst_hr:02d}00' ) +EXTRN_MDL_FILES_LBCS=( '{yy}{jjj}{hh}00{fcst_hr:02d}00' ) DATE_FIRST_CYCL="20200801" DATE_LAST_CYCL="20200801" diff --git a/tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_CONUS_3km_ics_HRRR_lbcs_RAP_suite_RRFS_v1beta.sh b/tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_CONUScompact_13km_ics_HRRR_lbcs_RAP_suite_RRFS_v1beta.sh similarity index 65% rename from tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_CONUS_3km_ics_HRRR_lbcs_RAP_suite_RRFS_v1beta.sh rename to tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_CONUScompact_13km_ics_HRRR_lbcs_RAP_suite_RRFS_v1beta.sh index 07f66f9cc..afcfa32e1 100644 --- a/tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_CONUS_3km_ics_HRRR_lbcs_RAP_suite_RRFS_v1beta.sh +++ b/tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_CONUScompact_13km_ics_HRRR_lbcs_RAP_suite_RRFS_v1beta.sh @@ -3,20 +3,21 @@ # ------------------------ # # This test is to ensure that the workflow running in community mode -# completes successfully on the RRFS_CONUS_3km grid using the RRFS_v1beta -# physics suite with ICs derived from the HRRR and LBCs derived from the -# RAP. +# completes successfully on the RRFS_CONUScompact_13km grid using the RRFS_v1beta +# physics suite with ICs derived from the HRRR and LBCs derived from the RAP. 
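Several of the renamed test configurations above and below now set EXTRN_MDL_FILES_ICS/LBCS to the template '{yy}{jjj}{hh}00{fcst_hr:02d}00'. A hedged sketch of how such a Python-style template might be expanded for one cycle, assuming {yy} is the two-digit year, {jjj} the day of year, {hh} the cycle hour, and {fcst_hr} the forecast lead time in hours:

```
from datetime import datetime

template = "{yy}{jjj}{hh}00{fcst_hr:02d}00"
cycle = datetime(2020, 8, 10, 0)        # hypothetical 2020081000 cycle

for fcst_hr in range(0, 7, 3):          # e.g. the 0-, 3-, and 6-hour files
    fname = template.format(
        yy=cycle.strftime("%y"),        # two-digit year
        jjj=cycle.strftime("%j"),       # day of year, zero-padded
        hh=cycle.strftime("%H"),        # cycle hour
        fcst_hr=fcst_hr,
    )
    print(fname)                        # one staged HRRR/RAP-style file name
```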
# RUN_ENVIR="community" PREEXISTING_DIR_METHOD="rename" -PREDEF_GRID_NAME="RRFS_CONUS_3km" +PREDEF_GRID_NAME="RRFS_CONUScompact_13km" CCPP_PHYS_SUITE="FV3_RRFS_v1beta" EXTRN_MDL_NAME_ICS="HRRR" EXTRN_MDL_NAME_LBCS="RAP" USE_USER_STAGED_EXTRN_FILES="TRUE" +EXTRN_MDL_FILES_ICS=( '{yy}{jjj}{hh}00{fcst_hr:02d}00' ) +EXTRN_MDL_FILES_LBCS=( '{yy}{jjj}{hh}00{fcst_hr:02d}00' ) DATE_FIRST_CYCL="20200801" DATE_LAST_CYCL="20200801" diff --git a/tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_CONUScompact_25km_ics_FV3GFS_lbcs_FV3GFS_suite_GFS_v16.sh b/tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_CONUScompact_25km_ics_FV3GFS_lbcs_FV3GFS_suite_GFS_v16.sh new file mode 100644 index 000000000..f5fc6384b --- /dev/null +++ b/tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_CONUScompact_25km_ics_FV3GFS_lbcs_FV3GFS_suite_GFS_v16.sh @@ -0,0 +1,25 @@ +# +# TEST PURPOSE/DESCRIPTION: +# ------------------------ +# +# This test is to ensure that the workflow running in community mode +# completes successfully on the RRFS_CONUScompact_25km grid using the +# GFS_v16 physics suite with ICs and LBCs derived from the FV3GFS. +# + +RUN_ENVIR="community" +PREEXISTING_DIR_METHOD="rename" + +PREDEF_GRID_NAME="RRFS_CONUScompact_25km" +CCPP_PHYS_SUITE="FV3_GFS_v16" + +EXTRN_MDL_NAME_ICS="FV3GFS" +EXTRN_MDL_NAME_LBCS="FV3GFS" +USE_USER_STAGED_EXTRN_FILES="TRUE" + +DATE_FIRST_CYCL="20190701" +DATE_LAST_CYCL="20190701" +CYCL_HRS=( "00" ) + +FCST_LEN_HRS="6" +LBC_SPEC_INTVL_HRS="3" diff --git a/tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_CONUS_25km_ics_HRRR_lbcs_HRRR_suite_HRRR.sh b/tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_CONUScompact_25km_ics_HRRR_lbcs_HRRR_suite_HRRR.sh similarity index 67% rename from tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_CONUS_25km_ics_HRRR_lbcs_HRRR_suite_HRRR.sh rename to tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_CONUScompact_25km_ics_HRRR_lbcs_HRRR_suite_HRRR.sh index 0e8ad1d04..f1302d163 100644 --- a/tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_CONUS_25km_ics_HRRR_lbcs_HRRR_suite_HRRR.sh +++ b/tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_CONUScompact_25km_ics_HRRR_lbcs_HRRR_suite_HRRR.sh @@ -3,19 +3,21 @@ # ------------------------ # # This test is to ensure that the workflow running in community mode -# completes successfully on the RRFS_CONUS_25km grid using the HRRR +# completes successfully on the RRFS_CONUScompact_25km grid using the HRRR # physics suite with ICs and LBCs derived from the HRRR. 
# RUN_ENVIR="community" PREEXISTING_DIR_METHOD="rename" -PREDEF_GRID_NAME="RRFS_CONUS_25km" +PREDEF_GRID_NAME="RRFS_CONUScompact_25km" CCPP_PHYS_SUITE="FV3_HRRR" EXTRN_MDL_NAME_ICS="HRRR" EXTRN_MDL_NAME_LBCS="HRRR" USE_USER_STAGED_EXTRN_FILES="TRUE" +EXTRN_MDL_FILES_ICS=( '{yy}{jjj}{hh}00{fcst_hr:02d}00' ) +EXTRN_MDL_FILES_LBCS=( '{yy}{jjj}{hh}00{fcst_hr:02d}00' ) DATE_FIRST_CYCL="20200810" DATE_LAST_CYCL="20200810" @@ -23,3 +25,4 @@ CYCL_HRS=( "00" ) FCST_LEN_HRS="24" LBC_SPEC_INTVL_HRS="3" + diff --git a/tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_CONUS_25km_ics_HRRR_lbcs_HRRR_suite_RRFS_v1beta.sh b/tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_CONUScompact_25km_ics_HRRR_lbcs_HRRR_suite_RRFS_v1beta.sh similarity index 66% rename from tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_CONUS_25km_ics_HRRR_lbcs_HRRR_suite_RRFS_v1beta.sh rename to tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_CONUScompact_25km_ics_HRRR_lbcs_HRRR_suite_RRFS_v1beta.sh index f236d2f35..0060e4466 100644 --- a/tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_CONUS_25km_ics_HRRR_lbcs_HRRR_suite_RRFS_v1beta.sh +++ b/tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_CONUScompact_25km_ics_HRRR_lbcs_HRRR_suite_RRFS_v1beta.sh @@ -3,19 +3,21 @@ # ------------------------ # # This test is to ensure that the workflow running in community mode -# completes successfully on the RRFS_CONUS_25km grid using the RRFS_v1beta +# completes successfully on the RRFS_CONUScompact_25km grid using the RRFS_v1beta # physics suite with ICs and LBCs derived from the HRRR. # RUN_ENVIR="community" PREEXISTING_DIR_METHOD="rename" -PREDEF_GRID_NAME="RRFS_CONUS_25km" +PREDEF_GRID_NAME="RRFS_CONUScompact_25km" CCPP_PHYS_SUITE="FV3_RRFS_v1beta" EXTRN_MDL_NAME_ICS="HRRR" EXTRN_MDL_NAME_LBCS="HRRR" USE_USER_STAGED_EXTRN_FILES="TRUE" +EXTRN_MDL_FILES_ICS=( '{yy}{jjj}{hh}00{fcst_hr:02d}00' ) +EXTRN_MDL_FILES_LBCS=( '{yy}{jjj}{hh}00{fcst_hr:02d}00' ) DATE_FIRST_CYCL="20200810" DATE_LAST_CYCL="20200810" diff --git a/tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_CONUS_25km_ics_HRRR_lbcs_RAP_suite_HRRR.sh b/tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_CONUScompact_25km_ics_HRRR_lbcs_RAP_suite_HRRR.sh similarity index 66% rename from tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_CONUS_25km_ics_HRRR_lbcs_RAP_suite_HRRR.sh rename to tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_CONUScompact_25km_ics_HRRR_lbcs_RAP_suite_HRRR.sh index 01471df07..3dfedb568 100644 --- a/tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_CONUS_25km_ics_HRRR_lbcs_RAP_suite_HRRR.sh +++ b/tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_CONUScompact_25km_ics_HRRR_lbcs_RAP_suite_HRRR.sh @@ -3,20 +3,21 @@ # ------------------------ # # This test is to ensure that the workflow running in community mode -# completes successfully on the RRFS_CONUS_25km grid using the HRRR -# physics suite with ICs derived from the HRRR and LBCs derived from the -# RAP. +# completes successfully on the RRFS_CONUScompact_25km grid using the HRRR +# physics suite with ICs derived from the HRRR and LBCs derived from the RAP. 
# RUN_ENVIR="community" PREEXISTING_DIR_METHOD="rename" -PREDEF_GRID_NAME="RRFS_CONUS_25km" +PREDEF_GRID_NAME="RRFS_CONUScompact_25km" CCPP_PHYS_SUITE="FV3_HRRR" EXTRN_MDL_NAME_ICS="HRRR" EXTRN_MDL_NAME_LBCS="RAP" USE_USER_STAGED_EXTRN_FILES="TRUE" +EXTRN_MDL_FILES_ICS=( '{yy}{jjj}{hh}00{fcst_hr:02d}00' ) +EXTRN_MDL_FILES_LBCS=( '{yy}{jjj}{hh}00{fcst_hr:02d}00' ) DATE_FIRST_CYCL="20200810" DATE_LAST_CYCL="20200810" diff --git a/tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_CONUS_25km_ics_HRRR_lbcs_RAP_suite_RRFS_v1alpha.sh b/tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_CONUScompact_25km_ics_HRRR_lbcs_RAP_suite_RRFS_v1alpha.sh similarity index 65% rename from tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_CONUS_25km_ics_HRRR_lbcs_RAP_suite_RRFS_v1alpha.sh rename to tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_CONUScompact_25km_ics_HRRR_lbcs_RAP_suite_RRFS_v1alpha.sh index 48f2f4c13..bf2e2f15e 100644 --- a/tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_CONUS_25km_ics_HRRR_lbcs_RAP_suite_RRFS_v1alpha.sh +++ b/tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_CONUScompact_25km_ics_HRRR_lbcs_RAP_suite_RRFS_v1alpha.sh @@ -3,20 +3,21 @@ # ------------------------ # # This test is to ensure that the workflow running in community mode -# completes successfully on the RRFS_CONUS_25km grid using the RRFS_v1alpha -# physics suite with ICs derived from the HRRR and LBCs derived from the -# RAP. +# completes successfully on the RRFS_CONUScompact_25km grid using the RRFS_v1alpha +# physics suite with ICs derived from the HRRR and LBCs derived from the RAP. # RUN_ENVIR="community" PREEXISTING_DIR_METHOD="rename" -PREDEF_GRID_NAME="RRFS_CONUS_25km" +PREDEF_GRID_NAME="RRFS_CONUScompact_25km" CCPP_PHYS_SUITE="FV3_RRFS_v1alpha" EXTRN_MDL_NAME_ICS="HRRR" EXTRN_MDL_NAME_LBCS="RAP" USE_USER_STAGED_EXTRN_FILES="TRUE" +EXTRN_MDL_FILES_ICS=( '{yy}{jjj}{hh}00{fcst_hr:02d}00' ) +EXTRN_MDL_FILES_LBCS=( '{yy}{jjj}{hh}00{fcst_hr:02d}00' ) DATE_FIRST_CYCL="20200810" DATE_LAST_CYCL="20200810" diff --git a/tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_CONUS_25km_ics_HRRR_lbcs_RAP_suite_RRFS_v1beta.sh b/tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_CONUScompact_25km_ics_HRRR_lbcs_RAP_suite_RRFS_v1beta.sh similarity index 65% rename from tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_CONUS_25km_ics_HRRR_lbcs_RAP_suite_RRFS_v1beta.sh rename to tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_CONUScompact_25km_ics_HRRR_lbcs_RAP_suite_RRFS_v1beta.sh index f5fabd981..8fc60571d 100644 --- a/tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_CONUS_25km_ics_HRRR_lbcs_RAP_suite_RRFS_v1beta.sh +++ b/tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_CONUScompact_25km_ics_HRRR_lbcs_RAP_suite_RRFS_v1beta.sh @@ -3,20 +3,21 @@ # ------------------------ # # This test is to ensure that the workflow running in community mode -# completes successfully on the RRFS_CONUS_25km grid using the RRFS_v1beta -# physics suite with ICs derived from the HRRR and LBCs derived from the -# RAP. +# completes successfully on the RRFS_CONUScompact_25km grid using the RRFS_v1beta +# physics suite with ICs derived from the HRRR and LBCs derived from the RAP. 
# RUN_ENVIR="community" PREEXISTING_DIR_METHOD="rename" -PREDEF_GRID_NAME="RRFS_CONUS_25km" +PREDEF_GRID_NAME="RRFS_CONUScompact_25km" CCPP_PHYS_SUITE="FV3_RRFS_v1beta" EXTRN_MDL_NAME_ICS="HRRR" EXTRN_MDL_NAME_LBCS="RAP" USE_USER_STAGED_EXTRN_FILES="TRUE" +EXTRN_MDL_FILES_ICS=( '{yy}{jjj}{hh}00{fcst_hr:02d}00' ) +EXTRN_MDL_FILES_LBCS=( '{yy}{jjj}{hh}00{fcst_hr:02d}00' ) DATE_FIRST_CYCL="20200810" DATE_LAST_CYCL="20200810" diff --git a/tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_CONUScompact_3km_ics_FV3GFS_lbcs_FV3GFS_suite_GFS_v16.sh b/tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_CONUScompact_3km_ics_FV3GFS_lbcs_FV3GFS_suite_GFS_v16.sh new file mode 100644 index 000000000..5cd9b0904 --- /dev/null +++ b/tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_CONUScompact_3km_ics_FV3GFS_lbcs_FV3GFS_suite_GFS_v16.sh @@ -0,0 +1,25 @@ +# +# TEST PURPOSE/DESCRIPTION: +# ------------------------ +# +# This test is to ensure that the workflow running in community mode +# completes successfully on the RRFS_CONUScompact_3km grid using the +# GFS_v16 physics suite with ICs and LBCs derived from the FV3GFS. +# + +RUN_ENVIR="community" +PREEXISTING_DIR_METHOD="rename" + +PREDEF_GRID_NAME="RRFS_CONUScompact_3km" +CCPP_PHYS_SUITE="FV3_GFS_v16" + +EXTRN_MDL_NAME_ICS="FV3GFS" +EXTRN_MDL_NAME_LBCS="FV3GFS" +USE_USER_STAGED_EXTRN_FILES="TRUE" + +DATE_FIRST_CYCL="20190701" +DATE_LAST_CYCL="20190701" +CYCL_HRS=( "00" ) + +FCST_LEN_HRS="6" +LBC_SPEC_INTVL_HRS="3" diff --git a/tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_CONUS_3km_ics_HRRR_lbcs_RAP_suite_GFS_v15p2.sh b/tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_CONUScompact_3km_ics_HRRR_lbcs_RAP_suite_GFS_v15p2.sh similarity index 65% rename from tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_CONUS_3km_ics_HRRR_lbcs_RAP_suite_GFS_v15p2.sh rename to tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_CONUScompact_3km_ics_HRRR_lbcs_RAP_suite_GFS_v15p2.sh index 960f4b186..11227ea00 100644 --- a/tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_CONUS_3km_ics_HRRR_lbcs_RAP_suite_GFS_v15p2.sh +++ b/tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_CONUScompact_3km_ics_HRRR_lbcs_RAP_suite_GFS_v15p2.sh @@ -3,20 +3,21 @@ # ------------------------ # # This test is to ensure that the workflow running in community mode -# completes successfully on the RRFS_CONUS_3km grid using the GFS_v15p2 -# physics suite with ICs derived from the HRRR and LBCs derived from the -# RAP. +# completes successfully on the RRFS_CONUScompact_3km grid using the GFS_v15p2 +# physics suite with ICs derived from the HRRR and LBCs derived from the RAP. 
# RUN_ENVIR="community" PREEXISTING_DIR_METHOD="rename" -PREDEF_GRID_NAME="RRFS_CONUS_3km" +PREDEF_GRID_NAME="RRFS_CONUScompact_3km" CCPP_PHYS_SUITE="FV3_GFS_v15p2" EXTRN_MDL_NAME_ICS="HRRR" EXTRN_MDL_NAME_LBCS="RAP" USE_USER_STAGED_EXTRN_FILES="TRUE" +EXTRN_MDL_FILES_ICS=( '{yy}{jjj}{hh}00{fcst_hr:02d}00' ) +EXTRN_MDL_FILES_LBCS=( '{yy}{jjj}{hh}00{fcst_hr:02d}00' ) DATE_FIRST_CYCL="20200801" DATE_LAST_CYCL="20200801" diff --git a/tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_CONUS_3km_ics_HRRR_lbcs_RAP_suite_HRRR.sh b/tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_CONUScompact_3km_ics_HRRR_lbcs_RAP_suite_HRRR.sh similarity index 66% rename from tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_CONUS_3km_ics_HRRR_lbcs_RAP_suite_HRRR.sh rename to tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_CONUScompact_3km_ics_HRRR_lbcs_RAP_suite_HRRR.sh index 56b1ffb8b..396ce3e15 100644 --- a/tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_CONUS_3km_ics_HRRR_lbcs_RAP_suite_HRRR.sh +++ b/tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_CONUScompact_3km_ics_HRRR_lbcs_RAP_suite_HRRR.sh @@ -3,20 +3,22 @@ # ------------------------ # # This test is to ensure that the workflow running in community mode -# completes successfully on the RRFS_CONUS_3km grid using the HRRR -# physics suite with ICs derived from the HRRR and LBCs derived from the -# RAP. +# completes successfully on the RRFS_CONUScompact_3km grid using the HRRR +# physics suite with ICs derived from the HRRR and LBCs derived from the RAP. # RUN_ENVIR="community" PREEXISTING_DIR_METHOD="rename" -PREDEF_GRID_NAME="RRFS_CONUS_3km" +PREDEF_GRID_NAME="RRFS_CONUScompact_3km" CCPP_PHYS_SUITE="FV3_HRRR" EXTRN_MDL_NAME_ICS="HRRR" EXTRN_MDL_NAME_LBCS="RAP" USE_USER_STAGED_EXTRN_FILES="TRUE" +EXTRN_MDL_FILES_ICS=( '{yy}{jjj}{hh}00{fcst_hr:02d}00' ) +EXTRN_MDL_FILES_LBCS=( '{yy}{jjj}{hh}00{fcst_hr:02d}00' ) + DATE_FIRST_CYCL="20200810" DATE_LAST_CYCL="20200810" diff --git a/tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_CONUS_13km_ics_HRRR_lbcs_RAP_suite_RRFS_v1alpha.sh b/tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_CONUScompact_3km_ics_HRRR_lbcs_RAP_suite_RRFS_v1alpha.sh similarity index 56% rename from tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_CONUS_13km_ics_HRRR_lbcs_RAP_suite_RRFS_v1alpha.sh rename to tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_CONUScompact_3km_ics_HRRR_lbcs_RAP_suite_RRFS_v1alpha.sh index b3b420e00..cc92aecaa 100644 --- a/tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_CONUS_13km_ics_HRRR_lbcs_RAP_suite_RRFS_v1alpha.sh +++ b/tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_CONUScompact_3km_ics_HRRR_lbcs_RAP_suite_RRFS_v1alpha.sh @@ -3,20 +3,22 @@ # ------------------------ # # This test is to ensure that the workflow running in community mode -# completes successfully on the RRFS_CONUS_13km grid using the RRFS_v1alpha -# physics suite with ICs derived from the HRRR and LBCs derived from the -# RAP. +# completes successfully on the RRFS_CONUScompact_3km grid using the +# RRFS_v1alpha physics suite with ICs derived from the HRRR and LBCs +# derived from the RAP. 
# RUN_ENVIR="community" PREEXISTING_DIR_METHOD="rename" -PREDEF_GRID_NAME="RRFS_CONUS_13km" +PREDEF_GRID_NAME="RRFS_CONUScompact_3km" CCPP_PHYS_SUITE="FV3_RRFS_v1alpha" EXTRN_MDL_NAME_ICS="HRRR" EXTRN_MDL_NAME_LBCS="RAP" USE_USER_STAGED_EXTRN_FILES="TRUE" +EXTRN_MDL_FILES_ICS=( '{yy}{jjj}{hh}00{fcst_hr:02d}00' ) +EXTRN_MDL_FILES_LBCS=( '{yy}{jjj}{hh}00{fcst_hr:02d}00' ) DATE_FIRST_CYCL="20200801" DATE_LAST_CYCL="20200801" diff --git a/tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_CONUS_13km_ics_HRRR_lbcs_RAP_suite_RRFS_v1beta.sh b/tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_CONUScompact_3km_ics_HRRR_lbcs_RAP_suite_RRFS_v1beta.sh similarity index 56% rename from tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_CONUS_13km_ics_HRRR_lbcs_RAP_suite_RRFS_v1beta.sh rename to tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_CONUScompact_3km_ics_HRRR_lbcs_RAP_suite_RRFS_v1beta.sh index 3f39ab0f7..a75f8d79e 100644 --- a/tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_CONUS_13km_ics_HRRR_lbcs_RAP_suite_RRFS_v1beta.sh +++ b/tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_CONUScompact_3km_ics_HRRR_lbcs_RAP_suite_RRFS_v1beta.sh @@ -3,20 +3,22 @@ # ------------------------ # # This test is to ensure that the workflow running in community mode -# completes successfully on the RRFS_CONUS_13km grid using the RRFS_v1beta -# physics suite with ICs derived from the HRRR and LBCs derived from the -# RAP. +# completes successfully on the RRFS_CONUScompact_3km grid using the +# RRFS_v1beta physics suite with ICs derived from the HRRR and LBCs +# derived from the RAP. # RUN_ENVIR="community" PREEXISTING_DIR_METHOD="rename" -PREDEF_GRID_NAME="RRFS_CONUS_13km" +PREDEF_GRID_NAME="RRFS_CONUScompact_3km" CCPP_PHYS_SUITE="FV3_RRFS_v1beta" EXTRN_MDL_NAME_ICS="HRRR" EXTRN_MDL_NAME_LBCS="RAP" USE_USER_STAGED_EXTRN_FILES="TRUE" +EXTRN_MDL_FILES_ICS=( '{yy}{jjj}{hh}00{fcst_hr:02d}00' ) +EXTRN_MDL_FILES_LBCS=( '{yy}{jjj}{hh}00{fcst_hr:02d}00' ) DATE_FIRST_CYCL="20200801" DATE_LAST_CYCL="20200801" diff --git a/tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_NA_13km_ics_FV3GFS_lbcs_FV3GFS_suite_RRFS_v1beta.sh b/tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_NA_13km_ics_FV3GFS_lbcs_FV3GFS_suite_RRFS_v1beta.sh new file mode 100644 index 000000000..9e4cb594b --- /dev/null +++ b/tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_NA_13km_ics_FV3GFS_lbcs_FV3GFS_suite_RRFS_v1beta.sh @@ -0,0 +1,54 @@ +# +# TEST PURPOSE/DESCRIPTION: +# ------------------------ +# +# This test is to ensure that the workflow running in community mode +# completes successfully on the RRFS_NA_13km grid using the RRFS_v1beta +# physics suite with ICs and LBCs derived from the FV3GFS. +# +# Note that this test also sets various resource parameters for several +# of the rocoto tasks in order to more efficiently run the code on this +# (very large) grid. 
+# + +RUN_ENVIR="community" +PREEXISTING_DIR_METHOD="rename" + +PREDEF_GRID_NAME="RRFS_NA_13km" +CCPP_PHYS_SUITE="FV3_RRFS_v1beta" + +EXTRN_MDL_NAME_ICS="FV3GFS" +EXTRN_MDL_NAME_LBCS="FV3GFS" +USE_USER_STAGED_EXTRN_FILES="TRUE" + +DATE_FIRST_CYCL="20190701" +DATE_LAST_CYCL="20190701" +CYCL_HRS=( "00" ) + +FCST_LEN_HRS="6" +LBC_SPEC_INTVL_HRS="6" + +######################################################################### +# The following code/namelist/workflow setting changes are necessary to # +# run/optimize end-to-end experiments using the 13-km NA grid # +######################################################################### + +# The model should be built in 32-bit mode (64-bit will result in much +# longer run times). + +# Use k_split=2 and n_split=5; the previous namelist values (k_split=4 +# and n_split=5) will result in significantly longer run times. + +NNODES_MAKE_ICS="12" +NNODES_MAKE_LBCS="12" +PPN_MAKE_ICS="4" +PPN_MAKE_LBCS="4" +WTIME_MAKE_LBCS="01:00:00" + +NNODES_RUN_POST="6" +PPN_RUN_POST="12" + +OMP_STACKSIZE_MAKE_ICS="2048m" +OMP_STACKSIZE_RUN_FCST="2048m" + +############################################################################### diff --git a/tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_NA_3km_ics_FV3GFS_lbcs_FV3GFS_suite_RRFS_v1alpha.sh b/tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_NA_3km_ics_FV3GFS_lbcs_FV3GFS_suite_RRFS_v1alpha.sh index c088805de..f743ebb40 100644 --- a/tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_NA_3km_ics_FV3GFS_lbcs_FV3GFS_suite_RRFS_v1alpha.sh +++ b/tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_NA_3km_ics_FV3GFS_lbcs_FV3GFS_suite_RRFS_v1alpha.sh @@ -45,7 +45,7 @@ PPN_MAKE_ICS="4" PPN_MAKE_LBCS="4" WTIME_MAKE_LBCS="01:00:00" -NNODES_RUN_POST="6" +NNODES_RUN_POST="8" PPN_RUN_POST="12" OMP_STACKSIZE_MAKE_ICS="2048m" diff --git a/tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_SUBCONUS_3km_ics_HRRR_lbcs_RAP_suite_GFS_v15p2.sh b/tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_SUBCONUS_3km_ics_HRRR_lbcs_RAP_suite_GFS_v15p2.sh index 3534d5df9..c5512d0ea 100644 --- a/tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_SUBCONUS_3km_ics_HRRR_lbcs_RAP_suite_GFS_v15p2.sh +++ b/tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_RRFS_SUBCONUS_3km_ics_HRRR_lbcs_RAP_suite_GFS_v15p2.sh @@ -17,6 +17,8 @@ CCPP_PHYS_SUITE="FV3_GFS_v15p2" EXTRN_MDL_NAME_ICS="HRRR" EXTRN_MDL_NAME_LBCS="RAP" USE_USER_STAGED_EXTRN_FILES="TRUE" +EXTRN_MDL_FILES_ICS=( '{yy}{jjj}{hh}00{fcst_hr:02d}00' ) +EXTRN_MDL_FILES_LBCS=( '{yy}{jjj}{hh}00{fcst_hr:02d}00' ) DATE_FIRST_CYCL="20200801" DATE_LAST_CYCL="20200801" diff --git a/tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_SUBCONUS_Ind_3km_ics_FV3GFS_lbcs_FV3GFS_suite_GFS_v16.sh b/tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_SUBCONUS_Ind_3km_ics_FV3GFS_lbcs_FV3GFS_suite_GFS_v16.sh new file mode 100644 index 000000000..e11ab393a --- /dev/null +++ b/tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_SUBCONUS_Ind_3km_ics_FV3GFS_lbcs_FV3GFS_suite_GFS_v16.sh @@ -0,0 +1,27 @@ +# +# TEST PURPOSE/DESCRIPTION: +# ------------------------ +# +# This test is to ensure that the workflow running in community mode +# completes successfully on the SUBCONUS_Ind_3km grid using the GFS_v16 +# physics suite with ICs and LBCs derived from the FV3GFS.
+# + +RUN_ENVIR="community" +PREEXISTING_DIR_METHOD="rename" + +PREDEF_GRID_NAME="SUBCONUS_Ind_3km" +CCPP_PHYS_SUITE="FV3_GFS_v16" + +EXTRN_MDL_NAME_ICS="FV3GFS" +FV3GFS_FILE_FMT_ICS="grib2" +EXTRN_MDL_NAME_LBCS="FV3GFS" +FV3GFS_FILE_FMT_LBCS="grib2" +USE_USER_STAGED_EXTRN_FILES="TRUE" + +DATE_FIRST_CYCL="20190615" +DATE_LAST_CYCL="20190615" +CYCL_HRS=( "18" ) + +FCST_LEN_HRS="6" +LBC_SPEC_INTVL_HRS="6" diff --git a/tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_SUBCONUS_Ind_3km_ics_HRRR_lbcs_RAP_suite_HRRR.sh b/tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_SUBCONUS_Ind_3km_ics_HRRR_lbcs_RAP_suite_HRRR.sh new file mode 100644 index 000000000..aa2cf6edc --- /dev/null +++ b/tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_SUBCONUS_Ind_3km_ics_HRRR_lbcs_RAP_suite_HRRR.sh @@ -0,0 +1,27 @@ +# +# TEST PURPOSE/DESCRIPTION: +# ------------------------ +# +# This test is to ensure that the workflow running in community mode +# completes successfully on the SUBCONUS_Ind_3km grid using the HRRR +# physics suite with ICs derived from HRRR and LBCs derived from the RAP. +# + +RUN_ENVIR="community" +PREEXISTING_DIR_METHOD="rename" + +PREDEF_GRID_NAME="SUBCONUS_Ind_3km" +CCPP_PHYS_SUITE="FV3_HRRR" + +EXTRN_MDL_NAME_ICS="HRRR" +EXTRN_MDL_NAME_LBCS="RAP" +USE_USER_STAGED_EXTRN_FILES="TRUE" +EXTRN_MDL_FILES_ICS=( '{yy}{jjj}{hh}00{fcst_hr:02d}00' ) +EXTRN_MDL_FILES_LBCS=( '{yy}{jjj}{hh}00{fcst_hr:02d}00' ) + +DATE_FIRST_CYCL="20200810" +DATE_LAST_CYCL="20200810" +CYCL_HRS=( "00" ) + +FCST_LEN_HRS="6" +LBC_SPEC_INTVL_HRS="6" diff --git a/tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_SUBCONUS_Ind_3km_ics_HRRR_lbcs_RAP_suite_RRFS_v1beta.sh b/tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_SUBCONUS_Ind_3km_ics_HRRR_lbcs_RAP_suite_RRFS_v1beta.sh new file mode 100644 index 000000000..82c5ef43a --- /dev/null +++ b/tests/WE2E/test_configs/grids_extrn_mdls_suites_community/config.grid_SUBCONUS_Ind_3km_ics_HRRR_lbcs_RAP_suite_RRFS_v1beta.sh @@ -0,0 +1,27 @@ +# +# TEST PURPOSE/DESCRIPTION: +# ------------------------ +# +# This test is to ensure that the workflow running in community mode +# completes successfully on the SUBCONUS_Ind_3km grid using the RRFS_v1beta +# physics suite with ICs derived from HRRR and LBCs derived from the RAP. 
+# + +RUN_ENVIR="community" +PREEXISTING_DIR_METHOD="rename" + +PREDEF_GRID_NAME="SUBCONUS_Ind_3km" +CCPP_PHYS_SUITE="FV3_RRFS_v1beta" + +EXTRN_MDL_NAME_ICS="HRRR" +EXTRN_MDL_NAME_LBCS="RAP" +USE_USER_STAGED_EXTRN_FILES="TRUE" +EXTRN_MDL_FILES_ICS=( '{yy}{jjj}{hh}00{fcst_hr:02d}00' ) +EXTRN_MDL_FILES_LBCS=( '{yy}{jjj}{hh}00{fcst_hr:02d}00' ) + +DATE_FIRST_CYCL="20200801" +DATE_LAST_CYCL="20200801" +CYCL_HRS=( "00" ) + +FCST_LEN_HRS="6" +LBC_SPEC_INTVL_HRS="6" diff --git a/tests/WE2E/test_configs/grids_extrn_mdls_suites_nco/config.nco_grid_RRFS_CONUS_13km_ics_FV3GFS_lbcs_FV3GFS_suite_GFS_v15p2.sh b/tests/WE2E/test_configs/grids_extrn_mdls_suites_nco/config.nco_grid_RRFS_CONUS_13km_ics_FV3GFS_lbcs_FV3GFS_suite_GFS_v16.sh similarity index 82% rename from tests/WE2E/test_configs/grids_extrn_mdls_suites_nco/config.nco_grid_RRFS_CONUS_13km_ics_FV3GFS_lbcs_FV3GFS_suite_GFS_v15p2.sh rename to tests/WE2E/test_configs/grids_extrn_mdls_suites_nco/config.nco_grid_RRFS_CONUS_13km_ics_FV3GFS_lbcs_FV3GFS_suite_GFS_v16.sh index 141a9857c..f6d6454bd 100644 --- a/tests/WE2E/test_configs/grids_extrn_mdls_suites_nco/config.nco_grid_RRFS_CONUS_13km_ics_FV3GFS_lbcs_FV3GFS_suite_GFS_v15p2.sh +++ b/tests/WE2E/test_configs/grids_extrn_mdls_suites_nco/config.nco_grid_RRFS_CONUS_13km_ics_FV3GFS_lbcs_FV3GFS_suite_GFS_v16.sh @@ -3,7 +3,7 @@ # ------------------------ # # This test is to ensure that the workflow running in nco mode completes -# successfully on the RRFS_CONUS_13km grid using the GFS_v15p2 physics +# successfully on the RRFS_CONUS_13km grid using the GFS_v16 physics # suite with ICs and LBCs derived from the FV3GFS. # @@ -11,7 +11,7 @@ RUN_ENVIR="nco" PREEXISTING_DIR_METHOD="rename" PREDEF_GRID_NAME="RRFS_CONUS_13km" -CCPP_PHYS_SUITE="FV3_GFS_v15p2" +CCPP_PHYS_SUITE="FV3_GFS_v16" EXTRN_MDL_NAME_ICS="FV3GFS" FV3GFS_FILE_FMT_ICS="grib2" diff --git a/tests/WE2E/test_configs/grids_extrn_mdls_suites_nco/config.nco_grid_RRFS_CONUS_25km_ics_FV3GFS_lbcs_FV3GFS_suite_GFS_v15p2.sh b/tests/WE2E/test_configs/grids_extrn_mdls_suites_nco/config.nco_grid_RRFS_CONUS_25km_ics_FV3GFS_lbcs_FV3GFS_suite_GFS_v15p2.sh deleted file mode 100644 index 1b7d92162..000000000 --- a/tests/WE2E/test_configs/grids_extrn_mdls_suites_nco/config.nco_grid_RRFS_CONUS_25km_ics_FV3GFS_lbcs_FV3GFS_suite_GFS_v15p2.sh +++ /dev/null @@ -1,26 +0,0 @@ -# -# TEST PURPOSE/DESCRIPTION: -# ------------------------ -# -# This test is to ensure that the workflow running in nco mode completes -# successfully on the RRFS_CONUS_25km grid using the GFS_v15p2 physics -# suite with ICs and LBCs derived from the FV3GFS. 
-# - -RUN_ENVIR="nco" -PREEXISTING_DIR_METHOD="rename" - -PREDEF_GRID_NAME="RRFS_CONUS_25km" -CCPP_PHYS_SUITE="FV3_GFS_v15p2" - -EXTRN_MDL_NAME_ICS="FV3GFS" -FV3GFS_FILE_FMT_ICS="grib2" -EXTRN_MDL_NAME_LBCS="FV3GFS" -FV3GFS_FILE_FMT_LBCS="grib2" - -DATE_FIRST_CYCL="20190615" -DATE_LAST_CYCL="20190615" -CYCL_HRS=( "00" ) - -FCST_LEN_HRS="6" -LBC_SPEC_INTVL_HRS="3" diff --git a/tests/WE2E/test_configs/grids_extrn_mdls_suites_nco/config.nco_grid_RRFS_CONUS_25km_ics_HRRR_lbcs_RAP_suite_GFS_v15p2.sh b/tests/WE2E/test_configs/grids_extrn_mdls_suites_nco/config.nco_grid_RRFS_CONUS_25km_ics_HRRR_lbcs_RAP_suite_GFS_v15p2.sh deleted file mode 100644 index 656a4d0da..000000000 --- a/tests/WE2E/test_configs/grids_extrn_mdls_suites_nco/config.nco_grid_RRFS_CONUS_25km_ics_HRRR_lbcs_RAP_suite_GFS_v15p2.sh +++ /dev/null @@ -1,25 +0,0 @@ -# -# TEST PURPOSE/DESCRIPTION: -# ------------------------ -# -# This test is to ensure that the workflow running in nco mode completes -# successfully on the RRFS_CONUS_25km grid using the FV3_GFS_v15p2 physics suite -# with ICs derived from the HRRR and LBCs derived from the RAP. -# - -RUN_ENVIR="nco" -PREEXISTING_DIR_METHOD="rename" - -PREDEF_GRID_NAME="RRFS_CONUS_25km" -CCPP_PHYS_SUITE="FV3_GFS_v15p2" - -EXTRN_MDL_NAME_ICS="HRRR" -EXTRN_MDL_NAME_LBCS="RAP" -USE_USER_STAGED_EXTRN_FILES="TRUE" - -DATE_FIRST_CYCL="20200208" -DATE_LAST_CYCL="20200208" -CYCL_HRS=( "00" ) - -FCST_LEN_HRS="6" -LBC_SPEC_INTVL_HRS="3" diff --git a/tests/WE2E/test_configs/grids_extrn_mdls_suites_nco/config.nco_grid_RRFS_CONUS_25km_ics_HRRR_lbcs_RAP_suite_HRRR.sh b/tests/WE2E/test_configs/grids_extrn_mdls_suites_nco/config.nco_grid_RRFS_CONUS_25km_ics_HRRR_lbcs_RAP_suite_HRRR.sh deleted file mode 100644 index 300dcd666..000000000 --- a/tests/WE2E/test_configs/grids_extrn_mdls_suites_nco/config.nco_grid_RRFS_CONUS_25km_ics_HRRR_lbcs_RAP_suite_HRRR.sh +++ /dev/null @@ -1,25 +0,0 @@ -# -# TEST PURPOSE/DESCRIPTION: -# ------------------------ -# -# This test is to ensure that the workflow running in nco mode completes -# successfully on the RRFS_CONUS_25km grid using the HRRR physics suite -# with ICs derived from the HRRR and LBCs derived from the RAP. 
-# - -RUN_ENVIR="nco" -PREEXISTING_DIR_METHOD="rename" - -PREDEF_GRID_NAME="RRFS_CONUS_25km" -CCPP_PHYS_SUITE="FV3_HRRR" - -EXTRN_MDL_NAME_ICS="HRRR" -EXTRN_MDL_NAME_LBCS="RAP" -USE_USER_STAGED_EXTRN_FILES="TRUE" - -DATE_FIRST_CYCL="20200208" -DATE_LAST_CYCL="20200208" -CYCL_HRS=( "00" ) - -FCST_LEN_HRS="6" -LBC_SPEC_INTVL_HRS="3" diff --git a/tests/WE2E/test_configs/grids_extrn_mdls_suites_nco/config.nco_grid_RRFS_CONUS_3km_ics_FV3GFS_lbcs_FV3GFS_suite_GFS_v15p2.sh b/tests/WE2E/test_configs/grids_extrn_mdls_suites_nco/config.nco_grid_RRFS_CONUS_3km_ics_FV3GFS_lbcs_FV3GFS_suite_GFS_v15_thompson_mynn_lam3km.sh similarity index 67% rename from tests/WE2E/test_configs/grids_extrn_mdls_suites_nco/config.nco_grid_RRFS_CONUS_3km_ics_FV3GFS_lbcs_FV3GFS_suite_GFS_v15p2.sh rename to tests/WE2E/test_configs/grids_extrn_mdls_suites_nco/config.nco_grid_RRFS_CONUS_3km_ics_FV3GFS_lbcs_FV3GFS_suite_GFS_v15_thompson_mynn_lam3km.sh index aac14a479..19b36f87f 100644 --- a/tests/WE2E/test_configs/grids_extrn_mdls_suites_nco/config.nco_grid_RRFS_CONUS_3km_ics_FV3GFS_lbcs_FV3GFS_suite_GFS_v15p2.sh +++ b/tests/WE2E/test_configs/grids_extrn_mdls_suites_nco/config.nco_grid_RRFS_CONUS_3km_ics_FV3GFS_lbcs_FV3GFS_suite_GFS_v15_thompson_mynn_lam3km.sh @@ -3,15 +3,17 @@ # ------------------------ # # This test is to ensure that the workflow running in nco mode completes -# successfully on the RRFS_CONUS_3km grid using the GFS_v15p2 physics -# suite with ICs and LBCs derived from the FV3GFS. +# successfully on the RRFS_CONUS_3km grid using the GFS_v15_thompson_mynn_lam3km +# physics suite with ICs and LBCs derived from the FV3GFS. # RUN_ENVIR="nco" PREEXISTING_DIR_METHOD="rename" PREDEF_GRID_NAME="RRFS_CONUS_3km" -CCPP_PHYS_SUITE="FV3_GFS_v15p2" +CCPP_PHYS_SUITE="FV3_GFS_v15_thompson_mynn_lam3km" + +USE_MERRA_CLIMO="TRUE" EXTRN_MDL_NAME_ICS="FV3GFS" FV3GFS_FILE_FMT_ICS="grib2" diff --git a/tests/WE2E/test_configs/grids_extrn_mdls_suites_nco/config.nco_grid_RRFS_CONUS_3km_ics_FV3GFS_lbcs_FV3GFS_suite_GFS_v16.sh b/tests/WE2E/test_configs/grids_extrn_mdls_suites_nco/config.nco_grid_RRFS_CONUS_3km_ics_FV3GFS_lbcs_FV3GFS_suite_GFS_v16.sh deleted file mode 100644 index 64c267636..000000000 --- a/tests/WE2E/test_configs/grids_extrn_mdls_suites_nco/config.nco_grid_RRFS_CONUS_3km_ics_FV3GFS_lbcs_FV3GFS_suite_GFS_v16.sh +++ /dev/null @@ -1,26 +0,0 @@ -# -# TEST PURPOSE/DESCRIPTION: -# ------------------------ -# -# This test is to ensure that the workflow running in nco mode completes -# successfully on the RRFS_CONUS_3km grid using the GFS_v16 physics -# suite with ICs and LBCs derived from the FV3GFS. 
-# - -RUN_ENVIR="nco" -PREEXISTING_DIR_METHOD="rename" - -PREDEF_GRID_NAME="RRFS_CONUS_3km" -CCPP_PHYS_SUITE="FV3_GFS_v16" - -EXTRN_MDL_NAME_ICS="FV3GFS" -FV3GFS_FILE_FMT_ICS="grib2" -EXTRN_MDL_NAME_LBCS="FV3GFS" -FV3GFS_FILE_FMT_LBCS="grib2" - -DATE_FIRST_CYCL="20190615" -DATE_LAST_CYCL="20190615" -CYCL_HRS=( "00" ) - -FCST_LEN_HRS="6" -LBC_SPEC_INTVL_HRS="3" diff --git a/tests/WE2E/test_configs/grids_extrn_mdls_suites_nco/config.nco_grid_RRFS_CONUScompact_25km_ics_HRRR_lbcs_RAP_suite_HRRR.sh b/tests/WE2E/test_configs/grids_extrn_mdls_suites_nco/config.nco_grid_RRFS_CONUScompact_25km_ics_HRRR_lbcs_RAP_suite_HRRR.sh new file mode 100644 index 000000000..e5c4e7732 --- /dev/null +++ b/tests/WE2E/test_configs/grids_extrn_mdls_suites_nco/config.nco_grid_RRFS_CONUScompact_25km_ics_HRRR_lbcs_RAP_suite_HRRR.sh @@ -0,0 +1,29 @@ +# +# TEST PURPOSE/DESCRIPTION: +# ------------------------ +# +# This test is to ensure that the workflow running in nco mode completes +# successfully on the RRFS_CONUScompact_25km grid using the HRRR physics +# suite with ICs derived from the HRRR and LBCs derived from the RAP. +# + +RUN_ENVIR="nco" +PREEXISTING_DIR_METHOD="rename" + +PREDEF_GRID_NAME="RRFS_CONUScompact_25km" +CCPP_PHYS_SUITE="FV3_HRRR" + +EXTRN_MDL_NAME_ICS="HRRR" +EXTRN_MDL_NAME_LBCS="RAP" +USE_USER_STAGED_EXTRN_FILES="TRUE" +EXTRN_MDL_FILES_ICS=( '{yy}{jjj}{hh}00{fcst_hr:02d}00' ) +EXTRN_MDL_FILES_LBCS=( '{yy}{jjj}{hh}00{fcst_hr:02d}00' ) + +DATE_FIRST_CYCL="20200810" +DATE_LAST_CYCL="20200810" +CYCL_HRS=( "00" ) + +FCST_LEN_HRS="6" +LBC_SPEC_INTVL_HRS="3" + +WRITE_DOPOST="TRUE" diff --git a/tests/WE2E/test_configs/wflow_features/config.new_ESGgrid.sh b/tests/WE2E/test_configs/wflow_features/config.custom_ESGgrid.sh similarity index 92% rename from tests/WE2E/test_configs/wflow_features/config.new_ESGgrid.sh rename to tests/WE2E/test_configs/wflow_features/config.custom_ESGgrid.sh index b56681f54..bb8d2bc0c 100644 --- a/tests/WE2E/test_configs/wflow_features/config.new_ESGgrid.sh +++ b/tests/WE2E/test_configs/wflow_features/config.custom_ESGgrid.sh @@ -2,7 +2,7 @@ # TEST PURPOSE/DESCRIPTION: # ------------------------ # -# This test checks the capability of the workflow to have the user +# This test checks the capability of the workflow to have the user # specify a new grid (as opposed to one of the predefined ones in the # workflow) of ESGgrid type. 
@@ -44,10 +44,12 @@ LAYOUT_X="8" LAYOUT_Y="12" BLOCKSIZE="13" +POST_OUTPUT_DOMAIN_NAME="custom_ESGgrid" + QUILTING="TRUE" if [ "$QUILTING" = "TRUE" ]; then WRTCMP_write_groups="1" - WRTCMP_write_tasks_per_group=$(( 1*LAYOUT_Y )) + WRTCMP_write_tasks_per_group=$(( 1*LAYOUT_Y )) WRTCMP_output_grid="lambert_conformal" WRTCMP_cen_lon="${ESGgrid_LON_CTR}" WRTCMP_cen_lat="${ESGgrid_LAT_CTR}" diff --git a/tests/WE2E/test_configs/wflow_features/config.new_GFDLgrid.sh b/tests/WE2E/test_configs/wflow_features/config.custom_GFDLgrid.sh similarity index 98% rename from tests/WE2E/test_configs/wflow_features/config.new_GFDLgrid.sh rename to tests/WE2E/test_configs/wflow_features/config.custom_GFDLgrid.sh index d27148de2..66604a549 100644 --- a/tests/WE2E/test_configs/wflow_features/config.new_GFDLgrid.sh +++ b/tests/WE2E/test_configs/wflow_features/config.custom_GFDLgrid.sh @@ -76,6 +76,8 @@ LAYOUT_X="6" LAYOUT_Y="6" BLOCKSIZE="26" +POST_OUTPUT_DOMAIN_NAME="custom_GFDLgrid" + QUILTING="TRUE" if [ "$QUILTING" = "TRUE" ]; then WRTCMP_write_groups="1" diff --git a/tests/WE2E/test_configs/wflow_features/config.new_GFDLgrid__GFDLgrid_USE_GFDLgrid_RES_IN_FILENAMES_eq_FALSE.sh b/tests/WE2E/test_configs/wflow_features/config.custom_GFDLgrid__GFDLgrid_USE_GFDLgrid_RES_IN_FILENAMES_eq_FALSE.sh similarity index 97% rename from tests/WE2E/test_configs/wflow_features/config.new_GFDLgrid__GFDLgrid_USE_GFDLgrid_RES_IN_FILENAMES_eq_FALSE.sh rename to tests/WE2E/test_configs/wflow_features/config.custom_GFDLgrid__GFDLgrid_USE_GFDLgrid_RES_IN_FILENAMES_eq_FALSE.sh index aef2636f2..4c4f0d5c7 100644 --- a/tests/WE2E/test_configs/wflow_features/config.new_GFDLgrid__GFDLgrid_USE_GFDLgrid_RES_IN_FILENAMES_eq_FALSE.sh +++ b/tests/WE2E/test_configs/wflow_features/config.custom_GFDLgrid__GFDLgrid_USE_GFDLgrid_RES_IN_FILENAMES_eq_FALSE.sh @@ -63,6 +63,8 @@ LAYOUT_X="6" LAYOUT_Y="6" BLOCKSIZE="26" +POST_OUTPUT_DOMAIN_NAME="custom_GFDLgrid" + QUILTING="TRUE" if [ "$QUILTING" = "TRUE" ]; then WRTCMP_write_groups="1" diff --git a/tests/WE2E/test_configs/wflow_features/config.new_GFDLgrid__GFDLgrid_USE_GFDLgrid_RES_IN_FILENAMES_eq_TRUE.sh b/tests/WE2E/test_configs/wflow_features/config.custom_GFDLgrid__GFDLgrid_USE_GFDLgrid_RES_IN_FILENAMES_eq_TRUE.sh similarity index 97% rename from tests/WE2E/test_configs/wflow_features/config.new_GFDLgrid__GFDLgrid_USE_GFDLgrid_RES_IN_FILENAMES_eq_TRUE.sh rename to tests/WE2E/test_configs/wflow_features/config.custom_GFDLgrid__GFDLgrid_USE_GFDLgrid_RES_IN_FILENAMES_eq_TRUE.sh index 59ca0925e..3e067ee3d 100644 --- a/tests/WE2E/test_configs/wflow_features/config.new_GFDLgrid__GFDLgrid_USE_GFDLgrid_RES_IN_FILENAMES_eq_TRUE.sh +++ b/tests/WE2E/test_configs/wflow_features/config.custom_GFDLgrid__GFDLgrid_USE_GFDLgrid_RES_IN_FILENAMES_eq_TRUE.sh @@ -63,6 +63,8 @@ LAYOUT_X="6" LAYOUT_Y="6" BLOCKSIZE="26" +POST_OUTPUT_DOMAIN_NAME="custom_GFDLgrid" + QUILTING="TRUE" if [ "$QUILTING" = "TRUE" ]; then WRTCMP_write_groups="1" diff --git a/tests/WE2E/test_configs/wflow_features/config.get_from_HPSS_ics_HRRR_lbcs_RAP.sh b/tests/WE2E/test_configs/wflow_features/config.get_from_HPSS_ics_HRRR_lbcs_RAP.sh index d535bc5d5..61d4e5c9e 100644 --- a/tests/WE2E/test_configs/wflow_features/config.get_from_HPSS_ics_HRRR_lbcs_RAP.sh +++ b/tests/WE2E/test_configs/wflow_features/config.get_from_HPSS_ics_HRRR_lbcs_RAP.sh @@ -12,11 +12,13 @@ RUN_ENVIR="community" PREEXISTING_DIR_METHOD="rename" -PREDEF_GRID_NAME="RRFS_CONUS_25km" +PREDEF_GRID_NAME="RRFS_CONUScompact_25km" CCPP_PHYS_SUITE="FV3_HRRR" EXTRN_MDL_NAME_ICS="HRRR" 
EXTRN_MDL_NAME_LBCS="RAP" +EXTRN_MDL_FILES_ICS=( '{yy}{jjj}{hh}00{fcst_hr:02d}00' ) +EXTRN_MDL_FILES_LBCS=( '{yy}{jjj}{hh}00{fcst_hr:02d}00' ) DATE_FIRST_CYCL="20200801" DATE_LAST_CYCL="20200801" diff --git a/tests/WE2E/test_configs/wflow_features/config.nco_ensemble.sh b/tests/WE2E/test_configs/wflow_features/config.nco_ensemble.sh index 3b9c93f98..0f43e9e1d 100644 --- a/tests/WE2E/test_configs/wflow_features/config.nco_ensemble.sh +++ b/tests/WE2E/test_configs/wflow_features/config.nco_ensemble.sh @@ -17,16 +17,16 @@ RUN_ENVIR="nco" PREEXISTING_DIR_METHOD="rename" -PREDEF_GRID_NAME="CONUS_25km_GFDLgrid" -CCPP_PHYS_SUITE="FV3_GFS_2017_gfdlmp_regional" +PREDEF_GRID_NAME="RRFS_CONUS_25km" +CCPP_PHYS_SUITE="FV3_GFS_v15p2" EXTRN_MDL_NAME_ICS="FV3GFS" EXTRN_MDL_NAME_LBCS="FV3GFS" USE_USER_STAGED_EXTRN_FILES="TRUE" -DATE_FIRST_CYCL="20190901" -DATE_LAST_CYCL="20190902" -CYCL_HRS=( "12" "18" ) +DATE_FIRST_CYCL="20190701" +DATE_LAST_CYCL="20190702" +CYCL_HRS=( "00" "12" ) FCST_LEN_HRS="6" LBC_SPEC_INTVL_HRS="3" diff --git a/tests/WE2E/test_configs/wflow_features/config.nco_inline_post.sh b/tests/WE2E/test_configs/wflow_features/config.nco_inline_post.sh deleted file mode 100644 index 003b08234..000000000 --- a/tests/WE2E/test_configs/wflow_features/config.nco_inline_post.sh +++ /dev/null @@ -1,28 +0,0 @@ -# -# TEST PURPOSE/DESCRIPTION: -# ------------------------ -# -# This test checks the capability of the workflow to use the inline -# post option (WRITE_DOPOST) in model_configure for 'nco' mode. -# - -RUN_ENVIR="nco" -PREEXISTING_DIR_METHOD="rename" - -PREDEF_GRID_NAME="RRFS_CONUS_25km" -WRITE_DOPOST="TRUE" - -CCPP_PHYS_SUITE="FV3_GFS_v15p2" - -EXTRN_MDL_NAME_ICS="FV3GFS" -EXTRN_MDL_NAME_LBCS="FV3GFS" -USE_USER_STAGED_EXTRN_FILES="TRUE" - -DATE_FIRST_CYCL="20190701" -DATE_LAST_CYCL="20190701" -CYCL_HRS=( "00" ) - -FCST_LEN_HRS="6" -LBC_SPEC_INTVL_HRS="3" - - diff --git a/tests/WE2E/test_configs/wflow_features/config.nco_inline_post.sh b/tests/WE2E/test_configs/wflow_features/config.nco_inline_post.sh new file mode 120000 index 000000000..392d3311a --- /dev/null +++ b/tests/WE2E/test_configs/wflow_features/config.nco_inline_post.sh @@ -0,0 +1 @@ +../grids_extrn_mdls_suites_nco/config.nco_grid_RRFS_CONUScompact_25km_ics_HRRR_lbcs_RAP_suite_HRRR.sh \ No newline at end of file diff --git a/tests/WE2E/test_configs/wflow_features/config.specify_DT_ATMOS_LAYOUT_XY_BLOCKSIZE.sh b/tests/WE2E/test_configs/wflow_features/config.specify_DT_ATMOS_LAYOUT_XY_BLOCKSIZE.sh index 18699dd51..7a10a0ec3 100644 --- a/tests/WE2E/test_configs/wflow_features/config.specify_DT_ATMOS_LAYOUT_XY_BLOCKSIZE.sh +++ b/tests/WE2E/test_configs/wflow_features/config.specify_DT_ATMOS_LAYOUT_XY_BLOCKSIZE.sh @@ -10,12 +10,14 @@ RUN_ENVIR="community" PREEXISTING_DIR_METHOD="rename" -PREDEF_GRID_NAME="RRFS_CONUS_25km" +PREDEF_GRID_NAME="RRFS_CONUScompact_25km" CCPP_PHYS_SUITE="FV3_HRRR" EXTRN_MDL_NAME_ICS="HRRR" EXTRN_MDL_NAME_LBCS="RAP" USE_USER_STAGED_EXTRN_FILES="TRUE" +EXTRN_MDL_FILES_ICS=( '{yy}{jjj}{hh}00{fcst_hr:02d}00' ) +EXTRN_MDL_FILES_LBCS=( '{yy}{jjj}{hh}00{fcst_hr:02d}00' ) DATE_FIRST_CYCL="20200801" DATE_LAST_CYCL="20200801" diff --git a/tests/WE2E/test_configs/wflow_features/config.specify_EXTRN_MDL_SYSBASEDIR_ICS_LBCS.sh b/tests/WE2E/test_configs/wflow_features/config.specify_EXTRN_MDL_SYSBASEDIR_ICS_LBCS.sh index 2ade79ff5..eca762a94 100644 --- a/tests/WE2E/test_configs/wflow_features/config.specify_EXTRN_MDL_SYSBASEDIR_ICS_LBCS.sh +++ 
b/tests/WE2E/test_configs/wflow_features/config.specify_EXTRN_MDL_SYSBASEDIR_ICS_LBCS.sh @@ -18,9 +18,9 @@ FV3GFS_FILE_FMT_ICS="grib2" EXTRN_MDL_NAME_LBCS="FV3GFS" FV3GFS_FILE_FMT_LBCS="grib2" -DATE_FIRST_CYCL="20210603" -DATE_LAST_CYCL="20210603" -CYCL_HRS=( "06" ) +DATE_FIRST_CYCL="20210615" +DATE_LAST_CYCL="20210615" +CYCL_HRS=( "00" ) FCST_LEN_HRS="6" LBC_SPEC_INTVL_HRS="3" diff --git a/tests/WE2E/test_configs/wflow_features/config.specify_template_filenames.sh b/tests/WE2E/test_configs/wflow_features/config.specify_template_filenames.sh new file mode 100644 index 000000000..81185425c --- /dev/null +++ b/tests/WE2E/test_configs/wflow_features/config.specify_template_filenames.sh @@ -0,0 +1,30 @@ +# +# TEST PURPOSE/DESCRIPTION: +# ------------------------ +# +# This test checks the capability of the workflow to use user-defined +# template files. +# + +RUN_ENVIR="community" +PREEXISTING_DIR_METHOD="rename" + +PREDEF_GRID_NAME="RRFS_CONUS_25km" +CCPP_PHYS_SUITE="FV3_GFS_v15p2" + +EXTRN_MDL_NAME_ICS="FV3GFS" +EXTRN_MDL_NAME_LBCS="FV3GFS" +USE_USER_STAGED_EXTRN_FILES="TRUE" + +DATE_FIRST_CYCL="20190701" +DATE_LAST_CYCL="20190701" +CYCL_HRS=( "00" ) + +FCST_LEN_HRS="6" +LBC_SPEC_INTVL_HRS="3" + +DATA_TABLE_TMPL_FN="data_table" +DIAG_TABLE_TMPL_FN="diag_table" +FIELD_TABLE_TMPL_FN="field_table" +MODEL_CONFIG_TMPL_FN="model_configure" +NEMS_CONFIG_TMPL_FN="nems.configure" diff --git a/tests/WE2E/test_configs/wflow_features/config.subhourly_post.sh b/tests/WE2E/test_configs/wflow_features/config.subhourly_post.sh index 3edce24df..a1676e4d5 100644 --- a/tests/WE2E/test_configs/wflow_features/config.subhourly_post.sh +++ b/tests/WE2E/test_configs/wflow_features/config.subhourly_post.sh @@ -9,12 +9,14 @@ RUN_ENVIR="community" PREEXISTING_DIR_METHOD="rename" -PREDEF_GRID_NAME="RRFS_CONUS_25km" +PREDEF_GRID_NAME="RRFS_CONUScompact_25km" CCPP_PHYS_SUITE="FV3_RRFS_v1beta" EXTRN_MDL_NAME_ICS="HRRR" EXTRN_MDL_NAME_LBCS="RAP" USE_USER_STAGED_EXTRN_FILES="TRUE" +EXTRN_MDL_FILES_ICS=( '{yy}{jjj}{hh}00{fcst_hr:02d}00' ) +EXTRN_MDL_FILES_LBCS=( '{yy}{jjj}{hh}00{fcst_hr:02d}00' ) DATE_FIRST_CYCL="20200810" DATE_LAST_CYCL="20200810" diff --git a/tests/WE2E/test_configs/wflow_features/config.subhourly_post_ensemble_2mems.sh b/tests/WE2E/test_configs/wflow_features/config.subhourly_post_ensemble_2mems.sh index 337e99740..adcc37a33 100644 --- a/tests/WE2E/test_configs/wflow_features/config.subhourly_post_ensemble_2mems.sh +++ b/tests/WE2E/test_configs/wflow_features/config.subhourly_post_ensemble_2mems.sh @@ -15,12 +15,14 @@ RUN_ENVIR="community" PREEXISTING_DIR_METHOD="rename" -PREDEF_GRID_NAME="RRFS_CONUS_25km" +PREDEF_GRID_NAME="RRFS_CONUScompact_25km" CCPP_PHYS_SUITE="FV3_RRFS_v1beta" EXTRN_MDL_NAME_ICS="HRRR" EXTRN_MDL_NAME_LBCS="RAP" USE_USER_STAGED_EXTRN_FILES="TRUE" +EXTRN_MDL_FILES_ICS=( '{yy}{jjj}{hh}00{fcst_hr:02d}00' ) +EXTRN_MDL_FILES_LBCS=( '{yy}{jjj}{hh}00{fcst_hr:02d}00' ) DATE_FIRST_CYCL="20200810" DATE_LAST_CYCL="20200810" diff --git a/ush/NOMADS_get_extrn_mdl_files.sh b/ush/NOMADS_get_extrn_mdl_files.sh index 308aa8082..789eeff3d 100755 --- a/ush/NOMADS_get_extrn_mdl_files.sh +++ b/ush/NOMADS_get_extrn_mdl_files.sh @@ -37,10 +37,10 @@ cd gfs.$yyyymmdd/$hh #getting online analysis data if [ $file_fmt == "grib2" ] || [ $file_fmt == "GRIB2" ]; then - wget -c https://nomads.ncep.noaa.gov/pub/data/nccf/com/gfs/prod/gfs.$yyyymmdd/$hh/gfs.t${hh}z.pgrb2.0p25.f000 + wget --tries=2 -c https://nomads.ncep.noaa.gov/pub/data/nccf/com/gfs/prod/gfs.$yyyymmdd/$hh/gfs.t${hh}z.pgrb2.0p25.f000 else - wget -c 
https://nomads.ncep.noaa.gov/pub/data/nccf/com/gfs/prod/gfs.$yyyymmdd/$hh/gfs.t${hh}z.atmanl.nemsio - wget -c https://nomads.ncep.noaa.gov/pub/data/nccf/com/gfs/prod/gfs.$yyyymmdd/$hh/gfs.t${hh}z.sfcanl.nemsio + wget --tries=2 -c https://nomads.ncep.noaa.gov/pub/data/nccf/com/gfs/prod/gfs.$yyyymmdd/$hh/gfs.t${hh}z.atmanl.nemsio + wget --tries=2 -c https://nomads.ncep.noaa.gov/pub/data/nccf/com/gfs/prod/gfs.$yyyymmdd/$hh/gfs.t${hh}z.sfcanl.nemsio fi #getting online forecast data @@ -60,10 +60,10 @@ echo $ifcst echo $ifcst_str # if [ $file_fmt == "grib2" ] || [ $file_fmt == "GRIB2" ]; then - wget -c https://nomads.ncep.noaa.gov/pub/data/nccf/com/gfs/prod/gfs.$yyyymmdd/$hh/gfs.t${hh}z.pgrb2.0p25.f${ifcst_str} + wget --tries=2 -c https://nomads.ncep.noaa.gov/pub/data/nccf/com/gfs/prod/gfs.$yyyymmdd/$hh/gfs.t${hh}z.pgrb2.0p25.f${ifcst_str} else - wget -c https://nomads.ncep.noaa.gov/pub/data/nccf/com/gfs/prod/gfs.$yyyymmdd/$hh/gfs.t${hh}z.atmf${ifcst_str}.nemsio - wget -c https://nomads.ncep.noaa.gov/pub/data/nccf/com/gfs/prod/gfs.$yyyymmdd/$hh/gfs.t${hh}z.sfcf${ifcst_str}.nemsio + wget --tries=2 -c https://nomads.ncep.noaa.gov/pub/data/nccf/com/gfs/prod/gfs.$yyyymmdd/$hh/gfs.t${hh}z.atmf${ifcst_str}.nemsio + wget --tries=2 -c https://nomads.ncep.noaa.gov/pub/data/nccf/com/gfs/prod/gfs.$yyyymmdd/$hh/gfs.t${hh}z.sfcf${ifcst_str}.nemsio fi # ifcst=$[$ifcst+$nfcst_int] diff --git a/ush/check_expt_config_vars.sh b/ush/check_expt_config_vars.sh index 53ce13a09..7476de779 100644 --- a/ush/check_expt_config_vars.sh +++ b/ush/check_expt_config_vars.sh @@ -63,7 +63,7 @@ function check_expt_config_vars() { # Note that a variable name will be found only if the equal sign immediately # follows the variable name. # - var_name=$( printf "%s" "${crnt_line}" | $SED -n -r -e "s/^([^ =\"]*)=.*/\1/p") + var_name=$( printf "%s" "${crnt_line}" | $SED -n -r -e "s/^([^ =\"]*)=.*/\1/p" ) if [ -z "${var_name}" ]; then diff --git a/ush/check_ruc_lsm.py b/ush/check_ruc_lsm.py new file mode 100644 index 000000000..fdf288f30 --- /dev/null +++ b/ush/check_ruc_lsm.py @@ -0,0 +1,30 @@ +#!/usr/bin/env python3 + +import os +import unittest + +from python_utils import set_env_var, print_input_args, \ + load_xml_file, has_tag_with_value + +def check_ruc_lsm(ccpp_phys_suite_fp): + """ This file defines a function that checks whether the RUC land surface + model (LSM) parameterization is being called by the selected physics suite. 
+ + Args: + ccpp_phys_suite_fp: full path to CCPP physics suite xml file + Returns: + Boolean + """ + + print_input_args(locals()) + + tree = load_xml_file(ccpp_phys_suite_fp) + has_ruc = has_tag_with_value(tree, "scheme", "lsm_ruc") + return has_ruc + +class Testing(unittest.TestCase): + def test_check_ruc_lsm(self): + self.assertTrue( check_ruc_lsm(ccpp_phys_suite_fp=f"test_data{os.sep}suite_FV3_GSD_SAR.xml") ) + def setUp(self): + set_env_var('DEBUG',True) + diff --git a/ush/config.community.sh b/ush/config.community.sh index 70433d597..ba8aa44da 100644 --- a/ush/config.community.sh +++ b/ush/config.community.sh @@ -14,7 +14,7 @@ QUILTING="TRUE" DO_ENSEMBLE="FALSE" NUM_ENS_MEMBERS="2" -CCPP_PHYS_SUITE="FV3_GFS_v15p2" +CCPP_PHYS_SUITE="FV3_GFS_v16" FCST_LEN_HRS="48" LBC_SPEC_INTVL_HRS="6" @@ -30,7 +30,7 @@ FV3GFS_FILE_FMT_LBCS="grib2" WTIME_RUN_FCST="01:00:00" -MODEL="FV3_GFS_v15p2_CONUS_25km" +MODEL="FV3_GFS_v16_CONUS_25km" METPLUS_PATH="path/to/METPlus" MET_INSTALL_DIR="path/to/MET" CCPA_OBS_DIR="/path/to/processed/CCPA/data" diff --git a/ush/config.nco.sh b/ush/config.nco.sh index 3b2c84be6..556aa6521 100644 --- a/ush/config.nco.sh +++ b/ush/config.nco.sh @@ -8,27 +8,44 @@ VERBOSE="TRUE" RUN_ENVIR="nco" PREEXISTING_DIR_METHOD="rename" -PREDEF_GRID_NAME="CONUS_25km_GFDLgrid" +USE_CRON_TO_RELAUNCH="TRUE" +CRON_RELAUNCH_INTVL_MNTS="3" + +PREDEF_GRID_NAME="RRFS_CONUS_25km" QUILTING="TRUE" -CCPP_PHYS_SUITE="FV3_GFS_v15p2" +CCPP_PHYS_SUITE="FV3_GFS_v16" -FCST_LEN_HRS="06" -LBC_SPEC_INTVL_HRS="6" +FCST_LEN_HRS="6" +LBC_SPEC_INTVL_HRS="3" -DATE_FIRST_CYCL="20190901" -DATE_LAST_CYCL="20190901" -CYCL_HRS=( "18" ) +DATE_FIRST_CYCL="20220407" +DATE_LAST_CYCL="20220407" +CYCL_HRS=( "00" ) EXTRN_MDL_NAME_ICS="FV3GFS" EXTRN_MDL_NAME_LBCS="FV3GFS" +FV3GFS_FILE_FMT_ICS="grib2" +FV3GFS_FILE_FMT_LBCS="grib2" + +WTIME_RUN_FCST="01:00:00" + +WRITE_DOPOST="TRUE" + +# +# Output directory: {NET}/{model_ver}/{RUN}.YYYYMMDD/ +# Output file name: {NET}.tHHz.[var_name].f###.{POST_OUTPUT_DOMAIN_NAME}.grib2 +# +POST_OUTPUT_DOMAIN_NAME="conus_25km" +NET="rrfs" +model_ver="v1.0" +RUN="rrfs_test" # # The following must be modified for different platforms and users. # -RUN="an_experiment" -COMINgfs="/scratch1/NCEPDEV/hwrf/noscrub/hafs-input/COMGFS" # Path to directory containing files from the external model (FV3GFS). -FIXLAM_NCO_BASEDIR="/scratch2/BMC/det/FV3LAM_pregen" # Path to directory containing the pregenerated grid, orography, and surface climatology "fixed" files to use for the experiment. -STMP="/scratch2/BMC/det/Gerard.Ketefian/UFS_CAM/NCO_dirs/stmp" # Path to directory STMP that mostly contains input files. -PTMP="/scratch2/BMC/det/Gerard.Ketefian/UFS_CAM/NCO_dirs/ptmp" # Path to directory PTMP in which the experiment's output files will be placed. +COMIN="/scratch1/NCEPDEV/rstprod/com/gfs/prod" # Path to directory containing files from the external model. +DOMAIN_PREGEN_BASEDIR="/scratch2/BMC/det/UFS_SRW_App/develop/FV3LAM_pregen" # Path to directory containing the pregenerated grid, orography, and surface climatology "fixed" files to use for the experiment. +STMP="/path/to/stmp/directory" # Path to directory STMP that mostly contains input files. +PTMP="/path/to/ptmp/directory" # Path to directory PTMP in which the experiment's output files will be placed. 
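To make the new NCO-mode settings above concrete, here is a small, illustrative sketch (not part of the patch) of how NET, model_ver, RUN, and POST_OUTPUT_DOMAIN_NAME combine into the output directory and post file name described in the comment block of config.nco.sh; the product name "prslev" is an assumed stand-in for [var_name]:

```
# Illustrative only; values taken from the sample config.nco.sh above.
NET="rrfs"; model_ver="v1.0"; RUN="rrfs_test"
POST_OUTPUT_DOMAIN_NAME="conus_25km"
yyyymmdd="20220407"; hh="00"; fhr="006"
echo "${NET}/${model_ver}/${RUN}.${yyyymmdd}/"                        # output directory
echo "${NET}.t${hh}z.prslev.f${fhr}.${POST_OUTPUT_DOMAIN_NAME}.grib2" # example output file
```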
diff --git a/ush/config_defaults.sh b/ush/config_defaults.sh index 4c7b4d7e8..d304d203a 100644 --- a/ush/config_defaults.sh +++ b/ush/config_defaults.sh @@ -17,8 +17,8 @@ # # NCEP Central Operations # WCOSS Implementation Standards -# April 17, 2019 -# Version 10.2.0 +# January 19, 2022 +# Version 11.0.0 # # RUN_ENVIR is described in this document as follows: # @@ -73,12 +73,12 @@ RUN_ENVIR="nco" # Path to the LMOD sh file on your Linux system. Is set automatically # for supported machines. # -# BUILD_ENV_FN: -# Name of alternative build environment file to use if using an +# BUILD_MOD_FN: +# Name of alternative build module file to use if using an # unsupported platform. Is set automatically for supported machines. # -# WFLOW_ENV_FN: -# Name of alternative workflow environment file to use if using an +# WFLOW_MOD_FN: +# Name of alternative workflow module file to use if using an # unsupported platform. Is set automatically for supported machines. # # SCHED: @@ -142,8 +142,8 @@ ACCOUNT="project_name" WORKFLOW_MANAGER="none" NCORES_PER_NODE="" LMOD_PATH="" -BUILD_ENV_FN="" -WFLOW_ENV_FN="" +BUILD_MOD_FN="" +WFLOW_MOD_FN="" SCHED="" PARTITION_DEFAULT="" QUEUE_DEFAULT="" @@ -232,31 +232,29 @@ EXEC_SUBDIR="bin" # Set variables that are only used in NCO mode (i.e. when RUN_ENVIR is # set to "nco"). Definitions: # -# COMINgfs: -# The beginning portion of the directory containing files generated by -# the external model (FV3GFS) that the initial and lateral boundary -# condition generation tasks need in order to create initial and boundary -# condition files for a given cycle on the native FV3-LAM grid. For a -# cycle that starts on the date specified by the variable yyyymmdd -# (consisting of the 4-digit year followed by the 2-digit month followed -# by the 2-digit day of the month) and hour specified by the variable hh -# (consisting of the 2-digit hour-of-day), the directory in which the -# workflow will look for the external model files is: +# COMIN: +# Directory containing files generated by the external model (FV3GFS, NAM, +# HRRR, etc) that the initial and lateral boundary condition generation tasks +# need in order to create initial and boundary condition files for a given +# cycle on the native FV3-LAM grid. # -# $COMINgfs/gfs.$yyyymmdd/$hh/atmos +# envir, NET, model_ver, RUN: +# Standard environment variables defined in the NCEP Central Operations WCOSS +# Implementation Standards document as follows: # -# FIXLAM_NCO_BASEDIR: -# The base directory containing pregenerated grid, orography, and surface -# climatology files. For the pregenerated grid specified by PREDEF_GRID_NAME, -# these "fixed" files are located in: +# envir: +# Set to "test" during the initial testing phase, "para" when running +# in parallel (on a schedule), and "prod" in production. # -# ${FIXLAM_NCO_BASEDIR}/${PREDEF_GRID_NAME} +# NET: +# Model name (first level of com directory structure) # -# The workflow scripts will create a symlink in the experiment directory -# that will point to a subdirectory (having the name of the grid being -# used) under this directory. This variable should be set to a null -# string in this file, but it can be specified in the user-specified -# workflow configuration file (EXPT_CONFIG_FN). +# model_ver: +# Version number of package in three digits (second level of com directory) +# +# RUN: +# Name of model run (third level of com directory structure). 
+# In general, same as $NET # # STMP: # The beginning portion of the directory that will contain cycle-dependent @@ -269,22 +267,6 @@ EXEC_SUBDIR="bin" # # $STMP/tmpnwprd/$RUN/$cdate # -# NET, envir, RUN: -# Variables used in forming the path to the directory that will contain -# the output files from the post-processor (UPP) for a given cycle (see -# definition of PTMP below). These are defined in the WCOSS Implementation -# Standards document as follows: -# -# NET: -# Model name (first level of com directory structure) -# -# envir: -# Set to "test" during the initial testing phase, "para" when running -# in parallel (on a schedule), and "prod" in production. -# -# RUN: -# Name of model run (third level of com directory structure). -# # PTMP: # The beginning portion of the directory that will contain the output # files from the post-processor (UPP) for a given cycle. For a cycle @@ -292,16 +274,17 @@ EXEC_SUBDIR="bin" # (where yyyymmdd and hh are as described above), the directory in which # the UPP output files will be placed will be: # -# $PTMP/com/$NET/$envir/$RUN.$yyyymmdd/$hh +# $PTMP/com/$NET/$model_ver/$RUN.$yyyymmdd/$hh # #----------------------------------------------------------------------- # -COMINgfs="/base/path/of/directory/containing/gfs/input/files" -FIXLAM_NCO_BASEDIR="" +COMIN="/path/of/directory/containing/data/files/for/IC/LBCS" STMP="/base/path/of/directory/containing/model/input/and/raw/output/files" -NET="rrfs" envir="para" -RUN="experiment_name" +NET="rrfs" +model_ver="v1.0.0" +RUN="rrfs" +STMP="/base/path/of/directory/containing/model/input/and/raw/output/files" PTMP="/base/path/of/directory/containing/postprocessed/output/files" # #----------------------------------------------------------------------- @@ -343,28 +326,34 @@ DOT_OR_USCORE="_" # from which the namelist files for each of the enesemble members are # generated. # -# DIAG_TABLE_FN: -# Name of file that specifies the fields that the forecast model will -# output. -# -# FIELD_TABLE_FN: -# Name of file that specifies the tracers that the forecast model will -# read in from the IC/LBC files. -# -# DATA_TABLE_FN: -# Name of file that specifies ??? -# -# MODEL_CONFIG_FN: -# Name of file that specifies ??? -# -# NEMS_CONFIG_FN: -# Name of file that specifies ??? -# # FV3_EXEC_FN: # Name to use for the forecast model executable when it is copied from # the directory in which it is created in the build step to the executables # directory (EXECDIR; this is set during experiment generation). # +# DIAG_TABLE_TMPL_FN: +# Name of a template file that specifies the output fields of the forecast +# model (ufs-weather-model: diag_table) followed by [dot_ccpp_phys_suite]. +# Its default value is the name of the file that the ufs weather model +# expects to read in. +# +# FIELD_TABLE_TMPL_FN: +# Name of a template file that specifies the tracers in IC/LBC files of the +# forecast model (ufs-weather-mode: field_table) followed by [dot_ccpp_phys_suite]. +# Its default value is the name of the file that the ufs weather model expects +# to read in. +# +# MODEL_CONFIG_TMPL_FN: +# Name of a template file that contains settings and configurations for the +# NUOPC/ESMF main component (ufs-weather-model: model_config). Its default +# value is the name of the file that the ufs weather model expects to read in. +# +# NEMS_CONFIG_TMPL_FN: +# Name of a template file that contains information about the various NEMS +# components and their run sequence (ufs-weather-model: nems.configure). 
+# Its default value is the name of the file that the ufs weather model expects +# to read in. +# # FCST_MODEL: # Name of forecast model (default=ufs-weather-model) # @@ -381,19 +370,12 @@ DOT_OR_USCORE="_" # to each workflow task) in order to make all the experiment variables # available in those scripts. # -# EXTRN_MDL_ICS_VAR_DEFNS_FN: -# Name of file (a shell script) containing the defintions of variables -# associated with the external model from which ICs are generated. This -# file is created by the GET_EXTRN_ICS_TN task because the values of the -# variables it contains are not known before this task runs. The file is -# then sourced by the MAKE_ICS_TN task. -# -# EXTRN_MDL_LBCS_VAR_DEFNS_FN: -# Name of file (a shell script) containing the defintions of variables -# associated with the external model from which LBCs are generated. This -# file is created by the GET_EXTRN_LBCS_TN task because the values of the -# variables it contains are not known before this task runs. The file is -# then sourced by the MAKE_ICS_TN task. +# EXTRN_MDL_VAR_DEFNS_FN: +# Name of file (a shell script) containing the defintions of variables +# associated with the external model from which ICs or LBCs are generated. This +# file is created by the GET_EXTRN_*_TN task because the values of the variables +# it contains are not known before this task runs. The file is then sourced by +# the MAKE_ICS_TN and MAKE_LBCS_TN tasks. # # WFLOW_LAUNCH_SCRIPT_FN: # Name of the script that can be used to (re)launch the experiment's rocoto @@ -409,27 +391,38 @@ EXPT_CONFIG_FN="config.sh" RGNL_GRID_NML_FN="regional_grid.nml" -DATA_TABLE_FN="data_table" -DIAG_TABLE_FN="diag_table" -FIELD_TABLE_FN="field_table" FV3_NML_BASE_SUITE_FN="input.nml.FV3" FV3_NML_YAML_CONFIG_FN="FV3.input.yml" FV3_NML_BASE_ENS_FN="input.nml.base_ens" -MODEL_CONFIG_FN="model_configure" -NEMS_CONFIG_FN="nems.configure" FV3_EXEC_FN="ufs_model" -FCST_MODEL="ufs-weather-model" +DATA_TABLE_TMPL_FN="" +DIAG_TABLE_TMPL_FN="" +FIELD_TABLE_TMPL_FN="" +MODEL_CONFIG_TMPL_FN="" +NEMS_CONFIG_TMPL_FN="" +FCST_MODEL="ufs-weather-model" WFLOW_XML_FN="FV3LAM_wflow.xml" GLOBAL_VAR_DEFNS_FN="var_defns.sh" -EXTRN_MDL_ICS_VAR_DEFNS_FN="extrn_mdl_ics_var_defns.sh" -EXTRN_MDL_LBCS_VAR_DEFNS_FN="extrn_mdl_lbcs_var_defns.sh" +EXTRN_MDL_VAR_DEFNS_FN="extrn_mdl_var_defns.sh" WFLOW_LAUNCH_SCRIPT_FN="launch_FV3LAM_wflow.sh" WFLOW_LAUNCH_LOG_FN="log.launch_FV3LAM_wflow" # #----------------------------------------------------------------------- # +# Set output file name. Definitions: +# +# POST_OUTPUT_DOMAIN_NAME: +# Domain name used in naming the output files of run_post by UPP or inline post. +# Output file name: $NET.tHHz.[var_name].f###.$POST_OUTPUT_DOMAIN_NAME.grib2 +# +#----------------------------------------------------------------------- +# +POST_OUTPUT_DOMAIN_NAME="" +# +#----------------------------------------------------------------------- +# # Set forecast parameters. Definitions: # # DATE_FIRST_CYCL: @@ -698,9 +691,18 @@ EXTRN_MDL_SYSBASEDIR_LBCS='' # set to "FALSE". # # EXTRN_MDL_FILES_ICS: -# Array containing the names of the files to search for in the directory -# specified by EXTRN_MDL_SOURCE_BASEDIR_ICS. This variable is not used -# if USE_USER_STAGED_EXTRN_FILES is set to "FALSE". +# Array containing templates of the names of the files to search for in +# the directory specified by EXTRN_MDL_SOURCE_BASEDIR_ICS. This +# variable is not used if USE_USER_STAGED_EXTRN_FILES is set to "FALSE". 
+# A single template should be used for each model file type that is +# meant to be used. You may use any of the Python-style templates +# allowed in the ush/retrieve_data.py script. To see the full list of +# supported templates, run that script with a -h option. Here is an example of +# setting FV3GFS nemsio input files: +# EXTRN_MDL_FILES_ICS=( gfs.t{hh}z.atmf{fcst_hr:03d}.nemsio \ +# gfs.t{hh}z.sfcf{fcst_hr:03d}.nemsio ) +# Or for FV3GFS grib files: +# EXTRN_MDL_FILES_ICS=( gfs.t{hh}z.pgrb2.0p25.f{fcst_hr:03d} ) # # EXTRN_MDL_SOURCE_BASEDIR_LBCS: # Analogous to EXTRN_MDL_SOURCE_BASEDIR_ICS but for LBCs instead of ICs. @@ -708,13 +710,21 @@ EXTRN_MDL_SYSBASEDIR_LBCS='' # EXTRN_MDL_FILES_LBCS: # Analogous to EXTRN_MDL_FILES_ICS but for LBCs instead of ICs. # +# EXTRN_MDL_DATA_STORES: +# A list of data stores where the scripts should look for external model +# data. The list is in priority order. If disk information is provided +# via USE_USER_STAGED_EXTRN_FILES or a known location on the platform, +# the disk location will be highest priority. Options are disk, hpss, +# aws, and nomads. +# #----------------------------------------------------------------------- # USE_USER_STAGED_EXTRN_FILES="FALSE" -EXTRN_MDL_SOURCE_BASEDIR_ICS="/base/dir/containing/user/staged/extrn/mdl/files/for/ICs" -EXTRN_MDL_FILES_ICS=( "ICS_file1" "ICS_file2" "..." ) -EXTRN_MDL_SOURCE_BASEDIR_LBCS="/base/dir/containing/user/staged/extrn/mdl/files/for/LBCs" -EXTRN_MDL_FILES_LBCS=( "LBCS_file1" "LBCS_file2" "..." ) +EXTRN_MDL_SOURCE_BASEDIR_ICS="" +EXTRN_MDL_FILES_ICS="" +EXTRN_MDL_SOURCE_BASEDIR_LBCS="" +EXTRN_MDL_FILES_LBCS="" +EXTRN_MDL_DATA_STORES="" # #----------------------------------------------------------------------- # @@ -744,7 +754,7 @@ NOMADS_file_type="nemsio" # #----------------------------------------------------------------------- # -CCPP_PHYS_SUITE="FV3_GFS_v15p2" +CCPP_PHYS_SUITE="FV3_GFS_v16" # #----------------------------------------------------------------------- # @@ -1302,6 +1312,22 @@ VX_ENSPOINT_PROB_TN="run_enspointvx_prob" # SFC_CLIMO_DIR: # Same as GRID_DIR but for the MAKE_SFC_CLIMO_TN task. # +# DOMAIN_PREGEN_BASEDIR: +# The base directory containing pregenerated grid, orography, and surface +# climatology files. This is an alternative for setting GRID_DIR, +# OROG_DIR, and SFC_CLIMO_DIR individually +# +# For the pregenerated grid specified by PREDEF_GRID_NAME, +# these "fixed" files are located in: +# +# ${DOMAIN_PREGEN_BASEDIR}/${PREDEF_GRID_NAME} +# +# The workflow scripts will create a symlink in the experiment directory +# that will point to a subdirectory (having the name of the grid being +# used) under this directory. This variable should be set to a null +# string in this file, but it can be specified in the user-specified +# workflow configuration file (EXPT_CONFIG_FN). +# # RUN_TASK_GET_EXTRN_ICS: # Flag that determines whether the GET_EXTRN_ICS_TN task is to be run. 
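Since the EXTRN_MDL_FILES_* and EXTRN_MDL_DATA_STORES documentation above is new, a short sketch of a user configuration that puts the pieces together may help. The paths are placeholders, and the space-separated form of EXTRN_MDL_DATA_STORES is an assumption (run retrieve_data.py with -h for the authoritative options):

```
# Hypothetical user config excerpt (paths are placeholders).
USE_USER_STAGED_EXTRN_FILES="TRUE"
EXTRN_MDL_SOURCE_BASEDIR_ICS="/path/to/staged/FV3GFS"
EXTRN_MDL_FILES_ICS=( 'gfs.t{hh}z.pgrb2.0p25.f{fcst_hr:03d}' )
EXTRN_MDL_SOURCE_BASEDIR_LBCS="/path/to/staged/FV3GFS"
EXTRN_MDL_FILES_LBCS=( 'gfs.t{hh}z.pgrb2.0p25.f{fcst_hr:03d}' )
# Fall back to HPSS and then AWS if the staged files are not found on disk.
EXTRN_MDL_DATA_STORES="disk hpss aws"
```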
# @@ -1348,6 +1374,8 @@ OROG_DIR="/path/to/pregenerated/orog/files" RUN_TASK_MAKE_SFC_CLIMO="TRUE" SFC_CLIMO_DIR="/path/to/pregenerated/surface/climo/files" +DOMAIN_PREGEN_BASEDIR="" + RUN_TASK_GET_EXTRN_ICS="TRUE" RUN_TASK_GET_EXTRN_LBCS="TRUE" RUN_TASK_MAKE_ICS="TRUE" @@ -1504,6 +1532,7 @@ FIXgsm_FILES_TO_COPY_TO_FIXam=( \ "global_hyblev.l65.txt" \ "global_zorclim.1x1.grb" \ "global_sfc_emissivity_idx.txt" \ +"global_tg3clim.2.6x1.5.grb" \ "global_solarconstant_noaa_an.txt" \ "global_albedo4.1x1.grb" \ "geo_em.d01.lat-lon.2.5m.HGT_M.nc" \ @@ -1558,6 +1587,7 @@ CYCLEDIR_LINKS_TO_FIXam_FILES_MAPPING=( \ "global_h2oprdlos.f77 | global_h2o_pltc.f77" \ "global_albedo4.1x1.grb | global_albedo4.1x1.grb" \ "global_zorclim.1x1.grb | global_zorclim.1x1.grb" \ +"global_tg3clim.2.6x1.5.grb | global_tg3clim.2.6x1.5.grb" \ "sfc_emissivity_idx.txt | global_sfc_emissivity_idx.txt" \ "solarconstant_noaa_an.txt | global_solarconstant_noaa_an.txt" \ "global_o3prdlos.f77 | " \ @@ -1644,9 +1674,9 @@ WTIME_VX_ENSPOINT_PROB="01:00:00" # # Maximum number of attempts. # -MAXTRIES_MAKE_GRID="1" -MAXTRIES_MAKE_OROG="1" -MAXTRIES_MAKE_SFC_CLIMO="1" +MAXTRIES_MAKE_GRID="2" +MAXTRIES_MAKE_OROG="2" +MAXTRIES_MAKE_SFC_CLIMO="2" MAXTRIES_GET_EXTRN_ICS="1" MAXTRIES_GET_EXTRN_LBCS="1" MAXTRIES_MAKE_ICS="1" @@ -1682,6 +1712,14 @@ MAXTRIES_VX_ENSGRID_PROB_RETOP="1" MAXTRIES_VX_ENSPOINT="1" MAXTRIES_VX_ENSPOINT_MEAN="1" MAXTRIES_VX_ENSPOINT_PROB="1" + +# +#----------------------------------------------------------------------- +# +# Allows an extra parameter to be passed to slurm via XML Native +# command +# +SLURM_NATIVE_CMD="" # #----------------------------------------------------------------------- # @@ -1793,18 +1831,25 @@ NUM_ENS_MEMBERS="1" DO_SHUM="FALSE" DO_SPPT="FALSE" DO_SKEB="FALSE" +ISEED_SPPT="1" +ISEED_SHUM="2" +ISEED_SKEB="3" +NEW_LSCALE="TRUE" SHUM_MAG="0.006" #Variable "shum" in input.nml SHUM_LSCALE="150000" SHUM_TSCALE="21600" #Variable "shum_tau" in input.nml SHUM_INT="3600" #Variable "shumint" in input.nml SPPT_MAG="0.7" #Variable "sppt" in input.nml +SPPT_LOGIT="TRUE" SPPT_LSCALE="150000" SPPT_TSCALE="21600" #Variable "sppt_tau" in input.nml SPPT_INT="3600" #Variable "spptint" in input.nml +SPPT_SFCLIMIT="TRUE" SKEB_MAG="0.5" #Variable "skeb" in input.nml SKEB_LSCALE="150000" SKEB_TSCALE="21600" #Variable "skeb_tau" in input.nml SKEB_INT="3600" #Variable "skebint" in input.nml +SKEBNORM="1" SKEB_VDOF="10" USE_ZMTNBLCK="FALSE" # @@ -1833,7 +1878,7 @@ SPP_TSCALE=( "21600.0" "21600.0" "21600.0" "21600.0" "21600.0" ) #Variable "spp_ SPP_SIGTOP1=( "0.1" "0.1" "0.1" "0.1" "0.1") SPP_SIGTOP2=( "0.025" "0.025" "0.025" "0.025" "0.025" ) SPP_STDDEV_CUTOFF=( "1.5" "1.5" "2.5" "1.5" "1.5" ) -ISEED_SPP=( "4" "4" "4" "4" "4" ) +ISEED_SPP=( "4" "5" "6" "7" "8" ) # #----------------------------------------------------------------------- # diff --git a/ush/config_defaults.yaml b/ush/config_defaults.yaml new file mode 100644 index 000000000..56b89bb11 --- /dev/null +++ b/ush/config_defaults.yaml @@ -0,0 +1,2022 @@ +# +#----------------------------------------------------------------------- +# +# This file sets the experiment's configuration variables (which are +# global shell variables) to their default values. For many of these +# variables, the valid values that they may take on are defined in the +# file $USHDIR/valid_param_vals.sh. 
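One of the smaller additions above, SLURM_NATIVE_CMD, is described only in a comment; a hedged example of how it might be set (the value shown is hypothetical, and the variable is simply passed through to Slurm via the workflow XML's native command field):

```
# Hypothetical setting: pass an extra sbatch option, e.g. a reservation,
# through the rocoto native command entry.
SLURM_NATIVE_CMD="--reservation=debug"
```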
+#
+#-----------------------------------------------------------------------
+#
+
+#
+#-----------------------------------------------------------------------
+#
+# Set the RUN_ENVIR variable that is listed and described in the WCOSS
+# Implementation Standards document:
+#
+#   NCEP Central Operations
+#   WCOSS Implementation Standards
+#   April 19, 2022
+#   Version 11.0.0
+#
+# RUN_ENVIR is described in this document as follows:
+#
+#   Set to "nco" if running in NCO's production environment. Used to
+#   distinguish between organizations.
+#
+# Valid values are "nco" and "community". Here, we use it to generate
+# and run the experiment either in NCO mode (if RUN_ENVIR is set to "nco")
+# or in community mode (if RUN_ENVIR is set to "community"). This has
+# implications for the experiment variables that need to be set and for
+# the directory structure used.
+#
+#-----------------------------------------------------------------------
+#
+RUN_ENVIR: "nco"
+#
+#-----------------------------------------------------------------------
+#
+# mach_doc_start
+# Set machine and queue parameters. Definitions:
+#
+# MACHINE:
+# Machine on which the workflow will run. If you are NOT on a named,
+# supported platform, and you want to use the Rocoto workflow manager,
+# you will need to set MACHINE: "linux" and WORKFLOW_MANAGER: "rocoto".
+# This combination will assume a Slurm batch manager when generating the
+# XML. Please see ush/valid_param_vals.sh for a full list of supported
+# platforms.
+#
+# MACHINE_FILE:
+# Path to a configuration file with machine-specific settings. If none
+# is provided, setup.sh will attempt to set the path for a supported
+# platform.
+#
+# ACCOUNT:
+# The account under which to submit jobs to the queue.
+#
+# WORKFLOW_MANAGER:
+# The workflow manager to use (e.g. rocoto). This is set to "none" by
+# default, but if the machine name is set to a platform that supports
+# rocoto, this will be overwritten and set to "rocoto". If set
+# explicitly to rocoto along with the use of the MACHINE=linux target,
+# the configuration layer assumes a Slurm batch manager when generating
+# the XML. Valid options: "rocoto" or "none".
+#
+# NCORES_PER_NODE:
+# The number of cores available per node on the compute platform. Set
+# for supported platforms in setup.sh, but is now also configurable for
+# all platforms.
+#
+# LMOD_PATH:
+# Path to the LMOD sh file on your Linux system. Is set automatically
+# for supported machines.
+#
+# BUILD_MOD_FN:
+# Name of alternative build module file to use if using an
+# unsupported platform. Is set automatically for supported machines.
+#
+# WFLOW_MOD_FN:
+# Name of alternative workflow module file to use if using an
+# unsupported platform. Is set automatically for supported machines.
+#
+# SCHED:
+# The job scheduler to use (e.g. slurm). Set this to an empty string in
+# order for the experiment generation script to set it depending on the
+# machine.
+#
+# PARTITION_DEFAULT:
+# If using the slurm job scheduler (i.e. if SCHED is set to "slurm"),
+# the default partition to which to submit workflow tasks. If a task
+# does not have a specific variable that specifies the partition to which
+# it will be submitted (e.g. PARTITION_HPSS, PARTITION_FCST; see below),
+# it will be submitted to the partition specified by this variable. If
+# this is not set or is set to an empty string, it will be (re)set to a
+# machine-dependent value. This is not used if SCHED is not set to
+# "slurm".
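+#
+# As a purely illustrative example (the account and partition names below
+# are hypothetical), a user config.sh for a generic Linux cluster running
+# Slurm might contain:
+#
+#   MACHINE="linux"
+#   WORKFLOW_MANAGER="rocoto"
+#   SCHED="slurm"
+#   ACCOUNT="my_account"
+#   PARTITION_DEFAULT="compute"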
+# +# QUEUE_DEFAULT: +# The default queue or QOS (if using the slurm job scheduler, where QOS +# is Quality of Service) to which workflow tasks are submitted. If a +# task does not have a specific variable that specifies the queue to which +# it will be submitted (e.g. QUEUE_HPSS, QUEUE_FCST; see below), it will +# be submitted to the queue specified by this variable. If this is not +# set or is set to an empty string, it will be (re)set to a machine- +# dependent value. +# +# PARTITION_HPSS: +# If using the slurm job scheduler (i.e. if SCHED is set to "slurm"), +# the partition to which the tasks that get or create links to external +# model files [which are needed to generate initial conditions (ICs) and +# lateral boundary conditions (LBCs)] are submitted. If this is not set +# or is set to an empty string, it will be (re)set to a machine-dependent +# value. This is not used if SCHED is not set to "slurm". +# +# QUEUE_HPSS: +# The queue or QOS to which the tasks that get or create links to external +# model files [which are needed to generate initial conditions (ICs) and +# lateral boundary conditions (LBCs)] are submitted. If this is not set +# or is set to an empty string, it will be (re)set to a machine-dependent +# value. +# +# PARTITION_FCST: +# If using the slurm job scheduler (i.e. if SCHED is set to "slurm"), +# the partition to which the task that runs forecasts is submitted. If +# this is not set or set to an empty string, it will be (re)set to a +# machine-dependent value. This is not used if SCHED is not set to +# "slurm". +# +# QUEUE_FCST: +# The queue or QOS to which the task that runs a forecast is submitted. +# If this is not set or set to an empty string, it will be (re)set to a +# machine-dependent value. +# +# mach_doc_end +# +#----------------------------------------------------------------------- +# +MACHINE: "BIG_COMPUTER" +MACHINE_FILE: "" +ACCOUNT: "project_name" +WORKFLOW_MANAGER: "none" +NCORES_PER_NODE: "" +LMOD_PATH: "" +BUILD_MOD_FN: "" +WFLOW_MOD_FN: "" +SCHED: "" +PARTITION_DEFAULT: "" +QUEUE_DEFAULT: "" +PARTITION_HPSS: "" +QUEUE_HPSS: "" +PARTITION_FCST: "" +QUEUE_FCST: "" +# +#----------------------------------------------------------------------- +# +# Set run commands for platforms without a workflow manager. These values +# will be ignored unless WORKFLOW_MANAGER: "none". Definitions: +# +# RUN_CMD_UTILS: +# The run command for pre-processing utilities (shave, orog, sfc_climo_gen, +# etc.) Can be left blank for smaller domains, in which case the executables +# will run without MPI. +# +# RUN_CMD_FCST: +# The run command for the model forecast step. This will be appended to +# the end of the variable definitions file, so it can reference other +# variables. +# +# RUN_CMD_POST: +# The run command for post-processing (UPP). Can be left blank for smaller +# domains, in which case UPP will run without MPI. +# +#----------------------------------------------------------------------- +# +RUN_CMD_UTILS: "mpirun -np 1" +RUN_CMD_FCST: "mpirun -np ${PE_MEMBER01}" +RUN_CMD_POST: "mpirun -np 1" +# +#----------------------------------------------------------------------- +# +# Set cron-associated parameters. Definitions: +# +# USE_CRON_TO_RELAUNCH: +# Flag that determines whether or not to add a line to the user's cron +# table to call the experiment launch script every CRON_RELAUNCH_INTVL_MNTS +# minutes. 
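+#
+# For illustration only (the experiment directory below is a placeholder,
+# and the exact crontab entry is written by the experiment generation
+# step), setting in the user config.sh
+#
+#   USE_CRON_TO_RELAUNCH="TRUE"
+#   CRON_RELAUNCH_INTVL_MNTS="03"
+#
+# is intended to result in a crontab entry conceptually similar to
+#
+#   */3 * * * * cd /path/to/expt_dir && ./launch_FV3LAM_wflow.sh
+#
+# which reruns the launch script (WFLOW_LAUNCH_SCRIPT_FN) every 3 minutes.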
+# +# CRON_RELAUNCH_INTVL_MNTS: +# The interval (in minutes) between successive calls of the experiment +# launch script by a cron job to (re)launch the experiment (so that the +# workflow for the experiment kicks off where it left off). +# +#----------------------------------------------------------------------- +# +USE_CRON_TO_RELAUNCH: "FALSE" +CRON_RELAUNCH_INTVL_MNTS: "03" +# +#----------------------------------------------------------------------- +# +# dir_doc_start +# Set directories. Definitions: +# +# EXPT_BASEDIR: +# The base directory in which the experiment directory will be created. +# If this is not specified or if it is set to an empty string, it will +# default to ${HOMErrfs}/../expt_dirs. +# +# EXPT_SUBDIR: +# The name that the experiment directory (without the full path) will +# have. The full path to the experiment directory, which will be contained +# in the variable EXPTDIR, will be: +# +# EXPTDIR: "${EXPT_BASEDIR}/${EXPT_SUBDIR}" +# +# This cannot be empty. If set to a null string here, it must be set to +# a (non-empty) value in the user-defined experiment configuration file. +# +# dir_doc_end +# +# EXEC_SUBDIR: +# The name of the subdirectory of ufs-srweather-app where executables are +# installed. +#----------------------------------------------------------------------- +# +EXPT_BASEDIR: "" +EXPT_SUBDIR: "" +EXEC_SUBDIR: "bin" +# +#----------------------------------------------------------------------- +# +# Set variables that are only used in NCO mode (i.e. when RUN_ENVIR is +# set to "nco"). Definitions: +# +# COMIN: +# Directory containing files generated by the external model (FV3GFS, NAM, +# HRRR, etc) that the initial and lateral boundary condition generation tasks +# need in order to create initial and boundary condition files for a given +# cycle on the native FV3-LAM grid. +# +# envir, NET, model_ver, RUN: +# Standard environment variables defined in the NCEP Central Operations WCOSS +# Implementation Standards document as follows: +# +# envir: +# Set to "test" during the initial testing phase, "para" when running +# in parallel (on a schedule), and "prod" in production. +# +# NET: +# Model name (first level of com directory structure) +# +# model_ver: +# Version number of package in three digits (second level of com directory) +# +# RUN: +# Name of model run (third level of com directory structure). +# In general, same as $NET +# +# STMP: +# The beginning portion of the directory that will contain cycle-dependent +# model input files, symlinks to cycle-independent input files, and raw +# (i.e. before post-processing) forecast output files for a given cycle. +# For a cycle that starts on the date specified by yyyymmdd and hour +# specified by hh (where yyyymmdd and hh are as described above) [so that +# the cycle date (cdate) is given by cdate: "${yyyymmdd}${hh}"], the +# directory in which the aforementioned files will be located is: +# +# $STMP/tmpnwprd/$RUN/$cdate +# +# PTMP: +# The beginning portion of the directory that will contain the output +# files from the post-processor (UPP) for a given cycle. 
For a cycle
+# that starts on the date specified by yyyymmdd and hour specified by hh
+# (where yyyymmdd and hh are as described above), the directory in which
+# the UPP output files will be placed will be:
+#
+#   $PTMP/com/$NET/$model_ver/$RUN.$yyyymmdd/$hh
+#
+#-----------------------------------------------------------------------
+#
+COMIN: "/path/of/directory/containing/data/files/for/IC/LBCS"
+envir: "para"
+NET: "rrfs"
+model_ver: "v1.0.0"
+RUN: "rrfs"
+STMP: "/base/path/of/directory/containing/model/input/and/raw/output/files"
+PTMP: "/base/path/of/directory/containing/postprocessed/output/files"
+#
+#-----------------------------------------------------------------------
+#
+# Set the separator character(s) to use in the names of the grid, mosaic,
+# and orography fixed files.
+#
+# Ideally, the same separator should be used in the names of these fixed
+# files as the surface climatology fixed files (which always use a "."
+# as the separator), i.e. ideally, DOT_OR_USCORE should be set to "."
+#
+#-----------------------------------------------------------------------
+#
+DOT_OR_USCORE: "_"
+#
+#-----------------------------------------------------------------------
+#
+# Set file names. Definitions:
+#
+# EXPT_CONFIG_FN:
+# Name of the user-specified configuration file for the forecast experiment.
+#
+# RGNL_GRID_NML_FN:
+# Name of file containing the namelist settings for the code that generates
+# an "ESGgrid" type of regional grid.
+#
+# FV3_NML_BASE_SUITE_FN:
+# Name of Fortran namelist file containing the forecast model's base suite
+# namelist, i.e. the portion of the namelist that is common to all physics
+# suites.
+#
+# FV3_NML_YAML_CONFIG_FN:
+# Name of YAML configuration file containing the forecast model's namelist
+# settings for various physics suites.
+#
+# FV3_NML_BASE_ENS_FN:
+# Name of Fortran namelist file containing the forecast model's base
+# ensemble namelist, i.e. the namelist file that is the starting point
+# from which the namelist files for each of the ensemble members are
+# generated.
+#
+# FV3_EXEC_FN:
+# Name to use for the forecast model executable when it is copied from
+# the directory in which it is created in the build step to the executables
+# directory (EXECDIR; this is set during experiment generation).
+#
+# DIAG_TABLE_TMPL_FN:
+# Name of a template file that specifies the output fields of the forecast
+# model (ufs-weather-model: diag_table) followed by [dot_ccpp_phys_suite].
+# Its default value is the name of the file that the ufs weather model
+# expects to read in.
+#
+# FIELD_TABLE_TMPL_FN:
+# Name of a template file that specifies the tracers in IC/LBC files of the
+# forecast model (ufs-weather-model: field_table) followed by [dot_ccpp_phys_suite].
+# Its default value is the name of the file that the ufs weather model expects
+# to read in.
+#
+# MODEL_CONFIG_TMPL_FN:
+# Name of a template file that contains settings and configurations for the
+# NUOPC/ESMF main component (ufs-weather-model: model_config). Its default
+# value is the name of the file that the ufs weather model expects to read in.
+#
+# NEMS_CONFIG_TMPL_FN:
+# Name of a template file that contains information about the various NEMS
+# components and their run sequence (ufs-weather-model: nems.configure).
+# Its default value is the name of the file that the ufs weather model expects
+# to read in.
+#
+# FCST_MODEL:
+# Name of forecast model (default=ufs-weather-model)
+#
+# WFLOW_XML_FN:
+# Name of the rocoto workflow XML file that the experiment generation
+# script creates and that defines the workflow for the experiment.
+#
+# GLOBAL_VAR_DEFNS_FN:
+# Name of file (a shell script) containing the definitions of the primary
+# experiment variables (parameters) defined in this default configuration
+# script and in the user-specified configuration as well as secondary
+# experiment variables generated by the experiment generation script.
+# This file is sourced by many scripts (e.g. the J-job scripts corresponding
+# to each workflow task) in order to make all the experiment variables
+# available in those scripts.
+#
+# EXTRN_MDL_VAR_DEFNS_FN:
+# Name of file (a shell script) containing the definitions of variables
+# associated with the external model from which ICs or LBCs are generated. This
+# file is created by the GET_EXTRN_*_TN task because the values of the variables
+# it contains are not known before this task runs. The file is then sourced by
+# the MAKE_ICS_TN and MAKE_LBCS_TN tasks.
+#
+# WFLOW_LAUNCH_SCRIPT_FN:
+# Name of the script that can be used to (re)launch the experiment's rocoto
+# workflow.
+#
+# WFLOW_LAUNCH_LOG_FN:
+# Name of the log file that contains the output from successive calls to
+# the workflow launch script (WFLOW_LAUNCH_SCRIPT_FN).
+#
+#-----------------------------------------------------------------------
+#
+EXPT_CONFIG_FN: "config.sh"
+
+RGNL_GRID_NML_FN: "regional_grid.nml"
+
+FV3_NML_BASE_SUITE_FN: "input.nml.FV3"
+FV3_NML_YAML_CONFIG_FN: "FV3.input.yml"
+FV3_NML_BASE_ENS_FN: "input.nml.base_ens"
+FV3_EXEC_FN: "ufs_model"
+
+DATA_TABLE_TMPL_FN: ""
+DIAG_TABLE_TMPL_FN: ""
+FIELD_TABLE_TMPL_FN: ""
+MODEL_CONFIG_TMPL_FN: ""
+NEMS_CONFIG_TMPL_FN: ""
+
+FCST_MODEL: "ufs-weather-model"
+WFLOW_XML_FN: "FV3LAM_wflow.xml"
+GLOBAL_VAR_DEFNS_FN: "var_defns.sh"
+EXTRN_MDL_VAR_DEFNS_FN: "extrn_mdl_var_defns.sh"
+WFLOW_LAUNCH_SCRIPT_FN: "launch_FV3LAM_wflow.sh"
+WFLOW_LAUNCH_LOG_FN: "log.launch_FV3LAM_wflow"
+#
+#-----------------------------------------------------------------------
+#
+# Set output file name. Definitions:
+#
+# POST_OUTPUT_DOMAIN_NAME:
+# Domain name used in naming the output files of run_post by UPP or inline post.
+# Output file name: $NET.tHHz.[var_name].f###.$POST_OUTPUT_DOMAIN_NAME.grib2
+#
+#-----------------------------------------------------------------------
+#
+POST_OUTPUT_DOMAIN_NAME: ""
+#
+#-----------------------------------------------------------------------
+#
+# Set forecast parameters. Definitions:
+#
+# DATE_FIRST_CYCL:
+# Starting date of the first forecast in the set of forecasts to run.
+# Format is "YYYYMMDD". Note that this does not include the hour-of-day.
+#
+# DATE_LAST_CYCL:
+# Starting date of the last forecast in the set of forecasts to run.
+# Format is "YYYYMMDD". Note that this does not include the hour-of-day.
+#
+# CYCL_HRS:
+# An array containing the hours of the day at which to launch forecasts.
+# Forecasts are launched at these hours on each day from DATE_FIRST_CYCL
+# to DATE_LAST_CYCL, inclusive. Each element of this array must be a
+# two-digit string representing an integer that is less than or equal to
+# 23, e.g. "00", "03", "12", "23".
+#
+# INCR_CYCL_FREQ:
+# Increment in hours for Cycle Frequency (cycl_freq).
+# Default is 24, which means cycle_freq=24:00:00
+#
+# FCST_LEN_HRS:
+# The length of each forecast, in integer hours.
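+#
+# As an illustrative example (the dates are hypothetical), a user
+# config.sh requesting 00Z and 12Z cycles on two consecutive days, each
+# running a 24-hour forecast, might set:
+#
+#   DATE_FIRST_CYCL="20190615"
+#   DATE_LAST_CYCL="20190616"
+#   CYCL_HRS=( "00" "12" )
+#   FCST_LEN_HRS="24"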
+#
+#-----------------------------------------------------------------------
+#
+DATE_FIRST_CYCL: "YYYYMMDD"
+DATE_LAST_CYCL: "YYYYMMDD"
+CYCL_HRS: [ "HH1", "HH2" ]
+INCR_CYCL_FREQ: "24"
+FCST_LEN_HRS: "24"
+#
+#-----------------------------------------------------------------------
+#
+# Set model_configure parameters. Definitions:
+#
+# DT_ATMOS:
+# The main forecast model integration time step. As described in the
+# forecast model documentation, "It corresponds to the frequency with
+# which the top level routine in the dynamics is called as well as the
+# frequency with which the physics is called."
+#
+# CPL: parameter for coupling
+# (set automatically based on FCST_MODEL in ush/setup.sh)
+# (ufs-weather-model:FALSE, fv3gfs_aqm:TRUE)
+#
+# RESTART_INTERVAL:
+# frequency of the output restart files (unit:hour).
+# Default=0: restart files are produced at the end of a forecast run
+# For example, RESTART_INTERVAL: "1": restart files are produced every hour
+# with the prefix "YYYYMMDD.HHmmSS." in the RESTART directory
+#
+# WRITE_DOPOST:
+# Flag that determines whether or not to use the INLINE POST option.
+# When TRUE, the run_post task is turned off (RUN_TASK_RUN_POST=FALSE) in setup.sh
+#
+#-----------------------------------------------------------------------
+#
+DT_ATMOS: ""
+RESTART_INTERVAL: "0"
+WRITE_DOPOST: "FALSE"
+#
+#-----------------------------------------------------------------------
+#
+# Set METplus parameters. Definitions:
+#
+# MODEL:
+# String that specifies a descriptive name for the model being verified.
+#
+# MET_INSTALL_DIR:
+# Location of the top-level directory of the MET installation.
+#
+# METPLUS_PATH:
+# Location of the top-level directory of the METplus installation.
+#
+# CCPA_OBS_DIR:
+# User-specified location of the top-level directory where CCPA hourly
+# precipitation files used by METplus are located. This parameter needs
+# to be set for both user-provided observations and for observations
+# that are retrieved from the NOAA HPSS (if the user has access) via
+# the get_obs_ccpa_tn task (activated in workflow by setting
+# RUN_TASK_GET_OBS_CCPA="TRUE"). In the case of pulling observations
+# directly from NOAA HPSS, the data retrieved will be placed in this
+# directory. Please note, this path must be defined as
+# /full-path-to-obs/ccpa/proc. METplus is configured to verify 01-,
+# 03-, 06-, and 24-h accumulated precipitation using hourly CCPA files.
+# METplus configuration files require the use of a predetermined directory
+# structure and file names. Therefore, if the CCPA files are user
+# provided, they need to follow the anticipated naming structure:
+# {YYYYMMDD}/ccpa.t{HH}z.01h.hrap.conus.gb2, where YYYY is the 4-digit
+# valid year, MM the 2-digit valid month, DD the 2-digit valid day of
+# the month, and HH the 2-digit valid hour of the day. In addition, a
+# caveat is noted for using hourly CCPA data. There is a problem with
+# the valid time in the metadata for files valid from 19 - 00 UTC (or
+# files under the '00' directory). The script to pull the CCPA data
+# from the NOAA HPSS has an example of how to account for this as well
+# as organizing the data into a more intuitive format:
+# regional_workflow/scripts/exregional_get_ccpa_files.sh. When a fix
+# is provided, it will be accounted for in the
+# exregional_get_ccpa_files.sh script.
+#
+# MRMS_OBS_DIR:
+# User-specified location of the top-level directory where MRMS composite
+# reflectivity files used by METplus are located.
This parameter needs
+# to be set for both user-provided observations and for observations
+# that are retrieved from the NOAA HPSS (if the user has access) via the
+# get_obs_mrms_tn task (activated in workflow by setting
+# RUN_TASK_GET_OBS_MRMS="TRUE"). In the case of pulling observations
+# directly from NOAA HPSS, the data retrieved will be placed in this
+# directory. Please note, this path must be defined as
+# /full-path-to-obs/mrms/proc. METplus configuration files require the
+# use of a predetermined directory structure and file names. Therefore, if
+# the MRMS files are user provided, they need to follow the anticipated
+# naming structure:
+# {YYYYMMDD}/MergedReflectivityQCComposite_00.50_{YYYYMMDD}-{HH}{mm}{SS}.grib2,
+# where YYYY is the 4-digit valid year, MM the 2-digit valid month, DD
+# the 2-digit valid day of the month, HH the 2-digit valid hour of the
+# day, mm the 2-digit valid minutes of the hour, and SS is the two-digit
+# valid seconds of the hour. In addition, METplus is configured to look
+# for an MRMS composite reflectivity file for the valid time of the
+# forecast being verified; since MRMS composite reflectivity files do
+# not always exactly match the valid time, a script, within the main
+# script to retrieve MRMS data from the NOAA HPSS, is used to identify
+# and rename the MRMS composite reflectivity file to match the valid
+# time of the forecast. The script to pull the MRMS data from the NOAA
+# HPSS has an example of the expected file naming structure:
+# regional_workflow/scripts/exregional_get_mrms_files.sh. This script
+# calls the script used to identify the MRMS file closest to the valid
+# time: regional_workflow/ush/mrms_pull_topofhour.py.
+#
+# NDAS_OBS_DIR:
+# User-specified location of the top-level directory where NDAS prepbufr
+# files used by METplus are located. This parameter needs to be set for
+# both user-provided observations and for observations that are
+# retrieved from the NOAA HPSS (if the user has access) via the
+# get_obs_ndas_tn task (activated in workflow by setting
+# RUN_TASK_GET_OBS_NDAS="TRUE"). In the case of pulling observations
+# directly from NOAA HPSS, the data retrieved will be placed in this
+# directory. Please note, this path must be defined as
+# /full-path-to-obs/ndas/proc. METplus is configured to verify
+# near-surface variables hourly and upper-air variables at times valid
+# at 00 and 12 UTC with NDAS prepbufr files. METplus configuration files
+# require the use of predetermined file names. Therefore, if the NDAS
+# files are user provided, they need to follow the anticipated naming
+# structure: prepbufr.ndas.{YYYYMMDDHH}, where YYYY is the 4-digit valid
+# year, MM the 2-digit valid month, DD the 2-digit valid day of the
+# month, and HH the 2-digit valid hour of the day. The script to pull
+# the NDAS data from the NOAA HPSS has an example of how to rename the
+# NDAS data into a more intuitive format with the valid time listed in
+# the file name: regional_workflow/scripts/exregional_get_ndas_files.sh
+#
+#-----------------------------------------------------------------------
+#
+MODEL: ""
+MET_INSTALL_DIR: ""
+MET_BIN_EXEC: "bin"
+METPLUS_PATH: ""
+CCPA_OBS_DIR: ""
+MRMS_OBS_DIR: ""
+NDAS_OBS_DIR: ""
+#
+#-----------------------------------------------------------------------
+#
+# Set initial and lateral boundary condition generation parameters.
+#
+# Definitions:
+#
+# EXTRN_MDL_NAME_ICS:
+# The name of the external model that will provide fields from which
+# initial condition (including surface) files will be generated for
+# input into the forecast model.
+#
+# EXTRN_MDL_NAME_LBCS:
+# The name of the external model that will provide fields from which
+# lateral boundary condition (LBC) files will be generated for input into
+# the forecast model.
+#
+# LBC_SPEC_INTVL_HRS:
+# The interval (in integer hours) with which LBC files will be generated.
+# We will refer to this as the boundary update interval. Note that the
+# model specified in EXTRN_MDL_NAME_LBCS must have data available at a
+# frequency greater than or equal to that implied by LBC_SPEC_INTVL_HRS.
+# For example, if LBC_SPEC_INTVL_HRS is set to 6, then the model must have
+# data available at least every 6 hours. It is up to the user to ensure
+# that this is the case.
+#
+# EXTRN_MDL_ICS_OFFSET_HRS:
+# Users may wish to start a forecast from a forecast of a previous cycle
+# of an external model. This variable sets the number of hours earlier
+# the external model started than when the FV3 forecast configured here
+# should start. For example, if the forecast should start from a 6-hour
+# forecast of the GFS, then EXTRN_MDL_ICS_OFFSET_HRS=6.
+
+# EXTRN_MDL_LBCS_OFFSET_HRS:
+# Users may wish to use lateral boundary conditions from a forecast that
+# was started earlier than the initial time for the FV3 forecast
+# configured here. This variable sets the number of hours earlier
+# the external model started than when the FV3 forecast configured here
+# should start. For example, if the forecast should use lateral boundary
+# conditions from the GFS started 6 hours earlier, then
+# EXTRN_MDL_LBCS_OFFSET_HRS=6.
+# Note: the default value is model-dependent and set in
+# set_extrn_mdl_params.sh
+#
+# FV3GFS_FILE_FMT_ICS:
+# If using the FV3GFS model as the source of the ICs (i.e. if EXTRN_MDL_NAME_ICS
+# is set to "FV3GFS"), this variable specifies the format of the model
+# files to use when generating the ICs.
+#
+# FV3GFS_FILE_FMT_LBCS:
+# If using the FV3GFS model as the source of the LBCs (i.e. if
+# EXTRN_MDL_NAME_LBCS is set to "FV3GFS"), this variable specifies the
+# format of the model files to use when generating the LBCs.
+#
+#-----------------------------------------------------------------------
+#
+EXTRN_MDL_NAME_ICS: "FV3GFS"
+EXTRN_MDL_NAME_LBCS: "FV3GFS"
+LBC_SPEC_INTVL_HRS: "6"
+EXTRN_MDL_ICS_OFFSET_HRS: "0"
+EXTRN_MDL_LBCS_OFFSET_HRS: ""
+FV3GFS_FILE_FMT_ICS: "nemsio"
+FV3GFS_FILE_FMT_LBCS: "nemsio"
+#
+#-----------------------------------------------------------------------
+#
+# Base directories in which to search for external model files.
+#
+# EXTRN_MDL_SYSBASEDIR_ICS:
+# Base directory on the local machine containing external model files for
+# generating ICs on the native grid. The way the full path containing
+# these files is constructed depends on the user-specified external model
+# for ICs, i.e. EXTRN_MDL_NAME_ICS.
+#
+# EXTRN_MDL_SYSBASEDIR_LBCS:
+# Same as EXTRN_MDL_SYSBASEDIR_ICS but for LBCs.
+#
+# Note that these must be defined as null strings here so that if they
+# are specified by the user in the experiment configuration file, they
+# remain set to those values, and if not, they get set to machine-dependent
+# values.
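+#
+# As a purely illustrative example (the path below is hypothetical), on
+# an otherwise unsupported platform where FV3GFS output is already staged
+# on disk, a user config.sh might set:
+#
+#   EXTRN_MDL_SYSBASEDIR_ICS="/data/ext_model_data/FV3GFS"
+#   EXTRN_MDL_SYSBASEDIR_LBCS="/data/ext_model_data/FV3GFS"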
+# +#----------------------------------------------------------------------- +# +EXTRN_MDL_SYSBASEDIR_ICS: '' +EXTRN_MDL_SYSBASEDIR_LBCS: '' +# +#----------------------------------------------------------------------- +# +# User-staged external model directories and files. Definitions: +# +# USE_USER_STAGED_EXTRN_FILES: +# Flag that determines whether or not the workflow will look for the +# external model files needed for generating ICs and LBCs in user-specified +# directories. +# +# EXTRN_MDL_SOURCE_BASEDIR_ICS: +# Directory in which to look for external model files for generating ICs. +# If USE_USER_STAGED_EXTRN_FILES is set to "TRUE", the workflow looks in +# this directory (specifically, in a subdirectory under this directory +# named "YYYYMMDDHH" consisting of the starting date and cycle hour of +# the forecast, where YYYY is the 4-digit year, MM the 2-digit month, DD +# the 2-digit day of the month, and HH the 2-digit hour of the day) for +# the external model files specified by the array EXTRN_MDL_FILES_ICS +# (these files will be used to generate the ICs on the native FV3-LAM +# grid). This variable is not used if USE_USER_STAGED_EXTRN_FILES is +# set to "FALSE". +# +# EXTRN_MDL_FILES_ICS: +# Array containing templates of the names of the files to search for in +# the directory specified by EXTRN_MDL_SOURCE_BASEDIR_ICS. This +# variable is not used if USE_USER_STAGED_EXTRN_FILES is set to "FALSE". +# A single template should be used for each model file type that is +# meant to be used. You may use any of the Python-style templates +# allowed in the ush/retrieve_data.py script. To see the full list of +# supported templates, run that script with a -h option. Here is an example of +# setting FV3GFS nemsio input files: +# EXTRN_MDL_FILES_ICS=( gfs.t{hh}z.atmf{fcst_hr:03d}.nemsio \ +# gfs.t{hh}z.sfcf{fcst_hr:03d}.nemsio ) +# Or for FV3GFS grib files: +# EXTRN_MDL_FILES_ICS=( gfs.t{hh}z.pgrb2.0p25.f{fcst_hr:03d} ) +# +# EXTRN_MDL_SOURCE_BASEDIR_LBCS: +# Analogous to EXTRN_MDL_SOURCE_BASEDIR_ICS but for LBCs instead of ICs. +# +# EXTRN_MDL_FILES_LBCS: +# Analogous to EXTRN_MDL_FILES_ICS but for LBCs instead of ICs. +# +# EXTRN_MDL_DATA_STORES: +# A list of data stores where the scripts should look for external model +# data. The list is in priority order. If disk information is provided +# via USE_USER_STAGED_EXTRN_FILES or a known location on the platform, +# the disk location will be highest priority. Options are disk, hpss, +# aws, and nomads. +# +#----------------------------------------------------------------------- +# +USE_USER_STAGED_EXTRN_FILES: "FALSE" +EXTRN_MDL_SOURCE_BASEDIR_ICS: "" +EXTRN_MDL_FILES_ICS: "" +EXTRN_MDL_SOURCE_BASEDIR_LBCS: "" +EXTRN_MDL_FILES_LBCS: "" +EXTRN_MDL_DATA_STORES: "" +# +#----------------------------------------------------------------------- +# +# Set NOMADS online data associated parameters. Definitions: +# +# NOMADS: +# Flag controlling whether or not using NOMADS online data. +# +# NOMADS_file_type: +# Flag controlling the format of data. +# +#----------------------------------------------------------------------- +# +NOMADS: "FALSE" +NOMADS_file_type: "nemsio" +# +#----------------------------------------------------------------------- +# +# Set CCPP-associated parameters. Definitions: +# +# CCPP_PHYS_SUITE: +# The physics suite that will run using CCPP (Common Community Physics +# Package). 
The choice of physics suite determines the forecast model's
+# namelist file, the diagnostics table file, the field table file, and
+# the XML physics suite definition file that are staged in the experiment
+# directory or the cycle directories under it.
+#
+#-----------------------------------------------------------------------
+#
+CCPP_PHYS_SUITE: "FV3_GFS_v16"
+#
+#-----------------------------------------------------------------------
+#
+# Set GRID_GEN_METHOD. This variable specifies the method to use to
+# generate a regional grid in the horizontal. The values that it can
+# take on are:
+#
+# * "GFDLgrid":
+# This setting will generate a regional grid by first generating a
+# "parent" global cubed-sphere grid and then taking a portion of tile
+# 6 of that global grid -- referred to in the grid generation scripts
+# as "tile 7" even though it doesn't correspond to a complete tile --
+# and using it as the regional grid. Note that the forecast is run
+# only on the regional grid (i.e. tile 7, not tiles 1 through 6).
+#
+# * "ESGgrid":
+# This will generate a regional grid using the map projection developed
+# by Jim Purser of EMC.
+#
+# Note that:
+#
+# 1) If the experiment is using one of the predefined grids (i.e. if
+# PREDEF_GRID_NAME is set to the name of one of the valid predefined
+# grids), then GRID_GEN_METHOD will be reset to the value of
+# GRID_GEN_METHOD for that grid. This will happen regardless of
+# whether or not GRID_GEN_METHOD is assigned a value in the user-
+# specified experiment configuration file, i.e. any value it may be
+# assigned in the experiment configuration file will be overwritten.
+#
+# 2) If the experiment is not using one of the predefined grids (i.e. if
+# PREDEF_GRID_NAME is set to a null string), then GRID_GEN_METHOD must
+# be set in the experiment configuration file. Otherwise, it will
+# remain set to a null string, and the experiment generation will
+# fail because the generation scripts check to ensure that it is set
+# to a non-empty string before creating the experiment directory.
+#
+#-----------------------------------------------------------------------
+#
+GRID_GEN_METHOD: ""
+#
+#-----------------------------------------------------------------------
+#
+# Set parameters specific to the "GFDLgrid" method of generating a regional
+# grid (i.e. for GRID_GEN_METHOD set to "GFDLgrid"). The following
+# parameters will be used only if GRID_GEN_METHOD is set to "GFDLgrid".
+# In this grid generation method:
+#
+# * The regional grid is defined with respect to a "parent" global cubed-
+# sphere grid. Thus, all the parameters for a global cubed-sphere grid
+# must be specified in order to define this parent global grid even
+# though the model equations are not integrated on it (they are integrated
+# only on the regional grid).
+#
+# * GFDLgrid_RES is the number of grid cells in either one of the two
+# horizontal directions x and y on any one of the 6 tiles of the parent
+# global cubed-sphere grid. The mapping from GFDLgrid_RES to a nominal
+# resolution (grid cell size) for a uniform global grid (i.e. Schmidt
+# stretch factor GFDLgrid_STRETCH_FAC set to 1) for several values of
+# GFDLgrid_RES is as follows:
+#
+#   GFDLgrid_RES      typical cell size
+#   ------------      -----------------
+#        192                 50 km
+#        384                 25 km
+#        768                 13 km
+#       1152                8.5 km
+#       3072                3.2 km
+#
+# Note that these are only typical cell sizes. The actual cell size on
+# the global grid tiles varies somewhat as we move across a tile.
+#
+# * Tile 6 has arbitrarily been chosen as the tile to use to orient the
+# global parent grid on the sphere (Earth). This is done by specifying
+# GFDLgrid_LON_T6_CTR and GFDLgrid_LAT_T6_CTR, which are the longitude
+# and latitude (in degrees) of the center of tile 6.
+#
+# * Setting the Schmidt stretching factor GFDLgrid_STRETCH_FAC to a value
+# greater than 1 shrinks tile 6, while setting it to a value less than
+# 1 (but still greater than 0) expands it. The remaining 5 tiles change
+# shape as necessary to maintain global coverage of the grid.
+#
+# * The cell size on a given global tile depends on both GFDLgrid_RES and
+# GFDLgrid_STRETCH_FAC (since changing GFDLgrid_RES changes the number
+# of cells in the tile, and changing GFDLgrid_STRETCH_FAC modifies the
+# shape and size of the tile).
+#
+# * The regional grid is embedded within tile 6 (i.e. it doesn't extend
+# beyond the boundary of tile 6). Its exact location within tile 6 is
+# determined by specifying the starting and ending i and j indices
+# of the regional grid on tile 6, where i is the grid index in the x
+# direction and j is the grid index in the y direction. These indices
+# are stored in the variables
+#
+#   GFDLgrid_ISTART_OF_RGNL_DOM_ON_T6G
+#   GFDLgrid_JSTART_OF_RGNL_DOM_ON_T6G
+#   GFDLgrid_IEND_OF_RGNL_DOM_ON_T6G
+#   GFDLgrid_JEND_OF_RGNL_DOM_ON_T6G
+#
+# * In the forecast model code and in the experiment generation and workflow
+# scripts, for convenience the regional grid is denoted as "tile 7" even
+# though it doesn't map back to one of the 6 faces of the cube from
+# which the parent global grid is generated (it maps back to only a
+# subregion on face 6 since it is wholly confined within tile 6). Tile
+# 6 may be referred to as the "parent" tile of the regional grid.
+#
+# * GFDLgrid_REFINE_RATIO is the refinement ratio of the regional grid
+# (tile 7) with respect to the grid on its parent tile (tile 6), i.e.
+# it is the number of grid cells along the boundary of the regional grid
+# that abut one cell on tile 6. Thus, the cell size on the regional
+# grid depends not only on GFDLgrid_RES and GFDLgrid_STRETCH_FAC (because
+# the cell size on tile 6 depends on these two parameters) but also on
+# GFDLgrid_REFINE_RATIO. Note that as on the tiles of the global grid,
+# the cell size on the regional grid is not uniform but varies as we
+# move across the grid.
+#
+# Definitions of parameters that need to be specified when GRID_GEN_METHOD
+# is set to "GFDLgrid":
+#
+# GFDLgrid_LON_T6_CTR:
+# Longitude of the center of tile 6 (in degrees).
+#
+# GFDLgrid_LAT_T6_CTR:
+# Latitude of the center of tile 6 (in degrees).
+#
+# GFDLgrid_RES:
+# Number of points in each of the two horizontal directions (x and y) on
+# each tile of the parent global grid. Note that the name of this parameter
+# is really a misnomer because although it has the string "RES" (for
+# "resolution") in its name, it specifies the number of grid cells, not the
+# grid size (in, say, meters or kilometers). However, we keep this name in
+# order to remain consistent with the usage of the word "resolution" in the
+# global forecast model and other auxiliary codes.
+#
+# GFDLgrid_STRETCH_FAC:
+# Stretching factor used in the Schmidt transformation applied to the
+# parent cubed-sphere grid.
+#
+# GFDLgrid_REFINE_RATIO:
+# Cell refinement ratio for the regional grid, i.e. the number of cells
+# in either the x or y direction on the regional grid (tile 7) that abut
+# one cell on its parent tile (tile 6).
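+#
+# As a rough, illustrative calculation using the typical cell sizes
+# tabulated above: with GFDLgrid_RES="768" (a typical global cell size of
+# about 13 km) and GFDLgrid_STRETCH_FAC="1", setting
+# GFDLgrid_REFINE_RATIO="3" would yield a regional (tile 7) cell size of
+# roughly 13 km / 3, i.e. a bit over 4 km. Actual cell sizes vary across
+# the grid, as noted above.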
+# +# GFDLgrid_ISTART_OF_RGNL_DOM_ON_T6G: +# i-index on tile 6 at which the regional grid (tile 7) starts. +# +# GFDLgrid_IEND_OF_RGNL_DOM_ON_T6G: +# i-index on tile 6 at which the regional grid (tile 7) ends. +# +# GFDLgrid_JSTART_OF_RGNL_DOM_ON_T6G: +# j-index on tile 6 at which the regional grid (tile 7) starts. +# +# GFDLgrid_JEND_OF_RGNL_DOM_ON_T6G: +# j-index on tile 6 at which the regional grid (tile 7) ends. +# +# GFDLgrid_USE_GFDLgrid_RES_IN_FILENAMES: +# Flag that determines the file naming convention to use for grid, orography, +# and surface climatology files (or, if using pregenerated files, the +# naming convention that was used to name these files). These files +# usually start with the string "C${RES}_", where RES is an integer. +# In the global forecast model, RES is the number of points in each of +# the two horizontal directions (x and y) on each tile of the global grid +# (defined here as GFDLgrid_RES). If this flag is set to "TRUE", RES will +# be set to GFDLgrid_RES just as in the global forecast model. If it is +# set to "FALSE", we calculate (in the grid generation task) an "equivalent +# global uniform cubed-sphere resolution" -- call it RES_EQUIV -- and +# then set RES equal to it. RES_EQUIV is the number of grid points in +# each of the x and y directions on each tile that a global UNIFORM (i.e. +# stretch factor of 1) cubed-sphere grid would have to have in order to +# have the same average grid size as the regional grid. This is a more +# useful indicator of the grid size because it takes into account the +# effects of GFDLgrid_RES, GFDLgrid_STRETCH_FAC, and GFDLgrid_REFINE_RATIO +# in determining the regional grid's typical grid size, whereas simply +# setting RES to GFDLgrid_RES doesn't take into account the effects of +# GFDLgrid_STRETCH_FAC and GFDLgrid_REFINE_RATIO on the regional grid's +# resolution. Nevertheless, some users still prefer to use GFDLgrid_RES +# in the file names, so we allow for that here by setting this flag to +# "TRUE". +# +# Note that: +# +# 1) If the experiment is using one of the predefined grids (i.e. if +# PREDEF_GRID_NAME is set to the name of one of the valid predefined +# grids), then: +# +# a) If the value of GRID_GEN_METHOD for that grid is "GFDLgrid", then +# these parameters will get reset to the values for that grid. +# This will happen regardless of whether or not they are assigned +# values in the user-specified experiment configuration file, i.e. +# any values they may be assigned in the experiment configuration +# file will be overwritten. +# +# b) If the value of GRID_GEN_METHOD for that grid is "ESGgrid", then +# these parameters will not be used and thus do not need to be reset +# to non-empty strings. +# +# 2) If the experiment is not using one of the predefined grids (i.e. if +# PREDEF_GRID_NAME is set to a null string), then: +# +# a) If GRID_GEN_METHOD is set to "GFDLgrid" in the user-specified +# experiment configuration file, then these parameters must be set +# in that configuration file. +# +# b) If GRID_GEN_METHOD is set to "ESGgrid" in the user-specified +# experiment configuration file, then these parameters will not be +# used and thus do not need to be reset to non-empty strings. 
+# +#----------------------------------------------------------------------- +# +GFDLgrid_LON_T6_CTR: "" +GFDLgrid_LAT_T6_CTR: "" +GFDLgrid_RES: "" +GFDLgrid_STRETCH_FAC: "" +GFDLgrid_REFINE_RATIO: "" +GFDLgrid_ISTART_OF_RGNL_DOM_ON_T6G: "" +GFDLgrid_IEND_OF_RGNL_DOM_ON_T6G: "" +GFDLgrid_JSTART_OF_RGNL_DOM_ON_T6G: "" +GFDLgrid_JEND_OF_RGNL_DOM_ON_T6G: "" +GFDLgrid_USE_GFDLgrid_RES_IN_FILENAMES: "" +# +#----------------------------------------------------------------------- +# +# Set parameters specific to the "ESGgrid" method of generating a regional +# grid (i.e. for GRID_GEN_METHOD set to "ESGgrid"). Definitions: +# +# ESGgrid_LON_CTR: +# The longitude of the center of the grid (in degrees). +# +# ESGgrid_LAT_CTR: +# The latitude of the center of the grid (in degrees). +# +# ESGgrid_DELX: +# The cell size in the zonal direction of the regional grid (in meters). +# +# ESGgrid_DELY: +# The cell size in the meridional direction of the regional grid (in +# meters). +# +# ESGgrid_NX: +# The number of cells in the zonal direction on the regional grid. +# +# ESGgrid_NY: +# The number of cells in the meridional direction on the regional grid. +# +# ESGgrid_WIDE_HALO_WIDTH: +# The width (in units of number of grid cells) of the halo to add around +# the regional grid before shaving the halo down to the width(s) expected +# by the forecast model. +# +# ESGgrid_PAZI: +# The rotational parameter for the ESG grid (in degrees). +# +# In order to generate grid files containing halos that are 3-cell and +# 4-cell wide and orography files with halos that are 0-cell and 3-cell +# wide (all of which are required as inputs to the forecast model), the +# grid and orography tasks first create files with halos around the regional +# domain of width ESGgrid_WIDE_HALO_WIDTH cells. These are first stored +# in files. The files are then read in and "shaved" down to obtain grid +# files with 3-cell-wide and 4-cell-wide halos and orography files with +# 0-cell-wide (i.e. no halo) and 3-cell-wide halos. For this reason, we +# refer to the original halo that then gets shaved down as the "wide" +# halo, i.e. because it is wider than the 0-cell-wide, 3-cell-wide, and +# 4-cell-wide halos that we will eventually end up with. Note that the +# grid and orography files with the wide halo are only needed as intermediates +# in generating the files with 0-cell-, 3-cell-, and 4-cell-wide halos; +# they are not needed by the forecast model. +# NOTE: Probably don't need to make ESGgrid_WIDE_HALO_WIDTH a user-specified +# variable. Just set it in the function set_gridparams_ESGgrid.sh. +# +# Note that: +# +# 1) If the experiment is using one of the predefined grids (i.e. if +# PREDEF_GRID_NAME is set to the name of one of the valid predefined +# grids), then: +# +# a) If the value of GRID_GEN_METHOD for that grid is "GFDLgrid", then +# these parameters will not be used and thus do not need to be reset +# to non-empty strings. +# +# b) If the value of GRID_GEN_METHOD for that grid is "ESGgrid", then +# these parameters will get reset to the values for that grid. +# This will happen regardless of whether or not they are assigned +# values in the user-specified experiment configuration file, i.e. +# any values they may be assigned in the experiment configuration +# file will be overwritten. +# +# 2) If the experiment is not using one of the predefined grids (i.e. 
if +# PREDEF_GRID_NAME is set to a null string), then: +# +# a) If GRID_GEN_METHOD is set to "GFDLgrid" in the user-specified +# experiment configuration file, then these parameters will not be +# used and thus do not need to be reset to non-empty strings. +# +# b) If GRID_GEN_METHOD is set to "ESGgrid" in the user-specified +# experiment configuration file, then these parameters must be set +# in that configuration file. +# +#----------------------------------------------------------------------- +# +ESGgrid_LON_CTR: "" +ESGgrid_LAT_CTR: "" +ESGgrid_DELX: "" +ESGgrid_DELY: "" +ESGgrid_NX: "" +ESGgrid_NY: "" +ESGgrid_WIDE_HALO_WIDTH: "" +ESGgrid_PAZI: "" +# +#----------------------------------------------------------------------- +# +# Set computational parameters for the forecast. Definitions: +# +# LAYOUT_X, LAYOUT_Y: +# The number of MPI tasks (processes) to use in the two horizontal +# directions (x and y) of the regional grid when running the forecast +# model. +# +# BLOCKSIZE: +# The amount of data that is passed into the cache at a time. +# +# Here, we set these parameters to null strings. This is so that, for +# any one of these parameters: +# +# 1) If the experiment is using a predefined grid, then if the user +# sets the parameter in the user-specified experiment configuration +# file (EXPT_CONFIG_FN), that value will be used in the forecast(s). +# Otherwise, the default value of the parameter for that predefined +# grid will be used. +# +# 2) If the experiment is not using a predefined grid (i.e. it is using +# a custom grid whose parameters are specified in the experiment +# configuration file), then the user must specify a value for the +# parameter in that configuration file. Otherwise, the parameter +# will remain set to a null string, and the experiment generation +# will fail because the generation scripts check to ensure that all +# the parameters defined in this section are set to non-empty strings +# before creating the experiment directory. +# +#----------------------------------------------------------------------- +# +LAYOUT_X: "" +LAYOUT_Y: "" +BLOCKSIZE: "" +# +#----------------------------------------------------------------------- +# +# Set write-component (quilting) parameters. Definitions: +# +# QUILTING: +# Flag that determines whether or not to use the write component for +# writing output files to disk. +# +# WRTCMP_write_groups: +# The number of write groups (i.e. groups of MPI tasks) to use in the +# write component. +# +# WRTCMP_write_tasks_per_group: +# The number of MPI tasks to allocate for each write group. +# +# PRINT_ESMF: +# Flag for whether or not to output extra (debugging) information from +# ESMF routines. Must be "TRUE" or "FALSE". Note that the write +# component uses ESMF library routines to interpolate from the native +# forecast model grid to the user-specified output grid (which is defined +# in the model configuration file MODEL_CONFIG_FN in the forecast's run +# directory). +# +#----------------------------------------------------------------------- +# +QUILTING: "TRUE" +PRINT_ESMF: "FALSE" + +WRTCMP_write_groups: "1" +WRTCMP_write_tasks_per_group: "20" + +WRTCMP_output_grid: "''" +WRTCMP_cen_lon: "" +WRTCMP_cen_lat: "" +WRTCMP_lon_lwr_left: "" +WRTCMP_lat_lwr_left: "" +# +# The following are used only for the case of WRTCMP_output_grid set to +# "'rotated_latlon'". 
+#
+WRTCMP_lon_upr_rght: ""
+WRTCMP_lat_upr_rght: ""
+WRTCMP_dlon: ""
+WRTCMP_dlat: ""
+#
+# The following are used only for the case of WRTCMP_output_grid set to
+# "'lambert_conformal'".
+#
+WRTCMP_stdlat1: ""
+WRTCMP_stdlat2: ""
+WRTCMP_nx: ""
+WRTCMP_ny: ""
+WRTCMP_dx: ""
+WRTCMP_dy: ""
+#
+#-----------------------------------------------------------------------
+#
+# Set PREDEF_GRID_NAME. This parameter specifies a predefined regional
+# grid, as follows:
+#
+# * If PREDEF_GRID_NAME is set to a valid predefined grid name, the grid
+# generation method GRID_GEN_METHOD, the (native) grid parameters, and
+# the write-component grid parameters are set to predefined values for
+# the specified grid, overwriting any settings of these parameters in
+# the user-specified experiment configuration file. In addition, if
+# the time step DT_ATMOS and the computational parameters LAYOUT_X,
+# LAYOUT_Y, and BLOCKSIZE are not specified in that configuration file,
+# they are also set to predefined values for the specified grid.
+#
+# * If PREDEF_GRID_NAME is set to an empty string, it implies the user
+# is providing the native grid parameters in the user-specified
+# experiment configuration file (EXPT_CONFIG_FN). In this case, the
+# grid generation method GRID_GEN_METHOD, the native grid parameters,
+# and the write-component grid parameters, as well as the forecast
+# model's main time step DT_ATMOS and the computational parameters
+# LAYOUT_X, LAYOUT_Y, and BLOCKSIZE, must be set in that
+# configuration file; otherwise, the values of all of these parameters
+# in this default experiment configuration file will be used.
+#
+# Setting PREDEF_GRID_NAME provides a convenient method of specifying a
+# commonly used set of grid-dependent parameters. The predefined grid
+# parameters are specified in the script
+#
+#   $HOMErrfs/ush/set_predef_grid_params.sh
+#
+#-----------------------------------------------------------------------
+#
+PREDEF_GRID_NAME: ""
+#
+#-----------------------------------------------------------------------
+#
+# Set PREEXISTING_DIR_METHOD. This variable determines the method to
+# use to deal with preexisting directories [e.g. ones generated by previous
+# calls to the experiment generation script using the same experiment name
+# (EXPT_SUBDIR) as the current experiment]. This variable must be set to
+# one of "delete", "rename", and "quit". The resulting behavior for each
+# of these values is as follows:
+#
+# * "delete":
+# The preexisting directory is deleted and a new directory (having the
+# same name as the original preexisting directory) is created.
+#
+# * "rename":
+# The preexisting directory is renamed and a new directory (having the
+# same name as the original preexisting directory) is created. The new
+# name of the preexisting directory consists of its original name and
+# the suffix "_oldNNN", where NNN is a 3-digit integer chosen to make
+# the new name unique.
+#
+# * "quit":
+# The preexisting directory is left unchanged, but execution of the
+# currently running script is terminated. In this case, the preexisting
+# directory must be dealt with manually before rerunning the script.
+#
+#-----------------------------------------------------------------------
+#
+PREEXISTING_DIR_METHOD: "delete"
+#
+#-----------------------------------------------------------------------
+#
+# Set flags for more detailed messages.
Definitions:
+#
+# VERBOSE:
+# This is a flag that determines whether or not the experiment generation
+# and workflow task scripts tend to print out more informational messages.
+#
+# DEBUG:
+# This is a flag that determines whether or not very detailed debugging
+# messages are printed out. Note that if DEBUG is set to TRUE, then
+# VERBOSE will also get reset to TRUE if it isn't already.
+#
+#-----------------------------------------------------------------------
+#
+VERBOSE: "TRUE"
+DEBUG: "FALSE"
+#
+#-----------------------------------------------------------------------
+#
+# Set the names of the various rocoto workflow tasks.
+#
+#-----------------------------------------------------------------------
+#
+MAKE_GRID_TN: "make_grid"
+MAKE_OROG_TN: "make_orog"
+MAKE_SFC_CLIMO_TN: "make_sfc_climo"
+GET_EXTRN_ICS_TN: "get_extrn_ics"
+GET_EXTRN_LBCS_TN: "get_extrn_lbcs"
+MAKE_ICS_TN: "make_ics"
+MAKE_LBCS_TN: "make_lbcs"
+RUN_FCST_TN: "run_fcst"
+RUN_POST_TN: "run_post"
+GET_OBS: "get_obs"
+GET_OBS_CCPA_TN: "get_obs_ccpa"
+GET_OBS_MRMS_TN: "get_obs_mrms"
+GET_OBS_NDAS_TN: "get_obs_ndas"
+VX_TN: "run_vx"
+VX_GRIDSTAT_TN: "run_gridstatvx"
+VX_GRIDSTAT_REFC_TN: "run_gridstatvx_refc"
+VX_GRIDSTAT_RETOP_TN: "run_gridstatvx_retop"
+VX_GRIDSTAT_03h_TN: "run_gridstatvx_03h"
+VX_GRIDSTAT_06h_TN: "run_gridstatvx_06h"
+VX_GRIDSTAT_24h_TN: "run_gridstatvx_24h"
+VX_POINTSTAT_TN: "run_pointstatvx"
+VX_ENSGRID_TN: "run_ensgridvx"
+VX_ENSGRID_03h_TN: "run_ensgridvx_03h"
+VX_ENSGRID_06h_TN: "run_ensgridvx_06h"
+VX_ENSGRID_24h_TN: "run_ensgridvx_24h"
+VX_ENSGRID_REFC_TN: "run_ensgridvx_refc"
+VX_ENSGRID_RETOP_TN: "run_ensgridvx_retop"
+VX_ENSGRID_MEAN_TN: "run_ensgridvx_mean"
+VX_ENSGRID_PROB_TN: "run_ensgridvx_prob"
+VX_ENSGRID_MEAN_03h_TN: "run_ensgridvx_mean_03h"
+VX_ENSGRID_PROB_03h_TN: "run_ensgridvx_prob_03h"
+VX_ENSGRID_MEAN_06h_TN: "run_ensgridvx_mean_06h"
+VX_ENSGRID_PROB_06h_TN: "run_ensgridvx_prob_06h"
+VX_ENSGRID_MEAN_24h_TN: "run_ensgridvx_mean_24h"
+VX_ENSGRID_PROB_24h_TN: "run_ensgridvx_prob_24h"
+VX_ENSGRID_PROB_REFC_TN: "run_ensgridvx_prob_refc"
+VX_ENSGRID_PROB_RETOP_TN: "run_ensgridvx_prob_retop"
+VX_ENSPOINT_TN: "run_enspointvx"
+VX_ENSPOINT_MEAN_TN: "run_enspointvx_mean"
+VX_ENSPOINT_PROB_TN: "run_enspointvx_prob"
+#
+#-----------------------------------------------------------------------
+#
+# Set flags (and related directories) that determine whether various
+# workflow tasks should be run. Note that the MAKE_GRID_TN, MAKE_OROG_TN,
+# and MAKE_SFC_CLIMO_TN tasks are all cycle-independent, i.e. if they
+# are to be run, they do so only once at the beginning of the workflow
+# before any cycles are run. Definitions:
+#
+# RUN_TASK_MAKE_GRID:
+# Flag that determines whether the MAKE_GRID_TN task is to be run. If
+# this is set to "TRUE", the grid generation task is run and new grid
+# files are generated. If it is set to "FALSE", then the scripts look
+# for pregenerated grid files in the directory specified by GRID_DIR
+# (see below).
+#
+# GRID_DIR:
+# The directory in which to look for pregenerated grid files if
+# RUN_TASK_MAKE_GRID is set to "FALSE".
+#
+# RUN_TASK_MAKE_OROG:
+# Same as RUN_TASK_MAKE_GRID but for the MAKE_OROG_TN task.
+#
+# OROG_DIR:
+# Same as GRID_DIR but for the MAKE_OROG_TN task.
+#
+# RUN_TASK_MAKE_SFC_CLIMO:
+# Same as RUN_TASK_MAKE_GRID but for the MAKE_SFC_CLIMO_TN task.
+#
+# SFC_CLIMO_DIR:
+# Same as GRID_DIR but for the MAKE_SFC_CLIMO_TN task.
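+#
+# As a purely illustrative example (the paths and grid name are
+# hypothetical), an experiment that reuses pregenerated fixed files
+# instead of regenerating them might set, in the user config.sh:
+#
+#   RUN_TASK_MAKE_GRID="FALSE"
+#   GRID_DIR="/path/to/pregen/RRFS_CONUS_25km"
+#   RUN_TASK_MAKE_OROG="FALSE"
+#   OROG_DIR="/path/to/pregen/RRFS_CONUS_25km"
+#   RUN_TASK_MAKE_SFC_CLIMO="FALSE"
+#   SFC_CLIMO_DIR="/path/to/pregen/RRFS_CONUS_25km"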
+# +# DOMAIN_PREGEN_BASEDIR: +# The base directory containing pregenerated grid, orography, and surface +# climatology files. This is an alternative for setting GRID_DIR, +# OROG_DIR, and SFC_CLIMO_DIR individually +# +# For the pregenerated grid specified by PREDEF_GRID_NAME, +# these "fixed" files are located in: +# +# ${DOMAIN_PREGEN_BASEDIR}/${PREDEF_GRID_NAME} +# +# The workflow scripts will create a symlink in the experiment directory +# that will point to a subdirectory (having the name of the grid being +# used) under this directory. This variable should be set to a null +# string in this file, but it can be specified in the user-specified +# workflow configuration file (EXPT_CONFIG_FN). +# +# RUN_TASK_GET_EXTRN_ICS: +# Flag that determines whether the GET_EXTRN_ICS_TN task is to be run. +# +# RUN_TASK_GET_EXTRN_LBCS: +# Flag that determines whether the GET_EXTRN_LBCS_TN task is to be run. +# +# RUN_TASK_MAKE_ICS: +# Flag that determines whether the MAKE_ICS_TN task is to be run. +# +# RUN_TASK_MAKE_LBCS: +# Flag that determines whether the MAKE_LBCS_TN task is to be run. +# +# RUN_TASK_RUN_FCST: +# Flag that determines whether the RUN_FCST_TN task is to be run. +# +# RUN_TASK_RUN_POST: +# Flag that determines whether the RUN_POST_TN task is to be run. +# +# RUN_TASK_VX_GRIDSTAT: +# Flag that determines whether the grid-stat verification task is to be +# run. +# +# RUN_TASK_VX_POINTSTAT: +# Flag that determines whether the point-stat verification task is to be +# run. +# +# RUN_TASK_VX_ENSGRID: +# Flag that determines whether the ensemble-stat verification for gridded +# data task is to be run. +# +# RUN_TASK_VX_ENSPOINT: +# Flag that determines whether the ensemble point verification task is +# to be run. If this flag is set, both ensemble-stat point verification +# and point verification of ensemble-stat output is computed. +# +#----------------------------------------------------------------------- +# +RUN_TASK_MAKE_GRID: "TRUE" +GRID_DIR: "/path/to/pregenerated/grid/files" + +RUN_TASK_MAKE_OROG: "TRUE" +OROG_DIR: "/path/to/pregenerated/orog/files" + +RUN_TASK_MAKE_SFC_CLIMO: "TRUE" +SFC_CLIMO_DIR: "/path/to/pregenerated/surface/climo/files" + +DOMAIN_PREGEN_BASEDIR: "" + +RUN_TASK_GET_EXTRN_ICS: "TRUE" +RUN_TASK_GET_EXTRN_LBCS: "TRUE" +RUN_TASK_MAKE_ICS: "TRUE" +RUN_TASK_MAKE_LBCS: "TRUE" +RUN_TASK_RUN_FCST: "TRUE" +RUN_TASK_RUN_POST: "TRUE" + +RUN_TASK_GET_OBS_CCPA: "FALSE" +RUN_TASK_GET_OBS_MRMS: "FALSE" +RUN_TASK_GET_OBS_NDAS: "FALSE" +RUN_TASK_VX_GRIDSTAT: "FALSE" +RUN_TASK_VX_POINTSTAT: "FALSE" +RUN_TASK_VX_ENSGRID: "FALSE" +RUN_TASK_VX_ENSPOINT: "FALSE" +# +#----------------------------------------------------------------------- +# +# Flag that determines whether MERRA2 aerosol climatology data and +# lookup tables for optics properties are obtained +# +#----------------------------------------------------------------------- +# +USE_MERRA_CLIMO: "FALSE" +# +#----------------------------------------------------------------------- +# +# Set the array parameter containing the names of all the fields that the +# MAKE_SFC_CLIMO_TN task generates on the native FV3-LAM grid. +# +#----------------------------------------------------------------------- +# +SFC_CLIMO_FIELDS: [ +"facsf", +"maximum_snow_albedo", +"slope_type", +"snowfree_albedo", +"soil_type", +"substrate_temperature", +"vegetation_greenness", +"vegetation_type" +] +# +#----------------------------------------------------------------------- +# +# Set parameters associated with the fixed (i.e. static) files. 
Definitions: +# +# FIXgsm: +# System directory in which the majority of fixed (i.e. time-independent) +# files that are needed to run the FV3-LAM model are located +# +# FIXaer: +# System directory where MERRA2 aerosol climatology files are located +# +# FIXlut: +# System directory where the lookup tables for optics properties are located +# +# TOPO_DIR: +# The location on disk of the static input files used by the make_orog +# task (orog.x and shave.x). Can be the same as FIXgsm. +# +# SFC_CLIMO_INPUT_DIR: +# The location on disk of the static surface climatology input fields, used by +# sfc_climo_gen. These files are only used if RUN_TASK_MAKE_SFC_CLIMO=TRUE +# +# FNGLAC, ..., FNMSKH: +# Names of (some of the) global data files that are assumed to exist in +# a system directory specified (this directory is machine-dependent; +# the experiment generation scripts will set it and store it in the +# variable FIXgsm). These file names also appear directly in the forecast +# model's input namelist file. +# +# FIXgsm_FILES_TO_COPY_TO_FIXam: +# If not running in NCO mode, this array contains the names of the files +# to copy from the FIXgsm system directory to the FIXam directory under +# the experiment directory. Note that the last element has a dummy value. +# This last element will get reset by the workflow generation scripts to +# the name of the ozone production/loss file to copy from FIXgsm. The +# name of this file depends on the ozone parameterization being used, +# and that in turn depends on the CCPP physics suite specified for the +# experiment. Thus, the CCPP physics suite XML must first be read in to +# determine the ozone parameterizaton and then the name of the ozone +# production/loss file. These steps are carried out elsewhere (in one +# of the workflow generation scripts/functions). +# +# FV3_NML_VARNAME_TO_FIXam_FILES_MAPPING: +# This array is used to set some of the namelist variables in the forecast +# model's namelist file that represent the relative or absolute paths of +# various fixed files (the first column of the array, where columns are +# delineated by the pipe symbol "|") to the full paths to these files in +# the FIXam directory derived from the corresponding workflow variables +# containing file names (the second column of the array). +# +# FV3_NML_VARNAME_TO_SFC_CLIMO_FIELD_MAPPING: +# This array is used to set some of the namelist variables in the forecast +# model's namelist file that represent the relative or absolute paths of +# various fixed files (the first column of the array, where columns are +# delineated by the pipe symbol "|") to the full paths to surface climatology +# files (on the native FV3-LAM grid) in the FIXLAM directory derived from +# the corresponding surface climatology fields (the second column of the +# array). +# +# CYCLEDIR_LINKS_TO_FIXam_FILES_MAPPING: +# This array specifies the mapping to use between the symlinks that need +# to be created in each cycle directory (these are the "files" that FV3 +# looks for) and their targets in the FIXam directory. The first column +# of the array specifies the symlink to be created, and the second column +# specifies its target file in FIXam (where columns are delineated by the +# pipe symbol "|"). 
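All three mapping arrays described above use the same "NAME | value" convention, so any consumer simply splits each entry on the pipe. The sketch below shows that parsing; it mirrors the regular expression used later in generate_FV3LAM_wflow.py, relaxed so that an empty second column (as in the "global_o3prdlos.f77 | " entry) is accepted. The helper name parse_pipe_mapping is illustrative.

```
import re

# Parse "NAME | value" entries from the mapping arrays above into a dict.
_MAPPING_RE = re.compile(r"^[ ]*([^| ]+)[ ]*[|][ ]*([^| ]*)[ ]*$")

def parse_pipe_mapping(entries):
    mapping = {}
    for entry in entries:
        m = _MAPPING_RE.match(entry)
        if m is None:
            raise ValueError(f"Malformed mapping entry: {entry!r}")
        name, value = m.groups()
        mapping[name] = value
    return mapping

print(parse_pipe_mapping(["FNGLAC | global_glacier.2x2.grb",
                          "aerosol.dat | global_climaeropac_global.txt",
                          "global_o3prdlos.f77 | "]))
# {'FNGLAC': 'global_glacier.2x2.grb',
#  'aerosol.dat': 'global_climaeropac_global.txt',
#  'global_o3prdlos.f77': ''}
```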
+# +#----------------------------------------------------------------------- +# +# Because the default values are dependent on the platform, we set these +# to a null string which will then be overwritten in setup.sh unless the +# user has specified a different value in config.sh +FIXgsm: "" +FIXaer: "" +FIXlut: "" +TOPO_DIR: "" +SFC_CLIMO_INPUT_DIR: "" + +FNGLAC: &FNGLAC "global_glacier.2x2.grb" +FNMXIC: &FNMXIC "global_maxice.2x2.grb" +FNTSFC: &FNTSFC "RTGSST.1982.2012.monthly.clim.grb" +FNSNOC: &FNSNOC "global_snoclim.1.875.grb" +FNZORC: &FNZORC "igbp" +FNAISC: &FNAISC "CFSR.SEAICE.1982.2012.monthly.clim.grb" +FNSMCC: &FNSMCC "global_soilmgldas.t126.384.190.grb" +FNMSKH: &FNMSKH "seaice_newland.grb" + +FIXgsm_FILES_TO_COPY_TO_FIXam: [ +*FNGLAC, +*FNMXIC, +*FNTSFC, +*FNSNOC, +*FNAISC, +*FNSMCC, +*FNMSKH, +"global_climaeropac_global.txt", +"fix_co2_proj/global_co2historicaldata_2010.txt", +"fix_co2_proj/global_co2historicaldata_2011.txt", +"fix_co2_proj/global_co2historicaldata_2012.txt", +"fix_co2_proj/global_co2historicaldata_2013.txt", +"fix_co2_proj/global_co2historicaldata_2014.txt", +"fix_co2_proj/global_co2historicaldata_2015.txt", +"fix_co2_proj/global_co2historicaldata_2016.txt", +"fix_co2_proj/global_co2historicaldata_2017.txt", +"fix_co2_proj/global_co2historicaldata_2018.txt", +"fix_co2_proj/global_co2historicaldata_2019.txt", +"fix_co2_proj/global_co2historicaldata_2020.txt", +"fix_co2_proj/global_co2historicaldata_2021.txt", +"global_co2historicaldata_glob.txt", +"co2monthlycyc.txt", +"global_h2o_pltc.f77", +"global_hyblev.l65.txt", +"global_zorclim.1x1.grb", +"global_sfc_emissivity_idx.txt", +"global_tg3clim.2.6x1.5.grb", +"global_solarconstant_noaa_an.txt", +"global_albedo4.1x1.grb", +"geo_em.d01.lat-lon.2.5m.HGT_M.nc", +"HGT.Beljaars_filtered.lat-lon.30s_res.nc", +"replace_with_FIXgsm_ozone_prodloss_filename" +] + +# +# It is possible to remove this as a workflow variable and make it only +# a local one since it is used in only one script. 
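The &FNGLAC/*FNGLAC pairs above are ordinary YAML anchors and aliases: they let FIXgsm_FILES_TO_COPY_TO_FIXam (and the mapping arrays in the next hunk) reuse the FN* file names without repeating them. The !join_str tag used in those mapping arrays is a custom tag that the workflow's YAML loader has to register; the actual constructor is presumably provided by the python_utils package and may differ in detail, but a minimal PyYAML sketch of the mechanism looks like this.

```
import yaml

# Minimal sketch of a "!join_str" constructor (illustrative, not the workflow's own).
def _join_str(loader, node):
    return "".join(str(s) for s in loader.construct_sequence(node, deep=True))

yaml.SafeLoader.add_constructor("!join_str", _join_str)

doc = """
FNGLAC: &FNGLAC "global_glacier.2x2.grb"
FIXgsm_FILES_TO_COPY_TO_FIXam: [*FNGLAC]
FV3_NML_VARNAME_TO_FIXam_FILES_MAPPING: [!join_str ["FNGLAC | ", *FNGLAC]]
"""
cfg = yaml.safe_load(doc)
print(cfg["FV3_NML_VARNAME_TO_FIXam_FILES_MAPPING"])  # ['FNGLAC | global_glacier.2x2.grb']
```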
+# +FV3_NML_VARNAME_TO_FIXam_FILES_MAPPING: [ +!join_str ["FNGLAC | ",*FNGLAC], +!join_str ["FNMXIC | ",*FNMXIC], +!join_str ["FNTSFC | ",*FNTSFC], +!join_str ["FNSNOC | ",*FNSNOC], +!join_str ["FNAISC | ",*FNAISC], +!join_str ["FNSMCC | ",*FNSMCC], +!join_str ["FNMSKH | ",*FNMSKH] +] +#"FNZORC | $FNZORC", + +FV3_NML_VARNAME_TO_SFC_CLIMO_FIELD_MAPPING: [ +"FNALBC | snowfree_albedo", +"FNALBC2 | facsf", +"FNTG3C | substrate_temperature", +"FNVEGC | vegetation_greenness", +"FNVETC | vegetation_type", +"FNSOTC | soil_type", +"FNVMNC | vegetation_greenness", +"FNVMXC | vegetation_greenness", +"FNSLPC | slope_type", +"FNABSC | maximum_snow_albedo" +] + +CYCLEDIR_LINKS_TO_FIXam_FILES_MAPPING: [ +"aerosol.dat | global_climaeropac_global.txt", +"co2historicaldata_2010.txt | fix_co2_proj/global_co2historicaldata_2010.txt", +"co2historicaldata_2011.txt | fix_co2_proj/global_co2historicaldata_2011.txt", +"co2historicaldata_2012.txt | fix_co2_proj/global_co2historicaldata_2012.txt", +"co2historicaldata_2013.txt | fix_co2_proj/global_co2historicaldata_2013.txt", +"co2historicaldata_2014.txt | fix_co2_proj/global_co2historicaldata_2014.txt", +"co2historicaldata_2015.txt | fix_co2_proj/global_co2historicaldata_2015.txt", +"co2historicaldata_2016.txt | fix_co2_proj/global_co2historicaldata_2016.txt", +"co2historicaldata_2017.txt | fix_co2_proj/global_co2historicaldata_2017.txt", +"co2historicaldata_2018.txt | fix_co2_proj/global_co2historicaldata_2018.txt", +"co2historicaldata_2019.txt | fix_co2_proj/global_co2historicaldata_2019.txt", +"co2historicaldata_2020.txt | fix_co2_proj/global_co2historicaldata_2020.txt", +"co2historicaldata_2021.txt | fix_co2_proj/global_co2historicaldata_2021.txt", +"co2historicaldata_glob.txt | global_co2historicaldata_glob.txt", +"co2monthlycyc.txt | co2monthlycyc.txt", +"global_h2oprdlos.f77 | global_h2o_pltc.f77", +"global_albedo4.1x1.grb | global_albedo4.1x1.grb", +"global_zorclim.1x1.grb | global_zorclim.1x1.grb", +"global_tg3clim.2.6x1.5.grb | global_tg3clim.2.6x1.5.grb", +"sfc_emissivity_idx.txt | global_sfc_emissivity_idx.txt", +"solarconstant_noaa_an.txt | global_solarconstant_noaa_an.txt", +"global_o3prdlos.f77 | " +] +# +#----------------------------------------------------------------------- +# +# For each workflow task, set the parameters to pass to the job scheduler +# (e.g. slurm) that will submit a job for each task to be run. These +# parameters include the number of nodes to use to run the job, the MPI +# processes per node, the maximum walltime to allow for the job to complete, +# and the maximum number of times to attempt to run each task. +# +#----------------------------------------------------------------------- +# +# Number of nodes. +# +NNODES_MAKE_GRID: "1" +NNODES_MAKE_OROG: "1" +NNODES_MAKE_SFC_CLIMO: "2" +NNODES_GET_EXTRN_ICS: "1" +NNODES_GET_EXTRN_LBCS: "1" +NNODES_MAKE_ICS: "4" +NNODES_MAKE_LBCS: "4" +NNODES_RUN_FCST: "" # This is calculated in the workflow generation scripts, so no need to set here. +NNODES_RUN_POST: "2" +NNODES_GET_OBS_CCPA: "1" +NNODES_GET_OBS_MRMS: "1" +NNODES_GET_OBS_NDAS: "1" +NNODES_VX_GRIDSTAT: "1" +NNODES_VX_POINTSTAT: "1" +NNODES_VX_ENSGRID: "1" +NNODES_VX_ENSGRID_MEAN: "1" +NNODES_VX_ENSGRID_PROB: "1" +NNODES_VX_ENSPOINT: "1" +NNODES_VX_ENSPOINT_MEAN: "1" +NNODES_VX_ENSPOINT_PROB: "1" +# +# Number of MPI processes per node. 
+# +PPN_MAKE_GRID: "24" +PPN_MAKE_OROG: "24" +PPN_MAKE_SFC_CLIMO: "24" +PPN_GET_EXTRN_ICS: "1" +PPN_GET_EXTRN_LBCS: "1" +PPN_MAKE_ICS: "12" +PPN_MAKE_LBCS: "12" +PPN_RUN_FCST: "" # will be calculated from NCORES_PER_NODE and OMP_NUM_THREADS in setup.sh +PPN_RUN_POST: "24" +PPN_GET_OBS_CCPA: "1" +PPN_GET_OBS_MRMS: "1" +PPN_GET_OBS_NDAS: "1" +PPN_VX_GRIDSTAT: "1" +PPN_VX_POINTSTAT: "1" +PPN_VX_ENSGRID: "1" +PPN_VX_ENSGRID_MEAN: "1" +PPN_VX_ENSGRID_PROB: "1" +PPN_VX_ENSPOINT: "1" +PPN_VX_ENSPOINT_MEAN: "1" +PPN_VX_ENSPOINT_PROB: "1" +# +# Walltimes. +# +WTIME_MAKE_GRID: "00:20:00" +WTIME_MAKE_OROG: "01:00:00" +WTIME_MAKE_SFC_CLIMO: "00:20:00" +WTIME_GET_EXTRN_ICS: "00:45:00" +WTIME_GET_EXTRN_LBCS: "00:45:00" +WTIME_MAKE_ICS: "00:30:00" +WTIME_MAKE_LBCS: "00:30:00" +WTIME_RUN_FCST: "04:30:00" +WTIME_RUN_POST: "00:15:00" +WTIME_GET_OBS_CCPA: "00:45:00" +WTIME_GET_OBS_MRMS: "00:45:00" +WTIME_GET_OBS_NDAS: "02:00:00" +WTIME_VX_GRIDSTAT: "02:00:00" +WTIME_VX_POINTSTAT: "01:00:00" +WTIME_VX_ENSGRID: "01:00:00" +WTIME_VX_ENSGRID_MEAN: "01:00:00" +WTIME_VX_ENSGRID_PROB: "01:00:00" +WTIME_VX_ENSPOINT: "01:00:00" +WTIME_VX_ENSPOINT_MEAN: "01:00:00" +WTIME_VX_ENSPOINT_PROB: "01:00:00" +# +# Maximum number of attempts. +# +MAXTRIES_MAKE_GRID: "2" +MAXTRIES_MAKE_OROG: "2" +MAXTRIES_MAKE_SFC_CLIMO: "2" +MAXTRIES_GET_EXTRN_ICS: "1" +MAXTRIES_GET_EXTRN_LBCS: "1" +MAXTRIES_MAKE_ICS: "1" +MAXTRIES_MAKE_LBCS: "1" +MAXTRIES_RUN_FCST: "1" +MAXTRIES_RUN_POST: "2" +MAXTRIES_GET_OBS_CCPA: "1" +MAXTRIES_GET_OBS_MRMS: "1" +MAXTRIES_GET_OBS_NDAS: "1" +MAXTRIES_VX_GRIDSTAT: "1" +MAXTRIES_VX_GRIDSTAT_REFC: "1" +MAXTRIES_VX_GRIDSTAT_RETOP: "1" +MAXTRIES_VX_GRIDSTAT_03h: "1" +MAXTRIES_VX_GRIDSTAT_06h: "1" +MAXTRIES_VX_GRIDSTAT_24h: "1" +MAXTRIES_VX_POINTSTAT: "1" +MAXTRIES_VX_ENSGRID: "1" +MAXTRIES_VX_ENSGRID_REFC: "1" +MAXTRIES_VX_ENSGRID_RETOP: "1" +MAXTRIES_VX_ENSGRID_03h: "1" +MAXTRIES_VX_ENSGRID_06h: "1" +MAXTRIES_VX_ENSGRID_24h: "1" +MAXTRIES_VX_ENSGRID_MEAN: "1" +MAXTRIES_VX_ENSGRID_PROB: "1" +MAXTRIES_VX_ENSGRID_MEAN_03h: "1" +MAXTRIES_VX_ENSGRID_PROB_03h: "1" +MAXTRIES_VX_ENSGRID_MEAN_06h: "1" +MAXTRIES_VX_ENSGRID_PROB_06h: "1" +MAXTRIES_VX_ENSGRID_MEAN_24h: "1" +MAXTRIES_VX_ENSGRID_PROB_24h: "1" +MAXTRIES_VX_ENSGRID_PROB_REFC: "1" +MAXTRIES_VX_ENSGRID_PROB_RETOP: "1" +MAXTRIES_VX_ENSPOINT: "1" +MAXTRIES_VX_ENSPOINT_MEAN: "1" +MAXTRIES_VX_ENSPOINT_PROB: "1" + +# +#----------------------------------------------------------------------- +# +# Allows an extra parameter to be passed to slurm via XML Native +# command +# +SLURM_NATIVE_CMD: "" +# +#----------------------------------------------------------------------- +# +# Set parameters associated with subhourly forecast model output and +# post-processing. +# +# SUB_HOURLY_POST: +# Flag that indicates whether the forecast model will generate output +# files on a sub-hourly time interval (e.g. 10 minutes, 15 minutes, etc). +# This will also cause the post-processor to process these sub-hourly +# files. If ths is set to "TRUE", then DT_SUBHOURLY_POST_MNTS should be +# set to a value between "00" and "59". +# +# DT_SUB_HOURLY_POST_MNTS: +# Time interval in minutes between the forecast model output files. If +# SUB_HOURLY_POST is set to "TRUE", this needs to be set to a two-digit +# integer between "01" and "59". This is not used if SUB_HOURLY_POST is +# not set to "TRUE". 
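For reference, the conversion of this sub-hourly interval into forecast model time steps is performed in create_model_configure_file.py further below; the worked sketch here only restates that arithmetic together with the divisibility requirement described there. The function name and error message are illustrative.

```
# Worked sketch of the sub-hourly output arithmetic used in
# create_model_configure_file.py below: the interval in minutes becomes a
# number of model time steps (nsout) and must be evenly divisible by DT_ATMOS.
def compute_nsout(dt_subhourly_post_mnts: int, dt_atmos: int) -> int:
    interval_s = dt_subhourly_post_mnts * 60
    if interval_s % dt_atmos != 0:
        raise ValueError(f"{interval_s} s is not evenly divisible by DT_ATMOS={dt_atmos} s")
    return interval_s // dt_atmos

print(compute_nsout(15, 36))   # 25 -> output every 25 time steps
```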
Note that if SUB_HOURLY_POST is set to "TRUE" but +# DT_SUBHOURLY_POST_MNTS is set to "00", SUB_HOURLY_POST will get reset +# to "FALSE" in the experiment generation scripts (there will be an +# informational message in the log file to emphasize this). +# +#----------------------------------------------------------------------- +# +SUB_HOURLY_POST: "FALSE" +DT_SUBHOURLY_POST_MNTS: "00" +# +#----------------------------------------------------------------------- +# +# Set parameters associated with defining a customized post configuration +# file. +# +# USE_CUSTOM_POST_CONFIG_FILE: +# Flag that determines whether a user-provided custom configuration file +# should be used for post-processing the model data. If this is set to +# "TRUE", then the workflow will use the custom post-processing (UPP) +# configuration file specified in CUSTOM_POST_CONFIG_FP. Otherwise, a +# default configuration file provided in the UPP repository will be +# used. +# +# CUSTOM_POST_CONFIG_FP: +# The full path to the custom post flat file, including filename, to be +# used for post-processing. This is only used if USE_CUSTOM_POST_CONFIG_FILE +# is set to "TRUE". +# +#----------------------------------------------------------------------- +# +USE_CUSTOM_POST_CONFIG_FILE: "FALSE" +CUSTOM_POST_CONFIG_FP: "" +# +#----------------------------------------------------------------------- +# +# Set parameters associated with outputting satellite fields in the UPP +# grib2 files using the Community Radiative Transfer Model (CRTM). +# +# USE_CRTM: +# Flag that defines whether external CRTM coefficient files have been +# staged by the user in order to output synthetic satellite products +# available within the UPP. If this is set to "TRUE", then the workflow +# will check for these files in the directory CRTM_DIR. Otherwise, it is +# assumed that no satellite fields are being requested in the UPP +# configuration. +# +# CRTM_DIR: +# This is the path to the top CRTM fix file directory. This is only used +# if USE_CRTM is set to "TRUE". +# +#----------------------------------------------------------------------- +# +USE_CRTM: "FALSE" +CRTM_DIR: "" +# +#----------------------------------------------------------------------- +# +# Set parameters associated with running ensembles. Definitions: +# +# DO_ENSEMBLE: +# Flag that determines whether to run a set of ensemble forecasts (for +# each set of specified cycles). If this is set to "TRUE", NUM_ENS_MEMBERS +# forecasts are run for each cycle, each with a different set of stochastic +# seed values. Otherwise, a single forecast is run for each cycle. +# +# NUM_ENS_MEMBERS: +# The number of ensemble members to run if DO_ENSEMBLE is set to "TRUE". +# This variable also controls the naming of the ensemble member directories. +# For example, if this is set to "8", the member directories will be named +# mem1, mem2, ..., mem8. If it is set to "08" (note the leading zero), +# the member directories will be named mem01, mem02, ..., mem08. Note, +# however, that after reading in the number of characters in this string +# (in order to determine how many leading zeros, if any, should be placed +# in the names of the member directories), the workflow generation scripts +# strip away those leading zeros. Thus, in the variable definitions file +# (GLOBAL_VAR_DEFNS_FN), this variable appears with its leading zeros +# stripped. This variable is not used if DO_ENSEMBLE is not set to "TRUE".
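The member-directory naming rule described above (the string width of NUM_ENS_MEMBERS sets the zero padding, after which the leading zeros are stripped from the value itself) can be summarized with the short sketch below; the helper name is illustrative.

```
# Sketch of the NUM_ENS_MEMBERS naming rule described above.
def ensmem_dir_names(num_ens_members: str):
    ndigits = len(num_ens_members)      # "8" -> width 1, "08" -> width 2
    nmems = int(num_ens_members)        # leading zeros stripped for the count
    return [f"mem{i:0{ndigits}d}" for i in range(1, nmems + 1)]

print(ensmem_dir_names("8"))    # ['mem1', 'mem2', ..., 'mem8']
print(ensmem_dir_names("08"))   # ['mem01', 'mem02', ..., 'mem08']
```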
+# +#----------------------------------------------------------------------- +# +DO_ENSEMBLE: "FALSE" +NUM_ENS_MEMBERS: "1" +# +#----------------------------------------------------------------------- +# +# Set default ad-hoc stochastic physics options. +# For detailed documentation of these parameters, see: +# https://stochastic-physics.readthedocs.io/en/ufs_public_release/namelist_options.html +# +#----------------------------------------------------------------------- +# +DO_SHUM: "FALSE" +DO_SPPT: "FALSE" +DO_SKEB: "FALSE" +ISEED_SPPT: "1" +ISEED_SHUM: "2" +ISEED_SKEB: "3" +NEW_LSCALE: "TRUE" +SHUM_MAG: "0.006" #Variable "shum" in input.nml +SHUM_LSCALE: "150000" +SHUM_TSCALE: "21600" #Variable "shum_tau" in input.nml +SHUM_INT: "3600" #Variable "shumint" in input.nml +SPPT_MAG: "0.7" #Variable "sppt" in input.nml +SPPT_LOGIT: "TRUE" +SPPT_LSCALE: "150000" +SPPT_TSCALE: "21600" #Variable "sppt_tau" in input.nml +SPPT_INT: "3600" #Variable "spptint" in input.nml +SPPT_SFCLIMIT: "TRUE" +SKEB_MAG: "0.5" #Variable "skeb" in input.nml +SKEB_LSCALE: "150000" +SKEB_TSCALE: "21600" #Variable "skeb_tau" in input.nml +SKEB_INT: "3600" #Variable "skebint" in input.nml +SKEBNORM: "1" +SKEB_VDOF: "10" +USE_ZMTNBLCK: "FALSE" +# +#----------------------------------------------------------------------- +# +# Set default SPP stochastic physics options. Each SPP option is an array, +# applicable (in order) to the scheme/parameter listed in SPP_VAR_LIST. +# Enter each value of the array in config.sh as shown below without commas +# or single quotes (e.g., SPP_VAR_LIST=( "pbl" "sfc" "mp" "rad" "gwd" ). +# Both commas and single quotes will be added by Jinja when creating the +# namelist. +# +# Note that SPP is currently only available for specific physics schemes +# used in the RAP/HRRR physics suite. Users need to be aware of which SDF +# is chosen when turning this option on. +# +# Patterns evolve and are applied at each time step. +# +#----------------------------------------------------------------------- +# +DO_SPP: "FALSE" +SPP_VAR_LIST: [ "pbl", "sfc", "mp", "rad", "gwd" ] +SPP_MAG_LIST: [ "0.2", "0.2", "0.75", "0.2", "0.2" ] #Variable "spp_prt_list" in input.nml +SPP_LSCALE: [ "150000.0", "150000.0", "150000.0", "150000.0", "150000.0" ] +SPP_TSCALE: [ "21600.0", "21600.0", "21600.0", "21600.0", "21600.0" ] #Variable "spp_tau" in input.nml +SPP_SIGTOP1: [ "0.1", "0.1", "0.1", "0.1", "0.1" ] +SPP_SIGTOP2: [ "0.025", "0.025", "0.025", "0.025", "0.025" ] +SPP_STDDEV_CUTOFF: [ "1.5", "1.5", "2.5", "1.5", "1.5" ] +ISEED_SPP: [ "4", "4", "4", "4", "4" ] +# +#----------------------------------------------------------------------- +# +# Turn on SPP in Noah or RUC LSM (support for Noah MP is in progress). +# Please be aware of the SDF that you choose if you wish to turn on LSM +# SPP. +# +# SPP in LSM schemes is handled in the &nam_sfcperts namelist block +# instead of in &nam_sppperts, where all other SPP is implemented. +# +# The default perturbation frequency is determined by the fhcyc namelist +# entry. Since that parameter is set to zero in the SRW App, use +# LSM_SPP_EACH_STEP to perturb every time step. +# +# Perturbations to soil moisture content (SMC) are only applied at the +# first time step. +# +# LSM perturbations include SMC - soil moisture content (volume fraction), +# VGF - vegetation fraction, ALB - albedo, SAL - salinity, +# EMI - emissivity, ZOL - surface roughness (cm), and STC - soil temperature. 
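As with the atmospheric SPP arrays above, the LSM SPP arrays in the next hunk are parallel lists: each magnitude, length scale, and time scale applies, by position, to the field named in LSM_SPP_VAR_LIST. The pairing below is only an illustration of how those lists line up, not workflow code.

```
# Illustrative pairing of the parallel LSM SPP lists defined in the next hunk.
lsm_spp_var_list = ["smc", "vgf", "alb", "sal", "emi", "zol", "stc"]
lsm_spp_mag_list = [0.2, 0.001, 0.001, 0.001, 0.001, 0.001, 0.2]

lsm_spp_mags = dict(zip(lsm_spp_var_list, lsm_spp_mag_list))
print(lsm_spp_mags["smc"])   # 0.2 -> perturbation magnitude for soil moisture content
```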
+# +# Only five perturbations at a time can be applied currently, but all seven +# are shown below. In addition, only one unique iseed value is allowed +# at the moment, and is used for each pattern. +# +DO_LSM_SPP: "FALSE" #If true, sets lndp_type=2 +LSM_SPP_TSCALE: [ "21600", "21600", "21600", "21600", "21600", "21600", "21600" ] +LSM_SPP_LSCALE: [ "150000", "150000", "150000", "150000", "150000", "150000", "150000" ] +ISEED_LSM_SPP: [ "9" ] +LSM_SPP_VAR_LIST: [ "smc", "vgf", "alb", "sal", "emi", "zol", "stc" ] +LSM_SPP_MAG_LIST: [ "0.2", "0.001", "0.001", "0.001", "0.001", "0.001", "0.2" ] +LSM_SPP_EACH_STEP: "TRUE" #Sets lndp_each_step=.true. +# +#----------------------------------------------------------------------- +# +# HALO_BLEND: +# Number of rows into the computational domain that should be blended +# with the LBCs. To shut halo blending off, this can be set to zero. +# +#----------------------------------------------------------------------- +# +HALO_BLEND: "10" +# +#----------------------------------------------------------------------- +# +# USE_FVCOM: +# Flag set to update surface conditions in FV3-LAM with fields generated +# from the Finite Volume Community Ocean Model (FVCOM). This will +# replace lake/sea surface temperature, ice surface temperature, and ice +# placement. FVCOM data must already be interpolated to the desired +# FV3-LAM grid. This flag will be used in make_ics to modify sfc_data.nc +# after chgres_cube is run by running the routine process_FVCOM.exe +# +# FVCOM_WCSTART: +# Define if this is a "warm" start or a "cold" start. Setting this to +# "warm" will read in sfc_data.nc generated in a RESTART directory. +# Setting this to "cold" will read in the sfc_data.nc generated from +# chgres_cube in the make_ics portion of the workflow. +# +# FVCOM_DIR: +# User defined directory where FVCOM data already interpolated to FV3-LAM +# grid is located. File name in this path should be "fvcom.nc" to allow +# +# FVCOM_FILE: +# Name of file located in FVCOM_DIR that has FVCOM data interpolated to +# FV3-LAM grid. This file will be copied later to a new location and name +# changed to fvcom.nc +# +#------------------------------------------------------------------------ +# +USE_FVCOM: "FALSE" +FVCOM_WCSTART: "cold" +FVCOM_DIR: "/user/defined/dir/to/fvcom/data" +FVCOM_FILE: "fvcom.nc" +# +#----------------------------------------------------------------------- +# +# COMPILER: +# Type of compiler invoked during the build step. +# +#------------------------------------------------------------------------ +# +COMPILER: "intel" +# +#----------------------------------------------------------------------- +# +# KMP_AFFINITY_*: +# From Intel: "The Intel® runtime library has the ability to bind OpenMP +# threads to physical processing units. The interface is controlled using +# the KMP_AFFINITY environment variable. Depending on the system (machine) +# topology, application, and operating system, thread affinity can have a +# dramatic effect on the application speed. +# +# Thread affinity restricts execution of certain threads (virtual execution +# units) to a subset of the physical processing units in a multiprocessor +# computer. Depending upon the topology of the machine, thread affinity can +# have a dramatic effect on the execution speed of a program." 
+# +# For more information, see the following link: +# https://software.intel.com/content/www/us/en/develop/documentation/cpp- +# compiler-developer-guide-and-reference/top/optimization-and-programming- +# guide/openmp-support/openmp-library-support/thread-affinity-interface- +# linux-and-windows.html +# +# OMP_NUM_THREADS_*: +# The number of OpenMP threads to use for parallel regions. +# +# OMP_STACKSIZE_*: +# Controls the size of the stack for threads created by the OpenMP +# implementation. +# +# Note that settings for the make_grid task are not included below as it +# does not use parallelized code. +# +#----------------------------------------------------------------------- +# +KMP_AFFINITY_MAKE_OROG: "disabled" +OMP_NUM_THREADS_MAKE_OROG: "6" +OMP_STACKSIZE_MAKE_OROG: "2048m" + +KMP_AFFINITY_MAKE_SFC_CLIMO: "scatter" +OMP_NUM_THREADS_MAKE_SFC_CLIMO: "1" +OMP_STACKSIZE_MAKE_SFC_CLIMO: "1024m" + +KMP_AFFINITY_MAKE_ICS: "scatter" +OMP_NUM_THREADS_MAKE_ICS: "1" +OMP_STACKSIZE_MAKE_ICS: "1024m" + +KMP_AFFINITY_MAKE_LBCS: "scatter" +OMP_NUM_THREADS_MAKE_LBCS: "1" +OMP_STACKSIZE_MAKE_LBCS: "1024m" + +KMP_AFFINITY_RUN_FCST: "scatter" +OMP_NUM_THREADS_RUN_FCST: "2" # atmos_nthreads in model_configure +OMP_STACKSIZE_RUN_FCST: "1024m" + +KMP_AFFINITY_RUN_POST: "scatter" +OMP_NUM_THREADS_RUN_POST: "1" +OMP_STACKSIZE_RUN_POST: "1024m" +# +#----------------------------------------------------------------------- +# diff --git a/ush/constants.py b/ush/constants.py new file mode 100644 index 000000000..e0ff60a0b --- /dev/null +++ b/ush/constants.py @@ -0,0 +1,27 @@ +#!/usr/bin/env python3 + +# +#----------------------------------------------------------------------- +# +# Mathematical and physical constants. +# +#----------------------------------------------------------------------- +# + +# Pi. +pi_geom = 3.14159265358979323846264338327 + +# Degrees per radian. +degs_per_radian = 360.0 / (2.0 * pi_geom) + +# Radius of the Earth in meters. +radius_Earth = 6371200.0 + +# +#----------------------------------------------------------------------- +# +# Other. +# +#----------------------------------------------------------------------- +# +valid_vals_BOOLEAN = [True, False] diff --git a/ush/create_diag_table_file.py b/ush/create_diag_table_file.py new file mode 100644 index 000000000..cd98fcec7 --- /dev/null +++ b/ush/create_diag_table_file.py @@ -0,0 +1,79 @@ +#!/usr/bin/env python3 + +import os +import unittest +from textwrap import dedent + +from python_utils import import_vars, set_env_var, print_input_args, \ + print_info_msg, print_err_msg_exit, cfg_to_yaml_str + +from fill_jinja_template import fill_jinja_template + +def create_diag_table_file(run_dir): + """ Creates a diagnostic table file for each cycle to be run + + Args: + run_dir: run directory + Returns: + Boolean + """ + + print_input_args(locals()) + + #import all environment variables + import_vars() + + #create a diagnostic table file within the specified run directory + print_info_msg(f''' + Creating a diagnostics table file (\"{DIAG_TABLE_FN}\") in the specified + run directory...
+ + run_dir = \"{run_dir}\"''', verbose=VERBOSE) + + diag_table_fp = os.path.join(run_dir, DIAG_TABLE_FN) + + print_info_msg(f''' + + Using the template diagnostics table file: + + diag_table_tmpl_fp = {DIAG_TABLE_TMPL_FP} + + to create: + + diag_table_fp = \"{diag_table_fp}\"''', verbose=VERBOSE) + + settings = { + 'starttime': CDATE, + 'cres': CRES + } + settings_str = cfg_to_yaml_str(settings) + + #call fill jinja + try: + fill_jinja_template(["-q", "-u", settings_str, "-t", DIAG_TABLE_TMPL_FP, "-o", diag_table_fp]) + except: + print_err_msg_exit(f''' + !!!!!!!!!!!!!!!!! + + fill_jinja_template.py failed! + + !!!!!!!!!!!!!!!!!''') + return False + return True + +class Testing(unittest.TestCase): + def test_create_diag_table_file(self): + path = os.path.join(os.getenv('USHDIR'), "test_data") + self.assertTrue(create_diag_table_file(run_dir=path)) + def setUp(self): + USHDIR = os.path.dirname(os.path.abspath(__file__)) + DIAG_TABLE_FN="diag_table" + DIAG_TABLE_TMPL_FP = os.path.join(USHDIR,"templates",f"{DIAG_TABLE_FN}.FV3_GFS_v15p2") + set_env_var('DEBUG',True) + set_env_var('VERBOSE',True) + set_env_var("USHDIR",USHDIR) + set_env_var("DIAG_TABLE_FN",DIAG_TABLE_FN) + set_env_var("DIAG_TABLE_TMPL_FP",DIAG_TABLE_TMPL_FP) + set_env_var("CRES","C48") + set_env_var("CDATE","2021010106") + diff --git a/ush/create_model_configure_file.py b/ush/create_model_configure_file.py new file mode 100644 index 000000000..9ae124139 --- /dev/null +++ b/ush/create_model_configure_file.py @@ -0,0 +1,243 @@ +#!/usr/bin/env python3 + +import os +import unittest +from datetime import datetime +from textwrap import dedent + +from python_utils import import_vars, set_env_var, print_input_args, \ + print_info_msg, print_err_msg_exit, lowercase, cfg_to_yaml_str + +from fill_jinja_template import fill_jinja_template + +def create_model_configure_file(cdate,run_dir,sub_hourly_post,dt_subhourly_post_mnts,dt_atmos): + """ Creates a model configuration file in the specified + run directory + + Args: + cdate: cycle date + run_dir: run directory + sub_hourly_post + dt_subhourly_post_mnts + dt_atmos + Returns: + Boolean + """ + + print_input_args(locals()) + + #import all environment variables + import_vars() + + # + #----------------------------------------------------------------------- + # + # Create a model configuration file in the specified run directory. + # + #----------------------------------------------------------------------- + # + print_info_msg(f''' + Creating a model configuration file (\"{MODEL_CONFIG_FN}\") in the specified + run directory (run_dir): + run_dir = \"{run_dir}\"''', verbose=VERBOSE) + # + # Extract from cdate the starting year, month, day, and hour of the forecast. + # + yyyy=cdate.year + mm=cdate.month + dd=cdate.day + hh=cdate.hour + # + # Set parameters in the model configure file. + # + dot_quilting_dot=f".{lowercase(str(QUILTING))}." + dot_print_esmf_dot=f".{lowercase(str(PRINT_ESMF))}." + dot_cpl_dot=f".{lowercase(str(CPL))}." + dot_write_dopost=f".{lowercase(str(WRITE_DOPOST))}." + # + #----------------------------------------------------------------------- + # + # Create a multiline variable that consists of a yaml-compliant string + # specifying the values that the jinja variables in the template + # model_configure file should be set to. 
+ # + #----------------------------------------------------------------------- + # + settings = { + 'PE_MEMBER01': PE_MEMBER01, + 'print_esmf': dot_print_esmf_dot, + 'start_year': yyyy, + 'start_month': mm, + 'start_day': dd, + 'start_hour': hh, + 'nhours_fcst': FCST_LEN_HRS, + 'dt_atmos': DT_ATMOS, + 'cpl': dot_cpl_dot, + 'atmos_nthreads': OMP_NUM_THREADS_RUN_FCST, + 'restart_interval': RESTART_INTERVAL, + 'write_dopost': dot_write_dopost, + 'quilting': dot_quilting_dot, + 'output_grid': WRTCMP_output_grid + } + # + # If the write-component is to be used, then specify a set of computational + # parameters and a set of grid parameters. The latter depends on the type + # (coordinate system) of the grid that the write-component will be using. + # + if QUILTING: + settings.update({ + 'write_groups': WRTCMP_write_groups, + 'write_tasks_per_group': WRTCMP_write_tasks_per_group, + 'cen_lon': WRTCMP_cen_lon, + 'cen_lat': WRTCMP_cen_lat, + 'lon1': WRTCMP_lon_lwr_left, + 'lat1': WRTCMP_lat_lwr_left + }) + + if WRTCMP_output_grid == "lambert_conformal": + settings.update({ + 'stdlat1': WRTCMP_stdlat1, + 'stdlat2': WRTCMP_stdlat2, + 'nx': WRTCMP_nx, + 'ny': WRTCMP_ny, + 'dx': WRTCMP_dx, + 'dy': WRTCMP_dy, + 'lon2': "", + 'lat2': "", + 'dlon': "", + 'dlat': "", + }) + elif WRTCMP_output_grid == "regional_latlon" or \ + WRTCMP_output_grid == "rotated_latlon": + settings.update({ + 'lon2': WRTCMP_lon_upr_rght, + 'lat2': WRTCMP_lat_upr_rght, + 'dlon': WRTCMP_dlon, + 'dlat': WRTCMP_dlat, + 'stdlat1': "", + 'stdlat2': "", + 'nx': "", + 'ny': "", + 'dx': "", + 'dy': "" + }) + # + # If sub_hourly_post is set to "TRUE", then the forecast model must be + # directed to generate output files on a sub-hourly interval. Do this + # by specifying the output interval in the model configuration file + # (MODEL_CONFIG_FN) in units of number of forecat model time steps (nsout). + # nsout is calculated using the user-specified output time interval + # dt_subhourly_post_mnts (in units of minutes) and the forecast model's + # main time step dt_atmos (in units of seconds). Note that nsout is + # guaranteed to be an integer because the experiment generation scripts + # require that dt_subhourly_post_mnts (after conversion to seconds) be + # evenly divisible by dt_atmos. Also, in this case, the variable output_fh + # [which specifies the output interval in hours; + # see the jinja model_config template file] is set to 0, although this + # doesn't matter because any positive of nsout will override output_fh. + # + # If sub_hourly_post is set to "FALSE", then the workflow is hard-coded + # (in the jinja model_config template file) to direct the forecast model + # to output files every hour. This is done by setting (1) output_fh to 1 + # here, and (2) nsout to -1 here which turns off output by time step interval. + # + # Note that the approach used here of separating how hourly and subhourly + # output is handled should be changed/generalized/simplified such that + # the user should only need to specify the output time interval (there + # should be no need to specify a flag like sub_hourly_post); the workflow + # should then be able to direct the model to output files with that time + # interval and to direct the post-processor to process those files + # regardless of whether that output time interval is larger than, equal + # to, or smaller than one hour. 
+ # + if sub_hourly_post: + nsout=dt_subhourly_post_mnts*60 / dt_atmos + output_fh=0 + else: + output_fh=1 + nsout=-1 + + settings.update({ + 'output_fh': output_fh, + 'nsout': nsout + }) + + settings_str = cfg_to_yaml_str(settings) + + print_info_msg(dedent(f''' + The variable \"settings\" specifying values to be used in the \"{MODEL_CONFIG_FN}\" + file has been set as follows: + #----------------------------------------------------------------------- + settings =\n''') + settings_str,verbose=VERBOSE) + # + #----------------------------------------------------------------------- + # + # Call a python script to generate the experiment's actual MODEL_CONFIG_FN + # file from the template file. + # + #----------------------------------------------------------------------- + # + model_config_fp=os.path.join(run_dir, MODEL_CONFIG_FN) + + try: + fill_jinja_template(["-q", "-u", settings_str, "-t", MODEL_CONFIG_TMPL_FP, "-o", model_config_fp]) + except: + print_err_msg_exit(f''' + Call to python script fill_jinja_template.py to create a \"{MODEL_CONFIG_FN}\" + file from a jinja2 template failed. Parameters passed to this script are: + Full path to template rocoto XML file: + MODEL_CONFIG_TMPL_FP = \"{MODEL_CONFIG_TMPL_FP}\" + Full path to output rocoto XML file: + model_config_fp = \"{model_config_fp}\" + Namelist settings specified on command line: + settings = + {settings_str}''') + return False + + return True + +class Testing(unittest.TestCase): + def test_create_model_configure_file(self): + path = os.path.join(os.getenv('USHDIR'), "test_data") + self.assertTrue(\ + create_model_configure_file( \ + run_dir=path, + cdate=datetime(2021,1,1), + sub_hourly_post=True, + dt_subhourly_post_mnts=4, + dt_atmos=1) ) + def setUp(self): + USHDIR = os.path.dirname(os.path.abspath(__file__)) + MODEL_CONFIG_FN='model_configure' + MODEL_CONFIG_TMPL_FP = os.path.join(USHDIR, "templates", MODEL_CONFIG_FN) + + set_env_var('DEBUG',True) + set_env_var('VERBOSE',True) + set_env_var('QUILTING',True) + set_env_var('PRINT_ESMF',True) + set_env_var('CPL',True) + set_env_var('WRITE_DOPOST',True) + set_env_var("USHDIR",USHDIR) + set_env_var('MODEL_CONFIG_FN',MODEL_CONFIG_FN) + set_env_var("MODEL_CONFIG_TMPL_FP",MODEL_CONFIG_TMPL_FP) + set_env_var('PE_MEMBER01',24) + set_env_var('FCST_LEN_HRS',72) + set_env_var('DT_ATMOS',1) + set_env_var('OMP_NUM_THREADS_RUN_FCST',1) + set_env_var('RESTART_INTERVAL',4) + + set_env_var('WRTCMP_write_groups',1) + set_env_var('WRTCMP_write_tasks_per_group',2) + set_env_var('WRTCMP_output_grid',"lambert_conformal") + set_env_var('WRTCMP_cen_lon',-97.5) + set_env_var('WRTCMP_cen_lat',35.0) + set_env_var('WRTCMP_stdlat1',35.0) + set_env_var('WRTCMP_stdlat2',35.0) + set_env_var('WRTCMP_nx',199) + set_env_var('WRTCMP_ny',111) + set_env_var('WRTCMP_lon_lwr_left',-121.23349066) + set_env_var('WRTCMP_lat_lwr_left',23.41731593) + set_env_var('WRTCMP_dx',3000.0) + set_env_var('WRTCMP_dy',3000.0) + + diff --git a/ush/fill_jinja_template.py b/ush/fill_jinja_template.py index 47885051b..74c348908 100755 --- a/ush/fill_jinja_template.py +++ b/ush/fill_jinja_template.py @@ -43,6 +43,7 @@ import datetime as dt import os +import sys import argparse import jinja2 as j2 @@ -155,7 +156,7 @@ def path_ok(arg): raise argparse.ArgumentTypeError(msg) -def parse_args(): +def parse_args(argv): ''' Function maintains the arguments accepted by this script. 
Please see @@ -195,7 +196,7 @@ def parse_args(): required=True, type=path_ok, ) - return parser.parse_args() + return parser.parse_args(argv) def update_dict(dest, newdict, quiet=False): @@ -231,13 +232,18 @@ def update_dict(dest, newdict, quiet=False): print('*' * 50) -def main(cla): +def fill_jinja_template(argv): ''' Loads a Jinja template, determines its necessary undefined variables, retrieves them from user-supplied settings, and renders the final result. ''' + # parse args + cla = parse_args(argv) + if cla.config: + cla.config = config_exists(cla.config) + # Create a Jinja Environment to load the template. env = j2.Environment(loader=j2.FileSystemLoader(cla.template)) template_source = env.loader.get_source(env, '') @@ -274,7 +280,5 @@ def main(cla): if __name__ == '__main__': - cla = parse_args() - if cla.config: - cla.config = config_exists(cla.config) - main(cla) + + fill_jinja_template(sys.argv[1:]) diff --git a/ush/generate_FV3LAM_wflow.py b/ush/generate_FV3LAM_wflow.py new file mode 100755 index 000000000..2391340c4 --- /dev/null +++ b/ush/generate_FV3LAM_wflow.py @@ -0,0 +1,1126 @@ +#!/usr/bin/env python3 + +import os +import sys +import platform +import subprocess +from multiprocessing import Process +from textwrap import dedent +from datetime import datetime,timedelta + +from python_utils import print_info_msg, print_err_msg_exit, import_vars, cp_vrfy, cd_vrfy,\ + rm_vrfy, ln_vrfy, mkdir_vrfy, mv_vrfy, run_command, date_to_str, \ + define_macos_utilities, create_symlink_to_file, check_for_preexist_dir_file, \ + cfg_to_yaml_str, find_pattern_in_str + +from setup import setup +from set_FV3nml_sfc_climo_filenames import set_FV3nml_sfc_climo_filenames +from get_crontab_contents import get_crontab_contents +from fill_jinja_template import fill_jinja_template +from set_namelist import set_namelist + +def python_error_handler(): + """ Error handler for missing packages """ + + print_err_msg_exit(''' + Errors found: check your python environment + + Instructions for setting up python environments can be found on the web: + https://github.com/ufs-community/ufs-srweather-app/wiki/Getting-Started + ''', stack_trace=False) + +# Check for non-standard python packages +try: + import jinja2 + import yaml + import f90nml +except ImportError as error: + print_info_msg(error.__class__.__name__ + ": " + str(error)) + python_error_handler() + +def generate_FV3LAM_wflow(): + """ Function to set up a forecast experiment and create a workflow + (according to the parameters specified in the config file) + + Args: + None + Returns: + None + """ + + print(dedent(''' + ======================================================================== + ======================================================================== + + Starting experiment generation... + + ======================================================================== + ========================================================================''')) + + #set ushdir + ushdir = os.path.dirname(os.path.abspath(__file__)) + + #check python version + major,minor,patch = platform.python_version_tuple() + if int(major) < 3 or (int(major) == 3 and int(minor) < 6): + print_info_msg(f''' + + Error: python version must be 3.6 or higher + python version: {major}.{minor}''') + + #define macos utilities + define_macos_utilities() + + # + #----------------------------------------------------------------------- + # + # Source the file that defines and then calls the setup function.
The + # setup function in turn first sources the default configuration file + # (which contains default values for the experiment/workflow parameters) + # and then sources the user-specified configuration file (which contains + # user-specified values for a subset of the experiment/workflow parame- + # ters that override their default values). + # + #----------------------------------------------------------------------- + # + setup() + + #import all environment variables + import_vars() + + # + #----------------------------------------------------------------------- + # + # Set the full path to the experiment's rocoto workflow xml file. This + # file will be placed at the top level of the experiment directory and + # then used by rocoto to run the workflow. + # + #----------------------------------------------------------------------- + # + WFLOW_XML_FP = os.path.join(EXPTDIR, WFLOW_XML_FN) + + # + #----------------------------------------------------------------------- + # + # Create a multiline variable that consists of a yaml-compliant string + # specifying the values that the jinja variables in the template rocoto + # XML should be set to. These values are set either in the user-specified + # workflow configuration file (EXPT_CONFIG_FN) or in the setup.sh script + # sourced above. Then call the python script that generates the XML. + # + #----------------------------------------------------------------------- + # + if WORKFLOW_MANAGER == "rocoto": + + template_xml_fp = os.path.join(TEMPLATE_DIR, WFLOW_XML_FN) + + print_info_msg(f''' + Creating rocoto workflow XML file (WFLOW_XML_FP) from jinja template XML + file (template_xml_fp): + template_xml_fp = \"{template_xml_fp}\" + WFLOW_XML_FP = \"{WFLOW_XML_FP}\"''') + + ensmem_indx_name = "" + uscore_ensmem_name = "" + slash_ensmem_subdir = "" + if DO_ENSEMBLE: + ensmem_indx_name = "mem" + uscore_ensmem_name = f"_mem#{ensmem_indx_name}#" + slash_ensmem_subdir = f"/mem#{ensmem_indx_name}#" + + #get time string + d = DATE_FIRST_CYCL + timedelta(seconds=DT_ATMOS) + time_str = d.strftime("%M:%S") + + cycl_hrs_str = [ f"{c:02d}" for c in CYCL_HRS ] + cdate_first_cycl = DATE_FIRST_CYCL + timedelta(hours=CYCL_HRS[0]) + + # Dictionary of settings + settings = { + # + # Parameters needed by the job scheduler. + # + 'account': ACCOUNT, + 'sched': SCHED, + 'partition_default': PARTITION_DEFAULT, + 'queue_default': QUEUE_DEFAULT, + 'partition_hpss': PARTITION_HPSS, + 'queue_hpss': QUEUE_HPSS, + 'partition_fcst': PARTITION_FCST, + 'queue_fcst': QUEUE_FCST, + 'machine': MACHINE, + 'slurm_native_cmd': SLURM_NATIVE_CMD, + # + # Workflow task names. 
+ # + 'make_grid_tn': MAKE_GRID_TN, + 'make_orog_tn': MAKE_OROG_TN, + 'make_sfc_climo_tn': MAKE_SFC_CLIMO_TN, + 'get_extrn_ics_tn': GET_EXTRN_ICS_TN, + 'get_extrn_lbcs_tn': GET_EXTRN_LBCS_TN, + 'make_ics_tn': MAKE_ICS_TN, + 'make_lbcs_tn': MAKE_LBCS_TN, + 'run_fcst_tn': RUN_FCST_TN, + 'run_post_tn': RUN_POST_TN, + 'get_obs_ccpa_tn': GET_OBS_CCPA_TN, + 'get_obs_ndas_tn': GET_OBS_NDAS_TN, + 'get_obs_mrms_tn': GET_OBS_MRMS_TN, + 'vx_tn': VX_TN, + 'vx_gridstat_tn': VX_GRIDSTAT_TN, + 'vx_gridstat_refc_tn': VX_GRIDSTAT_REFC_TN, + 'vx_gridstat_retop_tn': VX_GRIDSTAT_RETOP_TN, + 'vx_gridstat_03h_tn': VX_GRIDSTAT_03h_TN, + 'vx_gridstat_06h_tn': VX_GRIDSTAT_06h_TN, + 'vx_gridstat_24h_tn': VX_GRIDSTAT_24h_TN, + 'vx_pointstat_tn': VX_POINTSTAT_TN, + 'vx_ensgrid_tn': VX_ENSGRID_TN, + 'vx_ensgrid_refc_tn': VX_ENSGRID_REFC_TN, + 'vx_ensgrid_retop_tn': VX_ENSGRID_RETOP_TN, + 'vx_ensgrid_03h_tn': VX_ENSGRID_03h_TN, + 'vx_ensgrid_06h_tn': VX_ENSGRID_06h_TN, + 'vx_ensgrid_24h_tn': VX_ENSGRID_24h_TN, + 'vx_ensgrid_mean_tn': VX_ENSGRID_MEAN_TN, + 'vx_ensgrid_prob_tn': VX_ENSGRID_PROB_TN, + 'vx_ensgrid_mean_03h_tn': VX_ENSGRID_MEAN_03h_TN, + 'vx_ensgrid_prob_03h_tn': VX_ENSGRID_PROB_03h_TN, + 'vx_ensgrid_mean_06h_tn': VX_ENSGRID_MEAN_06h_TN, + 'vx_ensgrid_prob_06h_tn': VX_ENSGRID_PROB_06h_TN, + 'vx_ensgrid_mean_24h_tn': VX_ENSGRID_MEAN_24h_TN, + 'vx_ensgrid_prob_24h_tn': VX_ENSGRID_PROB_24h_TN, + 'vx_ensgrid_prob_refc_tn': VX_ENSGRID_PROB_REFC_TN, + 'vx_ensgrid_prob_retop_tn': VX_ENSGRID_PROB_RETOP_TN, + 'vx_enspoint_tn': VX_ENSPOINT_TN, + 'vx_enspoint_mean_tn': VX_ENSPOINT_MEAN_TN, + 'vx_enspoint_prob_tn': VX_ENSPOINT_PROB_TN, + # + # Entity used to load the module file for each GET_OBS_* task. + # + 'get_obs': GET_OBS, + # + # Number of nodes to use for each task. + # + 'nnodes_make_grid': NNODES_MAKE_GRID, + 'nnodes_make_orog': NNODES_MAKE_OROG, + 'nnodes_make_sfc_climo': NNODES_MAKE_SFC_CLIMO, + 'nnodes_get_extrn_ics': NNODES_GET_EXTRN_ICS, + 'nnodes_get_extrn_lbcs': NNODES_GET_EXTRN_LBCS, + 'nnodes_make_ics': NNODES_MAKE_ICS, + 'nnodes_make_lbcs': NNODES_MAKE_LBCS, + 'nnodes_run_fcst': NNODES_RUN_FCST, + 'nnodes_run_post': NNODES_RUN_POST, + 'nnodes_get_obs_ccpa': NNODES_GET_OBS_CCPA, + 'nnodes_get_obs_mrms': NNODES_GET_OBS_MRMS, + 'nnodes_get_obs_ndas': NNODES_GET_OBS_NDAS, + 'nnodes_vx_gridstat': NNODES_VX_GRIDSTAT, + 'nnodes_vx_pointstat': NNODES_VX_POINTSTAT, + 'nnodes_vx_ensgrid': NNODES_VX_ENSGRID, + 'nnodes_vx_ensgrid_mean': NNODES_VX_ENSGRID_MEAN, + 'nnodes_vx_ensgrid_prob': NNODES_VX_ENSGRID_PROB, + 'nnodes_vx_enspoint': NNODES_VX_ENSPOINT, + 'nnodes_vx_enspoint_mean': NNODES_VX_ENSPOINT_MEAN, + 'nnodes_vx_enspoint_prob': NNODES_VX_ENSPOINT_PROB, + # + # Number of cores used for a task + # + 'ncores_run_fcst': PE_MEMBER01, + 'native_run_fcst': f"--cpus-per-task {OMP_NUM_THREADS_RUN_FCST} --exclusive", + # + # Number of logical processes per node for each task. If running without + # threading, this is equal to the number of MPI processes per node. 
+ # + 'ppn_make_grid': PPN_MAKE_GRID, + 'ppn_make_orog': PPN_MAKE_OROG, + 'ppn_make_sfc_climo': PPN_MAKE_SFC_CLIMO, + 'ppn_get_extrn_ics': PPN_GET_EXTRN_ICS, + 'ppn_get_extrn_lbcs': PPN_GET_EXTRN_LBCS, + 'ppn_make_ics': PPN_MAKE_ICS, + 'ppn_make_lbcs': PPN_MAKE_LBCS, + 'ppn_run_fcst': PPN_RUN_FCST, + 'ppn_run_post': PPN_RUN_POST, + 'ppn_get_obs_ccpa': PPN_GET_OBS_CCPA, + 'ppn_get_obs_mrms': PPN_GET_OBS_MRMS, + 'ppn_get_obs_ndas': PPN_GET_OBS_NDAS, + 'ppn_vx_gridstat': PPN_VX_GRIDSTAT, + 'ppn_vx_pointstat': PPN_VX_POINTSTAT, + 'ppn_vx_ensgrid': PPN_VX_ENSGRID, + 'ppn_vx_ensgrid_mean': PPN_VX_ENSGRID_MEAN, + 'ppn_vx_ensgrid_prob': PPN_VX_ENSGRID_PROB, + 'ppn_vx_enspoint': PPN_VX_ENSPOINT, + 'ppn_vx_enspoint_mean': PPN_VX_ENSPOINT_MEAN, + 'ppn_vx_enspoint_prob': PPN_VX_ENSPOINT_PROB, + # + # Maximum wallclock time for each task. + # + 'wtime_make_grid': WTIME_MAKE_GRID, + 'wtime_make_orog': WTIME_MAKE_OROG, + 'wtime_make_sfc_climo': WTIME_MAKE_SFC_CLIMO, + 'wtime_get_extrn_ics': WTIME_GET_EXTRN_ICS, + 'wtime_get_extrn_lbcs': WTIME_GET_EXTRN_LBCS, + 'wtime_make_ics': WTIME_MAKE_ICS, + 'wtime_make_lbcs': WTIME_MAKE_LBCS, + 'wtime_run_fcst': WTIME_RUN_FCST, + 'wtime_run_post': WTIME_RUN_POST, + 'wtime_get_obs_ccpa': WTIME_GET_OBS_CCPA, + 'wtime_get_obs_mrms': WTIME_GET_OBS_MRMS, + 'wtime_get_obs_ndas': WTIME_GET_OBS_NDAS, + 'wtime_vx_gridstat': WTIME_VX_GRIDSTAT, + 'wtime_vx_pointstat': WTIME_VX_POINTSTAT, + 'wtime_vx_ensgrid': WTIME_VX_ENSGRID, + 'wtime_vx_ensgrid_mean': WTIME_VX_ENSGRID_MEAN, + 'wtime_vx_ensgrid_prob': WTIME_VX_ENSGRID_PROB, + 'wtime_vx_enspoint': WTIME_VX_ENSPOINT, + 'wtime_vx_enspoint_mean': WTIME_VX_ENSPOINT_MEAN, + 'wtime_vx_enspoint_prob': WTIME_VX_ENSPOINT_PROB, + # + # Maximum number of tries for each task. + # + 'maxtries_make_grid': MAXTRIES_MAKE_GRID, + 'maxtries_make_orog': MAXTRIES_MAKE_OROG, + 'maxtries_make_sfc_climo': MAXTRIES_MAKE_SFC_CLIMO, + 'maxtries_get_extrn_ics': MAXTRIES_GET_EXTRN_ICS, + 'maxtries_get_extrn_lbcs': MAXTRIES_GET_EXTRN_LBCS, + 'maxtries_make_ics': MAXTRIES_MAKE_ICS, + 'maxtries_make_lbcs': MAXTRIES_MAKE_LBCS, + 'maxtries_run_fcst': MAXTRIES_RUN_FCST, + 'maxtries_run_post': MAXTRIES_RUN_POST, + 'maxtries_get_obs_ccpa': MAXTRIES_GET_OBS_CCPA, + 'maxtries_get_obs_mrms': MAXTRIES_GET_OBS_MRMS, + 'maxtries_get_obs_ndas': MAXTRIES_GET_OBS_NDAS, + 'maxtries_vx_gridstat': MAXTRIES_VX_GRIDSTAT, + 'maxtries_vx_gridstat_refc': MAXTRIES_VX_GRIDSTAT_REFC, + 'maxtries_vx_gridstat_retop': MAXTRIES_VX_GRIDSTAT_RETOP, + 'maxtries_vx_gridstat_03h': MAXTRIES_VX_GRIDSTAT_03h, + 'maxtries_vx_gridstat_06h': MAXTRIES_VX_GRIDSTAT_06h, + 'maxtries_vx_gridstat_24h': MAXTRIES_VX_GRIDSTAT_24h, + 'maxtries_vx_pointstat': MAXTRIES_VX_POINTSTAT, + 'maxtries_vx_ensgrid': MAXTRIES_VX_ENSGRID, + 'maxtries_vx_ensgrid_refc': MAXTRIES_VX_ENSGRID_REFC, + 'maxtries_vx_ensgrid_retop': MAXTRIES_VX_ENSGRID_RETOP, + 'maxtries_vx_ensgrid_03h': MAXTRIES_VX_ENSGRID_03h, + 'maxtries_vx_ensgrid_06h': MAXTRIES_VX_ENSGRID_06h, + 'maxtries_vx_ensgrid_24h': MAXTRIES_VX_ENSGRID_24h, + 'maxtries_vx_ensgrid_mean': MAXTRIES_VX_ENSGRID_MEAN, + 'maxtries_vx_ensgrid_prob': MAXTRIES_VX_ENSGRID_PROB, + 'maxtries_vx_ensgrid_mean_03h': MAXTRIES_VX_ENSGRID_MEAN_03h, + 'maxtries_vx_ensgrid_prob_03h': MAXTRIES_VX_ENSGRID_PROB_03h, + 'maxtries_vx_ensgrid_mean_06h': MAXTRIES_VX_ENSGRID_MEAN_06h, + 'maxtries_vx_ensgrid_prob_06h': MAXTRIES_VX_ENSGRID_PROB_06h, + 'maxtries_vx_ensgrid_mean_24h': MAXTRIES_VX_ENSGRID_MEAN_24h, + 'maxtries_vx_ensgrid_prob_24h': MAXTRIES_VX_ENSGRID_PROB_24h, + 
'maxtries_vx_ensgrid_prob_refc': MAXTRIES_VX_ENSGRID_PROB_REFC, + 'maxtries_vx_ensgrid_prob_retop': MAXTRIES_VX_ENSGRID_PROB_RETOP, + 'maxtries_vx_enspoint': MAXTRIES_VX_ENSPOINT, + 'maxtries_vx_enspoint_mean': MAXTRIES_VX_ENSPOINT_MEAN, + 'maxtries_vx_enspoint_prob': MAXTRIES_VX_ENSPOINT_PROB, + # + # Flags that specify whether to run the preprocessing or + # verification-related tasks. + # + 'run_task_make_grid': RUN_TASK_MAKE_GRID, + 'run_task_make_orog': RUN_TASK_MAKE_OROG, + 'run_task_make_sfc_climo': RUN_TASK_MAKE_SFC_CLIMO, + 'run_task_get_extrn_ics': RUN_TASK_GET_EXTRN_ICS, + 'run_task_get_extrn_lbcs': RUN_TASK_GET_EXTRN_LBCS, + 'run_task_make_ics': RUN_TASK_MAKE_ICS, + 'run_task_make_lbcs': RUN_TASK_MAKE_LBCS, + 'run_task_run_fcst': RUN_TASK_RUN_FCST, + 'run_task_run_post': RUN_TASK_RUN_POST, + 'run_task_get_obs_ccpa': RUN_TASK_GET_OBS_CCPA, + 'run_task_get_obs_mrms': RUN_TASK_GET_OBS_MRMS, + 'run_task_get_obs_ndas': RUN_TASK_GET_OBS_NDAS, + 'run_task_vx_gridstat': RUN_TASK_VX_GRIDSTAT, + 'run_task_vx_pointstat': RUN_TASK_VX_POINTSTAT, + 'run_task_vx_ensgrid': RUN_TASK_VX_ENSGRID, + 'run_task_vx_enspoint': RUN_TASK_VX_ENSPOINT, + # + # Number of physical cores per node for the current machine. + # + 'ncores_per_node': NCORES_PER_NODE, + # + # Directories and files. + # + 'jobsdir': JOBSDIR, + 'logdir': LOGDIR, + 'scriptsdir': SCRIPTSDIR, + 'cycle_basedir': CYCLE_BASEDIR, + 'global_var_defns_fp': GLOBAL_VAR_DEFNS_FP, + 'load_modules_run_task_fp': LOAD_MODULES_RUN_TASK_FP, + # + # External model information for generating ICs and LBCs. + # + 'extrn_mdl_name_ics': EXTRN_MDL_NAME_ICS, + 'extrn_mdl_name_lbcs': EXTRN_MDL_NAME_LBCS, + # + # Parameters that determine the set of cycles to run. + # + 'date_first_cycl': date_to_str(DATE_FIRST_CYCL,True), + 'date_last_cycl': date_to_str(DATE_LAST_CYCL,True), + 'cdate_first_cycl': cdate_first_cycl, + 'cycl_hrs': cycl_hrs_str, + 'cycl_freq': f"{INCR_CYCL_FREQ:02d}:00:00", + # + # Forecast length (same for all cycles). + # + 'fcst_len_hrs': FCST_LEN_HRS, + # + # Inline post + # + 'write_dopost': WRITE_DOPOST, + # + # METPlus-specific information + # + 'model': MODEL, + 'met_install_dir': MET_INSTALL_DIR, + 'met_bin_exec': MET_BIN_EXEC, + 'metplus_path': METPLUS_PATH, + 'vx_config_dir': VX_CONFIG_DIR, + 'metplus_conf': METPLUS_CONF, + 'met_config': MET_CONFIG, + 'ccpa_obs_dir': CCPA_OBS_DIR, + 'mrms_obs_dir': MRMS_OBS_DIR, + 'ndas_obs_dir': NDAS_OBS_DIR, + # + # Ensemble-related parameters. + # + 'do_ensemble': DO_ENSEMBLE, + 'num_ens_members': NUM_ENS_MEMBERS, + 'ndigits_ensmem_names': f"{NDIGITS_ENSMEM_NAMES}", + 'ensmem_indx_name': ensmem_indx_name, + 'uscore_ensmem_name': uscore_ensmem_name, + 'slash_ensmem_subdir': slash_ensmem_subdir, + # + # Parameters associated with subhourly post-processed output + # + 'sub_hourly_post': SUB_HOURLY_POST, + 'delta_min': DT_SUBHOURLY_POST_MNTS, + 'first_fv3_file_tstr': f"000:{time_str}" + } + # End of "settings" variable. + settings_str = cfg_to_yaml_str(settings) + + print_info_msg(dedent(f''' + The variable \"settings\" specifying values of the rococo XML variables + has been set as follows: + #----------------------------------------------------------------------- + settings =\n''') + settings_str, verbose=VERBOSE) + + # + # Call the python script to generate the experiment's actual XML file + # from the jinja template file. 
+ # + try: + fill_jinja_template(["-q", "-u", settings_str, "-t", template_xml_fp, "-o", WFLOW_XML_FP]) + except: + print_err_msg_exit(f''' + Call to python script fill_jinja_template.py to create a rocoto workflow + XML file from a template file failed. Parameters passed to this script + are: + Full path to template rocoto XML file: + template_xml_fp = \"{template_xml_fp}\" + Full path to output rocoto XML file: + WFLOW_XML_FP = \"{WFLOW_XML_FP}\" + Namelist settings specified on command line: + settings = + {settings_str}''') + # + #----------------------------------------------------------------------- + # + # Create a symlink in the experiment directory that points to the workflow + # (re)launch script. + # + #----------------------------------------------------------------------- + # + print_info_msg(f''' + Creating symlink in the experiment directory (EXPTDIR) that points to the + workflow launch script (WFLOW_LAUNCH_SCRIPT_FP): + EXPTDIR = \"{EXPTDIR}\" + WFLOW_LAUNCH_SCRIPT_FP = \"{WFLOW_LAUNCH_SCRIPT_FP}\"''',verbose=VERBOSE) + + create_symlink_to_file(WFLOW_LAUNCH_SCRIPT_FP, + os.path.join(EXPTDIR,WFLOW_LAUNCH_SCRIPT_FN), + False) + # + #----------------------------------------------------------------------- + # + # If USE_CRON_TO_RELAUNCH is set to TRUE, add a line to the user's cron + # table to call the (re)launch script every CRON_RELAUNCH_INTVL_MNTS mi- + # nutes. + # + #----------------------------------------------------------------------- + # + if USE_CRON_TO_RELAUNCH: + # + # Make a backup copy of the user's crontab file and save it in a file. + # + time_stamp = datetime.now().strftime("%F_%T") + crontab_backup_fp=os.path.join(EXPTDIR,f"crontab.bak.{time_stamp}") + print_info_msg(f''' + Copying contents of user cron table to backup file: + crontab_backup_fp = \"{crontab_backup_fp}\"''',verbose=VERBOSE) + + global called_from_cron + try: called_from_cron + except: called_from_cron = False + + crontab_cmd,crontab_contents = get_crontab_contents(called_from_cron=called_from_cron) + # To create the backup crontab file and add a new job to the user's + # existing cron table, use the "printf" command, not "echo", to print + # out variables. This is because "echo" will add a newline at the end + # of its output even if its input argument is a null string, resulting + # in extranous blank lines in the backup crontab file and/or the cron + # table itself. Using "printf" prevents the appearance of these blank + # lines. + run_command(f'''printf "%s" '{crontab_contents}' > "{crontab_backup_fp}"''') + # + # Below, we use "grep" to determine whether the crontab line that the + # variable CRONTAB_LINE contains is already present in the cron table. + # For that purpose, we need to escape the asterisks in the string in + # CRONTAB_LINE with backslashes. Do this next. + # + (_,crontab_line_esc_astr,_) = run_command(f'''printf "%s" '{CRONTAB_LINE}' | \ + {SED} -r -e "s%[*]%\\\\*%g"''') + # In the grep command below, the "^" at the beginning of the string + # passed to grep is a start-of-line anchor, and the "$" at the end is + # an end-of-line anchor. Thus, in order for grep to find a match on + # any given line of the cron table's contents, that line must contain + # exactly the string in the variable crontab_line_esc_astr without any + # leading or trailing characters. 
This is to eliminate situations in + # which a line in the cron table contains the string in crontab_line_esc_astr + # but is precedeeded, for example, by the comment character "#" (in which + # case cron ignores that line) and/or is followed by further commands + # that are not part of the string in crontab_line_esc_astr (in which + # case it does something more than the command portion of the string in + # crontab_line_esc_astr does). + # + if MACHINE == "WCOSS_DELL_P3": + (exit_status,grep_output,_)=run_command(f'''grep '^{crontab_line_esc_astr}$' "/u/{USER}/cron/mycrontab"''') + else: + (exit_status,grep_output,_)=run_command(f'''printf "%s" '{crontab_contents}' | grep "^{crontab_line_esc_astr}$"''') + + if exit_status == 0: + + print_info_msg(f''' + The following line already exists in the cron table and thus will not be + added: + CRONTAB_LINE = \"{CRONTAB_LINE}\"''') + + else: + + print_info_msg(f''' + Adding the following line to the user's cron table in order to automatically + resubmit SRW workflow: + CRONTAB_LINE = \"{CRONTAB_LINE}\"''',verbose=VERBOSE) + + if MACHINE == "WCOSS_DELL_P3": + run_command(f'''printf "%s\n" '{CRONTAB_LINE}' >> "/u/{USER}/cron/mycrontab"''') + else: + # Add a newline to the end of crontab_contents only if it is not empty. + # This is needed so that when CRONTAB_LINE is printed out, it appears on + # a separate line. + if crontab_contents: + crontab_contents += "\n" + run_command(f'''( printf "%s" '{crontab_contents}'; printf "%s\n" '{CRONTAB_LINE}' ) | {crontab_cmd}''') + # + #----------------------------------------------------------------------- + # + # Create the FIXam directory under the experiment directory. In NCO mode, + # this will be a symlink to the directory specified in FIXgsm, while in + # community mode, it will be an actual directory with files copied into + # it from FIXgsm. + # + #----------------------------------------------------------------------- + # + # First, consider NCO mode. + # + if RUN_ENVIR == "nco": + + ln_vrfy(f'''-fsn "{FIXgsm}" "{FIXam}"''') + # + # Resolve the target directory that the FIXam symlink points to and check + # that it exists. + # + try: + path_resolved = os.path.realpath(FIXam) + except: + path_resolved = FIXam + if not os.path.exists(path_resolved): + print_err_msg_exit(f''' + In order to be able to generate a forecast experiment in NCO mode (i.e. + when RUN_ENVIR set to \"nco\"), the path specified by FIXam after resolving + all symlinks (path_resolved) must be an existing directory (but in this + case isn't): + RUN_ENVIR = \"{RUN_ENVIR}\" + FIXam = \"{FIXam}\" + path_resolved = \"{path_resolved}\" + Please ensure that path_resolved is an existing directory and then rerun + the experiment generation script.''') + # + # Now consider community mode. + # + else: + + print_info_msg(f''' + Copying fixed files from system directory (FIXgsm) to a subdirectory + (FIXam) in the experiment directory: + FIXgsm = \"{FIXgsm}\" + FIXam = \"{FIXam}\"''',verbose=VERBOSE) + + check_for_preexist_dir_file(FIXam,"delete") + mkdir_vrfy("-p",FIXam) + mkdir_vrfy("-p",os.path.join(FIXam,"fix_co2_proj")) + + num_files=len(FIXgsm_FILES_TO_COPY_TO_FIXam) + for i in range(num_files): + fn=f"{FIXgsm_FILES_TO_COPY_TO_FIXam[i]}" + cp_vrfy(os.path.join(FIXgsm,fn), os.path.join(FIXam,fn)) + # + #----------------------------------------------------------------------- + # + # Copy MERRA2 aerosol climatology data. 
+ # + #----------------------------------------------------------------------- + # + if USE_MERRA_CLIMO: + print_info_msg(f''' + Copying MERRA2 aerosol climatology data files from system directory + (FIXaer/FIXlut) to a subdirectory (FIXclim) in the experiment directory: + FIXaer = \"{FIXaer}\" + FIXlut = \"{FIXlut}\" + FIXclim = \"{FIXclim}\"''',verbose=VERBOSE) + + check_for_preexist_dir_file(FIXclim,"delete") + mkdir_vrfy("-p", FIXclim) + + cp_vrfy(os.path.join(FIXaer,"merra2.aerclim*.nc"), FIXclim) + cp_vrfy(os.path.join(FIXlut,"optics*.dat"), FIXclim) + # + #----------------------------------------------------------------------- + # + # Copy templates of various input files to the experiment directory. + # + #----------------------------------------------------------------------- + # + print_info_msg(f''' + Copying templates of various input files to the experiment directory...''', + verbose=VERBOSE) + + print_info_msg(f''' + Copying the template data table file to the experiment directory...''', + verbose=VERBOSE) + cp_vrfy(DATA_TABLE_TMPL_FP,DATA_TABLE_FP) + + print_info_msg(f''' + Copying the template field table file to the experiment directory...''', + verbose=VERBOSE) + cp_vrfy(FIELD_TABLE_TMPL_FP,FIELD_TABLE_FP) + + print_info_msg(f''' + Copying the template NEMS configuration file to the experiment directory...''', + verbose=VERBOSE) + cp_vrfy(NEMS_CONFIG_TMPL_FP,NEMS_CONFIG_FP) + # + # Copy the CCPP physics suite definition file from its location in the + # clone of the FV3 code repository to the experiment directory (EXPT- + # DIR). + # + print_info_msg(f''' + Copying the CCPP physics suite definition XML file from its location in + the forecast model directory sturcture to the experiment directory...''', + verbose=VERBOSE) + cp_vrfy(CCPP_PHYS_SUITE_IN_CCPP_FP,CCPP_PHYS_SUITE_FP) + # + # Copy the field dictionary file from its location in the + # clone of the FV3 code repository to the experiment directory (EXPT- + # DIR). + # + print_info_msg(f''' + Copying the field dictionary file from its location in the forecast + model directory sturcture to the experiment directory...''', + verbose=VERBOSE) + cp_vrfy(FIELD_DICT_IN_UWM_FP,FIELD_DICT_FP) + # + #----------------------------------------------------------------------- + # + # Set parameters in the FV3-LAM namelist file. + # + #----------------------------------------------------------------------- + # + print_info_msg(f''' + Setting parameters in weather model's namelist file (FV3_NML_FP): + FV3_NML_FP = \"{FV3_NML_FP}\"''') + # + # Set npx and npy, which are just NX plus 1 and NY plus 1, respectively. + # These need to be set in the FV3-LAM Fortran namelist file. They represent + # the number of cell vertices in the x and y directions on the regional + # grid. + # + npx=NX+1 + npy=NY+1 + # + # For the physics suites that use RUC LSM, set the parameter kice to 9, + # Otherwise, leave it unspecified (which means it gets set to the default + # value in the forecast model). + # + # NOTE: + # May want to remove kice from FV3.input.yml (and maybe input.nml.FV3). + # + kice=None + if SDF_USES_RUC_LSM: + kice=9 + # + # Set lsoil, which is the number of input soil levels provided in the + # chgres_cube output NetCDF file. This is the same as the parameter + # nsoill_out in the namelist file for chgres_cube. [On the other hand, + # the parameter lsoil_lsm (not set here but set in input.nml.FV3 and/or + # FV3.input.yml) is the number of soil levels that the LSM scheme in the + # forecast model will run with.] 
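[Editor's note] The npx/npy settings above encode a simple relationship: the namelist wants the number of cell vertices, which is one more than the number of cells in each direction, and kice is pinned to 9 only for suites that use the RUC LSM. A short hedged sketch of those two decisions follows; the grid size and flag are example inputs only.

```
def grid_and_ice_settings(nx: int, ny: int, sdf_uses_ruc_lsm: bool) -> dict:
    """Return the handful of namelist values derived above.

    npx/npy are the numbers of cell *vertices*, i.e. one more than the number
    of cells (nx, ny) on the regional grid.  kice is only set to 9 when the
    physics suite uses the RUC LSM; otherwise it is left out so the forecast
    model falls back to its own default.
    """
    settings = {"npx": nx + 1, "npy": ny + 1}
    if sdf_uses_ruc_lsm:
        settings["kice"] = 9
    return settings

# Example with a 200x200-cell grid (the Indianapolis test-grid size mentioned
# in the commit message) and the RUC LSM in use:
print(grid_and_ice_settings(200, 200, True))
# {'npx': 201, 'npy': 201, 'kice': 9}
```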
Here, we use the same approach to set + # lsoil as the one used to set nsoill_out in exregional_make_ics.sh. + # See that script for details. + # + # NOTE: + # May want to remove lsoil from FV3.input.yml (and maybe input.nml.FV3). + # Also, may want to set lsm here as well depending on SDF_USES_RUC_LSM. + # + lsoil=4 + if ( EXTRN_MDL_NAME_ICS == "HRRR" or \ + EXTRN_MDL_NAME_ICS == "RAP" ) and \ + ( SDF_USES_RUC_LSM ): + lsoil=9 + # + # Create a multiline variable that consists of a yaml-compliant string + # specifying the values that the namelist variables that are physics- + # suite-independent need to be set to. Below, this variable will be + # passed to a python script that will in turn set the values of these + # variables in the namelist file. + # + # IMPORTANT: + # If we want a namelist variable to be removed from the namelist file, + # in the "settings" variable below, we need to set its value to the + # string "null". This is equivalent to setting its value to + # !!python/none + # in the base namelist file specified by FV3_NML_BASE_SUITE_FP or the + # suite-specific yaml settings file specified by FV3_NML_YAML_CONFIG_FP. + # + # It turns out that setting the variable to an empty string also works + # to remove it from the namelist! Which is better to use?? + # + settings = {} + settings['atmos_model_nml'] = { + 'blocksize': BLOCKSIZE, + 'ccpp_suite': CCPP_PHYS_SUITE + } + settings['fv_core_nml'] = { + 'target_lon': LON_CTR, + 'target_lat': LAT_CTR, + 'nrows_blend': HALO_BLEND, + # + # Question: + # For a ESGgrid type grid, what should stretch_fac be set to? This depends + # on how the FV3 code uses the stretch_fac parameter in the namelist file. + # Recall that for a ESGgrid, it gets set in the function set_gridparams_ESGgrid(.sh) + # to something like 0.9999, but is it ok to set it to that here in the + # FV3 namelist file? + # + 'stretch_fac': STRETCH_FAC, + 'npx': npx, + 'npy': npy, + 'layout': [LAYOUT_X, LAYOUT_Y], + 'bc_update_interval': LBC_SPEC_INTVL_HRS + } + settings['gfs_physics_nml'] = { + 'kice': kice or None, + 'lsoil': lsoil or None, + 'do_shum': DO_SHUM, + 'do_sppt': DO_SPPT, + 'do_skeb': DO_SKEB, + 'do_spp': DO_SPP, + 'n_var_spp': N_VAR_SPP, + 'n_var_lndp': N_VAR_LNDP, + 'lndp_type': LNDP_TYPE, + 'lndp_each_step': LSM_SPP_EACH_STEP, + 'fhcyc': FHCYC_LSM_SPP_OR_NOT + } + # + # Add to "settings" the values of those namelist variables that specify + # the paths to fixed files in the FIXam directory. As above, these namelist + # variables are physcs-suite-independent. + # + # Note that the array FV3_NML_VARNAME_TO_FIXam_FILES_MAPPING contains + # the mapping between the namelist variables and the names of the files + # in the FIXam directory. Here, we loop through this array and process + # each element to construct each line of "settings". + # + dummy_run_dir=os.path.join(EXPTDIR,"any_cyc") + if DO_ENSEMBLE: + dummy_run_dir=os.path.join(dummy_run_dir,"any_ensmem") + + regex_search="^[ ]*([^| ]+)[ ]*[|][ ]*([^| ]+)[ ]*$" + num_nml_vars=len(FV3_NML_VARNAME_TO_FIXam_FILES_MAPPING) + namsfc_dict = {} + for i in range(num_nml_vars): + + mapping=f"{FV3_NML_VARNAME_TO_FIXam_FILES_MAPPING[i]}" + tup = find_pattern_in_str(regex_search, mapping) + nml_var_name = tup[0] + FIXam_fn = tup[1] + + fp="\"\"" + if FIXam_fn: + fp=os.path.join(FIXam,FIXam_fn) + # + # If not in NCO mode, for portability and brevity, change fp so that it + # is a relative path (relative to any cycle directory immediately under + # the experiment directory). 
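[Editor's note] The loop above splits each "namelist_variable | FIXam_file" mapping with a regular expression and collects the results into the namsfc block. The self-contained sketch below isolates that parsing step; it reuses the same regex but swaps the workflow helper `find_pattern_in_str` for a plain `re.search`, and the mapping entries shown are illustrative, not the workflow's full list.

```
import os
import re

# Same pattern as in the script: "name | file", tolerating surrounding blanks.
REGEX_SEARCH = r"^[ ]*([^| ]+)[ ]*[|][ ]*([^| ]+)[ ]*$"

def build_namsfc(mappings, fixam_dir):
    """Turn 'nml_var | FIXam_file' strings into a {nml_var: path} dictionary."""
    namsfc = {}
    for mapping in mappings:
        match = re.search(REGEX_SEARCH, mapping)
        if match is None:
            raise ValueError(f"Malformed mapping entry: {mapping!r}")
        nml_var_name, fixam_fn = match.groups()
        namsfc[nml_var_name] = os.path.join(fixam_dir, fixam_fn)
    return namsfc

example_mappings = [
    "FNGLAC | global_glacier.2x2.grb",
    "FNMXIC | global_maxice.2x2.grb",
]
print(build_namsfc(example_mappings, "/path/to/expt/fix_am"))
```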
+ # + if RUN_ENVIR != "nco": + fp = os.path.relpath(os.path.realpath(fp), start=dummy_run_dir) + # + # Add a line to the variable "settings" that specifies (in a yaml-compliant + # format) the name of the current namelist variable and the value it should + # be set to. + # + namsfc_dict[nml_var_name] = fp + # + # Add namsfc_dict to settings + # + settings['namsfc'] = namsfc_dict + # + # Use netCDF4 when running the North American 3-km domain due to file size. + # + if PREDEF_GRID_NAME == "RRFS_NA_3km": + settings['fms2_io_nml'] = { + 'netcdf_default_format': 'netcdf4' + } + # + # Add the relevant tendency-based stochastic physics namelist variables to + # "settings" when running with SPPT, SHUM, or SKEB turned on. If running + # with SPP or LSM SPP, set the "new_lscale" variable. Otherwise only + # include an empty "nam_stochy" stanza. + # + nam_stochy_dict = {} + if DO_SPPT: + nam_stochy_dict.update({ + 'iseed_sppt': ISEED_SPPT, + 'new_lscale': NEW_LSCALE, + 'sppt': SPPT_MAG, + 'sppt_logit': SPPT_LOGIT, + 'sppt_lscale': SPPT_LSCALE, + 'sppt_sfclimit': SPPT_SFCLIMIT, + 'sppt_tau': SPPT_TSCALE, + 'spptint': SPPT_INT, + 'use_zmtnblck': USE_ZMTNBLCK + }) + + if DO_SHUM: + nam_stochy_dict.update({ + 'iseed_shum': ISEED_SHUM, + 'new_lscale': NEW_LSCALE, + 'shum': SHUM_MAG, + 'shum_lscale': SHUM_LSCALE, + 'shum_tau': SHUM_TSCALE, + 'shumint': SHUM_INT + }) + + if DO_SKEB: + nam_stochy_dict.update({ + 'iseed_skeb': ISEED_SKEB, + 'new_lscale': NEW_LSCALE, + 'skeb': SKEB_MAG, + 'skeb_lscale': SKEB_LSCALE, + 'skebnorm': SKEBNORM, + 'skeb_tau': SKEB_TSCALE, + 'skebint': SKEB_INT, + 'skeb_vdof': SKEB_VDOF + }) + + if DO_SPP or DO_LSM_SPP: + nam_stochy_dict.update({ + 'new_lscale': NEW_LSCALE + }) + + settings['nam_stochy'] = nam_stochy_dict + # + # Add the relevant SPP namelist variables to "settings" when running with + # SPP turned on. Otherwise only include an empty "nam_sppperts" stanza. + # + nam_sppperts_dict = {} + if DO_SPP: + nam_sppperts_dict = { + 'iseed_spp': ISEED_SPP, + 'spp_lscale': SPP_LSCALE, + 'spp_prt_list': SPP_MAG_LIST, + 'spp_sigtop1': SPP_SIGTOP1, + 'spp_sigtop2': SPP_SIGTOP2, + 'spp_stddev_cutoff': SPP_STDDEV_CUTOFF, + 'spp_tau': SPP_TSCALE, + 'spp_var_list': SPP_VAR_LIST + } + + settings['nam_sppperts'] = nam_sppperts_dict + # + # Add the relevant LSM SPP namelist variables to "settings" when running with + # LSM SPP turned on. + # + nam_sfcperts_dict = {} + if DO_LSM_SPP: + nam_sfcperts_dict = { + 'lndp_type': LNDP_TYPE, + 'lndp_tau': LSM_SPP_TSCALE, + 'lndp_lscale': LSM_SPP_LSCALE, + 'iseed_lndp': ISEED_LSM_SPP, + 'lndp_var_list': LSM_SPP_VAR_LIST, + 'lndp_prt_list': LSM_SPP_MAG_LIST + } + + settings['nam_sfcperts'] = nam_sfcperts_dict + + settings_str = cfg_to_yaml_str(settings) + + print_info_msg(dedent(f''' + The variable \"settings\" specifying values of the weather model's + namelist variables has been set as follows: + + settings =\n''') + settings_str, verbose=VERBOSE) + # + #----------------------------------------------------------------------- + # + # Call the set_namelist.py script to create a new FV3 namelist file (full + # path specified by FV3_NML_FP) using the file FV3_NML_BASE_SUITE_FP as + # the base (i.e. starting) namelist file, with physics-suite-dependent + # modifications to the base file specified in the yaml configuration file + # FV3_NML_YAML_CONFIG_FP (for the physics suite specified by CCPP_PHYS_SUITE), + # and with additional physics-suite-independent modificaitons specified + # in the variable "settings" set above. 
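[Editor's note] `cfg_to_yaml_str` above turns the nested settings dictionary into the YAML text handed to set_namelist. If you want to reproduce that step outside the workflow, PyYAML gives a close approximation, as sketched below; `cfg_to_yaml_str` lives in python_utils and may format things slightly differently, so treat this only as an illustration of the structure being built (values are made up).

```
import yaml  # PyYAML

# A trimmed-down "settings" dictionary with the same shape as the one above:
# empty stanzas stay in place so the namelist tool still sees the section.
do_sppt, do_spp = True, False

nam_stochy = {}
if do_sppt:
    nam_stochy.update({"iseed_sppt": 42, "sppt": 0.7, "sppt_tau": 21600.0})

settings = {
    "fv_core_nml": {"npx": 201, "npy": 201, "layout": [4, 6]},
    "nam_stochy": nam_stochy,                      # populated only when SPPT is on
    "nam_sppperts": {"iseed_spp": [4, 5]} if do_spp else {},
}

# default_flow_style=False gives block-style YAML; sort_keys=False keeps the
# insertion order used when the dictionary was assembled.
settings_str = yaml.dump(settings, default_flow_style=False, sort_keys=False)
print(settings_str)
```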
+ # + #----------------------------------------------------------------------- + # + try: + set_namelist(["-q", "-n", FV3_NML_BASE_SUITE_FP, "-c", FV3_NML_YAML_CONFIG_FP, + CCPP_PHYS_SUITE, "-u", settings_str, "-o", FV3_NML_FP]) + except: + print_err_msg_exit(f''' + Call to python script set_namelist.py to generate an FV3 namelist file + failed. Parameters passed to this script are: + Full path to base namelist file: + FV3_NML_BASE_SUITE_FP = \"{FV3_NML_BASE_SUITE_FP}\" + Full path to yaml configuration file for various physics suites: + FV3_NML_YAML_CONFIG_FP = \"{FV3_NML_YAML_CONFIG_FP}\" + Physics suite to extract from yaml configuration file: + CCPP_PHYS_SUITE = \"{CCPP_PHYS_SUITE}\" + Full path to output namelist file: + FV3_NML_FP = \"{FV3_NML_FP}\" + Namelist settings specified on command line: + settings = + {settings_str}''') + # + # If not running the MAKE_GRID_TN task (which implies the workflow will + # use pregenerated grid files), set the namelist variables specifying + # the paths to surface climatology files. These files are located in + # (or have symlinks that point to them) in the FIXLAM directory. + # + # Note that if running the MAKE_GRID_TN task, this action usually cannot + # be performed here but must be performed in that task because the names + # of the surface climatology files depend on the CRES parameter (which is + # the C-resolution of the grid), and this parameter is in most workflow + # configurations is not known until the grid is created. + # + if not RUN_TASK_MAKE_GRID: + + set_FV3nml_sfc_climo_filenames() + # + #----------------------------------------------------------------------- + # + # To have a record of how this experiment/workflow was generated, copy + # the experiment/workflow configuration file to the experiment directo- + # ry. + # + #----------------------------------------------------------------------- + # + cp_vrfy(os.path.join(USHDIR,EXPT_CONFIG_FN), EXPTDIR) + # + #----------------------------------------------------------------------- + # + # For convenience, print out the commands that need to be issued on the + # command line in order to launch the workflow and to check its status. + # Also, print out the line that should be placed in the user's cron table + # in order for the workflow to be continually resubmitted. + # + #----------------------------------------------------------------------- + # + if WORKFLOW_MANAGER == "rocoto": + wflow_db_fn=f"{os.path.splitext(WFLOW_XML_FN)[0]}.db" + rocotorun_cmd=f"rocotorun -w {WFLOW_XML_FN} -d {wflow_db_fn} -v 10" + rocotostat_cmd=f"rocotostat -w {WFLOW_XML_FN} -d {wflow_db_fn} -v 10" + + print_info_msg(f''' + ======================================================================== + ======================================================================== + + Experiment generation completed. The experiment directory is: + + EXPTDIR=\"{EXPTDIR}\" + + ======================================================================== + ======================================================================== + ''') + # + #----------------------------------------------------------------------- + # + # If rocoto is required, print instructions on how to load and use it + # + #----------------------------------------------------------------------- + # + if WORKFLOW_MANAGER == "rocoto": + + print_info_msg(f''' + To launch the workflow, first ensure that you have a compatible version + of rocoto available. 
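[Editor's note] Both fill_jinja_template and set_namelist are invoked above as Python functions yet still receive an argv-style list, mirroring their command-line interfaces. The hedged sketch below shows that calling pattern with a stand-in function (`set_namelist_stub`); the real tool's options are the ones shown in the patch, and only a subset is imitated here.

```
import argparse
import sys

def set_namelist_stub(argv):
    """Stand-in for a CLI-style tool that is also importable as a function.

    It parses an argv list exactly as it would parse sys.argv[1:], which is
    why the caller can pass ["-n", base, "-u", settings_str, "-o", out].
    """
    parser = argparse.ArgumentParser()
    parser.add_argument("-n", dest="base_nml", required=True)
    parser.add_argument("-u", dest="user_settings", required=True)
    parser.add_argument("-o", dest="output_nml", required=True)
    args = parser.parse_args(argv)
    # A real tool would merge user_settings into base_nml and write output_nml.
    return args

try:
    args = set_namelist_stub(["-n", "input.nml.FV3",
                              "-u", "fv_core_nml: {npx: 201}",
                              "-o", "input.nml"])
except SystemExit as err:
    # argparse raises SystemExit on bad arguments; the workflow wraps this in
    # print_err_msg_exit together with the parameters it passed.
    sys.exit(f"namelist tool failed with status {err.code}")
print(args.output_nml)
```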
For most pre-configured platforms, rocoto can be + loaded via a module: + + > module load rocoto + + For more details on rocoto, see the User's Guide. + + To launch the workflow, first ensure that you have a compatible version + of rocoto loaded. For example, to load version 1.3.1 of rocoto, use + + > module load rocoto/1.3.1 + + (This version has been tested on hera; later versions may also work but + have not been tested.) + + To launch the workflow, change location to the experiment directory + (EXPTDIR) and issue the rocotrun command, as follows: + + > cd {EXPTDIR} + > {rocotorun_cmd} + + To check on the status of the workflow, issue the rocotostat command + (also from the experiment directory): + + > {rocotostat_cmd} + + Note that: + + 1) The rocotorun command must be issued after the completion of each + task in the workflow in order for the workflow to submit the next + task(s) to the queue. + + 2) In order for the output of the rocotostat command to be up-to-date, + the rocotorun command must be issued immediately before issuing the + rocotostat command. + + For automatic resubmission of the workflow (say every 3 minutes), the + following line can be added to the user's crontab (use \"crontab -e\" to + edit the cron table): + + */3 * * * * cd {EXPTDIR} && ./launch_FV3LAM_wflow.sh called_from_cron=\"TRUE\" + ''') + # + # If necessary, run the NOMADS script to source external model data. + # + if NOMADS: + print("Getting NOMADS online data") + print(f"NOMADS_file_type= {NOMADS_file_type}") + cd_vrfy(EXPTDIR) + NOMADS_script = os.path.join(USHDIR, "NOMADS_get_extrn_mdl_files.h") + run_command(f'''{NOMADS_script} {date_to_str(DATE_FIRST_CYCL,True)} \ + {CYCL_HRS} {NOMADS_file_type} {FCST_LEN_HRS} {LBC_SPEC_INTVL_HRS}''') + + +# +#----------------------------------------------------------------------- +# +# Start of the script that will call the experiment/workflow generation +# function defined above. +# +#----------------------------------------------------------------------- +# +if __name__ == "__main__": + # + #----------------------------------------------------------------------- + # + # Set directories. + # + #----------------------------------------------------------------------- + # + ushdir=os.path.dirname(os.path.abspath(__file__)) + # + # Set the name of and full path to the temporary file in which we will + # save some experiment/workflow variables. The need for this temporary + # file is explained below. + # + tmp_fn="tmp" + tmp_fp=os.path.join(ushdir, tmp_fn) + rm_vrfy("-f",tmp_fp) + # + # Set the name of and full path to the log file in which the output from + # the experiment/workflow generation function will be saved. + # + log_fn="log.generate_FV3LAM_wflow" + log_fp=os.path.join(ushdir, log_fn) + rm_vrfy("-f",log_fp) + # + # Call the generate_FV3LAM_wflow function defined above to generate the + # experiment/workflow. Note that we pipe the output of the function + # (and possibly other commands) to the "tee" command in order to be able + # to both save it to a file and print it out to the screen (stdout). + # The piping causes the call to the function (and the other commands + # grouped with it using the curly braces, { ... }) to be executed in a + # subshell. As a result, the experiment/workflow variables that the + # function sets are not available outside of the grouping, i.e. they are + # not available at and after the call to "tee". 
Since some of these va- + # riables are needed after the call to "tee" below, we save them in a + # temporary file and read them in outside the subshell later below. + # + def workflow_func(): + retval=1 + generate_FV3LAM_wflow() + retval=0 + run_command(f'''echo "{EXPTDIR}" >> "{tmp_fp}"''') + run_command(f'''echo "{retval}" >> "{tmp_fp}"''') + + # create tee functionality + tee = subprocess.Popen(["tee", log_fp], stdin=subprocess.PIPE) + os.dup2(tee.stdin.fileno(), sys.stdout.fileno()) + os.dup2(tee.stdin.fileno(), sys.stderr.fileno()) + + #create workflow process + p = Process(target=workflow_func) + p.start() + p.join() + + # + # Read in experiment/workflow variables needed later below from the tem- + # porary file created in the subshell above containing the call to the + # generate_FV3LAM_wflow function. These variables are not directly + # available here because the call to generate_FV3LAM_wflow above takes + # place in a subshell (due to the fact that we are then piping its out- + # put to the "tee" command). Then remove the temporary file. + # + (_,exptdir,_)=run_command(f'''sed "1q;d" "{tmp_fp}"''') + (_,retval,_)=run_command(f''' sed "2q;d" "{tmp_fp}"''') + if retval: + retval = int(retval) + else: + retval = 1 + rm_vrfy(tmp_fp) + # + # If the call to the generate_FV3LAM_wflow function above was success- + # ful, move the log file in which the "tee" command saved the output of + # the function to the experiment directory. + # + if retval == 0: + mv_vrfy(log_fp,exptdir) + # + # If the call to the generate_FV3LAM_wflow function above was not suc- + # cessful, print out an error message and exit with a nonzero return + # code. + # + else: + print_err_msg_exit(f''' + Experiment generation failed. Check the log file from the ex- + periment/workflow generation script in the file specified by log_fp: + log_fp = \"{log_fp}\" + Stopping.''') + diff --git a/ush/generate_FV3LAM_wflow.sh b/ush/generate_FV3LAM_wflow.sh index 7c840a955..8af7497b0 100755 --- a/ush/generate_FV3LAM_wflow.sh +++ b/ush/generate_FV3LAM_wflow.sh @@ -193,6 +193,7 @@ file (template_xml_fp): 'partition_fcst': ${PARTITION_FCST} 'queue_fcst': ${QUEUE_FCST} 'machine': ${MACHINE} + 'slurm_native_cmd': ${SLURM_NATIVE_CMD} # # Workflow task names. # @@ -790,22 +791,6 @@ settings="\ 'lndp_type': ${LNDP_TYPE}, 'lndp_each_step': ${LSM_SPP_EACH_STEP}, 'fhcyc': ${FHCYC_LSM_SPP_OR_NOT}, - } -'nam_stochy': { - 'shum': ${SHUM_MAG}, - 'shum_lscale': ${SHUM_LSCALE}, - 'shum_tau': ${SHUM_TSCALE}, - 'shumint': ${SHUM_INT}, - 'sppt': ${SPPT_MAG}, - 'sppt_lscale': ${SPPT_LSCALE}, - 'sppt_tau': ${SPPT_TSCALE}, - 'spptint': ${SPPT_INT}, - 'skeb': ${SKEB_MAG}, - 'skeb_lscale': ${SKEB_LSCALE}, - 'skeb_tau': ${SKEB_TSCALE}, - 'skebint': ${SKEB_INT}, - 'skeb_vdof': ${SKEB_VDOF}, - 'use_zmtnblck': ${USE_ZMTNBLCK}, }" # # Add to "settings" the values of those namelist variables that specify @@ -862,6 +847,64 @@ done settings="$settings }" # +# Use netCDF4 when running the North American 3-km domain due to file size. +# +if [ "${PREDEF_GRID_NAME}" = "RRFS_NA_3km" ]; then +settings="$settings +'fms2_io_nml': { + 'netcdf_default_format': netcdf4, + }" +fi +# +# Add the relevant tendency-based stochastic physics namelist variables to +# "settings" when running with SPPT, SHUM, or SKEB turned on. If running +# with SPP or LSM SPP, set the "new_lscale" variable. Otherwise only +# include an empty "nam_stochy" stanza. 
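[Editor's note] The `__main__` block above duplicates stdout/stderr onto a `tee` process so the generation log is both shown on screen and written to a file, and runs the generator in a separate Process. Below is a compact, self-contained sketch of just that tee-and-fork pattern, with a trivial worker in place of generate_FV3LAM_wflow and a placeholder log-file name; it assumes a Unix system with `tee` on PATH.

```
import os
import subprocess
import sys
from multiprocessing import Process

log_fp = "log.demo_wflow"   # placeholder log file name

def worker():
    # Stand-in for generate_FV3LAM_wflow(): anything printed here goes through tee.
    print("generating experiment ...")
    print("done.")

if __name__ == "__main__":
    # Start "tee" and route this process's stdout/stderr into its stdin, so
    # every print is echoed to the terminal and appended to log_fp at once.
    tee = subprocess.Popen(["tee", log_fp], stdin=subprocess.PIPE)
    os.dup2(tee.stdin.fileno(), sys.stdout.fileno())
    os.dup2(tee.stdin.fileno(), sys.stderr.fileno())

    # Run the worker in a child process; it inherits the duplicated file
    # descriptors, so its output is captured by tee as well.
    p = Process(target=worker)
    p.start()
    p.join()
    # As in the script above, the pipe is released when this process exits,
    # at which point tee sees EOF and terminates on its own.
```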
+# +settings="$settings +'nam_stochy': {" +if [ "${DO_SPPT}" = "TRUE" ]; then + settings="$settings + 'iseed_sppt': ${ISEED_SPPT}, + 'new_lscale': ${NEW_LSCALE}, + 'sppt': ${SPPT_MAG}, + 'sppt_logit': ${SPPT_LOGIT}, + 'sppt_lscale': ${SPPT_LSCALE}, + 'sppt_sfclimit': ${SPPT_SFCLIMIT}, + 'sppt_tau': ${SPPT_TSCALE}, + 'spptint': ${SPPT_INT}, + 'use_zmtnblck': ${USE_ZMTNBLCK}," +fi + +if [ "${DO_SHUM}" = "TRUE" ]; then + settings="$settings + 'iseed_shum': ${ISEED_SHUM}, + 'new_lscale': ${NEW_LSCALE}, + 'shum': ${SHUM_MAG}, + 'shum_lscale': ${SHUM_LSCALE}, + 'shum_tau': ${SHUM_TSCALE}, + 'shumint': ${SHUM_INT}," +fi + +if [ "${DO_SKEB}" = "TRUE" ]; then + settings="$settings + 'iseed_skeb': ${ISEED_SKEB}, + 'new_lscale': ${NEW_LSCALE}, + 'skeb': ${SKEB_MAG}, + 'skeb_lscale': ${SKEB_LSCALE}, + 'skebnorm': ${SKEBNORM}, + 'skeb_tau': ${SKEB_TSCALE}, + 'skebint': ${SKEB_INT}, + 'skeb_vdof': ${SKEB_VDOF}," +fi + +if [ "${DO_SPP}" = "TRUE" ] || [ "${DO_LSM_SPP}" = "TRUE" ]; then + settings="$settings + 'new_lscale': ${NEW_LSCALE}," +fi +settings="$settings + }" +# # Add the relevant SPP namelist variables to "settings" when running with # SPP turned on. Otherwise only include an empty "nam_sppperts" stanza. # @@ -1067,9 +1110,6 @@ fi } - - - # #----------------------------------------------------------------------- # diff --git a/ush/get_crontab_contents.py b/ush/get_crontab_contents.py new file mode 100644 index 000000000..cbb434c69 --- /dev/null +++ b/ush/get_crontab_contents.py @@ -0,0 +1,83 @@ +#!/usr/bin/env python3 + +import os +import unittest +from datetime import datetime + +from python_utils import import_vars, set_env_var, print_input_args, \ + run_command, define_macos_utilities, check_var_valid_value +from constants import valid_vals_BOOLEAN + +def get_crontab_contents(called_from_cron): + """ + #----------------------------------------------------------------------- + # + # This function returns the contents of the user's + # cron table as well as the command to use to manipulate the cron table + # (i.e. the "crontab" command, but on some platforms the version or + # location of this may change depending on other circumstances, e.g. on + # Cheyenne, this depends on whether a script that wants to call "crontab" + # is itself being called from a cron job). Arguments are as follows: + # + # called_from_cron: + # Boolean flag that specifies whether this function (and the scripts or + # functions that are calling it) are called as part of a cron job. Must + # be set to "TRUE" or "FALSE". + # + # outvarname_crontab_cmd: + # Name of the output variable that will contain the command to issue for + # the system "crontab" command. + # + # outvarname_crontab_contents: + # Name of the output variable that will contain the contents of the + # user's cron table. + # + #----------------------------------------------------------------------- + """ + + print_input_args(locals()) + + #import all env vars + IMPORTS = ["MACHINE", "USER"] + import_vars(env_vars=IMPORTS) + + # + # Make sure called_from_cron is set to a valid value. + # + check_var_valid_value(called_from_cron, valid_vals_BOOLEAN) + + if MACHINE == "WCOSS_DELL_P3": + __crontab_cmd__="" + (_,__crontab_contents__,_)=run_command(f'''cat "/u/{USER}/cron/mycrontab"''') + else: + __crontab_cmd__="crontab" + # + # On Cheyenne, simply typing "crontab" will launch the crontab command + # at "/glade/u/apps/ch/opt/usr/bin/crontab". 
This is a containerized + # version of crontab that will work if called from scripts that are + # themselves being called as cron jobs. In that case, we must instead + # call the system version of crontab at /usr/bin/crontab. + # + if MACHINE == "CHEYENNE": + if called_from_cron: + __crontab_cmd__="/usr/bin/crontab" + (_,__crontab_contents__,_)=run_command(f'''{__crontab_cmd__} -l''') + # + # On Cheyenne, the output of the "crontab -l" command contains a 3-line + # header (comments) at the top that is not actually part of the user's + # cron table. This needs to be removed to avoid adding an unnecessary + # copy of this header to the user's cron table. + # + if MACHINE == "CHEYENNE": + (_,__crontab_contents__,_)=run_command(f'''printf "%s" "{__crontab_contents__}" | tail -n +4 ''') + + return __crontab_cmd__, __crontab_contents__ + +class Testing(unittest.TestCase): + def test_get_crontab_contents(self): + crontab_cmd,crontab_contents = get_crontab_contents(called_from_cron=True) + self.assertEqual(crontab_cmd, "crontab") + def setUp(self): + define_macos_utilities(); + set_env_var('DEBUG',False) + set_env_var('MACHINE', 'HERA') diff --git a/ush/get_crontab_contents.sh b/ush/get_crontab_contents.sh index 9aae44115..182cf1936 100644 --- a/ush/get_crontab_contents.sh +++ b/ush/get_crontab_contents.sh @@ -64,15 +64,6 @@ function get_crontab_contents() { __crontab_contents__=$( ${__crontab_cmd__} -l ) fi # - # On Cheyenne, the output of the "crontab -l" command contains a 3-line - # header (comments) at the top that is not actually part of the user's - # cron table. This needs to be removed to avoid adding an unnecessary - # copy of this header to the user's cron table. - # - if [ "$MACHINE" = "CHEYENNE" ]; then - __crontab_contents__=$( printf "%s" "${__crontab_contents__}" | tail -n +4 ) - fi - # # Set output variables. # printf -v ${outvarname_crontab_cmd} "%s" "${__crontab_cmd__}" diff --git a/ush/get_extrn_mdl_file_dir_info.sh b/ush/get_extrn_mdl_file_dir_info.sh deleted file mode 100755 index 30a6e2d88..000000000 --- a/ush/get_extrn_mdl_file_dir_info.sh +++ /dev/null @@ -1,681 +0,0 @@ -# -#----------------------------------------------------------------------- -# -# Source the variable definitions file and the bash utility functions. -# -#----------------------------------------------------------------------- -# -. ${GLOBAL_VAR_DEFNS_FP} -. $USHDIR/source_util_funcs.sh -# -#----------------------------------------------------------------------- -# -# This file defines a function that is used to obtain information (e.g. -# output file names, system and mass store file and/or directory names) -# for a specified external model, analysis or forecast, and cycle date. -# See the usage statement below for this function should be called and -# the definitions of the input parameters. -# -#----------------------------------------------------------------------- -# - -usage () { - -echo " -Incorrect number of arguments specified: - - Function name: \"${func_name}\" - Number of arguments specified: $# - -Usage: - - ${func_name} \ - extrn_mdl_name \ - anl_or_fcst \ - cdate_FV3LAM \ - time_offset_hrs \ - varname_extrn_mdl_cdate \ - varname_extrn_mdl_lbc_spec_fhrs \ - varname_extrn_mdl_fns \ - varname_extrn_mdl_sysdir \ - varname_extrn_mdl_arcv_fmt \ - varname_extrn_mdl_arcv_fns \ - varname_extrn_mdl_arcv_fps \ - varname_extrn_mdl_arcvrel_dir - -where the arguments are defined as follows: - - extrn_mdl_name: - Name of the external model, i.e. 
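[Editor's note] The Cheyenne-specific branch above pipes the crontab listing through `tail -n +4` to drop a three-line header that is not actually part of the cron table. The same trim is easy to express directly in Python, as in the minimal sketch below; the header size (3 lines) is taken from the comment in the patch, and the sample listing is invented.

```
def strip_crontab_header(crontab_contents: str, header_lines: int = 3) -> str:
    """Drop the first `header_lines` lines of a `crontab -l` listing.

    Equivalent to piping the listing through `tail -n +4` when header_lines
    is 3: keep everything starting at line 4.
    """
    lines = crontab_contents.splitlines()
    return "\n".join(lines[header_lines:])

listing = ("# DO NOT EDIT THIS FILE\n"
           "# (generated header line 2)\n"
           "# (generated header line 3)\n"
           "*/3 * * * * cd /path/to/expt && ./launch_FV3LAM_wflow.sh\n")
print(strip_crontab_header(listing))
# */3 * * * * cd /path/to/expt && ./launch_FV3LAM_wflow.sh
```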
the name of the model providing the - fields from which files containing initial conditions, surface fields, - and/or lateral boundary conditions for the FV3-LAM will be generated. - - anl_or_fcst: - Flag that specifies whether the external model files we are interested - in obtaining are analysis or forecast files. - - cdate_FV3LAM: - The cycle date and time (hours only) for which we want to obtain file - and directory information. This has the form YYYYMMDDHH, where YYYY - is the four-digit starting year of the cycle, MM is the two-digit - month, DD is the two-digit day of the month, and HH is the two-digit - hour of day. - - time_offset_hrs: - The number of hours by which to shift back in time the start time of - the external model forecast from the specified cycle start time of the - FV3-LAM (cdate_FV3LAM). When getting directory and file information on - external model analysis files, this is normally set to 0. When get- - ting directory and file information on external model forecast files, - this may be set to a nonzero value to obtain information for an exter- - nal model run that started time_offset_hrs hours before cdate_FV3LAM - (instead of exactly at cdate_FV3LAM). Note that in this case, the - forecast hours (relative to the external model run's start time) at - which the lateral boundary conditions will be updated must be shifted - forward by time_offset_hrs hours relative to those for the FV3-LAM in - order to make up for the backward-in-time shift in the starting time - of the external model. - - varname_extrn_mdl_cdate: - Name of the global variable that will contain the starting date and - hour of the external model run. - - varname_extrn_mdl_lbc_spec_fhrs: - Name of the global variable that will contain the forecast hours (re- - lative to the starting time of the external model run, which is earli- - er than that of the FV3-LAM by time_offset_hrs hours) at which lateral - boundary condition (LBC) output files are obtained from the external - model (and will be used to update the LBCs of the FV3-LAM). - - varname_extrn_mdl_fns_on_disk: - Name of the global variable that will contain the expected names of - the external model output files on disk. - - varname_extrn_mdl_fns_in_arcv: - Name of the global variable that will contain the expected names of - the external model output files on NOAA HPSS. - - varname_extrn_mdl_sysdir: - Name of the global variable that will contain the system directory in - which the externaml model output files may be stored. - - varname_extrn_mdl_arcv_fmt: - Name of the global variable that will contain the format of the ar- - chive file on HPSS in which the externaml model output files may be - stored. - - varname_extrn_mdl_arcv_fns: - Name of the global variable that will contain the name of the archive - file on HPSS in which the externaml model output files may be stored. - - varname_extrn_mdl_arcv_fps: - Name of the global variable that will contain the full path to the ar- - chive file on HPSS in which the externaml model output files may be - stored. - - varname_extrn_mdl_arcvrel_dir: - Name of the global variable that will contain the archive-relative di- - rectory, i.e. the directory \"inside\" the archive file in which the ex- - ternal model output files may be stored. 
-" -} - -function quit_unless_user_spec_data() { - if [ "${USE_USER_STAGED_EXTRN_FILES}" != "TRUE" ]; then - print_err_msg_exit "\ -The system directory in which to look for external model output files -has not been specified for this external model and machine combination: - extrn_mdl_name = \"${extrn_mdl_name}\" - MACHINE = \"$MACHINE\"" - fi -} - -function get_extrn_mdl_file_dir_info() { - - { save_shell_opts; set -u +x; } > /dev/null 2>&1 - - local func_name="${FUNCNAME[0]}" - # - #----------------------------------------------------------------------- - # - # Specify the set of valid argument names for this script/function. Then - # process the arguments provided to this script/function (which should - # consist of a set of name-value pairs of the form arg1="value1", etc). - # - #----------------------------------------------------------------------- - # - local valid_args=( \ - "extrn_mdl_name" \ - "anl_or_fcst" \ - "cdate_FV3LAM" \ - "time_offset_hrs" \ - "varname_extrn_mdl_cdate" \ - "varname_extrn_mdl_lbc_spec_fhrs" \ - "varname_extrn_mdl_fns_on_disk" \ - "varname_extrn_mdl_fns_in_arcv" \ - "varname_extrn_mdl_sysdir" \ - "varname_extrn_mdl_arcv_fmt" \ - "varname_extrn_mdl_arcv_fns" \ - "varname_extrn_mdl_arcv_fps" \ - "varname_extrn_mdl_arcvrel_dir" \ - ) - process_args valid_args "$@" - - if [ "$#" -ne "13" ]; then - print_err_msg_exit $(usage) - fi - - # - #----------------------------------------------------------------------- - # - # For debugging purposes, print out values of arguments passed to this - # script/function. Note that these will be printed out only if VERBOSE - # is set to TRUE. - # - #----------------------------------------------------------------------- - # - print_input_args valid_args - - # - #----------------------------------------------------------------------- - # - # Declare additional local variables. - # - #----------------------------------------------------------------------- - # - local yyyy yy mm dd hh mn yyyymmdd ddd \ - lbc_spec_fhrs i num_fhrs \ - fcst_hhh fcst_hh fcst_mn \ - prefix suffix fns fns_on_disk fns_in_arcv \ - sysbasedir sysdir \ - arcv_dir arcv_fmt arcv_fns arcv_fps arcvrel_dir - - anl_or_fcst=$(echo_uppercase $anl_or_fcst) - valid_vals_anl_or_fcst=( "ANL" "FCST" ) - check_var_valid_value "anl_or_fcst" "valid_vals_anl_or_fcst" - # - #----------------------------------------------------------------------- - # - # Set cdate to the start time for the external model being used. - # - #----------------------------------------------------------------------- - # - hh=${cdate_FV3LAM:8:2} - yyyymmdd=${cdate_FV3LAM:0:8} - - # Adjust time for offset - cdate=$( $DATE_UTIL --utc --date "${yyyymmdd} ${hh} UTC - ${time_offset_hrs} hours" "+%Y%m%d%H" ) - - yyyy=${cdate:0:4} - yy=${yyyy:2:4} - mm=${cdate:4:2} - dd=${cdate:6:2} - hh=${cdate:8:2} - mn="00" - yyyymmdd=${cdate:0:8} - # ddd is the Julian day -- not 3 digit day of month - ddd=$( $DATE_UTIL --utc --date "${yyyy}-${mm}-${dd} ${hh}:${mn} UTC" "+%j" ) - # - #----------------------------------------------------------------------- - # - # Initialize lbc_spec_fhrs array. Skip the initial time, since it is - # handled separately. 
- # - #----------------------------------------------------------------------- - # - lbc_spec_fhrs=( "" ) - - if [ "${anl_or_fcst}" = "FCST" ]; then - - lbc_spec_fhrs=( "${LBC_SPEC_FCST_HRS[@]}" ) - - num_fhrs=${#lbc_spec_fhrs[@]} - for (( i=0; i<=$((num_fhrs-1)); i++ )); do - # Add in offset to account for shift in initial time - lbc_spec_fhrs[$i]=$(( ${lbc_spec_fhrs[$i]} + time_offset_hrs )) - done - - fi - # - #----------------------------------------------------------------------- - # - # The model may be started with a variety of file types from FV3GFS. - # Set that file type now - # - #----------------------------------------------------------------------- - # - - if [ "${anl_or_fcst}" = "ANL" ]; then - fv3gfs_file_fmt="${FV3GFS_FILE_FMT_ICS}" - elif [ "${anl_or_fcst}" = "FCST" ]; then - fv3gfs_file_fmt="${FV3GFS_FILE_FMT_LBCS}" - fi - - # - #----------------------------------------------------------------------- - # - # Generate an array of file names expected from the external model - # Assume that filenames in archive and on disk are the same, unless - # otherwise specified (primarily on Jet). - # - #----------------------------------------------------------------------- - # - declare -a fns_on_disk - declare -a fns_in_arcv - case "${anl_or_fcst}" in - - "ANL") - - fcst_hh="00" - fcst_mn="00" - - case "${extrn_mdl_name}" in - - "GSMGFS") - fns_in_arcv=("gfs.t${hh}z.atmanl.nemsio" "gfs.t${hh}z.sfcanl.nemsio") - ;; - - "FV3GFS") - case "${fv3gfs_file_fmt}" in - "nemsio") - fns_in_arcv=("gfs.t${hh}z.atmanl.nemsio" "gfs.t${hh}z.sfcanl.nemsio") - - # File names are prefixed with a date time on Jet - if [ "${MACHINE}" = "JET" ]; then - prefix="${yy}${ddd}${hh}00" - fns_on_disk=( ${fns_in_arcv[@]/#/$prefix}) - fi - ;; - "grib2") - fns_in_arcv=( "gfs.t${hh}z.pgrb2.0p25.f000" ) - ;; - "netcdf") - fns_in_arcv=("gfs.t${hh}z.atmanl.nc" "gfs.t${hh}z.sfcanl.nc") - # File names are prefixed with a date time on Jet - if [ "${MACHINE}" = "JET" ]; then - prefix="${yy}${ddd}${hh}00" - fns_on_disk=( ${fns_in_arcv[@]/#/$prefix}) - fi - ;; - esac - ;; - - "RAP") - ;& # Fall through. 
RAP and HRRR follow same naming rules - - "HRRR") - fns_in_arcv=( "${yy}${ddd}${hh}${mn}${fcst_hh}${fcst_mn}" ) - if [ "${MACHINE}" = "JET" ]; then - fns_on_disk=( "${yy}${ddd}${hh}${mn}${fcst_mn}${fcst_hh}${fcst_mn}" ) - fi - ;; - - "NAM") - fns=( "" ) - fns_in_arcv=( "nam.t${hh}z.bgrdsf${fcst_hh}.tm00" ) - ;; - - *) - if [ "${USE_USER_STAGED_EXTRN_FILES}" != "TRUE" ]; then - print_err_msg_exit "\ -The external model file names (either on disk or in archive files) have -not yet been specified for this combination of external model (extrn_mdl_name) -and analysis or forecast (anl_or_fcst): - extrn_mdl_name = \"${extrn_mdl_name}\" - anl_or_fcst = \"${anl_or_fcst}\"" - fi - ;; - - esac # End external model case for ANL files - ;; - - "FCST") - fcst_mn="00" - fcst_hhh=( $( printf "%03d " "${lbc_spec_fhrs[@]}" ) ) - fcst_hh=( $( printf "%02d " "${lbc_spec_fhrs[@]}" ) ) - - case "${extrn_mdl_name}" in - - "GSMGFS") - fn_tmpl="gfs.t${hh}z.atmfFHR3.nemsio" - ;; - - "FV3GFS") - - if [ "${fv3gfs_file_fmt}" = "nemsio" ]; then - fn_tmpl="gfs.t${hh}z.atmfFHR3.nemsio" - if [ "${MACHINE}" = "JET" ]; then - disk_tmpl="${yy}${ddd}${hh}00.gfs.t${hh}z.atmfFHR3.nemsio" - for fhr in ${fcst_hhh[@]} ; do - fns_on_disk+=(${disk_tmpl/FHR3/$fhr}) - done - fi - elif [ "${fv3gfs_file_fmt}" = "grib2" ]; then - fn_tmpl="gfs.t${hh}z.pgrb2.0p25.fFHR3" - elif [ "${fv3gfs_file_fmt}" = "netcdf" ]; then - fn_tmpl="gfs.t${hh}z.atmfFHR3.nc" - if [ "${MACHINE}" = "JET" ]; then - disk_tmpl="${yy}${ddd}${hh}00.gfs.t${hh}z.atmfFHR3.nc" - for fhr in ${fcst_hhh[@]} ; do - fns_on_disk+=(${disk_tmpl/FHR3/$fhr}) - done - fi - fi - ;; - - "RAP") - ;& # Fall through since RAP and HRRR are named the same - - "HRRR") - fn_tmpl="${yy}${ddd}${hh}00FHR200" - if [ "${MACHINE}" = "JET" ]; then - disk_tmpl="${yy}${ddd}${hh}0000FHR2" - for fhr in ${fcst_hhh[@]} ; do - fns_on_disk+=(${disk_tmpl/FHR3/$fhr}) - done - fi - ;; - - "NAM") - fn_tmpl="nam.t${hh}z.bgrdsfFHR3" - ;; - - *) - if [ "${USE_USER_STAGED_EXTRN_FILES}" != "TRUE" ]; then - print_err_msg_exit "\ -The external model file names have not yet been specified for this com- -bination of external model (extrn_mdl_name) and analysis or forecast -(anl_or_fcst): - extrn_mdl_name = \"${extrn_mdl_name}\" - anl_or_fcst = \"${anl_or_fcst}\"" - fi - ;; - - esac # End external model case for FCST files - ;; - esac # End ANL FCST case - - # - # Expand the archive file names for all forecast hours - # - if [ ${anl_or_fcst} = FCST ] ; then - if [[ $fn_tmpl =~ FHR3 ]] ; then - fhrs=( $( printf "%03d " "${lbc_spec_fhrs[@]}" ) ) - tmpl=FHR3 - elif [[ ${fn_tmpl} =~ FHR2 ]] ; then - fhrs=( $( printf "%02d " "${lbc_spec_fhrs[@]}" ) ) - tmpl=FHR2 - else - print_err_msg_exit "\ - Forecast file name templates are expected to contain a template - string, either FHR2 or FHR3" - fi - for fhr in ${fhrs[@]}; do - fns_in_arcv+=(${fn_tmpl/$tmpl/$fhr}) - done - fi - - # Make sure all filenames variables are set. - if [ -z $fns_in_arcv ] ; then - print_err_msg_exit "\ - The script has not set \$fns_in_arcv properly" - fi - - if [ -z ${fns_on_disk:-} ] ; then - fns_on_disk=(${fns_in_arcv[@]}) - fi - # - #----------------------------------------------------------------------- - # - # Set the system directory (i.e. a directory on disk) in which the external - # model output files for the specified cycle date (cdate) may be located. - # Note that this will be used by the calling script only if the output - # files for the specified cdate actually exist at this location. 
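[Editor's note] The shell code being removed above expands a file-name template containing FHR3 (zero-padded to three digits) or FHR2 (two digits) once per boundary-condition forecast hour. The Python sketch below shows that expansion in isolation, with example hours and a sample FV3GFS grib2 template; it is an illustration of the idea, not the configuration of the Python data-ingest tool that replaces this script.

```
def expand_fcst_filenames(fn_tmpl: str, fcst_hrs) -> list:
    """Expand an FHR3/FHR2 placeholder in a file-name template.

    FHR3 -> forecast hour padded to 3 digits, FHR2 -> padded to 2 digits.
    Raises if neither placeholder is present, mirroring the shell check above.
    """
    if "FHR3" in fn_tmpl:
        return [fn_tmpl.replace("FHR3", f"{h:03d}") for h in fcst_hrs]
    if "FHR2" in fn_tmpl:
        return [fn_tmpl.replace("FHR2", f"{h:02d}") for h in fcst_hrs]
    raise ValueError("Template must contain FHR2 or FHR3: " + fn_tmpl)

print(expand_fcst_filenames("gfs.t06z.pgrb2.0p25.fFHR3", [0, 3, 6]))
# ['gfs.t06z.pgrb2.0p25.f000', 'gfs.t06z.pgrb2.0p25.f003', 'gfs.t06z.pgrb2.0p25.f006']
```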
Otherwise, - # the files will be searched for on the mass store (HPSS). - # - #----------------------------------------------------------------------- - # - if [ "${anl_or_fcst}" = "ANL" ]; then - sysdir=$(eval echo ${EXTRN_MDL_SYSBASEDIR_ICS}) - elif [ "${anl_or_fcst}" = "FCST" ]; then - sysdir=$(eval echo ${EXTRN_MDL_SYSBASEDIR_LBCS}) - fi - - # - #----------------------------------------------------------------------- - # - # Set parameters associated with the mass store (HPSS) for the specified - # cycle date (cdate). These consist of: - # - # 1) The type of the archive file (e.g. tar, zip, etc). - # 2) The name of the archive file. - # 3) The full path in HPSS to the archive file. - # 4) The relative directory in the archive file in which the model output - # files are located. - # - # Note that these will be used by the calling script only if the archive - # file for the specified cdate actually exists on HPSS. - # - #----------------------------------------------------------------------- - # - case "${extrn_mdl_name}" in - - "GSMGFS") - arcv_dir="/NCEPPROD/hpssprod/runhistory/rh${yyyy}/${yyyy}${mm}/${yyyymmdd}" - arcv_fmt="tar" - arcv_fns="gpfs_hps_nco_ops_com_gfs_prod_gfs.${cdate}." - if [ "${anl_or_fcst}" = "ANL" ]; then - arcv_fns="${arcv_fns}anl" - arcvrel_dir="." - elif [ "${anl_or_fcst}" = "FCST" ]; then - arcv_fns="${arcv_fns}sigma" - arcvrel_dir="/gpfs/hps/nco/ops/com/gfs/prod/gfs.${yyyymmdd}" - fi - arcv_fns="${arcv_fns}.${arcv_fmt}" - arcv_fps="${arcv_dir}/${arcv_fns}" - ;; - - "FV3GFS") - - if [ "${cdate_FV3LAM}" -lt "2019061200" ]; then - arcv_dir="/NCEPDEV/emc-global/5year/emc.glopara/WCOSS_C/Q2FY19/prfv3rt3/${cdate_FV3LAM}" - arcv_fns="" - elif [ "${cdate_FV3LAM}" -ge "2019061200" ] && \ - [ "${cdate_FV3LAM}" -lt "2020022600" ]; then - arcv_dir="/NCEPPROD/hpssprod/runhistory/rh${yyyy}/${yyyy}${mm}/${yyyymmdd}" - arcv_fns="gpfs_dell1_nco_ops_com_gfs_prod_gfs.${yyyymmdd}_${hh}." - elif [ "${cdate_FV3LAM}" -ge "2020022600" ]; then - arcv_dir="/NCEPPROD/hpssprod/runhistory/rh${yyyy}/${yyyy}${mm}/${yyyymmdd}" - arcv_fns="com_gfs_prod_gfs.${yyyymmdd}_${hh}." - fi - - if [ "${fv3gfs_file_fmt}" = "nemsio" ]; then - - if [ "${anl_or_fcst}" = "ANL" ]; then - arcv_fns="${arcv_fns}gfs_nemsioa" - elif [ "${anl_or_fcst}" = "FCST" ]; then - last_fhr_in_nemsioa="39" - first_lbc_fhr="${lbc_spec_fhrs[0]}" - last_lbc_fhr="${lbc_spec_fhrs[-1]}" - if [ "${last_lbc_fhr}" -le "${last_fhr_in_nemsioa}" ]; then - arcv_fns="${arcv_fns}gfs_nemsioa" - elif [ "${first_lbc_fhr}" -gt "${last_fhr_in_nemsioa}" ]; then - arcv_fns="${arcv_fns}gfs_nemsiob" - else - arcv_fns=( "${arcv_fns}gfs_nemsioa" "${arcv_fns}gfs_nemsiob" ) - fi - fi - - elif [ "${fv3gfs_file_fmt}" = "grib2" ]; then - - arcv_fns="${arcv_fns}gfs_pgrb2" - - elif [ "${fv3gfs_file_fmt}" = "netcdf" ]; then - - if [ "${anl_or_fcst}" = "ANL" ]; then - arcv_fns="${arcv_fns}gfs_nca" - elif [ "${anl_or_fcst}" = "FCST" ]; then - last_fhr_in_netcdfa="39" - first_lbc_fhr="${lbc_spec_fhrs[0]}" - last_lbc_fhr="${lbc_spec_fhrs[-1]}" - if [ "${last_lbc_fhr}" -le "${last_fhr_in_netcdfa}" ]; then - arcv_fns="${arcv_fns}gfs_nca" - elif [ "${first_lbc_fhr}" -gt "${last_fhr_in_netcdfa}" ]; then - arcv_fns="${arcv_fns}gfs_ncb" - else - arcv_fns=( "${arcv_fns}gfs_nca" "${arcv_fns}gfs_ncb" ) - fi - fi - - fi - - arcv_fmt="tar" - - slash_atmos_or_null="" - if [ "${cdate_FV3LAM}" -ge "2021032100" ]; then - slash_atmos_or_null="/atmos" - fi - arcvrel_dir="./gfs.${yyyymmdd}/${hh}${slash_atmos_or_null}" - - is_array arcv_fns - if [ "$?" 
= "0" ]; then - suffix=".${arcv_fmt}" - arcv_fns=( "${arcv_fns[@]/%/$suffix}" ) - prefix="${arcv_dir}/" - arcv_fps=( "${arcv_fns[@]/#/$prefix}" ) - else - arcv_fns="${arcv_fns}.${arcv_fmt}" - arcv_fps="${arcv_dir}/${arcv_fns}" - fi - ;; - - - "RAP") - # - # Note that this is GSL RAPX data, not operational NCEP RAP data. - # An option for the latter may be added in the future. - # - # The zip archive files for RAPX are named such that the forecast - # files for odd-numbered starting hours (e.g. 01, 03, ..., 23) are - # stored together with the forecast files for the corresponding - # preceding even numbered starting hours (e.g. 00, 02, ..., 22, - # respectively), in an archive file whose name contains only the - # even-numbered hour. Thus, in forming the name of the archive - # file, if the starting hour (hh) is odd, we reduce it by one to get - # the corresponding even-numbered hour and use that to form the - # archive file name. - # - # Convert hh to a decimal (i.e. base-10) number to ovoid octal - # interpretation in bash. - - hh_orig=$hh - hh=$((10#$hh)) - if [ $(($hh%2)) = 1 ]; then - hh=$((hh-1)) - fi - # Archive files use 2-digit forecast hour - hh=$( printf "%02d\n" $hh ) - - arcv_dir="/BMC/fdr/Permanent/${yyyy}/${mm}/${dd}/data/fsl/rap/full/wrfnat" - arcv_fmt="zip" - arcv_fns="${yyyy}${mm}${dd}${hh}00.${arcv_fmt}" - arcv_fps="${arcv_dir}/${arcv_fns}" - arcvrel_dir="" - - # Reset hh to its original value - hh=${hh_orig} - ;; - - "HRRR") - # - # Note that this is GSL HRRRX data, not operational NCEP HRRR data. - # An option for the latter may be added in the future. - # - arcv_dir="/BMC/fdr/Permanent/${yyyy}/${mm}/${dd}/data/fsl/hrrr/conus/wrfnat" - arcv_fmt="zip" - arcv_fns="${yyyy}${mm}${dd}${hh}00.${arcv_fmt}" - arcv_fps="${arcv_dir}/${arcv_fns}" - arcvrel_dir="" - ;; - - "NAM") - arcv_dir="/NCEPPROD/hpssprod/runhistory/rh${yyyy}/${yyyy}${mm}/${yyyymmdd}" - arcv_fmt="tar" - arcv_fns="com_nam_prod_nam.${yyyy}${mm}${dd}${hh}.bgrid.${arcv_fmt}" - arcv_fps="${arcv_dir}/${arcv_fns}" - arcvrel_dir="" - ;; - - *) - print_err_msg_exit "\ -Archive file information has not been specified for this external model: - extrn_mdl_name = \"${extrn_mdl_name}\"" - ;; - - esac - # - # Depending on the experiment configuration, the above code may set - # arcv_fns and arcv_fps to either scalars or arrays. If they are not - # arrays, recast them as arrays because that is what is expected in - # the code below. - # - is_array arcv_fns || arcv_fns=( "${arcv_fns}" ) - is_array arcv_fps || arcv_fps=( "${arcv_fps}" ) - # - #----------------------------------------------------------------------- - # - # Use the eval function to set the output variables. Note that each - # of these is set only if the corresponding input variable specifying - # the name to use for the output variable is not empty. - # - #----------------------------------------------------------------------- - # - if [ ! -z "${varname_extrn_mdl_cdate}" ]; then - eval ${varname_extrn_mdl_cdate}="${cdate}" - fi - - if [ ! -z "${varname_extrn_mdl_lbc_spec_fhrs}" ]; then - lbc_spec_fhrs_str="( "$( printf "\"%s\" " "${lbc_spec_fhrs[@]}" )")" - eval ${varname_extrn_mdl_lbc_spec_fhrs}=${lbc_spec_fhrs_str} - fi - - if [ ! -z "${varname_extrn_mdl_fns_on_disk}" ]; then - fns_on_disk_str="( "$( printf "\"%s\" " "${fns_on_disk[@]}" )")" - eval ${varname_extrn_mdl_fns_on_disk}=${fns_on_disk_str} - fi - - if [ ! 
-z "${varname_extrn_mdl_fns_in_arcv}" ]; then - fns_in_arcv_str="( "$( printf "\"%s\" " "${fns_in_arcv[@]}" )")" - eval ${varname_extrn_mdl_fns_in_arcv}=${fns_in_arcv_str} - fi - - if [ ! -z "${varname_extrn_mdl_sysdir}" ]; then - eval ${varname_extrn_mdl_sysdir}="${sysdir}" - fi - - if [ ! -z "${varname_extrn_mdl_arcv_fmt}" ]; then - eval ${varname_extrn_mdl_arcv_fmt}="${arcv_fmt}" - fi - - if [ ! -z "${varname_extrn_mdl_arcv_fns}" ]; then - arcv_fns_str="( "$( printf "\"%s\" " "${arcv_fns[@]}" )")" - eval ${varname_extrn_mdl_arcv_fns}=${arcv_fns_str} - fi - - if [ ! -z "${varname_extrn_mdl_arcv_fps}" ]; then - arcv_fps_str="( "$( printf "\"%s\" " "${arcv_fps[@]}" )")" - eval ${varname_extrn_mdl_arcv_fps}=${arcv_fps_str} - fi - - if [ ! -z "${varname_extrn_mdl_arcvrel_dir}" ]; then - eval ${varname_extrn_mdl_arcvrel_dir}="${arcvrel_dir}" - fi - # - #----------------------------------------------------------------------- - # - # Restore the shell options saved at the beginning of this script/function. - # - #----------------------------------------------------------------------- - # - { restore_shell_opts; } > /dev/null 2>&1 -} diff --git a/ush/launch_FV3LAM_wflow.sh b/ush/launch_FV3LAM_wflow.sh index 183df2c8f..4db7f853b 100755 --- a/ush/launch_FV3LAM_wflow.sh +++ b/ush/launch_FV3LAM_wflow.sh @@ -149,11 +149,11 @@ expt_name="${EXPT_SUBDIR}" # #----------------------------------------------------------------------- # -env_fp="${SR_WX_APP_TOP_DIR}/env/${WFLOW_ENV_FN}" -source "${env_fp}" || print_err_msg_exit "\ -Sourcing platform-specific environment file (env_fp) for the workflow +module use "${SR_WX_APP_TOP_DIR}/modulefiles" +module load "${WFLOW_MOD_FN}" > /dev/null 2>&1 || print_err_msg_exit "\ +Loading platform-specific module file (WFLOW_MOD_FN) for the workflow task failed: - env_fp = \"${env_fp}\"" + WFLOW_MOD_FN = \"${WFLOW_MOD_FN}\"" # #----------------------------------------------------------------------- # diff --git a/ush/link_fix.py b/ush/link_fix.py new file mode 100644 index 000000000..9788a4ad4 --- /dev/null +++ b/ush/link_fix.py @@ -0,0 +1,391 @@ +#!/usr/bin/env python3 + +import unittest +import os +import glob + +from python_utils import import_vars, set_env_var, print_input_args, \ + print_info_msg, print_err_msg_exit, create_symlink_to_file, \ + define_macos_utilities, check_var_valid_value, \ + cd_vrfy, mkdir_vrfy, find_pattern_in_str + +def link_fix(verbose, file_group): + """ This file defines a function that ... + Args: + verbose: True or False + file_group: could be on of ["grid", "orog", "sfc_climo"] + Returns: + a string: resolution + """ + + print_input_args(locals()) + + valid_vals_file_group=["grid", "orog", "sfc_climo"] + check_var_valid_value(file_group, valid_vals_file_group) + + #import all environement variables + import_vars() + + # + #----------------------------------------------------------------------- + # + # Create symlinks in the FIXLAM directory pointing to the grid files. + # These symlinks are needed by the make_orog, make_sfc_climo, make_ic, + # make_lbc, and/or run_fcst tasks. + # + # Note that we check that each target file exists before attempting to + # create symlinks. This is because the "ln" command will create sym- + # links to non-existent targets without returning with a nonzero exit + # code. 
+ # + #----------------------------------------------------------------------- + # + print_info_msg(f'Creating links in the FIXLAM directory to the grid files...', + verbose=verbose) + # + #----------------------------------------------------------------------- + # + # Create globbing patterns for grid, orography, and surface climatology + # files. + # + # + # For grid files (i.e. file_group set to "grid"), symlinks are created + # in the FIXLAM directory to files (of the same names) in the GRID_DIR. + # These symlinks/files and the reason each is needed is listed below: + # + # 1) "C*.mosaic.halo${NHW}.nc" + # This mosaic file for the wide-halo grid (i.e. the grid with a ${NHW}- + # cell-wide halo) is needed as an input to the orography filtering + # executable in the orography generation task. The filtering code + # extracts from this mosaic file the name of the file containing the + # grid on which it will generate filtered topography. Note that the + # orography generation and filtering are both performed on the wide- + # halo grid. The filtered orography file on the wide-halo grid is then + # shaved down to obtain the filtered orography files with ${NH3}- and + # ${NH4}-cell-wide halos. + # + # The raw orography generation step in the make_orog task requires the + # following symlinks/files: + # + # a) C*.mosaic.halo${NHW}.nc + # The script for the make_orog task extracts the name of the grid + # file from this mosaic file; this name should be + # "C*.grid.tile${TILE_RGNL}.halo${NHW}.nc". + # + # b) C*.grid.tile${TILE_RGNL}.halo${NHW}.nc + # This is the + # The script for the make_orog task passes the name of the grid + # file (extracted above from the mosaic file) to the orography + # generation executable. The executable then + # reads in this grid file and generates a raw orography + # file on the grid. The raw orography file is initially renamed "out.oro.nc", + # but for clarity, it is then renamed "C*.raw_orog.tile${TILE_RGNL}.halo${NHW}.nc". + # + # c) The fixed files thirty.second.antarctic.new.bin, landcover30.fixed, + # and gmted2010.30sec.int. + # + # The orography filtering step in the make_orog task requires the + # following symlinks/files: + # + # a) C*.mosaic.halo${NHW}.nc + # This is the mosaic file for the wide-halo grid. The orography + # filtering executable extracts from this file the name of the grid + # file containing the wide-halo grid (which should be + # "${CRES}.grid.tile${TILE_RGNL}.halo${NHW}.nc"). The executable then + # looks for this grid file IN THE DIRECTORY IN WHICH IT IS RUNNING. + # Thus, before running the executable, the script creates a symlink in this run directory that + # points to the location of the actual wide-halo grid file. + # + # b) C*.raw_orog.tile${TILE_RGNL}.halo${NHW}.nc + # This is the raw orography file on the wide-halo grid. The script + # for the make_orog task copies this file to a new file named + # "C*.filtered_orog.tile${TILE_RGNL}.halo${NHW}.nc" that will be + # used as input to the orography filtering executable. The executable + # will then overwrite the contents of this file with the filtered orography. + # Thus, the output of the orography filtering executable will be + # the file C*.filtered_orog.tile${TILE_RGNL}.halo${NHW}.nc. + # + # The shaving step in the make_orog task requires the following: + # + # a) C*.filtered_orog.tile${TILE_RGNL}.halo${NHW}.nc + # This is the filtered orography file on the wide-halo grid. 
+ # This gets shaved down to two different files: + # + # i) ${CRES}.oro_data.tile${TILE_RGNL}.halo${NH0}.nc + # This is the filtered orography file on the halo-0 grid. + # + # ii) ${CRES}.oro_data.tile${TILE_RGNL}.halo${NH4}.nc + # This is the filtered orography file on the halo-4 grid. + # + # Note that the file names of the shaved files differ from that of + # the initial unshaved file on the wide-halo grid in that the field + # after ${CRES} is now "oro_data" (not "filtered_orog") to comply + # with the naming convention used more generally. + # + # 2) "C*.mosaic.halo${NH4}.nc" + # This mosaic file for the grid with a 4-cell-wide halo is needed as + # an input to the surface climatology generation executable. The + # surface climatology generation code reads from this file the number + # of tiles (which should be 1 for a regional grid) and the tile names. + # More importantly, using the ESMF function ESMF_GridCreateMosaic(), + # it creates a data object of type esmf_grid; the grid information + # in this object is obtained from the grid file specified in the mosaic + # file, which should be "C*.grid.tile${TILE_RGNL}.halo${NH4}.nc". The + # dimensions specified in this grid file must match the ones specified + # in the (filtered) orography file "C*.oro_data.tile${TILE_RGNL}.halo${NH4}.nc" + # that is also an input to the surface climatology generation executable. + # If they do not, then the executable will crash with an ESMF library + # error (something like "Arguments are incompatible"). + # + # Thus, for the make_sfc_climo task, the following symlinks/files must + # exist: + # a) "C*.mosaic.halo${NH4}.nc" + # b) "C*.grid.tile${TILE_RGNL}.halo${NH4}.nc" + # c) "C*.oro_data.tile${TILE_RGNL}.halo${NH4}.nc" + # + # 3) + # + # + #----------------------------------------------------------------------- + # + # + if file_group == "grid": + fns=[ + f"C*{DOT_OR_USCORE}mosaic.halo{NHW}.nc", + f"C*{DOT_OR_USCORE}mosaic.halo{NH4}.nc", + f"C*{DOT_OR_USCORE}mosaic.halo{NH3}.nc", + f"C*{DOT_OR_USCORE}grid.tile{TILE_RGNL}.halo{NHW}.nc", + f"C*{DOT_OR_USCORE}grid.tile{TILE_RGNL}.halo{NH3}.nc", + f"C*{DOT_OR_USCORE}grid.tile{TILE_RGNL}.halo{NH4}.nc" + ] + fps=[ os.path.join(GRID_DIR,itm) for itm in fns] + run_task=f"{RUN_TASK_MAKE_GRID}" + # + elif file_group == "orog": + fns=[ + f"C*{DOT_OR_USCORE}oro_data.tile{TILE_RGNL}.halo{NH0}.nc", + f"C*{DOT_OR_USCORE}oro_data.tile{TILE_RGNL}.halo{NH4}.nc" + ] + if CCPP_PHYS_SUITE == "FV3_HRRR": + fns+=[ + f"C*{DOT_OR_USCORE}oro_data_ss.tile{TILE_RGNL}.halo{NH0}.nc", + f"C*{DOT_OR_USCORE}oro_data_ls.tile{TILE_RGNL}.halo{NH0}.nc", + ] + fps=[ os.path.join(OROG_DIR,itm) for itm in fns] + run_task=f"{RUN_TASK_MAKE_OROG}" + # + # The following list of symlinks (which have the same names as their + # target files) need to be created made in order for the make_ics and + # make_lbcs tasks (i.e. tasks involving chgres_cube) to work. + # + elif file_group == "sfc_climo": + num_fields=len(SFC_CLIMO_FIELDS) + fns=[None] * (2 * num_fields) + for i in range(num_fields): + ii=2*i + fns[ii]=f"C*.{SFC_CLIMO_FIELDS[i]}.tile{TILE_RGNL}.halo{NH0}.nc" + fns[ii+1]=f"C*.{SFC_CLIMO_FIELDS[i]}.tile{TILE_RGNL}.halo{NH4}.nc" + fps=[ os.path.join(SFC_CLIMO_DIR,itm) for itm in fns] + run_task=f"{RUN_TASK_MAKE_SFC_CLIMO}" + # + + # + #----------------------------------------------------------------------- + # + # Find all files matching the globbing patterns and make sure that they + # all have the same resolution (an integer) in their names. 
+ #
+ #-----------------------------------------------------------------------
+ #
+ # Find all files matching the globbing patterns and make sure that they
+ # all have the same resolution (an integer) in their names.
+ #
+ #-----------------------------------------------------------------------
+ #
+ i=0
+ res_prev=""
+ res=""
+ fp_prev=""
+
+ for pattern in fps:
+ files = glob.glob(pattern)
+ for fp in files:
+
+ fn = os.path.basename(fp)
+
+ regex_search = "^C([0-9]*).*"
+ res = find_pattern_in_str(regex_search, fn)
+ if res is None:
+ print_err_msg_exit(f'''
+ The resolution could not be extracted from the current file's name. The
+ full path to the file (fp) is:
+ fp = \"{fp}\"
+ This may be because fp contains the * globbing character, which would
+ imply that no files were found that match the globbing pattern specified
+ in fp.''')
+ else:
+ res = res[0]
+
+ if ( i > 0 ) and ( res != res_prev ):
+ print_err_msg_exit(f'''
+ The resolutions (as obtained from the file names) of the previous and
+ current file (fp_prev and fp, respectively) are different:
+ fp_prev = \"{fp_prev}\"
+ fp = \"{fp}\"
+ Please ensure that all files have the same resolution.''')
+
+ i=i+1
+ fp_prev=f"{fp}"
+ res_prev=res
+ #
+ #-----------------------------------------------------------------------
+ #
+ # Replace the * globbing character in the set of globbing patterns with
+ # the resolution. This will result in a set of (full paths to) specific
+ # files.
+ #
+ #-----------------------------------------------------------------------
+ #
+ fps=[ itm.replace('*',res) for itm in fps]
+ #
+ #-----------------------------------------------------------------------
+ #
+ # In creating the various symlinks below, it is convenient to work in
+ # the FIXLAM directory. We will change directory back to the original
+ # one later below.
+ #
+ #-----------------------------------------------------------------------
+ #
+ SAVE_DIR=os.getcwd()
+ cd_vrfy(FIXLAM)
+ #
+ #-----------------------------------------------------------------------
+ #
+ # Use the set of full file paths generated above as the link targets to
+ # create symlinks to these files in the FIXLAM directory.
+ #
+ #-----------------------------------------------------------------------
+ #
+ # If the task under consideration (which will be one of the pre-processing
+ # tasks MAKE_GRID_TN, MAKE_OROG_TN, and MAKE_SFC_CLIMO_TN) was run, then
+ # the target files will be located under the experiment directory. In
+ # this case, we use relative symlinks in order to make the experiment
+ # directory more portable and the symlinks more readable. However, if
+ # the task was not run, then pregenerated grid, orography, or surface
+ # climatology files will be used, and those will be located in an
+ # arbitrary directory (specified by the user) that is somewhere outside
+ # the experiment directory. Thus, in this case, there isn't really an
+ # advantage to using relative symlinks, so we use symlinks with absolute
+ # paths.
+ #
+ if run_task:
+ relative_link_flag=True
+ else:
+ relative_link_flag=False
+
+ for fp in fps:
+ fn=os.path.basename(fp)
+ create_symlink_to_file(fp,fn,relative_link_flag)
+ #
+ #-----------------------------------------------------------------------
+ #
+ # Set the C-resolution based on the resolution appearing in the file
+ # names.
+ #
+ #-----------------------------------------------------------------------
+ #
+ cres=f"C{res}"
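[Editor's note: the relative- vs. absolute-link behavior selected by relative_link_flag above can be pictured with a small stand-alone sketch. This is not the repository's create_symlink_to_file() helper, only an illustration of the general idea.]

    # Stand-alone sketch of relative vs. absolute symlink creation; not the
    # repository's create_symlink_to_file(), just the concept it implements.
    import os

    def make_link(target, link_name, relative=True):
        """Create link_name pointing at target, as a relative or absolute symlink."""
        link_dir = os.path.dirname(os.path.abspath(link_name))
        if relative:
            # Store the target as a path relative to the directory holding the
            # link, which keeps the link valid if the whole tree is relocated.
            target = os.path.relpath(os.path.abspath(target), start=link_dir)
        else:
            target = os.path.abspath(target)
        if os.path.lexists(link_name):
            os.remove(link_name)
        os.symlink(target, link_name)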
+ #
+ #-----------------------------------------------------------------------
+ #
+ # If considering grid files, create a symlink to the halo4 grid file
+ # that does not contain the halo size in its name. This is needed by
+ # the tasks that generate the initial and lateral boundary condition
+ # files.
+ #
+ #-----------------------------------------------------------------------
+ #
+ if file_group == "grid":
+
+ target=f"{cres}{DOT_OR_USCORE}grid.tile{TILE_RGNL}.halo{NH4}.nc"
+ symlink=f"{cres}{DOT_OR_USCORE}grid.tile{TILE_RGNL}.nc"
+ create_symlink_to_file(target,symlink,True)
+ #
+ # The surface climatology file generation code looks for a grid file
+ # having a name of the form "C${GFDLgrid_RES}_grid.tile7.halo4.nc" (i.e.
+ # the C-resolution used in the name of this file is the number of grid
+ # points per horizontal direction per tile, just like in the global model).
+ # Thus, if we are running the MAKE_SFC_CLIMO_TN task, if the grid is of
+ # GFDLgrid type, and if we are not using GFDLgrid_RES in filenames (i.e.
+ # we are using the equivalent global uniform grid resolution instead),
+ # then create a link whose name uses the GFDLgrid_RES that points to the
+ # link whose name uses the equivalent global uniform resolution.
+ #
+ if RUN_TASK_MAKE_SFC_CLIMO and \
+ GRID_GEN_METHOD == "GFDLgrid" and \
+ not GFDLgrid_USE_GFDLgrid_RES_IN_FILENAMES:
+ target=f"{cres}{DOT_OR_USCORE}grid.tile{TILE_RGNL}.halo{NH4}.nc"
+ symlink=f"C{GFDLgrid_RES}{DOT_OR_USCORE}grid.tile{TILE_RGNL}.nc"
+ create_symlink_to_file(target,symlink,relative_link_flag)
+ #
+ #-----------------------------------------------------------------------
+ #
+ # If considering surface climatology files, create symlinks to the surface
+ # climatology files that do not contain the halo size in their names.
+ # These are needed by the task that generates the initial condition files.
+ #
+ #-----------------------------------------------------------------------
+ #
+ if file_group == "sfc_climo":
+
+ tmp=[ f"{cres}.{itm}" for itm in SFC_CLIMO_FIELDS]
+ fns_sfc_climo_with_halo_in_fn=[ f"{itm}.tile{TILE_RGNL}.halo{NH4}.nc" for itm in tmp]
+ fns_sfc_climo_no_halo_in_fn=[ f"{itm}.tile{TILE_RGNL}.nc" for itm in tmp]
+
+ for i in range(num_fields):
+ target=f"{fns_sfc_climo_with_halo_in_fn[i]}"
+ symlink=f"{fns_sfc_climo_no_halo_in_fn[i]}"
+ create_symlink_to_file(target, symlink, True)
+ #
+ # In order to be able to specify the surface climatology file names in
+ # the forecast model's namelist file, a symlink must be created in the
+ # FIXLAM directory for each surface climatology field; the symlink has
+ # "tile1" in its name (and no "halo") and points to the corresponding
+ # "tile7.halo0" file.
+ #
+ tmp=[ f"{cres}.{itm}" for itm in SFC_CLIMO_FIELDS ]
+ fns_sfc_climo_tile7_halo0_in_fn=[ f"{itm}.tile{TILE_RGNL}.halo{NH0}.nc" for itm in tmp ]
+ fns_sfc_climo_tile1_no_halo_in_fn=[ f"{itm}.tile1.nc" for itm in tmp ]
+
+ for i in range(num_fields):
+ target=f"{fns_sfc_climo_tile7_halo0_in_fn[i]}"
+ symlink=f"{fns_sfc_climo_tile1_no_halo_in_fn[i]}"
+ create_symlink_to_file(target,symlink,True)
+ #
+ #-----------------------------------------------------------------------
+ #
+ # Change directory back to original one.
+ # + #----------------------------------------------------------------------- + # + cd_vrfy(SAVE_DIR) + + return res + +class Testing(unittest.TestCase): + def test_link_fix(self): + res = link_fix(verbose=True, file_group="grid") + self.assertTrue( res == "3357") + def setUp(self): + define_macos_utilities() + TEST_DIR = os.path.join(os.path.dirname(os.path.abspath(__file__)), "test_data"); + FIXLAM = os.path.join(TEST_DIR, "expt", "fix_lam") + mkdir_vrfy("-p",FIXLAM) + set_env_var("FIXLAM",FIXLAM) + set_env_var("DOT_OR_USCORE","_") + set_env_var("TILE_RGNL",7) + set_env_var("NH0",0) + set_env_var("NHW",6) + set_env_var("NH4",4) + set_env_var("NH3",3) + set_env_var("GRID_DIR",TEST_DIR + os.sep + "RRFS_CONUS_3km") + set_env_var("RUN_TASK_MAKE_GRID","FALSE") + set_env_var("OROG_DIR",TEST_DIR + os.sep + "RRFS_CONUS_3km") + set_env_var("RUN_TASK_MAKE_OROG","FALSE") + set_env_var("SFC_CLIMO_DIR",TEST_DIR + os.sep + "RRFS_CONUS_3km") + set_env_var("RUN_TASK_MAKE_SFC_CLIMO","FALSE") + set_env_var("CCPP_PHYS_SUITE","FV3_GSD_SAR") diff --git a/ush/load_modules_run_task.sh b/ush/load_modules_run_task.sh index 56c005faa..7a9547fe0 100755 --- a/ush/load_modules_run_task.sh +++ b/ush/load_modules_run_task.sh @@ -86,18 +86,19 @@ jjob_fp="$2" # #----------------------------------------------------------------------- # -# Sourcing ufs-srweather-app build env file +# Loading ufs-srweather-app build module files # #----------------------------------------------------------------------- # machine=$(echo_lowercase $MACHINE) -env_fp="${SR_WX_APP_TOP_DIR}/env/${BUILD_ENV_FN}" -module use "${SR_WX_APP_TOP_DIR}/env" -source "${env_fp}" || print_err_msg_exit "\ -Sourcing platform- and compiler-specific environment file (env_fp) for the + +source "${SR_WX_APP_TOP_DIR}/etc/lmod-setup.sh" +module use "${SR_WX_APP_TOP_DIR}/modulefiles" +module load "${BUILD_MOD_FN}" || print_err_msg_exit "\ +Sourcing platform- and compiler-specific module file (BUILD_MOD_FN) for the workflow task specified by task_name failed: task_name = \"${task_name}\" - env_fp = \"${env_fp}\"" + BUILD_MOD_FN = \"${BUILD_MOD_FN}\"" # #----------------------------------------------------------------------- # @@ -112,7 +113,7 @@ workflow task specified by task_name failed: # # The regional_workflow repository contains module files for the # workflow tasks in the template rocoto XML file for the FV3-LAM work- -# flow that need modules not loaded in the env_fn above. +# flow that need modules not loaded in the BUILD_MOD_FN above. 
# # The full path to a module file for a given task is # diff --git a/ush/machine/cheyenne.sh b/ush/machine/cheyenne.sh index 6cf61061a..cf80db74f 100644 --- a/ush/machine/cheyenne.sh +++ b/ush/machine/cheyenne.sh @@ -46,12 +46,13 @@ QUEUE_HPSS=${QUEUE_HPSS:-"regular"} QUEUE_FCST=${QUEUE_FCST:-"regular"} # UFS SRW App specific paths -FIXgsm=${FIXgsm:-"/glade/p/ral/jntp/UFS_CAM/fix/fix_am"} -FIXaer=${FIXaer:-"/glade/p/ral/jntp/UFS_CAM/fix/fix_aer"} -FIXlut=${FIXlut:-"/glade/p/ral/jntp/UFS_CAM/fix/fix_lut"} -TOPO_DIR=${TOPO_DIR:-"/glade/p/ral/jntp/UFS_CAM/fix/fix_orog"} -SFC_CLIMO_INPUT_DIR=${SFC_CLIMO_INPUT_DIR:-"/glade/p/ral/jntp/UFS_CAM/fix/climo_fields_netcdf"} -FIXLAM_NCO_BASEDIR=${FIXLAM_NCO_BASEDIR:-"/needs/to/be/specified"} +staged_data_dir="/glade/p/ral/jntp/UFS_SRW_App/develop" +FIXgsm=${FIXgsm:-"${staged_data_dir}/fix/fix_am"} +FIXaer=${FIXaer:-"${staged_data_dir}/fix/fix_aer"} +FIXlut=${FIXlut:-"${staged_data_dir}/fix/fix_lut"} +TOPO_DIR=${TOPO_DIR:-"${staged_data_dir}/fix/fix_orog"} +SFC_CLIMO_INPUT_DIR=${SFC_CLIMO_INPUT_DIR:-"${staged_data_dir}/fix/fix_sfc_climo"} +DOMAIN_PREGEN_BASEDIR=${DOMAIN_PREGEN_BASEDIR:-"${staged_data_dir}/FV3LAM_pregen"} # Run commands for executables RUN_CMD_SERIAL="time" @@ -62,12 +63,12 @@ RUN_CMD_POST='mpirun -np $nprocs' # MET/METplus-Related Paths MET_INSTALL_DIR=${MET_INSTALL_DIR:-"/glade/p/ral/jntp/MET/MET_releases/10.0.0"} METPLUS_PATH=${METPLUS_PATH:-"/glade/p/ral/jntp/MET/METplus/METplus-4.0.0"} -CCPA_OBS_DIR=${CCPA_OBS_DIR:-"/glade/p/ral/jntp/UFS_SRW_app/develop/obs_data/ccpa/proc"} -MRMS_OBS_DIR=${MRMS_OBS_DIR:-"/glade/p/ral/jntp/UFS_SRW_app/develop/obs_data/mrms/proc"} -NDAS_OBS_DIR=${NDAS_OBS_DIR:-"/glade/p/ral/jntp/UFS_SRW_app/develop/obs_data/ndas/proc"} +CCPA_OBS_DIR=${CCPA_OBS_DIR:-"${staged_data_dir}/obs_data/ccpa/proc"} +MRMS_OBS_DIR=${MRMS_OBS_DIR:-"${staged_data_dir}/obs_data/mrms/proc"} +NDAS_OBS_DIR=${NDAS_OBS_DIR:-"${staged_data_dir}/obs_data/ndas/proc"} MET_BIN_EXEC=${MET_BIN_EXEC:-"bin"} # Test Data Locations -TEST_PREGEN_BASEDIR="/glade/p/ral/jntp/UFS_SRW_app/FV3LAM_pregen" -TEST_COMINgfs="/glade/p/ral/jntp/UFS_SRW_app/COMGFS" -TEST_EXTRN_MDL_SOURCE_BASEDIR="/glade/p/ral/jntp/UFS_SRW_app/staged_extrn_mdl_files" +TEST_COMIN="${staged_data_dir}/COMGFS" +TEST_PREGEN_BASEDIR="${staged_data_dir}/FV3LAM_pregen" +TEST_EXTRN_MDL_SOURCE_BASEDIR="${staged_data_dir}/input_model_data" diff --git a/ush/machine/gaea.sh b/ush/machine/gaea.sh new file mode 100755 index 000000000..88f715366 --- /dev/null +++ b/ush/machine/gaea.sh @@ -0,0 +1,70 @@ +#!/bin/bash + +function file_location() { + + # Return the default location of external model files on disk + + local external_file_fmt external_model location + + external_model=${1} + external_file_fmt=${2} + + case ${external_model} in + + "FV3GFS") + location='/lustre/f2/dev/Mark.Potts/EPIC/SRW/model_data/FV3GFS/${yyyymmdd}${hh}' + ;; + + esac + echo ${location:-} +} + + +EXTRN_MDL_SYSBASEDIR_ICS=${EXTRN_MDL_SYSBASEDIR_ICS:-$(file_location \ + ${EXTRN_MDL_NAME_ICS} \ + ${FV3GFS_FILE_FMT_ICS})} +EXTRN_MDL_SYSBASEDIR_LBCS=${EXTRN_MDL_SYSBASEDIR_LBCS:-$(file_location \ + ${EXTRN_MDL_NAME_LBCS} \ + ${FV3GFS_FILE_FMT_ICS})} + +# System scripts to source to initialize various commands within workflow +# scripts (e.g. "module"). +if [ -z ${ENV_INIT_SCRIPTS_FPS:-""} ]; then + ENV_INIT_SCRIPTS_FPS=( "/etc/profile" ) +fi + + +# Commands to run at the start of each workflow task. 
+PRE_TASK_CMDS='{ ulimit -s unlimited; ulimit -a; }' + +# Architecture information +WORKFLOW_MANAGER="rocoto" +SLURM_NATIVE_CMD="-M c3" +NCORES_PER_NODE=${NCORES_PER_NODE:-32} +SCHED=${SCHED:-"slurm"} +QUEUE_DEFAULT=${QUEUE_DEFAULT:-"normal"} +QUEUE_HPSS=${QUEUE_DEFAULT:-"normal"} +QUEUE_FCST=${QUEUE_DEFAULT:-"normal"} +WTIME_MAKE_LBCS="00:60:00" + +# UFS SRW App specific paths +staged_data_dir="/lustre/f2/pdata/ncep/UFS_SRW_App/develop" +FIXgsm=${FIXgsm:-"${staged_data_dir}/fix/fix_am"} +FIXaer=${FIXaer:-"${staged_data_dir}/fix/fix_aer"} +FIXlut=${FIXlut:-"${staged_data_dir}/fix/fix_lut"} +TOPO_DIR=${TOPO_DIR:-"${staged_data_dir}/fix/fix_orog"} +SFC_CLIMO_INPUT_DIR=${SFC_CLIMO_INPUT_DIR:-"${staged_data_dir}/fix/fix_sfc_climo"} +TEST_EXTRN_MDL_SOURCE_BASEDIR="${staged_data_dir}/input_model_data" + +RUN_CMD_SERIAL="time" +#Run Commands currently differ for GNU/openmpi +#RUN_CMD_UTILS='mpirun --mca btl tcp,vader,self -np $nprocs' +#RUN_CMD_FCST='mpirun --mca btl tcp,vader,self -np ${PE_MEMBER01}' +#RUN_CMD_POST='mpirun --mca btl tcp,vader,self -np $nprocs' +RUN_CMD_UTILS='srun --mpi=pmi2 -n $nprocs' +RUN_CMD_FCST='srun --mpi=pmi2 -n ${PE_MEMBER01}' +RUN_CMD_POST='srun --mpi=pmi2 -n $nprocs' + +# MET Installation Locations +# MET Plus is not yet supported on gaea +# Test Data Locations diff --git a/ush/machine/hera.sh b/ush/machine/hera.sh index bb0f77313..454a1e938 100644 --- a/ush/machine/hera.sh +++ b/ush/machine/hera.sh @@ -28,6 +28,8 @@ EXTRN_MDL_SYSBASEDIR_LBCS=${EXTRN_MDL_SYSBASEDIR_LBCS:-$(file_location \ ${EXTRN_MDL_NAME_LBCS} \ ${FV3GFS_FILE_FMT_LBCS})} +EXTRN_MDL_DATA_STORES=${EXTRN_MDL_DATA_STORES:-"hpss aws nomads"} + # System scripts to source to initialize various commands within workflow # scripts (e.g. "module"). if [ -z ${ENV_INIT_SCRIPTS_FPS:-""} ]; then @@ -49,12 +51,13 @@ PARTITION_FCST=${PARTITION_FCST:-"hera"} QUEUE_FCST=${QUEUE_FCST:-"batch"} # UFS SRW App specific paths -FIXgsm=${FIXgsm:-"/scratch1/NCEPDEV/global/glopara/fix/fix_am"} -FIXaer=${FIXaer:-"/scratch1/NCEPDEV/global/glopara/fix/fix_aer"} -FIXlut=${FIXlut:-"/scratch1/NCEPDEV/global/glopara/fix/fix_lut"} -TOPO_DIR=${TOPO_DIR:-"/scratch1/NCEPDEV/global/glopara/fix/fix_orog"} -SFC_CLIMO_INPUT_DIR=${SFC_CLIMO_INPUT_DIR:-"/scratch1/NCEPDEV/global/glopara/fix/fix_sfc_climo"} -FIXLAM_NCO_BASEDIR=${FIXLAM_NCO_BASEDIR:-"/scratch2/BMC/det/FV3LAM_pregen"} +staged_data_dir="/scratch2/BMC/det/UFS_SRW_App/develop" +FIXgsm=${FIXgsm:-"${staged_data_dir}/fix/fix_am"} +FIXaer=${FIXaer:-"${staged_data_dir}/fix/fix_aer"} +FIXlut=${FIXlut:-"${staged_data_dir}/fix/fix_lut"} +TOPO_DIR=${TOPO_DIR:-"${staged_data_dir}/fix/fix_orog"} +SFC_CLIMO_INPUT_DIR=${SFC_CLIMO_INPUT_DIR:-"${staged_data_dir}/fix/fix_sfc_climo"} +DOMAIN_PREGEN_BASEDIR=${DOMAIN_PREGEN_BASEDIR:-"${staged_data_dir}/FV3LAM_pregen"} # Run commands for executables RUN_CMD_SERIAL="time" @@ -65,14 +68,14 @@ RUN_CMD_POST="srun" # MET/METplus-Related Paths MET_INSTALL_DIR=${MET_INSTALL_DIR:-"/contrib/met/10.0.0"} METPLUS_PATH=${METPLUS_PATH:-"/contrib/METplus/METplus-4.0.0"} -CCPA_OBS_DIR=${CCPA_OBS_DIR:-"/scratch2/BMC/det/UFS_SRW_app/develop/obs_data/ccpa/proc"} -MRMS_OBS_DIR=${MRMS_OBS_DIR:-"/scratch2/BMC/det/UFS_SRW_app/develop/obs_data/mrms/proc"} -NDAS_OBS_DIR=${NDAS_OBS_DIR:-"/scratch2/BMC/det/UFS_SRW_app/develop/obs_data/ndas/proc"} +CCPA_OBS_DIR=${CCPA_OBS_DIR:-"${staged_data_dir}/obs_data/ccpa/proc"} +MRMS_OBS_DIR=${MRMS_OBS_DIR:-"${staged_data_dir}/obs_data/mrms/proc"} +NDAS_OBS_DIR=${NDAS_OBS_DIR:-"${staged_data_dir}/obs_data/ndas/proc"} 
MET_BIN_EXEC=${MET_BIN_EXEC:-"bin"} # Test Data Locations -TEST_PREGEN_BASEDIR="/scratch2/BMC/det/UFS_SRW_app/FV3LAM_pregen" -TEST_COMINgfs="/scratch2/NCEPDEV/fv3-cam/noscrub/UFS_SRW_App/COMGFS" -TEST_EXTRN_MDL_SOURCE_BASEDIR="/scratch2/BMC/det/UFS_SRW_app/develop/model_data" +TEST_COMIN="${staged_data_dir}/COMGFS" +TEST_PREGEN_BASEDIR="${staged_data_dir}/FV3LAM_pregen" +TEST_EXTRN_MDL_SOURCE_BASEDIR="${staged_data_dir}/input_model_data" TEST_ALT_EXTRN_MDL_SYSBASEDIR_ICS="/scratch2/BMC/det/UFS_SRW_app/dummy_FV3GFS_sys_dir" TEST_ALT_EXTRN_MDL_SYSBASEDIR_LBCS="/scratch2/BMC/det/UFS_SRW_app/dummy_FV3GFS_sys_dir" diff --git a/ush/machine/jet.sh b/ush/machine/jet.sh index dac90d549..f383090ec 100644 --- a/ush/machine/jet.sh +++ b/ush/machine/jet.sh @@ -44,6 +44,8 @@ EXTRN_MDL_SYSBASEDIR_LBCS=${EXTRN_MDL_SYSBASEDIR_LBCS:-$(file_location \ ${EXTRN_MDL_NAME_LBCS} \ ${FV3GFS_FILE_FMT_LBCS})} +EXTRN_MDL_DATA_STORES=${EXTRN_MDL_DATA_STORES:-"hpss aws nomads"} + # System scripts to source to initialize various commands within workflow # scripts (e.g. "module"). if [ -z ${ENV_INIT_SCRIPTS_FPS:-""} ]; then @@ -65,12 +67,13 @@ PARTITION_FCST=${PARTITION_FCST:-"sjet,vjet,kjet,xjet"} QUEUE_FCST=${QUEUE_FCST:-"batch"} # UFS SRW App specific paths -FIXgsm=${FIXgsm:-"/lfs4/HFIP/hfv3gfs/glopara/git/fv3gfs/fix/fix_am"} -FIXaer=${FIXaer:-"/lfs4/HFIP/hfv3gfs/glopara/git/fv3gfs/fix/fix_aer"} -FIXlut=${FIXlut:-"/lfs4/HFIP/hfv3gfs/glopara/git/fv3gfs/fix/fix_lut"} -TOPO_DIR=${TOPO_DIR:-"/lfs4/HFIP/hfv3gfs/glopara/git/fv3gfs/fix/fix_orog"} -SFC_CLIMO_INPUT_DIR=${SFC_CLIMO_INPUT_DIR:-"/lfs4/HFIP/hfv3gfs/glopara/git/fv3gfs/fix/fix_sfc_climo"} -FIXLAM_NCO_BASEDIR=${FIXLAM_NCO_BASEDIR:-"/mnt/lfs4/BMC/wrfruc/FV3-LAM/pregen"} +staged_data_dir="/mnt/lfs4/BMC/wrfruc/UFS_SRW_App/develop" +FIXgsm=${FIXgsm:-"${staged_data_dir}/fix/fix_am"} +FIXaer=${FIXaer:-"${staged_data_dir}/fix/fix_aer"} +FIXlut=${FIXlut:-"${staged_data_dir}/fix/fix_lut"} +TOPO_DIR=${TOPO_DIR:-"${staged_data_dir}/fix/fix_orog"} +SFC_CLIMO_INPUT_DIR=${SFC_CLIMO_INPUT_DIR:-"${staged_data_dir}/fix/fix_sfc_climo"} +DOMAIN_PREGEN_BASEDIR=${DOMAIN_PREGEN_BASEDIR:-"${staged_data_dir}/FV3LAM_pregen"} # Run commands for executables RUN_CMD_SERIAL="time" @@ -79,6 +82,6 @@ RUN_CMD_FCST="srun" RUN_CMD_POST="srun" # Test Data Locations -TEST_PREGEN_BASEDIR="/mnt/lfs4/BMC/wrfruc/UFS_SRW_app/FV3LAM_pregen" -TEST_COMINgfs="/mnt/lfs4/BMC/wrfruc/UFS_SRW_app/COMGFS" -TEST_EXTRN_MDL_SOURCE_BASEDIR="/mnt/lfs4/BMC/wrfruc/UFS_SRW_app/staged_extrn_mdl_files" +TEST_COMIN="${staged_data_dir}/COMGFS" +TEST_PREGEN_BASEDIR="${staged_data_dir}/FV3LAM_pregen" +TEST_EXTRN_MDL_SOURCE_BASEDIR="${staged_data_dir}/input_model_data" diff --git a/ush/machine/noaacloud.sh b/ush/machine/noaacloud.sh index fe4276661..62a698da8 100755 --- a/ush/machine/noaacloud.sh +++ b/ush/machine/noaacloud.sh @@ -1,4 +1,4 @@ -#!/bin/bash +#!/bin/bash set -x @@ -16,16 +16,13 @@ function file_location() { "FV3GFS") location='/contrib/GST/model_data/FV3GFS/${yyyymmdd}${hh}' ;; - *) - print_info_msg"\ - External model \'${external_model}\' does not have a default - location on Hera. 
Will try to pull from HPSS" - ;; esac echo ${location:-} } - +export PROJ_LIB=/contrib/GST/miniconda/envs/regional_workflow/share/proj +export OPT=/contrib/EPIC/hpc-modules +export PATH=${PATH}:/contrib/GST/miniconda/envs/regional_workflow/bin EXTRN_MDL_SYSBASEDIR_ICS=${EXTRN_MDL_SYSBASEDIR_ICS:-$(file_location \ ${EXTRN_MDL_NAME_ICS} \ @@ -34,6 +31,8 @@ EXTRN_MDL_SYSBASEDIR_LBCS=${EXTRN_MDL_SYSBASEDIR_LBCS:-$(file_location \ ${EXTRN_MDL_NAME_LBCS} \ ${FV3GFS_FILE_FMT_ICS})} +EXTRN_MDL_DATA_STORES=${EXTRN_MDL_DATA_STORES:-"aws nomads"} + # System scripts to source to initialize various commands within workflow # scripts (e.g. "module"). if [ -z ${ENV_INIT_SCRIPTS_FPS:-""} ]; then @@ -50,22 +49,26 @@ NCORES_PER_NODE=${NCORES_PER_NODE:-36} SCHED=${SCHED:-"slurm"} # UFS SRW App specific paths -FIXgsm=${FIXgsm:-"/contrib/EPIC/fix/fix_am"} -FIXaer=${FIXaer:-"/contrib/EPIC/fix/fix_aer"} -FIXlut=${FIXlut:-"/contrib/EPIC/fix/fix_lut"} -TOPO_DIR=${TOPO_DIR:-"/contrib/EPIC/fix/fix_orog"} -SFC_CLIMO_INPUT_DIR=${SFC_CLIMO_INPUT_DIR:-"/contrib/EPIC/fix/fix_sfc_climo"} -FIXLAM_NCO_BASEDIR=${FIXLAM_NCO_BASEDIR:-"/scratch2/BMC/det/FV3LAM_pregen"} +staged_data_dir="/contrib/EPIC/UFS_SRW_App/develop" +FIXgsm=${FIXgsm:-"${staged_data_dir}/fix/fix_am"} +FIXaer=${FIXaer:-"${staged_data_dir}/fix/fix_aer"} +FIXlut=${FIXlut:-"${staged_data_dir}/fix/fix_lut"} +TOPO_DIR=${TOPO_DIR:-"${staged_data_dir}/fix/fix_orog"} +SFC_CLIMO_INPUT_DIR=${SFC_CLIMO_INPUT_DIR:-"${staged_data_dir}/fix/fix_sfc_climo"} +TEST_EXTRN_MDL_SOURCE_BASEDIR="${staged_data_dir}/input_model_data" RUN_CMD_SERIAL="time" #Run Commands currently differ for GNU/openmpi #RUN_CMD_UTILS='mpirun --mca btl tcp,vader,self -np $nprocs' #RUN_CMD_FCST='mpirun --mca btl tcp,vader,self -np ${PE_MEMBER01}' #RUN_CMD_POST='mpirun --mca btl tcp,vader,self -np $nprocs' -RUN_CMD_UTILS='srun --mpi=pmi2 -n $nprocs' -RUN_CMD_FCST='srun --mpi=pmi2 -n ${PE_MEMBER01}' -RUN_CMD_POST='srun --mpi=pmi2 -n $nprocs' +RUN_CMD_UTILS='mpiexec -np $nprocs' +RUN_CMD_FCST='mpiexec -np ${PE_MEMBER01}' +RUN_CMD_POST='mpiexec -np $nprocs' + +export build_mod_fn="wflow_noaacloud" +BUILD_MOD_FN="wflow_noaacloud" # MET Installation Locations # MET Plus is not yet supported on noaacloud - +. 
/contrib/EPIC/.bash_conda diff --git a/ush/machine/odin.sh b/ush/machine/odin.sh index 1bceaa873..58d360a63 100644 --- a/ush/machine/odin.sh +++ b/ush/machine/odin.sh @@ -9,16 +9,26 @@ function file_location() { external_model=${1} external_file_fmt=${2} + staged_data_dir="/scratch/ywang/UFS_SRW_App/develop" + location="" case ${external_model} in "GSMGFS") - location='/scratch/ywang/EPIC/GDAS/2019053000_mem001' + location="${staged_data_dir}/input_model_data/GFS" ;; "FV3GFS") - location='/scratch/ywang/test_runs/FV3_regional/gfs/${yyyymmdd}' + location="${staged_data_dir}/input_model_data/FV3GFS" + ;; + "HRRR") + location="${staged_data_dir}/input_model_data/HRRR" + ;; + "RAP") + location="${staged_data_dir}/input_model_data/RAP" + ;; + "NAM") + location="${staged_data_dir}/input_model_data/NAM" ;; - esac echo ${location:-} @@ -52,15 +62,19 @@ PARTITION_FCST=${PARTITION_FCST:-"workq"} QUEUE_FCST=${QUEUE_FCST:-"workq"} # UFS SRW App specific paths -FIXgsm=${FIXgsm:-"/scratch/ywang/fix/theia_fix/fix_am"} -FIXaer=${FIXaer:-"/scratch/ywang/fix/theia_fix/fix_aer"} -FIXlut=${FIXlut:-"/scratch/ywang/fix/theia_fix/fix_lut"} -TOPO_DIR=${TOPO_DIR:-"/scratch/ywang/fix/theia_fix/fix_orog"} -SFC_CLIMO_INPUT_DIR=${SFC_CLIMO_INPUT_DIR:-"/scratch/ywang/fix/climo_fields_netcdf"} -FIXLAM_NCO_BASEDIR=${FIXLAM_NCO_BASEDIR:-"/needs/to/be/specified"} +FIXgsm=${FIXgsm:-"${staged_data_dir}/fix/fix_am"} +FIXaer=${FIXaer:-"${staged_data_dir}/fix/fix_aer"} +FIXlut=${FIXlut:-"${staged_data_dir}/fix/fix_lut"} +TOPO_DIR=${TOPO_DIR:-"${staged_data_dir}/fix/fix_orog"} +SFC_CLIMO_INPUT_DIR=${SFC_CLIMO_INPUT_DIR:-"${staged_data_dir}/fix/fix_sfc_climo"} +DOMAIN_PREGEN_BASEDIR=${DOMAIN_PREGEN_BASEDIR:-"${staged_data_dir}/FV3LAM_pregen"} # Run commands for executables RUN_CMD_SERIAL="srun -n 1" RUN_CMD_UTILS='srun -n $nprocs' RUN_CMD_FCST='srun -n ${PE_MEMBER01}' RUN_CMD_POST="srun -n 1" + +# Test Data Locations +TEST_PREGEN_BASEDIR="${staged_data_dir}/FV3LAM_pregen" +TEST_EXTRN_MDL_SOURCE_BASEDIR="${staged_data_dir}/input_model_data" diff --git a/ush/machine/orion.sh b/ush/machine/orion.sh index 5f4abc0d9..b8032e241 100644 --- a/ush/machine/orion.sh +++ b/ush/machine/orion.sh @@ -22,6 +22,8 @@ EXTRN_MDL_SYSBASEDIR_LBCS=${EXTRN_MDL_SYSBASEDIR_LBCS:-$(file_location \ ${EXTRN_MDL_NAME_LBCS} \ ${FV3GFS_FILE_FMT_LBCS})} +EXTRN_MDL_DATA_STORES=${EXTRN_MDL_DATA_STORES:-"aws nomads"} + # System scripts to source to initialize various commands within workflow # scripts (e.g. "module"). 
if [ -z ${ENV_INIT_SCRIPTS_FPS:-""} ]; then @@ -43,12 +45,13 @@ PARTITION_FCST=${PARTITION_FCST:-"orion"} QUEUE_FCST=${QUEUE_FCST:-"batch"} # UFS SRW App specific paths -FIXgsm=${FIXgsm:-"/work/noaa/global/glopara/fix/fix_am"} -FIXaer=${FIXaer:-"/work/noaa/global/glopara/fix/fix_aer"} -FIXlut=${FIXlut:-"/work/noaa/global/glopara/fix/fix_lut"} -TOPO_DIR=${TOPO_DIR:-"/work/noaa/global/glopara/fix/fix_orog"} -SFC_CLIMO_INPUT_DIR=${SFC_CLIMO_INPUT_DIR:-"/work/noaa/global/glopara/fix/fix_sfc_climo"} -FIXLAM_NCO_BASEDIR=${FIXLAM_NCO_BASEDIR:-"/needs/to/be/specified"} +staged_data_dir="/work/noaa/fv3-cam/UFS_SRW_App/develop" +FIXgsm=${FIXgsm:-"${staged_data_dir}/fix/fix_am"} +FIXaer=${FIXaer:-"${staged_data_dir}/fix/fix_aer"} +FIXlut=${FIXlut:-"${staged_data_dir}/fix/fix_lut"} +TOPO_DIR=${TOPO_DIR:-"${staged_data_dir}/fix/fix_orog"} +SFC_CLIMO_INPUT_DIR=${SFC_CLIMO_INPUT_DIR:-"${staged_data_dir}/fix/fix_sfc_climo"} +DOMAIN_PREGEN_BASEDIR=${DOMAIN_PREGEN_BASEDIR:-"${staged_data_dir}/FV3LAM_pregen"} # Run commands for executables RUN_CMD_SERIAL="time" @@ -56,5 +59,16 @@ RUN_CMD_UTILS="srun" RUN_CMD_FCST='srun -n ${PE_MEMBER01}' RUN_CMD_POST="srun" +# MET/METplus-Related Paths +MET_INSTALL_DIR=${MET_INSTALL_DIR:-"/apps/contrib/MET/10.1.0"} +METPLUS_PATH=${METPLUS_PATH:-"/apps/contrib/MET/METplus/METplus-4.0.0"} +CCPA_OBS_DIR=${CCPA_OBS_DIR:-"${staged_data_dir}/obs_data/ccpa/proc"} +MRMS_OBS_DIR=${MRMS_OBS_DIR:-"${staged_data_dir}/obs_data/mrms/proc"} +NDAS_OBS_DIR=${NDAS_OBS_DIR:-"${staged_data_dir}/obs_data/ndas/proc"} +MET_BIN_EXEC=${MET_BIN_EXEC:-"bin"} + # Test Data Locations -TEST_EXTRN_MDL_SOURCE_BASEDIR=/work/noaa/gsd-fv3-dev/gsketefia/UFS/staged_extrn_mdl_files +TEST_COMIN="${staged_data_dir}/COMGFS" +TEST_PREGEN_BASEDIR="${staged_data_dir}/FV3LAM_pregen" +TEST_EXTRN_MDL_SOURCE_BASEDIR="${staged_data_dir}/input_model_data" + diff --git a/ush/machine/singularity.sh b/ush/machine/singularity.sh index 528e1dbd5..14f840800 100644 --- a/ush/machine/singularity.sh +++ b/ush/machine/singularity.sh @@ -21,6 +21,8 @@ EXTRN_MDL_SYSBASEDIR_LBCS=${EXTRN_MDL_SYSBASEDIR_LBCS:-$(file_location \ ${EXTRN_MDL_NAME_LBCS} \ ${FV3GFS_FILE_FMT_LBCS})} +EXTRN_MDL_DATA_STORES=${EXTRN_MDL_DATA_STORES:-"aws nomads"} + # System scripts to source to initialize various commands within workflow # scripts (e.g. "module"). if [ -z ${ENV_INIT_SCRIPTS_FPS:-""} ]; then @@ -47,7 +49,7 @@ FIXaer=${FIXaer:-"/contrib/global/glopara/fix/fix_aer"} FIXlut=${FIXlut:-"/contrib/global/glopara/fix/fix_lut"} TOPO_DIR=${TOPO_DIR:-"/contrib/global/glopara/fix/fix_orog"} SFC_CLIMO_INPUT_DIR=${SFC_CLIMO_INPUT_DIR:-"/contrib/global/glopara/fix/fix_sfc_climo"} -FIXLAM_NCO_BASEDIR=${FIXLAM_NCO_BASEDIR:-"/needs/to/be/specified"} +DOMAIN_PREGEN_BASEDIR=${DOMAIN_PREGEN_BASEDIR:-"/needs/to/be/specified"} # Run commands for executables RUN_CMD_SERIAL="time" diff --git a/ush/machine/stampede.sh b/ush/machine/stampede.sh index 41afa5fc1..3f879ea22 100644 --- a/ush/machine/stampede.sh +++ b/ush/machine/stampede.sh @@ -9,15 +9,26 @@ function file_location() { external_model=${1} external_file_fmt=${2} + staged_data_dir="/work2/00315/tg455890/stampede2/UFS_SRW_App/develop" + location="" case ${external_model} in "GSMGFS") - ;& # Fall through. 
All files in same place + location="${staged_data_dir}/input_model_data/GFS" + ;; "FV3GFS") - location='/scratch/00315/tg455890/GDAS/20190530/2019053000_mem001' + location="${staged_data_dir}/input_model_data/FV3GFS" + ;; + "HRRR") + location="${staged_data_dir}/input_model_data/HRRR" + ;; + "RAP") + location="${staged_data_dir}/input_model_data/RAP" + ;; + "NAM") + location="${staged_data_dir}/input_model_data/NAM" ;; - esac echo ${location:-} @@ -51,15 +62,20 @@ PARTITION_FCST=${PARTITION_FCST:-"normal"} QUEUE_FCST=${QUEUE_FCST:-"normal"} # UFS SRW App specific paths -FIXgsm=${FIXgsm:-"/work/00315/tg455890/stampede2/regional_fv3/fix_am"} -FIXaer=${FIXaer:-"/work/00315/tg455890/stampede2/regional_fv3/fix_aer"} -FIXlut=${FIXlut:-"/work/00315/tg455890/stampede2/regional_fv3/fix_lut"} -TOPO_DIR=${TOPO_DIR:-"/work/00315/tg455890/stampede2/regional_fv3/fix_orog"} -SFC_CLIMO_INPUT_DIR=${SFC_CLIMO_INPUT_DIR:-"/work/00315/tg455890/stampede2/regional_fv3/climo_fields_netcdf"} -FIXLAM_NCO_BASEDIR=${FIXLAM_NCO_BASEDIR:-"/needs/to/be/specified"} +staged_data_dir="/work2/00315/tg455890/stampede2/UFS_SRW_App/develop" +FIXgsm=${FIXgsm:-"${staged_data_dir}/fix/fix_am"} +FIXaer=${FIXaer:-"${staged_data_dir}/fix/fix_aer"} +FIXlut=${FIXlut:-"${staged_data_dir}/fix/fix_lut"} +TOPO_DIR=${TOPO_DIR:-"${staged_data_dir}/fix/fix_orog"} +SFC_CLIMO_INPUT_DIR=${SFC_CLIMO_INPUT_DIR:-"${staged_data_dir}/fix/fix_sfc_climo"} +DOMAIN_PREGEN_BASEDIR=${DOMAIN_PREGEN_BASEDIR:-"${staged_data_dir}/FV3LAM_pregen"} # Run commands for executables RUN_CMD_SERIAL="time" RUN_CMD_UTILS='ibrun -np $nprocs' RUN_CMD_FCST='ibrun -np $nprocs' RUN_CMD_POST='ibrun -np $nprocs' + +# Test Data Locations +TEST_PREGEN_BASEDIR="${staged_data_dir}/FV3LAM_pregen" +TEST_EXTRN_MDL_SOURCE_BASEDIR="${staged_data_dir}/input_model_data" diff --git a/ush/machine/wcoss_dell_p3.sh b/ush/machine/wcoss_dell_p3.sh index 03656438f..9bf525a35 100644 --- a/ush/machine/wcoss_dell_p3.sh +++ b/ush/machine/wcoss_dell_p3.sh @@ -37,6 +37,8 @@ EXTRN_MDL_SYSBASEDIR_LBCS=${EXTRN_MDL_SYSBASEDIR_LBCS:-$(file_location \ ${EXTRN_MDL_NAME_LBCS} \ ${FV3GFS_FILE_FMT_LBCS})} +EXTRN_MDL_DATA_STORES=${EXTRN_MDL_DATA_STORES:-"hpss"} + # System scripts to source to initialize various commands within workflow # scripts (e.g. "module"). 
if [ -z ${ENV_INIT_SCRIPTS_FPS:-""} ]; then @@ -55,12 +57,13 @@ QUEUE_HPSS=${QUEUE_HPSS:-"dev_transfer"} QUEUE_FCST=${QUEUE_FCST:-"dev"} # UFS SRW App specific paths -FIXgsm=${FIXgsm:-"/gpfs/dell2/emc/modeling/noscrub/emc.glopara/git/fv3gfs/fix/fix_am"} -FIXaer=${FIXaer:-"/gpfs/dell2/emc/modeling/noscrub/emc.glopara/git/fv3gfs/fix/fix_aer"} -FIXlut=${FIXlut:-"/gpfs/dell2/emc/modeling/noscrub/emc.glopara/git/fv3gfs/fix/fix_lut"} -TOPO_DIR=${TOPO_DIR:-"/gpfs/dell2/emc/modeling/noscrub/emc.glopara/git/fv3gfs/fix/fix_orog"} -SFC_CLIMO_INPUT_DIR=${SFC_CLIMO_INPUT_DIR:-"/gpfs/dell2/emc/modeling/noscrub/emc.glopara/git/fv3gfs/fix/fix_sfc_climo"} -FIXLAM_NCO_BASEDIR=${FIXLAM_NCO_BASEDIR:-"/gpfs/dell2/emc/modeling/noscrub/UFS_SRW_App/FV3LAM_pregen"} +staged_data_dir="/gpfs/dell2/emc/modeling/noscrub/UFS_SRW_App/develop" +FIXgsm=${FIXgsm:-"${staged_data_dir}/fix/fix_am"} +FIXaer=${FIXaer:-"${staged_data_dir}/fix/fix_aer"} +FIXlut=${FIXlut:-"${staged_data_dir}/fix/fix_lut"} +TOPO_DIR=${TOPO_DIR:-"${staged_data_dir}/fix/fix_orog"} +SFC_CLIMO_INPUT_DIR=${SFC_CLIMO_INPUT_DIR:-"${staged_data_dir}/fix/fix_sfc_climo"} +DOMAIN_PREGEN_BASEDIR=${DOMAIN_PREGEN_BASEDIR:-"${staged_data_dir}/FV3LAM_pregen"} # Run commands for executables RUN_CMD_SERIAL="mpirun" @@ -71,12 +74,12 @@ RUN_CMD_POST="mpirun" # MET/METplus-Related Paths MET_INSTALL_DIR=${MET_INSTALL_DIR:-"/gpfs/dell2/emc/verification/noscrub/emc.metplus/met/10.0.0"} METPLUS_PATH=${METPLUS_PATH:-"/gpfs/dell2/emc/verification/noscrub/emc.metplus/METplus/METplus-4.0.0"} -CCPA_OBS_DIR=${CCPA_OBS_DIR:-"/gpfs/dell2/emc/modeling/noscrub/UFS_SRW_App/obs_data/ccpa/proc"} -MRMS_OBS_DIR=${MRMS_OBS_DIR:-"/gpfs/dell2/emc/modeling/noscrub/UFS_SRW_App/obs_data/mrms/proc"} -NDAS_OBS_DIR=${NDAS_OBS_DIR:-"/gpfs/dell2/emc/modeling/noscrub/UFS_SRW_App/obs_data/ndas/proc"} +CCPA_OBS_DIR=${CCPA_OBS_DIR:-"${staged_data_dir}/obs_data/ccpa/proc"} +MRMS_OBS_DIR=${MRMS_OBS_DIR:-"${staged_data_dir}/obs_data/mrms/proc"} +NDAS_OBS_DIR=${NDAS_OBS_DIR:-"${staged_data_dir}/obs_data/ndas/proc"} MET_BIN_EXEC=${MET_BIN_EXEC:-"exec"} # Test Data Locations -TEST_PREGEN_BASEDIR="/gpfs/dell2/emc/modeling/noscrub/UFS_SRW_App/FV3LAM_pregen" -TEST_COMINgfs="/gpfs/dell2/emc/modeling/noscrub/UFS_SRW_App/COMGFS" -TEST_EXTRN_MDL_SOURCE_BASEDIR="/gpfs/dell2/emc/modeling/noscrub/UFS_SRW_App/extrn_mdl_files" +TEST_PREGEN_BASEDIR="${staged_data_dir}/FV3LAM_pregen" +TEST_COMIN="${staged_data_dir}/COMGFS" +TEST_EXTRN_MDL_SOURCE_BASEDIR="${staged_data_dir}/input_model_data" diff --git a/ush/mrms_pull_topofhour.py b/ush/mrms_pull_topofhour.py index 43bc70619..374424201 100644 --- a/ush/mrms_pull_topofhour.py +++ b/ush/mrms_pull_topofhour.py @@ -3,80 +3,86 @@ import re, csv, glob import bisect import numpy as np - -# Copy and unzip MRMS files that are closest to top of hour -# Done every hour on a 20-minute lag - -# Include option to define valid time on command line -# Used to backfill verification -#try: -valid_time = str(sys.argv[1]) - -YYYY = int(valid_time[0:4]) -MM = int(valid_time[4:6]) -DD = int(valid_time[6:8]) -HH = int(valid_time[8:19]) - -valid = datetime.datetime(YYYY,MM,DD,HH,0,0) - -#except IndexError: -# valid_time = None - -# Default to current hour if not defined on command line -#if valid_time is None: -# now = datetime.datetime.utcnow() -# YYYY = int(now.strftime('%Y')) -# MM = int(now.strftime('%m')) -# DD = int(now.strftime('%d')) -# HH = int(now.strftime('%H')) - -# valid = datetime.datetime(YYYY,MM,DD,HH,0,0) -# valid_time = valid.strftime('%Y%m%d%H') - -print('Pulling '+valid_time+' 
MRMS data') - -# Set up working directory -DATA_HEAD = str(sys.argv[2]) -MRMS_PROD_DIR = str(sys.argv[3]) -MRMS_PRODUCT = str(sys.argv[4]) -level = str(sys.argv[5]) - -VALID_DIR = os.path.join(DATA_HEAD,valid.strftime('%Y%m%d')) -if not os.path.exists(VALID_DIR): - os.makedirs(VALID_DIR) -os.chdir(DATA_HEAD) - -# Sort list of files for each MRMS product -print(valid.strftime('%Y%m%d')) -if valid.strftime('%Y%m%d') < '20200304': - search_path = MRMS_PROD_DIR+'/'+valid.strftime('%Y%m%d')+'/dcom/us007003/ldmdata/obs/upperair/mrms/conus/'+MRMS_PRODUCT+'/'+MRMS_PRODUCT+'*.gz' -elif valid.strftime('%Y%m%d') >= '20200304': - search_path = MRMS_PROD_DIR+'/'+valid.strftime('%Y%m%d')+'/upperair/mrms/conus/'+MRMS_PRODUCT+'/'+MRMS_PRODUCT+'*.gz' -file_list = [f for f in glob.glob(search_path)] -time_list = [file_list[x][-24:-9] for x in range(len(file_list))] -int_list = [int(time_list[x][0:8]+time_list[x][9:15]) for x in range(len(time_list))] -int_list.sort() -datetime_list = [datetime.datetime.strptime(str(x),"%Y%m%d%H%M%S") for x in int_list] - -# Find the MRMS file closest to the valid time -i = bisect.bisect_left(datetime_list,valid) -closest_timestamp = min(datetime_list[max(0, i-1): i+2], key=lambda date: abs(valid - date)) - -# Check to make sure closest file is within +/- 15 mins of top of the hour -# Copy and rename the file for future ease -difference = abs(closest_timestamp - valid) -if difference.total_seconds() <= 900: - filename1 = MRMS_PRODUCT+level+closest_timestamp.strftime('%Y%m%d-%H%M%S')+'.grib2.gz' - filename2 = MRMS_PRODUCT+level+valid.strftime('%Y%m%d-%H')+'0000.grib2.gz' - - if valid.strftime('%Y%m%d') < '20200304': - print('cp '+MRMS_PROD_DIR+'/'+valid.strftime('%Y%m%d')+'/dcom/us007003/ldmdata/obs/upperair/mrms/conus/'+MRMS_PRODUCT+'/'+filename1+' '+VALID_DIR+'/'+filename2) - - os.system('cp '+MRMS_PROD_DIR+'/'+valid.strftime('%Y%m%d')+'/dcom/us007003/ldmdata/obs/upperair/mrms/conus/'+MRMS_PRODUCT+'/'+filename1+' '+VALID_DIR+'/'+filename2) - os.system('gunzip '+VALID_DIR+'/'+filename2) - elif valid.strftime('%Y%m%d') >= '20200304': - print('cp '+MRMS_PROD_DIR+'/'+valid.strftime('%Y%m%d')+'/upperair/mrms/conus/'+MRMS_PRODUCT+'/'+filename1+' '+VALID_DIR+'/'+filename2) - - os.system('cp '+MRMS_PROD_DIR+'/'+valid.strftime('%Y%m%d')+'/upperair/mrms/conus/'+MRMS_PRODUCT+'/'+filename1+' '+VALID_DIR+'/'+filename2) - os.system('gunzip '+VALID_DIR+'/'+filename2) - +import unittest + +if __name__ == '__main__': + # Copy and unzip MRMS files that are closest to top of hour + # Done every hour on a 20-minute lag + + # Include option to define valid time on command line + # Used to backfill verification + #try: + valid_time = str(sys.argv[1]) + + YYYY = int(valid_time[0:4]) + MM = int(valid_time[4:6]) + DD = int(valid_time[6:8]) + HH = int(valid_time[8:19]) + + valid = datetime.datetime(YYYY,MM,DD,HH,0,0) + + #except IndexError: + # valid_time = None + + # Default to current hour if not defined on command line + #if valid_time is None: + # now = datetime.datetime.utcnow() + # YYYY = int(now.strftime('%Y')) + # MM = int(now.strftime('%m')) + # DD = int(now.strftime('%d')) + # HH = int(now.strftime('%H')) + + # valid = datetime.datetime(YYYY,MM,DD,HH,0,0) + # valid_time = valid.strftime('%Y%m%d%H') + + print('Pulling '+valid_time+' MRMS data') + + # Set up working directory + DATA_HEAD = str(sys.argv[2]) + MRMS_PROD_DIR = str(sys.argv[3]) + MRMS_PRODUCT = str(sys.argv[4]) + level = str(sys.argv[5]) + + VALID_DIR = os.path.join(DATA_HEAD,valid.strftime('%Y%m%d')) + if not 
os.path.exists(VALID_DIR): + os.makedirs(VALID_DIR) + os.chdir(DATA_HEAD) + + # Sort list of files for each MRMS product + print(valid.strftime('%Y%m%d')) + if valid.strftime('%Y%m%d') < '20200303': + search_path = MRMS_PROD_DIR+'/'+valid.strftime('%Y%m%d')+'/dcom/us007003/ldmdata/obs/upperair/mrms/conus/'+MRMS_PRODUCT+'/'+MRMS_PRODUCT+'*.gz' + elif valid.strftime('%Y%m%d') >= '20200303': + search_path = MRMS_PROD_DIR+'/'+valid.strftime('%Y%m%d')+'/upperair/mrms/conus/'+MRMS_PRODUCT+'/'+MRMS_PRODUCT+'*.gz' + file_list = [f for f in glob.glob(search_path)] + time_list = [file_list[x][-24:-9] for x in range(len(file_list))] + int_list = [int(time_list[x][0:8]+time_list[x][9:15]) for x in range(len(time_list))] + int_list.sort() + datetime_list = [datetime.datetime.strptime(str(x),"%Y%m%d%H%M%S") for x in int_list] + + # Find the MRMS file closest to the valid time + i = bisect.bisect_left(datetime_list,valid) + closest_timestamp = min(datetime_list[max(0, i-1): i+2], key=lambda date: abs(valid - date)) + + # Check to make sure closest file is within +/- 15 mins of top of the hour + # Copy and rename the file for future ease + difference = abs(closest_timestamp - valid) + if difference.total_seconds() <= 900: + filename1 = MRMS_PRODUCT+level+closest_timestamp.strftime('%Y%m%d-%H%M%S')+'.grib2.gz' + filename2 = MRMS_PRODUCT+level+valid.strftime('%Y%m%d-%H')+'0000.grib2.gz' + + if valid.strftime('%Y%m%d') < '20200303': + print('cp '+MRMS_PROD_DIR+'/'+valid.strftime('%Y%m%d')+'/dcom/us007003/ldmdata/obs/upperair/mrms/conus/'+MRMS_PRODUCT+'/'+filename1+' '+VALID_DIR+'/'+filename2) + + os.system('cp '+MRMS_PROD_DIR+'/'+valid.strftime('%Y%m%d')+'/dcom/us007003/ldmdata/obs/upperair/mrms/conus/'+MRMS_PRODUCT+'/'+filename1+' '+VALID_DIR+'/'+filename2) + os.system('gunzip '+VALID_DIR+'/'+filename2) + elif valid.strftime('%Y%m%d') >= '20200303': + print('cp '+MRMS_PROD_DIR+'/'+valid.strftime('%Y%m%d')+'/upperair/mrms/conus/'+MRMS_PRODUCT+'/'+filename1+' '+VALID_DIR+'/'+filename2) + + os.system('cp '+MRMS_PROD_DIR+'/'+valid.strftime('%Y%m%d')+'/upperair/mrms/conus/'+MRMS_PRODUCT+'/'+filename1+' '+VALID_DIR+'/'+filename2) + os.system('gunzip '+VALID_DIR+'/'+filename2) + +#dummy unittest +class Testing(unittest.TestCase): + def test_mrms_pull_topfhour(self): + pass diff --git a/ush/predef_grid_params.yaml b/ush/predef_grid_params.yaml new file mode 100644 index 000000000..330a9a647 --- /dev/null +++ b/ush/predef_grid_params.yaml @@ -0,0 +1,909 @@ +# +#----------------------------------------------------------------------- +# +# Set grid and other parameters according to the value of the predefined +# domain (PREDEF_GRID_NAME). Note that the code will enter this script +# only if PREDEF_GRID_NAME has a valid (and non-empty) value. +# +#################### +# The following comments need to be updated: +#################### +# +# 1) Reset the experiment title (expt_title). +# 2) Reset the grid parameters. +# 3) If the write component is to be used (i.e. QUILTING is set to +# "TRUE") and the variable WRTCMP_PARAMS_TMPL_FN containing the name +# of the write-component template file is unset or empty, set that +# filename variable to the appropriate preexisting template file. +# +# For the predefined domains, we determine the starting and ending indi- +# ces of the regional grid within tile 6 by specifying margins (in units +# of number of cells on tile 6) between the boundary of tile 6 and that +# of the regional grid (tile 7) along the left, right, bottom, and top +# portions of these boundaries. 
Note that we do not use "west", "east", +# "south", and "north" here because the tiles aren't necessarily orient- +# ed such that the left boundary segment corresponds to the west edge, +# etc. The widths of these margins (in units of number of cells on tile +# 6) are specified via the parameters +# +# num_margin_cells_T6_left +# num_margin_cells_T6_right +# num_margin_cells_T6_bottom +# num_margin_cells_T6_top +# +# where the "_T6" in these names is used to indicate that the cell count +# is on tile 6, not tile 7. +# +# Note that we must make the margins wide enough (by making the above +# four parameters large enough) such that a region of halo cells around +# the boundary of the regional grid fits into the margins, i.e. such +# that the halo does not overrun the boundary of tile 6. (The halo is +# added later in another script; its function is to feed in boundary +# conditions to the regional grid.) Currently, a halo of 5 regional +# grid cells is used around the regional grid. Setting num_margin_- +# cells_T6_... to at least 10 leaves enough room for this halo. +# +#----------------------------------------------------------------------- +# +#----------------------------------------------------------------------- +# +# The RRFS CONUS domain with ~25km cells. +# +#----------------------------------------------------------------------- +# +"RRFS_CONUS_25km": + GRID_GEN_METHOD: "ESGgrid" + ESGgrid_LON_CTR: -97.5 + ESGgrid_LAT_CTR: 38.5 + ESGgrid_DELX: 25000.0 + ESGgrid_DELY: 25000.0 + ESGgrid_NX: 219 + ESGgrid_NY: 131 + ESGgrid_PAZI: 0.0 + ESGgrid_WIDE_HALO_WIDTH: 6 + DT_ATMOS: 40 + LAYOUT_X: 5 + LAYOUT_Y: 2 + BLOCKSIZE: 40 + #if QUILTING = True + WRTCMP_write_groups: 1 + WRTCMP_write_tasks_per_group: 2 + WRTCMP_output_grid: "lambert_conformal" + WRTCMP_cen_lon: -97.5 + WRTCMP_cen_lat: 38.5 + WRTCMP_stdlat1: 38.5 + WRTCMP_stdlat2: 38.5 + WRTCMP_nx: 217 + WRTCMP_ny: 128 + WRTCMP_lon_lwr_left: -122.719528 + WRTCMP_lat_lwr_left: 21.138123 + WRTCMP_dx: 25000.0 + WRTCMP_dy: 25000.0 +# +#----------------------------------------------------------------------- +# +# The RRFS CONUS domain with ~25km cells that can be initialized from the HRRR. +# +#----------------------------------------------------------------------- +# +"RRFS_CONUScompact_25km": + GRID_GEN_METHOD: "ESGgrid" + ESGgrid_LON_CTR: -97.5 + ESGgrid_LAT_CTR: 38.5 + ESGgrid_DELX: 25000.0 + ESGgrid_DELY: 25000.0 + ESGgrid_NX: 202 + ESGgrid_NY: 116 + ESGgrid_PAZI: 0.0 + ESGgrid_WIDE_HALO_WIDTH: 6 + DT_ATMOS: 40 + LAYOUT_X: 5 + LAYOUT_Y: 2 + BLOCKSIZE: 40 + #if QUILTING = True + WRTCMP_write_groups: 1 + WRTCMP_write_tasks_per_group: 2 + WRTCMP_output_grid: "lambert_conformal" + WRTCMP_cen_lon: -97.5 + WRTCMP_cen_lat: 38.5 + WRTCMP_stdlat1: 38.5 + WRTCMP_stdlat2: 38.5 + WRTCMP_nx: 199 + WRTCMP_ny: 111 + WRTCMP_lon_lwr_left: -121.23349066 + WRTCMP_lat_lwr_left: 23.41731593 + WRTCMP_dx: 25000.0 + WRTCMP_dy: 25000.0 +# +#----------------------------------------------------------------------- +# +# The RRFS CONUS domain with ~13km cells. 
+# +#----------------------------------------------------------------------- +# +"RRFS_CONUS_13km": + GRID_GEN_METHOD: "ESGgrid" + ESGgrid_LON_CTR: -97.5 + ESGgrid_LAT_CTR: 38.5 + ESGgrid_DELX: 13000.0 + ESGgrid_DELY: 13000.0 + ESGgrid_NX: 420 + ESGgrid_NY: 252 + ESGgrid_PAZI: 0.0 + ESGgrid_WIDE_HALO_WIDTH: 6 + DT_ATMOS: 45 + LAYOUT_X: 16 + LAYOUT_Y: 10 + BLOCKSIZE: 32 + #if QUILTING = True + WRTCMP_write_groups: 1 + WRTCMP_write_tasks_per_group: 10 + WRTCMP_output_grid: "lambert_conformal" + WRTCMP_cen_lon: -97.5 + WRTCMP_cen_lat: 38.5 + WRTCMP_stdlat1: 38.5 + WRTCMP_stdlat2: 38.5 + WRTCMP_nx: 416 + WRTCMP_ny: 245 + WRTCMP_lon_lwr_left: -122.719528 + WRTCMP_lat_lwr_left: 21.138123 + WRTCMP_dx: 13000.0 + WRTCMP_dy: 13000.0 +# +#----------------------------------------------------------------------- +# +# The RRFS CONUS domain with ~13km cells that can be initialized from the HRRR. +# +#----------------------------------------------------------------------- +# +"RRFS_CONUScompact_13km": + GRID_GEN_METHOD: "ESGgrid" + ESGgrid_LON_CTR: -97.5 + ESGgrid_LAT_CTR: 38.5 + ESGgrid_DELX: 13000.0 + ESGgrid_DELY: 13000.0 + ESGgrid_NX: 396 + ESGgrid_NY: 232 + ESGgrid_PAZI: 0.0 + ESGgrid_WIDE_HALO_WIDTH: 6 + DT_ATMOS: 45 + LAYOUT_X: 16 + LAYOUT_Y: 10 + BLOCKSIZE: 32 + # if QUILTING = TRUE + WRTCMP_write_groups: 1 + WRTCMP_write_tasks_per_group: 16 + WRTCMP_output_grid: "lambert_conformal" + WRTCMP_cen_lon: -97.5 + WRTCMP_cen_lat: 38.5 + WRTCMP_stdlat1: 38.5 + WRTCMP_stdlat2: 38.5 + WRTCMP_nx: 393 + WRTCMP_ny: 225 + WRTCMP_lon_lwr_left: -121.70231097 + WRTCMP_lat_lwr_left: 22.57417972 + WRTCMP_dx: 13000.0 + WRTCMP_dy: 13000.0 +# +#----------------------------------------------------------------------- +# +# The RRFS CONUS domain with ~13km cells. +# +#----------------------------------------------------------------------- +# +"RRFS_CONUS_13km": + GRID_GEN_METHOD: "ESGgrid" + ESGgrid_LON_CTR: -97.5 + ESGgrid_LAT_CTR: 38.5 + ESGgrid_DELX: 13000.0 + ESGgrid_DELY: 13000.0 + ESGgrid_NX: 396 + ESGgrid_NY: 232 + ESGgrid_PAZI: 0.0 + ESGgrid_WIDE_HALO_WIDTH: 6 + DT_ATMOS: 45 + LAYOUT_X: 16 + LAYOUT_Y: 10 + BLOCKSIZE: 32 + #if QUILTING = True + WRTCMP_write_groups: 1 + WRTCMP_write_tasks_per_group: 10 + WRTCMP_output_grid: "lambert_conformal" + WRTCMP_cen_lon: -97.5 + WRTCMP_cen_lat: 38.5 + WRTCMP_stdlat1: 38.5 + WRTCMP_stdlat2: 38.5 + WRTCMP_nx: 393 + WRTCMP_ny: 225 + WRTCMP_lon_lwr_left: -121.70231097 + WRTCMP_lat_lwr_left: 22.57417972 + WRTCMP_dx: 13000.0 + WRTCMP_dy: 13000.0 +# +#----------------------------------------------------------------------- +# +# The RRFS CONUS domain with ~3km cells. +# +#----------------------------------------------------------------------- +# +"RRFS_CONUS_3km": + GRID_GEN_METHOD: "ESGgrid" + ESGgrid_LON_CTR: -97.5 + ESGgrid_LAT_CTR: 38.5 + ESGgrid_DELX: 3000.0 + ESGgrid_DELY: 3000.0 + ESGgrid_NX: 1820 + ESGgrid_NY: 1092 + ESGgrid_PAZI: 0.0 + ESGgrid_WIDE_HALO_WIDTH: 6 + DT_ATMOS: 36 + LAYOUT_X: 28 + LAYOUT_Y: 28 + BLOCKSIZE: 29 + #if QUILTING = True + WRTCMP_write_groups: 1 + WRTCMP_write_tasks_per_group: 28 + WRTCMP_output_grid: "lambert_conformal" + WRTCMP_cen_lon: -97.5 + WRTCMP_cen_lat: 38.5 + WRTCMP_stdlat1: 38.5 + WRTCMP_stdlat2: 38.5 + WRTCMP_nx: 1799 + WRTCMP_ny: 1059 + WRTCMP_lon_lwr_left: -122.719528 + WRTCMP_lat_lwr_left: 21.138123 + WRTCMP_dx: 3000.0 + WRTCMP_dy: 3000.0 +# +#----------------------------------------------------------------------- +# +# The RRFS CONUS domain with ~3km cells that can be initialized from the HRRR. 
+# +#----------------------------------------------------------------------- +# +"RRFS_CONUScompact_3km": + GRID_GEN_METHOD: "ESGgrid" + ESGgrid_LON_CTR: -97.5 + ESGgrid_LAT_CTR: 38.5 + ESGgrid_DELX: 3000.0 + ESGgrid_DELY: 3000.0 + ESGgrid_NX: 1748 + ESGgrid_NY: 1038 + ESGgrid_PAZI: 0.0 + ESGgrid_WIDE_HALO_WIDTH: 6 + DT_ATMOS: 40 + LAYOUT_X: 30 + LAYOUT_Y: 16 + BLOCKSIZE: 32 + #if QUILTING = True + WRTCMP_write_groups: 1 + WRTCMP_write_tasks_per_group: 16 + WRTCMP_output_grid: "lambert_conformal" + WRTCMP_cen_lon: -97.5 + WRTCMP_cen_lat: 38.5 + WRTCMP_stdlat1: 38.5 + WRTCMP_stdlat2: 38.5 + WRTCMP_nx: 1746 + WRTCMP_ny: 1014 + WRTCMP_lon_lwr_left: -122.17364391 + WRTCMP_lat_lwr_left: 21.88588562 + WRTCMP_dx: 3000.0 + WRTCMP_dy: 3000.0 +# +#----------------------------------------------------------------------- +# +# The RRFS SUBCONUS domain with ~3km cells. +# +#----------------------------------------------------------------------- +# +"RRFS_SUBCONUS_3km": + GRID_GEN_METHOD: "ESGgrid" + ESGgrid_LON_CTR: -97.5 + ESGgrid_LAT_CTR: 35.0 + ESGgrid_DELX: 3000.0 + ESGgrid_DELY: 3000.0 + ESGgrid_NX: 840 + ESGgrid_NY: 600 + ESGgrid_PAZI: 0.0 + ESGgrid_WIDE_HALO_WIDTH: 6 + DT_ATMOS: 40 + LAYOUT_X: 30 + LAYOUT_Y: 24 + BLOCKSIZE: 35 + #if QUILTING = True + WRTCMP_write_groups: 1 + WRTCMP_write_tasks_per_group: 24 + WRTCMP_output_grid: "lambert_conformal" + WRTCMP_cen_lon: -97.5 + WRTCMP_cen_lat: 35.0 + WRTCMP_stdlat1: 35.0 + WRTCMP_stdlat2: 35.0 + WRTCMP_nx: 837 + WRTCMP_ny: 595 + WRTCMP_lon_lwr_left: -109.97410429 + WRTCMP_lat_lwr_left: 26.31459843 + WRTCMP_dx: 3000.0 + WRTCMP_dy: 3000.0 +# +#----------------------------------------------------------------------- +# +# A subconus domain over Indianapolis, Indiana with ~3km cells. This is +# mostly for testing on a 3km grid with a much small number of cells than +# on the full CONUS. +# +#----------------------------------------------------------------------- +# +"SUBCONUS_Ind_3km": + GRID_GEN_METHOD: "ESGgrid" + ESGgrid_LON_CTR: -86.16 + ESGgrid_LAT_CTR: 39.77 + ESGgrid_DELX: 3000.0 + ESGgrid_DELY: 3000.0 + ESGgrid_NX: 200 + ESGgrid_NY: 200 + ESGgrid_PAZI: 0.0 + ESGgrid_WIDE_HALO_WIDTH: 6 + DT_ATMOS: 40 + LAYOUT_X: 5 + LAYOUT_Y: 5 + BLOCKSIZE: 40 + #if QUILTING : True + WRTCMP_write_groups: 1 + WRTCMP_write_tasks_per_group: 5 + WRTCMP_output_grid: "lambert_conformal" + WRTCMP_cen_lon: -86.16 + WRTCMP_cen_lat: 39.77 + WRTCMP_stdlat1: 39.77 + WRTCMP_stdlat2: 39.77 + WRTCMP_nx: 197 + WRTCMP_ny: 197 + WRTCMP_lon_lwr_left: -89.47120417 + WRTCMP_lat_lwr_left: 37.07809642 + WRTCMP_dx: 3000.0 + WRTCMP_dy: 3000.0 +# +#----------------------------------------------------------------------- +# +# The RRFS Alaska domain with ~13km cells. +# +# Note: +# This grid has not been thoroughly tested (as of 20201027). 
+# +#----------------------------------------------------------------------- +# +"RRFS_AK_13km": + GRID_GEN_METHOD: "ESGgrid" + ESGgrid_LON_CTR: -161.5 + ESGgrid_LAT_CTR: 63.0 + ESGgrid_DELX: 13000.0 + ESGgrid_DELY: 13000.0 + ESGgrid_NX: 320 + ESGgrid_NY: 240 + ESGgrid_PAZI: 0.0 + ESGgrid_WIDE_HALO_WIDTH: 6 + DT_ATMOS: 10 + LAYOUT_X: 16 + LAYOUT_Y: 12 + BLOCKSIZE: 40 + #if QUILTING = True + WRTCMP_write_groups: 1 + WRTCMP_write_tasks_per_group: 12 + WRTCMP_output_grid: "lambert_conformal" + WRTCMP_cen_lon: -161.5 + WRTCMP_cen_lat: 63.0 + WRTCMP_stdlat1: 63.0 + WRTCMP_stdlat2: 63.0 + WRTCMP_nx: 318 + WRTCMP_ny: 234 + WRTCMP_lon_lwr_left: 172.23339164 + WRTCMP_lat_lwr_left: 45.77691870 + WRTCMP_dx: 13000.0 + WRTCMP_dy: 13000.0 +# +#----------------------------------------------------------------------- +# +# The RRFS Alaska domain with ~3km cells. +# +# Note: +# This grid has not been thoroughly tested (as of 20201027). +# +#----------------------------------------------------------------------- +# +"RRFS_AK_3km": + GRID_GEN_METHOD: "ESGgrid" + ESGgrid_LON_CTR: -161.5 + ESGgrid_LAT_CTR: 63.0 + ESGgrid_DELX: 3000.0 + ESGgrid_DELY: 3000.0 + ESGgrid_NX: 1380 + ESGgrid_NY: 1020 + ESGgrid_PAZI: 0.0 + ESGgrid_WIDE_HALO_WIDTH: 6 + DT_ATMOS: 10 + LAYOUT_X: 30 + LAYOUT_Y: 17 + BLOCKSIZE: 40 + #if QUILTING = True + WRTCMP_write_groups: 1 + WRTCMP_write_tasks_per_group: 17 + WRTCMP_output_grid: "lambert_conformal" + WRTCMP_cen_lon: -161.5 + WRTCMP_cen_lat: 63.0 + WRTCMP_stdlat1: 63.0 + WRTCMP_stdlat2: 63.0 + WRTCMP_nx: 1379 + WRTCMP_ny: 1003 + WRTCMP_lon_lwr_left: -187.89737923 + WRTCMP_lat_lwr_left: 45.84576053 + WRTCMP_dx: 3000.0 + WRTCMP_dy: 3000.0 +# +#----------------------------------------------------------------------- +# +# The WoFS domain with ~3km cells. +# +# Note: +# The WoFS domain will generate a 301 x 301 output grid (WRITE COMPONENT) and +# will eventually be movable (ESGgrid_LON_CTR/ESGgrid_LAT_CTR). A python script +# python_utils/fv3write_parms_lambert will be useful to determine +# WRTCMP_lon_lwr_left and WRTCMP_lat_lwr_left locations (only for Lambert map +# projection currently) of the quilting output when the domain location is +# moved. Later, it should be integrated into the workflow. +# +#----------------------------------------------------------------------- +# +"WoFS_3km": + GRID_GEN_METHOD: "ESGgrid" + ESGgrid_LON_CTR: -97.5 + ESGgrid_LAT_CTR: 38.5 + ESGgrid_DELX: 3000.0 + ESGgrid_DELY: 3000.0 + ESGgrid_NX: 361 + ESGgrid_NY: 361 + ESGgrid_PAZI: 0.0 + ESGgrid_WIDE_HALO_WIDTH: 6 + DT_ATMOS: 20 + LAYOUT_X: 18 + LAYOUT_Y: 12 + BLOCKSIZE: 30 + # if QUILTING = True + WRTCMP_write_groups: "1" + WRTCMP_write_tasks_per_group: 12 + WRTCMP_output_grid: "lambert_conformal" + WRTCMP_cen_lon: -97.5 + WRTCMP_cen_lat: 38.5 + WRTCMP_stdlat1: 38.5 + WRTCMP_stdlat2: 38.5 + WRTCMP_nx: 301 + WRTCMP_ny: 301 + WRTCMP_lon_lwr_left: -102.3802487 + WRTCMP_lat_lwr_left: 34.3407918 + WRTCMP_dx: 3000.0 + WRTCMP_dy: 3000.0 +# +#----------------------------------------------------------------------- +# +# A CONUS domain of GFDLgrid type with ~25km cells. +# +# Note: +# This grid is larger than the HRRRX domain and thus cannot be initialized +# using the HRRRX. 
+# +#----------------------------------------------------------------------- +# +"CONUS_25km_GFDLgrid": + GRID_GEN_METHOD: "GFDLgrid" + GFDLgrid_LON_T6_CTR: -97.5 + GFDLgrid_LAT_T6_CTR: 38.5 + GFDLgrid_STRETCH_FAC: 1.4 + GFDLgrid_RES: 96 + GFDLgrid_REFINE_RATIO: 3 + num_margin_cells_T6_left: 12 + GFDLgrid_ISTART_OF_RGNL_DOM_ON_T6G: 13 + num_margin_cells_T6_right: 12 + GFDLgrid_IEND_OF_RGNL_DOM_ON_T6G: 84 + num_margin_cells_T6_bottom: 16 + GFDLgrid_JSTART_OF_RGNL_DOM_ON_T6G: 17 + num_margin_cells_T6_top: 16 + GFDLgrid_JEND_OF_RGNL_DOM_ON_T6G: 80 + GFDLgrid_USE_GFDLgrid_RES_IN_FILENAMES: True + DT_ATMOS: 225 + LAYOUT_X: 6 + LAYOUT_Y: 4 + BLOCKSIZE: 36 + #if QUILTING = True + WRTCMP_write_groups: 1 + WRTCMP_write_tasks_per_group: 4 + WRTCMP_output_grid: "rotated_latlon" + WRTCMP_cen_lon: -97.5 + WRTCMP_cen_lat: 38.5 + WRTCMP_lon_lwr_left: -24.40085141 + WRTCMP_lat_lwr_left: -19.65624142 + WRTCMP_lon_upr_rght: 24.40085141 + WRTCMP_lat_upr_rght: 19.65624142 + WRTCMP_dlon: 0.22593381 + WRTCMP_dlat: 0.22593381 +# +#----------------------------------------------------------------------- +# +# A CONUS domain of GFDLgrid type with ~3km cells. +# +# Note: +# This grid is larger than the HRRRX domain and thus cannot be initialized +# using the HRRRX. +# +#----------------------------------------------------------------------- +# +"CONUS_3km_GFDLgrid": + GRID_GEN_METHOD: "GFDLgrid" + GFDLgrid_LON_T6_CTR: -97.5 + GFDLgrid_LAT_T6_CTR: 38.5 + GFDLgrid_STRETCH_FAC: 1.5 + GFDLgrid_RES: 768 + GFDLgrid_REFINE_RATIO: 3 + num_margin_cells_T6_left: 69 + GFDLgrid_ISTART_OF_RGNL_DOM_ON_T6G: 70 + num_margin_cells_T6_right: 69 + GFDLgrid_IEND_OF_RGNL_DOM_ON_T6G: 699 + num_margin_cells_T6_bottom: 164 + GFDLgrid_JSTART_OF_RGNL_DOM_ON_T6G: 165 + num_margin_cells_T6_top: 164 + GFDLgrid_JEND_OF_RGNL_DOM_ON_T6G: 604 + GFDLgrid_USE_GFDLgrid_RES_IN_FILENAMES: True + DT_ATMOS: 18 + LAYOUT_X: 30 + LAYOUT_Y: 22 + BLOCKSIZE: 35 + #if QUILTING = True + WRTCMP_write_groups: 1 + WRTCMP_write_tasks_per_group: 22 + WRTCMP_output_grid: "rotated_latlon" + WRTCMP_cen_lon: -97.5 + WRTCMP_cen_lat: 38.5 + WRTCMP_lon_lwr_left: -25.23144805 + WRTCMP_lat_lwr_left: -15.82130419 + WRTCMP_lon_upr_rght: 25.23144805 + WRTCMP_lat_upr_rght: 15.82130419 + WRTCMP_dlon: 0.02665763 + WRTCMP_dlat: 0.02665763 +# +#----------------------------------------------------------------------- +# +# EMC's Alaska grid. +# +#----------------------------------------------------------------------- +# +"EMC_AK": + GRID_GEN_METHOD: "ESGgrid" + ESGgrid_LON_CTR: -153.0 + ESGgrid_LAT_CTR: 61.0 + ESGgrid_DELX: 3000.0 + ESGgrid_DELY: 3000.0 + ESGgrid_NX: 1344 # Supergrid value 2704 + ESGgrid_NY: 1152 # Supergrid value 2320 + ESGgrid_PAZI: 0.0 + ESGgrid_WIDE_HALO_WIDTH: 6 + DT_ATMOS: 18 + LAYOUT_X: 28 + LAYOUT_Y: 16 + BLOCKSIZE: 24 + #if QUILTING = True + WRTCMP_write_groups: 1 + WRTCMP_write_tasks_per_group: 24 + WRTCMP_output_grid: "lambert_conformal" + WRTCMP_cen_lon: -153.0 + WRTCMP_cen_lat: 61.0 + WRTCMP_stdlat1: 61.0 + WRTCMP_stdlat2: 61.0 + WRTCMP_nx: 1344 + WRTCMP_ny: 1152 + WRTCMP_lon_lwr_left: -177.0 + WRTCMP_lat_lwr_left: 42.5 + WRTCMP_dx: 3000.0 + WRTCMP_dy: 3000.0 +# +#----------------------------------------------------------------------- +# +# EMC's Hawaii grid. 
+# +#----------------------------------------------------------------------- +# +"EMC_HI": + GRID_GEN_METHOD: "ESGgrid" + ESGgrid_LON_CTR: -157.0 + ESGgrid_LAT_CTR: 20.0 + ESGgrid_DELX: 3000.0 + ESGgrid_DELY: 3000.0 + ESGgrid_NX: 432 # Supergrid value 880 + ESGgrid_NY: 360 # Supergrid value 736 + ESGgrid_PAZI: 0.0 + ESGgrid_WIDE_HALO_WIDTH: 6 + DT_ATMOS: 18 + LAYOUT_X: 8 + LAYOUT_Y: 8 + BLOCKSIZE: 27 + #if QUILTING = True + WRTCMP_write_groups: 1 + WRTCMP_write_tasks_per_group: 8 + WRTCMP_output_grid: "lambert_conformal" + WRTCMP_cen_lon: -157.0 + WRTCMP_cen_lat: 20.0 + WRTCMP_stdlat1: 20.0 + WRTCMP_stdlat2: 20.0 + WRTCMP_nx: 420 + WRTCMP_ny: 348 + WRTCMP_lon_lwr_left: -162.8 + WRTCMP_lat_lwr_left: 15.2 + WRTCMP_dx: 3000.0 + WRTCMP_dy: 3000.0 +# +#----------------------------------------------------------------------- +# +# EMC's Puerto Rico grid. +# +#----------------------------------------------------------------------- +# +"EMC_PR": + GRID_GEN_METHOD: "ESGgrid" + ESGgrid_LON_CTR: -69.0 + ESGgrid_LAT_CTR: 18.0 + ESGgrid_DELX: 3000.0 + ESGgrid_DELY: 3000.0 + ESGgrid_NX: 576 # Supergrid value 1168 + ESGgrid_NY: 432 # Supergrid value 880 + ESGgrid_PAZI: 0.0 + ESGgrid_WIDE_HALO_WIDTH: 6 + DT_ATMOS: 18 + LAYOUT_X: 16 + LAYOUT_Y: 8 + BLOCKSIZE: 24 + #if QUILTING = True + WRTCMP_write_groups: 1 + WRTCMP_write_tasks_per_group: 24 + WRTCMP_output_grid: "lambert_conformal" + WRTCMP_cen_lon: -69.0 + WRTCMP_cen_lat: 18.0 + WRTCMP_stdlat1: 18.0 + WRTCMP_stdlat2: 18.0 + WRTCMP_nx: 576 + WRTCMP_ny: 432 + WRTCMP_lon_lwr_left: -77 + WRTCMP_lat_lwr_left: 12 + WRTCMP_dx: 3000.0 + WRTCMP_dy: 3000.0 +# +#----------------------------------------------------------------------- +# +# EMC's Guam grid. +# +#----------------------------------------------------------------------- +# +"EMC_GU": + GRID_GEN_METHOD: "ESGgrid" + ESGgrid_LON_CTR: 146.0 + ESGgrid_LAT_CTR: 15.0 + ESGgrid_DELX: 3000.0 + ESGgrid_DELY: 3000.0 + ESGgrid_NX: 432 # Supergrid value 880 + ESGgrid_NY: 360 # Supergrid value 736 + ESGgrid_PAZI: 0.0 + ESGgrid_WIDE_HALO_WIDTH: 6 + DT_ATMOS: 18 + LAYOUT_X: 16 + LAYOUT_Y: 12 + BLOCKSIZE: 27 + #if QUILTING = True + WRTCMP_write_groups: 1 + WRTCMP_write_tasks_per_group: 24 + WRTCMP_output_grid: "lambert_conformal" + WRTCMP_cen_lon: 146.0 + WRTCMP_cen_lat: 15.0 + WRTCMP_stdlat1: 15.0 + WRTCMP_stdlat2: 15.0 + WRTCMP_nx: 420 + WRTCMP_ny: 348 + WRTCMP_lon_lwr_left: 140 + WRTCMP_lat_lwr_left: 10 + WRTCMP_dx: 3000.0 + WRTCMP_dy: 3000.0 +# +#----------------------------------------------------------------------- +# +# Emulation of the HAFS v0.A grid at 25 km. +# +#----------------------------------------------------------------------- +# +"GSL_HAFSV0.A_25km": + GRID_GEN_METHOD: "ESGgrid" + ESGgrid_LON_CTR: -62.0 + ESGgrid_LAT_CTR: 22.0 + ESGgrid_DELX: 25000.0 + ESGgrid_DELY: 25000.0 + ESGgrid_NX: 345 + ESGgrid_NY: 230 + ESGgrid_PAZI: 0.0 + ESGgrid_WIDE_HALO_WIDTH: 6 + DT_ATMOS: 300 + LAYOUT_X: 5 + LAYOUT_Y: 5 + BLOCKSIZE: 6 + #if QUILTING = True + WRTCMP_write_groups: 1 + WRTCMP_write_tasks_per_group: 32 + WRTCMP_output_grid: "regional_latlon" + WRTCMP_cen_lon: -62.0 + WRTCMP_cen_lat: 25.0 + WRTCMP_lon_lwr_left: -114.5 + WRTCMP_lat_lwr_left: -5.0 + WRTCMP_lon_upr_rght: -9.5 + WRTCMP_lat_upr_rght: 55.0 + WRTCMP_dlon: 0.25 + WRTCMP_dlat: 0.25 +# +#----------------------------------------------------------------------- +# +# Emulation of the HAFS v0.A grid at 13 km. 
+# +#----------------------------------------------------------------------- +# +"GSL_HAFSV0.A_13km": + GRID_GEN_METHOD: "ESGgrid" + ESGgrid_LON_CTR: -62.0 + ESGgrid_LAT_CTR: 22.0 + ESGgrid_DELX: 13000.0 + ESGgrid_DELY: 13000.0 + ESGgrid_NX: 665 + ESGgrid_NY: 444 + ESGgrid_PAZI: 0.0 + ESGgrid_WIDE_HALO_WIDTH: 6 + DT_ATMOS: 180 + LAYOUT_X: 19 + LAYOUT_Y: 12 + BLOCKSIZE: 35 + #if QUILTING = True + WRTCMP_write_groups: 1 + WRTCMP_write_tasks_per_group: 32 + WRTCMP_output_grid: "regional_latlon" + WRTCMP_cen_lon: -62.0 + WRTCMP_cen_lat: 25.0 + WRTCMP_lon_lwr_left: -114.5 + WRTCMP_lat_lwr_left: -5.0 + WRTCMP_lon_upr_rght: -9.5 + WRTCMP_lat_upr_rght: 55.0 + WRTCMP_dlon: 0.13 + WRTCMP_dlat: 0.13 +# +#----------------------------------------------------------------------- +# +# Emulation of the HAFS v0.A grid at 3 km. +# +#----------------------------------------------------------------------- +# +"GSL_HAFSV0.A_3km": + GRID_GEN_METHOD: "ESGgrid" + ESGgrid_LON_CTR: -62.0 + ESGgrid_LAT_CTR: 22.0 + ESGgrid_DELX: 3000.0 + ESGgrid_DELY: 3000.0 + ESGgrid_NX: 2880 + ESGgrid_NY: 1920 + ESGgrid_PAZI: 0.0 + ESGgrid_WIDE_HALO_WIDTH: 6 + DT_ATMOS: 40 + LAYOUT_X: 32 + LAYOUT_Y: 24 + BLOCKSIZE: 32 + #if QUILTING = True + WRTCMP_write_groups: 1 + WRTCMP_write_tasks_per_group: 32 + WRTCMP_output_grid: "regional_latlon" + WRTCMP_cen_lon: -62.0 + WRTCMP_cen_lat: 25.0 + WRTCMP_lon_lwr_left: -114.5 + WRTCMP_lat_lwr_left: -5.0 + WRTCMP_lon_upr_rght: -9.5 + WRTCMP_lat_upr_rght: 55.0 + WRTCMP_dlon: 0.03 + WRTCMP_dlat: 0.03 +# +#----------------------------------------------------------------------- +# +# 50-km HRRR Alaska grid. +# +#----------------------------------------------------------------------- +# +"GSD_HRRR_AK_50km": + GRID_GEN_METHOD: "ESGgrid" + ESGgrid_LON_CTR: -163.5 + ESGgrid_LAT_CTR: 62.8 + ESGgrid_DELX: 50000.0 + ESGgrid_DELY: 50000.0 + ESGgrid_NX: 74 + ESGgrid_NY: 51 + ESGgrid_PAZI: 0.0 + ESGgrid_WIDE_HALO_WIDTH: 6 + DT_ATMOS: 600 + LAYOUT_X: 2 + LAYOUT_Y: 3 + BLOCKSIZE: 37 + #if QUILTING = True + WRTCMP_write_groups: 1 + WRTCMP_write_tasks_per_group: 1 + WRTCMP_output_grid: "lambert_conformal" + WRTCMP_cen_lon: -163.5 + WRTCMP_cen_lat: 62.8 + WRTCMP_stdlat1: 62.8 + WRTCMP_stdlat2: 62.8 + WRTCMP_nx: 70 + WRTCMP_ny: 45 + WRTCMP_lon_lwr_left: 172.0 + WRTCMP_lat_lwr_left: 49.0 + WRTCMP_dx: 50000.0 + WRTCMP_dy: 50000.0 +# +#----------------------------------------------------------------------- +# +# Emulation of GSD's RAP domain with ~13km cell size. +# +#----------------------------------------------------------------------- +# +"RRFS_NA_13km": + GRID_GEN_METHOD: "ESGgrid" + ESGgrid_LON_CTR: -112.5 + ESGgrid_LAT_CTR: 55.0 + ESGgrid_DELX: 13000.0 + ESGgrid_DELY: 13000.0 + ESGgrid_NX: 912 + ESGgrid_NY: 623 + ESGgrid_PAZI: 0.0 + ESGgrid_WIDE_HALO_WIDTH: 6 + DT_ATMOS: 50 + LAYOUT_X: 16 + LAYOUT_Y: 16 + BLOCKSIZE: 30 + #if QUILTING = True + WRTCMP_write_groups: 1 + WRTCMP_write_tasks_per_group: 16 + WRTCMP_output_grid: "rotated_latlon" + WRTCMP_cen_lon: -113.0 + WRTCMP_cen_lat: 55.0 + WRTCMP_lon_lwr_left: -61.0 + WRTCMP_lat_lwr_left: -37.0 + WRTCMP_lon_upr_rght: 61.0 + WRTCMP_lat_upr_rght: 37.0 + WRTCMP_dlon: 0.1169081 + WRTCMP_dlat: 0.1169081 +# +#----------------------------------------------------------------------- +# +# Future operational RRFS domain with ~3km cell size. 
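The predefined-grid entries in this file are plain YAML, so they can be inspected with the same loaders this PR adds to `python_utils`. Below is a minimal sketch (not from the patch itself) that pulls the `RRFS_NA_13km` entry defined just above and reports its approximate extent; the file name `predef_grid_params.yaml` is an assumption, and the arithmetic simply multiplies cell counts by cell sizes.

```
# Illustrative sketch (not from the patch): read one predefined grid entry and
# report its approximate extent.  The YAML file name is an assumption.
import yaml

with open("predef_grid_params.yaml", "r") as f:
    grids = yaml.safe_load(f)

grid = grids["RRFS_NA_13km"]
nx, ny = grid["ESGgrid_NX"], grid["ESGgrid_NY"]
dx, dy = grid["ESGgrid_DELX"], grid["ESGgrid_DELY"]

# Cell count times cell size, converted to km: 912 x 13 km by 623 x 13 km.
print(f"RRFS_NA_13km extent: {nx * dx / 1000:.0f} km x {ny * dy / 1000:.0f} km")
```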
+# +#----------------------------------------------------------------------- +# +"RRFS_NA_3km": + GRID_GEN_METHOD: "ESGgrid" + ESGgrid_LON_CTR: -112.5 + ESGgrid_LAT_CTR: 55.0 + ESGgrid_DELX: 3000.0 + ESGgrid_DELY: 3000.0 + ESGgrid_NX: 3950 + ESGgrid_NY: 2700 + ESGgrid_PAZI: 0.0 + ESGgrid_WIDE_HALO_WIDTH: 6 + DT_ATMOS: 36 + LAYOUT_X: 20 # 40 - EMC operational configuration + LAYOUT_Y: 35 # 45 - EMC operational configuration + BLOCKSIZE: 28 + #if QUILTING = True + WRTCMP_write_groups: 1 + WRTCMP_write_tasks_per_group: 144 + WRTCMP_output_grid: "rotated_latlon" + WRTCMP_cen_lon: -113.0 + WRTCMP_cen_lat: 55.0 + WRTCMP_lon_lwr_left: -61.0 + WRTCMP_lat_lwr_left: -37.0 + WRTCMP_lon_upr_rght: 61.0 + WRTCMP_lat_upr_rght: 37.0 + WRTCMP_dlon: 0.025 + WRTCMP_dlat: 0.025 + diff --git a/ush/python_utils/__init__.py b/ush/python_utils/__init__.py index 1e4f6feed..9371488d5 100644 --- a/ush/python_utils/__init__.py +++ b/ush/python_utils/__init__.py @@ -1,4 +1,4 @@ -from .change_case import uppercase, lowercase +from .misc import uppercase, lowercase, find_pattern_in_str, find_pattern_in_file from .check_for_preexist_dir_file import check_for_preexist_dir_file from .check_var_valid_value import check_var_valid_value from .count_files import count_files @@ -9,12 +9,14 @@ from .filesys_cmds_vrfy import cmd_vrfy, cp_vrfy, mv_vrfy, rm_vrfy, ln_vrfy, mkdir_vrfy, cd_vrfy from .get_charvar_from_netcdf import get_charvar_from_netcdf from .get_elem_inds import get_elem_inds -from .get_manage_externals_config_property import get_manage_externals_config_property from .interpol_to_arbit_CRES import interpol_to_arbit_CRES from .print_input_args import print_input_args from .print_msg import print_info_msg, print_err_msg_exit from .process_args import process_args from .run_command import run_command -from .config_parser import cfg_to_shell_str, cfg_to_yaml_str, yaml_safe_load, \ - load_shell_config, load_config_file - +from .config_parser import load_yaml_config, cfg_to_yaml_str, \ + load_json_config, cfg_to_json_str, \ + load_ini_config, cfg_to_ini_str, get_ini_value, \ + load_shell_config, cfg_to_shell_str, \ + load_config_file +from .xml_parser import load_xml_file, has_tag_with_value diff --git a/ush/python_utils/change_case.py b/ush/python_utils/change_case.py deleted file mode 100644 index 4fb46f94e..000000000 --- a/ush/python_utils/change_case.py +++ /dev/null @@ -1,25 +0,0 @@ -#!/usr/bin/env python3 - -def uppercase(str): - """ Function to convert a given string to uppercase - - Args: - str: the string - Return: - Uppercased str - """ - - return str.upper() - - -def lowercase(str): - """ Function to convert a given string to lowercase - - Args: - str: the string - Return: - Lowercase str - """ - - return str.lower() - diff --git a/ush/python_utils/check_for_preexist_dir_file.py b/ush/python_utils/check_for_preexist_dir_file.py index 743f80fcc..97c709cf0 100644 --- a/ush/python_utils/check_for_preexist_dir_file.py +++ b/ush/python_utils/check_for_preexist_dir_file.py @@ -24,7 +24,7 @@ def check_for_preexist_dir_file(path, method): rm_vrfy(' -rf ', path) elif method == 'rename': now = datetime.now() - d = now.strftime("_%Y%m%d_%H%M%S") + d = now.strftime("_old_%Y%m%d_%H%M%S") new_path = path + d print_info_msg(f''' Specified directory or file already exists: diff --git a/ush/python_utils/config_parser.py b/ush/python_utils/config_parser.py index f7d5a86ee..da7131df2 100644 --- a/ush/python_utils/config_parser.py +++ b/ush/python_utils/config_parser.py @@ -1,15 +1,78 @@ #!/usr/bin/env python3 + +""" +This file 
provides utilities for processing different configuration file formats. +Supported formats include: + a) YAML + b) JSON + c) SHELL + d) INI + +Typical usage involves first loading the config file, then using the dictionary +returnded by load_config to make queries. +""" + import argparse import yaml +import json import sys import os from textwrap import dedent +import configparser from .environment import list_to_str, str_to_list from .print_msg import print_err_msg_exit from .run_command import run_command +########## +# YAML +########## +def load_yaml_config(config_file): + """ Safe load a yaml file """ + + try: + with open(config_file,'r') as f: + cfg = yaml.safe_load(f) + except yaml.YAMLError as e: + print_err_msg_exit(e) + + return cfg + +def cfg_to_yaml_str(cfg): + """ Get contents of config file as a yaml string """ + + return yaml.dump(cfg, sort_keys=False, default_flow_style=False) + +def join_str(loader, node): + """ Custom tag hangler to join strings """ + seq = loader.construct_sequence(node) + return ''.join([str(i) for i in seq]) + +yaml.add_constructor('!join_str', join_str, Loader=yaml.SafeLoader) + +########## +# JSON +########## +def load_json_config(config_file): + """ Load json config file """ + + try: + with open(config_file,'r') as f: + cfg = json.load(f) + except: + print_err_msg_exit(e) + + return cfg + +def cfg_to_json_str(cfg): + """ Get contents of config file as a json string """ + + return json.dumps(cfg, sort_keys=False, indent=4) + +########## +# SHELL +########## def load_shell_config(config_file): """ Loads old style shell config files. We source the config script in a subshell and gets the variables it sets @@ -24,10 +87,11 @@ def load_shell_config(config_file): # do a diff to get variables specifically defined/updated in the script # Method sounds brittle but seems to work ok so far code = dedent(f''' #!/bin/bash - (set -o posix; set) > /tmp/t1 + (set -o posix; set) > ./_t1 {{ . 
{config_file}; set +x; }} &>/dev/null - (set -o posix; set) > /tmp/t2 - diff /tmp/t1 /tmp/t2 | grep "> " | cut -c 3- + (set -o posix; set) > ./_t2 + diff ./_t1 ./_t2 | grep "> " | cut -c 3- + rm -rf ./_t1 ./_t2 ''') (_,config_str,_) = run_command(code) lines = config_str.splitlines() @@ -41,16 +105,16 @@ def load_shell_config(config_file): cfg[k] = v return cfg -def cfg_to_yaml_str(cfg): - """ Get contents of config file as a yaml string """ - - return yaml.dump(cfg, sort_keys=False, default_flow_style=False) - def cfg_to_shell_str(cfg): - """ Get contents of yaml file as shell script string""" + """ Get contents of config file as shell script string""" shell_str = '' for k,v in cfg.items(): + if isinstance(v,dict): + shell_str += f"# [{k}]\n" + shell_str += cfg_to_shell_str(v) + shell_str += "\n" + continue v1 = list_to_str(v) if isinstance(v,list): shell_str += f'{k}={v1}\n' @@ -58,46 +122,85 @@ def cfg_to_shell_str(cfg): shell_str += f"{k}='{v1}'\n" return shell_str -def yaml_safe_load(file_name): - """ Safe load a yaml file """ +########## +# INI +########## +def load_ini_config(config_file): + """ Load a config file with a format similar to Microsoft's INI files""" - try: - with open(file_name,'r') as f: - cfg = yaml.safe_load(f) - except yaml.YAMLError as e: - print_err_msg_exit(e) + if not os.path.exists(config_file): + print_err_msg_exit(f''' + The specified configuration file does not exist: + \"{file_name}\"''') + + config = configparser.ConfigParser() + config.read(config_file) + config_dict = {s:dict(config.items(s)) for s in config.sections()} + return config_dict + +def get_ini_value(config, section, key): + """ Finds the value of a property in a given section""" + + if not section in config: + print_err_msg_exit(f''' + Section not found: + section = \"{section}\" + valid sections = \"{config.keys()}\"''') + else: + return config[section][key] - return cfg + return None -def join(loader, node): - """ Custom tag hangler to join strings """ - seq = loader.construct_sequence(node) - return ''.join([str(i) for i in seq]) +def cfg_to_ini_str(cfg): + """ Get contents of config file as ini string""" -yaml.add_constructor('!join', join, Loader=yaml.SafeLoader) + ini_str = '' + for k,v in cfg.items(): + if isinstance(v,dict): + ini_str += f"[{k}]\n" + ini_str += cfg_to_ini_str(v) + ini_str += "\n" + continue + v1 = list_to_str(v) + if isinstance(v,list): + ini_str += f'{k}={v1}\n' + else: + ini_str += f"{k}='{v1}'\n" + return ini_str +################## +# CONFIG loader +################## def load_config_file(file_name): - """ Choose yaml/shell file based on extension """ + """ Load config file based on file name extension """ + ext = os.path.splitext(file_name)[1][1:] if ext == "sh": return load_shell_config(file_name) + elif ext == "cfg": + return load_ini_config(file_name) + elif ext == "json": + return load_json_config(file_name) else: - return yaml_safe_load(file_name) - + return load_yaml_config(file_name) if __name__ == "__main__": parser = argparse.ArgumentParser(description=\ - 'Prints contents of yaml file as bash argument-value pairs.') + 'Prints contents of config file.') parser.add_argument('--cfg','-c',dest='cfg',required=True, - help='yaml or regular shell config file to parse') + help='config file to parse') parser.add_argument('--output-type','-o',dest='out_type',required=False, - help='output format: "shell": shell format, any other: yaml format ') + help='output format: can be any of ["shell", "yaml", "ini", "json"]') args = parser.parse_args() cfg = 
load_config_file(args.cfg) if args.out_type == 'shell': print( cfg_to_shell_str(cfg) ) + elif args.out_type == 'ini': + print( cfg_to_ini_str(cfg) ) + elif args.out_type == 'json': + print( cfg_to_json_str(cfg) ) else: print( cfg_to_yaml_str(cfg) ) diff --git a/ush/python_utils/environment.py b/ush/python_utils/environment.py index e98deaad2..25f03b8fd 100644 --- a/ush/python_utils/environment.py +++ b/ush/python_utils/environment.py @@ -126,7 +126,7 @@ def str_to_list(v): v = v.strip() if not v: return None - if v[0] == '(': + if v[0] == '(' and v[-1] == ')': v = v[1:-1] tokens = shlex.split(v) lst = [] diff --git a/ush/python_utils/fv3write_parms_lambert.py b/ush/python_utils/fv3write_parms_lambert.py new file mode 100755 index 000000000..18b4a8f35 --- /dev/null +++ b/ush/python_utils/fv3write_parms_lambert.py @@ -0,0 +1,91 @@ +#!/usr/bin/env python +# +# To use this tool, you should source the regional workflow environment +# $> source env/wflow_xxx.env +# and activate pygraf (or any one with cartopy installation) +# $> conda activate pygraf +# + +import argparse + +import cartopy.crs as ccrs + +#@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ +# +# Main function to return parameters for the FV3 write component. +# + +if __name__ == "__main__": + + parser = argparse.ArgumentParser(description='Determine FV3 write component lat1/lon1 for Lamert Conformal map projection', + epilog=''' ---- Yunheng Wang (2021-07-15). + ''') + #formatter_class=CustomFormatter) + parser.add_argument('-v','--verbose', help='Verbose output', action="store_true") + parser.add_argument('-ca','--ctrlat', help='Lambert Conformal central latitude', type=float, default=38.5 ) + parser.add_argument('-co','--ctrlon', help='Lambert Conformal central longitude', type=float, default=-97.5 ) + parser.add_argument('-s1','--stdlat1',help='Lambert Conformal standard latitude1', type=float, default=38.5 ) + parser.add_argument('-s2','--stdlat2',help='Lambert Conformal standard latitude2', type=float, default=38.5 ) + parser.add_argument('-nx', help='number of grid in X direction', type=int, default=301 ) + parser.add_argument('-ny' ,help='number of grid in Y direction', type=int, default=301 ) + parser.add_argument('-dx' ,help='grid resolution in X direction (meter)',type=float, default=3000.0) + parser.add_argument('-dy' ,help='grid resolution in Y direction (meter)',type=float, default=3000.0) + + args = parser.parse_args() + + if args.verbose: + print("Write component Lambert Conformal Parameters:") + print(f" cen_lat = {args.ctrlat}, cen_lon = {args.ctrlon}, stdlat1 = {args.stdlat1}, stdlat2 = {args.stdlat2}") + print(f" nx = {args.nx}, ny = {args.ny}, dx = {args.dx}, dy = {args.dy}") + + #----------------------------------------------------------------------- + # + # Lambert grid + # + #----------------------------------------------------------------------- + + nx1 = args.nx + ny1 = args.ny + dx1 = args.dx + dy1 = args.dy + + ctrlat = args.ctrlat + ctrlon = args.ctrlon + + xctr = (nx1-1)/2*dx1 + yctr = (ny1-1)/2*dy1 + + carr= ccrs.PlateCarree() + + proj1=ccrs.LambertConformal(central_longitude=ctrlon, central_latitude=ctrlat, + false_easting=xctr, false_northing= yctr, secant_latitudes=None, + standard_parallels=(args.stdlat1, args.stdlat2), globe=None) + + lonlat1 = carr.transform_point(0.0,0.0,proj1) + + if args.verbose: + print() + print(f' lat1 = {lonlat1[1]}, lon1 = {lonlat1[0]}') + print('\n') + + #----------------------------------------------------------------------- + # + # Output write 
component parameters + # + #----------------------------------------------------------------------- + + print() + print("output_grid: 'lambert_conformal'") + print(f"cen_lat: {args.ctrlat}") + print(f"cen_lon: {args.ctrlon}") + print(f"stdlat1: {args.stdlat1}") + print(f"stdlat2: {args.stdlat2}") + print(f"nx: {args.nx}") + print(f"ny: {args.ny}") + print(f"dx: {args.dx}") + print(f"dy: {args.dy}") + print(f"lat1: {lonlat1[1]}") + print(f"lon1: {lonlat1[0]}") + print() + + # End of program diff --git a/ush/python_utils/get_elem_inds.py b/ush/python_utils/get_elem_inds.py index fd4ea2cee..08cbfbff7 100644 --- a/ush/python_utils/get_elem_inds.py +++ b/ush/python_utils/get_elem_inds.py @@ -1,6 +1,6 @@ #!/usr/bin/env python3 -from .change_case import lowercase +from .misc import lowercase from .check_var_valid_value import check_var_valid_value def get_elem_inds(arr, match, ret_type): diff --git a/ush/python_utils/get_manage_externals_config_property.py b/ush/python_utils/get_manage_externals_config_property.py deleted file mode 100644 index c537a8824..000000000 --- a/ush/python_utils/get_manage_externals_config_property.py +++ /dev/null @@ -1,54 +0,0 @@ -#!/usr/bin/env python3 - -import os -import configparser - -from .print_msg import print_err_msg_exit - -def get_manage_externals_config_property(externals_cfg_fp, external_name, property_name): - """ - This function searches a specified manage_externals configuration file - and extracts from it the value of the specified property of the external - with the specified name (e.g. the relative path in which the external - has been/will be cloned by the manage_externals utility). - - Args: - - externals_cfg_fp: - The absolute or relative path to the manage_externals configuration - file that will be searched. - - external_name: - The name of the external to search for in the manage_externals confi- - guration file specified by externals_cfg_fp. - - property_name: - The name of the property whose value to obtain (for the external spe- - cified by external_name). 
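The `get_manage_externals_config_property` helper being removed here is superseded by the generic INI utilities this PR adds to `config_parser.py` (`load_ini_config` and `get_ini_value`). A minimal sketch of the equivalent lookup follows; it assumes the script is run from `ush/` (or with `ush/` on `PYTHONPATH`), and the file path mirrors the unit test added elsewhere in this patch.

```
# Illustrative sketch (not from the patch): the lookup formerly done by
# get_manage_externals_config_property, expressed with the new INI helpers.
from python_utils import load_ini_config, get_ini_value

cfg = load_ini_config("python_utils/test_data/Externals.cfg")
print(get_ini_value(cfg, "regional_workflow", "repo_url"))
```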
- - Returns: - The property value - """ - - if not os.path.exists(externals_cfg_fp): - print_err_msg_exit(f''' - The specified manage_externals configuration file (externals_cfg_fp) - does not exist: - externals_cfg_fp = \"{externals_cfg_fp}\"''') - - config = configparser.ConfigParser() - config.read(externals_cfg_fp) - - if not external_name in config.sections(): - print_err_msg_exit(f''' - In the specified manage_externals configuration file (externals_cfg_fp), - the specified property (property_name) was not found for the the speci- - fied external (external_name): - externals_cfg_fp = \"{externals_cfg_fp}\" - external_name = \"{external_name}\" - property_name = \"{property_name}\"''') - else: - return config[external_name][property_name] - - return None - diff --git a/ush/python_utils/misc.py b/ush/python_utils/misc.py new file mode 100644 index 000000000..1934ac3d6 --- /dev/null +++ b/ush/python_utils/misc.py @@ -0,0 +1,57 @@ +#!/usr/bin/env python3 + +import re + +def uppercase(str): + """ Function to convert a given string to uppercase + + Args: + str: the string + Return: + Uppercased str + """ + + return str.upper() + + +def lowercase(str): + """ Function to convert a given string to lowercase + + Args: + str: the string + Return: + Lowercase str + """ + + return str.lower() + +def find_pattern_in_str(pattern, source): + """ Find regex pattern in a string + + Args: + pattern: regex expression + source: string + Return: + A tuple of matched groups or None + """ + pattern = re.compile(pattern) + for match in re.finditer(pattern,source): + return match.groups() + return None + +def find_pattern_in_file(pattern, file_name): + """ Find regex pattern in a file + + Args: + pattern: regex expression + file_name: name of text file + Return: + A tuple of matched groups or None + """ + pattern = re.compile(pattern) + with open(file_name) as f: + for line in f: + for match in re.finditer(pattern,line): + return match.groups() + return None + diff --git a/ush/python_utils/print_input_args.py b/ush/python_utils/print_input_args.py index 4f168a612..25d979eae 100644 --- a/ush/python_utils/print_input_args.py +++ b/ush/python_utils/print_input_args.py @@ -4,7 +4,7 @@ import inspect from textwrap import dedent -from .change_case import lowercase +from .misc import lowercase from .print_msg import print_info_msg from .environment import import_vars diff --git a/ush/python_utils/test_python_utils.py b/ush/python_utils/test_python_utils.py index 568de918e..52defb1c6 100644 --- a/ush/python_utils/test_python_utils.py +++ b/ush/python_utils/test_python_utils.py @@ -18,9 +18,23 @@ from python_utils import * class Testing(unittest.TestCase): - def test_change_case(self): + def test_misc(self): self.assertEqual( uppercase('upper'), 'UPPER' ) self.assertEqual( lowercase('LOWER'), 'lower' ) + # regex in file + pattern = f'^[ ]*(lsm_ruc)<\/scheme>[ ]*$' + FILE=f"{self.PATH}/../test_data/suite_FV3_GSD_SAR.xml" + match = find_pattern_in_file(pattern, FILE) + self.assertEqual( ("lsm_ruc",), match) + # regex in string + with open(FILE) as f: + content = f.read() + find_pattern_in_str(pattern, content) + self.assertEqual( ("lsm_ruc",), match) + def test_xml_parser(self): + FILE=f"{self.PATH}/../test_data/suite_FV3_GSD_SAR.xml" + tree = load_xml_file(FILE) + self.assertTrue(has_tag_with_value(tree,"scheme","lsm_ruc")) def test_check_for_preexist_dir_file(self): cmd_vrfy('mkdir -p test_data/dir') self.assertTrue( os.path.exists('test_data/dir') ) @@ -38,8 +52,8 @@ def test_filesys_cmds(self): 
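The new `misc.py` regex helpers and the `xml_parser` module are exercised by the unit tests above. The sketch below condenses that usage; it assumes it is run from `ush/`, that the CCPP suite file contains a `<scheme>lsm_ruc</scheme>` entry as the tests expect, and that the regex is a faithful reconstruction of the test's pattern.

```
# Illustrative sketch (not from the patch): find the lsm_ruc <scheme> entry in
# a CCPP suite definition file, first with the regex helper, then via XML.
from python_utils import find_pattern_in_file, load_xml_file, has_tag_with_value

suite_xml = "test_data/suite_FV3_GSD_SAR.xml"

pattern = r"^[ ]*<scheme>(lsm_ruc)</scheme>[ ]*$"
print(find_pattern_in_file(pattern, suite_xml))        # expected: ('lsm_ruc',)

tree = load_xml_file(suite_xml)
print(has_tag_with_value(tree, "scheme", "lsm_ruc"))   # expected: True
```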
dPATH=f'{self.PATH}/test_data/dir' mkdir_vrfy(dPATH) self.assertTrue( os.path.exists(dPATH) ) - cp_vrfy(f'{self.PATH}/change_case.py', f'{dPATH}/change_cases.py') - self.assertTrue( os.path.exists(f'{dPATH}/change_cases.py') ) + cp_vrfy(f'{self.PATH}/misc.py', f'{dPATH}/miscs.py') + self.assertTrue( os.path.exists(f'{dPATH}/miscs.py') ) cmd_vrfy(f'rm -rf {dPATH}') self.assertFalse( os.path.exists('tt.py') ) def test_get_charvar_from_netcdf(self): @@ -53,13 +67,6 @@ def test_get_elem_inds(self): self.assertEqual( get_elem_inds(arr, 'egg', 'first' ) , 0 ) self.assertEqual( get_elem_inds(arr, 'egg', 'last' ) , 4 ) self.assertEqual( get_elem_inds(arr, 'egg', 'all' ) , [0, 2, 4] ) - def test_get_manage_externals_config_property(self): - self.assertIn( \ - 'regional_workflow', - get_manage_externals_config_property( \ - f'{self.PATH}/test_data/Externals.cfg', - 'regional_workflow', - 'repo_url')) def test_interpol_to_arbit_CRES(self): RES = 800 RES_array = [ 5, 25, 40, 60, 80, 100, 400, 700, 1000, 1500, 2800, 3000 ] @@ -95,7 +102,7 @@ def test_import_vars(self): set_env_var("MYVAR","MYVAL") env_vars = ["PWD", "MYVAR"] import_vars(env_vars=env_vars) - self.assertEqual( PWD, os.getcwd() ) + self.assertEqual( os.path.realpath(PWD), os.path.realpath(os.getcwd()) ) self.assertEqual(MYVAR,"MYVAL") #test export MYVAR="MYNEWVAL" @@ -106,10 +113,26 @@ def test_import_vars(self): dictionary = { "Hello": "World!" } import_vars(dictionary=dictionary) self.assertEqual( Hello, "World!" ) + #test array + shell_str='("1" "2") \n' + v = str_to_list(shell_str) + self.assertTrue( isinstance(v,list) ) + self.assertEqual(v, [1, 2]) + shell_str = '( "1" "2" \n' + v = str_to_list(shell_str) + self.assertFalse( isinstance(v,list) ) def test_config_parser(self): cfg = { "HRS": [ "1", "2" ] } shell_str = cfg_to_shell_str(cfg) self.assertEqual( shell_str, 'HRS=( "1" "2" )\n') + # ini file + cfg = load_ini_config(f'{self.PATH}/test_data/Externals.cfg') + self.assertIn( \ + 'regional_workflow', + get_ini_value( \ + cfg, + 'regional_workflow', + 'repo_url')) def test_print_msg(self): self.assertEqual( print_info_msg("Hello World!", verbose=False), False) def setUp(self): diff --git a/ush/python_utils/xml_parser.py b/ush/python_utils/xml_parser.py new file mode 100644 index 000000000..0428259c6 --- /dev/null +++ b/ush/python_utils/xml_parser.py @@ -0,0 +1,30 @@ +#!/usr/bin/env python3 + +import xml.etree.ElementTree as ET + +def load_xml_file(xml_file): + """ Loads xml file + + Args: + xml_file: path to xml file + Returns: + root of the xml tree + """ + tree = ET.parse(xml_file) + return tree + +def has_tag_with_value(tree, tag, value): + """ Check if xml tree has a node with tag and value + + Args: + tree: the xml tree + tag: the tag + value: text of tag + Returns: + Boolean + """ + for node in tree.iter(): + if node.tag == tag and node.text == value: + return True + return False + diff --git a/ush/retrieve_data.py b/ush/retrieve_data.py old mode 100644 new mode 100755 index 48210c0d5..d68c08237 --- a/ush/retrieve_data.py +++ b/ush/retrieve_data.py @@ -1,3 +1,4 @@ +#!/usr/bin/env python3 # pylint: disable=logging-fstring-interpolation ''' This script helps users pull data from known data streams, including @@ -31,6 +32,8 @@ import shutil import subprocess import sys +from textwrap import dedent + import yaml @@ -43,7 +46,7 @@ def clean_up_output_dir(expected_subdir, local_archive, output_path, source_path unavailable = {} # Check to make sure the files exist on disk for file_path in source_paths: - local_file_path = 
os.path.join(output_path,file_path) + local_file_path = os.path.join(output_path, file_path.lstrip("/")) if not os.path.exists(local_file_path): logging.info(f'File does not exist: {local_file_path}') unavailable['hpss'] = source_paths @@ -126,46 +129,6 @@ def download_file(url): return True -def download_requested_files(cla, data_store, store_specs): - - ''' This function interacts with the "download" protocol in a - provided data store specs file to download a set of files requested - by the user. It calls download_file for each individual file that - should be downloaded. ''' - - base_urls = store_specs['url'] - base_urls = base_urls if isinstance(base_urls, list) else [base_urls] - - file_names = store_specs.get('file_names', {}) - if cla.file_type is not None: - file_names = file_names[cla.file_type] - file_names = file_names[cla.anl_or_fcst] - target_path = fill_template(cla.output_path, - cla.cycle_date) - - logging.info(f'Downloaded files will be placed here: \n {target_path}') - orig_path = os.getcwd() - os.chdir(target_path) - unavailable = {} - for base_url in base_urls: - for fcst_hr in cla.fcst_hrs: - for file_name in file_names: - url = os.path.join(base_url, file_name) - url = fill_template(url, cla.cycle_date, fcst_hr) - downloaded = download_file(url) - if not downloaded: - - if unavailable.get(data_store) is None: - unavailable[data_store] = [] - unavailable[data_store].append(target_path) - os.chdir(orig_path) - # Returning here assumes that if the first file - # isn't found, none of the others will be. Don't - # waste time timing out on every requested file. - return unavailable - os.chdir(orig_path) - return unavailable - def fhr_list(args): ''' @@ -184,7 +147,7 @@ def fhr_list(args): Must ensure that the list contains integers. ''' - args = args if isinstance(args, list) else [args] + args = args if isinstance(args, list) else list(args) arg_len = len(args) if arg_len in (2, 3): args[1] += 1 @@ -271,7 +234,7 @@ def find_archive_files(paths, file_names, cycle_date): return '', 0 -def get_requested_files(cla, file_names, input_loc, method='disk'): +def get_requested_files(cla, file_templates, input_loc, method='disk'): ''' This function copies files from disk locations or downloads files from a url, depending on the option specified for @@ -282,12 +245,12 @@ def get_requested_files(cla, file_names, input_loc, method='disk'): Arguments: - cla Namespace object containing command line arguments - file_names Dict of file names by file type and kind - input_loc A string containing a single data location, either a url - or disk path. - method Choice of disk or download to indicate protocol for - retrieval + cla Namespace object containing command line arguments + file_templates a list of file templates + input_loc A string containing a single data location, either a url + or disk path. 
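One subtle fix above is in `clean_up_output_dir`: `os.path.join` discards every earlier component when a later one is absolute, so archive-internal paths that begin with `/` must have the leading slash stripped before joining. A short demonstration with made-up paths (not from the patch):

```
# Illustrative sketch: why file_path.lstrip("/") matters before os.path.join.
import os

output_path = "/scratch/expt/staging"           # made-up staging directory
file_path = "/mem01/gfs.t00z.pgrb2.0p25.f006"   # made-up archive-internal path

# Without stripping, os.path.join drops output_path because the second
# argument is absolute.
print(os.path.join(output_path, file_path))
# -> /mem01/gfs.t00z.pgrb2.0p25.f006

# With the fix, the existence check looks under the staging directory.
print(os.path.join(output_path, file_path.lstrip("/")))
# -> /scratch/expt/staging/mem01/gfs.t00z.pgrb2.0p25.f006
```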
+ method Choice of disk or download to indicate protocol for + retrieval Returns unavailable a dict whose keys are "method" and whose values are a @@ -296,12 +259,10 @@ def get_requested_files(cla, file_names, input_loc, method='disk'): unavailable = {} - if cla.file_type is not None: - file_names = file_names[cla.file_type] - file_names = file_names[cla.anl_or_fcst] + logging.info(f'Getting files named like {file_templates}') - file_names = file_names if isinstance(file_names, list) else \ - [file_names] + file_templates = file_templates if isinstance(file_templates, list) else \ + [file_templates] target_path = fill_template(cla.output_path, cla.cycle_date) @@ -310,8 +271,9 @@ def get_requested_files(cla, file_names, input_loc, method='disk'): os.chdir(target_path) unavailable = {} for fcst_hr in cla.fcst_hrs: - for file_name in file_names: - loc = os.path.join(input_loc, file_name) + for file_template in file_templates: + loc = os.path.join(input_loc, file_template) + logging.debug(f'Full file path: {loc}') loc = fill_template(loc, cla.cycle_date, fcst_hr) if method == 'disk': @@ -358,7 +320,7 @@ def hsi_single_file(file_path, mode='ls'): return file_path -def hpss_requested_files(cla, store_specs): +def hpss_requested_files(cla, file_names, store_specs): ''' This function interacts with the "hpss" protocol in a provided data store specs file to download a set of files requested @@ -392,64 +354,70 @@ def hpss_requested_files(cla, store_specs): f' {list(zip(archive_paths, archive_file_names))}') existing_archive, which_archive = find_archive_files(archive_paths, - archive_file_names, - cla.cycle_date, - ) + archive_file_names, + cla.cycle_date, + ) if not existing_archive: logging.warning('No archive files were found!') unavailable['archive'] = list(zip(archive_paths, archive_file_names)) return unavailable - # Use the found archive file path to get the necessary files - file_names = store_specs.get('file_names', {}) - if cla.file_type is not None: - file_names = file_names[cla.file_type] - file_names = file_names[cla.anl_or_fcst] + logging.info(f'Files in archive are named: {file_names}') - logging.debug(f'Grabbing archive number {which_archive} in list.') - archive_internal_dir = store_specs.get('archive_internal_dir', [''])[which_archive] - archive_internal_dir = fill_template(archive_internal_dir, - cla.cycle_date) + archive_internal_dirs = store_specs.get('archive_internal_dir', ['']) + if isinstance(archive_internal_dirs, dict): + archive_internal_dirs = archive_internal_dirs.get(cla.anl_or_fcst, ['']) - output_path = fill_template(cla.output_path, cla.cycle_date) - logging.info(f'Will place files in {os.path.abspath(output_path)}') - orig_path = os.getcwd() - os.chdir(output_path) - logging.debug(f'CWD: {os.getcwd()}') + # which_archive matters for choosing the correct file names within, + # but we can safely just try all options for the + # archive_internal_dir + logging.debug(f'Checking archive number {which_archive} in list.') - source_paths = [] - for fcst_hr in cla.fcst_hrs: - for file_name in file_names: - source_paths.append(fill_template( - os.path.join(archive_internal_dir, file_name), - cla.cycle_date, - fcst_hr, - )) + for archive_internal_dir_tmpl in archive_internal_dirs: + archive_internal_dir = fill_template(archive_internal_dir_tmpl, + cla.cycle_date) - if store_specs.get('archive_format', 'tar') == 'zip': - # Get the entire file from HPSS - existing_archive = hsi_single_file(existing_archive, mode='get') + output_path = fill_template(cla.output_path, 
cla.cycle_date) + logging.info(f'Will place files in {os.path.abspath(output_path)}') + orig_path = os.getcwd() + os.chdir(output_path) + logging.debug(f'CWD: {os.getcwd()}') - # Grab only the necessary files from the archive - cmd = f'unzip -o {os.path.basename(existing_archive)} {" ".join(source_paths)}' + source_paths = [] + for fcst_hr in cla.fcst_hrs: + for file_name in file_names: + source_paths.append(fill_template( + os.path.join(archive_internal_dir, file_name), + cla.cycle_date, + fcst_hr, + )) - else: - cmd = f'htar -xvf {existing_archive} {" ".join(source_paths)}' + if store_specs.get('archive_format', 'tar') == 'zip': + # Get the entire file from HPSS + existing_archive = hsi_single_file(existing_archive, mode='get') - logging.info(f'Running command \n {cmd}') - subprocess.run(cmd, - check=True, - shell=True, - ) - - # Check that files exist and Remove any data transfer artifacts. - unavailable = clean_up_output_dir( - expected_subdir=archive_internal_dir, - local_archive=os.path.basename(existing_archive), - output_path=output_path, - source_paths=source_paths, - ) + # Grab only the necessary files from the archive + cmd = f'unzip -o {os.path.basename(existing_archive)} {" ".join(source_paths)}' + + else: + cmd = f'htar -xvf {existing_archive} {" ".join(source_paths)}' + + logging.info(f'Running command \n {cmd}') + subprocess.run(cmd, + check=True, + shell=True, + ) + + # Check that files exist and Remove any data transfer artifacts. + unavailable = clean_up_output_dir( + expected_subdir=archive_internal_dir, + local_archive=os.path.basename(existing_archive), + output_path=output_path, + source_paths=source_paths, + ) + if not unavailable: + return unavailable os.chdir(orig_path) @@ -505,6 +473,30 @@ def setup_logging(debug=False): +def write_summary_file(cla, data_store, file_templates): + + ''' Given the command line arguments and the data store from which the data + was retrieved, write a bash summary file that is needed by the workflow + elements downstream. ''' + + files = [] + for tmpl in file_templates: + files.extend([fill_template(tmpl, cla.cycle_date, fh) for fh in cla.fcst_hrs]) + + summary_fp = os.path.join(cla.output_path, cla.summary_file) + logging.info(f'Writing a summary file to {summary_fp}') + file_contents = dedent(f''' + DATA_SRC={data_store} + EXTRN_MDL_CDATE={cla.cycle_date.strftime('%Y%m%d%H')} + EXTRN_MDL_STAGING_DIR={cla.output_path} + EXTRN_MDL_FNS=( {' '.join(files)} ) + EXTRN_MDL_FHRS=( {' '.join([str(i) for i in cla.fcst_hrs])} ) + ''') + logging.info(f'Contents: {file_contents}') + with open(summary_fp, "w") as summary: + summary.write(file_contents) + + def to_datetime(arg): ''' Return a datetime object give a string like YYYYMMDDHH. ''' @@ -521,29 +513,35 @@ def main(cla): paths in priority order. 
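The new `write_summary_file` helper emits a small bash variable-definitions file for downstream workflow tasks. The sketch below (not from the patch) shows roughly what that file contains for a hypothetical two-hour retrieval; the variable names come from the function above, while the cycle, file names, and staging directory are invented for illustration.

```
# Illustrative sketch: the bash variable-definitions content produced by
# write_summary_file for a hypothetical retrieval.  All values are invented.
from datetime import datetime
from textwrap import dedent

cycle = datetime(2022, 6, 1, 0)
files = ["gfs.t00z.pgrb2.0p25.f000", "gfs.t00z.pgrb2.0p25.f006"]
fcst_hrs = [0, 6]

print(dedent(f'''
    DATA_SRC=hpss
    EXTRN_MDL_CDATE={cycle.strftime('%Y%m%d%H')}
    EXTRN_MDL_STAGING_DIR=/path/to/staging/dir
    EXTRN_MDL_FNS=( {' '.join(files)} )
    EXTRN_MDL_FHRS=( {' '.join([str(h) for h in fcst_hrs])} )
'''))
```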
''' - setup_logging(cla.debug) - - known_data_info = cla.config.get(cla.external_model) - if known_data_info is None: - msg = ('No data stores have been defined for', - f'{cla.external_model}!') - raise KeyError(msg) + data_stores = cla.data_stores + known_data_info = cla.config.get(cla.external_model, {}) + if not known_data_info: + msg = dedent(f'''No data stores have been defined for + {cla.external_model}!''') + if cla.input_file_path is None: + data_stores = ['disk'] + raise KeyError(msg) + logging.info(msg + ' Only checking provided disk location.') unavailable = {} - for data_store in cla.data_stores: + for data_store in data_stores: logging.info(f'Checking {data_store} for {cla.external_model}') store_specs = known_data_info.get(data_store, {}) if data_store == 'disk': - file_names = cla.file_names if cla.file_names else \ + file_templates = cla.file_templates if cla.file_templates else \ known_data_info.get('hpss', {}).get('file_names') - logging.debug(f'User supplied file names are: {file_names}') - if not file_names: - msg = ('No file name found. They must be provided \ + if isinstance(file_templates, dict): + if cla.file_type is not None: + file_templates = file_templates[cla.file_type] + file_templates = file_templates[cla.anl_or_fcst] + logging.debug(f'User supplied file names are: {file_templates}') + if not file_templates: + msg = ('No file naming convention found. They must be provided \ either on the command line or on in a config file.') raise argparse.ArgumentTypeError(msg) unavailable = get_requested_files(cla, - file_names=file_names, + file_templates=file_templates, input_loc=cla.input_file_path, method='disk', ) @@ -552,27 +550,35 @@ def main(cla): msg = (f'No information is available for {data_store}.') raise KeyError(msg) - if store_specs.get('protocol') == 'download': - file_names = store_specs.get('file_names') - if not file_names: - msg = ('No file name found. They must be provided \ + else: + + file_templates = store_specs.get('file_names') + if isinstance(file_templates, dict): + if cla.file_type is not None: + file_templates = file_templates[cla.file_type] + file_templates = file_templates[cla.anl_or_fcst] + if not file_templates: + msg = ('No file name naming convention found. They must be provided \ either on the command line or on in a config file.') raise argparse.ArgumentTypeError(msg) - unavailable = get_requested_files(cla, - file_names=file_names, - input_loc=store_specs['url'], - method='download', - ) - - if store_specs.get('protocol') == 'htar': - unavailable = hpss_requested_files(cla, store_specs) + if store_specs.get('protocol') == 'download': + unavailable = get_requested_files(cla, + file_templates=file_templates, + input_loc=store_specs['url'], + method='download', + ) + if store_specs.get('protocol') == 'htar': + unavailable = hpss_requested_files(cla, file_templates, store_specs) if not unavailable: # All files are found. Stop looking! 
+ # Write a variable definitions file for the data, if requested + if cla.summary_file: + write_summary_file(cla, data_store, file_templates) break - logging.warning(f'Requested files are unavialable from {data_store}') + logging.warning(f'Requested files are unavailable from {data_store}') if unavailable: logging.error('Could not find any of the requested files.') @@ -657,11 +663,11 @@ def parse_args(): help='Print debug messages', ) parser.add_argument( - '--file_names', - help='A YAML-formatted string that indicates the naming \ + '--file_templates', + help='One or more file template strings defining the naming \ convention the be used for the files retrieved from disk. If \ not provided, the default names from hpss are used.', - type=load_str, + nargs='*', ) parser.add_argument( '--file_type', @@ -672,9 +678,13 @@ def parse_args(): '--input_file_path', help='A path to data stored on disk. The path may contain \ Python templates. File names may be supplied using the \ - --file_names flag, or the default naming convention will be \ + --file_templates flag, or the default naming convention will be \ taken from the --config file.', - nargs='*', + ) + parser.add_argument( + '--summary_file', + help='Name of the summary file to be written to the output \ + directory', ) return parser.parse_args() @@ -684,6 +694,15 @@ def parse_args(): CLA.output_path = path_exists(CLA.output_path) CLA.fcst_hrs = fhr_list(CLA.fcst_hrs) + + setup_logging(CLA.debug) + print(f"Running script retrieve_data.py with args:\n", + f"{('-' * 80)}\n{('-' * 80)}") + for name, val in CLA.__dict__.items(): + if name not in ['config']: + print(f"{name:>15s}: {val}") + print(f"{('-' * 80)}\n{('-' * 80)}") + if 'disk' in CLA.data_stores: # Make sure a path was provided. if not CLA.input_file_path: @@ -697,7 +716,6 @@ def parse_args(): output = subprocess.run('which hsi', check=True, shell=True, - capture_output=True, ) except subprocess.CalledProcessError: logging.error('You requested the hpss data store, but ' \ diff --git a/ush/rocoto/.gitignore b/ush/rocoto/.gitignore deleted file mode 100644 index 7f98f9b45..000000000 --- a/ush/rocoto/.gitignore +++ /dev/null @@ -1,47 +0,0 @@ -# Compiled source # -################### -*.com -*.class -*.dll -*.exe -*.o -*.mod -*.so -*.pyc - -# Temporary files # -################### -*.swp -*.swo -*~ - -# Packages # -############ -*.7z -*.dmg -*.gz -*.iso -*.jar -*.rar -*.tar -*.zip - -# Logs and databases # -###################### -*.log -*.sql -*.sqlite - -# OS generated files # -###################### -.DS_Store* -ehthumbs.db -Icon? -Thumbs.db - -*.lock -package-lock.json - -# Subversion and Git directories -.svn -.git diff --git a/ush/rocoto/fv3gfs_workflow.sh b/ush/rocoto/fv3gfs_workflow.sh deleted file mode 100755 index 7e52c6e76..000000000 --- a/ush/rocoto/fv3gfs_workflow.sh +++ /dev/null @@ -1,52 +0,0 @@ -#!/bin/sh - -# Checkout, build, setup and execute the workflow - -set -ex - -fv3gfs_tag="https://svnemc.ncep.noaa.gov/projects/fv3gfs/trunk" - -pslot="fv3test" -expdir="/path/to/expdir" -comrot="/path/to/comrot" -fv3gfs="/path/to/fv3gfs_tag/checkout" -idate="2017073118" -edate="2017080112" - -###################################### -# USER NEED NOT MODIFY BELOW THIS LINE -###################################### - -if [ -d /scratch4/NCEPDEV ]; then - machine="theia" - icsdir="/scratch4/NCEPDEV/global/noscrub/glopara/ICS/FV3GFS" -elif [ -d /gpfs/hps3 ]; then - machine="cray" - icsdir="/gpfs/hps3/emc/global/noscrub/emc.glopara/ICS" -else - echo "Unknown machine $machine, ABORT!" 
- exit -1 -fi - -[[ -d $expdir/$pslot ]] && rm -rf $expdir/$pslot -[[ -d $comrot/$pslot ]] && rm -rf $comrot/$pslot -[[ -d $fv3gfs/$pslot ]] && rm -rf $fv3gfs/$pslot - -gfs_ver=v15.0.0 -mkdir -p $fv3gfs -cd $fv3gfs -git clone --recursive gerrit:fv3gfs gfs.${gfs_ver} - -cd $fv3gfs/gfs.${gfs_ver}/sorc -sh checkout.sh -sh build_all.sh $machine -sh link_fv3gfs.sh emc $machine - -cd $fv3gfs/gfs.${gfs_ver}/ush/rocoto -python setup_expt.py --pslot $pslot --comrot $comrot --expdir $expdir --idate $idate --edate $edate --icsdir $icsdir --configdir ../parm/config -python setup_workflow.py --expdir $expdir/$pslot - -cd $expdir/$pslot -crontab $pslot.crontab - -exit diff --git a/ush/rocoto/rocoto.py b/ush/rocoto/rocoto.py deleted file mode 100755 index 0a0fdf7d8..000000000 --- a/ush/rocoto/rocoto.py +++ /dev/null @@ -1,346 +0,0 @@ -#!/usr/bin/env python - -############################################################### -# < next few lines under version control, D O N O T E D I T > -# $Date$ -# $Revision$ -# $Author$ -# $Id$ -############################################################### -''' - MODULE: - rocoto.py - - ABOUT: - Helper module to create tasks, metatasks, and dependencies - - AUTHOR: - Rahul.Mahajan - rahul.mahajan@noaa.gov -''' - -def create_metatask(task_dict, metatask_dict): - ''' - create a Rocoto metatask given a dictionary containing task and metatask information - :param metatask_dict: metatask key-value parameters - :type metatask_dict: dict - :param task_dict: task key-value parameters - :type task_dict: dict - :return: Rocoto metatask - :rtype: list - ''' - - # Grab metatask info from the metatask_dict - metataskname = metatask_dict.get('metataskname', 'demometatask') - varname = metatask_dict.get('varname', 'demovar') - varval = metatask_dict.get('varval', 1) - vardict = metatask_dict.get('vardict', None) - - strings = [] - - strings.append('\n' % metataskname) - strings.append('\n') - strings.append('\t%s\n' % (varname, str(varval))) - if vardict is not None: - for key in vardict.keys(): - value = str(vardict[key]) - strings.append('\t%s\n' % (key, value)) - strings.append('\n') - tasklines = create_task(task_dict) - for tl in tasklines: - strings.append('%s' % tl) if tl == '\n' else strings.append('\t%s' % tl) - strings.append('\n') - strings.append('\n') - - return strings - - -def create_task(task_dict): - ''' - create a Rocoto task given a dictionary containing task information - :param task_dict: task key-value parameters - :type task_dict: dict - :return: Rocoto task - :rtype: list - ''' - - # Grab task info from the task_dict - taskname = task_dict.get('taskname', 'demotask') - cycledef = task_dict.get('cycledef', 'democycle') - maxtries = task_dict.get('maxtries', 3) - final = task_dict.get('final', False) - command = task_dict.get('command', 'sleep 10') - jobname = task_dict.get('jobname', 'demojob') - account = task_dict.get('account', 'batch') - queue = task_dict.get('queue', 'debug') - walltime = task_dict.get('walltime', '00:01:00') - log = task_dict.get('log', 'demo.log') - native = task_dict.get('native', None) - memory = task_dict.get('memory', None) - resources = task_dict.get('resources', None) - envar = task_dict.get('envar', None) - dependency = task_dict.get('dependency', None) - - str_maxtries = str(maxtries) - str_final = ' final="true"' if final else '' - envar = envar if isinstance(envar, list) else [envar] - - strings = [] - - strings.append('\n' % \ - (taskname, cycledef, str_maxtries, str_final)) - strings.append('\n') - strings.append('\t%s\n' % 
command) - strings.append('\n') - strings.append('\t%s\n' % jobname) - strings.append('\t%s\n' % account) - strings.append('\t%s\n' % queue) - if resources is not None: - strings.append('\t%s\n' % resources) - strings.append('\t%s\n' % walltime) - if memory is not None: - strings.append('\t%s\n' % memory) - if native is not None: - strings.append('\t%s\n' % native) - strings.append('\n') - strings.append('\t%s\n' % log) - strings.append('\n') - - if envar[0] is not None: - for e in envar: - strings.append('\t%s\n' % e) - strings.append('\n') - - if dependency is not None: - strings.append('\t\n') - for d in dependency: - strings.append('\t\t%s\n' % d) - strings.append('\t\n') - strings.append('\n') - - strings.append('\n') - - return strings - - -def add_dependency(dep_dict): - ''' - create a simple Rocoto dependency given a dictionary with dependency information - :param dep_dict: dependency key-value parameters - :type dep_dict: dict - :return: Rocoto simple dependency - :rtype: str - ''' - - dep_condition = dep_dict.get('condition', None) - dep_type = dep_dict.get('type', None) - - if dep_type in ['task', 'metatask']: - - string = add_task_tag(dep_dict) - - elif dep_type in ['data']: - - string = add_data_tag(dep_dict) - - elif dep_type in ['cycleexist']: - - string = add_cycle_tag(dep_dict) - - elif dep_type in ['streq', 'strneq']: - - string = add_streq_tag(dep_dict) - - else: - - msg = 'Unknown dependency type %s' % dep_dict['type'] - raise KeyError(msg) - - if dep_condition is not None: - string = '<%s>%s' % (dep_condition, string, dep_condition) - - return string - - -def add_task_tag(dep_dict): - ''' - create a simple task or metatask tag - :param dep_dict: dependency key-value parameters - :type dep_dict: dict - :return: Rocoto simple task or metatask dependency - :rtype: str - ''' - - dep_type = dep_dict.get('type', None) - dep_name = dep_dict.get('name', None) - dep_offset = dep_dict.get('offset', None) - - if dep_name is None: - msg = 'a %s name is necessary for %s dependency' % (dep_type, dep_type) - raise KeyError(msg) - - string = '<' - string += '%sdep %s="%s"' % (dep_type, dep_type, dep_name) - if dep_offset is not None: - string += ' cycle_offset="%s"' % dep_offset - string += '/>' - - return string - -def add_data_tag(dep_dict): - ''' - create a simple data tag - :param dep_dict: dependency key-value parameters - :type dep_dict: dict - :return: Rocoto simple task or metatask dependency - :rtype: str - ''' - - dep_type = dep_dict.get('type', None) - dep_data = dep_dict.get('data', None) - dep_offset = dep_dict.get('offset', None) - - if dep_data is None: - msg = 'a data value is necessary for %s dependency' % dep_type - raise KeyError(msg) - - if dep_offset is None: - if '@' in dep_data: - offset_string_b = '' - offset_string_e = '' - else: - offset_string_b = '' - offset_string_e = '' - else: - offset_string_b = '' % dep_offset - offset_string_e = '' - - string = '' - string += '%s%s%s' % (offset_string_b, dep_data, offset_string_e) - string += '' - - return string - -def add_cycle_tag(dep_dict): - ''' - create a simple cycle exist tag - :param dep_dict: dependency key-value parameters - :type dep_dict: dict - :return: Rocoto simple task or metatask dependency - :rtype: str - ''' - - dep_type = dep_dict.get('type', None) - dep_offset = dep_dict.get('offset', None) - - if dep_offset is None: - msg = 'an offset value is necessary for %s dependency' % dep_type - raise KeyError(msg) - - string = '' % dep_offset - - return string - -def add_streq_tag(dep_dict): - ''' - 
create a simple string comparison tag - :param dep_dict: dependency key-value parameters - :type dep_dict: dict - :return: Rocoto simple task or metatask dependency - :rtype: str - ''' - - dep_type = dep_dict.get('type', None) - dep_left = dep_dict.get('left', None) - dep_right = dep_dict.get('right', None) - - fail = False - msg = '' - if dep_left is None: - msg += 'a left value is necessary for %s dependency' % dep_type - fail = True - if dep_right is None: - if fail: - msg += '\n' - msg += 'a right value is necessary for %s dependency' % dep_type - fail = True - if fail: - raise KeyError(msg) - - string = '<%s>%s%s' % (dep_type, dep_left, dep_right, dep_type) - - return string - - -def _traverse(o, tree_types=(list, tuple)): - ''' - Traverse through a list of lists or tuples and yeild the value - Objective is to flatten a list of lists or tuples - :param o: list of lists or not - :type o: list, tuple, scalar - :param tree_types: trees to travers - :type tree_types: tuple - :return: value in the list or tuple - :rtype: scalar - ''' - - if isinstance(o, tree_types): - for value in o: - for subvalue in _traverse(value, tree_types): - yield subvalue - else: - yield o - - -def create_dependency(dep_condition=None, dep=None): - ''' - create a compound dependency given a list of dependendies, and compounding condition - the list of dependencies are created using add_dependency - :param dep_condition: dependency condition - :type dep_condition: boolean e.g. and, or, true, false - :param dep: dependency - :type dep: str or list - :return: Rocoto compound dependency - :rtype: list - ''' - - dep = dep if isinstance(dep, list) else [dep] - - strings = [] - - if dep_condition is not None: - strings.append('<%s>' % dep_condition) - - if dep[0] is not None: - for d in dep: - if dep_condition is None: - strings.append('%s' % d) - else: - for e in _traverse(d): - strings.append('\t%s' % e) - - if dep_condition is not None: - strings.append('' % dep_condition) - - return strings - - -def create_envar(name=None,value=None): - ''' - create an Rocoto environment variable given name and value - returns the environment variable as a string - :param name: name of the environment variable - :type name: str - :param value: value of the environment variable - :type value: str or float or int or unicode - :return: Rocoto environment variable key-value pair - :rtype: str - ''' - - string = '' - string += '' - string += '%s' % name - string += '%s' % str(value) - string += '' - - return string diff --git a/ush/rocoto/rocoto_viewer.py b/ush/rocoto/rocoto_viewer.py deleted file mode 100755 index 26db6943f..000000000 --- a/ush/rocoto/rocoto_viewer.py +++ /dev/null @@ -1,2383 +0,0 @@ -#!/usr/bin/env python -# -##@namespace rocoto_viewer -# @brief A Curses based terminal viewer to interact and display the status of a Rocoto Workflow in real time. -# -# @anchor rocoto_viewer -## This Python script allows users to see and interact with a running Rocoto Workflow in real time. -# \image html pythonCurses.jpeg "Rocoto Viewer for Displaying Real-time Status of Workflow" -# -# To launch this viewer simply give it the database and the XML files being used by the \b Rocoto system for your experiment: -# -# rocoto_viewer.py -w my_gfs-workflow.xml -d my_database.db -# -# The script is located in the directory para/exp/rocoto/rocotoviewers/rocotoviewer_curses/rocoto_viewer.py -# The view will continuously update every four minutes and reflect the current status of your workflow. 
You may use your mouse or arrow keys to select a particular task and view its status details by pressing the key \p c as indicated as \b \ (which runs \b rocotocheck) or perform a \b rocotorewind by pressing \b \ to restart the workflow at that point. Running \b rocotorewind causes the state information of that task to be cleared from the database and resubmits the job to the scheduler. -# -# Tasks marked with the \b \< symbol are \b metatasks and can be expanded by highlight that task with the mouse, and then clicking on the \b \< symbol which then changes to \b \> . You can then click on the \b \> symbol to collapse it again. Alternatively, you can select the 'x' to expand and collapse metatasks when selected. -# -##@cond ROCOTO_VIEWER_CURSES - -from __future__ import division - -import curses - -import os, sys, getpass, getopt, signal, tempfile -from os.path import basename -import subprocess -from math import * - -from __builtin__ import any as b_any -from os.path import realpath, normpath, dirname, getsize -from io import StringIO -from itertools import groupby -from time import time -from multiprocessing import Process, Queue -import time as std_time -from datetime import datetime, timedelta -import uuid -import shutil - -import sqlite3,datetime,collections -import xml.etree.ElementTree as ET -import cPickle - -try: - from dateutil.relativedelta import relativedelta -except ImportError: - #print 'dateutil which uses relativedelta to increment monthly (used by UGCS) is not supported with this version of python. Use Anaconda the native version in /user/bin' - #sys.exit(1) - pass - -# Global Variables -database_file_agmented = None -use_performance_metrics = False -default_column_length = 125 -stat_read_time_delay = 3*60 -temp_workflow_file = '' -header_string = '' -format_string = "jobid slots submit_time start_time cpu_used run_time delimiter=';'" - -ccs_html=''' - - - - - - -''' -bottom_message_scroll = 'heck oot ewind un (->) Next Cycle (<-) Previous Cycle p own elp uit' -bottom_message = 'heck oot ewind un (->) Next Cycle (<-) Previous Cycle elp uit' - -#Global Variables -#================ -list_tasks = False -html_output = False -html_output_file = None -rzdm_path = '' -only_check_point = False -save_checkfile_path = None -use_multiprocessing = True -get_user = getpass.getuser() - -screen_resized = False -debug = None - -mlines = 0 -mcols = 0 - -def sigwinch_handler(signum, frame): - global screen_resized - global mlines - global mcols - term_size = subprocess.Popen(['stty', 'size'], stdout=subprocess.PIPE) - try: - get_term_size, err = term_size.communicate() - except: - return - mlines,mcols = map(int,get_term_size.split()) - screen_resized = True - -def usage(message=None): - curses.endwin() - print>>sys.stderr, ''' -Usage: rocoto_status_viewer.py -w workflow.xml -d database.db [--listtasks]\n [--html=filename.html]\n [--perfmetrics={True,False}] - -Mandatory arguments: - -w workflow.xml - -d database.db -Optional arguments: - --listtasks --- print out a list of all tasks - --html=filename.html --- creates an HTML document of status - --perfmetrics=True --- turn on/off extra columns for performance metrics - --help --- print this usage message''' - - if message is not None: - print>>sys.stderr,'\n'+str(message).rstrip()+'\n' - sys.exit(-1) - -def augment_SQLite3(filename): - - connection=sqlite3.connect(filename) - c=connection.cursor() - #qinfo=c.execute("DROP TABLE IF EXISTS jobs_augment;") - qinfo=c.execute("PRAGMA table_info(jobs_augment)").fetchall() - if any('qtime' in 
element for element in qinfo): - c.close() - return 'is_already_augmented' - else: - sql_create_augment_table = "CREATE TABLE jobs_augment AS SELECT * FROM jobs;" - q=c.execute(sql_create_augment_table) - q=c.execute("alter table jobs_augment add column qtime integer;") - q=c.execute("alter table jobs_augment add column cputime integer;") - q=c.execute("alter table jobs_augment add column runtime integer;") - q=c.execute("alter table jobs_augment add column slots integer;") - connection.commit() - - c.close() - database_file = filename - return 'now_augmented' - -def isSQLite3(filename): - from produtil.fileop import check_file - from produtil.fileop import deliver_file - if not check_file(filename): - return False - if getsize(filename) < 100: - return False - with open(filename, 'rb') as fd: - header = fd.read(100) - fd.close() - if not header[:16] == 'SQLite format 3\x00': - return False - else: - return True - -def isRocotoWorkflow(filename): - from produtil.fileop import check_file - if not check_file(filename): - return False - with open(filename, 'r') as input: - for line in input: - if 'DOCTYPE workflow' in line: - input.close() - return True - return False - - -def load_produtil_pythonpath(): - - try: - import produtil.cluster - return True - except ImportError: - pass - - PRODUTIL = collections.defaultdict(list) - PRODUTIL['theia'] = '/scratch4/NCEPDEV/global/save/glopara/svn/nceplibs/produtil/trunk/ush' - PRODUTIL['luna'] = '/gpfs/hps3/emc/global/noscrub/emc.glopara/svn/nceplibs/produtil/trunk/ush' - PRODUTIL['tide'] = '/gpfs/td1/emc/global/save/emc.glopara/svn/nceplibs/produtil/trunk/ush' - PRODUTIL['gyre'] = '/gpfs/gd1/emc/global/save/emc.glopara/svn/nceplibs/produtil/trunk/ush' - try_clusters = ('theia','luna','tide','gyre') - - for cluster in try_clusters: - sys.path.append(PRODUTIL[cluster]) - try: - import produtil.cluster - return True - except ImportError: - pass - return False - -def get_arguments(): - from produtil.fileop import check_file - short_opts = "w:d:f:" - long_opts = ["checkfile=","workfolw=","database=","html=","listtasks","onlycheckpoint","help","perfmetrics="] - try: - opts, args = getopt.getopt(sys.argv[1:], short_opts, long_opts) - except getopt.GetoptError as err: - print str(err) - print - usage('SCRIPT IS ABORTING DUE TO UNRECOGNIZED ARGUMENT') - - global save_checkfile_path - global use_performance_metrics - workflow_file = None - database_file = None - perfmetrics_on = None - for k, v in opts: - if k in ('-w', '--workflow'): - workflow_file = v - elif k in ('-d','--database'): - database_file = v - elif k in ('-f','--checkfile'): - save_checkfile_path = v - elif k in ('--perfmetrics'): - perfmetrics_on = v - elif k in ('--listtasks'): - global list_tasks - list_tasks = True - elif k in ('--onlycheckpoint'): - global only_check_point - only_check_point = True - elif k in ('--html'): - global html_output - global rzdm_path - global send_html_to_rzdm - send_html_to_rzdm = True - rzdm_path = v - html_output = True - elif k in ('--help'): - usage('') - else: - pass - #usage('OPTION NOT REGOGNIZED') - - if perfmetrics_on is None: - use_performance_metrics = False - elif perfmetrics_on.lower() == 'true': - use_performance_metrics = True - elif perfmetrics_on.lower() == 'false': - use_performance_metrics = False - elif perfmetrics_on is not None: - usage('perfmetrics must be either set to true or false (e.g. 
--perfmetrics=True') - - send_html_to_rzdm = False - if len(rzdm_path) != 0: - if ':' not in rzdm_path or '@' not in rzdm_path: - print 'No user name or path found for sending html directory to server, no files will be sent to rzdm' - print 'Creating html folder in: %s'%rzdm_path - else: - send_html_to_rzdm = True - - if list_tasks and workflow_file is None: - usage('In order to list tasks you must supply the XML worflow-file') - - if only_check_point and (workflow_file is None or database_file is None or save_checkfile_path is None): - usage('To use the check point output you must specify the workflow, data base, and the specific name of the checkpoint file') - - if (not list_tasks) and (workflow_file is None or database_file is None): - usage('Booth database-file and workflow-file must be specified') - - if (not list_tasks) and (workflow_file is not None and database_file is not None): - #debug.write('database_file_agmented: '+database_file_agmented+'\n') - if not isSQLite3( database_file ): - usage('%s is not SQLite3 database file'%database_file) - if not isRocotoWorkflow( workflow_file ): - usage('%s is not an Rocoto XML file'%workflow_file) - - #global use_multiprocessing - #if getsize(database_file) < 104857600: - # use_multiprocessing = True - #else: - # use_multiprocessing = True - - return (workflow_file,database_file ) - - -def get_entity_values( workflow_file ): - - entity_values = collections.defaultdict(list) - with open( workflow_file, 'rw' ) as f: - for line in f: - split_line = line.split() - if ']>' in line: - break - if 'ENTITY' in line: - if 'SYSTEM' in line: - value = split_line[3] - else: - value = split_line[2] - entity_values[ split_line[1] ] = value[:-1].replace('"','') - return entity_values - -def timedelta_total_seconds(timedelta): - return ( - timedelta.microseconds + 0.0 + - (timedelta.seconds + timedelta.days * 24 * 3600) * 10 ** 6) / 10 ** 6 - -def get_aug_perf_values( username ): - from produtil.run import run,runstr, batchexe - global html_ouput - global format_keys - cmd = batchexe('which') ['bjobs'] - try: - which_bjobs = runstr(cmd).strip() - except Exception,e: - return None - bjobs = collections.defaultdict(dict) - aug_perf = collections.defaultdict(dict) - cmd = batchexe( which_bjobs )['-a','-o',format_string,'-u',username] - bjobs_line = runstr(cmd) - if 'No job found' in bjobs_line: - return None - bjobs_lines = bjobs_line.split('\n') - for l,line in enumerate(bjobs_lines): - split_line = line.split(';') - if l == 0: - format_keys = split_line - continue - for i, value in enumerate(split_line): - if i == 0: - key = value - else: - if format_keys[i] in ('RUN_TIME','CPU_USED'): - value_list = value.split() - if len(value_list) > 1: - value = value_list[0] - bjobs[key][format_keys[i]] = value - sub_time_string = '' - year = str(datetime.datetime.now().year)+' ' - sub_time = None - bstart_time = None - for jobid,keys in bjobs.iteritems(): - #debug.write(jobid+'\n') - for key in keys: - #debug.write(' '+key+":"+bjobs[jobid][key]+'\n') - try: - int_key = int(bjobs[jobid][key].strip()) - str_key = str(int_key) - except: - str_key = bjobs[jobid][key].strip() - - if key == 'SUBMIT_TIME': - sub_time_string = str_key - try: - sub_time = datetime.datetime.strptime( year+sub_time_string, '%Y %b %d %H:%M' ) - except: - sub_time = None - continue - elif key == 'START_TIME': - bstart_time_string = str_key - try: - bstart_time = datetime.datetime.strptime( year+bstart_time_string, '%Y %b %d %H:%M' ) - except: - bstart_time = None - continue - elif key == 'RUN_TIME': 
- aug_perf[jobid]['runtime'] = str_key - elif key == 'CPU_USED': - aug_perf[jobid]['cputime'] = str_key - elif key == 'SLOTS': - aug_perf[jobid]['slots'] = str_key - - if bstart_time_string == sub_time_string: - aug_perf[jobid]['qtime'] = '0' - elif sub_time is not None and bstart_time is None : - try: - aug_perf[jobid]['qtime'] = str(int(( datetime.datetime.now() - sub_time ).total_seconds())) - except AttributeError: - aug_perf[jobid]['qtime'] = str(int(timedelta_total_seconds( datetime.datetime.now() - sub_time ))) - - elif sub_time is not None and bstart_time is not None: - try: - aug_perf[jobid]['qtime'] = str(int((bstart_time - sub_time).total_seconds())) - except AttributeError: - aug_perf[jobid]['qtime'] = str(int(timedelta_total_seconds(bstart_time - sub_time))) - else: - aug_perf[jobid]['qtime'] = '-' - - return aug_perf - -def help_screen( screen ): - - max_row = 25 - box_cols = 60 - box = curses.newwin( max_row, box_cols , 5, 32 ) - box.box() - box.border(0) - box.addstr( 0 , 23, ' when done', curses.A_BOLD ) - helpstr= [ 'heck : run rocotocheck on selected task(s)', - 'oot : run rocotoboot on selected task(s)', - 'ewind : run rocotorewind on selected task(s)', - 'un : run rocotorun on selected task(s)', - ' ', - '(->) Next Cycle own (or) Page-dwn to scroll', - '(<-) Previous Cycle own (or) Page-up to scroll ', - ' ', - ' + Arrow Up to selected multiple tasks', - ' + Arrow Down for using with rocoto utils', - 'Double-Click or to expand/collapse metatasks', - ' ', - ' Selects a task for list or opens meta-task list', - ' ', - ' When a meta-task list is open for selection:', - ' Double-Click (or) to select the begining', - ' of a range for selection and repeate to complete', - ' the desired selected list.', - '', - 'oads and renews status data (no rocotorun)', - 'inds the last cycle with a running task', - 'nloads and clears all previously seleted tasks', - 'makes a symlink of log file of highlited task'] - - for i in range(0,len(helpstr)): - box.addstr( 1+i ,2, helpstr[i] ) - x = screen.getch() - while x != ord('q'): - x = screen.getch() - box.refresh() - -def list_selector( screen, selected_strings, strings ): - - global screen_resized - global mlines - global mcols - global highlightText - global highlightSelectedText - global normalText - - def define_box(): - - if len( strings ) < mlines: - max_row = len( strings ) - else: - max_row = mlines - 12 - max_mcols = max(18,len(max( strings, key=len ))) - if max_mcols + 8 < mcols: - box_cols = max_mcols + 8 - else: - box_cols = mcols - 3 - box = curses.newwin( max_row + 6, box_cols , 4, 5 ) - box.box() - box.border(0) - - return box, max_row, box_cols - - strings_selected = selected_strings - string_ctr_selected = '' - - box, max_row, box_cols = define_box() - row_num = len( strings ) - pages = int( ceil( row_num / max_row ) ) - position = 1 - page = 1 - for i in range( 1, max_row+1 ): - if row_num == 0: - box.addstr( 1, 1, "There aren't strings", highlightText ) - else: - print_string = ' '+strings[ i - 1 ]+' ' - if (i == position): - box.addstr( i+1, 2, print_string, highlightText ) - else: - box.addstr( i+1, 2, print_string, normalText ) - if i == row_num: - break - - screen_resized = False - - range_selected = False - string_ctr_selected_prior = '' - - x = screen.getch() - while x != ord('q'): - - if screen_resized: - - screen_resized = False - curses.resizeterm(mlines, mcols) - screen.refresh() - box.clear() - box.erase() - - box, max_row, box_cols = define_box() - - box.border( 0 ) - box.refresh() - - if x in ( curses.KEY_SF, 
curses.KEY_DOWN): - if x == curses.KEY_SF: - string_selected = strings[ position - 1 ] - if string_selected in strings_selected: - string_ctr_selected = '' - try: - if len(strings_selected) > 0: - strings_selected.remove( string_selected ) - except ValueError: - pass - else: - strings_selected.append( string_selected ) - if page == 1: - if position < i: - position = position + 1 - else: - if pages > 1: - page = page + 1 - position = 1 + ( max_row * ( page - 1 ) ) - elif page == pages: - if position < row_num: - position = position + 1 - else: - if position < max_row + ( max_row * ( page - 1 ) ): - position = position + 1 - else: - box.erase() - box.border(0) - page = page + 1 - position = 1 + ( max_row * ( page - 1 ) ) - if x in ( curses.KEY_SR, curses.KEY_UP): - if x == curses.KEY_SR: - string_selected = strings[ position - 1 ] - if string_selected in strings_selected: - try: - if len(strings_selected) > 0: - strings_selected.remove( string_selected ) - except ValueError: - pass - else: - strings_selected.append( string_selected ) - if page == 1: - if position > 1: - position = position - 1 - else: - if position > ( 1 + ( max_row * ( page - 1 ) ) ): - position = position - 1 - else: - box.erase() - box.border(0) - page = page - 1 - position = max_row + ( max_row * ( page - 1 ) ) - - if x == curses.KEY_PPAGE: - box.erase() - box.border( 0 ) - if page > 1: - page = page - 1 - position = 1 + ( max_row * ( page - 1 ) ) - - if x == curses.KEY_NPAGE: - box.erase() - box.border( 0 ) - #screen.refresh() - if page < pages: - page = page + 1 - position = ( 1 + ( max_row * ( page - 1 ) ) ) - - if x in ( curses.KEY_MOUSE, ord('s') ): - mouse_id, mouse_x, mouse_y, mouse_z, button_state = (0,0,0,0,0) - index_prior_selected = 0 - if x == curses.KEY_MOUSE: - mouse_id, mouse_x, mouse_y, mouse_z, button_state = curses.getmouse() - box.erase() - box.border( 0 ) - pos = mouse_y-5 - if page == 1: - position = pos - else: - position = max_row*(page-1)+pos - - if x == ord('s') or (button_state & curses.BUTTON1_DOUBLE_CLICKED): - string_ctr_selected = strings[ position - 1 ] - if range_selected: - range_selected = False - string_ctr_selected = '' - if string_ctr_selected != string_ctr_selected_prior: - index_prior_selected = strings.index(string_ctr_selected_prior) - if position < index_prior_selected: - first = position-1 - last = index_prior_selected+1 - else: - first = index_prior_selected - last = position - for i in range( first, last ): - if strings[i] in strings_selected: - strings_selected.remove(strings[i]) - else: - strings_selected.append( strings[i] ) - string_ctr_selected_prior = '' - else: - range_selected = True - string_ctr_selected_prior = string_ctr_selected - - if x in (curses.KEY_ENTER, 10, 13) and row_num != 0: - box.border( 0 ) - string_selected = strings[ position - 1 ] - if string_ctr_selected_prior == string_selected: - string_ctr_selected_prior = '' - range_selected = False - if string_selected in strings_selected: - try: - if len(strings_selected) > 0: - strings_selected.remove( string_selected ) - except ValueError: - pass - else: - strings_selected.append( string_selected ) - - if x == ord('U'): - for each_sting in strings: - if each_sting in strings_selected: - if len(strings_selected) > 0: - strings_selected.remove(each_sting) - - for i in range( 1 + ( max_row * ( page - 1 ) ), max_row + 1 + ( max_row * ( page - 1 ) ) ): - if row_num == 0: - box.addstr( 1, 1, "There aren't strings", highlightText ) - else: - if strings[ i - 1 ] == string_ctr_selected_prior: - string_print = '* 
'+strings[ i - 1 ]+' ' - else: - string_print = ' '+strings[ i - 1 ]+' ' - - start_pos = i - ( max_row * ( page - 1 ) ) + 1 - if ( i + ( max_row * ( page - 1 ) ) == position + ( max_row * ( page - 1 ) ) ): - box.addstr( start_pos, 2, string_print, highlightText ) - else: - box.addstr( start_pos, 2, string_print, normalText ) - if strings[ i - 1 ] in strings_selected: - box.addstr( start_pos, 2, string_print[:1] ) - box.addstr( start_pos, 4, string_print[2:-1], highlightSelectedText | curses.A_DIM ) - if i == row_num: - break - - box.addstr( max_row+3 , 2, 'Select with or' ) - box.addstr( max_row+4 , 2, ' + ' ) - box.addstr( 0 , 7, ' when done', curses.A_BOLD ) - box.refresh() - x = screen.getch() - - return strings_selected - - -def get_rocoto_check(params, queue_check): - from produtil.run import run,runstr, batchexe, exe - workflow_file, database_file, task, cycle, process = params - cmd=batchexe('rocotocheck')['-v',10,'-w',workflow_file,'-d',database_file,'-c',cycle,'-t',task] - check=runstr(cmd) - if check is None: - curses.endwin() - print 'rcotocheck falied: %d'%stat - sys.exit(-1) - queue_check.put(check) - -def rocoto_boot(params): - from produtil.run import run,runstr, batchexe, exe - workflow_file, database_file, cycle, metatask_list, task_list = params - run( exe('yes') | exe('head')['-1'] > '.yes.txt') - if len(task_list) == 0 and len(metatask_list) != 0: - cmd=batchexe('rocotoboot')['--workflow', workflow_file,'--database',database_file,'--cycles',cycle,'--metatasks', metatask_list] < '.yes.txt' - elif len(task_list) != 0 and len(metatask_list) == 0: - cmd=batchexe('rocotoboot')['--workflow', workflow_file,'--database',database_file,'--cycles',cycle,'--tasks', task_list ] < '.yes.txt' - elif len(task_list) != 0 and len(metatask_list) != 0: - cmd=batchexe('rocotoboot')['--workflow', workflow_file,'--database',database_file,'--cycles',cycle,'--tasks', task_list, '--metatasks', metatask_list ] < '.yes.txt' - else: - return 'Warning: No metatasks or tasks where selected when rocotboot was called' - stat=runstr(cmd) - if stat is None: - display_results( 'rcotoboot falied!!','') - return stat - -def rocoto_rewind(params): - from produtil.run import run,runstr, batchexe - workflow_file, database_file, cycle, process = params - cmd=batchexe('rocotorewind')['-w',workflow_file,'-d',database_file,'-c',cycle,process] - stat=runstr(cmd) - if stat is None: - display_results('rcotorewind falied!!','') - return stat - -def rocoto_run(params): - from produtil.run import run,runstr, batchexe - workflow_file, database_file = params - cmd=batchexe('rocotorun')['-w',workflow_file,'-d',database_file] - stat=runstr(cmd ) - stat = '' - if stat is None: - curses.endwin() - print 'rcotorun falied: %d'%stat - sys.exit(-1) - return stat - -def get_tasklist(workflow_file): - import produtil.run, produtil.numerics - tasks_ordered = [] - metatask_list = collections.defaultdict(list) - tree = ET.parse(workflow_file) - root = tree.getroot() - cycledef_group_cycles = collections.defaultdict(list) - if list_tasks: - curses.endwin() - print - cycle_noname = 'default_cycle' - for child in root: - if child.tag == 'cycledef': - if len(child.attrib) != 0: - cycle_def_name = child.attrib['group'] - else: - cycle_def_name = cycle_noname - cycle_string = child.text.split() - - ucgs_is_cron = None - if PACKAGE.lower() == 'ugcs': - start_cycle = produtil.numerics.to_datetime ( entity_values['SDATE'] ) - end_cycle = produtil.numerics.to_datetime ( entity_values['EDATE'] ) - #inc_cycle = produtil.numerics.to_timedelta( 
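The rocotocheck/rocotoboot/rocotorewind wrappers above rely on produtil's batchexe/runstr helpers. As a rough equivalent, here is a hedged sketch that issues the same rocotocheck command line as get_rocoto_check() but uses only the standard library's subprocess module; the function name and error handling are illustrative, not the script's own.

```
# Sketch: invoke rocotocheck with subprocess instead of produtil.run.
import subprocess

def run_rocotocheck(workflow_file, database_file, cycle, task):
    cmd = ["rocotocheck", "-v", "10",
           "-w", workflow_file, "-d", database_file,
           "-c", cycle, "-t", task]
    result = subprocess.run(cmd, check=True, capture_output=True, text=True)
    return result.stdout
```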
entity_values['INC_MONTHS'] ) - #NOTE: this is for the special case when cycle for every month - inc_cycle = int(entity_values['INC_MONTHS']) - if inc_cycle == 0: - inc_cycle = produtil.numerics.to_timedelta( cycle_string[2] ) - ucgs_is_cron = False - else: - ucgs_is_cron = True - only_once_ugcs = True - else: - start_cycle = produtil.numerics.to_datetime ( cycle_string[0] ) - end_cycle = produtil.numerics.to_datetime ( cycle_string[1] ) - inc_cycle = produtil.numerics.to_timedelta( cycle_string[2] ) - - while start_cycle <= end_cycle: - cycledef_group_cycles[cycle_def_name].append(start_cycle.strftime("%Y%m%d%H%M")) - if PACKAGE.lower() == 'ugcs' and ucgs_is_cron: - try: - start_cycle = start_cycle + relativedelta(months=+inc_cycle) - except AttributeError: - curses.endwin() - print;print - print 'dateutil which uses relativedelta to increment monthly (used by UGCS) is not supported with this version of python.\nUse Anaconda the native version in /user/bin' - sys.exit(-1) - else: - start_cycle = start_cycle + inc_cycle - #if list_tasks: - #print 'cycledef=%s number of cycles %s inc: %s'%(cycle_def_name, len(cycledef_group_cycles[cycle_def_name]),inc_cycle) - #print 'contails cycles',cycledef_group_cycles[cycle_def_name] - if child.tag == 'task': - task_name = child.attrib['name'] - log_file = child.find('join').find('cyclestr').text.replace( '@Y@m@d@H','CYCLE' ) - #if len(log_file) != 0: - # print 'LOG: %s %s'%( task_name, log_file ) - if 'cycledefs' in child.attrib: - task_cycledefs = child.attrib['cycledefs'] - #if list_tasks: - # print 'task_cycledefs:',task_cycledefs - else: - task_cycledefs = cycle_noname - if list_tasks: - print task_name,task_cycledefs - #dependancies = child.getiterator('dependency') - #for dependency in dependancies: - # for them in dependency.getchildren(): - # print them.attrib - tasks_ordered.append((task_name,task_cycledefs,log_file)) - elif child.tag == 'metatask': - all_metatasks_iterator = child.getiterator('metatask') - all_vars = dict() ; all_tasks = [] - for i,metatasks in enumerate(all_metatasks_iterator): - metatask_name = 'NO_NAME' - try: - metatask_name = metatasks.attrib['name'] - except: - pass - if list_tasks: - print ' '*i+'metatask:',metatask_name - all_vars_list = metatasks.findall('var') - all_tasks_list = metatasks.findall('task') - for var in all_vars_list: - var_list_values = var.text.split() - #print ' '+' '*i+'(%d) var name:'%i,var.attrib['name'],var_list_values - all_vars[var.attrib['name']] = var_list_values - for task in all_tasks_list: - task_name = task.attrib['name'] - task_log = task.find('join').find('cyclestr').text.replace( '@Y@m@d@H','CYCLE' ) - #if len(task_log) != 0: - # print 'LOG: %s %s'%( task_name, task_log) - #print ' '+' '*i+'(%d) task name:'%i,task.attrib['name'] - if 'cycledefs' in task.attrib: - task_cycledefs = task.attrib['cycledefs'] - #if list_tasks: - # print 'task_cycledefs (meta):',task_cycledefs - else: - task_cycledefs = cycle_noname - all_tasks.append((task_name,task_cycledefs,task_log)) - add_task = [] - for task_name in all_tasks: - first_task_resolved = False - first_task_resolved_name = '' - add_task[:] = [] - add_task.append(task_name) - for name,vars in all_vars.iteritems(): - replace_var = '#'+name+'#' - #print 'TASK_NAME: %s | %s'%(task_name,replace_var) - for each_task_name in add_task: - #for each_task_name in all_tasks: - if replace_var in each_task_name[0]: - for var in vars: - new_task_name = each_task_name[0].replace(replace_var, var) - new_task_log = each_task_name[2].replace(replace_var, 
var) - add_task.append((new_task_name,each_task_name[1],new_task_log)) - for task in add_task: - if '#' not in task[0]: - if task[0] not in [ j[0] for j in tasks_ordered]: - tasks_ordered.append(task) - if not first_task_resolved: - first_task_resolved = True - first_task_resolved_name = task[0] - if metatask_name == 'NO_NAME': - metatask_list[task[0]].append(task[0]) - else: - metatask_list[task[0]].append(metatask_name) - metatask_list[task[0]].append(task[0]) - else: - metatask_list[first_task_resolved_name].append(task[0]) - if list_tasks: - print ' '+' '*i+task[0],task[1],'LOG:',task[2] - - # Default expantion of metatasks True = collapsed - #for metatask,metatasks in metatask_list.iteritems(): - # metatask_list[metatask].append(True) - - return tasks_ordered,metatask_list,cycledef_group_cycles - -def get_rocoto_stat(params, queue_stat): - workflow_file, database_file, tasks_ordered, metatask_list, cycledef_group_cycles = params - - global temp_workflow_file - global database_file_agmented - if len(tasks_ordered) == 0 or len(metatask_list) == 0 or len(cycledef_group_cycles) == 0 or list_tasks: - tasks_ordered, metatask_list,cycledef_group_cycles = get_tasklist(temp_workflow_file) - - if use_performance_metrics: - aug_perf = get_aug_perf_values(get_user) - else: - aug_perf = None - - info=collections.defaultdict(list) - cycles=set() - - connection=sqlite3.connect(database_file) - c=connection.cursor() - - if use_performance_metrics: - q=c.execute("DROP TABLE IF EXISTS jobs_augment_tmp;") - sql_create_augment_table = "CREATE TABLE jobs_augment_tmp AS SELECT * FROM jobs;" - q=c.execute(sql_create_augment_table) - q=c.execute("alter table jobs_augment_tmp add column qtime integer;") - q=c.execute("alter table jobs_augment_tmp add column cputime integer;") - q=c.execute("alter table jobs_augment_tmp add column runtime integer;") - q=c.execute("alter table jobs_augment_tmp add column slots integer;") - - sq_command = '' - column_updates = ('qtime','cputime','runtime','slots') - sqlite_merge_command = "%s=(SELECT jobs_augment.%s FROM jobs_augment WHERE jobs_augment.id=jobs_augment_tmp.id)" - for column in column_updates: - sq_command += sqlite_merge_command%(column,column)+',' - sq_command=';'.join(sq_command.rsplit(',', 1)) - sq_command = 'UPDATE jobs_augment_tmp SET '+sq_command - q=c.execute(sq_command) - - sq_command = 'UPDATE jobs_augment_tmp SET ' - sqlite_update_command = "%s = '%s' WHERE jobs_augment_tmp.jobid = %s" - #debug.write('WRITING TO DATABASE'+'\n') - for perf_jobid,perf_values in aug_perf.iteritems(): - for name,each_value in perf_values.iteritems(): - q=c.execute(sq_command+sqlite_update_command%(name,each_value,perf_jobid)) - #debug.write('SQL: '+sq_command+sqlite_update_command%(name,each_value,perf_jobid+'\n')) - - qinfo=c.execute("DROP TABLE IF EXISTS jobs_augment;") - qinfo=c.execute("ALTER TABLE jobs_augment_tmp RENAME TO jobs_augment;") - - cycledifitions = [] - q=c.execute('SELECT id, groupname, cycledef FROM cycledef') - for row in q: - (theid, groupname, cycledef) = row - cycledifitions.append( (theid, groupname, cycledef) ) - - cycle_done_stat = dict() - q=c.execute('SELECT id,cycle,done FROM cycles') - for row in q: - (theid,cycle,done)=row - cycles.add(cycle) - cycle_done_stat[cycle]=done - - if use_performance_metrics: - q=c.execute('SELECT id,jobid,taskname,cycle,state,exit_status,duration,tries,qtime,cputime,runtime,slots FROM jobs_augment') - else: - q=c.execute('SELECT id,jobid,taskname,cycle,state,exit_status,duration,tries FROM jobs') - - q_get = [] - 
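The metatask handling above boils down to substituting every #var# placeholder in a task name (and its log template) with each value of the corresponding <var> list, then keeping only fully resolved names. A condensed, hedged sketch of that expansion step (the standalone helper and example values are illustrative):

```
# Sketch of the "#var#" expansion performed while walking <metatask> elements.
def expand_metatask(name_template, variables):
    """variables maps var name -> list of values, e.g. {"mem": ["001", "002"]}."""
    names = [name_template]
    for var, values in variables.items():
        token = "#%s#" % var
        expanded = []
        for name in names:
            if token in name:
                expanded.extend(name.replace(token, value) for value in values)
            else:
                expanded.append(name)
        names = expanded
    # Anything still containing '#' had no matching <var> and is dropped,
    # mirroring the "if '#' not in task[0]" test above.
    return [name for name in names if "#" not in name]

# e.g. expand_metatask("post_mem#mem#_f#fhr#", {"mem": ["001"], "fhr": ["006", "012"]})
# -> ['post_mem001_f006', 'post_mem001_f012']
```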
entered_jobids = [] - last_task_index = 0 - for row in q: - row = tuple('-' if x is None else x for x in row) - if use_performance_metrics: - (theid, jobid,taskname,cycle,state,exit_status,duration,tries,qtime,cputime,runtime,slots)=row - else: - (theid, jobid,taskname,cycle,state,exit_status,duration,tries,)=row - if jobid in entered_jobids: - continue - else: - if taskname in tasks_ordered: - task_index = [x[0] for x in task_ordered].index(taskname) - #task_index = tasks_ordered.index(taskname) - last_task_index = task_index - else: - task_index = last_task_index - - if use_performance_metrics: - q_get.append( (theid,jobid,task_index,taskname,cycle,state,exit_status,duration,tries,qtime,cputime,runtime,slots) ) - else: - q_get.append( (theid,jobid,task_index,taskname,cycle,state,exit_status,duration,tries) ) - entered_jobids.append(jobid) - - q_get.sort( key=lambda x: x[2] ) - - connection.commit() - c.close() - - for row in q_get: - if use_performance_metrics: - (theid,jobid,task_order,taskname,cycle,state,exit_status,duration,tries,qtime,cputime,runtime,slots)=row - else: - (theid,jobid,task_order,taskname,cycle,state,exit_status,duration,tries)=row - if jobid != '-': - if use_performance_metrics: - line = '%s %s %s %s %s %s %s %s %s %s %s'%(datetime.datetime.fromtimestamp(cycle).strftime('%Y%m%d%H%M'),taskname,str(jobid),str(state),str(exit_status),str(tries),str(duration).split('.')[0],str(slots),str(qtime),str(cputime).split('.')[0],str(runtime)) - else: - line = '%s %s %s %s %s %s %s'%(datetime.datetime.fromtimestamp(cycle).strftime('%Y%m%d%H%M'),taskname,str(jobid),str(state),str(exit_status),str(tries),str(duration).split('.')[0]) - #debug.write('LINE: '+line+'\n') - info[cycle].append(line) - - for every_cycle in cycles: - if len(info[every_cycle]) == 0: - info[every_cycle].append('place holder') - - new_info=collections.defaultdict(list) - job_ids = [] - job_id = '' - for each_cycle,lines_in_cycle in info.iteritems(): - for task in tasks_ordered: - skip_task = False - for each_line in lines_in_cycle: - if task[0] == each_line.split()[1]: - #if task[0]+' ' in each_line: - job_id = each_line.split()[2] - if job_id in job_ids: - break - cycle_string = datetime.datetime.fromtimestamp(each_cycle).strftime('%Y%m%d%H%M') - #print 'TESTB:', len(task), task[0],task[1] - cycledefs = task[1].split(',') - if len(cycledefs) > 1: - #print 'Checking if %s for %s is in a gfs cycle:'%(task[0],cycle_string) - for each_cycledef in cycledefs: - #print 'group:', each_cycledef, cycledef_group_cycles[each_cycledef] - if cycle_string in cycledef_group_cycles[each_cycledef]: - #print 'Found:', task[0],'with cycle',cycle_string - new_info[each_cycle].append(each_line) - job_ids.append(job_id) - skip_task = True - break - elif cycle_string in cycledef_group_cycles[task[1]]: - new_info[each_cycle].append(each_line) - job_ids.append(job_id) - skip_task = True - break - if skip_task: - continue - line = datetime.datetime.fromtimestamp(each_cycle).strftime('%Y%m%d%H%M')+' '*7+task[0]+' - - - - -' - cycle_string = datetime.datetime.fromtimestamp(each_cycle).strftime('%Y%m%d%H%M') - cycledefs = task[1].split(',') - if len(cycledefs) > 1: - for each_cycledef in cycledefs: - if cycle_string in cycledef_group_cycles[each_cycledef]: - new_info[each_cycle].append(line) - skip_task = True - break - elif cycle_string in cycledef_group_cycles[task[1]]: - new_info[each_cycle].append(line) - skip_task = True - if skip_task: - continue - - rocoto_stat = [] - for cycle in sorted(cycles): - if len(new_info[cycle]) != 0: - 
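get_rocoto_stat() above reads job state directly from the Rocoto database. A minimal, hedged sketch of that query path with sqlite3 is shown below; the column names come from the SELECT statements above, cycle values are POSIX timestamps (the script forces TZ=UTC), and the generator name and formatting are illustrative rather than the script's own.

```
# Sketch: read per-cycle job status rows straight from a Rocoto database.
import sqlite3
from datetime import datetime, timezone

def iter_job_status(database_file):
    con = sqlite3.connect(database_file)
    try:
        rows = con.execute(
            "SELECT cycle, taskname, jobid, state, exit_status, tries, duration FROM jobs")
        for cycle, taskname, jobid, state, exit_status, tries, duration in rows:
            stamp = datetime.fromtimestamp(cycle, tz=timezone.utc).strftime("%Y%m%d%H%M")
            yield stamp, taskname, jobid, state, exit_status, tries, duration
    finally:
        con.close()
```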
rocoto_stat.append(new_info[cycle]) - - if save_checkfile_path is not None: - stat_update_time = str(datetime.datetime.now()).rsplit(':',1)[0] - with open(save_checkfile_path, 'w') as savefile: - rocoto_data_and_time = (rocoto_stat, tasks_ordered, metatask_list,cycledef_group_cycles, stat_update_time) - cPickle.dump(rocoto_data_and_time, savefile) - if only_check_point: - sys.exit(0) - - if use_multiprocessing: - queue_stat.put((rocoto_stat, tasks_ordered, metatask_list, cycledef_group_cycles)) - else: - return (rocoto_stat, tasks_ordered, metatask_list, cycledef_group_cycles) - - -def display_results(results,screen,params): - from produtil.fileop import check_file - results_lines = results.split('\n') - num_lines,num_columns = (len(results_lines)+3,len(max(results_lines, key=len))+1) - pad_pos = 0 - force_load_stat = False - global mlines - global mcols - while True: - screen.clear() - screen.refresh() - results_pad = curses.newpad(num_lines,num_columns) - for results_line in results_lines: - results_pad.addstr(results_line+'\n') - results_pad.refresh( pad_pos, 0, 0,0, mlines-3,mcols-1) - extra_1 = extra_2 = '' - if pad_pos < num_lines-mlines-2 or pad_pos > 0: - extra_1 = '/ Scroll' - if len(params) != 0: - extra_2 = 'ave results to a file' - screen.addstr(mlines-1,0,' Return %s %s'%(extra_1,extra_2),curses.A_BOLD) - event = screen.getch() - if event == curses.KEY_RESIZE: - screen.refresh() - elif event in ( curses.KEY_PPAGE, ord('u') ): - if pad_pos < num_lines-mlines-2: - pad_pos += 1 - elif event in ( curses.KEY_NPAGE, ord('d') ): - if pad_pos != 0: - pad_pos -= 1 - elif event == curses.KEY_ENTER or event == 10: - screen.clear() - break - elif event == ord('s'): - strg = [] - strg.append(PSLOT) - for i in range(2,5): - try: - if ' ' not in basename(params[i]): - strg.append(basename(params[i]).split('.')[0]) - except: - pass - if len(strg) == 0: - strg = 'rocotoviewer_outout_file' - save_results_file = '_'.join(strg)+'.txt' - inc_int = 0 - while check_file(save_results_file): - if '(%d)'%inc_int in save_results_file: - save_results_file = save_results_file.replace('(%d)'%inc_int,'(%d)'%(inc_int+1)) - inc_int += 1 - else: - save_results_file = basename(save_results_file.split('.')[0])+'(%d)'%inc_int+'.txt' - out_file = open(save_results_file,'w') - out_file.write(results) - out_file.close() - screen.addstr(mlines-1,0,'Saved file %s'%save_results_file+' '*10) - screen.refresh() - std_time.sleep(0.5) - - return - -def main(screen): - - global mlines - global mcols - global default_column_length - global use_multiprocessing - global highlightText - global highlightSelectedText - global normalText - global PSLOT - global PACKAGE - global entity_values - - event = 10 - - if not sys.stdin.isatty(): - if screen != 'dummy': - print 'There seems to be a problem with the curses init' - sys.exit(-1) - else: - mlines = 100 - else: - mlines, mcols = screen.getmaxyx() - - #global debug - #PWD = os.getcwd() - #debug = open(PWD+'/debug.log','a',0) - - (workflow_file,database_file) = get_arguments() - - if not load_produtil_pythonpath(): - curses.endwin() - print '\n\nCRITICAL ERROR: The produtil package could not be loaded from your system' - sys.exit(-1) - - if html_output: - if sys.stdin.isatty(): - curses.endwin() - print '\nPreparing to write out an html folder' - use_multiprocessing = False - - import produtil.run, produtil.numerics - from produtil.run import run,runstr, batchexe - from produtil.fileop import check_file, makedirs, deliver_file, remove_file, make_symlinks_in - from produtil.prog 
import shbackslash - - header_string = ' CYCLE TASK JOBID STATE EXIT TRIES DURATION' - header_string_under = '========(updated:tttttttttttttttt)========== PSLOT: pslot ===========================' - - global use_performance_metrics - aug_perf = collections.defaultdict(dict) - if use_performance_metrics: - result = augment_SQLite3( database_file ) - aug_perf = get_aug_perf_values(get_user) - header_string += ' SLOTS QTIME CPU RUN\n' - header_string_under += '=============================\n' - header_string += header_string_under - default_column_length = 122 - else: - aug_perf = None - header_string = header_string+'\n'+header_string_under+'\n' - default_column_length = 91 - - html_output_dir = None - entity_values = get_entity_values( workflow_file ) - workflow_name = 'gfs_workflow' - if 'ROTDIR' in entity_values: - ROTDIR = entity_values['ROTDIR'] - else: - ROTDIR = 'no_rotdir' - if 'PSLOT' in entity_values: - PSLOT = entity_values['PSLOT'] - else: - PSLOT = 'no_name' - if 'PACKAGE' in entity_values: - PACKAGE = entity_values['PACKAGE'] - if PACKAGE == 'ugcs': - workflow_name = 'ugcs_workflow' - if PACKAGE == 'gfs': - workflow_name = 'gfs_workflow' - else: - PACKAGE = 'none' - if 'EXPDIR' in entity_values: - EXPDIR = entity_values['EXPDIR'] - else: - EXPDIR = '.' - - if html_output: - html_ptr = None - if not send_html_to_rzdm and len(rzdm_path) != 0: - html_output_dir = shbackslash(rzdm_path) - else: - html_output_dir = shbackslash('%s/pr%s'%(workflow_name,PSLOT)) - print 'writing html to directory:',html_output_dir - html_output_file = shbackslash( html_output_dir+'/index.html' ) - html_header_line = '\n' - if use_performance_metrics: - html_header_line = html_header_line+''+'\n' - else: - html_header_line = html_header_line+'\n' - print 'Generating html folder html: %s ...'%html_output_file - cmd = batchexe('rm') ['-Rf', html_output_dir ] - stat=runstr(cmd) - makedirs( html_output_dir ) - html_ptr = open(html_output_file,'w') - html_ptr.write(ccs_html) - break_file = False - stat_update_time = str(datetime.datetime.now()).rsplit(':',1)[0] - html_discribe_line = '\n
CYCLETASKJOBIDSTATEEXITTRIESDURATIONSLOTSQTIMECPURUN
\n\n\n'%(stat_update_time,PSLOT) - html_discribe_line += '\n\n
ExpandRefreshed: %sPSLOT: %s
ROTDIR: %sTurn Around Times
\n
\n'%(workflow_name,ROTDIR,PSLOT) - html_discribe_line += html_header_line - html_ptr.write( html_discribe_line ) - else: - curses.start_color() - curses.use_default_colors() - screen.refresh() - curses.mousemask(1) - curses.noecho() - for i in range(0, curses.COLORS): - curses.init_pair(i + 1, i,curses.COLOR_BLACK) - if i == 4: - curses.init_pair(i + 1, i,curses.COLOR_WHITE) - curses.init_pair(8, 0, -1) - - curses.mousemask(curses.ALL_MOUSE_EVENTS) - #curses.init_pair(6,curses.COLOR_BLACK, curses.COLOR_CYAN) - highlightText = curses.A_STANDOUT - highlightSelectedText = curses.color_pair(5) - normalText = curses.A_NORMAL - - cmd = batchexe('which') ['rocotorun'] - try: - which_rocoto = runstr(cmd).strip() - except Exception,e: - curses.endwin() - print '\n\nCRITICAL ERROR: rocotorun is not in your path, user "module load rocoto"' - sys.exit(0) - - os.environ['TZ']='UTC' - std_time.tzset() - - #stdout_buff = StringIO() - #stderr_buff = StringIO() - #sys.stdout = stdout_buff - #sys.stderr = stderr_buff - - HOME = os.environ['HOME'] - rocoto_temp = HOME+'/.rocoto/tmp' - makedirs( rocoto_temp ) - - global temp_workflow_file - workflow_basename = basename(workflow_file)+'.' - temp_file= tempfile.NamedTemporaryFile(prefix=workflow_basename, dir=rocoto_temp, delete=False) - temp_workflow_file = temp_file.name - old = open(workflow_file) - temp = [] - for line in old: - if '&ENV_VARS;' not in line: - temp.append(line) - - for line in temp: - temp_file.write(line) - - temp_file.close() - old.close() - - tasks_ordered = [] - metatask_list = collections.defaultdict(list) - cycledef_group_cycles = collections.defaultdict(list) - - queue_stat = Queue() - queue_check = Queue() - - if only_check_point: - curses.endwin() - sys.stdout = os.fdopen(0,'w',0) - print 'Creating check point file ...' - params = (workflow_file, database_file, tasks_ordered, metatask_list, cycledef_group_cycles ) - get_rocoto_stat( params, queue_stat ) - - stat_update_time = '' - params_check = '' - header = None - - process_get_rocoto_stat = None - process_get_rocoto_check = None - - cycle = 0 - if html_output: - mlines = 100 - mcols = 125 - if not html_output and mcols < default_column_length: - curses.endwin() - print - print 'Your terminal is only %d characters must be at least %d to display workflow status'%(mcols,default_column_length) - sys.exit(-1) - if not html_output: - screen.refresh() - rocoto_stat_params = '' - rocoto_stat_params_tmp = '' - step = 0.0 ; i = 0 - dots = ('. ','.. ','... ','.... 
','.....',' ....',' ...',' .') - dot_stat = 0 ; dot_check = 0 - current_time = time() - meta_tasklist = collections.defaultdict(list) - - if save_checkfile_path is not None and check_file(save_checkfile_path): - with open(save_checkfile_path) as savefile: - rocoto_data_and_time = cPickle.load(savefile) - rocoto_stat, tasks_ordered, metatask_list,cycledef_group_cycles, stat_update_time = rocoto_data_and_time - start_time = time() - stat_read_time_delay - 10 - header = header_string - header = header.replace('t'*16,stat_update_time) - if PACKAGE.lower() == 'ugcs': - header = header.replace(' PSLOT: pslot ','==== UGCS ====') - elif PSLOT.lower() == 'no_name': - header = header.replace(' PSLOT: pslot ','==============') - reduce_header_size = 0 - else: - header = header.replace(' PSLOT: pslot ','==== UGCS ====') - reduce_header_size = 0 - if reduce_header_size > 0: - header = header[:-reduce_header_size] - header = header[reduce_header_size:] - if list_tasks: - params = (workflow_file, database_file, tasks_ordered, metatask_list, cycledef_group_cycles ) - get_rocoto_stat( params, Queue() ) - curses.endwin() - sys.stdout = os.fdopen(0,'w',0) - sys.exit(0) - - - if save_checkfile_path is None or (save_checkfile_path is not None and not check_file(save_checkfile_path)): - params = (workflow_file, database_file, tasks_ordered, metatask_list,cycledef_group_cycles) - if use_multiprocessing: - process_get_rocoto_stat = Process( target=get_rocoto_stat, args=[params, queue_stat] ) - process_get_rocoto_stat.start() - screen.addstr(mlines-2,0,'No checkpoint file, must get rocoto stats please wait',curses.A_BOLD) - screen.addstr(mlines-1,0,'Running rocotostat ',curses.A_BOLD) - else: - (rocoto_stat, tasks_ordered, metatask_list,cycledef_group_cycles) = get_rocoto_stat( params, Queue() ) - header = header_string - stat_update_time = str(datetime.datetime.now()).rsplit(':',1)[0] - header = header.replace('t'*16,stat_update_time) - if PSLOT.lower() == 'no_name': - header = header.replace(' PSLOT: pslot ','==============') - reduce_header_size = 0 - elif PACKAGE.lower() == 'ugcs': - header = header.replace(' PSLOT: pslot ','==== UGCS ====') - reduce_header_size = 0 - else: - header = header.replace('pslot',PSLOT) - reduce_header_size = int((len(PSLOT)-len('PSLOT'))/2) - if reduce_header_size > 0: - header = header[:-reduce_header_size] - header = header[reduce_header_size:] - - while use_multiprocessing: - if mcols < default_column_length: - curses.endwin() - print - print 'Your terminal is only %d characters must be at least %d to display workflow status'%(mcols,default_column_length) - sys.exit(-1) - step += 0.001 - if step > 100: - step = 0.0 - i = (0 if i == len(dots)-1 else i+1 ) - curses.curs_set(0) - screen.addstr(mlines-1,19,dots[i],curses.A_BOLD) - screen.refresh() - try: - rocoto_stat_params = queue_stat.get_nowait() - except: - pass - if len(rocoto_stat_params) != 0: - (rocoto_stat, tasks_ordered, metatask_list,cycledef_group_cycles) = rocoto_stat_params - if use_multiprocessing: - process_get_rocoto_stat.join() - process_get_rocoto_stat.terminate() - stat_update_time = str(datetime.datetime.now()).rsplit(':',1)[0] - header = header_string - header = header.replace('t'*16,stat_update_time) - if PSLOT.lower() == 'no_name': - header = header.replace(' PSLOT: pslot ','==============') - reduce_header_size = 0 - elif PACKAGE.lower() == 'ugcs': - header = header.replace(' PSLOT: pslot ','==== UGCS ====') - reduce_header_size = 0 - else: - header = header.replace('pslot',PSLOT) - reduce_header_size = 
int((len(PSLOT)-len('PSLOT'))/2) - if reduce_header_size > 0: - header = header[:-reduce_header_size] - header = header[reduce_header_size:] - break - - start_time = time() - - num_cycle = len(rocoto_stat) - time_to_load = (time()- current_time)/60.0 - - pad_pos = 0 - update_pad = True - task = 0 ; execute_task = '' ; execute_cycle = '' - loading_stat = False - loading_check = False - find_next = 0 - check_task = '' ; check_cycle = '' - rocoto_check = '' - break_twice = False - search_string = '' - - meta_tasks = [] - metatasks_state_cycle = [] - metatasks_state_string_cycle = [] - - metatask_list_copy = collections.defaultdict(list) - metatask_name = collections.defaultdict(list) - for each_metatask in metatask_list: - metatask_name[each_metatask] = metatask_list[each_metatask][0] - del metatask_list[each_metatask][0] - - tasks_in_cycle = [] - for each_cycle in rocoto_stat: - list_of_tasks_per_cycle = [] - meta_tasks_in_cycle = [] - for each_line in each_cycle: - line_has_metatask = False - for check_metatask, check_metatask_list in metatask_list.iteritems(): - if check_metatask in each_line: - meta_tasks_in_cycle.append( (check_metatask, True, check_metatask_list ) ) - line_has_metatask = True - continue - else: - for every_meta_task in check_metatask_list: - each_element_in_line = each_line.split() - if every_meta_task != check_metatask: - for item in each_element_in_line: - if every_meta_task == item: - meta_tasks_in_cycle.append((every_meta_task, False, check_metatask) ) - line_has_metatask = True - if not line_has_metatask: - if '---' not in each_line.split()[1]: - list_of_tasks_per_cycle.append(each_line.split()[1]) - meta_tasks_in_cycle.append(('False',False,'False')) - - tasks_in_cycle.append(list_of_tasks_per_cycle) - - meta_tasks_state = dict() - meta_tasks_state_string = dict() - for check_metatask, check_metatask_list in metatask_list.iteritems(): - meta_tasks_state[check_metatask] = True - meta_tasks_state_string[check_metatask] = '' - meta_tasks_state['False'] = False - - meta_tasks.append(meta_tasks_in_cycle) - metatasks_state_cycle.append(meta_tasks_state) - metatasks_state_string_cycle.append(meta_tasks_state_string) - - update_metatask_state_status_message = True - ''' -# This lists each metatask and its elements -# for the first cycle for code edification - curses.endwin() - print - for each_metatask in meta_tasks[0]: - if each_metatask[1]: - print metatask_name[each_metatask[2][0]] - for task in each_metatask[2]: - print '',task - sys.exit(0) - ''' - - metatask_list_per_cycle = [] - metatask_list_by_name = collections.defaultdict(dict) - for each_cycle in meta_tasks: - list_of_metatasks_in_cycle = [] - for each_metatask in each_cycle: - if each_metatask[1]: - tasks_in_metatask_list = [] - for task in each_metatask[2]: - tasks_in_metatask_list.append( task ) - metatask_list_by_name[ metatask_name[each_metatask[2][0]] ] = tasks_in_metatask_list - list_of_metatasks_in_cycle.append( metatask_name[each_metatask[2][0]] ) - metatask_list_per_cycle.append(list_of_metatasks_in_cycle) - - found = False - end_found = False - found_cycle = 0 - found_end_cycle = 0 - for find_cycle in range(0,len(rocoto_stat)): - for lines in rocoto_stat[find_cycle]: - if not found and any(x in lines for x in ['RUNNING', 'QUEUED']): - found = True - found_cycle = find_cycle - if found and not any(x in lines for x in ['RUNNING', 'QUEUED']): - end_found = True - found_end_cycle = find_cycle - break - - get_number_of_stats = 0 - if found: - cycle = found_cycle - else: - get_number_of_stats = 2 - if 
len(rocoto_stat) > 2: - cycle = len(rocoto_stat) - 2 - else: cycle = 0 - - if html_output: - if cycle > 2: - cycle -= 2 - html_start_cycle = cycle - - html_output_firstpass = True - #debug.write('num cycles: %s\n'%str(len(rocoto_stat))) - while True: - num_columns = default_column_length - mlines = 90; mcols = 125 - if header is None: - header = ' ' - if update_pad is True: - #debug.write('cycle: %s\n'%str(cycle)) - num_lines = len(rocoto_stat[cycle]) - #debug.write('len rocoto_stat[cycle]: %s\n'%str(num_lines)) - line_correction = 0 - for count_meta_tasks in meta_tasks[cycle]: - if count_meta_tasks[1] and metatasks_state_cycle[cycle][ count_meta_tasks[0] ]: - line_correction += len(count_meta_tasks[2]) - 1 - num_lines -= line_correction - update_pad = False - line_number = -1 - colapsed_metatask = False - for line_num,line in enumerate(rocoto_stat[cycle]): - columns = line.split() - count_columns = line.split(' ') - spaces = [] - for c,sub_group in groupby(count_columns): - if c != '': continue - spaces.append(' '*len(list(sub_group))) - spaces.append('') - text_color = {'SUCCEEDED':3,'QUEUED':4,'DEAD':2,'FAILED':2,'RUNNING':6} - skip_task = False - - if not meta_tasks[cycle][line_num][1] and metatasks_state_cycle[cycle][ meta_tasks[cycle][line_num][2] ] : - skip_task = True - else: - line_number +=1 - html_line = '' - if use_performance_metrics and len(columns) == 7: - for i in range(0,4): - columns.append('-') - for i,column in enumerate(columns): - if skip_task: continue - if not use_performance_metrics and i > 7: continue - execute_cycle = columns[0] - if i == 0: - if meta_tasks[cycle][line_num][1]: - if metatasks_state_cycle[cycle][columns[1]]: - colapsed_metatask = True - if update_metatask_state_status_message or len(metatasks_state_string_cycle[cycle][ columns[1] ])==0: - get_state_list = [] - total_numer_of_tasks = len(meta_tasks[cycle][line_num][2]) - for check_metatask_line in rocoto_stat[cycle]: - split_check_metatask_line = check_metatask_line.split() - for each_metatask in meta_tasks[cycle][line_num][2]: - if each_metatask == split_check_metatask_line[1]: - get_state_list.append(split_check_metatask_line[3]) - metatask_state = columns[3] - if 'SUCCEEDED' in get_state_list: - metatask_state = '(%d/%d) SUCCEEDED'%(get_state_list.count('SUCCEEDED'),total_numer_of_tasks) - if 'QUEUED' in get_state_list: - metatask_state = '(%d/%d) QUEUED'%(get_state_list.count('QUEUED'),total_numer_of_tasks) - if 'RUNNING' in get_state_list: - metatask_state = '(%d/%d) RUNNING'%(get_state_list.count('RUNNING'),total_numer_of_tasks) - if 'DEAD' in get_state_list: - metatask_state = '(%d/%d) DEAD'%(get_state_list.count('DEAD'),total_numer_of_tasks) - metatasks_state_string_cycle[cycle][ columns[1] ] = metatask_state - html_line += ''+column+'' - elif i == 1: - save_column = column - if colapsed_metatask: - colapsed_metatask = False - column = metatask_name[column] - display_column = (column if len(column) < 40 else column[:40]) - if line_number == task: - execute_task = save_column - if html_output: - log_file = '' - for find_task in tasks_ordered: - if find_task[0] == column: - log_file = find_task[2].replace('CYCLE', execute_cycle[:-2] ) - if check_file(shbackslash( log_file )): - deliver_file( log_file, html_output_dir ) - log_file_base = os.path.basename(log_file) - html_line += ''%log_file_base+display_column+'' - else: - html_line += ''+display_column+'' - elif i == 2: - if len(column) > 7: - column = column[:7] - html_line += ''+column+'' - elif i == 3: - if meta_tasks[cycle][line_num][1] 
and len(metatasks_state_string_cycle[cycle][ columns[1] ].split())!=1 and metatasks_state_cycle[cycle][columns[1]]: - column = metatasks_state_string_cycle[cycle][ columns[1] ] - if len(column)>15: - if column.split()[1] == 'SUCCEEDED': - html_line += ''+column[:15]+'' - elif column.split()[1] == 'QUEUED': - html_line += ''+column[:15]+'' - elif column.split()[1] in('DEAD','FAILED'): - html_line += ''+column[:15]+'' - elif column.split()[1] == 'RUNNING': - html_line += ''+column[:15]+'' - else: - html_line += ''+column[:15]+'' - else: - if column.split()[1] == 'SUCCEEDED': - html_line += ''+column+'' - elif column.split()[1] == 'QUEUED': - html_line += ''+column+'' - elif column.split()[1] in('DEAD','FAILED'): - html_line += ''+column+'' - elif column.split()[1] == 'RUNNING': - html_line += ''+column+'' - else: - html_line += ''+column+'' - elif column in text_color: - if column == 'SUCCEEDED': - html_line += ''+column+'' - elif column == 'QUEUED': - html_line += ''+column+'' - elif column in('DEAD','FAILED'): - html_line += ''+column+'' - elif column == 'RUNNING': - html_line += ''+column+'' - else: - html_line += ''+column+'' - else: - html_line += ''+column+'' - else: - if len(column)<6: - html_line += ''+column+'' - else: - html_line += ''+column+'' - if not skip_task: - html_line += '\n' - html_ptr.write(html_line) - - update_metatask_state_status_message = False - - found_still_running = False - cycle += 1 - update_pad = True - for find_cycle in range(cycle,len(rocoto_stat)): - for lines in rocoto_stat[find_cycle]: - if 'RUNNING' in lines: - found_still_running = True - break - break - if get_number_of_stats >= 0: - found_still_running = True - if cycle < len(rocoto_stat) or found_still_running: - html_line = '\n' - html_line += '\n
\n\n' - html_line += html_header_line - html_ptr.write(html_line) - get_number_of_stats -= 1 - else: - html_line = '\n' - html_line += '\n' - html_line += '\n' - html_ptr.write(html_line) - html_ptr.close() - if html_output_firstpass: - for meta_cycle in range(0,len(rocoto_stat)): - for execute_task in metatasks_state_cycle[meta_cycle]: - metatasks_state_cycle[meta_cycle][execute_task] = False - html_output_file = shbackslash( html_output_dir+'/index_exp.html' ) - html_ptr = open(html_output_file,'w') - html_ptr.write(ccs_html) - stat_update_time = str(datetime.datetime.now()).rsplit(':',1)[0] - html_discribe_line = '\n\n\n\n'%(stat_update_time,PSLOT) - html_discribe_line += '\n\n
CollapseRefreshed: %sPSLOT: %s
ROTDIR: %sTurn Around Times
\n
\n'%(workflow_name,ROTDIR,PSLOT) - html_discribe_line += html_header_line - html_ptr.write( html_discribe_line ) - html_output_firstpass = False - #cycle = html_start_cycle - if not html_output_firstpass: - if send_html_to_rzdm: - print 'sending html files to rzdm using rsync ...' - cmd=batchexe('rsync')['-avzr','--delete', html_output_dir, rzdm_path] - stat=runstr(cmd) - if stat is None: - print 'warning rsync to %s failed'%html_output_dir - sys.exit(-1) - else: - print 'done' - sys.exit(0) - else: - - # Main Curses Screen Loop - # Write to curses screen when HTML is not outputted - highlight_CYCLE = False - highlight_WORKFLOW = False - get_execute_task_track = False - screen.clear() - global screen_resized - selected_tasks = collections.defaultdict(list) - selected_meta_tasks = collections.defaultdict(list) - execute_metatask = None - colapsed_metatask = None - task = 0 - while True: - if not check_file(workflow_file) or not check_file(database_file): - curses.endwin() - print;print - print 'rocoto_viwer quit because the Rocoto database or XML file used by this session when missing' - sys.exit(-1) - job_id = None - curses.noecho() - num_columns = default_column_length - if header is None: - header = ' ' - if highlight_WORKFLOW: - header_split = header.split('\n') - screen.addstr(0,0,header_split[0]+'\n') - screen.addstr(header_split[1],curses.A_STANDOUT) - else: - screen.addstr(0,0,header) - if update_pad is True: - num_lines = len(rocoto_stat[cycle]) - line_correction = 0 - for count_meta_tasks in meta_tasks[cycle]: - if count_meta_tasks[1] and metatasks_state_cycle[cycle][ count_meta_tasks[0] ]: - line_correction += len(count_meta_tasks[2]) - 1 - num_lines -= line_correction - update_pad = False - if mlines > num_lines: - pad = curses.newpad(mlines ,num_columns) - else: - pad = curses.newpad(num_lines+1 ,num_columns) - line_number = -1 - for line_num,line in enumerate(rocoto_stat[cycle]): - #debug.write('DISPLAY LINE: '+line+'\n') - colapsed_metatask = False - columns = line.split() - count_columns = line.split(' ') - spaces = [] - for c,sub_group in groupby(count_columns): - if c != '': continue - spaces.append(' '*len(list(sub_group))) - spaces.append('') - text_color = {'SUCCEEDED':3,'QUEUED':4,'DEAD':2,'FAILED':2,'RUNNING':6} - skip_task = False - - if not meta_tasks[cycle][line_num][1] and metatasks_state_cycle[cycle][ meta_tasks[cycle][line_num][2] ] : - skip_task = True - else: - line_number +=1 - if use_performance_metrics and len(columns) == 7: - for i in range(0,4): - columns.append('-') - for i,column in enumerate(columns): - if skip_task: continue - if not use_performance_metrics and i > 7: continue - execute_cycle = columns[0] - if i == 0: - if meta_tasks[cycle][line_num][1]: - if metatasks_state_cycle[cycle][columns[1]]: - if highlight_CYCLE: - pad.addstr(column, curses.A_STANDOUT) - else: - pad.addstr(column) - pad.addstr(' < ') - colapsed_metatask = True - if update_metatask_state_status_message or len(metatasks_state_string_cycle[cycle][ columns[1] ])==0: - get_state_list = [] - total_numer_of_tasks = len(meta_tasks[cycle][line_num][2]) - for check_metatask_line in rocoto_stat[cycle]: - split_check_metatask_line = check_metatask_line.split() - for each_metatask in meta_tasks[cycle][line_num][2]: - if each_metatask == split_check_metatask_line[1]: - get_state_list.append(split_check_metatask_line[3]) - red_override = False - metatask_state = columns[3] - if 'SUCCEEDED' in get_state_list: - metatask_state = '(%d/%d) 
SUCCEEDED'%(get_state_list.count('SUCCEEDED'),total_numer_of_tasks) - if 'QUEUED' in get_state_list: - metatask_state = '(%d/%d) QUEUED'%(get_state_list.count('QUEUED'),total_numer_of_tasks) - if 'RUNNING' in get_state_list: - metatask_state = '(%d/%d) RUNNING'%(get_state_list.count('RUNNING'),total_numer_of_tasks) - if 'FAILED' in get_state_list: - metatask_state = '(%d/%d) FAILED'%(get_state_list.count('FAILED'),total_numer_of_tasks) - red_override = True - if 'DEAD' in get_state_list: - red_override = True - metatask_state = '(%d/%d) DEAD'%(get_state_list.count('DEAD'),total_numer_of_tasks) - metatasks_state_string_cycle[cycle][ columns[1] ] = metatask_state - else: - if highlight_CYCLE: - pad.addstr(column, curses.A_STANDOUT) - else: - pad.addstr(column) - pad.addstr(' > ') - else: - if highlight_CYCLE: - pad.addstr(column,curses.A_STANDOUT) - pad.addstr(' ') - else: - pad.addstr(column+' ') - elif i == 1: - save_column = column - if colapsed_metatask: - column = metatask_name[column] - display_column = (column if len(column) < 19 else column[:19]) - if line_number == task and not highlight_CYCLE and not highlight_WORKFLOW : - pad.addstr(display_column,curses.A_STANDOUT) - execute_task_track = save_column - if colapsed_metatask: - execute_metatask_check = True - execute_metatask = column - metatask_list_of_selected_metatask = meta_tasks[cycle][line_num][2] - else: - execute_metatask_check = False - execute_metatask = None - metatask_list_of_selected_metatask = None - execute_task = column - else: - #if column in metatask_list_by_name[metatask_name[column]]: - # display_column = ' '+display_column - if column in selected_tasks[execute_cycle]: - pad.addstr(display_column, highlightSelectedText ) - elif column in selected_meta_tasks[execute_cycle]: - pad.addstr(display_column, highlightSelectedText ) - else: - pad.addstr(display_column) - pad.addstr(' '*(21-len(display_column))) - elif i == 2: - job_id = column.strip() - if len(job_id) > 9: - job_id = job_id[:9] - if job_id == '-': - pad.addstr(job_id+' '*9) - else: - pad.addstr(job_id+' '*(10-len(job_id))) - elif i == 3: - if meta_tasks[cycle][line_num][1] and len(metatasks_state_string_cycle[cycle][ columns[1] ].split())!=1 and metatasks_state_cycle[cycle][columns[1]]: - column = metatasks_state_string_cycle[cycle][ columns[1] ] - if red_override: - the_text_color = 2 - else: - the_text_color = text_color[column.split()[1]] - - if len(column) >= 15: - pad.addstr( column[:15],curses.color_pair(the_text_color)|curses.A_STANDOUT) - column = column[:15] - else: - pad.addstr( column,curses.color_pair(the_text_color)|curses.A_STANDOUT) - elif column in text_color: - pad.addstr(column, curses.color_pair(text_color[column])|curses.A_STANDOUT) - else: - pad.addstr(column) - pad.addstr(' '*(16-len(column)),curses.color_pair(8)) - elif i in (4,5,6,7,8,9,10): - if len(column) < 8: - pad.addstr(column+' '*(8-len(column))) - else: - pad.addstr(column.strip()+' ') - - if not skip_task: - pad.addstr('\n') - - update_metatask_state_status_message = False - pad.refresh( pad_pos, 0, 2,0, mlines-4,mcols) - - entire_workflow = 'Hit to open cycle based information page (implementation pending)' - entire_cycle = '********* The ENTIRE CYCLE has been selected for an action **********' - - try: - if highlight_WORKFLOW: - screen.addstr(mlines-2,0,entire_workflow,curses.A_BOLD) - else: - screen.addstr(mlines-2,0,' '*len(entire_workflow)) - if highlight_CYCLE: - screen.addstr(mlines-2,0,entire_cycle,curses.A_BOLD) - elif not highlight_WORKFLOW: - 
screen.addstr(mlines-2,0,' '*len(entire_cycle)) - if pad_pos < num_lines-mlines+4 or pad_pos > 0: - screen.addstr(mlines-1,0,' '*len(bottom_message_scroll)) - screen.addstr(mlines-1,0,bottom_message_scroll,curses.A_BOLD) - else: - screen.addstr(mlines-1,0,' '*len(bottom_message_scroll)) - screen.addstr(mlines-1,0,bottom_message,curses.A_BOLD) - except: - std_time.sleep(1) - pass - - if num_columns > mcols: - curses.endwin() - print - print 'Your terminal is only %s characters must be at least %s to display workflow status'%(str(mcols),str(num_columns)) - sys.exit(-1) - - if loading_stat: - dot_stat = (0 if dot_stat == len(dots)-1 else dot_stat+1 ) - screen.addstr(mlines-2,0,'Running rocotostat ') - screen.addstr(mlines-2,20,dots[dot_stat]) - try: - rocoto_stat_tmp = queue_stat.get_nowait() - except: - rocoto_stat_tmp = '' - if len(rocoto_stat_tmp) != 0: - (rocoto_stat, tasks_ordered, metatask_list,cycledef_group_cycles) = rocoto_stat_tmp - process_get_rocoto_stat.join() - process_get_rocoto_stat.terminate() - update_pad = True - loading_stat = False - rocoto_stat_tmp = '' - stat_update_time = str(datetime.datetime.now()).rsplit(':',1)[0] - header = header_string - header = header.replace('t'*16,stat_update_time) - header = header.replace('pslot',PSLOT) - reduce_header_size = int((len(PSLOT)-len('PSLOT'))/2) - if reduce_header_size > 0: - header = header[:-reduce_header_size] - header = header[reduce_header_size:] - screen.addstr(mlines-2,0,'Updated new rocotostatus: %s'%stat_update_time+' '*48) - screen.refresh() - std_time.sleep(0.5) - screen.addstr(mlines-2,0,' '*100) - screen.refresh() - - if loading_check: - if time() - current_check_time > 5: - dot_check = (0 if dot_check == len(dots)-1 else dot_check+1 ) - loc = (0 if not loading_stat else 27) - screen.addstr(mlines-2,loc,'Running rocotocheck ') - screen.addstr(mlines-2,loc+20,dots[dot_check]) - try: - rocoto_check = queue_check.get_nowait() - except: - pass - if len(rocoto_check) != 0: - process_get_rocoto_check.join() - process_get_rocoto_check.terminate() - loading_check = False - if time() - current_check_time > 5: - event = screen.getch() - time_inc = 0.0 - while event != curses.KEY_ENTER and event != 10: - message_string = 'rocotocheck for %s %s is ready for vieweing'%(params_check[2],params_check[3]) - message_string = (message_string if len(message_string) < mcols else message_string[:mcols-1]) - time_inc += 1 - if time_inc > 4: - screen.addstr(mlines-2,0, message_string) - screen.addstr(mlines-2,len(message_string),' ') - time_inc = 0.0 - else: - screen.addstr(mlines-2,0,message_string) - screen.addstr(mlines-2,len(message_string),' ',curses.A_BOLD) - event = screen.getch() - display_results(rocoto_check,screen,params_check) - rocoto_check = '' - - curses.curs_set(0) - curses.halfdelay(2) - screen.keypad(1) - event = screen.getch() - - if event in (curses.KEY_LEFT, curses.KEY_RIGHT): - highlight_CYCLE = False - highlight_WORKFLOW = False - if event == curses.KEY_LEFT: - pad_pos = 0 - #debug.write('KEY_LEFT %s\n'%pad_pos) - if cycle - 1 >= 0: - cycle -= 1 - elif event == curses.KEY_RIGHT: - pad_pos = 0 - #debug.write('KEY_RIGHT %s\n'%pad_pos) - if cycle + 1 < num_cycle: - cycle += 1 - num_lines = len(rocoto_stat[cycle]) - line_correction = 0 - for count_meta_tasks in meta_tasks[cycle]: - if count_meta_tasks[1] and metatasks_state_cycle[cycle][ count_meta_tasks[0] ]: - line_correction += len(count_meta_tasks[2])-1 - num_lines -= line_correction - if task > num_lines-1: - task = num_lines-1 - update_pad = True - if event == 
ord('Q'): - break - if get_execute_task_track: - get_execute_task_track = False - if execute_task_track in metatasks_state_cycle[cycle]: - metatasks_state_cycle[cycle][execute_task_track] = not metatasks_state_cycle[cycle][execute_task_track] - update_metatask_state_status_message = True - update_pad = True - if event == curses.KEY_MOUSE: - mouse_id, mouse_x, mouse_y, mouse_z, button_state = curses.getmouse() - task_mouse_pos = pad_pos+mouse_y-2 - if task_mouse_pos >= 0 and task_mouse_pos < num_lines: - task = task_mouse_pos - update_pad = True - if button_state & curses.BUTTON1_DOUBLE_CLICKED and mouse_x in range(12,15): - get_execute_task_track = True - if event == ord('x'): - if execute_task_track in metatasks_state_cycle[cycle]: - metatasks_state_cycle[cycle][execute_task_track] = not metatasks_state_cycle[cycle][execute_task_track] - update_metatask_state_status_message = True - update_pad = True - if screen_resized: - screen.erase() - screen.refresh() - update_pad = True - task = pad_pos - screen_resized = False - curses.resizeterm(mlines, mcols) - #debug.write('SCREEN RESIZED %s (%d,%d)\n'%(pad_pos,mlines,mcols)) - if mcols < default_column_length: - curses.endwin() - print - print 'Your terminal is only %d characters must be at least %d to display workflow status'%(mcols,default_column_length) - sys.exit(-1) - elif event in ( curses.KEY_NPAGE, ord('d') ): - highlight_CYCLE = False - highlight_WORKFLOW = False - if pad_pos + mlines < num_lines-mlines+5: - pad_pos += mlines - 5 - task += mlines - 5 - else: - pad_pos = num_lines-mlines+5 - task = num_lines-1 - update_pad = True - elif event in ( curses.KEY_PPAGE, ord('u') ): - highlight_CYCLE = False - highlight_WORKFLOW = False - if pad_pos != 0: - if pad_pos - mlines > 0: - pad_pos -= mlines - 5 - if task > pad_pos+mlines-6: - task -= mlines - 5 - else: - pad_pos = 0 - task = 0 - update_pad = True - elif event in (curses.KEY_UP, curses.KEY_SR): - if task == 0: - if highlight_CYCLE: - highlight_CYCLE = False - highlight_WORKFLOW = True - if not highlight_WORKFLOW: - highlight_CYCLE = True - if task != pad_pos: - update_pad = True - task -= 1 - elif pad_pos != 0: - pad_pos -= 1 - task -= 1 - if event == curses.KEY_SR: - if execute_metatask_check: - if execute_metatask in selected_meta_tasks[execute_cycle]: - if len(selected_meta_tasks[execute_cycle]) > 0: - selected_meta_tasks[execute_cycle].remove(execute_metatask) - else: - selected_meta_tasks[execute_cycle].append(execute_metatask) - else: - if execute_task in selected_tasks[execute_cycle]: - if len(selected_tasks[execute_cycle]) > 0: - selected_tasks[execute_cycle].remove(execute_task) - else: - selected_tasks[execute_cycle].append(execute_task) - update_pad = True - elif event in ( curses.KEY_DOWN, curses.KEY_SF ): - if highlight_CYCLE or highlight_WORKFLOW: - task = -1 - highlight_CYCLE = False - highlight_WORKFLOW = False - if task != num_lines-1 and task < pad_pos+mlines-6: - task += 1 - elif pad_pos < num_lines-mlines+5: - pad_pos += 1 - task += 1 - if event == curses.KEY_SF: - if execute_metatask_check: - if execute_metatask in selected_meta_tasks[execute_cycle]: - if len(selected_meta_tasks[execute_cycle]): - selected_meta_tasks[execute_cycle].remove(execute_metatask) - else: - selected_meta_tasks[execute_cycle].append(execute_metatask) - else: - if execute_task in selected_tasks[execute_cycle]: - if len(selected_tasks[execute_cycle]) > 0: - selected_tasks[execute_cycle].remove(execute_task) - else: - selected_tasks[execute_cycle].append(execute_task) - update_pad = True - 
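All of the navigation handlers above boil down to sliding a curses pad window (`pad_pos`) and a highlighted line (`task`) over a status listing that is taller than the terminal. A minimal, self-contained sketch of that pad-scrolling pattern, independent of the viewer itself (the task names and the j/k/q bindings are invented):

```
import curses

def main(screen):
    # screen is the stdscr supplied by curses.wrapper (keypad mode already on)
    curses.curs_set(0)
    mlines, mcols = screen.getmaxyx()
    num_lines = 200
    # extra blank rows below the listing keep pad.refresh in range while scrolling
    pad = curses.newpad(num_lines + mlines, mcols)
    for i in range(num_lines):
        pad.addstr(i, 0, 'task_%03d  SUCCEEDED' % i)
    pad_pos = 0
    while True:
        # show the slice of the pad between the header row and the footer row
        pad.refresh(pad_pos, 0, 1, 0, mlines - 2, mcols - 1)
        event = screen.getch()
        if event in (curses.KEY_DOWN, ord('j')) and pad_pos < num_lines - 1:
            pad_pos += 1
        elif event in (curses.KEY_UP, ord('k')) and pad_pos > 0:
            pad_pos -= 1
        elif event == ord('q'):
            break

if __name__ == '__main__':
    curses.wrapper(main)
```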
elif event == ord('c'): - if loading_check == True: - screen.addstr(mlines-2,0,'rocotocheck is all reading running ') - screen.refresh() - std_time.sleep(0.5) - screen.addstr(mlines-2,0,' '*100) - screen.refresh() - else: - loc = (0 if not loading_stat else 27) - screen.addstr(mlines-2,loc,'Running rocotocheck ') - screen.refresh() - params_check = (workflow_file, database_file, execute_task, execute_cycle, 'check') - process_get_rocoto_check = Process( target=get_rocoto_check, args=[params_check, queue_check] ) - process_get_rocoto_check.start() - loading_check = True - current_check_time = time() - elif event == ord('f'): - log_file = '' - for find_task in tasks_ordered: - if find_task[0] == execute_task: - log_file = find_task[2].replace('CYCLE', execute_cycle[:-2] ) - if check_file(log_file): - links = [] - links.append(log_file) - try: - make_symlinks_in(links,EXPDIR,force=True) - except: - pass - elif event in (curses.KEY_ENTER, 10, 13): - - if execute_metatask_check: - selected_tasks[execute_cycle] = list_selector( screen, selected_tasks[execute_cycle], metatask_list_of_selected_metatask ) - screen.erase() - else: - if execute_task in selected_tasks[execute_cycle]: - if len(selected_tasks[execute_cycle]) > 0: - selected_tasks[execute_cycle].remove(execute_task) - else: - selected_tasks[execute_cycle].append(execute_task) - - elif event == ord('r'): - screen.clear() - process = '' - if highlight_CYCLE: - screen.addstr('Are you sure you want to rewind all the tasks in the cycle %s by running:\n\n'%execute_cycle) - process = '-a' - #highlight_WORKFLOW = False - elif execute_metatask_check and len(selected_tasks[execute_cycle]) == 0: - for tasks in metatask_list_of_selected_metatask: - process += '-t ' + tasks+' ' - screen.addstr('Are you sure you want to rewind all the tasks in the metatask (%s) by running:\n\n'%execute_task) - elif len(selected_tasks[execute_cycle]) != 0 or len(selected_meta_tasks[execute_cycle]) != 0: - if len(selected_tasks[execute_cycle]) != 0: - selected_tasks_string = '' - screen.addstr('Selected tasks:\n\n') - for tasks in selected_tasks[execute_cycle]: - selected_tasks_string += tasks+'\t' - process += '-t ' + tasks+' ' - screen.addstr(selected_tasks_string+'\n\n') - if len(selected_meta_tasks[execute_cycle]) != 0: - selected_tasks_string = '' - screen.addstr('Selected %d entire meta-tasks and their tasks:\n\n'%len( selected_meta_tasks[execute_cycle])) - for meta_task_selected in selected_meta_tasks[execute_cycle]: - for tasks in metatask_list_by_name[meta_task_selected]: - selected_tasks_string += tasks+'\t' - process += '-t ' + tasks+' ' - screen.addstr(selected_tasks_string+'\n\n') - screen.addstr('\nAre you sure you want to rewind all these seleted tasks by running:\n\n') - elif len(selected_tasks[execute_cycle]) == 0: - process = '-t '+ execute_task - screen.addstr('Are you sure you want to rewind the single task %s by running:\n\n'%execute_task) - screen.addstr('rocotorewind -c %s -d %s -w %s %s\n\n'%(execute_cycle,basename(database_file),basename(workflow_file),process)) - screen.addstr('Enter: es or o',curses.A_BOLD) - while True: - event = screen.getch() - if event == ord('y') or event == ord('Y'): - params = (workflow_file, database_file, execute_cycle,process) - results = rocoto_rewind(params) - results_params = ('','','rewind',execute_cycle,'tasks') - try: - display_results(results,screen,results_params) - except: - screen.addstr('\n\nRewind of this job was successful but displaying of the stdout failed\n') - screen.addstr('Output has been written 
out to the file rocotorewind_output.log\n') - screen.addstr('Press to continue') - with open('rocotorewind_output.log','a') as rocotorewind_logfile: - rocotorewind_logfile.write('\n\n'+results) - while True: - event = screen.getch() - if event in (curses.KEY_ENTER, 10, 13): - break - selected_tasks[execute_cycle] = [] - break - elif event == ord('n') or event == ord('N'): - break - screen.clear() - update_pad = True - elif event == ord('U'): - selected_tasks[execute_cycle] = [] - selected_meta_tasks[execute_cycle] = [] - update_pad = True - elif event == ord('b'): - process = '' - screen.clear() - list_meta_tasks = '' - list_of_tasks = '' - boot_task_list = '' ; tasks_to_boot = [] - boot_metatask_list = '' ; metatasks_to_boot = [] - if highlight_CYCLE: - screen.addstr('You have selected to boot the entire cycle %s:\n\n'%execute_cycle,curses.A_BOLD) - metatasks_to_boot = metatask_list_per_cycle[cycle] - tasks_to_boot = tasks_in_cycle[cycle] - elif len(selected_tasks[execute_cycle]) != 0 or len(selected_meta_tasks[execute_cycle]) != 0: - screen.addstr('You have a list selected tasks and/or metatasks to boot:\n\n',curses.A_BOLD) - metatasks_to_boot = selected_tasks[execute_cycle] - tasks_to_boot = selected_tasks[execute_cycle] - elif execute_metatask_check: - screen.addstr('Are you sure you want boot the entire meta task %s by running:\n\n'%execute_metatask) - metatasks_to_boot.append(execute_metatask) - elif len(selected_tasks[execute_cycle]) == 0: - tasks_to_boot.append(execute_task) - screen.addstr('Are you sure you want boot the task %s by running rocotoboot with:'%execute_task) - else: - update_pad = True - continue - - if len(metatasks_to_boot) > 0: - list_meta_tasks = ' ' - screen.addstr('Metatasks selected in cycle:\n\n',curses.A_BOLD) - for meta_task in metatasks_to_boot: - list_meta_tasks += meta_task+' ' - boot_metatask_list += meta_task+',' - boot_metatask_list = boot_metatask_list[:-1] - screen.addstr( list_meta_tasks ) - if len(tasks_to_boot) > 0: - list_of_tasks = ' ' - screen.addstr('\n\nTasks selected in cycle:\n\n',curses.A_BOLD) - for a_task in tasks_to_boot: - list_of_tasks += a_task+' ' - boot_task_list += a_task+',' - boot_task_list = boot_task_list[:-1] - screen.addstr( list_of_tasks ) - - screen.addstr('\n\nAre you sure you want to boot all the tasks and/or metatasks in the cycle %s by running:\n\n'%execute_cycle,curses.A_BOLD) - if len(boot_metatask_list) != 0: - list_meta_tasks = '--metatasks '+"'"+boot_metatask_list+"'" - if len(boot_task_list) != 0: - list_of_tasks = ' --tasks '+"'"+boot_task_list+"'" - screen.addstr('rocotoboot -d %s -w %s %s\n\n'%(basename(database_file),basename(workflow_file),list_meta_tasks+list_of_tasks)) - screen.addstr('Enter: es or o',curses.A_BOLD) - - while True: - event = screen.getch() - if event == ord('y') or event == ord('Y'): - params = (workflow_file, database_file, execute_cycle, boot_metatask_list, boot_task_list) - results = rocoto_boot(params) - display_results(results,screen,('','',execute_cycle,'rocotoboot_output')) - break - elif event == ord('n') or event == ord('N'): - break - screen.clear() - update_pad = True - elif event == ord('R'): - screen.addstr(mlines-2,0,'Running rocotorun and rocotostat ...'+' '*60,curses.A_BOLD) - params = (workflow_file, database_file) - rocoto_run(params) - update_pad = True - screen.clear() - if loading_stat == True: - screen.addstr(mlines-2,0,'rocotostat is all reading running'+' '*60) - screen.refresh() - std_time.sleep(0.5) - else: - start_time = 0 - elif event == ord('/'): - 
curses.echo() - find_next = 1 - screen.addstr(mlines-3,0,' '*100) - screen.refresh() - screen.addstr(mlines-3,0,'/') - screen.refresh() - search_string = screen.getstr(mlines-3,1,50) - break_twice = False - screen.addstr(mlines-3,0,' '*100) - screen.refresh() - for every_cycle in range(0,len(rocoto_stat)): - for line_number,line in enumerate(rocoto_stat[every_cycle]): - if search_string in line: - task = line_number - if num_lines < mlines: - pad_pos = 0 - else: - pad_pos = task - update_pad = True - cycle = every_cycle - break_twice = True - break - if break_twice: - screen.clear() - break - else: - find_next = 1 - elif (event == ord('n') or event == ord('N')) and len(search_string) != 0: - if event == ord('n'): - find_next += 1 - else: - if find_next - 1 >= 1: - find_next -= 1 - found_next = 0 - break_twice = False - for every_cycle in range(0,len(rocoto_stat)): - for line_number,line in enumerate(rocoto_stat[every_cycle]): - if search_string in line: - found_next += 1 - if find_next == found_next: - task = line_number - if num_lines < mlines: - pad_pos = 0 - else: - pad_pos = task - update_pad = True - cycle = every_cycle - break_twice = True - break - if break_twice: - screen.clear() - break - if not break_twice: - find_next = 1 - - elif event == ord('F'): - for find_cycle in range(0,len(rocoto_stat)): - for lines in rocoto_stat[find_cycle]: - if 'RUNNING' in line: - break - break - if find_cycle > 1: - cycle = find_cycle - 2 - update_pad = True - elif event == ord('l'): - start_time -= stat_read_time_delay - elif event == ord('h'): - update_pad = True - help_screen(screen) - screen.clear() - current_time = time() - diff = current_time - start_time - if diff > stat_read_time_delay and not loading_stat: - start_time = current_time - if not use_multiprocessing: - params = (workflow_file, database_file, tasks_ordered, metatask_list,cycledef_group_cycles) - (rocoto_stat, tasks_ordered, metatask_list,cycledef_group_cycles) = get_rocoto_stat( params, Queue() ) - stat_update_time = str(datetime.datetime.now()).rsplit(':',1)[0] - header = header_string - header = header.replace('t'*16,stat_update_time) - header = header.replace('pslot',PSLOT) - reduce_header_size = int((len(PSLOT)-len('PSLOT'))/2) - if reduce_header_size > 0: - header = header[:-reduce_header_size] - header = header[reduce_header_size:] - update_pad = True - screen.clear() - else: - loading_stat = True - screen.addstr(mlines-2,0,'Running rocotostat ') - params = (workflow_file, database_file, tasks_ordered, metatask_list,cycledef_group_cycles) - process_get_rocoto_stat = Process( target=get_rocoto_stat, args=[params, queue_stat] ) - process_get_rocoto_stat.start() - - if use_multiprocessing: - if process_get_rocoto_stat is not None: - if process_get_rocoto_stat.is_alive(): - process_get_rocoto_stat.terminate() - if process_get_rocoto_check is not None: - if process_get_rocoto_check.is_alive(): - process_get_rocoto_check.terminate() - - #debug.close() - -if __name__ == '__main__': - if not load_produtil_pythonpath(): - print '\n\nCRITICAL ERROR: The produtil package could not be loaded from your system' - sys.exit(-1) - from produtil.fileop import remove_file - try: - signal.signal(signal.SIGWINCH, sigwinch_handler) - sys.stdout = sys.__stdout__ - sys.stderr = sys.__stderr__ - if sys.stdin.isatty(): - curses.wrapper(main) - else: - screen = 'dummy' - main(screen) - remove_file(temp_workflow_file) - except KeyboardInterrupt: - print "Got KeyboardInterrupt exception. Exiting..." 
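The '/' and 'n'/'N' branches above scan every cycle's status lines for the n-th occurrence of the search string and jump the cursor there. The same traversal as a stand-alone function; the two-cycle status table is made up and simply stands in for `rocoto_stat`:

```
def find_match(rocoto_stat, search_string, occurrence=1):
    """Return (cycle_index, line_number) of the n-th matching status line."""
    found = 0
    for cycle_index, lines in enumerate(rocoto_stat):
        for line_number, line in enumerate(lines):
            if search_string in line:
                found += 1
                if found == occurrence:
                    return cycle_index, line_number
    return None                      # no such occurrence

stat = [['gdasprep SUCCEEDED', 'gdasfcst RUNNING'],
        ['gdasprep QUEUED', 'gdasfcst QUEUED']]
print(find_match(stat, 'QUEUED', occurrence=2))   # -> (1, 1)
```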
- sys.exit(-1) diff --git a/ush/rocoto/setup_expt.py b/ush/rocoto/setup_expt.py deleted file mode 100755 index 05fedec7c..000000000 --- a/ush/rocoto/setup_expt.py +++ /dev/null @@ -1,199 +0,0 @@ -#!/usr/bin/env python - -############################################################### -# < next few lines under version control, D O N O T E D I T > -# $Date$ -# $Revision$ -# $Author$ -# $Id$ -############################################################### - -import os -import sys -import glob -import shutil -from datetime import datetime -from argparse import ArgumentParser, ArgumentDefaultsHelpFormatter - - -global machines -global expdir, configdir, comrot, pslot, resdet, resens, nens, cdump, idate, edate, gfs_cyc - - -machines = ['THEIA', 'WCOSS_C', 'WCOSS_DELL_P3'] - - -def makedirs_if_missing(d): - if not os.path.exists(d): - os.makedirs(d) - - -def create_EXPDIR(): - - makedirs_if_missing(expdir) - configs = glob.glob('%s/config.*' % configdir) - if len(configs) == 0: - msg = 'no config files found in %s' % configdir - raise IOError(msg) - for config in configs: - shutil.copy(config, expdir) - - return - - -def create_COMROT(): - - idatestr = idate.strftime('%Y%m%d%H') - cymd = idate.strftime('%Y%m%d') - chh = idate.strftime('%H') - - makedirs_if_missing(comrot) - - # Link ensemble member initial conditions - enkfdir = 'enkf.%s.%s/%s' % (cdump, cymd, chh) - makedirs_if_missing(os.path.join(comrot, enkfdir)) - for i in range(1, nens + 1): - makedirs_if_missing(os.path.join(comrot, enkfdir, 'mem%03d' % i)) - os.symlink(os.path.join(icsdir, idatestr, 'C%d' % resens, 'mem%03d' % i, 'INPUT'), - os.path.join(comrot, enkfdir, 'mem%03d' % i, 'INPUT')) - - # Link deterministic initial conditions - detdir = '%s.%s/%s' % (cdump, cymd, chh) - makedirs_if_missing(os.path.join(comrot, detdir)) - os.symlink(os.path.join(icsdir, idatestr, 'C%d' % resdet, 'control', 'INPUT'), - os.path.join(comrot, detdir, 'INPUT')) - - # Link bias correction and radiance diagnostics files - for fname in ['abias', 'abias_pc', 'abias_air', 'radstat']: - os.symlink(os.path.join(icsdir, idatestr, '%s.t%sz.%s' % (cdump, chh, fname)), - os.path.join(comrot, detdir, '%s.t%sz.%s' % (cdump, chh, fname))) - - return - - -def edit_baseconfig(): - - base_config = '%s/config.base' % expdir - - here = os.path.dirname(__file__) - top = os.path.abspath(os.path.join( - os.path.abspath(here), '../..')) - - # make a copy of the default before editing - shutil.copy(base_config, base_config + '.default') - - print '\nSDATE = %s\nEDATE = %s' % (idate, edate) - with open(base_config + '.default', 'rt') as fi: - with open(base_config + '.new', 'wt') as fo: - for line in fi: - line = line.replace('@MACHINE@', machine.upper()) \ - .replace('@PSLOT@', pslot) \ - .replace('@SDATE@', idate.strftime('%Y%m%d%H')) \ - .replace('@EDATE@', edate.strftime('%Y%m%d%H')) \ - .replace('@CASEENS@', 'C%d' % resens) \ - .replace('@CASECTL@', 'C%d' % resdet) \ - .replace('@NMEM_ENKF@', '%d' % nens) \ - .replace('@HOMEgfs@', top) \ - .replace('@gfs_cyc@', '%d' % gfs_cyc) - if expdir is not None: - line = line.replace('@EXPDIR@', os.path.dirname(expdir)) - if comrot is not None: - line = line.replace('@ROTDIR@', os.path.dirname(comrot)) - if 'ICSDIR' in line: - continue - fo.write(line) - os.unlink(base_config) - os.rename(base_config + '.new', base_config) - - print '' - print 'EDITED: %s/config.base as per user input.' % expdir - print 'DEFAULT: %s/config.base.default is for reference only.' 
% expdir - print 'Please verify and delete the default file before proceeding.' - print '' - - return - - -if __name__ == '__main__': - - description = '''Setup files and directories to start a GFS parallel. -Create EXPDIR, copy config files -Create COMROT experiment directory structure, -link initial condition files from $ICSDIR to $COMROT''' - - parser = ArgumentParser(description=description, formatter_class=ArgumentDefaultsHelpFormatter) - parser.add_argument('--pslot', help='parallel experiment name', type=str, required=False, default='test') - parser.add_argument('--resdet', help='resolution of the deterministic model forecast', type=int, required=False, default=384) - parser.add_argument('--resens', help='resolution of the ensemble model forecast', type=int, required=False, default=192) - parser.add_argument('--comrot', help='full path to COMROT', type=str, required=False, default=None) - parser.add_argument('--expdir', help='full path to EXPDIR', type=str, required=False, default=None) - parser.add_argument('--idate', help='starting date of experiment, initial conditions must exist!', type=str, required=True) - parser.add_argument('--edate', help='end date experiment', type=str, required=True) - parser.add_argument('--icsdir', help='full path to initial condition directory', type=str, required=True) - parser.add_argument('--configdir', help='full path to directory containing the config files', type=str, required=False, default=None) - parser.add_argument('--nens', help='number of ensemble members', type=int, required=False, default=20) - parser.add_argument('--cdump', help='CDUMP to start the experiment', type=str, required=False, default='gdas') - parser.add_argument('--gfs_cyc', help='GFS cycles to run', type=int, choices=[0, 1, 2, 4], default=1, required=False) - - args = parser.parse_args() - - if os.path.exists('/scratch3'): - machine = 'THEIA' - elif os.path.exists('/gpfs') and os.path.exists('/etc/SuSE-release'): - machine = 'WCOSS_C' - elif os.path.exists('/gpfs/dell2'): - machine = 'WCOSS_DELL_P3' - else: - print 'workflow is currently only supported on: %s' % ' '.join(machines) - raise NotImplementedError('Cannot auto-detect platform, ABORT!') - - configdir = args.configdir - if not configdir: - configdir = os.path.abspath(os.path.dirname(__file__) + '/../parm/config') - - pslot = args.pslot - idate = datetime.strptime(args.idate, '%Y%m%d%H') - edate = datetime.strptime(args.edate, '%Y%m%d%H') - icsdir = args.icsdir - resdet = args.resdet - resens = args.resens - comrot = args.comrot if args.comrot is None else os.path.join(args.comrot, pslot) - expdir = args.expdir if args.expdir is None else os.path.join(args.expdir, pslot) - nens = args.nens - cdump = args.cdump - gfs_cyc = args.gfs_cyc - - if not os.path.exists(icsdir): - msg = 'Initial conditions do not exist in %s' % icsdir - raise IOError(msg) - - # COMROT directory - create_comrot = True - if os.path.exists(comrot): - print - print 'COMROT already exists in %s' % comrot - print - overwrite_comrot = raw_input('Do you wish to over-write COMROT [y/N]: ') - create_comrot = True if overwrite_comrot in ['y', 'yes', 'Y', 'YES'] else False - if create_comrot: - shutil.rmtree(comrot) - - if create_comrot: - create_COMROT() - - # EXP directory - create_expdir = True - if os.path.exists(expdir): - print - print 'EXPDIR already exists in %s' % expdir - print - overwrite_expdir = raw_input('Do you wish to over-write EXPDIR [y/N]: ') - create_expdir = True if overwrite_expdir in ['y', 'yes', 'Y', 'YES'] else False - if 
create_expdir: - shutil.rmtree(expdir) - - if create_expdir: - create_EXPDIR() - edit_baseconfig() - - sys.exit(0) diff --git a/ush/rocoto/setup_expt_fcstonly.py b/ush/rocoto/setup_expt_fcstonly.py deleted file mode 100755 index 08d6c6595..000000000 --- a/ush/rocoto/setup_expt_fcstonly.py +++ /dev/null @@ -1,159 +0,0 @@ -#!/usr/bin/env python - -############################################################### -# < next few lines under version control, D O N O T E D I T > -# $Date$ -# $Revision$ -# $Author$ -# $Id$ -############################################################### - -import os -import sys -import glob -import shutil -from datetime import datetime -from argparse import ArgumentParser, ArgumentDefaultsHelpFormatter - - -global machines -global expdir, configdir, comrot, pslot, res, idate, edate, gfs_cyc - - -machines = ['THEIA', 'WCOSS_C', 'WCOSS_DELL_P3'] - - -def makedirs_if_missing(d): - if not os.path.exists(d): - os.makedirs(d) - - -def create_EXPDIR(): - - makedirs_if_missing(expdir) - configs = glob.glob('%s/config.*' % configdir) - if len(configs) == 0: - msg = 'no config files found in %s' % configdir - raise IOError(msg) - for config in configs: - shutil.copy(config, expdir) - - return - - -def create_COMROT(): - - makedirs_if_missing(comrot) - - return - - -def edit_baseconfig(): - - base_config = '%s/config.base' % expdir - - here = os.path.dirname(__file__) - top = os.path.abspath(os.path.join(os.path.abspath(here), '../..')) - - # make a copy of the default before editing - shutil.copy(base_config, base_config + '.default') - - print '\nSDATE = %s\nEDATE = %s' % (idate, edate) - with open(base_config + '.default', 'rt') as fi: - with open(base_config + '.new', 'wt') as fo: - for line in fi: - line = line.replace('@MACHINE@', machine.upper()) \ - .replace('@PSLOT@', pslot) \ - .replace('@SDATE@', idate.strftime('%Y%m%d%H')) \ - .replace('@EDATE@', edate.strftime('%Y%m%d%H')) \ - .replace('@CASECTL@', 'C%d' % res) \ - .replace('@HOMEgfs@', top) \ - .replace('@gfs_cyc@', '%d' % gfs_cyc) - if expdir is not None: - line = line.replace('@EXPDIR@', os.path.dirname(expdir)) - if comrot is not None: - line = line.replace('@ROTDIR@', os.path.dirname(comrot)) - line = line.replace('@ICSDIR@', os.path.join(os.path.dirname(comrot), 'FV3ICS')) - fo.write(line) - os.unlink(base_config) - os.rename(base_config + '.new', base_config) - - print '' - print 'EDITED: %s/config.base as per user input.' % expdir - print 'DEFAULT: %s/config.base.default is for reference only.' % expdir - print 'Please verify and delete the default file before proceeding.' - print '' - - return - - -if __name__ == '__main__': - - description = '''Setup files and directories to start a GFS parallel. 
-Create EXPDIR, copy config files -Create COMROT experiment directory structure''' - - parser = ArgumentParser(description=description, formatter_class=ArgumentDefaultsHelpFormatter) - parser.add_argument('--pslot', help='parallel experiment name', type=str, required=False, default='test') - parser.add_argument('--res', help='resolution of the model forecast', type=int, required=False, default=192) - parser.add_argument('--comrot', help='full path to COMROT', type=str, required=False, default=None) - parser.add_argument('--expdir', help='full path to EXPDIR', type=str, required=False, default=None) - parser.add_argument('--idate', help='starting date of experiment, initial conditions must exist!', type=str, required=True) - parser.add_argument('--edate', help='end date experiment', type=str, required=True) - parser.add_argument('--configdir', help='full path to directory containing the config files', type=str, required=False, default=None) - parser.add_argument('--gfs_cyc', help='GFS cycles to run', type=int, choices=[0, 1, 2, 4], default=1, required=False) - - args = parser.parse_args() - - if os.path.exists('/scratch3'): - machine = 'THEIA' - elif os.path.exists('/gpfs') and os.path.exists('/etc/SuSE-release'): - machine = 'WCOSS_C' - elif os.path.exists('/gpfs/dell2'): - machine = 'WCOSS_DELL_P3' - else: - print 'workflow is currently only supported on: %s' % ' '.join(machines) - raise NotImplementedError('Cannot auto-detect platform, ABORT!') - - configdir = args.configdir - if not configdir: - configdir = os.path.abspath(os.path.dirname(__file__) + '/../parm/config') - - pslot = args.pslot - idate = datetime.strptime(args.idate, '%Y%m%d%H') - edate = datetime.strptime(args.edate, '%Y%m%d%H') - res = args.res - comrot = args.comrot if args.comrot is None else os.path.join(args.comrot, pslot) - expdir = args.expdir if args.expdir is None else os.path.join(args.expdir, pslot) - gfs_cyc = args.gfs_cyc - - # COMROT directory - create_comrot = True - if os.path.exists(comrot): - print - print 'COMROT already exists in %s' % comrot - print - overwrite_comrot = raw_input('Do you wish to over-write COMROT [y/N]: ') - create_comrot = True if overwrite_comrot in ['y', 'yes', 'Y', 'YES'] else False - if create_comrot: - shutil.rmtree(comrot) - - if create_comrot: - create_COMROT() - - # EXP directory - create_expdir = True - if os.path.exists(expdir): - print - print 'EXPDIR already exists in %s' % expdir - print - overwrite_expdir = raw_input('Do you wish to over-write EXPDIR [y/N]: ') - create_expdir = True if overwrite_expdir in ['y', 'yes', 'Y', 'YES'] else False - if create_expdir: - shutil.rmtree(expdir) - - if create_expdir: - create_EXPDIR() - edit_baseconfig() - - sys.exit(0) diff --git a/ush/rocoto/setup_workflow.py b/ush/rocoto/setup_workflow.py deleted file mode 100755 index c296dbbac..000000000 --- a/ush/rocoto/setup_workflow.py +++ /dev/null @@ -1,831 +0,0 @@ -#!/usr/bin/env python - -############################################################### -# < next few lines under version control, D O N O T E D I T > -# $Date$ -# $Revision$ -# $Author$ -# $Id$ -############################################################### -''' - PROGRAM: - Create the ROCOTO workflow given the configuration of the GFS parallel - - AUTHOR: - Rahul.Mahajan - rahul.mahajan@noaa.gov - - FILE DEPENDENCIES: - 1. config files for the parallel; e.g. config.base, config.fcst[.gfs], etc. - Without these dependencies, the script will fail - - OUTPUT: - 1. PSLOT.xml: XML workflow - 2. 
PSLOT.crontab: crontab for ROCOTO run command -''' - -import os -import sys -import numpy as np -from datetime import datetime, timedelta -from argparse import ArgumentParser, ArgumentDefaultsHelpFormatter -from collections import OrderedDict -import rocoto -import workflow_utils as wfu - - -def main(): - parser = ArgumentParser(description='Setup XML workflow and CRONTAB for a GFS parallel.', formatter_class=ArgumentDefaultsHelpFormatter) - parser.add_argument('--expdir', help='full path to experiment directory containing config files', type=str, required=False, default=os.environ['PWD']) - args = parser.parse_args() - - configs = wfu.get_configs(args.expdir) - - _base = wfu.config_parser([wfu.find_config('config.base', configs)]) - - if not os.path.samefile(args.expdir, _base['EXPDIR']): - print 'MISMATCH in experiment directories!' - print 'config.base: EXPDIR = %s' % repr(_base['EXPDIR']) - print 'input arg: --expdir = %s' % repr(args.expdir) - sys.exit(1) - - gfs_steps = ['prep', 'anal', 'fcst', 'postsnd', 'post', 'awips', 'gempak', 'vrfy', 'arch'] - hyb_steps = ['eobs', 'eomg', 'eupd', 'ecen', 'efcs', 'epos', 'earc'] - - steps = gfs_steps + hyb_steps if _base.get('DOHYBVAR', 'NO') == 'YES' else gfs_steps - - dict_configs = wfu.source_configs(configs, steps) - - # Check and set gfs_cyc specific variables - if dict_configs['base']['gfs_cyc'] != 0: - dict_configs['base'] = get_gfs_cyc_dates(dict_configs['base']) - - # First create workflow XML - create_xml(dict_configs) - - # Next create the crontab - wfu.create_crontab(dict_configs['base']) - - return - - -def get_gfs_cyc_dates(base): - ''' - Generate GFS dates from experiment dates and gfs_cyc choice - ''' - - base_out = base.copy() - - gfs_cyc = base['gfs_cyc'] - sdate = base['SDATE'] - edate = base['EDATE'] - - interval_gfs = wfu.get_gfs_interval(gfs_cyc) - - # Set GFS cycling dates - hrdet = 0 - if gfs_cyc == 1: - hrinc = 24 - sdate.hour - hrdet = edate.hour - elif gfs_cyc == 2: - if sdate.hour in [0, 12]: - hrinc = 12 - elif sdate.hour in [6, 18]: - hrinc = 6 - if edate.hour in [6, 18]: - hrdet = 6 - elif gfs_cyc == 4: - hrinc = 6 - sdate_gfs = sdate + timedelta(hours=hrinc) - edate_gfs = edate - timedelta(hours=hrdet) - if sdate_gfs > edate: - print 'W A R N I N G!' 
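get_gfs_cyc_dates() above turns the experiment window and the gfs_cyc choice into the first and last GFS cycle dates. A stand-alone sketch of just that hrinc/hrdet arithmetic; the example dates are invented:

```
from datetime import datetime, timedelta

def gfs_cycle_dates(sdate, edate, gfs_cyc):
    """First/last GFS cycle for gfs_cyc = 1, 2 or 4 GFS cycles per day."""
    hrdet = 0
    if gfs_cyc == 1:                 # one cycle per day: step ahead to the next 00Z
        hrinc = 24 - sdate.hour
        hrdet = edate.hour
    elif gfs_cyc == 2:               # every 12 hours
        hrinc = 12 if sdate.hour in (0, 12) else 6
        hrdet = 6 if edate.hour in (6, 18) else 0
    elif gfs_cyc == 4:               # every 6 hours
        hrinc = 6
    else:
        raise ValueError('gfs_cyc must be 1, 2 or 4')
    return sdate + timedelta(hours=hrinc), edate - timedelta(hours=hrdet)

first, last = gfs_cycle_dates(datetime(2019, 1, 1, 6), datetime(2019, 1, 10, 18), 2)
print(first, last)                   # 2019-01-01 12:00:00 2019-01-10 12:00:00
```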
- print 'Starting date for GFS cycles is after Ending date of experiment' - print 'SDATE = %s, EDATE = %s' % (sdate.strftime('%Y%m%d%H'), edate.strftime('%Y%m%d%H')) - print 'SDATE_GFS = %s, EDATE_GFS = %s' % (sdate_gfs.strftime('%Y%m%d%H'), edate_gfs.strftime('%Y%m%d%H')) - gfs_cyc = 0 - - base_out['gfs_cyc'] = gfs_cyc - base_out['SDATE_GFS'] = sdate_gfs - base_out['EDATE_GFS'] = edate_gfs - base_out['INTERVAL_GFS'] = interval_gfs - - fhmax_gfs = {} - for hh in ['00', '06', '12', '18']: - fhmax_gfs[hh] = base.get('FHMAX_GFS_%s' % hh, 'FHMAX_GFS_00') - base_out['FHMAX_GFS'] = fhmax_gfs - - return base_out - - -def get_preamble(): - ''' - Generate preamble for XML - ''' - - strings = [] - - strings.append('\n') - strings.append('\n') - - return ''.join(strings) - - -def get_definitions(base): - ''' - Create entities related to the experiment - ''' - - strings = [] - - strings.append('\n') - strings.append('\t\n') - strings.append('\t\n' % base['PSLOT']) - strings.append('\t\n' % base['SDATE'].strftime('%Y%m%d%H%M')) - strings.append('\t\n' % base['EDATE'].strftime('%Y%m%d%H%M')) - - if base['gfs_cyc'] != 0: - strings.append(get_gfs_dates(base)) - strings.append('\n') - - strings.append('\t\n') - strings.append('\t\n' % base['RUN_ENVIR']) - strings.append('\n') - strings.append('\t\n') - strings.append('\t\n' % base['EXPDIR']) - strings.append('\t\n' % base['ROTDIR']) - strings.append('\n') - strings.append('\t\n') - strings.append('\t\n' % base['HOMEgfs']) - strings.append('\t\n' % base['BASE_JOB']) - strings.append('\t\n' % base['DMPDIR']) - strings.append('\n') - strings.append('\t\n') - strings.append('\t\n' % base['ACCOUNT']) - strings.append('\t\n' % base['QUEUE']) - strings.append('\t\n' % base['QUEUE_ARCH']) - strings.append('\t\n' % wfu.get_scheduler(base['machine'])) - strings.append('\n') - strings.append('\t\n') - strings.append('\t\n') - strings.append('\n') - strings.append('\t\n') - strings.append('\t\n') - strings.append('\t\n') - strings.append('\t\n') - strings.append('\n') - - return ''.join(strings) - - -def get_gfs_dates(base): - ''' - Generate GFS dates entities - ''' - - strings = [] - - strings.append('\n') - strings.append('\t\n') - strings.append('\t\n' % base['SDATE_GFS'].strftime('%Y%m%d%H%M')) - strings.append('\t\n' % base['EDATE_GFS'].strftime('%Y%m%d%H%M')) - strings.append('\t\n' % base['INTERVAL_GFS']) - - return ''.join(strings) - - -def get_gdasgfs_resources(dict_configs, cdump='gdas'): - ''' - Create GDAS or GFS resource entities - ''' - - base = dict_configs['base'] - machine = base.get('machine', 'WCOSS_C') - do_bufrsnd = base.get('DO_BUFRSND', 'NO').upper() - do_gempak = base.get('DO_GEMPAK', 'NO').upper() - do_awips = base.get('DO_AWIPS', 'NO').upper() - - tasks = ['prep', 'anal', 'fcst', 'post', 'vrfy', 'arch'] - - if cdump in ['gfs'] and do_bufrsnd in ['Y', 'YES']: - tasks += ['postsnd'] - if cdump in ['gfs'] and do_gempak in ['Y', 'YES']: - tasks += ['gempak'] - if cdump in ['gfs'] and do_awips in ['Y', 'YES']: - tasks += ['awips'] - - dict_resources = OrderedDict() - - for task in tasks: - - cfg = dict_configs[task] - - wtimestr, resstr, queuestr, memstr, natstr = wfu.get_resources(machine, cfg, task, cdump=cdump) - taskstr = '%s_%s' % (task.upper(), cdump.upper()) - - strings = [] - strings.append('\t\n' % (taskstr, queuestr)) - strings.append('\t\n' % (taskstr, wtimestr)) - strings.append('\t\n' % (taskstr, resstr)) - strings.append('\t\n' % (taskstr, memstr)) - strings.append('\t\n' % (taskstr, natstr)) - - dict_resources['%s%s' % (cdump, 
task)] = ''.join(strings) - - return dict_resources - - -def get_hyb_resources(dict_configs): - ''' - Create hybrid resource entities - ''' - - base = dict_configs['base'] - machine = base.get('machine', 'WCOSS_C') - lobsdiag_forenkf = base.get('lobsdiag_forenkf', '.false.').upper() - eupd_cyc= base.get('EUPD_CYC', 'gdas').upper() - - dict_resources = OrderedDict() - - # These tasks can be run in either or both cycles - tasks1 = ['eobs', 'eomg', 'eupd'] - if lobsdiag_forenkf in ['.T.', '.TRUE.']: - tasks.remove('eomg') - - if eupd_cyc in ['BOTH']: - cdumps = ['gfs', 'gdas'] - elif eupd_cyc in ['GFS']: - cdumps = ['gfs'] - elif eupd_cyc in ['GDAS']: - cdumps = ['gdas'] - - for cdump in cdumps: - for task in tasks1: - - cfg = dict_configs['eobs'] if task in ['eomg'] else dict_configs[task] - - wtimestr, resstr, queuestr, memstr, natstr = wfu.get_resources(machine, cfg, task, cdump=cdump) - - taskstr = '%s_%s' % (task.upper(), cdump.upper()) - - strings = [] - - strings.append('\t\n' % (taskstr, queuestr)) - strings.append('\t\n' % (taskstr, wtimestr)) - strings.append('\t\n' % (taskstr, resstr)) - strings.append('\t\n' % (taskstr, memstr)) - strings.append('\t\n' % (taskstr, natstr)) - - dict_resources['%s%s' % (cdump, task)] = ''.join(strings) - - - # These tasks are always run as part of the GDAS cycle - cdump = 'gdas' - tasks2 = ['ecen', 'efcs', 'epos', 'earc'] - for task in tasks2: - - cfg = dict_configs[task] - - wtimestr, resstr, queuestr, memstr, natstr = wfu.get_resources(machine, cfg, task, cdump=cdump) - - taskstr = '%s_%s' % (task.upper(), cdump.upper()) - - strings = [] - strings.append('\t\n' % (taskstr, queuestr)) - strings.append('\t\n' % (taskstr, wtimestr)) - strings.append('\t\n' % (taskstr, resstr)) - strings.append('\t\n' % (taskstr, memstr)) - strings.append('\t\n' % (taskstr, natstr)) - - dict_resources['%s%s' % (cdump, task)] = ''.join(strings) - - return dict_resources - - -def get_gdasgfs_tasks(dict_configs, cdump='gdas'): - ''' - Create GDAS or GFS tasks - ''' - - envars = [] - envars.append(rocoto.create_envar(name='RUN_ENVIR', value='&RUN_ENVIR;')) - envars.append(rocoto.create_envar(name='HOMEgfs', value='&HOMEgfs;')) - envars.append(rocoto.create_envar(name='EXPDIR', value='&EXPDIR;')) - envars.append(rocoto.create_envar(name='CDATE', value='@Y@m@d@H')) - envars.append(rocoto.create_envar(name='CDUMP', value='%s' % cdump)) - envars.append(rocoto.create_envar(name='PDY', value='@Y@m@d')) - envars.append(rocoto.create_envar(name='cyc', value='@H')) - - base = dict_configs['base'] - gfs_cyc = base.get('gfs_cyc', 0) - dohybvar = base.get('DOHYBVAR', 'NO').upper() - eupd_cyc = base.get('EUPD_CYC', 'gdas').upper() - do_bufrsnd = base.get('DO_BUFRSND', 'NO').upper() - do_gempak = base.get('DO_GEMPAK', 'NO').upper() - do_awips = base.get('DO_AWIPS', 'NO').upper() - - dict_tasks = OrderedDict() - - # prep - deps = [] - dep_dict = {'type': 'metatask', 'name': '%spost' % 'gdas', 'offset': '-06:00:00'} - deps.append(rocoto.add_dependency(dep_dict)) - data = '&ROTDIR;/gdas.@Y@m@d/@H/gdas.t@Hz.atmf009.nemsio' - dep_dict = {'type': 'data', 'data': data, 'offset': '-06:00:00'} - deps.append(rocoto.add_dependency(dep_dict)) - data = '&DMPDIR;/@Y@m@d@H/%s/%s.t@Hz.updated.status.tm00.bufr_d' % (cdump, cdump) - dep_dict = {'type': 'data', 'data': data} - deps.append(rocoto.add_dependency(dep_dict)) - dependencies = rocoto.create_dependency(dep_condition='and', dep=deps) - - gfs_enkf = True if eupd_cyc in ['BOTH', 'GFS'] and dohybvar in ['Y', 'YES'] else False - - if gfs_enkf and 
cdump in ['gfs']: - if gfs_cyc == 4: - task = wfu.create_wf_task('prep', cdump=cdump, envar=envars, dependency=dependencies) - else: - task = wfu.create_wf_task('prep', cdump=cdump, envar=envars, dependency=dependencies, cycledef='gdas') - - else: - task = wfu.create_wf_task('prep', cdump=cdump, envar=envars, dependency=dependencies) - - dict_tasks['%sprep' % cdump] = task - - # anal - deps = [] - dep_dict = {'type': 'task', 'name': '%sprep' % cdump} - deps.append(rocoto.add_dependency(dep_dict)) - if dohybvar in ['y', 'Y', 'yes', 'YES']: - dep_dict = {'type': 'metatask', 'name': '%sepmn' % 'gdas', 'offset': '-06:00:00'} - deps.append(rocoto.add_dependency(dep_dict)) - dependencies = rocoto.create_dependency(dep_condition='and', dep=deps) - else: - dependencies = rocoto.create_dependency(dep=deps) - task = wfu.create_wf_task('anal', cdump=cdump, envar=envars, dependency=dependencies) - - dict_tasks['%sanal' % cdump] = task - - # fcst - deps = [] - dep_dict = {'type': 'task', 'name': '%sanal' % cdump} - deps.append(rocoto.add_dependency(dep_dict)) - if cdump in ['gdas']: - dep_dict = {'type': 'cycleexist', 'condition': 'not', 'offset': '-06:00:00'} - deps.append(rocoto.add_dependency(dep_dict)) - dependencies = rocoto.create_dependency(dep_condition='or', dep=deps) - elif cdump in ['gfs']: - dependencies = rocoto.create_dependency(dep=deps) - task = wfu.create_wf_task('fcst', cdump=cdump, envar=envars, dependency=dependencies) - - dict_tasks['%sfcst' % cdump] = task - - # post - deps = [] - data = '&ROTDIR;/%s.@Y@m@d/@H/%s.t@Hz.log#dep#.nemsio' % (cdump, cdump) - dep_dict = {'type': 'data', 'data': data} - deps.append(rocoto.add_dependency(dep_dict)) - dep_dict = {'type': 'task', 'name': '%sfcst' % cdump} - deps.append(rocoto.add_dependency(dep_dict)) - dependencies = rocoto.create_dependency(dep_condition='or', dep=deps) - fhrgrp = rocoto.create_envar(name='FHRGRP', value='#grp#') - fhrlst = rocoto.create_envar(name='FHRLST', value='#lst#') - ROTDIR = rocoto.create_envar(name='ROTDIR', value='&ROTDIR;') - postenvars = envars + [fhrgrp] + [fhrlst] + [ROTDIR] - varname1, varname2, varname3 = 'grp', 'dep', 'lst' - varval1, varval2, varval3 = get_postgroups(dict_configs['post'], cdump=cdump) - vardict = {varname2: varval2, varname3: varval3} - task = wfu.create_wf_task('post', cdump=cdump, envar=postenvars, dependency=dependencies, - metatask='post', varname=varname1, varval=varval1, vardict=vardict) - - dict_tasks['%spost' % cdump] = task - - # vrfy - deps = [] - dep_dict = {'type': 'metatask', 'name': '%spost' % cdump} - deps.append(rocoto.add_dependency(dep_dict)) - dependencies = rocoto.create_dependency(dep=deps) - task = wfu.create_wf_task('vrfy', cdump=cdump, envar=envars, dependency=dependencies) - - dict_tasks['%svrfy' % cdump] = task - - - if cdump in ['gfs'] and do_bufrsnd in ['Y', 'YES']: - #postsnd - deps = [] - dep_dict = {'type': 'task', 'name': '%sfcst' % cdump} - deps.append(rocoto.add_dependency(dep_dict)) - dependencies = rocoto.create_dependency(dep=deps) - task = wfu.create_wf_task('postsnd', cdump=cdump, envar=envars, dependency=dependencies) - - dict_tasks['%spostsnd' % cdump] = task - - if cdump in ['gfs'] and do_awips in ['Y', 'YES']: - # awips - deps = [] - data = '&ROTDIR;/%s.@Y@m@d/@H/%s.t@Hz.sfluxgrb#dep#.grib2.idx' % (cdump, cdump) - dep_dict = {'type': 'data', 'data': data} - deps.append(rocoto.add_dependency(dep_dict)) - dep_dict = {'type': 'metatask', 'name': '%spost' % cdump} - deps.append(rocoto.add_dependency(dep_dict)) - dependencies = 
rocoto.create_dependency(dep_condition='or', dep=deps) - fhrgrp = rocoto.create_envar(name='FHRGRP', value='#grp#') - fhrlst = rocoto.create_envar(name='FHRLST', value='#lst#') - ROTDIR = rocoto.create_envar(name='ROTDIR', value='&ROTDIR;') - awipsenvars = envars + [fhrgrp] + [fhrlst] + [ROTDIR] - varname1, varname2, varname3 = 'grp', 'dep', 'lst' - varval1, varval2, varval3 = get_awipsgroups(dict_configs['awips'], cdump=cdump) - vardict = {varname2: varval2, varname3: varval3} - task = wfu.create_wf_task('awips', cdump=cdump, envar=awipsenvars, dependency=dependencies, - metatask='awips', varname=varname1, varval=varval1, vardict=vardict) - - dict_tasks['%sawips' % cdump] = task - - if cdump in ['gfs'] and do_gempak in ['Y', 'YES']: - # gempak - deps = [] - dep_dict = {'type': 'metatask', 'name': '%spost' % cdump} - deps.append(rocoto.add_dependency(dep_dict)) - dependencies = rocoto.create_dependency(dep=deps) - task = wfu.create_wf_task('gempak', cdump=cdump, envar=envars, dependency=dependencies) - - dict_tasks['%sgempak' % cdump] = task - - # arch - deps = [] - dep_dict = {'type': 'task', 'name': '%svrfy' % cdump} - deps.append(rocoto.add_dependency(dep_dict)) - dep_dict = {'type': 'streq', 'left': '&ARCHIVE_TO_HPSS;', 'right': 'YES'} - deps.append(rocoto.add_dependency(dep_dict)) - dependencies = rocoto.create_dependency(dep_condition='and', dep=deps) - task = wfu.create_wf_task('arch', cdump=cdump, envar=envars, dependency=dependencies) - - dict_tasks['%sarch' % cdump] = task - - return dict_tasks - - -def get_hyb_tasks(dict_configs, cycledef='enkf'): - ''' - Create Hybrid tasks - ''' - - # Determine groups based on ensemble size and grouping - base = dict_configs['base'] - nens = base['NMEM_ENKF'] - lobsdiag_forenkf = base.get('lobsdiag_forenkf', '.false.').upper() - eupd_cyc = base.get('EUPD_CYC', 'gdas').upper() - - eobs = dict_configs['eobs'] - nens_eomg = eobs['NMEM_EOMGGRP'] - neomg_grps = nens / nens_eomg - EOMGGROUPS = ' '.join(['%02d' % x for x in range(1, neomg_grps + 1)]) - - efcs = dict_configs['efcs'] - nens_efcs = efcs['NMEM_EFCSGRP'] - nefcs_grps = nens / nens_efcs - EFCSGROUPS = ' '.join(['%02d' % x for x in range(1, nefcs_grps + 1)]) - - earc = dict_configs['earc'] - nens_earc = earc['NMEM_EARCGRP'] - nearc_grps = nens / nens_earc - EARCGROUPS = ' '.join(['%02d' % x for x in range(0, nearc_grps + 1)]) - - envars = [] - envars.append(rocoto.create_envar(name='RUN_ENVIR', value='&RUN_ENVIR;')) - envars.append(rocoto.create_envar(name='HOMEgfs', value='&HOMEgfs;')) - envars.append(rocoto.create_envar(name='EXPDIR', value='&EXPDIR;')) - envars.append(rocoto.create_envar(name='CDATE', value='@Y@m@d@H')) - #envars.append(rocoto.create_envar(name='CDUMP', value='%s' % cdump)) - envars.append(rocoto.create_envar(name='PDY', value='@Y@m@d')) - envars.append(rocoto.create_envar(name='cyc', value='@H')) - - ensgrp = rocoto.create_envar(name='ENSGRP', value='#grp#') - - dict_tasks = OrderedDict() - - if eupd_cyc in ['BOTH']: - cdumps = ['gfs', 'gdas'] - elif eupd_cyc in ['GFS']: - cdumps = ['gfs'] - elif eupd_cyc in ['GDAS']: - cdumps = ['gdas'] - - for cdump in cdumps: - - envar_cdump = rocoto.create_envar(name='CDUMP', value='%s' % cdump) - envars1 = envars + [envar_cdump] - - # eobs - deps = [] - dep_dict = {'type': 'task', 'name': '%sprep' % cdump} - deps.append(rocoto.add_dependency(dep_dict)) - dep_dict = {'type': 'metatask', 'name': '%sepmn' % 'gdas', 'offset': '-06:00:00'} - deps.append(rocoto.add_dependency(dep_dict)) - dependencies = 
rocoto.create_dependency(dep_condition='and', dep=deps) - task = wfu.create_wf_task('eobs', cdump=cdump, envar=envars1, dependency=dependencies, cycledef=cycledef) - - dict_tasks['%seobs' % cdump] = task - - # eomn, eomg - if lobsdiag_forenkf in ['.F.', '.FALSE.']: - deps = [] - dep_dict = {'type': 'task', 'name': '%seobs' % cdump} - deps.append(rocoto.add_dependency(dep_dict)) - dependencies = rocoto.create_dependency(dep=deps) - eomgenvars= envars1 + [ensgrp] - task = wfu.create_wf_task('eomg', cdump=cdump, envar=eomgenvars, dependency=dependencies, - metatask='eomn', varname='grp', varval=EOMGGROUPS, cycledef=cycledef) - - dict_tasks['%seomn' % cdump] = task - - # eupd - deps = [] - if lobsdiag_forenkf in ['.F.', '.FALSE.']: - dep_dict = {'type': 'metatask', 'name': '%seomn' % cdump} - else: - dep_dict = {'type': 'task', 'name': '%seobs' % cdump} - deps.append(rocoto.add_dependency(dep_dict)) - dependencies = rocoto.create_dependency(dep=deps) - task = wfu.create_wf_task('eupd', cdump=cdump, envar=envars1, dependency=dependencies, cycledef=cycledef) - - dict_tasks['%seupd' % cdump] = task - - # All hybrid tasks beyond this point are always executed in the GDAS cycle - cdump = 'gdas' - envar_cdump = rocoto.create_envar(name='CDUMP', value='%s' % cdump) - envars1 = envars + [envar_cdump] - cdump_eupd = 'gfs' if eupd_cyc in ['GFS'] else 'gdas' - - # ecen - deps = [] - dep_dict = {'type': 'task', 'name': '%sanal' % cdump} - deps.append(rocoto.add_dependency(dep_dict)) - dep_dict = {'type': 'task', 'name': '%seupd' % cdump_eupd} - deps.append(rocoto.add_dependency(dep_dict)) - dependencies = rocoto.create_dependency(dep_condition='and', dep=deps) - task = wfu.create_wf_task('ecen', cdump=cdump, envar=envars1, dependency=dependencies, cycledef=cycledef) - - dict_tasks['%secen' % cdump] = task - - # efmn, efcs - deps = [] - dep_dict = {'type': 'task', 'name': '%secen' % cdump} - deps.append(rocoto.add_dependency(dep_dict)) - dep_dict = {'type': 'cycleexist', 'condition': 'not', 'offset': '-06:00:00'} - deps.append(rocoto.add_dependency(dep_dict)) - dependencies = rocoto.create_dependency(dep_condition='or', dep=deps) - efcsenvars = envars1 + [ensgrp] - task = wfu.create_wf_task('efcs', cdump=cdump, envar=efcsenvars, dependency=dependencies, - metatask='efmn', varname='grp', varval=EFCSGROUPS, cycledef=cycledef) - - dict_tasks['%sefmn' % cdump] = task - - # epmn, epos - deps = [] - dep_dict = {'type': 'metatask', 'name': '%sefmn' % cdump} - deps.append(rocoto.add_dependency(dep_dict)) - dependencies = rocoto.create_dependency(dep=deps) - fhrgrp = rocoto.create_envar(name='FHRGRP', value='#grp#') - fhrlst = rocoto.create_envar(name='FHRLST', value='#lst#') - eposenvars = envars1 + [fhrgrp] + [fhrlst] - varname1, varname2, varname3 = 'grp', 'dep', 'lst' - varval1, varval2, varval3 = get_eposgroups(dict_configs['epos'], cdump=cdump) - vardict = {varname2: varval2, varname3: varval3} - task = wfu.create_wf_task('epos', cdump=cdump, envar=eposenvars, dependency=dependencies, - metatask='epmn', varname=varname1, varval=varval1, vardict=vardict) - - dict_tasks['%sepmn' % cdump] = task - - # eamn, earc - deps = [] - dep_dict = {'type': 'metatask', 'name': '%sepmn' % cdump} - deps.append(rocoto.add_dependency(dep_dict)) - dependencies = rocoto.create_dependency(dep=deps) - earcenvars = envars1 + [ensgrp] - task = wfu.create_wf_task('earc', cdump=cdump, envar=earcenvars, dependency=dependencies, - metatask='eamn', varname='grp', varval=EARCGROUPS, cycledef=cycledef) - - dict_tasks['%seamn' % cdump] = 
task - - return dict_tasks - - -def get_workflow_header(base): - ''' - Create the workflow header block - ''' - - strings = [] - - strings.append('\n') - strings.append(']>\n') - strings.append('\n') - strings.append('\n') - strings.append('\n') - strings.append('\t&EXPDIR;/logs/@Y@m@d@H.log\n') - strings.append('\n') - strings.append('\t\n') - strings.append('\t&SDATE; &SDATE; 06:00:00\n') - strings.append('\t&SDATE; &EDATE; 06:00:00\n') - strings.append('\t&SDATE; &EDATE; 06:00:00\n') - if base['gfs_cyc'] != 0: - strings.append('\t&SDATE_GFS; &EDATE_GFS; &INTERVAL_GFS;\n') - - strings.append('\n') - - return ''.join(strings) - - -def get_workflow_footer(): - ''' - Generate workflow footer - ''' - - strings = [] - strings.append('\n\n') - - return ''.join(strings) - - -def get_postgroups(post, cdump='gdas'): - - fhmin = post['FHMIN'] - fhmax = post['FHMAX'] - fhout = post['FHOUT'] - - # Get a list of all forecast hours - if cdump in ['gdas']: - fhrs = range(fhmin, fhmax+fhout, fhout) - elif cdump in ['gfs']: - fhmax = np.max([post['FHMAX_GFS_00'],post['FHMAX_GFS_06'],post['FHMAX_GFS_12'],post['FHMAX_GFS_18']]) - fhout = post['FHOUT_GFS'] - fhmax_hf = post['FHMAX_HF_GFS'] - fhout_hf = post['FHOUT_HF_GFS'] - fhrs_hf = range(fhmin, fhmax_hf+fhout_hf, fhout_hf) - fhrs = fhrs_hf + range(fhrs_hf[-1]+fhout, fhmax+fhout, fhout) - - npostgrp = post['NPOSTGRP'] - ngrps = npostgrp if len(fhrs) > npostgrp else len(fhrs) - - fhrs = ['f%03d' % f for f in fhrs] - fhrs = np.array_split(fhrs, ngrps) - fhrs = [f.tolist() for f in fhrs] - - fhrgrp = ' '.join(['%03d' % x for x in range(0, ngrps+1)]) - fhrdep = ' '.join(['f000'] + [f[-1] for f in fhrs]) - fhrlst = ' '.join(['anl'] + ['_'.join(f) for f in fhrs]) - - return fhrgrp, fhrdep, fhrlst - -def get_awipsgroups(awips, cdump='gdas'): - - fhmin = awips['FHMIN'] - fhmax = awips['FHMAX'] - fhout = awips['FHOUT'] - - # Get a list of all forecast hours - if cdump in ['gdas']: - fhrs = range(fhmin, fhmax+fhout, fhout) - elif cdump in ['gfs']: - fhmax = np.max([awips['FHMAX_GFS_00'],awips['FHMAX_GFS_06'],awips['FHMAX_GFS_12'],awips['FHMAX_GFS_18']]) - fhout = awips['FHOUT_GFS'] - fhmax_hf = awips['FHMAX_HF_GFS'] - fhout_hf = awips['FHOUT_HF_GFS'] - if fhmax > 240: - fhmax = 240 - if fhmax_hf > 240: - fhmax_hf = 240 - fhrs_hf = range(fhmin, fhmax_hf+fhout_hf, fhout_hf) - fhrs = fhrs_hf + range(fhrs_hf[-1]+fhout, fhmax+fhout, fhout) - - nawipsgrp = awips['NAWIPSGRP'] - ngrps = nawipsgrp if len(fhrs) > nawipsgrp else len(fhrs) - - fhrs = ['f%03d' % f for f in fhrs] - fhrs = np.array_split(fhrs, ngrps) - fhrs = [f.tolist() for f in fhrs] - - fhrgrp = ' '.join(['%03d' % x for x in range(0, ngrps)]) - fhrdep = ' '.join([f[-1] for f in fhrs]) - fhrlst = ' '.join(['_'.join(f) for f in fhrs]) - - return fhrgrp, fhrdep, fhrlst - -def get_eposgroups(epos, cdump='gdas'): - - fhmin = epos['FHMIN_ENKF'] - fhmax = epos['FHMAX_ENKF'] - fhout = epos['FHOUT_ENKF'] - fhrs = range(fhmin, fhmax+fhout, fhout) - - neposgrp = epos['NEPOSGRP'] - ngrps = neposgrp if len(fhrs) > neposgrp else len(fhrs) - - fhrs = ['f%03d' % f for f in fhrs] - fhrs = np.array_split(fhrs, ngrps) - fhrs = [f.tolist() for f in fhrs] - - fhrgrp = ' '.join(['%03d' % x for x in range(0, ngrps)]) - fhrdep = ' '.join([f[-1] for f in fhrs]) - fhrlst = ' '.join(['_'.join(f) for f in fhrs]) - - return fhrgrp, fhrdep, fhrlst - - -def dict_to_strings(dict_in): - - strings = [] - for key in dict_in.keys(): - strings.append(dict_in[key]) - strings.append('\n') - - return ''.join(strings) - - -def 
create_xml(dict_configs): - ''' - Given an dictionary of sourced config files, - create the workflow XML - ''' - - base = dict_configs['base'] - dohybvar = base.get('DOHYBVAR', 'NO').upper() - gfs_cyc = base.get('gfs_cyc', 0) - eupd_cyc = base.get('EUPD_CYC', 'gdas').upper() - - # Start collecting workflow pieces - preamble = get_preamble() - definitions = get_definitions(base) - workflow_header = get_workflow_header(base) - workflow_footer = get_workflow_footer() - - # Get GDAS related entities, resources, workflow - dict_gdas_resources = get_gdasgfs_resources(dict_configs) - dict_gdas_tasks = get_gdasgfs_tasks(dict_configs) - - # Get hybrid related entities, resources, workflow - if dohybvar in ['Y', 'YES']: - dict_hyb_resources = get_hyb_resources(dict_configs) - dict_hyb_tasks = get_hyb_tasks(dict_configs) - - # Get GFS cycle related entities, resources, workflow - dict_gfs_resources = get_gdasgfs_resources(dict_configs, cdump='gfs') - dict_gfs_tasks = get_gdasgfs_tasks(dict_configs, cdump='gfs') - - # Put together the XML file - xmlfile = [] - - xmlfile.append(preamble) - - xmlfile.append(definitions) - - xmlfile.append(dict_to_strings(dict_gdas_resources)) - - if dohybvar in ['Y', 'YES']: - xmlfile.append(dict_to_strings(dict_hyb_resources)) - - if gfs_cyc != 0: - xmlfile.append(dict_to_strings(dict_gfs_resources)) - elif gfs_cyc == 0 and dohybvar in ['Y', 'YES'] and eupd_cyc in ['BOTH', 'GFS']: - xmlfile.append(dict_gfs_resources['gfsprep']) - - xmlfile.append(workflow_header) - - xmlfile.append(dict_to_strings(dict_gdas_tasks)) - - if dohybvar in ['Y', 'YES']: - xmlfile.append(dict_to_strings(dict_hyb_tasks)) - - if gfs_cyc != 0: - xmlfile.append(dict_to_strings(dict_gfs_tasks)) - elif gfs_cyc == 0 and dohybvar in ['Y', 'YES'] and eupd_cyc in ['BOTH', 'GFS']: - xmlfile.append(dict_gfs_tasks['gfsprep']) - xmlfile.append('\n') - - xmlfile.append(wfu.create_firstcyc_task()) - - xmlfile.append(workflow_footer) - - # Write the XML file - fh = open('%s/%s.xml' % (base['EXPDIR'], base['PSLOT']), 'w') - fh.write(''.join(xmlfile)) - fh.close() - - return - - -if __name__ == '__main__': - main() - sys.exit(0) diff --git a/ush/rocoto/setup_workflow_fcstonly.py b/ush/rocoto/setup_workflow_fcstonly.py deleted file mode 100755 index c5300e0f0..000000000 --- a/ush/rocoto/setup_workflow_fcstonly.py +++ /dev/null @@ -1,378 +0,0 @@ -#!/usr/bin/env python - -############################################################### -# < next few lines under version control, D O N O T E D I T > -# $Date$ -# $Revision$ -# $Author$ -# $Id$ -############################################################### -''' - PROGRAM: - Create the ROCOTO workflow for a forecast only experiment given the configuration of the GFS parallel - - AUTHOR: - Rahul.Mahajan - rahul.mahajan@noaa.gov - - FILE DEPENDENCIES: - 1. config files for the parallel; e.g. config.base, config.fcst[.gfs], etc. - Without this dependency, the script will fail - - OUTPUT: - 1. PSLOT.xml: XML workflow - 2. 
PSLOT.crontab: crontab for ROCOTO run command - -''' - -import os -import sys -import numpy as np -from datetime import datetime -from argparse import ArgumentParser, ArgumentDefaultsHelpFormatter -import rocoto -import workflow_utils as wfu - - -taskplan = ['getic', 'fv3ic', 'fcst', 'post', 'vrfy', 'arch'] - -def main(): - parser = ArgumentParser(description='Setup XML workflow and CRONTAB for a forecast only experiment.', formatter_class=ArgumentDefaultsHelpFormatter) - parser.add_argument('--expdir',help='full path to experiment directory containing config files', type=str, required=False, default=os.environ['PWD']) - parser.add_argument('--cdump',help='cycle to run forecasts', type=str, choices=['gdas', 'gfs'], default='gfs', required=False) - - args = parser.parse_args() - - configs = wfu.get_configs(args.expdir) - - _base = wfu.config_parser([wfu.find_config('config.base', configs)]) - - if not os.path.samefile(args.expdir,_base['EXPDIR']): - print 'MISMATCH in experiment directories!' - print 'config.base: EXPDIR = %s' % repr(_base['EXPDIR']) - print 'input arg: --expdir = %s' % repr(args.expdir) - sys.exit(1) - - dict_configs = wfu.source_configs(configs, taskplan) - - dict_configs['base']['CDUMP'] = args.cdump - - # First create workflow XML - create_xml(dict_configs) - - # Next create the crontab - wfu.create_crontab(dict_configs['base']) - - return - - -def get_preamble(): - ''' - Generate preamble for XML - ''' - - strings = [] - - strings.append('\n') - strings.append('\n') - - return ''.join(strings) - - -def get_definitions(base): - ''' - Create entities related to the experiment - ''' - - strings = [] - - strings.append('\n') - strings.append('\t\n') - strings.append('\t\n' % base['PSLOT']) - strings.append('\t\n' % base['CDUMP']) - strings.append('\t\n' % base['CASE']) - strings.append('\n') - strings.append('\t\n') - strings.append('\t\n' % base['SDATE'].strftime('%Y%m%d%H%M')) - strings.append('\t\n' % base['EDATE'].strftime('%Y%m%d%H%M')) - if base['INTERVAL'] is None: - print 'cycle INTERVAL cannot be None' - sys.exit(1) - strings.append('\t\n' % base['INTERVAL']) - strings.append('\n') - strings.append('\t\n') - strings.append('\t\n' % base['RUN_ENVIR']) - strings.append('\n') - strings.append('\t\n') - strings.append('\t\n' % base['EXPDIR']) - strings.append('\t\n' % base['ROTDIR']) - strings.append('\t\n' % base['ICSDIR']) - strings.append('\n') - strings.append('\t\n') - strings.append('\t\n' % base['HOMEgfs']) - strings.append('\t\n' % base['BASE_JOB']) - strings.append('\n') - strings.append('\t\n') - strings.append('\t\n' % base['ACCOUNT']) - strings.append('\t\n' % base['QUEUE']) - strings.append('\t\n' % base['QUEUE_ARCH']) - strings.append('\t\n' % wfu.get_scheduler(base['machine'])) - strings.append('\n') - strings.append('\t\n') - strings.append('\t\n') - strings.append('\n') - strings.append('\t\n') - strings.append('\t\n') - strings.append('\t\n') - strings.append('\t\n') - strings.append('\n') - - return ''.join(strings) - - -def get_resources(dict_configs, cdump='gdas'): - ''' - Create resource entities - ''' - - strings = [] - - strings.append('\t\n') - strings.append('\n') - - machine = dict_configs['base']['machine'] - - for task in taskplan: - - cfg = dict_configs[task] - - wtimestr, resstr, queuestr, memstr, natstr = wfu.get_resources(machine, cfg, task, cdump=cdump) - - taskstr = '%s_%s' % (task.upper(), cdump.upper()) - - strings.append('\t\n' % (taskstr, queuestr)) - strings.append('\t\n' % (taskstr, wtimestr)) - strings.append('\t\n' % 
(taskstr, resstr)) - strings.append('\t\n' % (taskstr, memstr)) - strings.append('\t\n' % (taskstr, natstr)) - - strings.append('\n') - - strings.append('\t\n') - - return ''.join(strings) - - -def get_postgroups(post, cdump='gdas'): - - fhmin = post['FHMIN'] - fhmax = post['FHMAX'] - fhout = post['FHOUT'] - - # Get a list of all forecast hours - if cdump in ['gdas']: - fhrs = range(fhmin, fhmax+fhout, fhout) - elif cdump in ['gfs']: - fhmax = np.max([post['FHMAX_GFS_00'],post['FHMAX_GFS_06'],post['FHMAX_GFS_12'],post['FHMAX_GFS_18']]) - fhout = post['FHOUT_GFS'] - fhmax_hf = post['FHMAX_HF_GFS'] - fhout_hf = post['FHOUT_HF_GFS'] - fhrs_hf = range(fhmin, fhmax_hf+fhout_hf, fhout_hf) - fhrs = fhrs_hf + range(fhrs_hf[-1]+fhout, fhmax+fhout, fhout) - - npostgrp = post['NPOSTGRP'] - ngrps = npostgrp if len(fhrs) > npostgrp else len(fhrs) - - fhrs = ['f%03d' % f for f in fhrs] - fhrs = np.array_split(fhrs, ngrps) - fhrs = [f.tolist() for f in fhrs] - - fhrgrp = ' '.join(['%03d' % x for x in range(1, ngrps+1)]) - fhrdep = ' '.join([f[-1] for f in fhrs]) - fhrlst = ' '.join(['_'.join(f) for f in fhrs]) - - return fhrgrp, fhrdep, fhrlst - - -def get_workflow(dict_configs, cdump='gdas'): - ''' - Create tasks for forecast only workflow - ''' - - envars = [] - envars.append(rocoto.create_envar(name='RUN_ENVIR', value='&RUN_ENVIR;')) - envars.append(rocoto.create_envar(name='HOMEgfs', value='&HOMEgfs;')) - envars.append(rocoto.create_envar(name='EXPDIR', value='&EXPDIR;')) - envars.append(rocoto.create_envar(name='CDATE', value='@Y@m@d@H')) - envars.append(rocoto.create_envar(name='CDUMP', value='&CDUMP;')) - envars.append(rocoto.create_envar(name='PDY', value='@Y@m@d')) - envars.append(rocoto.create_envar(name='cyc', value='@H')) - - tasks = [] - - # getics - deps = [] - data = '&ICSDIR;/@Y@m@d@H/&CDUMP;/pgbanl.&CDUMP;.@Y@m@d@H' - dep_dict = {'type':'data', 'data':data} - deps.append(rocoto.add_dependency(dep_dict)) - data = '&ICSDIR;/@Y@m@d@H/&CDUMP;/siganl.&CDUMP;.@Y@m@d@H' - dep_dict = {'type':'data', 'data':data} - deps.append(rocoto.add_dependency(dep_dict)) - data = '&ICSDIR;/@Y@m@d@H/&CDUMP;/sfcanl.&CDUMP;.@Y@m@d@H' - dep_dict = {'type':'data', 'data':data} - deps.append(rocoto.add_dependency(dep_dict)) - deps = rocoto.create_dependency(dep_condition='and', dep=deps) - dependencies = rocoto.create_dependency(dep_condition='not', dep=deps) - task = wfu.create_wf_task('getic', cdump=cdump, envar=envars, dependency=dependencies) - tasks.append(task) - tasks.append('\n') - - # chgres - deps = [] - data = '&ICSDIR;/@Y@m@d@H/&CDUMP;/siganl.&CDUMP;.@Y@m@d@H' - dep_dict = {'type':'data', 'data':data} - deps.append(rocoto.add_dependency(dep_dict)) - data = '&ICSDIR;/@Y@m@d@H/&CDUMP;/sfcanl.&CDUMP;.@Y@m@d@H' - dep_dict = {'type':'data', 'data':data} - deps.append(rocoto.add_dependency(dep_dict)) - dependencies = rocoto.create_dependency(dep_condition='and', dep=deps) - - deps = [] - data = '&ICSDIR;/@Y@m@d@H/&CDUMP;/&CASE;/INPUT/gfs_data.tile6.nc' - dep_dict = {'type':'data', 'data':data} - deps.append(rocoto.add_dependency(dep_dict)) - data = '&ICSDIR;/@Y@m@d@H/&CDUMP;/&CASE;/INPUT/sfc_data.tile6.nc' - dep_dict = {'type':'data', 'data':data} - deps.append(rocoto.add_dependency(dep_dict)) - deps = rocoto.create_dependency(dep_condition='and', dep=deps) - dependencies2 = rocoto.create_dependency(dep_condition='not', dep=deps) - - deps = [] - deps.append(dependencies) - deps.append(dependencies2) - dependencies = rocoto.create_dependency(dep_condition='and', dep=deps) - - task = 
wfu.create_wf_task('fv3ic', cdump=cdump, envar=envars, dependency=dependencies) - tasks.append(task) - tasks.append('\n') - - # fcst - deps = [] - data = '&ICSDIR;/@Y@m@d@H/&CDUMP;/&CASE;/INPUT/gfs_data.tile6.nc' - dep_dict = {'type':'data', 'data':data} - deps.append(rocoto.add_dependency(dep_dict)) - data = '&ICSDIR;/@Y@m@d@H/&CDUMP;/&CASE;/INPUT/sfc_data.tile6.nc' - dep_dict = {'type':'data', 'data':data} - deps.append(rocoto.add_dependency(dep_dict)) - dependencies = rocoto.create_dependency(dep_condition='and', dep=deps) - task = wfu.create_wf_task('fcst', cdump=cdump, envar=envars, dependency=dependencies) - tasks.append(task) - tasks.append('\n') - - # post - deps = [] - data = '&ROTDIR;/%s.@Y@m@d/@H/%s.t@Hz.log#dep#.nemsio' % (cdump, cdump) - dep_dict = {'type': 'data', 'data': data} - deps.append(rocoto.add_dependency(dep_dict)) - dependencies = rocoto.create_dependency(dep=deps) - fhrgrp = rocoto.create_envar(name='FHRGRP', value='#grp#') - fhrlst = rocoto.create_envar(name='FHRLST', value='#lst#') - ROTDIR = rocoto.create_envar(name='ROTDIR', value='&ROTDIR;') - postenvars = envars + [fhrgrp] + [fhrlst] + [ROTDIR] - varname1, varname2, varname3 = 'grp', 'dep', 'lst' - varval1, varval2, varval3 = get_postgroups(dict_configs['post'], cdump=cdump) - vardict = {varname2: varval2, varname3: varval3} - task = wfu.create_wf_task('post', cdump=cdump, envar=postenvars, dependency=dependencies, - metatask='post', varname=varname1, varval=varval1, vardict=vardict) - tasks.append(task) - tasks.append('\n') - - # vrfy - deps = [] - dep_dict = {'type':'metatask', 'name':'%spost' % cdump} - deps.append(rocoto.add_dependency(dep_dict)) - dependencies = rocoto.create_dependency(dep=deps) - task = wfu.create_wf_task('vrfy', cdump=cdump, envar=envars, dependency=dependencies) - tasks.append(task) - tasks.append('\n') - - # arch - deps = [] - dep_dict = {'type':'task', 'name':'%svrfy' % cdump} - deps.append(rocoto.add_dependency(dep_dict)) - dep_dict = {'type':'streq', 'left':'&ARCHIVE_TO_HPSS;', 'right':'YES'} - deps.append(rocoto.add_dependency(dep_dict)) - dependencies = rocoto.create_dependency(dep_condition='and', dep=deps) - task = wfu.create_wf_task('arch', cdump=cdump, envar=envars, dependency=dependencies, final=True) - tasks.append(task) - tasks.append('\n') - - return ''.join(tasks) - - -def get_workflow_body(dict_configs, cdump='gdas'): - ''' - Create the workflow body - ''' - - strings = [] - - strings.append('\n') - strings.append(']>\n') - strings.append('\n') - strings.append('\n') - strings.append('\n') - strings.append('\t&EXPDIR;/logs/@Y@m@d@H.log\n') - strings.append('\n') - strings.append('\t\n') - strings.append('\t&SDATE; &EDATE; &INTERVAL;\n' % cdump) - strings.append('\n') - strings.append(get_workflow(dict_configs, cdump=cdump)) - strings.append('\n') - strings.append('\n') - - return ''.join(strings) - - -def create_xml(dict_configs): - ''' - Given an experiment directory containing config files and - XML directory containing XML templates, create the workflow XML - ''' - - - dict_configs['base']['INTERVAL'] = wfu.get_gfs_interval(dict_configs['base']['gfs_cyc']) - base = dict_configs['base'] - - preamble = get_preamble() - definitions = get_definitions(base) - resources = get_resources(dict_configs, cdump=base['CDUMP']) - workflow = get_workflow_body(dict_configs, cdump=base['CDUMP']) - - # Start writing the XML file - fh = open('%s/%s.xml' % (base['EXPDIR'], base['PSLOT']), 'w') - - fh.write(preamble) - fh.write(definitions) - fh.write(resources) - fh.write(workflow) - 
- fh.close() - - return - -if __name__ == '__main__': - main() - sys.exit(0) diff --git a/ush/rocoto/workflow_utils.py b/ush/rocoto/workflow_utils.py deleted file mode 100755 index 6bbaaf921..000000000 --- a/ush/rocoto/workflow_utils.py +++ /dev/null @@ -1,341 +0,0 @@ -#!/usr/bin/env python - -############################################################### -# < next few lines under version control, D O N O T E D I T > -# $Date$ -# $Revision$ -# $Author$ -# $Id$ -############################################################### -''' - Module containing functions all workflow setups require -''' -import random -import os -import sys -import glob -import subprocess -import numpy as np -from distutils.spawn import find_executable -from datetime import datetime, timedelta -import rocoto - -DATE_ENV_VARS=['CDATE','SDATE','EDATE'] -SCHEDULER_MAP={'ZEUS':'moabtorque', - 'THEIA':'moabtorque', - 'WCOSS':'lsf', - 'WCOSS_DELL_P3':'lsf', - 'WCOSS_C':'lsfcray'} - -class UnknownMachineError(Exception): pass -class UnknownConfigError(Exception): pass -class ShellScriptException(Exception): - def __init__(self,scripts,errors): - self.scripts = scripts - self.errors = errors - super(ShellScriptException,self).__init__( - str(errors)+ - ': error processing'+ - (' '.join(scripts))) - -def get_shell_env(scripts): - vars=dict() - runme=''.join([ 'source %s ; '%(s,) for s in scripts ]) - magic='--- ENVIRONMENT BEGIN %d ---'%random.randint(0,64**5) - runme+='/bin/echo -n "%s" ; /usr/bin/env -0'%(magic,) - with open('/dev/null','wb+') as null: - env=subprocess.Popen(runme,shell=True,stdin=null.fileno(), - stdout=subprocess.PIPE) - (out,err)=env.communicate() - begin=out.find(magic) - if begin<0: - raise ShellScriptException(scripts,'Cannot find magic string; ' - 'at least one script failed: '+repr(out)) - for entry in out[begin+len(magic):].split('\x00'): - iequal=entry.find('=') - vars[entry[0:iequal]] = entry[iequal+1:] - return vars - -def get_script_env(scripts): - default_env=get_shell_env([]) - and_script_env=get_shell_env(scripts) - vars_just_in_script=set(and_script_env)-set(default_env) - union_env=dict(default_env) - union_env.update(and_script_env) - return dict([ (v,union_env[v]) for v in vars_just_in_script ]) - -def cast_or_not(type,value): - try: - return type(value) - except ValueError: - return value - -def get_configs(expdir): - """ - Given an experiment directory containing config files, - return a list of configs minus the ones ending with ".default" - """ - result=list() - for config in glob.glob('%s/config.*' % expdir): - if not config.endswith('.default'): - result.append(config) - return result - -def find_config(config_name, configs): - - for config in configs: - if config_name == os.path.basename(config): - return config - - raise UnknownConfigError("%s does not exist (known: %s), ABORT!" 
% ( - config_name,repr(config_name))) - -def source_configs(configs, tasks): - ''' - Given list of config files, source them - and return a dictionary for each task - Every task depends on config.base - ''' - - dict_configs = {} - - # Return config.base as well - dict_configs['base'] = config_parser([find_config('config.base', configs)]) - - # Source the list of input tasks - for task in tasks: - - files = [] - - files.append(find_config('config.base', configs)) - - if task in ['eobs', 'eomg']: - files.append(find_config('config.anal', configs)) - files.append(find_config('config.eobs', configs)) - elif task in ['eupd']: - files.append(find_config('config.anal', configs)) - files.append(find_config('config.eupd', configs)) - elif task in ['efcs']: - files.append(find_config('config.fcst', configs)) - files.append(find_config('config.efcs', configs)) - else: - files.append(find_config('config.%s' % task, configs)) - - print 'sourcing config.%s' % task - dict_configs[task] = config_parser(files) - - return dict_configs - - -def config_parser(files): - """ - Given the name of config file, key-value pair of all variables in the config file is returned as a dictionary - :param files: config file or list of config files - :type files: list or str or unicode - :return: Key value pairs representing the environment variables defined - in the script. - :rtype: dict - """ - if isinstance(files,basestring): - files=[files] - varbles=dict() - for key,value in get_script_env(files).iteritems(): - if key in DATE_ENV_VARS: # likely a date, convert to datetime - varbles[key] = datetime.strptime(value,'%Y%m%d%H') - elif '.' in value: # Likely a number and that too a float - varbles[key] = cast_or_not(float,value) - else: # Still could be a number, may be an integer - varbles[key] = cast_or_not(int,value) - - return varbles - -def get_scheduler(machine): - """Determine the scheduler""" - try: - return SCHEDULER_MAP[machine] - except KeyError: - raise UnknownMachineError('Unknown machine: %s'%(machine,)) - - -def create_wf_task(task, cdump='gdas', cycledef=None, envar=None, dependency=None, \ - metatask=None, varname=None, varval=None, vardict=None, \ - final=False): - - if metatask is None: - taskstr = '%s' % task - else: - taskstr = '%s#%s#' % (task, varname) - metataskstr = '%s%s' % (cdump, metatask) - metatask_dict = {'metataskname': metataskstr, \ - 'varname': '%s' % varname, \ - 'varval': '%s' % varval, \ - 'vardict': vardict} - - taskstr = '%s%s' % (cdump, taskstr) - cycledefstr = cdump if cycledef is None else cycledef - - task_dict = {'taskname': '%s' % taskstr, \ - 'cycledef': '%s' % cycledefstr, \ - 'maxtries': '&MAXTRIES;', \ - 'command': '&JOBS_DIR;/%s.sh' % task, \ - 'jobname': '&PSLOT;_%s_@H' % taskstr, \ - 'account': '&ACCOUNT;', \ - 'queue': '&QUEUE_%s_%s;' % (task.upper(), cdump.upper()), \ - 'walltime': '&WALLTIME_%s_%s;' % (task.upper(), cdump.upper()), \ - 'native': '&NATIVE_%s_%s;' % (task.upper(), cdump.upper()), \ - 'memory': '&MEMORY_%s_%s;' % (task.upper(), cdump.upper()), \ - 'resources': '&RESOURCES_%s_%s;' % (task.upper(), cdump.upper()), \ - 'log': '&ROTDIR;/logs/@Y@m@d@H/%s.log' % taskstr, \ - 'envar': envar, \ - 'dependency': dependency, \ - 'final': final} - - if metatask is None: - task = rocoto.create_task(task_dict) - else: - task = rocoto.create_metatask(task_dict, metatask_dict) - task = ''.join(task) - - return task - - -def create_firstcyc_task(cdump='gdas'): - ''' - This task is needed to run to finalize the first half cycle - ''' - - task = 'firstcyc' - taskstr = '%s' 
% task - - deps = [] - data = '&EXPDIR;/logs/@Y@m@d@H.log' - dep_dict = {'type':'data', 'data':data, 'offset':'24:00:00'} - deps.append(rocoto.add_dependency(dep_dict)) - dep_dict = {'type':'cycleexist', 'condition':'not', 'offset':'-06:00:00'} - deps.append(rocoto.add_dependency(dep_dict)) - dependencies = rocoto.create_dependency(dep_condition='and', dep=deps) - - task_dict = {'taskname': '%s' % taskstr, \ - 'cycledef': 'first', \ - 'maxtries': '&MAXTRIES;', \ - 'final' : True, \ - 'command': 'sleep 1', \ - 'jobname': '&PSLOT;_%s_@H' % taskstr, \ - 'account': '&ACCOUNT;', \ - 'queue': '&QUEUE_ARCH;', \ - 'walltime': '&WALLTIME_ARCH_%s;' % cdump.upper(), \ - 'native': '&NATIVE_ARCH_%s;' % cdump.upper(), \ - 'resources': '&RESOURCES_ARCH_%s;' % cdump.upper(), \ - 'log': '&ROTDIR;/logs/@Y@m@d@H/%s.log' % taskstr, \ - 'dependency': dependencies} - - task = rocoto.create_task(task_dict) - - return ''.join(task) - - -def get_gfs_interval(gfs_cyc): - ''' - return interval in hours based on gfs_cyc - ''' - - # Get interval from cyc_input - if gfs_cyc == 0: - interval = None - if gfs_cyc == 1: - interval = '24:00:00' - elif gfs_cyc == 2: - interval = '12:00:00' - elif gfs_cyc == 4: - interval = '06:00:00' - - return interval - - -def get_resources(machine, cfg, task, cdump='gdas'): - - if cdump in ['gfs'] and 'wtime_%s_gfs' % task in cfg.keys(): - wtimestr = cfg['wtime_%s_gfs' % task] - else: - wtimestr = cfg['wtime_%s' % task] - - ltask = 'eobs' if task in ['eomg'] else task - - memory = cfg.get('memory_%s' % ltask, None) - - if cdump in ['gfs'] and 'npe_%s_gfs' % task in cfg.keys(): - tasks = cfg['npe_%s_gfs' % ltask] - else: - tasks = cfg['npe_%s' % ltask] - - if cdump in ['gfs'] and 'npe_node_%s_gfs' % task in cfg.keys(): - ppn = cfg['npe_node_%s_gfs' % ltask] - else: - ppn = cfg['npe_node_%s' % ltask] - - if machine in [ 'WCOSS_DELL_P3']: - threads = cfg['nth_%s' % ltask] - - nodes = np.int(np.ceil(np.float(tasks) / np.float(ppn))) - - memstr = '' if memory is None else str(memory) - natstr = '' - - if machine in ['ZEUS', 'THEIA', 'WCOSS_C', 'WCOSS_DELL_P3']: - resstr = '%d:ppn=%d' % (nodes, ppn) - - if machine in ['WCOSS_C'] and task in ['arch', 'earc', 'getic']: - resstr += '' - - if machine in ['WCOSS_DELL_P3']: - natstr = "-R 'affinity[core(%d)]'" % (threads) - - if task in ['arch', 'earc', 'getic']: - natstr = "-R 'affinity[core(1)]'" - - elif machine in ['WCOSS']: - resstr = '%d' % tasks - - queuestr = '&QUEUE_ARCH;' if task in ['arch', 'earc', 'getic'] else '&QUEUE;' - - return wtimestr, resstr, queuestr, memstr, natstr - - -def create_crontab(base, cronint=5): - ''' - Create crontab to execute rocotorun every cronint (5) minutes - ''' - - # No point creating a crontab if rocotorun is not available. - rocotoruncmd = find_executable('rocotorun') - if rocotoruncmd is None: - print 'Failed to find rocotorun, crontab will not be created' - return - - cronintstr = '*/%d * * * *' % cronint - rocotorunstr = '%s -d %s/%s.db -w %s/%s.xml' % (rocotoruncmd, base['EXPDIR'], base['PSLOT'], base['EXPDIR'], base['PSLOT']) - - # On WCOSS, rocoto module needs to be loaded everytime cron runs - if base['machine'] in ['WCOSS']: - rocotoloadstr = '. 
/usrx/local/Modules/default/init/sh; module use -a /usrx/local/emc_rocoto/modulefiles; module load rocoto/20170119-master)' - rocotorunstr = '(%s %s)' % (rocotoloadstr, rocotorunstr) - - try: - REPLYTO = os.environ['REPLYTO'] - except: - REPLYTO = '' - - strings = [] - - strings.append('\n') - strings.append('#################### %s ####################\n' % base['PSLOT']) - strings.append('MAILTO="%s"\n' % REPLYTO) - strings.append('%s %s\n' % (cronintstr, rocotorunstr)) - strings.append('#################################################################\n') - strings.append('\n') - - fh = open(os.path.join(base['EXPDIR'], '%s.crontab' % base['PSLOT']), 'w') - fh.write(''.join(strings)) - fh.close() - - return diff --git a/ush/set_FV3nml_ens_stoch_seeds.py b/ush/set_FV3nml_ens_stoch_seeds.py new file mode 100644 index 000000000..9d9ae4b39 --- /dev/null +++ b/ush/set_FV3nml_ens_stoch_seeds.py @@ -0,0 +1,137 @@ +#!/usr/bin/env python3 + +import unittest +import os +from textwrap import dedent +from datetime import datetime + +from python_utils import print_input_args, print_info_msg, print_err_msg_exit,\ + date_to_str, mkdir_vrfy,cp_vrfy,\ + import_vars,set_env_var,\ + define_macos_utilities, cfg_to_yaml_str + +from set_namelist import set_namelist + +def set_FV3nml_ens_stoch_seeds(cdate): + """ + This function, for an ensemble-enabled experiment + (i.e. for an experiment for which the workflow configuration variable + DO_ENSEMBLE has been set to "TRUE"), creates new namelist files with + unique stochastic "seed" parameters, using a base namelist file in the + ${EXPTDIR} directory as a template. These new namelist files are stored + within each member directory housed within each cycle directory. Files + of any two ensemble members differ only in their stochastic "seed" + parameter values. These namelist files are generated when this file is + called as part of the RUN_FCST_TN task. + + Args: + cdate + Returns: + None + """ + + print_input_args(locals()) + + # import all environment variables + import_vars() + + # + #----------------------------------------------------------------------- + # + # For a given cycle and member, generate a namelist file with unique + # seed values. 
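+    # To make the seeds unique, each one below is built from the pattern
+    # cdate_i*1000 + ensmem_num*10 + offset, where cdate_i is the cycle date
+    # as an integer of the form YYYYMMDD and the offset distinguishes the
+    # scheme (1 for iseed_sppt, 2 for iseed_shum, 3 for iseed_skeb, 9 for the
+    # LSM SPP seed, and ISEED_SPP[i] for the SPP seeds).  As a purely
+    # illustrative example (hypothetical cycle 20210101, member 2):
+    #   iseed_sppt = 20210101*1000 + 2*10 + 1 = 20210101021
+    #   iseed_shum = 20210101*1000 + 2*10 + 2 = 20210101022
+    # which is what keeps the seed values distinct across members and cycles
+    # for each scheme.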
+ # + #----------------------------------------------------------------------- + # + ensmem_name=f"mem{ENSMEM_INDX}" + + fv3_nml_ensmem_fp=os.path.join(CYCLE_BASEDIR, f"{date_to_str(cdate,True)}{os.sep}{ensmem_name}{os.sep}{FV3_NML_FN}") + + ensmem_num=ENSMEM_INDX + + cdate_i = int(cdate.strftime('%Y%m%d')) + + settings = {} + nam_stochy_dict = {} + + if DO_SPP: + iseed_sppt=cdate_i*1000 + ensmem_num*10 + 1 + nam_stochy_dict.update({ + 'iseed_sppt': iseed_sppt + }) + + if DO_SHUM: + iseed_shum=cdate_i*1000 + ensmem_num*10 + 2 + nam_stochy_dict.update({ + 'iseed_shum': iseed_shum + }) + + if DO_SKEB: + iseed_skeb=cdate_i*1000 + ensmem_num*10 + 3 + nam_stochy_dict.update({ + 'iseed_skeb': iseed_skeb + }) + + settings['nam_stochy'] = nam_stochy_dict + + if DO_SPP: + num_iseed_spp=len(ISEED_SPP) + iseed_spp = [None]*num_iseed_spp + for i in range(num_iseed_spp): + iseed_spp[i]=cdate_i*1000 + ensmem_num*10 + ISEED_SPP[i] + + settings['nam_spperts'] = { + 'iseed_spp': iseed_spp + } + else: + settings['nam_spperts'] = {} + + if DO_LSM_SPP: + iseed_lsm_spp=cdate_i*1000 + ensmem_num*10 + 9 + + settings['nam_sppperts'] = { + 'iseed_lndp': [iseed_lsm_spp] + } + + settings_str = cfg_to_yaml_str(settings) + try: + set_namelist(["-q", "-n", FV3_NML_FP, "-u", settings_str, "-o", fv3_nml_ensmem_fp]) + except: + print_err_msg_exit(dedent(f''' + Call to python script set_namelist.py to set the variables in the FV3 + namelist file that specify the paths to the surface climatology files + failed. Parameters passed to this script are: + Full path to base namelist file: + FV3_NML_FP = \"{FV3_NML_FP}\" + Full path to output namelist file: + fv3_nml_ensmem_fp = \"{fv3_nml_ensmem_fp}\" + Namelist settings specified on command line (these have highest precedence): + settings = + {settings_str}''')) + +class Testing(unittest.TestCase): + def test_set_FV3nml_ens_stoch_seeds(self): + set_FV3nml_ens_stoch_seeds(cdate=self.cdate) + def setUp(self): + define_macos_utilities(); + set_env_var('DEBUG',True) + set_env_var('VERBOSE',True) + USHDIR = os.path.dirname(os.path.abspath(__file__)) + EXPTDIR = os.path.join(USHDIR,"test_data","expt"); + cp_vrfy(os.path.join(USHDIR,f'templates{os.sep}input.nml.FV3'), \ + os.path.join(EXPTDIR,'input.nml')) + self.cdate=datetime(2021, 1, 1) + + mkdir_vrfy("-p", os.path.join(EXPTDIR,f'{date_to_str(self.cdate,True)}{os.sep}mem0')) + set_env_var("USHDIR",USHDIR) + set_env_var("CYCLE_BASEDIR",EXPTDIR) + set_env_var("ENSMEM_INDX",0) + set_env_var("FV3_NML_FN","input.nml") + set_env_var("FV3_NML_FP",os.path.join(EXPTDIR,"input.nml")) + set_env_var("DO_SPP",True) + set_env_var("DO_SHUM",True) + set_env_var("DO_SKEB",True) + set_env_var("DO_LSM_SPP",True) + ISEED_SPP = [ 4, 4, 4, 4, 4] + set_env_var("ISEED_SPP",ISEED_SPP) + diff --git a/ush/set_FV3nml_stoch_params.sh b/ush/set_FV3nml_ens_stoch_seeds.sh similarity index 87% rename from ush/set_FV3nml_stoch_params.sh rename to ush/set_FV3nml_ens_stoch_seeds.sh index 8abfc45d4..64322d664 100644 --- a/ush/set_FV3nml_stoch_params.sh +++ b/ush/set_FV3nml_ens_stoch_seeds.sh @@ -13,7 +13,7 @@ # #----------------------------------------------------------------------- # -function set_FV3nml_stoch_params() { +function set_FV3nml_ens_stoch_seeds() { # #----------------------------------------------------------------------- # @@ -98,48 +98,67 @@ function set_FV3nml_stoch_params() { ensmem_num=$((10#${ENSMEM_INDX})) + settings="\ +'nam_stochy': {" + + if [ ${DO_SPPT} = TRUE ]; then + + iseed_sppt=$(( cdate*1000 + ensmem_num*10 + 1 )) + settings="$settings + 
'iseed_sppt': ${iseed_sppt}," + + fi + + if [ ${DO_SHUM} = TRUE ]; then + iseed_shum=$(( cdate*1000 + ensmem_num*10 + 2 )) + settings="$settings + 'iseed_shum': ${iseed_shum}," + + fi + + if [ ${DO_SKEB} = TRUE ]; then + iseed_skeb=$(( cdate*1000 + ensmem_num*10 + 3 )) - iseed_sppt=$(( cdate*1000 + ensmem_num*10 + 1 )) - iseed_lsm_spp=$(( cdate*1000 + ensmem_num*10 + 9)) + settings="$settings + 'iseed_skeb': ${iseed_skeb}," + + fi + settings="$settings + }" + + settings="$settings +'nam_sppperts': {" + + if [ ${DO_SPP} = TRUE ]; then num_iseed_spp=${#ISEED_SPP[@]} for (( i=0; i<${num_iseed_spp}; i++ )); do iseed_spp[$i]=$(( cdate*1000 + ensmem_num*10 + ${ISEED_SPP[$i]} )) done - settings="" - - if [ ${DO_SPPT} = TRUE ] || [ ${DO_SHUM} = TRUE ] || [ ${DO_SKEB} = TRUE ]; then - - settings=$settings"\ -'nam_stochy': { - 'iseed_shum': ${iseed_shum}, - 'iseed_skeb': ${iseed_skeb}, - 'iseed_sppt': ${iseed_sppt}, - } -" + settings="$settings + 'iseed_spp': [ $( printf "%s, " "${iseed_spp[@]}" ) ]," + fi - if [ ${DO_SPP} = TRUE ]; then + settings="$settings + }" - settings=$settings"\ -'nam_sppperts': { - 'iseed_spp': [ $( printf "%s, " "${iseed_spp[@]}" ) ] - } -" - fi + settings="$settings +'nam_sfcperts': {" if [ ${DO_LSM_SPP} = TRUE ]; then + + iseed_lsm_spp=$(( cdate*1000 + ensmem_num*10 + 9)) + + settings="$settings + 'iseed_lndp': [ $( printf "%s, " "${iseed_lsm_spp[@]}" ) ]," - settings=$settings"\ -'nam_sfcperts': { - 'iseed_lndp': [ $( printf "%s, " "${iseed_lsm_spp[@]}" ) ] - } -" fi - if [ -n "$settings" ]; then + settings="$settings + }" $USHDIR/set_namelist.py -q \ -n ${FV3_NML_FP} \ @@ -156,14 +175,6 @@ failed. Parameters passed to this script are: Namelist settings specified on command line (these have highest precedence): settings = $settings" - - else - - print_info_msg "\ -The variable \"settings\" is empty, so not setting any namelist values." - - fi - # #----------------------------------------------------------------------- # diff --git a/ush/set_FV3nml_sfc_climo_filenames.py b/ush/set_FV3nml_sfc_climo_filenames.py new file mode 100644 index 000000000..518f81ae3 --- /dev/null +++ b/ush/set_FV3nml_sfc_climo_filenames.py @@ -0,0 +1,141 @@ +#!/usr/bin/env python3 + +import unittest +import os +from textwrap import dedent + +from python_utils import print_input_args, print_info_msg, print_err_msg_exit,\ + check_var_valid_value,mv_vrfy,mkdir_vrfy,cp_vrfy,\ + rm_vrfy,import_vars,set_env_var,\ + define_macos_utilities,find_pattern_in_str,cfg_to_yaml_str + +from set_namelist import set_namelist + +def set_FV3nml_sfc_climo_filenames(): + """ + This function sets the values of the variables in + the forecast model's namelist file that specify the paths to the surface + climatology files on the FV3LAM native grid (which are either pregenerated + or created by the MAKE_SFC_CLIMO_TN task). Note that the workflow + generation scripts create symlinks to these surface climatology files + in the FIXLAM directory, and the values in the namelist file that get + set by this function are relative or full paths to these links. 
+ + Args: + None + Returns: + None + """ + + # import all environment variables + import_vars() + + # The regular expression regex_search set below will be used to extract + # from the elements of the array FV3_NML_VARNAME_TO_SFC_CLIMO_FIELD_MAPPING + # the name of the namelist variable to set and the corresponding surface + # climatology field from which to form the name of the surface climatology file + regex_search = "^[ ]*([^| ]+)[ ]*[|][ ]*([^| ]+)[ ]*$" + + # Set the suffix of the surface climatology files. + suffix = "tileX.nc" + + # create yaml-complaint string + settings = {} + + dummy_run_dir = os.path.join(EXPTDIR, "any_cyc") + if DO_ENSEMBLE == "TRUE": + dummy_run_dir += os.sep + "any_ensmem" + + namsfc_dict = {} + for mapping in FV3_NML_VARNAME_TO_SFC_CLIMO_FIELD_MAPPING: + tup = find_pattern_in_str(regex_search, mapping) + nml_var_name = tup[0] + sfc_climo_field_name = tup[1] + + check_var_valid_value(sfc_climo_field_name, SFC_CLIMO_FIELDS) + + fp = os.path.join(FIXLAM, f'{CRES}.{sfc_climo_field_name}.{suffix}') + if RUN_ENVIR != "nco": + fp = os.path.relpath(os.path.realpath(fp), start=dummy_run_dir) + + namsfc_dict[nml_var_name] = fp + + settings['namsfc_dict'] = namsfc_dict + settings_str = cfg_to_yaml_str(settings) + + + print_info_msg(f''' + The variable \"settings\" specifying values of the namelist variables + has been set as follows: + + settings = + {settings_str}''', verbose=DEBUG) + + # Rename the FV3 namelist and call set_namelist + fv3_nml_base_fp = f'{FV3_NML_FP}.base' + mv_vrfy(f'{FV3_NML_FP} {fv3_nml_base_fp}') + + try: + set_namelist(["-q", "-n", fv3_nml_base_fp, "-u", settings_str, "-o", FV3_NML_FP]) + except: + print_err_msg_exit(f''' + Call to python script set_namelist.py to set the variables in the FV3 + namelist file that specify the paths to the surface climatology files + failed. 
Parameters passed to this script are: + Full path to base namelist file: + fv3_nml_base_fp = \"{fv3_nml_base_fp}\" + Full path to output namelist file: + FV3_NML_FP = \"{FV3_NML_FP}\" + Namelist settings specified on command line (these have highest precedence): + settings = + {settings_str}''') + + rm_vrfy(f'{fv3_nml_base_fp}') + +class Testing(unittest.TestCase): + def test_set_FV3nml_sfc_climo_filenames(self): + set_FV3nml_sfc_climo_filenames() + def setUp(self): + define_macos_utilities(); + set_env_var('DEBUG',True) + set_env_var('VERBOSE',True) + USHDIR = os.path.dirname(os.path.abspath(__file__)) + EXPTDIR = os.path.join(USHDIR, "test_data", "expt"); + FIXLAM = os.path.join(EXPTDIR, "fix_lam") + mkdir_vrfy("-p",FIXLAM) + cp_vrfy(os.path.join(USHDIR,f'templates{os.sep}input.nml.FV3'), \ + os.path.join(EXPTDIR,'input.nml')) + set_env_var("USHDIR",USHDIR) + set_env_var("EXPTDIR",EXPTDIR) + set_env_var("FIXLAM",FIXLAM) + set_env_var("DO_ENSEMBLE",False) + set_env_var("CRES","C3357") + set_env_var("RUN_ENVIR","nco") + set_env_var("FV3_NML_FP",os.path.join(EXPTDIR,"input.nml")) + + FV3_NML_VARNAME_TO_SFC_CLIMO_FIELD_MAPPING=[ + "FNALBC | snowfree_albedo", + "FNALBC2 | facsf", + "FNTG3C | substrate_temperature", + "FNVEGC | vegetation_greenness", + "FNVETC | vegetation_type", + "FNSOTC | soil_type", + "FNVMNC | vegetation_greenness", + "FNVMXC | vegetation_greenness", + "FNSLPC | slope_type", + "FNABSC | maximum_snow_albedo" + ] + SFC_CLIMO_FIELDS=[ + "facsf", + "maximum_snow_albedo", + "slope_type", + "snowfree_albedo", + "soil_type", + "substrate_temperature", + "vegetation_greenness", + "vegetation_type" + ] + set_env_var("FV3_NML_VARNAME_TO_SFC_CLIMO_FIELD_MAPPING", + FV3_NML_VARNAME_TO_SFC_CLIMO_FIELD_MAPPING) + set_env_var("SFC_CLIMO_FIELDS",SFC_CLIMO_FIELDS) + diff --git a/ush/set_cycle_dates.py b/ush/set_cycle_dates.py new file mode 100644 index 000000000..69a707b97 --- /dev/null +++ b/ush/set_cycle_dates.py @@ -0,0 +1,54 @@ +#!/usr/bin/env python3 + +import unittest +from datetime import datetime,timedelta,date + +from python_utils import print_input_args, print_err_msg_exit + +def set_cycle_dates(date_start, date_end, cycle_hrs, incr_cycl_freq): + """ This file defines a function that, given the starting date (date_start, + in the form YYYYMMDD), the ending date (date_end, in the form YYYYMMDD), + and an array containing the cycle hours for each day (whose elements + have the form HH), returns an array of cycle date-hours whose elements + have the form YYYYMMDD. Here, YYYY is a four-digit year, MM is a two- + digit month, DD is a two-digit day of the month, and HH is a two-digit + hour of the day. + + Args: + date_start: start date + date_end: end date + cycle_hrs: [ HH0, HH1, ...] 
+ incr_cycl_freq: cycle frequency increment in hours + Returns: + A list of dates in a format YYYYMMDDHH + """ + + print_input_args(locals()) + + #calculate date increment + if incr_cycl_freq <= 24: + incr_days = 1 + else: + incr_days = incr_cycl_freq // 24 + if incr_cycl_freq % 24 != 0: + print_err_msg_exit(f''' + INCR_CYCL_FREQ is not divided by 24: + INCR_CYCL_FREQ = \"{incr_cycl_freq}\"''') + + #iterate over days and cycles + all_cdates = [] + d = date_start + while d <= date_end: + for c in cycle_hrs: + dc = d + timedelta(hours=c) + v = datetime.strftime(dc,'%Y%m%d%H') + all_cdates.append(v) + d += timedelta(days=incr_days) + + return all_cdates + +class Testing(unittest.TestCase): + def test_set_cycle_dates(self): + cdates = set_cycle_dates(date_start=datetime(2022,1,1), date_end=datetime(2022,1,4), + cycle_hrs=[6,12], incr_cycl_freq=48) + self.assertEqual(cdates, ['2022010106', '2022010112','2022010306', '2022010312']) diff --git a/ush/set_extrn_mdl_params.py b/ush/set_extrn_mdl_params.py new file mode 100644 index 000000000..f4fffc590 --- /dev/null +++ b/ush/set_extrn_mdl_params.py @@ -0,0 +1,56 @@ +#!/usr/bin/env python3 + +import unittest + +from python_utils import import_vars, export_vars, set_env_var, get_env_var + +def set_extrn_mdl_params(): + """ Sets parameters associated with the external model used for initial + conditions (ICs) and lateral boundary conditions (LBCs). + Args: + None + Returns: + None + """ + + #import all env variables + import_vars() + + global EXTRN_MDL_LBCS_OFFSET_HRS + + # + #----------------------------------------------------------------------- + # + # Set EXTRN_MDL_LBCS_OFFSET_HRS, which is the number of hours to shift + # the starting time of the external model that provides lateral boundary + # conditions. + # + #----------------------------------------------------------------------- + # + if EXTRN_MDL_NAME_LBCS == "RAP": + EXTRN_MDL_LBCS_OFFSET_HRS=EXTRN_MDL_LBCS_OFFSET_HRS or "3" + else: + EXTRN_MDL_LBCS_OFFSET_HRS=EXTRN_MDL_LBCS_OFFSET_HRS or "0" + + # export values we set above + env_vars = ["EXTRN_MDL_LBCS_OFFSET_HRS"] + export_vars(env_vars=env_vars) +# +#----------------------------------------------------------------------- +# +# Call the function defined above. 
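+# As a brief illustration of the behavior (assuming the environment has been
+# populated beforehand, e.g. with set_env_var as in the Testing class below):
+# with EXTRN_MDL_NAME_LBCS="RAP" and EXTRN_MDL_LBCS_OFFSET_HRS left unset, the
+# call leaves the offset at "3"; for any other external model it falls back to
+# "0".  A non-empty value that is already set is passed through unchanged.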
+# +#----------------------------------------------------------------------- +# +if __name__ == "__main__": + set_extrn_mdl_params() + +class Testing(unittest.TestCase): + def test_extrn_mdl_params(self): + set_extrn_mdl_params() + EXTRN_MDL_LBCS_OFFSET_HRS = get_env_var("EXTRN_MDL_LBCS_OFFSET_HRS") + self.assertEqual(EXTRN_MDL_LBCS_OFFSET_HRS,3) + + def setUp(self): + set_env_var("EXTRN_MDL_NAME_LBCS","RAP") + set_env_var("EXTRN_MDL_LBCS_OFFSET_HRS",None) diff --git a/ush/set_extrn_mdl_params.sh b/ush/set_extrn_mdl_params.sh index f110a4c26..12b11e88d 100644 --- a/ush/set_extrn_mdl_params.sh +++ b/ush/set_extrn_mdl_params.sh @@ -6,23 +6,6 @@ #----------------------------------------------------------------------- # function set_extrn_mdl_params() { - # - #----------------------------------------------------------------------- - # - # Use known locations or COMINgfs as default, depending on RUN_ENVIR - # - #----------------------------------------------------------------------- - # - if [ "${RUN_ENVIR}" = "nco" ]; then - EXTRN_MDL_SYSBASEDIR_ICS="${EXTRN_MDL_SYSBASEDIR_ICS:-$COMINgfs}" - EXTRN_MDL_SYSBASEDIR_LBCS="${EXTRN_MDL_SYSBASEDIR_LBCS:-$COMINgfs}" - else - EXTRN_MDL_SYSBASEDIR_ICS="${EXTRN_MDL_SYSBASEDIR_ICS:-$(set_known_sys_dir \ - ${EXTRN_MDL_NAME_ICS})}" - EXTRN_MDL_SYSBASEDIR_LBCS="${EXTRN_MDL_SYSBASEDIR_LBCS:-$(set_known_sys_dir \ - ${EXTRN_MDL_NAME_LBCS})}" - fi - # #----------------------------------------------------------------------- # @@ -36,7 +19,7 @@ function set_extrn_mdl_params() { "RAP") EXTRN_MDL_LBCS_OFFSET_HRS=${EXTRN_MDL_LBCS_OFFSET_HRS:-"3"} ;; - "*") + *) EXTRN_MDL_LBCS_OFFSET_HRS=${EXTRN_MDL_LBCS_OFFSET_HRS:-"0"} ;; esac diff --git a/ush/set_gridparams_ESGgrid.py b/ush/set_gridparams_ESGgrid.py new file mode 100644 index 000000000..c1ead9522 --- /dev/null +++ b/ush/set_gridparams_ESGgrid.py @@ -0,0 +1,110 @@ +#!/usr/bin/env python3 + +import unittest +from datetime import datetime,timedelta + +from constants import radius_Earth,degs_per_radian +from python_utils import import_vars, set_env_var, print_input_args + +def set_gridparams_ESGgrid(lon_ctr,lat_ctr,nx,ny,halo_width,delx,dely,pazi): + """ Sets the parameters for a grid that is to be generated using the "ESGgrid" + grid generation method (i.e. GRID_GEN_METHOD set to "ESGgrid"). + + Args: + lon_ctr + lat_ctr + nx + ny + halo_width + delx + dely + pazi + Returns: + Tuple of inputs, and 4 outputs (see return statement) + """ + + print_input_args(locals()) + # + #----------------------------------------------------------------------- + # + # For a ESGgrid-type grid, the orography filtering is performed by pass- + # ing to the orography filtering the parameters for an "equivalent" glo- + # bal uniform cubed-sphere grid. These are the parameters that a global + # uniform cubed-sphere grid needs to have in order to have a nominal + # grid cell size equal to that of the (average) cell size on the region- + # al grid. These globally-equivalent parameters include a resolution + # (in units of number of cells in each of the two horizontal directions) + # and a stretch factor. The equivalent resolution is calculated in the + # script that generates the grid, and the stretch factor needs to be set + # to 1 because we are considering an equivalent globally UNIFORM grid. 
+ # However, it turns out that with a non-symmetric regional grid (one in + # which nx is not equal to ny), setting stretch_factor to 1 fails be- + # cause the orography filtering program is designed for a global cubed- + # sphere grid and thus assumes that nx and ny for a given tile are equal + # when stretch_factor is exactly equal to 1. + # ^^-- Why is this? Seems like symmetry btwn x and y should still hold when the stretch factor is not equal to 1. + # It turns out that the program will work if we set stretch_factor to a + # value that is not exactly 1. This is what we do below. + # + #----------------------------------------------------------------------- + # + stretch_factor=0.999 # Check whether the orography program has been fixed so that we can set this to 1... + # + #----------------------------------------------------------------------- + # + # Set parameters needed as inputs to the regional_grid grid generation + # code. + # + #----------------------------------------------------------------------- + # + del_angle_x_sg = (delx / (2.0 * radius_Earth)) * degs_per_radian + del_angle_y_sg = (dely / (2.0 * radius_Earth)) * degs_per_radian + neg_nx_of_dom_with_wide_halo = -(nx + 2 * halo_width) + neg_ny_of_dom_with_wide_halo = -(ny + 2 * halo_width) + # + #----------------------------------------------------------------------- + # + # return output variables. + # + #----------------------------------------------------------------------- + # + return (lon_ctr,lat_ctr,nx,ny,pazi,halo_width,stretch_factor, + "{:0.10f}".format(del_angle_x_sg), + "{:0.10f}".format(del_angle_y_sg), + "{:.0f}".format(neg_nx_of_dom_with_wide_halo), + "{:.0f}".format(neg_ny_of_dom_with_wide_halo)) + +class Testing(unittest.TestCase): + def test_set_gridparams_ESGgrid(self): + + (LON_CTR,LAT_CTR,NX,NY,PAZI,NHW,STRETCH_FAC, + DEL_ANGLE_X_SG, + DEL_ANGLE_Y_SG, + NEG_NX_OF_DOM_WITH_WIDE_HALO, + NEG_NY_OF_DOM_WITH_WIDE_HALO) = set_gridparams_ESGgrid( \ + lon_ctr=-97.5, + lat_ctr=38.5, + nx=1748, + ny=1038, + pazi=0.0, + halo_width=6, + delx=3000.0, + dely=3000.0) + + self.assertEqual(\ + (LON_CTR,LAT_CTR,NX,NY,PAZI,NHW,STRETCH_FAC, + DEL_ANGLE_X_SG, + DEL_ANGLE_Y_SG, + NEG_NX_OF_DOM_WITH_WIDE_HALO, + NEG_NY_OF_DOM_WITH_WIDE_HALO), + (-97.5, 38.5, 1748, 1038, 0.0, 6,0.999, + "0.0134894006", + "0.0134894006", + "-1760", + "-1050") + ) + + def setUp(self): + set_env_var('DEBUG',True) + set_env_var('VERBOSE',True) + diff --git a/ush/set_gridparams_GFDLgrid.py b/ush/set_gridparams_GFDLgrid.py new file mode 100644 index 000000000..0dab11f75 --- /dev/null +++ b/ush/set_gridparams_GFDLgrid.py @@ -0,0 +1,467 @@ +#!/usr/bin/env python3 + +import unittest + +from constants import radius_Earth,degs_per_radian + +from python_utils import import_vars, set_env_var, print_input_args, \ + print_info_msg, print_err_msg_exit + +def prime_factors(n): + i = 2 + factors = [] + while i * i <= n: + if n % i: + i += 1 + else: + n //= i + factors.append(i) + if n > 1: + factors.append(n) + return factors + +def set_gridparams_GFDLgrid(lon_of_t6_ctr, lat_of_t6_ctr, res_of_t6g, stretch_factor, + refine_ratio_t6g_to_t7g, + istart_of_t7_on_t6g, iend_of_t7_on_t6g, + jstart_of_t7_on_t6g, jend_of_t7_on_t6g): + """ Sets the parameters for a grid that is to be generated using the "GFDLgrid" + grid generation method (i.e. GRID_GEN_METHOD set to "ESGgrid"). 
+ + Args: + lon_of_t6_ctr + lat_of_t6_ctr + res_of_t6g + stretch_factor + refine_ratio_t6g_to_t7g + istart_of_t7_on_t6g + iend_of_t7_on_t6g + jstart_of_t7_on_t6g + jend_of_t7_on_t6g): + Returns: + Tuple of inputs and outputs (see return statement) + """ + + print_input_args(locals()) + + # get needed environment variables + IMPORTS = ['VERBOSE', 'RUN_ENVIR', 'NH4'] + import_vars(env_vars=IMPORTS) + + # + #----------------------------------------------------------------------- + # + # To simplify the grid setup, we require that tile 7 be centered on tile + # 6. Note that this is not really a restriction because tile 6 can al- + # ways be moved so that it is centered on tile 7 [the location of tile 6 + # doesn't really matter because for a regional setup, the forecast model + # will only run on tile 7 (not on tiles 1-6)]. + # + # We now check that tile 7 is centered on tile 6 by checking (1) that + # the number of cells (on tile 6) between the left boundaries of these + # two tiles is equal to that between their right boundaries and (2) that + # the number of cells (on tile 6) between the bottom boundaries of these + # two tiles is equal to that between their top boundaries. If not, we + # print out an error message and exit. If so, we set the longitude and + # latitude of the center of tile 7 to those of tile 6 and continue. + # + #----------------------------------------------------------------------- + # + + nx_of_t6_on_t6g = res_of_t6g + ny_of_t6_on_t6g = res_of_t6g + + num_left_margin_cells_on_t6g = istart_of_t7_on_t6g - 1 + num_right_margin_cells_on_t6g = nx_of_t6_on_t6g - iend_of_t7_on_t6g + + # This if-statement can hopefully be removed once EMC agrees to make their + # GFDLgrid type grids (tile 7) symmetric about tile 6. + if RUN_ENVIR != "nco": + if num_left_margin_cells_on_t6g != num_right_margin_cells_on_t6g: + print_err_msg_exit(f''' + In order for tile 7 to be centered in the x direction on tile 6, the x- + direction tile 6 cell indices at which tile 7 starts and ends (given by + istart_of_t7_on_t6g and iend_of_t7_on_t6g, respectively) must be set + such that the number of tile 6 cells in the margin between the left + boundaries of tiles 6 and 7 (given by num_left_margin_cells_on_t6g) is + equal to that in the margin between their right boundaries (given by + num_right_margin_cells_on_t6g): + istart_of_t7_on_t6g = {istart_of_t7_on_t6g} + iend_of_t7_on_t6g = {iend_of_t7_on_t6g} + num_left_margin_cells_on_t6g = {num_left_margin_cells_on_t6g} + num_right_margin_cells_on_t6g = {num_right_margin_cells_on_t6g} + Note that the total number of cells in the x-direction on tile 6 is gi- + ven by: + nx_of_t6_on_t6g = {nx_of_t6_on_t6g} + Please reset istart_of_t7_on_t6g and iend_of_t7_on_t6g and rerun.''') + + num_bot_margin_cells_on_t6g = jstart_of_t7_on_t6g - 1 + num_top_margin_cells_on_t6g = ny_of_t6_on_t6g - jend_of_t7_on_t6g + + # This if-statement can hopefully be removed once EMC agrees to make their + # GFDLgrid type grids (tile 7) symmetric about tile 6. 
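+    # As a concrete illustration of this check (using the inputs of the
+    # Testing case at the bottom of this file, which serve here only as
+    # example values): with res_of_t6g = 96, istart_of_t7_on_t6g = 13, and
+    # iend_of_t7_on_t6g = 84,
+    #   num_left_margin_cells_on_t6g  = 13 - 1  = 12
+    #   num_right_margin_cells_on_t6g = 96 - 84 = 12
+    # so the two margins match, tile 7 is centered on tile 6 in the
+    # x direction, and the check below passes.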
+ if RUN_ENVIR != "nco": + if num_bot_margin_cells_on_t6g != num_top_margin_cells_on_t6g: + print_err_msg_exit(f''' + In order for tile 7 to be centered in the y direction on tile 6, the y- + direction tile 6 cell indices at which tile 7 starts and ends (given by + jstart_of_t7_on_t6g and jend_of_t7_on_t6g, respectively) must be set + such that the number of tile 6 cells in the margin between the left + boundaries of tiles 6 and 7 (given by num_left_margin_cells_on_t6g) is + equal to that in the margin between their right boundaries (given by + num_right_margin_cells_on_t6g): + jstart_of_t7_on_t6g = {jstart_of_t7_on_t6g} + jend_of_t7_on_t6g = {jend_of_t7_on_t6g} + num_bot_margin_cells_on_t6g = {num_bot_margin_cells_on_t6g} + num_top_margin_cells_on_t6g = {num_top_margin_cells_on_t6g} + Note that the total number of cells in the y-direction on tile 6 is gi- + ven by: + ny_of_t6_on_t6g = {ny_of_t6_on_t6g} + Please reset jstart_of_t7_on_t6g and jend_of_t7_on_t6g and rerun.''') + + lon_of_t7_ctr = lon_of_t6_ctr + lat_of_t7_ctr = lat_of_t6_ctr + # + #----------------------------------------------------------------------- + # + # The grid generation script grid_gen_scr called below in turn calls the + # make_hgrid utility/executable to construct the regional grid. make_- + # hgrid accepts as arguments the index limits (i.e. starting and ending + # indices) of the regional grid on the supergrid of the regional grid's + # parent tile. The regional grid's parent tile is tile 6, and the su- + # pergrid of any given tile is defined as the grid obtained by doubling + # the number of cells in each direction on that tile's grid. We will + # denote these index limits by + # + # istart_of_t7_on_t6sg + # iend_of_t7_on_t6sg + # jstart_of_t7_on_t6sg + # jend_of_t7_on_t6sg + # + # The "_T6SG" suffix in these names is used to indicate that the indices + # are on the supergrid of tile 6. Recall, however, that we have as in- + # puts the index limits of the regional grid on the tile 6 grid, not its + # supergrid. These are given by + # + # istart_of_t7_on_t6g + # iend_of_t7_on_t6g + # jstart_of_t7_on_t6g + # jend_of_t7_on_t6g + # + # We can obtain the former from the latter by recalling that the super- + # grid has twice the resolution of the original grid. Thus, + # + # istart_of_t7_on_t6sg = 2*istart_of_t7_on_t6g - 1 + # iend_of_t7_on_t6sg = 2*iend_of_t7_on_t6g + # jstart_of_t7_on_t6sg = 2*jstart_of_t7_on_t6g - 1 + # jend_of_t7_on_t6sg = 2*jend_of_t7_on_t6g + # + # These are obtained assuming that grid cells on tile 6 must either be + # completely within the regional domain or completely outside of it, + # i.e. the boundary of the regional grid must coincide with gridlines + # on the tile 6 grid; it cannot cut through tile 6 cells. (Note that + # this implies that the starting indices on the tile 6 supergrid must be + # odd while the ending indices must be even; the above expressions sa- + # tisfy this requirement.) We perfrom these calculations next. + # + #----------------------------------------------------------------------- + # + istart_of_t7_on_t6sg = 2*istart_of_t7_on_t6g - 1 + iend_of_t7_on_t6sg = 2*iend_of_t7_on_t6g + jstart_of_t7_on_t6sg = 2*jstart_of_t7_on_t6g - 1 + jend_of_t7_on_t6sg = 2*jend_of_t7_on_t6g + # + #----------------------------------------------------------------------- + # + # If we simply pass to make_hgrid the index limits of the regional grid + # on the tile 6 supergrid calculated above, make_hgrid will generate a + # regional grid without a halo. 
To obtain a regional grid with a halo, + # we must pass to make_hgrid the index limits (on the tile 6 supergrid) + # of the regional grid including a halo. We will let the variables + # + # istart_of_t7_with_halo_on_t6sg + # iend_of_t7_with_halo_on_t6sg + # jstart_of_t7_with_halo_on_t6sg + # jend_of_t7_with_halo_on_t6sg + # + # denote these limits. The reason we include "_wide_halo" in these va- + # riable names is that the halo of the grid that we will first generate + # will be wider than the halos that are actually needed as inputs to the + # FV3LAM model (i.e. the 0-cell-wide, 3-cell-wide, and 4-cell-wide halos + # described above). We will generate the grids with narrower halos that + # the model needs later on by "shaving" layers of cells from this wide- + # halo grid. Next, we describe how to calculate the above indices. + # + # Let halo_width_on_t7g denote the width of the "wide" halo in units of number of + # grid cells on the regional grid (i.e. tile 7) that we'd like to have + # along all four edges of the regional domain (left, right, bottom, and + # top). To obtain the corresponding halo width in units of number of + # cells on the tile 6 grid -- which we denote by halo_width_on_t6g -- we simply di- + # vide halo_width_on_t7g by the refinement ratio, i.e. + # + # halo_width_on_t6g = halo_width_on_t7g/refine_ratio_t6g_to_t7g + # + # The corresponding halo width on the tile 6 supergrid is then given by + # + # halo_width_on_t6sg = 2*halo_width_on_t6g + # = 2*halo_width_on_t7g/refine_ratio_t6g_to_t7g + # + # Note that halo_width_on_t6sg must be an integer, but the expression for it de- + # rived above may not yield an integer. To ensure that the halo has a + # width of at least halo_width_on_t7g cells on the regional grid, we round up the + # result of the expression above for halo_width_on_t6sg, i.e. we redefine halo_width_on_t6sg + # to be + # + # halo_width_on_t6sg = ceil(2*halo_width_on_t7g/refine_ratio_t6g_to_t7g) + # + # where ceil(...) is the ceiling function, i.e. it rounds its floating + # point argument up to the next larger integer. Since in bash division + # of two integers returns a truncated integer and since bash has no + # built-in ceil(...) function, we perform the rounding-up operation by + # adding the denominator (of the argument of ceil(...) above) minus 1 to + # the original numerator, i.e. by redefining halo_width_on_t6sg to be + # + # halo_width_on_t6sg = (2*halo_width_on_t7g + refine_ratio_t6g_to_t7g - 1)/refine_ratio_t6g_to_t7g + # + # This trick works when dividing one positive integer by another. + # + # In order to calculate halo_width_on_t6g using the above expression, we must + # first specify halo_width_on_t7g. Next, we specify an initial value for it by + # setting it to one more than the largest-width halo that the model ac- + # tually needs, which is NH4. We then calculate halo_width_on_t6sg using the + # above expression. Note that these values of halo_width_on_t7g and halo_width_on_t6sg will + # likely not be their final values; their final values will be calcula- + # ted later below after calculating the starting and ending indices of + # the regional grid with wide halo on the tile 6 supergrid and then ad- + # justing the latter to satisfy certain conditions. 
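+    # A short worked example of the rounding trick described above, using the
+    # values from the Testing case at the bottom of this file (NH4 = 4,
+    # refine_ratio_t6g_to_t7g = 3):
+    #   halo_width_on_t7g = NH4 + 1 = 5
+    #   ceil(2*5/3) = ceil(3.33...) = 4
+    #   (2*5 + 3 - 1)/3 = 12/3 = 4
+    # i.e. adding (denominator - 1) to the numerator reproduces the ceiling.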
+ # + #----------------------------------------------------------------------- + # + halo_width_on_t7g = NH4 + 1 + halo_width_on_t6sg = (2*halo_width_on_t7g + refine_ratio_t6g_to_t7g - 1)/refine_ratio_t6g_to_t7g + # + #----------------------------------------------------------------------- + # + # With an initial value of halo_width_on_t6sg now available, we can obtain the + # tile 6 supergrid index limits of the regional domain (including the + # wide halo) from the index limits for the regional domain without a ha- + # lo by simply subtracting halo_width_on_t6sg from the lower index limits and add- + # ing halo_width_on_t6sg to the upper index limits, i.e. + # + # istart_of_t7_with_halo_on_t6sg = istart_of_t7_on_t6sg - halo_width_on_t6sg + # iend_of_t7_with_halo_on_t6sg = iend_of_t7_on_t6sg + halo_width_on_t6sg + # jstart_of_t7_with_halo_on_t6sg = jstart_of_t7_on_t6sg - halo_width_on_t6sg + # jend_of_t7_with_halo_on_t6sg = jend_of_t7_on_t6sg + halo_width_on_t6sg + # + # We calculate these next. + # + #----------------------------------------------------------------------- + # + istart_of_t7_with_halo_on_t6sg = int(istart_of_t7_on_t6sg - halo_width_on_t6sg) + iend_of_t7_with_halo_on_t6sg = int(iend_of_t7_on_t6sg + halo_width_on_t6sg) + jstart_of_t7_with_halo_on_t6sg = int(jstart_of_t7_on_t6sg - halo_width_on_t6sg) + jend_of_t7_with_halo_on_t6sg = int(jend_of_t7_on_t6sg + halo_width_on_t6sg) + # + #----------------------------------------------------------------------- + # + # As for the regional grid without a halo, the regional grid with a wide + # halo that make_hgrid will generate must be such that grid cells on + # tile 6 either lie completely within this grid or outside of it, i.e. + # they cannot lie partially within/outside of it. This implies that the + # starting indices on the tile 6 supergrid of the grid with wide halo + # must be odd while the ending indices must be even. Thus, below, we + # subtract 1 from the starting indices if they are even (which ensures + # that there will be at least halo_width_on_t7g halo cells along the left and bot- + # tom boundaries), and we add 1 to the ending indices if they are odd + # (which ensures that there will be at least halo_width_on_t7g halo cells along the + # right and top boundaries). + # + #----------------------------------------------------------------------- + # + if istart_of_t7_with_halo_on_t6sg % 2 == 0: + istart_of_t7_with_halo_on_t6sg = istart_of_t7_with_halo_on_t6sg - 1 + + if iend_of_t7_with_halo_on_t6sg % 2 == 1: + iend_of_t7_with_halo_on_t6sg = iend_of_t7_with_halo_on_t6sg + 1 + + if jstart_of_t7_with_halo_on_t6sg % 2 == 0: + jstart_of_t7_with_halo_on_t6sg = jstart_of_t7_with_halo_on_t6sg - 1 + + if jend_of_t7_with_halo_on_t6sg % 2 == 1: + jend_of_t7_with_halo_on_t6sg = jend_of_t7_with_halo_on_t6sg + 1 + # + #----------------------------------------------------------------------- + # + # Now that the starting and ending tile 6 supergrid indices of the re- + # gional grid with the wide halo have been calculated (and adjusted), we + # recalculate the width of the wide halo on: + # + # 1) the tile 6 supergrid; + # 2) the tile 6 grid; and + # 3) the tile 7 grid. + # + # These are the final values of these quantities that are guaranteed to + # correspond to the starting and ending indices on the tile 6 supergrid. 
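+    # Tracing the i direction through these steps with the example inputs of
+    # the Testing case at the bottom of this file (istart_of_t7_on_t6g = 13,
+    # iend_of_t7_on_t6g = 84, refine_ratio_t6g_to_t7g = 3, NH4 = 4):
+    #   istart_of_t7_on_t6sg           = 2*13 - 1 = 25
+    #   iend_of_t7_on_t6sg             = 2*84     = 168
+    #   halo_width_on_t6sg (initial)   = (2*5 + 3 - 1)/3 = 4
+    #   istart_of_t7_with_halo_on_t6sg = 25 - 4   = 21  (odd, left as is)
+    #   iend_of_t7_with_halo_on_t6sg   = 168 + 4  = 172 (even, left as is)
+    # and the recalculation below then yields halo_width_on_t6sg = 25 - 21 = 4,
+    # halo_width_on_t6g = 4//2 = 2, and halo_width_on_t7g = 2*3 = 6.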
+ # + #----------------------------------------------------------------------- + # + print_info_msg(f''' + Original values of the halo width on the tile 6 supergrid and on the + tile 7 grid are: + halo_width_on_t6sg = {halo_width_on_t6sg} + halo_width_on_t7g = {halo_width_on_t7g}''', verbose=VERBOSE) + + halo_width_on_t6sg = istart_of_t7_on_t6sg - istart_of_t7_with_halo_on_t6sg + halo_width_on_t6g = halo_width_on_t6sg//2 + halo_width_on_t7g = int(halo_width_on_t6g*refine_ratio_t6g_to_t7g) + + print_info_msg(f''' + Values of the halo width on the tile 6 supergrid and on the tile 7 grid + AFTER adjustments are: + halo_width_on_t6sg = {halo_width_on_t6sg} + halo_width_on_t7g = {halo_width_on_t7g}''', verbose=VERBOSE) + # + #----------------------------------------------------------------------- + # + # Calculate the number of cells that the regional domain (without halo) + # has in each of the two horizontal directions (say x and y). We denote + # these by nx_of_t7_on_t7g and ny_of_t7_on_t7g, respectively. These + # will be needed in the "shave" steps in the grid generation task of the + # workflow. + # + #----------------------------------------------------------------------- + # + nx_of_t7_on_t6sg = iend_of_t7_on_t6sg - istart_of_t7_on_t6sg + 1 + nx_of_t7_on_t6g = nx_of_t7_on_t6sg/2 + nx_of_t7_on_t7g = int(nx_of_t7_on_t6g*refine_ratio_t6g_to_t7g) + + ny_of_t7_on_t6sg = jend_of_t7_on_t6sg - jstart_of_t7_on_t6sg + 1 + ny_of_t7_on_t6g = ny_of_t7_on_t6sg/2 + ny_of_t7_on_t7g = int(ny_of_t7_on_t6g*refine_ratio_t6g_to_t7g) + # + # The following are set only for informational purposes. + # + nx_of_t6_on_t6sg = 2*nx_of_t6_on_t6g + ny_of_t6_on_t6sg = 2*ny_of_t6_on_t6g + + prime_factors_nx_of_t7_on_t7g = prime_factors(nx_of_t7_on_t7g) + prime_factors_ny_of_t7_on_t7g = prime_factors(ny_of_t7_on_t7g) + + print_info_msg(f''' + The number of cells in the two horizontal directions (x and y) on the + parent tile's (tile 6) grid and supergrid are: + nx_of_t6_on_t6g = {nx_of_t6_on_t6g} + ny_of_t6_on_t6g = {ny_of_t6_on_t6g} + nx_of_t6_on_t6sg = {nx_of_t6_on_t6sg} + ny_of_t6_on_t6sg = {ny_of_t6_on_t6sg} + + The number of cells in the two horizontal directions on the tile 6 grid + and supergrid that the regional domain (tile 7) WITHOUT A HALO encompas- + ses are: + nx_of_t7_on_t6g = {nx_of_t7_on_t6g} + ny_of_t7_on_t6g = {ny_of_t7_on_t6g} + nx_of_t7_on_t6sg = {nx_of_t7_on_t6sg} + ny_of_t7_on_t6sg = {ny_of_t7_on_t6sg} + + The starting and ending i and j indices on the tile 6 grid used to gene- + rate this regional grid are: + istart_of_t7_on_t6g = {istart_of_t7_on_t6g} + iend_of_t7_on_t6g = {iend_of_t7_on_t6g} + jstart_of_t7_on_t6g = {jstart_of_t7_on_t6g} + jend_of_t7_on_t6g = {jend_of_t7_on_t6g} + + The corresponding starting and ending i and j indices on the tile 6 su- + pergrid are: + istart_of_t7_on_t6sg = {istart_of_t7_on_t6sg} + iend_of_t7_on_t6sg = {iend_of_t7_on_t6sg} + jstart_of_t7_on_t6sg = {jstart_of_t7_on_t6sg} + jend_of_t7_on_t6sg = {jend_of_t7_on_t6sg} + + The refinement ratio (ratio of the number of cells in tile 7 that abut + a single cell in tile 6) is: + refine_ratio_t6g_to_t7g = {refine_ratio_t6g_to_t7g} + + The number of cells in the two horizontal directions on the regional do- + main's (i.e. 
tile 7's) grid WITHOUT A HALO are: + nx_of_t7_on_t7g = {nx_of_t7_on_t7g} + ny_of_t7_on_t7g = {ny_of_t7_on_t7g} + + The prime factors of nx_of_t7_on_t7g and ny_of_t7_on_t7g are (useful for + determining an MPI task layout): + prime_factors_nx_of_t7_on_t7g: {prime_factors_nx_of_t7_on_t7g} + prime_factors_ny_of_t7_on_t7g: {prime_factors_ny_of_t7_on_t7g}''', verbose=VERBOSE) + # + #----------------------------------------------------------------------- + # + # For informational purposes, calculate the number of cells in each di- + # rection on the regional grid including the wide halo (of width halo_- + # width_on_t7g cells). We denote these by nx_of_t7_with_halo_on_t7g and + # ny_of_t7_with_halo_on_t7g, respectively. + # + #----------------------------------------------------------------------- + # + nx_of_t7_with_halo_on_t6sg = iend_of_t7_with_halo_on_t6sg - istart_of_t7_with_halo_on_t6sg + 1 + nx_of_t7_with_halo_on_t6g = nx_of_t7_with_halo_on_t6sg/2 + nx_of_t7_with_halo_on_t7g = nx_of_t7_with_halo_on_t6g*refine_ratio_t6g_to_t7g + + ny_of_t7_with_halo_on_t6sg = jend_of_t7_with_halo_on_t6sg - jstart_of_t7_with_halo_on_t6sg + 1 + ny_of_t7_with_halo_on_t6g = ny_of_t7_with_halo_on_t6sg/2 + ny_of_t7_with_halo_on_t7g = ny_of_t7_with_halo_on_t6g*refine_ratio_t6g_to_t7g + + print_info_msg(f''' + nx_of_t7_with_halo_on_t7g = {nx_of_t7_with_halo_on_t7g} + (istart_of_t7_with_halo_on_t6sg = {istart_of_t7_with_halo_on_t6sg}, + iend_of_t7_with_halo_on_t6sg = {iend_of_t7_with_halo_on_t6sg})''', verbose=VERBOSE) + + print_info_msg(f''' + ny_of_t7_with_halo_on_t7g = {ny_of_t7_with_halo_on_t7g} + (jstart_of_t7_with_halo_on_t6sg = {jstart_of_t7_with_halo_on_t6sg}, + jend_of_t7_with_halo_on_t6sg = {jend_of_t7_with_halo_on_t6sg})''', verbose=VERBOSE) + # + #----------------------------------------------------------------------- + # + # Return output variables. + # + #----------------------------------------------------------------------- + # + return (lon_of_t7_ctr, lat_of_t7_ctr, nx_of_t7_on_t7g, ny_of_t7_on_t7g, + halo_width_on_t7g, stretch_factor, + istart_of_t7_with_halo_on_t6sg, + iend_of_t7_with_halo_on_t6sg, + jstart_of_t7_with_halo_on_t6sg, + jend_of_t7_with_halo_on_t6sg) + +class Testing(unittest.TestCase): + def test_set_gridparams_GFDLgrid(self): + (LON_CTR,LAT_CTR,NX,NY,NHW,STRETCH_FAC, + ISTART_OF_RGNL_DOM_WITH_WIDE_HALO_ON_T6SG, + IEND_OF_RGNL_DOM_WITH_WIDE_HALO_ON_T6SG, + JSTART_OF_RGNL_DOM_WITH_WIDE_HALO_ON_T6SG, + JEND_OF_RGNL_DOM_WITH_WIDE_HALO_ON_T6SG) = set_gridparams_GFDLgrid( \ + lon_of_t6_ctr=-97.5, \ + lat_of_t6_ctr=38.5, \ + res_of_t6g=96, \ + stretch_factor=1.4, \ + refine_ratio_t6g_to_t7g=3, \ + istart_of_t7_on_t6g=13, \ + iend_of_t7_on_t6g=84, \ + jstart_of_t7_on_t6g=17, \ + jend_of_t7_on_t6g=80) + + self.assertEqual(\ + (LON_CTR,LAT_CTR,NX,NY,NHW,STRETCH_FAC, + ISTART_OF_RGNL_DOM_WITH_WIDE_HALO_ON_T6SG, + IEND_OF_RGNL_DOM_WITH_WIDE_HALO_ON_T6SG, + JSTART_OF_RGNL_DOM_WITH_WIDE_HALO_ON_T6SG, + JEND_OF_RGNL_DOM_WITH_WIDE_HALO_ON_T6SG), + (-97.5,38.5,216,192,6,1.4, + 21, + 172, + 29, + 164) + ) + + def setUp(self): + set_env_var('DEBUG',True) + set_env_var('VERBOSE',True) + set_env_var('NH4', 4) + diff --git a/ush/set_namelist.py b/ush/set_namelist.py index 43bbeb8a6..84c61975e 100755 --- a/ush/set_namelist.py +++ b/ush/set_namelist.py @@ -69,6 +69,7 @@ import argparse import collections import os +import sys import f90nml import yaml @@ -137,7 +138,7 @@ def path_ok(arg): msg = f'{arg} is not a writable path!' 
raise argparse.ArgumentTypeError(msg) -def parse_args(): +def parse_args(argv): ''' Function maintains the arguments accepted by this script. Please see @@ -192,7 +193,7 @@ def parse_args(): action='store_true', help='If provided, suppress all output.', ) - return parser.parse_args() + return parser.parse_args(argv) def dict_diff(dict1, dict2): @@ -284,10 +285,15 @@ def update_dict(dest, newdict, quiet=False): dest[sect] = {} dest[sect][key] = value -def main(cla): +def set_namelist(argv): ''' Using input command line arguments (cla), update a Fortran namelist file. ''' + # parse argumetns + cla = parse_args(argv) + if cla.config: + cla.config, _ = config_exists(cla.config) + # Load base namelist into dict nml = f90nml.Namelist() if cla.nml is not None: @@ -324,7 +330,4 @@ def main(cla): if __name__ == '__main__': - cla = parse_args() - if cla.config: - cla.config, _ = config_exists(cla.config) - main(cla) + set_namelist(sys.argv[1:]) diff --git a/ush/set_ozone_param.py b/ush/set_ozone_param.py new file mode 100644 index 000000000..4c9998c3c --- /dev/null +++ b/ush/set_ozone_param.py @@ -0,0 +1,231 @@ +#!/usr/bin/env python3 + +import os +import unittest +from textwrap import dedent + +from python_utils import import_vars,export_vars,set_env_var,list_to_str,\ + print_input_args, print_info_msg, print_err_msg_exit,\ + define_macos_utilities,load_xml_file,has_tag_with_value,find_pattern_in_str + +def set_ozone_param(ccpp_phys_suite_fp): + """ Function that does the following: + (1) Determines the ozone parameterization being used by checking in the + CCPP physics suite XML. + + (2) Sets the name of the global ozone production/loss file in the FIXgsm + FIXgsm system directory to copy to the experiment's FIXam directory. + + (3) Resets the last element of the workflow array variable + FIXgsm_FILES_TO_COPY_TO_FIXam that contains the files to copy from + FIXgsm to FIXam (this last element is initially set to a dummy + value) to the name of the ozone production/loss file set in the + previous step. + + (4) Resets the element of the workflow array variable + CYCLEDIR_LINKS_TO_FIXam_FILES_MAPPING (this array contains the + mapping between the symlinks to create in any cycle directory and + the files in the FIXam directory that are their targets) that + specifies the mapping for the ozone symlink/file such that the + target FIXam file name is set to the name of the ozone production/ + loss file set above. + + Args: + ccpp_phys_suite_fp: full path to CCPP physics suite + Returns: + ozone_param: a string + """ + + print_input_args(locals()) + + # import all environment variables + import_vars() + + # + #----------------------------------------------------------------------- + # + # Get the name of the ozone parameterization being used. There are two + # possible ozone parameterizations: + # + # (1) A parameterization developed/published in 2015. Here, we refer to + # this as the 2015 parameterization. If this is being used, then we + # set the variable ozone_param to the string "ozphys_2015". + # + # (2) A parameterization developed/published sometime after 2015. Here, + # we refer to this as the after-2015 parameterization. If this is + # being used, then we set the variable ozone_param to the string + # "ozphys". + # + # We check the CCPP physics suite definition file (SDF) to determine the + # parameterization being used. If this file contains the line + # + # ozphys_2015 + # + # then the 2015 parameterization is being used. 
If it instead contains + # the line + # + # ozphys + # + # then the after-2015 parameterization is being used. (The SDF should + # contain exactly one of these lines; not both nor neither; we check for + # this.) + # + #----------------------------------------------------------------------- + # + tree = load_xml_file(ccpp_phys_suite_fp) + ozone_param = "" + if has_tag_with_value(tree, "scheme", "ozphys_2015"): + fixgsm_ozone_fn="ozprdlos_2015_new_sbuvO3_tclm15_nuchem.f77" + ozone_param = "ozphys_2015" + elif has_tag_with_value(tree, "scheme", "ozphys"): + fixgsm_ozone_fn="global_o3prdlos.f77" + ozone_param = "ozphys" + else: + print_err_msg_exit(f''' + Unknown or no ozone parameterization + specified in the CCPP physics suite file (ccpp_phys_suite_fp): + ccpp_phys_suite_fp = \"{ccpp_phys_suite_fp}\" + ozone_param = \"{ozone_param}\"''') + # + #----------------------------------------------------------------------- + # + # Set the last element of the array FIXgsm_FILES_TO_COPY_TO_FIXam to the + # name of the ozone production/loss file to copy from the FIXgsm to the + # FIXam directory. + # + #----------------------------------------------------------------------- + # + i=len(FIXgsm_FILES_TO_COPY_TO_FIXam) - 1 + FIXgsm_FILES_TO_COPY_TO_FIXam[i]=f"{fixgsm_ozone_fn}" + # + #----------------------------------------------------------------------- + # + # Set the element in the array CYCLEDIR_LINKS_TO_FIXam_FILES_MAPPING that + # specifies the mapping between the symlink for the ozone production/loss + # file that must be created in each cycle directory and its target in the + # FIXam directory. The name of the symlink is alrady in the array, but + # the target is not because it depends on the ozone parameterization that + # the physics suite uses. Since we determined the ozone parameterization + # above, we now set the target of the symlink accordingly. + # + #----------------------------------------------------------------------- + # + ozone_symlink="global_o3prdlos.f77" + fixgsm_ozone_fn_is_set=False + regex_search="^[ ]*([^| ]*)[ ]*[|][ ]*([^| ]*)[ ]*$" + num_symlinks=len(CYCLEDIR_LINKS_TO_FIXam_FILES_MAPPING) + + for i in range(num_symlinks): + mapping=CYCLEDIR_LINKS_TO_FIXam_FILES_MAPPING[i] + symlink = find_pattern_in_str(regex_search, mapping) + if symlink is not None: + symlink = symlink[0] + if symlink == ozone_symlink: + regex_search="^[ ]*([^| ]+[ ]*)[|][ ]*([^| ]*)[ ]*$" + mapping_ozone = find_pattern_in_str(regex_search, mapping)[0] + mapping_ozone=f"{mapping_ozone}| {fixgsm_ozone_fn}" + CYCLEDIR_LINKS_TO_FIXam_FILES_MAPPING[i]=f"{mapping_ozone}" + fixgsm_ozone_fn_is_set=True + break + # + #----------------------------------------------------------------------- + # + # If fixgsm_ozone_fn_is_set is set to True, then the appropriate element + # of the array CYCLEDIR_LINKS_TO_FIXam_FILES_MAPPING was set successfully. + # In this case, print out the new version of this array. Otherwise, print + # out an error message and exit. 
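For reference, the mapping entries manipulated here use a plain "symlink | target" format. A minimal sketch of the target substitution, using a hypothetical two-entry list and simple string splitting instead of the patch's regex helper:

```python
ozone_symlink = "global_o3prdlos.f77"
fixgsm_ozone_fn = "ozprdlos_2015_new_sbuvO3_tclm15_nuchem.f77"  # value chosen from the SDF check above

# Hypothetical mapping list; entries use the same "symlink | target" form as the arrays above.
mappings = [
    "aerosol.dat | global_climaeropac_global.txt",
    "global_o3prdlos.f77 | ",
]

for i, mapping in enumerate(mappings):
    symlink = mapping.split("|")[0].strip()
    if symlink == ozone_symlink:
        mappings[i] = f"{symlink} | {fixgsm_ozone_fn}"
        break

print(mappings[1])  # -> global_o3prdlos.f77 | ozprdlos_2015_new_sbuvO3_tclm15_nuchem.f77
```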
+ # + #----------------------------------------------------------------------- + # + if fixgsm_ozone_fn_is_set: + + msg=dedent(f''' + After setting the file name of the ozone production/loss file in the + FIXgsm directory (based on the ozone parameterization specified in the + CCPP suite definition file), the array specifying the mapping between + the symlinks that need to be created in the cycle directories and the + files in the FIXam directory is: + + ''') + msg+=dedent(f''' + CYCLEDIR_LINKS_TO_FIXam_FILES_MAPPING = {list_to_str(CYCLEDIR_LINKS_TO_FIXam_FILES_MAPPING)} + ''') + print_info_msg(msg,verbose=VERBOSE) + + else: + + print_err_msg_exit(f''' + Unable to set name of the ozone production/loss file in the FIXgsm directory + in the array that specifies the mapping between the symlinks that need to + be created in the cycle directories and the files in the FIXgsm directory: + fixgsm_ozone_fn_is_set = \"{fixgsm_ozone_fn_is_set}\"''') + + EXPORTS = ["CYCLEDIR_LINKS_TO_FIXam_FILES_MAPPING", "FIXgsm_FILES_TO_COPY_TO_FIXam"] + export_vars(env_vars=EXPORTS) + + return ozone_param + +class Testing(unittest.TestCase): + def test_set_ozone_param(self): + self.assertEqual( "ozphys_2015", + set_ozone_param(ccpp_phys_suite_fp=f"test_data{os.sep}suite_FV3_GSD_SAR.xml") ) + def setUp(self): + define_macos_utilities(); + set_env_var('DEBUG',True) + set_env_var('VERBOSE',True) + + CYCLEDIR_LINKS_TO_FIXam_FILES_MAPPING = [ + "aerosol.dat | global_climaeropac_global.txt", + "co2historicaldata_2010.txt | fix_co2_proj/global_co2historicaldata_2010.txt", + "co2historicaldata_2011.txt | fix_co2_proj/global_co2historicaldata_2011.txt", + "co2historicaldata_2012.txt | fix_co2_proj/global_co2historicaldata_2012.txt", + "co2historicaldata_2013.txt | fix_co2_proj/global_co2historicaldata_2013.txt", + "co2historicaldata_2014.txt | fix_co2_proj/global_co2historicaldata_2014.txt", + "co2historicaldata_2015.txt | fix_co2_proj/global_co2historicaldata_2015.txt", + "co2historicaldata_2016.txt | fix_co2_proj/global_co2historicaldata_2016.txt", + "co2historicaldata_2017.txt | fix_co2_proj/global_co2historicaldata_2017.txt", + "co2historicaldata_2018.txt | fix_co2_proj/global_co2historicaldata_2018.txt", + "co2historicaldata_2019.txt | fix_co2_proj/global_co2historicaldata_2019.txt", + "co2historicaldata_2020.txt | fix_co2_proj/global_co2historicaldata_2020.txt", + "co2historicaldata_2021.txt | fix_co2_proj/global_co2historicaldata_2021.txt", + "co2historicaldata_glob.txt | global_co2historicaldata_glob.txt", + "co2monthlycyc.txt | co2monthlycyc.txt", + "global_h2oprdlos.f77 | global_h2o_pltc.f77", + "global_zorclim.1x1.grb | global_zorclim.1x1.grb", + "sfc_emissivity_idx.txt | global_sfc_emissivity_idx.txt", + "solarconstant_noaa_an.txt | global_solarconstant_noaa_an.txt", + "global_o3prdlos.f77 | ozprdlos_2015_new_sbuvO3_tclm15_nuchem.f77"] + FIXgsm_FILES_TO_COPY_TO_FIXam = [ + "global_glacier.2x2.grb", + "global_maxice.2x2.grb", + "RTGSST.1982.2012.monthly.clim.grb", + "global_snoclim.1.875.grb", + "CFSR.SEAICE.1982.2012.monthly.clim.grb", + "global_soilmgldas.t126.384.190.grb", + "seaice_newland.grb", + "global_climaeropac_global.txt", + "fix_co2_proj/global_co2historicaldata_2010.txt", + "fix_co2_proj/global_co2historicaldata_2011.txt", + "fix_co2_proj/global_co2historicaldata_2012.txt", + "fix_co2_proj/global_co2historicaldata_2013.txt", + "fix_co2_proj/global_co2historicaldata_2014.txt", + "fix_co2_proj/global_co2historicaldata_2015.txt", + "fix_co2_proj/global_co2historicaldata_2016.txt", + 
"fix_co2_proj/global_co2historicaldata_2017.txt", + "fix_co2_proj/global_co2historicaldata_2018.txt", + "fix_co2_proj/global_co2historicaldata_2019.txt", + "fix_co2_proj/global_co2historicaldata_2020.txt", + "fix_co2_proj/global_co2historicaldata_2021.txt", + "global_co2historicaldata_glob.txt", + "co2monthlycyc.txt", + "global_h2o_pltc.f77", + "global_hyblev.l65.txt", + "global_zorclim.1x1.grb", + "global_sfc_emissivity_idx.txt", + "global_solarconstant_noaa_an.txt", + "geo_em.d01.lat-lon.2.5m.HGT_M.nc", + "HGT.Beljaars_filtered.lat-lon.30s_res.nc", + "ozprdlos_2015_new_sbuvO3_tclm15_nuchem.f77"] + + set_env_var('CYCLEDIR_LINKS_TO_FIXam_FILES_MAPPING', CYCLEDIR_LINKS_TO_FIXam_FILES_MAPPING) + set_env_var('FIXgsm_FILES_TO_COPY_TO_FIXam', FIXgsm_FILES_TO_COPY_TO_FIXam) diff --git a/ush/set_predef_grid_params.py b/ush/set_predef_grid_params.py new file mode 100644 index 000000000..283851715 --- /dev/null +++ b/ush/set_predef_grid_params.py @@ -0,0 +1,64 @@ +#!/usr/bin/env python3 + +import unittest +import os + +from constants import radius_Earth,degs_per_radian + +from python_utils import process_args,import_vars,export_vars,set_env_var,get_env_var,\ + print_input_args,define_macos_utilities, load_config_file, \ + cfg_to_yaml_str + +def set_predef_grid_params(): + """ Sets grid parameters for the specified predfined grid + + Args: + None + Returns: + None + """ + # import all environement variables + import_vars() + + params_dict = load_config_file("predef_grid_params.yaml") + params_dict = params_dict[PREDEF_GRID_NAME] + + # if QUILTING = False, skip variables that start with "WRTCMP_" + if not QUILTING: + params_dict = {k: v for k,v in params_dict.items() \ + if not k.startswith("WRTCMP_") } + + # take care of special vars + special_vars = ['DT_ATMOS', 'LAYOUT_X', 'LAYOUT_Y', 'BLOCKSIZE'] + for var in special_vars: + if globals()[var] is not None: + params_dict[var] = globals()[var] + + #export variables to environment + export_vars(source_dict=params_dict) +# +#----------------------------------------------------------------------- +# +# Call the function defined above. 
+# +#----------------------------------------------------------------------- +# +if __name__ == "__main__": + set_predef_grid_params() + +class Testing(unittest.TestCase): + def test_set_predef_grid_params(self): + set_predef_grid_params() + self.assertEqual(get_env_var('GRID_GEN_METHOD'),"ESGgrid") + self.assertEqual(get_env_var('ESGgrid_LON_CTR'),-97.5) + + def setUp(self): + define_macos_utilities(); + set_env_var('DEBUG',False) + set_env_var('PREDEF_GRID_NAME',"RRFS_CONUS_3km") + set_env_var('DT_ATMOS',36) + set_env_var('LAYOUT_X',18) + set_env_var('LAYOUT_Y',36) + set_env_var('BLOCKSIZE',28) + set_env_var('QUILTING',False) + diff --git a/ush/set_predef_grid_params.sh b/ush/set_predef_grid_params.sh index dd664670a..fd4ef530b 100644 --- a/ush/set_predef_grid_params.sh +++ b/ush/set_predef_grid_params.sh @@ -109,6 +109,52 @@ case ${PREDEF_GRID_NAME} in ESGgrid_DELX="25000.0" ESGgrid_DELY="25000.0" + ESGgrid_NX="219" + ESGgrid_NY="131" + + ESGgrid_PAZI="0.0" + + ESGgrid_WIDE_HALO_WIDTH="6" + + DT_ATMOS="${DT_ATMOS:-40}" + + LAYOUT_X="${LAYOUT_X:-5}" + LAYOUT_Y="${LAYOUT_Y:-2}" + BLOCKSIZE="${BLOCKSIZE:-40}" + + if [ "$QUILTING" = "TRUE" ]; then + WRTCMP_write_groups="1" + WRTCMP_write_tasks_per_group="2" + WRTCMP_output_grid="lambert_conformal" + WRTCMP_cen_lon="${ESGgrid_LON_CTR}" + WRTCMP_cen_lat="${ESGgrid_LAT_CTR}" + WRTCMP_stdlat1="${ESGgrid_LAT_CTR}" + WRTCMP_stdlat2="${ESGgrid_LAT_CTR}" + WRTCMP_nx="217" + WRTCMP_ny="128" + WRTCMP_lon_lwr_left="-122.719528" + WRTCMP_lat_lwr_left="21.138123" + WRTCMP_dx="${ESGgrid_DELX}" + WRTCMP_dy="${ESGgrid_DELY}" + fi + ;; +# +#----------------------------------------------------------------------- +# +# The RRFS CONUS domain with ~25km cells that can be initialized from the HRRR. +# +#----------------------------------------------------------------------- +# +"RRFS_CONUScompact_25km") + + GRID_GEN_METHOD="ESGgrid" + + ESGgrid_LON_CTR="-97.5" + ESGgrid_LAT_CTR="38.5" + + ESGgrid_DELX="25000.0" + ESGgrid_DELY="25000.0" + ESGgrid_NX="202" ESGgrid_NY="116" @@ -155,11 +201,57 @@ case ${PREDEF_GRID_NAME} in ESGgrid_DELX="13000.0" ESGgrid_DELY="13000.0" + ESGgrid_NX="420" + ESGgrid_NY="252" + + ESGgrid_PAZI="0.0" + + ESGgrid_WIDE_HALO_WIDTH="6" + + DT_ATMOS="${DT_ATMOS:-45}" + + LAYOUT_X="${LAYOUT_X:-16}" + LAYOUT_Y="${LAYOUT_Y:-10}" + BLOCKSIZE="${BLOCKSIZE:-32}" + + if [ "$QUILTING" = "TRUE" ]; then + WRTCMP_write_groups="1" + WRTCMP_write_tasks_per_group=$(( 1*LAYOUT_Y )) + WRTCMP_output_grid="lambert_conformal" + WRTCMP_cen_lon="${ESGgrid_LON_CTR}" + WRTCMP_cen_lat="${ESGgrid_LAT_CTR}" + WRTCMP_stdlat1="${ESGgrid_LAT_CTR}" + WRTCMP_stdlat2="${ESGgrid_LAT_CTR}" + WRTCMP_nx="416" + WRTCMP_ny="245" + WRTCMP_lon_lwr_left="-122.719528" + WRTCMP_lat_lwr_left="21.138123" + WRTCMP_dx="${ESGgrid_DELX}" + WRTCMP_dy="${ESGgrid_DELY}" + fi + ;; +# +#----------------------------------------------------------------------- +# +# The RRFS CONUS domain with ~13km cells that can be initialized from the HRRR. 
+# +#----------------------------------------------------------------------- +# +"RRFS_CONUScompact_13km") + + GRID_GEN_METHOD="ESGgrid" + + ESGgrid_LON_CTR="-97.5" + ESGgrid_LAT_CTR="38.5" + + ESGgrid_DELX="13000.0" + ESGgrid_DELY="13000.0" + ESGgrid_NX="396" ESGgrid_NY="232" ESGgrid_PAZI="0.0" - + ESGgrid_WIDE_HALO_WIDTH="6" DT_ATMOS="${DT_ATMOS:-45}" @@ -201,6 +293,52 @@ case ${PREDEF_GRID_NAME} in ESGgrid_DELX="3000.0" ESGgrid_DELY="3000.0" + ESGgrid_NX="1820" + ESGgrid_NY="1092" + + ESGgrid_PAZI="0.0" + + ESGgrid_WIDE_HALO_WIDTH="6" + + DT_ATMOS="${DT_ATMOS:-36}" + + LAYOUT_X="${LAYOUT_X:-28}" + LAYOUT_Y="${LAYOUT_Y:-28}" + BLOCKSIZE="${BLOCKSIZE:-29}" + + if [ "$QUILTING" = "TRUE" ]; then + WRTCMP_write_groups="1" + WRTCMP_write_tasks_per_group=$(( 1*LAYOUT_Y )) + WRTCMP_output_grid="lambert_conformal" + WRTCMP_cen_lon="${ESGgrid_LON_CTR}" + WRTCMP_cen_lat="${ESGgrid_LAT_CTR}" + WRTCMP_stdlat1="${ESGgrid_LAT_CTR}" + WRTCMP_stdlat2="${ESGgrid_LAT_CTR}" + WRTCMP_nx="1799" + WRTCMP_ny="1059" + WRTCMP_lon_lwr_left="-122.719528" + WRTCMP_lat_lwr_left="21.138123" + WRTCMP_dx="${ESGgrid_DELX}" + WRTCMP_dy="${ESGgrid_DELY}" + fi + ;; +# +#----------------------------------------------------------------------- +# +# The RRFS CONUS domain with ~3km cells that can be initialized from the HRRR. +# +#----------------------------------------------------------------------- +# +"RRFS_CONUScompact_3km") + + GRID_GEN_METHOD="ESGgrid" + + ESGgrid_LON_CTR="-97.5" + ESGgrid_LAT_CTR="38.5" + + ESGgrid_DELX="3000.0" + ESGgrid_DELY="3000.0" + ESGgrid_NX="1748" ESGgrid_NY="1038" @@ -251,7 +389,7 @@ case ${PREDEF_GRID_NAME} in ESGgrid_NY="600" ESGgrid_PAZI="0.0" - + ESGgrid_WIDE_HALO_WIDTH="6" DT_ATMOS="${DT_ATMOS:-40}" @@ -279,6 +417,54 @@ case ${PREDEF_GRID_NAME} in # #----------------------------------------------------------------------- # +# A subconus domain over Indianapolis, Indiana with ~3km cells. This is +# mostly for testing on a 3km grid with a much small number of cells than +# on the full CONUS. +# +#----------------------------------------------------------------------- +# +"SUBCONUS_Ind_3km") + + GRID_GEN_METHOD="ESGgrid" + + ESGgrid_LON_CTR="-86.16" + ESGgrid_LAT_CTR="39.77" + + ESGgrid_DELX="3000.0" + ESGgrid_DELY="3000.0" + + ESGgrid_NX="200" + ESGgrid_NY="200" + + ESGgrid_PAZI="0.0" + + ESGgrid_WIDE_HALO_WIDTH="6" + + DT_ATMOS="${DT_ATMOS:-40}" + + LAYOUT_X="${LAYOUT_X:-5}" + LAYOUT_Y="${LAYOUT_Y:-5}" + BLOCKSIZE="${BLOCKSIZE:-40}" + + if [ "$QUILTING" = "TRUE" ]; then + WRTCMP_write_groups="1" + WRTCMP_write_tasks_per_group=$(( 1*LAYOUT_Y )) + WRTCMP_output_grid="lambert_conformal" + WRTCMP_cen_lon="${ESGgrid_LON_CTR}" + WRTCMP_cen_lat="${ESGgrid_LAT_CTR}" + WRTCMP_stdlat1="${ESGgrid_LAT_CTR}" + WRTCMP_stdlat2="${ESGgrid_LAT_CTR}" + WRTCMP_nx="197" + WRTCMP_ny="197" + WRTCMP_lon_lwr_left="-89.47120417" + WRTCMP_lat_lwr_left="37.07809642" + WRTCMP_dx="${ESGgrid_DELX}" + WRTCMP_dy="${ESGgrid_DELY}" + fi + ;; +# +#----------------------------------------------------------------------- +# # The RRFS Alaska domain with ~13km cells. # # Note: @@ -300,7 +486,7 @@ case ${PREDEF_GRID_NAME} in ESGgrid_NY="240" ESGgrid_PAZI="0.0" - + ESGgrid_WIDE_HALO_WIDTH="6" # DT_ATMOS="${DT_ATMOS:-50}" @@ -420,7 +606,7 @@ case ${PREDEF_GRID_NAME} in ESGgrid_NY="1020" ESGgrid_PAZI="0.0" - + ESGgrid_WIDE_HALO_WIDTH="6" # DT_ATMOS="${DT_ATMOS:-50}" @@ -449,6 +635,60 @@ case ${PREDEF_GRID_NAME} in # #----------------------------------------------------------------------- # +# The WoFS domain with ~3km cells. 
+# +# Note: +# The WoFS domain will generate a 301 x 301 output grid (WRITE COMPONENT) and +# will eventually be movable (ESGgrid_LON_CTR/ESGgrid_LAT_CTR). A python script +# python_utils/fv3write_parms_lambert will be useful to determine +# WRTCMP_lon_lwr_left and WRTCMP_lat_lwr_left locations (only for Lambert map +# projection currently) of the quilting output when the domain location is +# moved. Later, it should be integrated into the workflow. +# +#----------------------------------------------------------------------- +# +"WoFS_3km") + + GRID_GEN_METHOD="ESGgrid" + + ESGgrid_LON_CTR="-97.5" + ESGgrid_LAT_CTR="38.5" + + ESGgrid_DELX="3000.0" + ESGgrid_DELY="3000.0" + + ESGgrid_NX="361" + ESGgrid_NY="361" + + ESGgrid_PAZI="0.0" + + ESGgrid_WIDE_HALO_WIDTH="6" + + DT_ATMOS="${DT_ATMOS:-20}" + + LAYOUT_X="${LAYOUT_X:-18}" + LAYOUT_Y="${LAYOUT_Y:-12}" + BLOCKSIZE="${BLOCKSIZE:-30}" + + if [ "$QUILTING" = "TRUE" ]; then + WRTCMP_write_groups="1" + WRTCMP_write_tasks_per_group=$(( 1*LAYOUT_Y )) + WRTCMP_output_grid="lambert_conformal" + WRTCMP_cen_lon="${ESGgrid_LON_CTR}" + WRTCMP_cen_lat="${ESGgrid_LAT_CTR}" + WRTCMP_stdlat1="${ESGgrid_LAT_CTR}" + WRTCMP_stdlat2="${ESGgrid_LAT_CTR}" + WRTCMP_nx="301" + WRTCMP_ny="301" + WRTCMP_lon_lwr_left="-102.3802487" + WRTCMP_lat_lwr_left="34.3407918" + WRTCMP_dx="${ESGgrid_DELX}" + WRTCMP_dy="${ESGgrid_DELY}" + fi + ;; +# +#----------------------------------------------------------------------- +# # A CONUS domain of GFDLgrid type with ~25km cells. # # Note: @@ -1121,60 +1361,16 @@ case ${PREDEF_GRID_NAME} in # "RRFS_NA_13km") -# if [ "${GRID_GEN_METHOD}" = "GFDLgrid" ]; then -# -# GFDLgrid_LON_T6_CTR="-106.0" -# GFDLgrid_LAT_T6_CTR="54.0" -# GFDLgrid_STRETCH_FAC="0.63" -# GFDLgrid_RES="384" -# GFDLgrid_REFINE_RATIO="3" -# -# num_margin_cells_T6_left="10" -# GFDLgrid_ISTART_OF_RGNL_DOM_ON_T6G=$(( num_margin_cells_T6_left + 1 )) -# -# num_margin_cells_T6_right="10" -# GFDLgrid_IEND_OF_RGNL_DOM_ON_T6G=$(( GFDLgrid_RES - num_margin_cells_T6_right )) -# -# num_margin_cells_T6_bottom="10" -# GFDLgrid_JSTART_OF_RGNL_DOM_ON_T6G=$(( num_margin_cells_T6_bottom + 1 )) -# -# num_margin_cells_T6_top="10" -# GFDLgrid_JEND_OF_RGNL_DOM_ON_T6G=$(( GFDLgrid_RES - num_margin_cells_T6_top )) -# -# GFDLgrid_USE_GFDLgrid_RES_IN_FILENAMES="FALSE" -# -# DT_ATMOS="50" -# -# LAYOUT_X="14" -# LAYOUT_Y="14" -# BLOCKSIZE="26" -# -# if [ "$QUILTING" = "TRUE" ]; then -# WRTCMP_write_groups="1" -# WRTCMP_write_tasks_per_group="14" -# WRTCMP_output_grid="rotated_latlon" -# WRTCMP_cen_lon="${GFDLgrid_LON_T6_CTR}" -# WRTCMP_cen_lat="${GFDLgrid_LAT_T6_CTR}" -# WRTCMP_lon_lwr_left="-57.9926" -# WRTCMP_lat_lwr_left="-50.74344" -# WRTCMP_lon_upr_rght="57.99249" -# WRTCMP_lat_upr_rght="50.74344" -# WRTCMP_dlon="0.1218331" -# WRTCMP_dlat="0.121833" -# fi -# -# elif [ "${GRID_GEN_METHOD}" = "ESGgrid" ]; then - GRID_GEN_METHOD="ESGgrid" - ESGgrid_LON_CTR="-106.0" - ESGgrid_LAT_CTR="54.0" + ESGgrid_LON_CTR="-112.5" + ESGgrid_LAT_CTR="55.0" ESGgrid_DELX="13000.0" ESGgrid_DELY="13000.0" - ESGgrid_NX="960" - ESGgrid_NY="960" + ESGgrid_NX="912" + ESGgrid_NY="623" ESGgrid_PAZI="0.0" @@ -1190,12 +1386,12 @@ case ${PREDEF_GRID_NAME} in WRTCMP_write_groups="1" WRTCMP_write_tasks_per_group="16" WRTCMP_output_grid="rotated_latlon" - WRTCMP_cen_lon="${ESGgrid_LON_CTR}" - WRTCMP_cen_lat="${ESGgrid_LAT_CTR}" - WRTCMP_lon_lwr_left="-55.82538869" - WRTCMP_lat_lwr_left="-48.57685654" - WRTCMP_lon_upr_rght="55.82538869" - WRTCMP_lat_upr_rght="48.57685654" + WRTCMP_cen_lon="-113.0" #"${ESGgrid_LON_CTR}" + 
WRTCMP_cen_lat="55.0" #"${ESGgrid_LAT_CTR}" + WRTCMP_lon_lwr_left="-61.0" + WRTCMP_lat_lwr_left="-37.0" + WRTCMP_lon_upr_rght="61.0" + WRTCMP_lat_upr_rght="37.0" WRTCMP_dlon=$( printf "%.9f" $( bc -l <<< "(${ESGgrid_DELX}/${radius_Earth})*${degs_per_radian}" ) ) WRTCMP_dlat=$( printf "%.9f" $( bc -l <<< "(${ESGgrid_DELY}/${radius_Earth})*${degs_per_radian}" ) ) fi @@ -1212,35 +1408,35 @@ case ${PREDEF_GRID_NAME} in GRID_GEN_METHOD="ESGgrid" - ESGgrid_LON_CTR=-107.5 - ESGgrid_LAT_CTR=51.5 + ESGgrid_LON_CTR=-112.5 + ESGgrid_LAT_CTR=55.0 ESGgrid_DELX="3000.0" ESGgrid_DELY="3000.0" - ESGgrid_NX=3640 - ESGgrid_NY=2520 + ESGgrid_NX=3950 + ESGgrid_NY=2700 - ESGgrid_PAZI="-13.0" + ESGgrid_PAZI="0.0" ESGgrid_WIDE_HALO_WIDTH=6 DT_ATMOS="${DT_ATMOS:-36}" - LAYOUT_X="${LAYOUT_X:-18}" # 40 - EMC operational configuration - LAYOUT_Y="${LAYOUT_Y:-36}" # 45 - EMC operational configuration + LAYOUT_X="${LAYOUT_X:-20}" # 40 - EMC operational configuration + LAYOUT_Y="${LAYOUT_Y:-35}" # 45 - EMC operational configuration BLOCKSIZE="${BLOCKSIZE:-28}" if [ "$QUILTING" = "TRUE" ]; then WRTCMP_write_groups="1" WRTCMP_write_tasks_per_group="144" WRTCMP_output_grid="rotated_latlon" - WRTCMP_cen_lon="-112.0" #${ESGgrid_LON_CTR}" - WRTCMP_cen_lat="48.0" #${ESGgrid_LAT_CTR}" - WRTCMP_lon_lwr_left="-51.0" - WRTCMP_lat_lwr_left="-33.0" - WRTCMP_lon_upr_rght="51.0" - WRTCMP_lat_upr_rght="33.0" + WRTCMP_cen_lon="-113.0" #"${ESGgrid_LON_CTR}" + WRTCMP_cen_lat="55.0" #"${ESGgrid_LAT_CTR}" + WRTCMP_lon_lwr_left="-61.0" + WRTCMP_lat_lwr_left="-37.0" + WRTCMP_lon_upr_rght="61.0" + WRTCMP_lat_upr_rght="37.0" WRTCMP_dlon="0.025" #$( printf "%.9f" $( bc -l <<< "(${ESGgrid_DELX}/${radius_Earth})*${degs_per_radian}" ) ) WRTCMP_dlat="0.025" #$( printf "%.9f" $( bc -l <<< "(${ESGgrid_DELY}/${radius_Earth})*${degs_per_radian}" ) ) fi diff --git a/ush/set_thompson_mp_fix_files.py b/ush/set_thompson_mp_fix_files.py new file mode 100644 index 000000000..9a1703e09 --- /dev/null +++ b/ush/set_thompson_mp_fix_files.py @@ -0,0 +1,176 @@ +#!/usr/bin/env python3 + +import os +import unittest +from textwrap import dedent + +from python_utils import import_vars,export_vars,set_env_var,list_to_str,\ + print_input_args,print_info_msg, print_err_msg_exit,\ + define_macos_utilities,load_xml_file,has_tag_with_value + +def set_thompson_mp_fix_files(ccpp_phys_suite_fp, thompson_mp_climo_fn): + """ Function that first checks whether the Thompson + microphysics parameterization is being called by the selected physics + suite. If not, it sets the output variable whose name is specified by + output_varname_sdf_uses_thompson_mp to "FALSE" and exits. If so, it + sets this variable to "TRUE" and modifies the workflow arrays + FIXgsm_FILES_TO_COPY_TO_FIXam and CYCLEDIR_LINKS_TO_FIXam_FILES_MAPPING + to ensure that fixed files needed by the Thompson microphysics + parameterization are copied to the FIXam directory and that appropriate + symlinks to these files are created in the run directories. + + Args: + ccpp_phys_suite_fp: full path to CCPP physics suite + thompson_mp_climo_fn: netcdf file for thompson microphysics + Returns: + boolean: sdf_uses_thompson_mp + """ + + print_input_args(locals()) + + # import all environment variables + import_vars() + + # + #----------------------------------------------------------------------- + # + # Check the suite definition file to see whether the Thompson microphysics + # parameterization is being used. 
+ # + #----------------------------------------------------------------------- + # + tree = load_xml_file(ccpp_phys_suite_fp) + sdf_uses_thompson_mp = has_tag_with_value(tree, "scheme", "mp_thompson") + # + #----------------------------------------------------------------------- + # + # If the Thompson microphysics parameterization is being used, then... + # + #----------------------------------------------------------------------- + # + if sdf_uses_thompson_mp: + # + #----------------------------------------------------------------------- + # + # Append the names of the fixed files needed by the Thompson microphysics + # parameterization to the workflow array FIXgsm_FILES_TO_COPY_TO_FIXam, + # and append to the workflow array CYCLEDIR_LINKS_TO_FIXam_FILES_MAPPING + # the mappings between these files and the names of the corresponding + # symlinks that need to be created in the run directories. + # + #----------------------------------------------------------------------- + # + thompson_mp_fix_files=[ + "CCN_ACTIVATE.BIN", + "freezeH2O.dat", + "qr_acr_qg.dat", + "qr_acr_qs.dat", + "qr_acr_qgV2.dat", + "qr_acr_qsV2.dat" + ] + + if (EXTRN_MDL_NAME_ICS != "HRRR" and EXTRN_MDL_NAME_ICS != "RAP") or \ + (EXTRN_MDL_NAME_LBCS != "HRRR" and EXTRN_MDL_NAME_LBCS != "RAP"): + thompson_mp_fix_files.append(thompson_mp_climo_fn) + + FIXgsm_FILES_TO_COPY_TO_FIXam.extend(thompson_mp_fix_files) + + for fix_file in thompson_mp_fix_files: + mapping=f"{fix_file} | {fix_file}" + CYCLEDIR_LINKS_TO_FIXam_FILES_MAPPING.append(mapping) + + msg=dedent(f''' + Since the Thompson microphysics parameterization is being used by this + physics suite (CCPP_PHYS_SUITE), the names of the fixed files needed by + this scheme have been appended to the array FIXgsm_FILES_TO_COPY_TO_FIXam, + and the mappings between these files and the symlinks that need to be + created in the cycle directories have been appended to the array + CYCLEDIR_LINKS_TO_FIXam_FILES_MAPPING. 
After these modifications, the + values of these parameters are as follows: + + ''') + msg+=dedent(f''' + CCPP_PHYS_SUITE = \"{CCPP_PHYS_SUITE}\" + + FIXgsm_FILES_TO_COPY_TO_FIXam = {list_to_str(FIXgsm_FILES_TO_COPY_TO_FIXam)} + ''') + msg+=dedent(f''' + CYCLEDIR_LINKS_TO_FIXam_FILES_MAPPING = {list_to_str(CYCLEDIR_LINKS_TO_FIXam_FILES_MAPPING)} + ''') + print_info_msg(msg) + + EXPORTS = [ "CYCLEDIR_LINKS_TO_FIXam_FILES_MAPPING", "FIXgsm_FILES_TO_COPY_TO_FIXam" ] + export_vars(env_vars=EXPORTS) + + return sdf_uses_thompson_mp + +class Testing(unittest.TestCase): + def test_set_thompson_mp_fix_files(self): + self.assertEqual( True, + set_thompson_mp_fix_files(ccpp_phys_suite_fp=f"test_data{os.sep}suite_FV3_GSD_SAR.xml", + thompson_mp_climo_fn="Thompson_MP_MONTHLY_CLIMO.nc") ) + def setUp(self): + define_macos_utilities(); + set_env_var('DEBUG',True) + set_env_var('VERBOSE',True) + set_env_var('EXTRN_MDL_NAME_ICS',"FV3GFS") + set_env_var('EXTRN_MDL_NAME_LBCS',"FV3GFS") + set_env_var('CCPP_PHYS_SUITE',"FV3_GSD_SAR") + + CYCLEDIR_LINKS_TO_FIXam_FILES_MAPPING = [ + "aerosol.dat | global_climaeropac_global.txt", + "co2historicaldata_2010.txt | fix_co2_proj/global_co2historicaldata_2010.txt", + "co2historicaldata_2011.txt | fix_co2_proj/global_co2historicaldata_2011.txt", + "co2historicaldata_2012.txt | fix_co2_proj/global_co2historicaldata_2012.txt", + "co2historicaldata_2013.txt | fix_co2_proj/global_co2historicaldata_2013.txt", + "co2historicaldata_2014.txt | fix_co2_proj/global_co2historicaldata_2014.txt", + "co2historicaldata_2015.txt | fix_co2_proj/global_co2historicaldata_2015.txt", + "co2historicaldata_2016.txt | fix_co2_proj/global_co2historicaldata_2016.txt", + "co2historicaldata_2017.txt | fix_co2_proj/global_co2historicaldata_2017.txt", + "co2historicaldata_2018.txt | fix_co2_proj/global_co2historicaldata_2018.txt", + "co2historicaldata_2019.txt | fix_co2_proj/global_co2historicaldata_2019.txt", + "co2historicaldata_2020.txt | fix_co2_proj/global_co2historicaldata_2020.txt", + "co2historicaldata_2021.txt | fix_co2_proj/global_co2historicaldata_2021.txt", + "co2historicaldata_glob.txt | global_co2historicaldata_glob.txt", + "co2monthlycyc.txt | co2monthlycyc.txt", + "global_h2oprdlos.f77 | global_h2o_pltc.f77", + "global_zorclim.1x1.grb | global_zorclim.1x1.grb", + "sfc_emissivity_idx.txt | global_sfc_emissivity_idx.txt", + "solarconstant_noaa_an.txt | global_solarconstant_noaa_an.txt", + "global_o3prdlos.f77 | ozprdlos_2015_new_sbuvO3_tclm15_nuchem.f77"] + + FIXgsm_FILES_TO_COPY_TO_FIXam = [ + "global_glacier.2x2.grb", + "global_maxice.2x2.grb", + "RTGSST.1982.2012.monthly.clim.grb", + "global_snoclim.1.875.grb", + "CFSR.SEAICE.1982.2012.monthly.clim.grb", + "global_soilmgldas.t126.384.190.grb", + "seaice_newland.grb", + "global_climaeropac_global.txt", + "fix_co2_proj/global_co2historicaldata_2010.txt", + "fix_co2_proj/global_co2historicaldata_2011.txt", + "fix_co2_proj/global_co2historicaldata_2012.txt", + "fix_co2_proj/global_co2historicaldata_2013.txt", + "fix_co2_proj/global_co2historicaldata_2014.txt", + "fix_co2_proj/global_co2historicaldata_2015.txt", + "fix_co2_proj/global_co2historicaldata_2016.txt", + "fix_co2_proj/global_co2historicaldata_2017.txt", + "fix_co2_proj/global_co2historicaldata_2018.txt", + "fix_co2_proj/global_co2historicaldata_2019.txt", + "fix_co2_proj/global_co2historicaldata_2020.txt", + "fix_co2_proj/global_co2historicaldata_2021.txt", + "global_co2historicaldata_glob.txt", + "co2monthlycyc.txt", + "global_h2o_pltc.f77", + "global_hyblev.l65.txt", + 
"global_zorclim.1x1.grb", + "global_sfc_emissivity_idx.txt", + "global_solarconstant_noaa_an.txt", + "geo_em.d01.lat-lon.2.5m.HGT_M.nc", + "HGT.Beljaars_filtered.lat-lon.30s_res.nc", + "ozprdlos_2015_new_sbuvO3_tclm15_nuchem.f77"] + + set_env_var('CYCLEDIR_LINKS_TO_FIXam_FILES_MAPPING', CYCLEDIR_LINKS_TO_FIXam_FILES_MAPPING) + set_env_var('FIXgsm_FILES_TO_COPY_TO_FIXam', FIXgsm_FILES_TO_COPY_TO_FIXam) + diff --git a/ush/setup.py b/ush/setup.py new file mode 100644 index 000000000..2e6cb912c --- /dev/null +++ b/ush/setup.py @@ -0,0 +1,2220 @@ +#!/usr/bin/env python3 + +import os +import sys +import datetime +from textwrap import dedent + +from python_utils import cd_vrfy, mkdir_vrfy, rm_vrfy, check_var_valid_value,\ + lowercase,uppercase,check_for_preexist_dir_file,\ + list_to_str, type_to_str, \ + import_vars, export_vars, get_env_var, print_info_msg,\ + print_err_msg_exit, load_config_file, cfg_to_shell_str,\ + load_shell_config, load_ini_config, get_ini_value + +from set_cycle_dates import set_cycle_dates +from set_predef_grid_params import set_predef_grid_params +from set_ozone_param import set_ozone_param +from set_extrn_mdl_params import set_extrn_mdl_params +from set_gridparams_ESGgrid import set_gridparams_ESGgrid +from set_gridparams_GFDLgrid import set_gridparams_GFDLgrid +from link_fix import link_fix +from check_ruc_lsm import check_ruc_lsm +from set_thompson_mp_fix_files import set_thompson_mp_fix_files + +def setup(): + """ Function that sets a secondary set + of parameters needed by the various scripts that are called by the + FV3-LAM rocoto community workflow. This secondary set of parameters is + calculated using the primary set of user-defined parameters in the de- + fault and custom experiment/workflow configuration scripts (whose file + names are defined below). This script then saves both sets of parame- + ters in a global variable definitions file (really a bash script) in + the experiment directory. This file then gets sourced by the various + scripts called by the tasks in the workflow. + + Args: + None + Returns: + None + """ + + ushdir=os.path.dirname(os.path.abspath(__file__)) + cd_vrfy(ushdir) + + # import all environment variables + import_vars() + + # print message + print_info_msg(f''' + ======================================================================== + Starting function setup() in \"{os.path.basename(__file__)}\"... + ========================================================================''') + # + #----------------------------------------------------------------------- + # + # Set the name of the configuration file containing default values for + # the experiment/workflow variables. Then source the file. + # + #----------------------------------------------------------------------- + # + EXPT_DEFAULT_CONFIG_FN="config_defaults.yaml" + cfg_d = load_config_file(EXPT_DEFAULT_CONFIG_FN) + import_vars(dictionary=cfg_d) + # + #----------------------------------------------------------------------- + # + # If a user-specified configuration file exists, source it. This file + # contains user-specified values for a subset of the experiment/workflow + # variables that override their default values. Note that the user- + # specified configuration file is not tracked by the repository, whereas + # the default configuration file is tracked. 
+ # + #----------------------------------------------------------------------- + # + if os.path.exists(EXPT_CONFIG_FN): + # + # We require that the variables being set in the user-specified configu- + # ration file have counterparts in the default configuration file. This + # is so that we do not introduce new variables in the user-specified + # configuration file without also officially introducing them in the de- + # fault configuration file. Thus, before sourcing the user-specified + # configuration file, we check that all variables in the user-specified + # configuration file are also assigned default values in the default + # configuration file. + # + cfg_u = load_config_file(os.path.join(ushdir,EXPT_CONFIG_FN)) + cfg_d.update(cfg_u) + if cfg_u.items() > cfg_d.items(): + print_err_msg_exit(f''' + User specified config file {EXPT_CONGIG_FN} has variables that are + not defined in the default configuration file {EXPT_DEFAULT_CONFIG_FN}''') + import_vars(dictionary=cfg_u) + + # + #----------------------------------------------------------------------- + # + # If PREDEF_GRID_NAME is set to a non-empty string, set or reset parameters + # according to the predefined domain specified. + # + #----------------------------------------------------------------------- + # + + # export env vars before calling another module + export_vars() + + if PREDEF_GRID_NAME: + set_predef_grid_params() + + import_vars() + + # + #----------------------------------------------------------------------- + # + # Make sure different variables are set to their corresponding valid value + # + #----------------------------------------------------------------------- + # + global VERBOSE + if DEBUG and not VERBOSE: + print_info_msg(''' + Resetting VERBOSE to \"TRUE\" because DEBUG has been set to \"TRUE\"...''') + VERBOSE=False + + # + #----------------------------------------------------------------------- + # + # Set magnitude of stochastic ad-hoc schemes to -999.0 if they are not + # being used. This is required at the moment, since "do_shum/sppt/skeb" + # does not override the use of the scheme unless the magnitude is also + # specifically set to -999.0. If all "do_shum/sppt/skeb" are set to + # "false," then none will run, regardless of the magnitude values. + # + #----------------------------------------------------------------------- + # + global SHUM_MAG, SKEB_MAG, SPPT_MAG + if not DO_SHUM: + SHUM_MAG=-999.0 + if not DO_SKEB: + SKEB_MAG=-999.0 + if not DO_SPPT: + SPPT_MAG=-999.0 + # + #----------------------------------------------------------------------- + # + # If running with SPP in MYNN PBL, MYNN SFC, GSL GWD, Thompson MP, or + # RRTMG, count the number of entries in SPP_VAR_LIST to correctly set + # N_VAR_SPP, otherwise set it to zero. + # + #----------------------------------------------------------------------- + # + global N_VAR_SPP + N_VAR_SPP=0 + if DO_SPP: + N_VAR_SPP = len(SPP_VAR_LIST) + # + #----------------------------------------------------------------------- + # + # If running with Noah or RUC-LSM SPP, count the number of entries in + # LSM_SPP_VAR_LIST to correctly set N_VAR_LNDP, otherwise set it to zero. + # Also set LNDP_TYPE to 2 for LSM SPP, otherwise set it to zero. Finally, + # initialize an "FHCYC_LSM_SPP" variable to 0 and set it to 999 if LSM SPP + # is turned on. This requirement is necessary since LSM SPP cannot run with + # FHCYC=0 at the moment, but FHCYC cannot be set to anything less than the + # length of the forecast either. 
A bug fix will be submitted to + # ufs-weather-model soon, at which point, this requirement can be removed + # from regional_workflow. + # + #----------------------------------------------------------------------- + # + global N_VAR_LNDP, LNDP_TYPE, FHCYC_LSM_SPP_OR_NOT + N_VAR_LNDP=0 + LNDP_TYPE=0 + FHCYC_LSM_SPP_OR_NOT=0 + if DO_LSM_SPP: + N_VAR_LNDP=len(LSM_SPP_VAR_LIST) + LNDP_TYPE=2 + FHCYC_LSM_SPP_OR_NOT=999 + # + #----------------------------------------------------------------------- + # + # If running with SPP, confirm that each SPP-related namelist value + # contains the same number of entries as N_VAR_SPP (set above to be equal + # to the number of entries in SPP_VAR_LIST). + # + #----------------------------------------------------------------------- + # + if DO_SPP: + if ( len(SPP_MAG_LIST) != N_VAR_SPP ) or \ + ( len(SPP_LSCALE) != N_VAR_SPP) or \ + ( len(SPP_TSCALE) != N_VAR_SPP) or \ + ( len(SPP_SIGTOP1) != N_VAR_SPP) or \ + ( len(SPP_SIGTOP2) != N_VAR_SPP) or \ + ( len(SPP_STDDEV_CUTOFF) != N_VAR_SPP) or \ + ( len(ISEED_SPP) != N_VAR_SPP): + print_err_msg_exit(f''' + All MYNN PBL, MYNN SFC, GSL GWD, Thompson MP, or RRTMG SPP-related namelist + variables set in {CONFIG_FN} must be equal in number of entries to what is + found in SPP_VAR_LIST: + Number of entries in SPP_VAR_LIST = \"{len(SPP_VAR_LIST)}\"''') + # + #----------------------------------------------------------------------- + # + # If running with LSM SPP, confirm that each LSM SPP-related namelist + # value contains the same number of entries as N_VAR_LNDP (set above to + # be equal to the number of entries in LSM_SPP_VAR_LIST). + # + #----------------------------------------------------------------------- + # + if DO_LSM_SPP: + if ( len(LSM_SPP_MAG_LIST) != N_VAR_LNDP) or \ + ( len(LSM_SPP_LSCALE) != N_VAR_LNDP) or \ + ( len(LSM_SPP_TSCALE) != N_VAR_LNDP): + print_err_msg_exit(f''' + All Noah or RUC-LSM SPP-related namelist variables (except ISEED_LSM_SPP) + set in {CONFIG_FN} must be equal in number of entries to what is found in + SPP_VAR_LIST: + Number of entries in SPP_VAR_LIST = \"{len(LSM_SPP_VAR_LIST)}\"''') + # + # The current script should be located in the ush subdirectory of the + # workflow directory. Thus, the workflow directory is the one above the + # directory of the current script. + # + SR_WX_APP_TOP_DIR=os.path.abspath( os.path.dirname(__file__) + \ + os.sep + os.pardir + os.sep + os.pardir) + + # + #----------------------------------------------------------------------- + # + # Set the base directories in which codes obtained from external reposi- + # tories (using the manage_externals tool) are placed. Obtain the rela- + # tive paths to these directories by reading them in from the manage_ex- + # ternals configuration file. (Note that these are relative to the lo- + # cation of the configuration file.) Then form the full paths to these + # directories. Finally, make sure that each of these directories actu- + # ally exists. 
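A minimal standard-library sketch of pulling an external's local_path out of an INI-style Externals.cfg (the file contents below are illustrative; the patch itself uses load_ini_config/get_ini_value):

```python
import configparser

# Illustrative Externals.cfg contents; only the section name comes from the patch.
sample = """
[regional_workflow]
local_path = regional_workflow
"""

cfg = configparser.ConfigParser()
cfg.read_string(sample)          # the workflow reads the real Externals.cfg instead
homerrfs_rel = cfg.get("regional_workflow", "local_path", fallback=None)
print(homerrfs_rel)              # -> regional_workflow
```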
+ # + #----------------------------------------------------------------------- + # + mng_extrns_cfg_fn = os.path.join(SR_WX_APP_TOP_DIR, "Externals.cfg") + try: + mng_extrns_cfg_fn = os.readlink(mng_extrns_cfg_fn) + except: + pass + property_name="local_path" + cfg = load_ini_config(mng_extrns_cfg_fn) + # + # Get the path to the workflow scripts + # + external_name="regional_workflow" + HOMErrfs = get_ini_value(cfg, external_name, property_name) + + if not HOMErrfs: + print_err_msg_exit(f''' + Externals.cfg does not contain "{external_name}".''') + + HOMErrfs = os.path.join(SR_WX_APP_TOP_DIR, HOMErrfs) + # + # Get the base directory of the FV3 forecast model code. + # + external_name=FCST_MODEL + UFS_WTHR_MDL_DIR = get_ini_value(cfg, external_name,property_name) + + if not UFS_WTHR_MDL_DIR: + print_err_msg_exit(f''' + Externals.cfg does not contain "{external_name}".''') + + UFS_WTHR_MDL_DIR=os.path.join(SR_WX_APP_TOP_DIR, UFS_WTHR_MDL_DIR) + if not os.path.exists(UFS_WTHR_MDL_DIR): + print_err_msg_exit(f''' + The base directory in which the FV3 source code should be located + (UFS_WTHR_MDL_DIR) does not exist: + UFS_WTHR_MDL_DIR = \"{UFS_WTHR_MDL_DIR}\" + Please clone the external repository containing the code in this directory, + build the executable, and then rerun the workflow.''') + # + # Get the base directory of the UFS_UTILS codes. + # + external_name="ufs_utils" + UFS_UTILS_DIR=get_ini_value(cfg, external_name, property_name) + + if not UFS_UTILS_DIR: + print_err_msg_exit(f''' + Externals.cfg does not contain "{external_name}".''') + + UFS_UTILS_DIR=os.path.join(SR_WX_APP_TOP_DIR, UFS_UTILS_DIR) + if not os.path.exists(UFS_UTILS_DIR): + print_err_msg_exit(f''' + The base directory in which the UFS utilities source codes should be lo- + cated (UFS_UTILS_DIR) does not exist: + UFS_UTILS_DIR = \"{UFS_UTILS_DIR}\" + Please clone the external repository containing the code in this direct- + ory, build the executables, and then rerun the workflow.''') + # + # Get the base directory of the UPP code. 
+ # + external_name="UPP" + UPP_DIR=get_ini_value(cfg,external_name,property_name ) + if not UPP_DIR: + print_err_msg_exit(f''' + Externals.cfg does not contain "{external_name}".''') + + UPP_DIR=os.path.join(SR_WX_APP_TOP_DIR, UPP_DIR) + if not os.path.exists(UPP_DIR): + print_err_msg_exit(f''' + The base directory in which the UPP source code should be located + (UPP_DIR) does not exist: + UPP_DIR = \"{UPP_DIR}\" + Please clone the external repository containing the code in this directory, + build the executable, and then rerun the workflow.''') + + # + # Define some other useful paths + # + global USHDIR, SCRIPTSDIR, JOBSDIR,SORCDIR, SRC_DIR, PARMDIR, MODULES_DIR, EXECDIR, TEMPLATE_DIR, \ + VX_CONFIG_DIR, METPLUS_CONF, MET_CONFIG + + USHDIR = os.path.join(HOMErrfs,"ush") + SCRIPTSDIR = os.path.join(HOMErrfs,"scripts") + JOBSDIR = os.path.join(HOMErrfs,"jobs") + SORCDIR = os.path.join(HOMErrfs,"sorc") + SRC_DIR = os.path.join(SR_WX_APP_TOP_DIR,"src") + PARMDIR = os.path.join(HOMErrfs,"parm") + MODULES_DIR = os.path.join(HOMErrfs,"modulefiles") + EXECDIR = os.path.join(SR_WX_APP_TOP_DIR,"bin") + TEMPLATE_DIR = os.path.join(USHDIR,"templates") + VX_CONFIG_DIR = os.path.join(TEMPLATE_DIR,"parm") + METPLUS_CONF = os.path.join(TEMPLATE_DIR,"parm","metplus") + MET_CONFIG = os.path.join(TEMPLATE_DIR,"parm","met") + + # + #----------------------------------------------------------------------- + # + # Source the machine config file containing architechture information, + # queue names, and supported input file paths. + # + #----------------------------------------------------------------------- + # + global MACHINE + global MACHINE_FILE + global FIXgsm, FIXaer, FIXlut, TOPO_DIR, SFC_CLIMO_INPUT_DIR, DOMAIN_PREGEN_BASEDIR, \ + RELATIVE_LINK_FLAG, WORKFLOW_MANAGER, NCORES_PER_NODE, SCHED, \ + QUEUE_DEFAULT, QUEUE_HPSS, QUEUE_FCST, \ + PARTITION_DEFAULT, PARTITION_HPSS, PARTITION_FCST + + MACHINE = uppercase(MACHINE) + RELATIVE_LINK_FLAG="--relative" + MACHINE_FILE=MACHINE_FILE or os.path.join(USHDIR,"machine",f"{lowercase(MACHINE)}.sh") + machine_cfg = load_shell_config(MACHINE_FILE) + import_vars(dictionary=machine_cfg) + + if not NCORES_PER_NODE: + print_err_msg_exit(f""" + NCORES_PER_NODE has not been specified in the file {MACHINE_FILE} + Please ensure this value has been set for your desired platform. """) + + if not (FIXgsm and FIXaer and FIXlut and TOPO_DIR and SFC_CLIMO_INPUT_DIR): + print_err_msg_exit(f''' + One or more fix file directories have not been specified for this machine: + MACHINE = \"{MACHINE}\" + FIXgsm = \"{FIXgsm or ""} + FIXaer = \"{FIXaer or ""} + FIXlut = \"{FIXlut or ""} + TOPO_DIR = \"{TOPO_DIR or ""} + SFC_CLIMO_INPUT_DIR = \"{SFC_CLIMO_INPUT_DIR or ""} + DOMAIN_PREGEN_BASEDIR = \"{DOMAIN_PREGEN_BASEDIR or ""} + You can specify the missing location(s) in config.sh''') + + # + #----------------------------------------------------------------------- + # + # Set the names of the build and workflow module files (if not + # already specified by the user). These are the files that need to be + # sourced before building the component SRW App codes and running various + # workflow scripts, respectively. 
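A small illustration of the module-file naming convention applied just below (machine and compiler values are hypothetical):

```python
machine = "hera"      # illustrative machine name, lowercased as in the patch
compiler = "intel"    # illustrative compiler

wflow_mod_fn = f"wflow_{machine}"               # e.g. wflow_hera
build_mod_fn = f"build_{machine}_{compiler}"    # e.g. build_hera_intel
print(wflow_mod_fn, build_mod_fn)
```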
+ # + #----------------------------------------------------------------------- + # + global WFLOW_MOD_FN, BUILD_MOD_FN + machine=lowercase(MACHINE) + WFLOW_MOD_FN=WFLOW_MOD_FN or f"wflow_{machine}" + BUILD_MOD_FN=BUILD_MOD_FN or f"build_{machine}_{COMPILER}" + # + #----------------------------------------------------------------------- + # + # Calculate a default value for the number of processes per node for the + # RUN_FCST_TN task. Then set PPN_RUN_FCST to this default value if + # PPN_RUN_FCST is not already specified by the user. + # + #----------------------------------------------------------------------- + # + global PPN_RUN_FCST + ppn_run_fcst_default = NCORES_PER_NODE // OMP_NUM_THREADS_RUN_FCST + PPN_RUN_FCST=PPN_RUN_FCST or ppn_run_fcst_default + # + #----------------------------------------------------------------------- + # + # If we are using a workflow manager check that the ACCOUNT variable is + # not empty. + # + #----------------------------------------------------------------------- + # + if WORKFLOW_MANAGER != "none": + if not ACCOUNT: + print_err_msg_exit(f''' + The variable ACCOUNT cannot be empty if you are using a workflow manager: + ACCOUNT = \"ACCOUNT\" + WORKFLOW_MANAGER = \"WORKFLOW_MANAGER\"''') + # + #----------------------------------------------------------------------- + # + # Set the grid type (GTYPE). In general, in the FV3 code, this can take + # on one of the following values: "global", "stretch", "nest", and "re- + # gional". The first three values are for various configurations of a + # global grid, while the last one is for a regional grid. Since here we + # are only interested in a regional grid, GTYPE must be set to "region- + # al". + # + #----------------------------------------------------------------------- + # + global TILE_RGNL, GTYPE + GTYPE="regional" + TILE_RGNL="7" + + #----------------------------------------------------------------------- + # + # Set USE_MERRA_CLIMO to either "TRUE" or "FALSE" so we don't + # have to consider other valid values later on. + # + #----------------------------------------------------------------------- + global USE_MERRA_CLIMO + if CCPP_PHYS_SUITE == "FV3_GFS_v15_thompson_mynn_lam3km": + USE_MERRA_CLIMO=True + # + #----------------------------------------------------------------------- + # + # Set CPL to TRUE/FALSE based on FCST_MODEL. + # + #----------------------------------------------------------------------- + # + global CPL + if FCST_MODEL == "ufs-weather-model": + CPL=False + elif FCST_MODEL == "fv3gfs_aqm": + CPL=True + else: + print_err_msg_exit(f''' + The coupling flag CPL has not been specified for this value of FCST_MODEL: + FCST_MODEL = \"{FCST_MODEL}\"''') + # + #----------------------------------------------------------------------- + # + # Make sure RESTART_INTERVAL is set to an integer value if present + # + #----------------------------------------------------------------------- + # + if not isinstance(RESTART_INTERVAL,int): + print_err_msg_exit(f''' + RESTART_INTERVAL must be set to an integer number of hours. + RESTART_INTERVAL = \"{RESTART_INTERVAL}\"''') + # + #----------------------------------------------------------------------- + # + # Check that DATE_FIRST_CYCL and DATE_LAST_CYCL are strings consisting + # of exactly 8 digits. 
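A minimal sketch of the validation described in this comment; note that the patch's own check tests for datetime.date objects rather than 8-digit strings, so the helpers below are only an illustration:

```python
import datetime
import re

def check_yyyymmdd(s):
    """Validate an 8-digit YYYYMMDD string and return the corresponding date."""
    if not re.fullmatch(r"\d{8}", str(s)):
        raise ValueError(f"{s!r} is not an 8-digit YYYYMMDD string")
    return datetime.datetime.strptime(str(s), "%Y%m%d").date()

def check_cycl_hrs(hrs):
    """Ensure every cycle hour is an integer between 0 and 23, inclusive."""
    for i, h in enumerate(hrs):
        if not 0 <= int(h) <= 23:
            raise ValueError(f"CYCL_HRS[{i}] = {h!r} is out of range")

print(check_yyyymmdd("20220607"))   # -> 2022-06-07
check_cycl_hrs([0, 6, 12, 18])
```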
+ # + #----------------------------------------------------------------------- + # + if not isinstance(DATE_FIRST_CYCL,datetime.date): + print_err_msg_exit(f''' + DATE_FIRST_CYCL must be a string consisting of exactly 8 digits of the + form \"YYYYMMDD\", where YYYY is the 4-digit year, MM is the 2-digit + month, and DD is the 2-digit day-of-month. + DATE_FIRST_CYCL = \"{DATE_FIRST_CYCL}\"''') + + if not isinstance(DATE_LAST_CYCL,datetime.date): + print_err_msg_exit(f''' + DATE_LAST_CYCL must be a string consisting of exactly 8 digits of the + form \"YYYYMMDD\", where YYYY is the 4-digit year, MM is the 2-digit + month, and DD is the 2-digit day-of-month. + DATE_LAST_CYCL = \"{DATE_LAST_CYCL}\"''') + # + #----------------------------------------------------------------------- + # + # Check that all elements of CYCL_HRS are strings consisting of exactly + # 2 digits that are between "00" and "23", inclusive. + # + #----------------------------------------------------------------------- + # + i=0 + for CYCL in CYCL_HRS: + if CYCL < 0 or CYCL > 23: + print_err_msg_exit(f''' + Each element of CYCL_HRS must be an integer between \"00\" and \"23\", in- + clusive (including a leading \"0\", if necessary), specifying an hour-of- + day. Element #{i} of CYCL_HRS (where the index of the first element is 0) + does not have this form: + CYCL_HRS = {CYCL_HRS} + CYCL_HRS[{i}] = \"{CYCL_HRS[i]}\"''') + + i=i+1 + # + #----------------------------------------------------------------------- + # Check cycle increment for cycle frequency (cycl_freq). + # only if INCR_CYCL_FREQ < 24. + #----------------------------------------------------------------------- + # + if INCR_CYCL_FREQ < 24 and i > 1: + cycl_intv=(24//i) + if cycl_intv != INCR_CYCL_FREQ: + print_err_msg_exit(f''' + The number of CYCL_HRS does not match with that expected by INCR_CYCL_FREQ: + INCR_CYCL_FREQ = {INCR_CYCL_FREQ} + cycle interval by the number of CYCL_HRS = {cycl_intv} + CYCL_HRS = {CYCL_HRS} ''') + + for itmp in range(1,i): + itm1=itmp-1 + cycl_next_itmp=CYCL_HRS[itm1] + INCR_CYCL_FREQ + if cycl_next_itmp != CYCL_HRS[itmp]: + print_err_msg_exit(f''' + Element {itmp} of CYCL_HRS does not match with the increment of cycle + frequency INCR_CYCL_FREQ: + CYCL_HRS = {CYCL_HRS} + INCR_CYCL_FREQ = {INCR_CYCL_FREQ} + CYCL_HRS[{itmp}] = \"{CYCL_HRS[itmp]}\"''') + # + #----------------------------------------------------------------------- + # + # Call a function to generate the array ALL_CDATES containing the cycle + # dates/hours for which to run forecasts. The elements of this array + # will have the form YYYYMMDDHH. They are the starting dates/times of + # the forecasts that will be run in the experiment. Then set NUM_CYCLES + # to the number of elements in this array. + # + #----------------------------------------------------------------------- + # + + ALL_CDATES = set_cycle_dates( \ + date_start=DATE_FIRST_CYCL, + date_end=DATE_LAST_CYCL, + cycle_hrs=CYCL_HRS, + incr_cycl_freq=INCR_CYCL_FREQ) + + NUM_CYCLES=len(ALL_CDATES) + + if NUM_CYCLES > 90: + ALL_CDATES=None + print_info_msg(f''' + Too many cycles in ALL_CDATES to list, redefining in abbreviated form." + ALL_CDATES="{DATE_FIRST_CYCL}{CYCL_HRS[0]}...{DATE_LAST_CYCL}{CYCL_HRS[-1]}''') + # + #----------------------------------------------------------------------- + # + # If using a custom post configuration file, make sure that it exists. 
+ # + #----------------------------------------------------------------------- + # + if USE_CUSTOM_POST_CONFIG_FILE: + if not os.path.exists(CUSTOM_POST_CONFIG_FP): + print_err_msg_exit(f''' + The custom post configuration specified by CUSTOM_POST_CONFIG_FP does not + exist: + CUSTOM_POST_CONFIG_FP = \"{CUSTOM_POST_CONFIG_FP}\"''') + # + #----------------------------------------------------------------------- + # + # If using external CRTM fix files to allow post-processing of synthetic + # satellite products from the UPP, then make sure the fix file directory + # exists. + # + #----------------------------------------------------------------------- + # + if USE_CRTM: + if not os.path.exists(CRTM_DIR): + print_err_msg_exit(f''' + The external CRTM fix file directory specified by CRTM_DIR does not exist: + CRTM_DIR = \"{CRTM_DIR}\"''') + # + #----------------------------------------------------------------------- + # + # The forecast length (in integer hours) cannot contain more than 3 cha- + # racters. Thus, its maximum value is 999. Check whether the specified + # forecast length exceeds this maximum value. If so, print out a warn- + # ing and exit this script. + # + #----------------------------------------------------------------------- + # + fcst_len_hrs_max=999 + if FCST_LEN_HRS > fcst_len_hrs_max: + print_err_msg_exit(f''' + Forecast length is greater than maximum allowed length: + FCST_LEN_HRS = {FCST_LEN_HRS} + fcst_len_hrs_max = {fcst_len_hrs_max}''') + # + #----------------------------------------------------------------------- + # + # Check whether the forecast length (FCST_LEN_HRS) is evenly divisible + # by the BC update interval (LBC_SPEC_INTVL_HRS). If not, print out a + # warning and exit this script. If so, generate an array of forecast + # hours at which the boundary values will be updated. + # + #----------------------------------------------------------------------- + # + rem=FCST_LEN_HRS%LBC_SPEC_INTVL_HRS + + if rem != 0: + print_err_msg_exit(f''' + The forecast length (FCST_LEN_HRS) is not evenly divisible by the lateral + boundary conditions update interval (LBC_SPEC_INTVL_HRS): + FCST_LEN_HRS = {FCST_LEN_HRS} + LBC_SPEC_INTVL_HRS = {LBC_SPEC_INTVL_HRS} + rem = FCST_LEN_HRS%%LBC_SPEC_INTVL_HRS = {rem}''') + # + #----------------------------------------------------------------------- + # + # Set the array containing the forecast hours at which the lateral + # boundary conditions (LBCs) need to be updated. Note that this array + # does not include the 0-th hour (initial time). + # + #----------------------------------------------------------------------- + # + LBC_SPEC_FCST_HRS=[ i for i in range(LBC_SPEC_INTVL_HRS, \ + LBC_SPEC_INTVL_HRS + FCST_LEN_HRS, \ + LBC_SPEC_INTVL_HRS ) ] + # + #----------------------------------------------------------------------- + # + # Check to make sure that various computational parameters needed by the + # forecast model are set to non-empty values. At this point in the + # experiment generation, all of these should be set to valid (non-empty) + # values. 
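A worked example of the LBC update-hour list constructed earlier in this block, assuming a 24-hour forecast with 6-hourly boundary updates:

```python
fcst_len_hrs = 24          # illustrative values
lbc_spec_intvl_hrs = 6

if fcst_len_hrs % lbc_spec_intvl_hrs != 0:
    raise SystemExit("FCST_LEN_HRS is not evenly divisible by LBC_SPEC_INTVL_HRS")

# Hour 0 (the initial time) is excluded, matching the list built in the patch.
lbc_spec_fcst_hrs = list(range(lbc_spec_intvl_hrs,
                               lbc_spec_intvl_hrs + fcst_len_hrs,
                               lbc_spec_intvl_hrs))
print(lbc_spec_fcst_hrs)   # -> [6, 12, 18, 24]
```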
+ # + #----------------------------------------------------------------------- + # + if not DT_ATMOS: + print_err_msg_exit(f''' + The forecast model main time step (DT_ATMOS) is set to a null string: + DT_ATMOS = {DT_ATMOS} + Please set this to a valid numerical value in the user-specified experiment + configuration file (EXPT_CONFIG_FP) and rerun: + EXPT_CONFIG_FP = \"{EXPT_CONFIG_FP}\"''') + + if not LAYOUT_X: + print_err_msg_exit(f''' + The number of MPI processes to be used in the x direction (LAYOUT_X) by + the forecast job is set to a null string: + LAYOUT_X = {LAYOUT_X} + Please set this to a valid numerical value in the user-specified experiment + configuration file (EXPT_CONFIG_FP) and rerun: + EXPT_CONFIG_FP = \"{EXPT_CONFIG_FP}\"''') + + if not LAYOUT_Y: + print_err_msg_exit(f''' + The number of MPI processes to be used in the y direction (LAYOUT_Y) by + the forecast job is set to a null string: + LAYOUT_Y = {LAYOUT_Y} + Please set this to a valid numerical value in the user-specified experiment + configuration file (EXPT_CONFIG_FP) and rerun: + EXPT_CONFIG_FP = \"{EXPT_CONFIG_FP}\"''') + + if not BLOCKSIZE: + print_err_msg_exit(f''' + The cache size to use for each MPI task of the forecast (BLOCKSIZE) is + set to a null string: + BLOCKSIZE = {BLOCKSIZE} + Please set this to a valid numerical value in the user-specified experiment + configuration file (EXPT_CONFIG_FP) and rerun: + EXPT_CONFIG_FP = \"{EXPT_CONFIG_FP}\"''') + # + #----------------------------------------------------------------------- + # + # If performing sub-hourly model output and post-processing, check that + # the output interval DT_SUBHOURLY_POST_MNTS (in minutes) is specified + # correctly. + # + #----------------------------------------------------------------------- + # + global SUB_HOURLY_POST + + if SUB_HOURLY_POST: + # + # Check that DT_SUBHOURLY_POST_MNTS is between 0 and 59, inclusive. + # + if DT_SUBHOURLY_POST_MNTS < 0 or DT_SUBHOURLY_POST_MNTS > 59: + print_err_msg_exit(f''' + When performing sub-hourly post (i.e. SUB_HOURLY_POST set to \"TRUE\"), + DT_SUBHOURLY_POST_MNTS must be set to an integer between 0 and 59, + inclusive but in this case is not: + SUB_HOURLY_POST = \"{SUB_HOURLY_POST}\" + DT_SUBHOURLY_POST_MNTS = \"{DT_SUBHOURLY_POST_MNTS}\"''') + # + # Check that DT_SUBHOURLY_POST_MNTS (after converting to seconds) is + # evenly divisible by the forecast model's main time step DT_ATMOS. + # + rem=( DT_SUBHOURLY_POST_MNTS*60 % DT_ATMOS ) + if rem != 0: + print_err_msg_exit(f''' + When performing sub-hourly post (i.e. SUB_HOURLY_POST set to \"TRUE\"), + the time interval specified by DT_SUBHOURLY_POST_MNTS (after converting + to seconds) must be evenly divisible by the time step DT_ATMOS used in + the forecast model, i.e. the remainder (rem) must be zero. In this case, + it is not: + SUB_HOURLY_POST = \"{SUB_HOURLY_POST}\" + DT_SUBHOURLY_POST_MNTS = \"{DT_SUBHOURLY_POST_MNTS}\" + DT_ATMOS = \"{DT_ATMOS}\" + rem = (DT_SUBHOURLY_POST_MNTS*60) %% DT_ATMOS = {rem} + Please reset DT_SUBHOURLY_POST_MNTS and/or DT_ATMOS so that this remainder + is zero.''') + # + # If DT_SUBHOURLY_POST_MNTS is set to 0 (with SUB_HOURLY_POST set to + # True), then we're not really performing subhourly post-processing. + # In this case, reset SUB_HOURLY_POST to False and print out an + # informational message that such a change was made. + # + if DT_SUBHOURLY_POST_MNTS == 0: + print_info_msg(f''' + When performing sub-hourly post (i.e. 
SUB_HOURLY_POST set to \"TRUE\"), + DT_SUBHOURLY_POST_MNTS must be set to a value greater than 0; otherwise, + sub-hourly output is not really being performed: + SUB_HOURLY_POST = \"{SUB_HOURLY_POST}\" + DT_SUBHOURLY_POST_MNTS = \"{DT_SUBHOURLY_POST_MNTS}\" + Resetting SUB_HOURLY_POST to \"FALSE\". If you do not want this, you + must set DT_SUBHOURLY_POST_MNTS to something other than zero.''') + SUB_HOURLY_POST=False + # + #----------------------------------------------------------------------- + # + # If the base directory (EXPT_BASEDIR) in which the experiment subdirectory + # (EXPT_SUBDIR) will be located does not start with a "/", then it is + # either set to a null string or contains a relative directory. In both + # cases, prepend to it the absolute path of the default directory under + # which the experiment directories are placed. If EXPT_BASEDIR was set + # to a null string, it will get reset to this default experiment directory, + # and if it was set to a relative directory, it will get reset to an + # absolute directory that points to the relative directory under the + # default experiment directory. Then create EXPT_BASEDIR if it doesn't + # already exist. + # + #----------------------------------------------------------------------- + # + global EXPT_BASEDIR + if (not EXPT_BASEDIR) or (EXPT_BASEDIR[0] != "/"): + if not EXPT_BASEDIR: + EXPT_BASEDIR = "" + EXPT_BASEDIR = os.path.join(SR_WX_APP_TOP_DIR,"..","expt_dirs",EXPT_BASEDIR) + try: + EXPT_BASEDIR = os.path.realpath(EXPT_BASEDIR) + except: + pass + EXPT_BASEDIR = os.path.abspath(EXPT_BASEDIR) + + mkdir_vrfy(f' -p "{EXPT_BASEDIR}"') + # + #----------------------------------------------------------------------- + # + # If the experiment subdirectory name (EXPT_SUBDIR) is set to an empty + # string, print out an error message and exit. + # + #----------------------------------------------------------------------- + # + if not EXPT_SUBDIR: + print_err_msg_exit(f''' + The name of the experiment subdirectory (EXPT_SUBDIR) cannot be empty: + EXPT_SUBDIR = \"{EXPT_SUBDIR}\"''') + # + #----------------------------------------------------------------------- + # + # Set the full path to the experiment directory. Then check if it already + # exists and if so, deal with it as specified by PREEXISTING_DIR_METHOD. + # + #----------------------------------------------------------------------- + # + global EXPTDIR + EXPTDIR = os.path.join(EXPT_BASEDIR, EXPT_SUBDIR) + check_for_preexist_dir_file(EXPTDIR,PREEXISTING_DIR_METHOD) + # + #----------------------------------------------------------------------- + # + # Set other directories, some of which may depend on EXPTDIR (depending + # on whether we're running in NCO or community mode, i.e. whether RUN_ENVIR + # is set to "nco" or "community"). Definitions: + # + # LOGDIR: + # Directory in which the log files from the workflow tasks will be placed. + # + # FIXam: + # This is the directory that will contain the fixed files or symlinks to + # the fixed files containing various fields on global grids (which are + # usually much coarser than the native FV3-LAM grid). + # + # FIXclim: + # This is the directory that will contain the MERRA2 aerosol climatology + # data file and lookup tables for optics properties + # + # FIXLAM: + # This is the directory that will contain the fixed files or symlinks to + # the fixed files containing the grid, orography, and surface climatology + # on the native FV3-LAM grid. 
+ # + # CYCLE_BASEDIR: + # The base directory in which the directories for the various cycles will + # be placed. + # + # COMROOT: + # In NCO mode, this is the full path to the "com" directory under which + # output from the RUN_POST_TN task will be placed. Note that this output + # is not placed directly under COMROOT but several directories further + # down. More specifically, for a cycle starting at yyyymmddhh, it is at + # + # $COMROOT/$NET/$envir/$RUN.$yyyymmdd/$hh + # + # Below, we set COMROOT in terms of PTMP as COMROOT="$PTMP/com". COMOROOT + # is not used by the workflow in community mode. + # + # COMOUT_BASEDIR: + # In NCO mode, this is the base directory directly under which the output + # from the RUN_POST_TN task will be placed, i.e. it is the cycle-independent + # portion of the RUN_POST_TN task's output directory. It is given by + # + # $COMROOT/$NET/$model_ver + # + # COMOUT_BASEDIR is not used by the workflow in community mode. + # + # POST_OUTPUT_DOMAIN_NAME: + # The PREDEF_GRID_NAME is set by default. + # + #----------------------------------------------------------------------- + # + global LOGDIR, FIXam, FIXclim, FIXLAM, CYCLE_BASEDIR, \ + COMROOT, COMOUT_BASEDIR, POST_OUTPUT_DOMAIN_NAME + + LOGDIR = os.path.join(EXPTDIR, "log") + + FIXam = os.path.join(EXPTDIR, "fix_am") + FIXclim = os.path.join(EXPTDIR, "fix_clim") + FIXLAM = os.path.join(EXPTDIR, "fix_lam") + + if RUN_ENVIR == "nco": + + CYCLE_BASEDIR = os.path.join(STMP, "tmpnwprd", RUN) + check_for_preexist_dir_file(CYCLE_BASEDIR,PREEXISTING_DIR_METHOD) + COMROOT = os.path.join(PTMP, "com") + COMOUT_BASEDIR = os.path.join(COMROOT, NET, model_ver) + check_for_preexist_dir_file(COMOUT_BASEDIR,PREEXISTING_DIR_METHOD) + + else: + + CYCLE_BASEDIR=EXPTDIR + COMROOT="" + COMOUT_BASEDIR="" + + if POST_OUTPUT_DOMAIN_NAME is None: + if PREDEF_GRID_NAME is None: + print_err_msg_exit(f''' + The domain name used in naming the run_post output files + (POST_OUTPUT_DOMAIN_NAME) has not been set: + POST_OUTPUT_DOMAIN_NAME = \"{POST_OUTPUT_DOMAIN_NAME}\" + If this experiment is not using a predefined grid (i.e. if + PREDEF_GRID_NAME is set to a null string), POST_OUTPUT_DOMAIN_NAME + must be set in the configuration file (\"{EXPT_CONFIG_FN}\"). ''') + + POST_OUTPUT_DOMAIN_NAME = PREDEF_GRID_NAME + + POST_OUTPUT_DOMAIN_NAME = lowercase(POST_OUTPUT_DOMAIN_NAME) + # + #----------------------------------------------------------------------- + # + # The FV3 forecast model needs the following input files in the run di- + # rectory to start a forecast: + # + # (1) The data table file + # (2) The diagnostics table file + # (3) The field table file + # (4) The FV3 namelist file + # (5) The model configuration file + # (6) The NEMS configuration file + # + # If using CCPP, it also needs: + # + # (7) The CCPP physics suite definition file + # + # The workflow contains templates for the first six of these files. + # Template files are versions of these files that contain placeholder + # (i.e. dummy) values for various parameters. The experiment/workflow + # generation scripts copy these templates to appropriate locations in + # the experiment directory (either the top of the experiment directory + # or one of the cycle subdirectories) and replace the placeholders in + # these copies by actual values specified in the experiment/workflow + # configuration file (or derived from such values). The scripts then + # use the resulting "actual" files as inputs to the forecast model. 
+ # + # Note that the CCPP physics suite defintion file does not have a cor- + # responding template file because it does not contain any values that + # need to be replaced according to the experiment/workflow configura- + # tion. If using CCPP, this file simply needs to be copied over from + # its location in the forecast model's directory structure to the ex- + # periment directory. + # + # Below, we first set the names of the templates for the first six files + # listed above. We then set the full paths to these template files. + # Note that some of these file names depend on the physics suite while + # others do not. + # + #----------------------------------------------------------------------- + # + global DATA_TABLE_TMPL_FN, DIAG_TABLE_TMPL_FN, FIELD_TABLE_TMPL_FN, \ + MODEL_CONFIG_TMPL_FN, NEMS_CONFIG_TMPL_FN + global DATA_TABLE_TMPL_FP, DIAG_TABLE_TMPL_FP, FIELD_TABLE_TMPL_FP, \ + MODEL_CONFIG_TMPL_FP, NEMS_CONFIG_TMPL_FP + global FV3_NML_BASE_SUITE_FP, FV3_NML_YAML_CONFIG_FP,FV3_NML_BASE_ENS_FP + + dot_ccpp_phys_suite_or_null=f".{CCPP_PHYS_SUITE}" + + # Names of input files that the forecast model (ufs-weather-model) expects + # to read in. These should only be changed if the input file names in the + # forecast model code are changed. + #---------------------------------- + DATA_TABLE_FN = "data_table" + DIAG_TABLE_FN = "diag_table" + FIELD_TABLE_FN = "field_table" + MODEL_CONFIG_FN = "model_configure" + NEMS_CONFIG_FN = "nems.configure" + #---------------------------------- + + if DATA_TABLE_TMPL_FN is None: + DATA_TABLE_TMPL_FN = DATA_TABLE_FN + if DIAG_TABLE_TMPL_FN is None: + DIAG_TABLE_TMPL_FN = f"{DIAG_TABLE_FN}{dot_ccpp_phys_suite_or_null}" + if FIELD_TABLE_TMPL_FN is None: + FIELD_TABLE_TMPL_FN = f"{FIELD_TABLE_FN}{dot_ccpp_phys_suite_or_null}" + if MODEL_CONFIG_TMPL_FN is None: + MODEL_CONFIG_TMPL_FN = MODEL_CONFIG_FN + if NEMS_CONFIG_TMPL_FN is None: + NEMS_CONFIG_TMPL_FN = NEMS_CONFIG_FN + + DATA_TABLE_TMPL_FP = os.path.join(TEMPLATE_DIR,DATA_TABLE_TMPL_FN) + DIAG_TABLE_TMPL_FP = os.path.join(TEMPLATE_DIR,DIAG_TABLE_TMPL_FN) + FIELD_TABLE_TMPL_FP = os.path.join(TEMPLATE_DIR,FIELD_TABLE_TMPL_FN) + FV3_NML_BASE_SUITE_FP = os.path.join(TEMPLATE_DIR,FV3_NML_BASE_SUITE_FN) + FV3_NML_YAML_CONFIG_FP = os.path.join(TEMPLATE_DIR,FV3_NML_YAML_CONFIG_FN) + FV3_NML_BASE_ENS_FP = os.path.join(EXPTDIR,FV3_NML_BASE_ENS_FN) + MODEL_CONFIG_TMPL_FP = os.path.join(TEMPLATE_DIR,MODEL_CONFIG_TMPL_FN) + NEMS_CONFIG_TMPL_FP = os.path.join(TEMPLATE_DIR,NEMS_CONFIG_TMPL_FN) + # + #----------------------------------------------------------------------- + # + # Set: + # + # 1) the variable CCPP_PHYS_SUITE_FN to the name of the CCPP physics + # suite definition file. + # 2) the variable CCPP_PHYS_SUITE_IN_CCPP_FP to the full path of this + # file in the forecast model's directory structure. + # 3) the variable CCPP_PHYS_SUITE_FP to the full path of this file in + # the experiment directory. + # + # Note that the experiment/workflow generation scripts will copy this + # file from CCPP_PHYS_SUITE_IN_CCPP_FP to CCPP_PHYS_SUITE_FP. Then, for + # each cycle, the forecast launch script will create a link in the cycle + # run directory to the copy of this file at CCPP_PHYS_SUITE_FP. 
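As a quick illustration of how the template and suite file names above resolve when the user does not override them, here is a minimal sketch; the suite name is only an example and the variables are set locally for demonstration:

```
# Sketch of the default file-name resolution above; the suite name is an example.
CCPP_PHYS_SUITE = "FV3_GFS_v16"
dot_ccpp_phys_suite_or_null = f".{CCPP_PHYS_SUITE}"

DIAG_TABLE_TMPL_FN = None     # i.e. not overridden in the user configuration
MODEL_CONFIG_TMPL_FN = None

# Suite-dependent template: gets the ".<suite>" suffix by default.
if DIAG_TABLE_TMPL_FN is None:
    DIAG_TABLE_TMPL_FN = f"diag_table{dot_ccpp_phys_suite_or_null}"
# Suite-independent template: keeps the plain model input file name.
if MODEL_CONFIG_TMPL_FN is None:
    MODEL_CONFIG_TMPL_FN = "model_configure"

CCPP_PHYS_SUITE_FN = f"suite_{CCPP_PHYS_SUITE}.xml"

print(DIAG_TABLE_TMPL_FN)    # diag_table.FV3_GFS_v16
print(MODEL_CONFIG_TMPL_FN)  # model_configure
print(CCPP_PHYS_SUITE_FN)    # suite_FV3_GFS_v16.xml
```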
+ # + #----------------------------------------------------------------------- + # + global CCPP_PHYS_SUITE_FN, CCPP_PHYS_SUITE_IN_CCPP_FP, CCPP_PHYS_SUITE_FP + CCPP_PHYS_SUITE_FN=f"suite_{CCPP_PHYS_SUITE}.xml" + CCPP_PHYS_SUITE_IN_CCPP_FP=os.path.join(UFS_WTHR_MDL_DIR, "FV3","ccpp","suites",CCPP_PHYS_SUITE_FN) + CCPP_PHYS_SUITE_FP=os.path.join(EXPTDIR, CCPP_PHYS_SUITE_FN) + if not os.path.exists(CCPP_PHYS_SUITE_IN_CCPP_FP): + print_err_msg_exit(f''' + The CCPP suite definition file (CCPP_PHYS_SUITE_IN_CCPP_FP) does not exist + in the local clone of the ufs-weather-model: + CCPP_PHYS_SUITE_IN_CCPP_FP = \"{CCPP_PHYS_SUITE_IN_CCPP_FP}\"''') + # + #----------------------------------------------------------------------- + # + # Set: + # + # 1) the variable FIELD_DICT_FN to the name of the field dictionary + # file. + # 2) the variable FIELD_DICT_IN_UWM_FP to the full path of this + # file in the forecast model's directory structure. + # 3) the variable FIELD_DICT_FP to the full path of this file in + # the experiment directory. + # + #----------------------------------------------------------------------- + # + global FIELD_DICT_FN, FIELD_DICT_IN_UWM_FP, FIELD_DICT_FP + FIELD_DICT_FN = "fd_nems.yaml" + FIELD_DICT_IN_UWM_FP = os.path.join(UFS_WTHR_MDL_DIR, "tests", "parm", FIELD_DICT_FN) + FIELD_DICT_FP = os.path.join(EXPTDIR, FIELD_DICT_FN) + if not os.path.exists(FIELD_DICT_IN_UWM_FP): + print_err_msg_exit(f''' + The field dictionary file (FIELD_DICT_IN_UWM_FP) does not exist + in the local clone of the ufs-weather-model: + FIELD_DICT_IN_UWM_FP = \"{FIELD_DICT_IN_UWM_FP}\"''') + # + #----------------------------------------------------------------------- + # + # Call the function that sets the ozone parameterization being used and + # modifies associated parameters accordingly. + # + #----------------------------------------------------------------------- + # + + # export env vars before calling another module + export_vars() + + OZONE_PARAM = set_ozone_param( \ + ccpp_phys_suite_fp=CCPP_PHYS_SUITE_IN_CCPP_FP) + + IMPORTS = ["CYCLEDIR_LINKS_TO_FIXam_FILES_MAPPING", "FIXgsm_FILES_TO_COPY_TO_FIXam"] + import_vars(env_vars=IMPORTS) + # + #----------------------------------------------------------------------- + # + # Set the full paths to those forecast model input files that are cycle- + # independent, i.e. they don't include information about the cycle's + # starting day/time. These are: + # + # * The data table file [(1) in the list above)] + # * The field table file [(3) in the list above)] + # * The FV3 namelist file [(4) in the list above)] + # * The NEMS configuration file [(6) in the list above)] + # + # Since they are cycle-independent, the experiment/workflow generation + # scripts will place them in the main experiment directory (EXPTDIR). + # The script that runs each cycle will then create links to these files + # in the run directories of the individual cycles (which are subdirecto- + # ries under EXPTDIR). + # + # The remaining two input files to the forecast model, i.e. + # + # * The diagnostics table file [(2) in the list above)] + # * The model configuration file [(5) in the list above)] + # + # contain parameters that depend on the cycle start date. Thus, custom + # versions of these two files must be generated for each cycle and then + # placed directly in the run directories of the cycles (not EXPTDIR). + # For this reason, the full paths to their locations vary by cycle and + # cannot be set here (i.e. 
they can only be set in the loop over the + # cycles in the rocoto workflow XML file). + # + #----------------------------------------------------------------------- + # + global DATA_TABLE_FP, FIELD_TABLE_FP, FV3_NML_FN, FV3_NML_FP, NEMS_CONFIG_FP + DATA_TABLE_FP = os.path.join(EXPTDIR, DATA_TABLE_FN) + FIELD_TABLE_FP = os.path.join(EXPTDIR, FIELD_TABLE_FN) + FV3_NML_FN = os.path.splitext(FV3_NML_BASE_SUITE_FN)[0] + FV3_NML_FP = os.path.join(EXPTDIR, FV3_NML_FN) + NEMS_CONFIG_FP = os.path.join(EXPTDIR, NEMS_CONFIG_FN) + # + #----------------------------------------------------------------------- + # + # If USE_USER_STAGED_EXTRN_FILES is set to TRUE, make sure that the user- + # specified directories under which the external model files should be + # located actually exist. + # + #----------------------------------------------------------------------- + # + if USE_USER_STAGED_EXTRN_FILES: + # Check for the base directory up to the first templated field. + idx = EXTRN_MDL_SOURCE_BASEDIR_ICS.find("$") + if idx == -1: + idx=len(EXTRN_MDL_SOURCE_BASEDIR_ICS) + + if not os.path.exists(EXTRN_MDL_SOURCE_BASEDIR_ICS[:idx]): + print_err_msg_exit(f''' + The directory (EXTRN_MDL_SOURCE_BASEDIR_ICS) in which the user-staged + external model files for generating ICs should be located does not exist: + EXTRN_MDL_SOURCE_BASEDIR_ICS = \"{EXTRN_MDL_SOURCE_BASEDIR_ICS}\"''') + + idx = EXTRN_MDL_SOURCE_BASEDIR_LBCS.find("$") + if idx == -1: + idx=len(EXTRN_MDL_SOURCE_BASEDIR_LBCS) + + if not os.path.exists(EXTRN_MDL_SOURCE_BASEDIR_LBCS[:idx]): + print_err_msg_exit(f''' + The directory (EXTRN_MDL_SOURCE_BASEDIR_LBCS) in which the user-staged + external model files for generating LBCs should be located does not exist: + EXTRN_MDL_SOURCE_BASEDIR_LBCS = \"{EXTRN_MDL_SOURCE_BASEDIR_LBCS}\"''') + # + #----------------------------------------------------------------------- + # + # Make sure that DO_ENSEMBLE is set to a valid value. Then set the names + # of the ensemble members. These will be used to set the ensemble member + # directories. Also, set the full path to the FV3 namelist file corresponding + # to each ensemble member. + # + #----------------------------------------------------------------------- + # + global NDIGITS_ENSMEM_NAMES,ENSMEM_NAMES,FV3_NML_ENSMEM_FPS,NUM_ENS_MEMBERS + NDIGITS_ENSMEM_NAMES=0 + ENSMEM_NAMES=[] + FV3_NML_ENSMEM_FPS=[] + if DO_ENSEMBLE: + NDIGITS_ENSMEM_NAMES=len(str(NUM_ENS_MEMBERS)) + fmt=f"0{NDIGITS_ENSMEM_NAMES}d" + for i in range(NUM_ENS_MEMBERS): + ENSMEM_NAMES.append(f"mem{i+1:{fmt}}") + FV3_NML_ENSMEM_FPS.append(os.path.join(EXPTDIR, f"{FV3_NML_FN}_{ENSMEM_NAMES[i]}")) + # + #----------------------------------------------------------------------- + # + # Set the full path to the forecast model executable. + # + #----------------------------------------------------------------------- + # + global FV3_EXEC_FP + FV3_EXEC_FP = os.path.join(EXECDIR, FV3_EXEC_FN) + # + #----------------------------------------------------------------------- + # + # Set the full path to the script that can be used to (re)launch the + # workflow. Also, if USE_CRON_TO_RELAUNCH is set to TRUE, set the line + # to add to the cron table to automatically relaunch the workflow every + # CRON_RELAUNCH_INTVL_MNTS minutes. Otherwise, set the variable con- + # taining this line to a null string.
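As a standalone check of the member-naming scheme above, assuming purely for illustration a 10-member ensemble, the names are zero-padded to the width of NUM_ENS_MEMBERS:

```
# Sketch of the ensemble member naming above (member count is hypothetical).
NUM_ENS_MEMBERS = 10
NDIGITS_ENSMEM_NAMES = len(str(NUM_ENS_MEMBERS))   # -> 2
fmt = f"0{NDIGITS_ENSMEM_NAMES}d"

ENSMEM_NAMES = [f"mem{i+1:{fmt}}" for i in range(NUM_ENS_MEMBERS)]
print(ENSMEM_NAMES[0], ENSMEM_NAMES[-1])   # mem01 mem10
```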
+ # + #----------------------------------------------------------------------- + # + global WFLOW_LAUNCH_SCRIPT_FP, WFLOW_LAUNCH_LOG_FP, CRONTAB_LINE + WFLOW_LAUNCH_SCRIPT_FP = os.path.join(USHDIR, WFLOW_LAUNCH_SCRIPT_FN) + WFLOW_LAUNCH_LOG_FP = os.path.join(EXPTDIR, WFLOW_LAUNCH_LOG_FN) + if USE_CRON_TO_RELAUNCH: + CRONTAB_LINE=f'''*/{CRON_RELAUNCH_INTVL_MNTS} * * * * cd {EXPTDIR} && ./{WFLOW_LAUNCH_SCRIPT_FN} called_from_cron="TRUE" >> ./{WFLOW_LAUNCH_LOG_FN} 2>&1''' + else: + CRONTAB_LINE="" + # + #----------------------------------------------------------------------- + # + # Set the full path to the script that, for a given task, loads the + # necessary module files and runs the tasks. + # + #----------------------------------------------------------------------- + # + global LOAD_MODULES_RUN_TASK_FP + LOAD_MODULES_RUN_TASK_FP = os.path.join(USHDIR, "load_modules_run_task.sh") + # + #----------------------------------------------------------------------- + # + # Define the various work subdirectories under the main work directory. + # Each of these corresponds to a different step/substep/task in the pre- + # processing, as follows: + # + # GRID_DIR: + # Directory in which the grid files will be placed (if RUN_TASK_MAKE_GRID + # is set to True) or searched for (if RUN_TASK_MAKE_GRID is set to + # False). + # + # OROG_DIR: + # Directory in which the orography files will be placed (if RUN_TASK_MAKE_OROG + # is set to True) or searched for (if RUN_TASK_MAKE_OROG is set to + # False). + # + # SFC_CLIMO_DIR: + # Directory in which the surface climatology files will be placed (if + # RUN_TASK_MAKE_SFC_CLIMO is set to True) or searched for (if + # RUN_TASK_MAKE_SFC_CLIMO is set to False). + # + #---------------------------------------------------------------------- + # + global RUN_TASK_MAKE_GRID, RUN_TASK_MAKE_OROG, RUN_TASK_MAKE_SFC_CLIMO + global GRID_DIR, OROG_DIR, SFC_CLIMO_DIR + global RUN_TASK_VX_GRIDSTAT, RUN_TASK_VX_POINTSTAT, RUN_TASK_VX_ENSGRID + + # + #----------------------------------------------------------------------- + # + # Make sure that DO_ENSEMBLE is set to TRUE when running ensemble vx. + # + #----------------------------------------------------------------------- + # + if (not DO_ENSEMBLE) and (RUN_TASK_VX_ENSGRID or RUN_TASK_VX_ENSPOINT): + print_err_msg_exit(f''' + Ensemble verification can not be run unless running in ensemble mode: + DO_ENSEMBLE = \"{DO_ENSEMBLE}\" + RUN_TASK_VX_ENSGRID = \"{RUN_TASK_VX_ENSGRID}\" + RUN_TASK_VX_ENSPOINT = \"{RUN_TASK_VX_ENSPOINT}\"''') + + if RUN_ENVIR == "nco": + + nco_fix_dir = os.path.join(DOMAIN_PREGEN_BASEDIR, PREDEF_GRID_NAME) + if not os.path.exists(nco_fix_dir): + print_err_msg_exit(f''' + The directory (nco_fix_dir) that should contain the pregenerated grid, + orography, and surface climatology files does not exist: + nco_fix_dir = \"{nco_fix_dir}\"''') + + if RUN_TASK_MAKE_GRID or \ + ( not RUN_TASK_MAKE_GRID and \ + GRID_DIR != nco_fix_dir ): + + msg=f''' + When RUN_ENVIR is set to \"nco\", the workflow assumes that pregenerated + grid files already exist in the directory + + {DOMAIN_PREGEN_BASEDIR}/{PREDEF_GRID_NAME} + + where + + DOMAIN_PREGEN_BASEDIR = \"{DOMAIN_PREGEN_BASEDIR}\" + PREDEF_GRID_NAME = \"{PREDEF_GRID_NAME}\" + + Thus, the MAKE_GRID_TN task must not be run (i.e. RUN_TASK_MAKE_GRID must + be set to \"FALSE\"), and the directory in which to look for the grid + files (i.e. GRID_DIR) must be set to the one above. 
Current values for + these quantities are: + + RUN_TASK_MAKE_GRID = \"{RUN_TASK_MAKE_GRID}\" + GRID_DIR = \"{GRID_DIR}\" + + Resetting RUN_TASK_MAKE_GRID to \"FALSE\" and GRID_DIR to the one above. + Reset values are: + ''' + + RUN_TASK_MAKE_GRID=False + GRID_DIR=nco_fix_dir + + msg+=f''' + RUN_TASK_MAKE_GRID = \"{RUN_TASK_MAKE_GRID}\" + GRID_DIR = \"{GRID_DIR}\" + ''' + + print_info_msg(msg) + + + if RUN_TASK_MAKE_OROG or \ + ( not RUN_TASK_MAKE_OROG and \ + OROG_DIR != nco_fix_dir ): + + msg=f''' + When RUN_ENVIR is set to \"nco\", the workflow assumes that pregenerated + orography files already exist in the directory + {DOMAIN_PREGEN_BASEDIR}/{PREDEF_GRID_NAME} + + where + + DOMAIN_PREGEN_BASEDIR = \"{DOMAIN_PREGEN_BASEDIR}\" + PREDEF_GRID_NAME = \"{PREDEF_GRID_NAME}\" + + Thus, the MAKE_OROG_TN task must not be run (i.e. RUN_TASK_MAKE_OROG must + be set to \"FALSE\"), and the directory in which to look for the orography + files (i.e. OROG_DIR) must be set to the one above. Current values for + these quantities are: + + RUN_TASK_MAKE_OROG = \"{RUN_TASK_MAKE_OROG}\" + OROG_DIR = \"{OROG_DIR}\" + + Resetting RUN_TASK_MAKE_OROG to \"FALSE\" and OROG_DIR to the one above. + Reset values are: + ''' + + RUN_TASK_MAKE_OROG=False + OROG_DIR=nco_fix_dir + + msg+=f''' + RUN_TASK_MAKE_OROG = \"{RUN_TASK_MAKE_OROG}\" + OROG_DIR = \"{OROG_DIR}\" + ''' + + print_info_msg(msg) + + + if RUN_TASK_MAKE_SFC_CLIMO or \ + ( not RUN_TASK_MAKE_SFC_CLIMO and \ + SFC_CLIMO_DIR != nco_fix_dir ): + + msg=f''' + When RUN_ENVIR is set to \"nco\", the workflow assumes that pregenerated + surface climatology files already exist in the directory + + {DOMAIN_PREGEN_BASEDIR}/{PREDEF_GRID_NAME} + + where + + DOMAIN_PREGEN_BASEDIR = \"{DOMAIN_PREGEN_BASEDIR}\" + PREDEF_GRID_NAME = \"{PREDEF_GRID_NAME}\" + + Thus, the MAKE_SFC_CLIMO_TN task must not be run (i.e. RUN_TASK_MAKE_SFC_CLIMO + must be set to \"FALSE\"), and the directory in which to look for the + surface climatology files (i.e. SFC_CLIMO_DIR) must be set to the one + above. Current values for these quantities are: + + RUN_TASK_MAKE_SFC_CLIMO = \"{RUN_TASK_MAKE_SFC_CLIMO}\" + SFC_CLIMO_DIR = \"{SFC_CLIMO_DIR}\" + + Resetting RUN_TASK_MAKE_SFC_CLIMO to \"FALSE\" and SFC_CLIMO_DIR to the + one above. Reset values are: + ''' + + RUN_TASK_MAKE_SFC_CLIMO=False + SFC_CLIMO_DIR=nco_fix_dir + + msg+=f''' + RUN_TASK_MAKE_SFC_CLIMO = \"{RUN_TASK_MAKE_SFC_CLIMO}\" + SFC_CLIMO_DIR = \"{SFC_CLIMO_DIR}\" + ''' + + print_info_msg(msg) + + if RUN_TASK_VX_GRIDSTAT: + + msg=f''' + When RUN_ENVIR is set to \"nco\", it is assumed that the verification + will not be run. + RUN_TASK_VX_GRIDSTAT = \"{RUN_TASK_VX_GRIDSTAT}\" + Resetting RUN_TASK_VX_GRIDSTAT to \"FALSE\" + Reset value is:''' + + RUN_TASK_VX_GRIDSTAT=False + + msg+=f''' + RUN_TASK_VX_GRIDSTAT = \"{RUN_TASK_VX_GRIDSTAT}\" + ''' + + print_info_msg(msg) + + if RUN_TASK_VX_POINTSTAT: + + msg=f''' + When RUN_ENVIR is set to \"nco\", it is assumed that the verification + will not be run. + RUN_TASK_VX_POINTSTAT = \"{RUN_TASK_VX_POINTSTAT}\" + Resetting RUN_TASK_VX_POINTSTAT to \"FALSE\" + Reset value is:''' + + RUN_TASK_VX_POINTSTAT=False + + msg=f''' + RUN_TASK_VX_POINTSTAT = \"{RUN_TASK_VX_POINTSTAT}\" + ''' + + print_info_msg(msg) + + if RUN_TASK_VX_ENSGRID: + + msg=f''' + When RUN_ENVIR is set to \"nco\", it is assumed that the verification + will not be run. 
+ RUN_TASK_VX_ENSGRID = \"{RUN_TASK_VX_ENSGRID}\" + Resetting RUN_TASK_VX_ENSGRID to \"FALSE\" + Reset value is:''' + + RUN_TASK_VX_ENSGRID=False + + msg+=f''' + RUN_TASK_VX_ENSGRID = \"{RUN_TASK_VX_ENSGRID}\" + ''' + + print_info_msg(msg) + + # + #----------------------------------------------------------------------- + # + # Now consider community mode. + # + #----------------------------------------------------------------------- + # + else: + # + # If RUN_TASK_MAKE_GRID is set to False, the workflow will look for + # the pregenerated grid files in GRID_DIR. In this case, make sure that + # GRID_DIR exists. Otherwise, set it to a predefined location under the + # experiment directory (EXPTDIR). + # + if not RUN_TASK_MAKE_GRID: + if not os.path.exists(GRID_DIR): + print_err_msg_exit(f''' + The directory (GRID_DIR) that should contain the pregenerated grid files + does not exist: + GRID_DIR = \"{GRID_DIR}\"''') + else: + GRID_DIR=os.path.join(EXPTDIR,"grid") + # + # If RUN_TASK_MAKE_OROG is set to False, the workflow will look for + # the pregenerated orography files in OROG_DIR. In this case, make sure + # that OROG_DIR exists. Otherwise, set it to a predefined location under + # the experiment directory (EXPTDIR). + # + if not RUN_TASK_MAKE_OROG: + if not os.path.exists(OROG_DIR): + print_err_msg_exit(f''' + The directory (OROG_DIR) that should contain the pregenerated orography + files does not exist: + OROG_DIR = \"{OROG_DIR}\"''') + else: + OROG_DIR=os.path.join(EXPTDIR,"orog") + # + # If RUN_TASK_MAKE_SFC_CLIMO is set to False, the workflow will look + # for the pregenerated surface climatology files in SFC_CLIMO_DIR. In + # this case, make sure that SFC_CLIMO_DIR exists. Otherwise, set it to + # a predefined location under the experiment directory (EXPTDIR). + # + if not RUN_TASK_MAKE_SFC_CLIMO: + if not os.path.exists(SFC_CLIMO_DIR): + print_err_msg_exit(f''' + The directory (SFC_CLIMO_DIR) that should contain the pregenerated surface + climatology files does not exist: + SFC_CLIMO_DIR = \"{SFC_CLIMO_DIR}\"''') + else: + SFC_CLIMO_DIR=os.path.join(EXPTDIR,"sfc_climo") + + #----------------------------------------------------------------------- + # + # Set cycle-independent parameters associated with the external models + # from which we will obtain the ICs and LBCs. + # + #----------------------------------------------------------------------- + # + + # export env vars before calling another module + export_vars() + + set_extrn_mdl_params() + + IMPORTS = ["EXTRN_MDL_SYSBASEDIR_ICS", "EXTRN_MDL_SYSBASEDIR_LBCS", "EXTRN_MDL_LBCS_OFFSET_HRS"] + import_vars(env_vars=IMPORTS) + # + #----------------------------------------------------------------------- + # + # Any regional model must be supplied lateral boundary conditions (in + # addition to initial conditions) to be able to perform a forecast. In + # the FV3-LAM model, these boundary conditions (BCs) are supplied using a + # "halo" of grid cells around the regional domain that extend beyond the + # boundary of the domain. The model is formulated such that along with + # files containing these BCs, it needs as input the following files (in + # NetCDF format): + # + # 1) A grid file that includes a halo of 3 cells beyond the boundary of + # the domain. + # 2) A grid file that includes a halo of 4 cells beyond the boundary of + # the domain. + # 3) A (filtered) orography file without a halo, i.e. a halo of width + # 0 cells. + # 4) A (filtered) orography file that includes a halo of 4 cells beyond + # the boundary of the domain. 
+ # + # Note that the regional grid is referred to as "tile 7" in the code. + # We will let: + # + # * NH0 denote the width (in units of number of cells on tile 7) of + # the 0-cell-wide halo, i.e. NH0 = 0; + # + # * NH3 denote the width (in units of number of cells on tile 7) of + # the 3-cell-wide halo, i.e. NH3 = 3; and + # + # * NH4 denote the width (in units of number of cells on tile 7) of + # the 4-cell-wide halo, i.e. NH4 = 4. + # + # We define these variables next. + # + #----------------------------------------------------------------------- + # + global NH0,NH3,NH4 + NH0=0 + NH3=3 + NH4=4 + + # export env vars + EXPORTS = ["NH0","NH3","NH4"] + export_vars(env_vars = EXPORTS) + # + #----------------------------------------------------------------------- + # + # Set parameters according to the type of horizontal grid generation + # method specified. First consider GFDL's global-parent-grid based + # method. + # + #----------------------------------------------------------------------- + # + global LON_CTR,LAT_CTR,NX,NY,NHW,STRETCH_FAC,\ + ISTART_OF_RGNL_DOM_WITH_WIDE_HALO_ON_T6SG,\ + IEND_OF_RGNL_DOM_WITH_WIDE_HALO_ON_T6SG,\ + JSTART_OF_RGNL_DOM_WITH_WIDE_HALO_ON_T6SG,\ + JEND_OF_RGNL_DOM_WITH_WIDE_HALO_ON_T6SG + global PAZI,DEL_ANGLE_X_SG,DEL_ANGLE_Y_SG,\ + NEG_NX_OF_DOM_WITH_WIDE_HALO,\ + NEG_NY_OF_DOM_WITH_WIDE_HALO + + if GRID_GEN_METHOD == "GFDLgrid": + + (\ + LON_CTR,LAT_CTR,NX,NY,NHW,STRETCH_FAC, + ISTART_OF_RGNL_DOM_WITH_WIDE_HALO_ON_T6SG, + IEND_OF_RGNL_DOM_WITH_WIDE_HALO_ON_T6SG, + JSTART_OF_RGNL_DOM_WITH_WIDE_HALO_ON_T6SG, + JEND_OF_RGNL_DOM_WITH_WIDE_HALO_ON_T6SG \ + ) = \ + set_gridparams_GFDLgrid( \ + lon_of_t6_ctr=GFDLgrid_LON_T6_CTR, \ + lat_of_t6_ctr=GFDLgrid_LAT_T6_CTR, \ + res_of_t6g=GFDLgrid_RES, \ + stretch_factor=GFDLgrid_STRETCH_FAC, \ + refine_ratio_t6g_to_t7g=GFDLgrid_REFINE_RATIO, \ + istart_of_t7_on_t6g=GFDLgrid_ISTART_OF_RGNL_DOM_ON_T6G, \ + iend_of_t7_on_t6g=GFDLgrid_IEND_OF_RGNL_DOM_ON_T6G, \ + jstart_of_t7_on_t6g=GFDLgrid_JSTART_OF_RGNL_DOM_ON_T6G, \ + jend_of_t7_on_t6g=GFDLgrid_JEND_OF_RGNL_DOM_ON_T6G) + # + #----------------------------------------------------------------------- + # + # Now consider Jim Purser's map projection/grid generation method. + # + #----------------------------------------------------------------------- + # + elif GRID_GEN_METHOD == "ESGgrid": + + (\ + LON_CTR,LAT_CTR,NX,NY,PAZI, + NHW,STRETCH_FAC,DEL_ANGLE_X_SG,DEL_ANGLE_Y_SG, + NEG_NX_OF_DOM_WITH_WIDE_HALO, + NEG_NY_OF_DOM_WITH_WIDE_HALO \ + ) = \ + set_gridparams_ESGgrid( \ + lon_ctr=ESGgrid_LON_CTR, \ + lat_ctr=ESGgrid_LAT_CTR, \ + nx=ESGgrid_NX, \ + ny=ESGgrid_NY, \ + pazi=ESGgrid_PAZI, \ + halo_width=ESGgrid_WIDE_HALO_WIDTH, \ + delx=ESGgrid_DELX, \ + dely=ESGgrid_DELY) + + # + #----------------------------------------------------------------------- + # + # Create a new experiment directory. Note that at this point we are + # guaranteed that there is no preexisting experiment directory. For + # platforms with no workflow manager, we need to create LOGDIR as well, + # since it won't be created later at runtime. + # + #----------------------------------------------------------------------- + # + mkdir_vrfy(f' -p "{EXPTDIR}"') + mkdir_vrfy(f' -p "{LOGDIR}"') + # + #----------------------------------------------------------------------- + # + # If not running the MAKE_GRID_TN, MAKE_OROG_TN, and/or MAKE_SFC_CLIMO + # tasks, create symlinks under the FIXLAM directory to pregenerated grid, + # orography, and surface climatology files. 
In the process, also set + # RES_IN_FIXLAM_FILENAMES, which is the resolution of the grid (in units + # of number of grid points on an equivalent global uniform cubed-sphere + # grid) used in the names of the fixed files in the FIXLAM directory. + # + #----------------------------------------------------------------------- + # + mkdir_vrfy(f' -p "{FIXLAM}"') + RES_IN_FIXLAM_FILENAMES="" + # + #----------------------------------------------------------------------- + # + # If the grid file generation task in the workflow is going to be skipped + # (because pregenerated files are available), create links in the FIXLAM + # directory to the pregenerated grid files. + # + #----------------------------------------------------------------------- + # + + # export env vars + export_vars() + + # link fix files + res_in_grid_fns="" + if not RUN_TASK_MAKE_GRID: + + res_in_grid_fns = link_fix( \ + verbose=VERBOSE, \ + file_group="grid") + + RES_IN_FIXLAM_FILENAMES=res_in_grid_fns + # + #----------------------------------------------------------------------- + # + # If the orography file generation task in the workflow is going to be + # skipped (because pregenerated files are available), create links in + # the FIXLAM directory to the pregenerated orography files. + # + #----------------------------------------------------------------------- + # + res_in_orog_fns="" + if not RUN_TASK_MAKE_OROG: + + res_in_orog_fns = link_fix( \ + verbose=VERBOSE, \ + file_group="orog") + + if RES_IN_FIXLAM_FILENAMES and \ + ( res_in_orog_fns != RES_IN_FIXLAM_FILENAMES): + print_err_msg_exit(f''' + The resolution extracted from the orography file names (res_in_orog_fns) + does not match the resolution in other groups of files already consi- + dered (RES_IN_FIXLAM_FILENAMES): + res_in_orog_fns = {res_in_orog_fns} + RES_IN_FIXLAM_FILENAMES = {RES_IN_FIXLAM_FILENAMES}''') + else: + RES_IN_FIXLAM_FILENAMES=res_in_orog_fns + # + #----------------------------------------------------------------------- + # + # If the surface climatology file generation task in the workflow is + # going to be skipped (because pregenerated files are available), create + # links in the FIXLAM directory to the pregenerated surface climatology + # files. + # + #----------------------------------------------------------------------- + # + res_in_sfc_climo_fns="" + if not RUN_TASK_MAKE_SFC_CLIMO: + + res_in_sfc_climo_fns = link_fix( \ + verbose=VERBOSE, \ + file_group="sfc_climo") + + if RES_IN_FIXLAM_FILENAMES and \ + res_in_sfc_climo_fns != RES_IN_FIXLAM_FILENAMES: + print_err_msg_exit(f''' + The resolution extracted from the surface climatology file names (res_- + in_sfc_climo_fns) does not match the resolution in other groups of files + already considered (RES_IN_FIXLAM_FILENAMES): + res_in_sfc_climo_fns = {res_in_sfc_climo_fns} + RES_IN_FIXLAM_FILENAMES = {RES_IN_FIXLAM_FILENAMES}''') + else: + RES_IN_FIXLAM_FILENAMES=res_in_sfc_climo_fns + # + #----------------------------------------------------------------------- + # + # The variable CRES is needed in constructing various file names. If + # not running the make_grid task, we can set it here. Otherwise, it + # will get set to a valid value by that task. + # + #----------------------------------------------------------------------- + # + global CRES + CRES="" + if not RUN_TASK_MAKE_GRID: + CRES=f"C{RES_IN_FIXLAM_FILENAMES}" + # + #----------------------------------------------------------------------- + # + # Make sure that WRITE_DOPOST is set to a valid value.
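The same consistency check is applied to each group of pregenerated files (grid, orography, surface climatology): the first group seen sets the resolution, and any later group that disagrees is an error. A minimal sketch of that accumulation logic, with made-up resolutions, is:

```
# Sketch of the resolution-consistency check above (resolutions are made up).
def accumulate_res(current_res, new_res, group):
    """Return the accumulated resolution, or raise if a group disagrees."""
    if current_res and new_res != current_res:
        raise ValueError(f"resolution mismatch for {group}: "
                         f"{new_res} != {current_res}")
    return new_res

res = ""
for group, group_res in [("grid", "403"), ("orog", "403"), ("sfc_climo", "403")]:
    res = accumulate_res(res, group_res, group)

CRES = f"C{res}"
print(CRES)   # C403
```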
+ # + #----------------------------------------------------------------------- + # + global RUN_TASK_RUN_POST + if WRITE_DOPOST: + # Turn off run_post + RUN_TASK_RUN_POST=False + + # Check if SUB_HOURLY_POST is on + if SUB_HOURLY_POST: + print_err_msg_exit(f''' + SUB_HOURLY_POST is NOT available with Inline Post yet.''') + # + #----------------------------------------------------------------------- + # + # Calculate PE_MEMBER01. This is the number of MPI tasks used for the + # forecast, including those for the write component if QUILTING is set + # to True. + # + #----------------------------------------------------------------------- + # + global PE_MEMBER01 + PE_MEMBER01=LAYOUT_X*LAYOUT_Y + if QUILTING: + PE_MEMBER01 = PE_MEMBER01 + WRTCMP_write_groups*WRTCMP_write_tasks_per_group + + print_info_msg(f''' + The number of MPI tasks for the forecast (including those for the write + component if it is being used) are: + PE_MEMBER01 = {PE_MEMBER01}''', verbose=VERBOSE) + # + #----------------------------------------------------------------------- + # + # Calculate the number of nodes (NNODES_RUN_FCST) to request from the job + # scheduler for the forecast task (RUN_FCST_TN). This is just PE_MEMBER01 + # dividied by the number of processes per node we want to request for this + # task (PPN_RUN_FCST), then rounded up to the nearest integer, i.e. + # + # NNODES_RUN_FCST = ceil(PE_MEMBER01/PPN_RUN_FCST) + # + # where ceil(...) is the ceiling function, i.e. it rounds its floating + # point argument up to the next larger integer. Since in bash, division + # of two integers returns a truncated integer, and since bash has no + # built-in ceil(...) function, we perform the rounding-up operation by + # adding the denominator (of the argument of ceil(...) above) minus 1 to + # the original numerator, i.e. by redefining NNODES_RUN_FCST to be + # + # NNODES_RUN_FCST = (PE_MEMBER01 + PPN_RUN_FCST - 1)/PPN_RUN_FCST + # + #----------------------------------------------------------------------- + # + global NNODES_RUN_FCST + NNODES_RUN_FCST= (PE_MEMBER01 + PPN_RUN_FCST - 1)//PPN_RUN_FCST + + # + #----------------------------------------------------------------------- + # + # Call the function that checks whether the RUC land surface model (LSM) + # is being called by the physics suite and sets the workflow variable + # SDF_USES_RUC_LSM to True or False accordingly. + # + #----------------------------------------------------------------------- + # + global SDF_USES_RUC_LSM + SDF_USES_RUC_LSM = check_ruc_lsm( \ + ccpp_phys_suite_fp=CCPP_PHYS_SUITE_IN_CCPP_FP) + # + #----------------------------------------------------------------------- + # + # Set the name of the file containing aerosol climatology data that, if + # necessary, can be used to generate approximate versions of the aerosol + # fields needed by Thompson microphysics. This file will be used to + # generate such approximate aerosol fields in the ICs and LBCs if Thompson + # MP is included in the physics suite and if the exteranl model for ICs + # or LBCs does not already provide these fields. Also, set the full path + # to this file. 
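To make the node-count arithmetic above concrete, here is a tiny sketch with hypothetical layout and write-component settings; the integer expression reproduces ceil(PE_MEMBER01/PPN_RUN_FCST) without floating point:

```
import math

# Hypothetical decomposition and write-component settings, for illustration only.
LAYOUT_X, LAYOUT_Y = 15, 20
WRTCMP_write_groups, WRTCMP_write_tasks_per_group = 1, 20
PPN_RUN_FCST = 40
QUILTING = True

PE_MEMBER01 = LAYOUT_X * LAYOUT_Y
if QUILTING:
    PE_MEMBER01 += WRTCMP_write_groups * WRTCMP_write_tasks_per_group   # 320

NNODES_RUN_FCST = (PE_MEMBER01 + PPN_RUN_FCST - 1) // PPN_RUN_FCST      # 8
assert NNODES_RUN_FCST == math.ceil(PE_MEMBER01 / PPN_RUN_FCST)
print(PE_MEMBER01, NNODES_RUN_FCST)   # 320 8
```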
+ # + #----------------------------------------------------------------------- + # + THOMPSON_MP_CLIMO_FN="Thompson_MP_MONTHLY_CLIMO.nc" + THOMPSON_MP_CLIMO_FP=os.path.join(FIXam,THOMPSON_MP_CLIMO_FN) + # + #----------------------------------------------------------------------- + # + # Call the function that, if the Thompson microphysics parameterization + # is being called by the physics suite, modifies certain workflow arrays + # to ensure that fixed files needed by this parameterization are copied + # to the FIXam directory and appropriate symlinks to them are created in + # the run directories. This function also sets the workflow variable + # SDF_USES_THOMPSON_MP that indicates whether Thompson MP is called by + # the physics suite. + # + #----------------------------------------------------------------------- + # + SDF_USES_THOMPSON_MP = set_thompson_mp_fix_files( \ + ccpp_phys_suite_fp=CCPP_PHYS_SUITE_IN_CCPP_FP, \ + thompson_mp_climo_fn=THOMPSON_MP_CLIMO_FN) + + IMPORTS = [ "CYCLEDIR_LINKS_TO_FIXam_FILES_MAPPING", "FIXgsm_FILES_TO_COPY_TO_FIXam" ] + import_vars(env_vars=IMPORTS) + # + #----------------------------------------------------------------------- + # + # Generate the shell script that will appear in the experiment directory + # (EXPTDIR) and will contain definitions of variables needed by the va- + # rious scripts in the workflow. We refer to this as the experiment/ + # workflow global variable definitions file. We will create this file + # by: + # + # 1) Copying the default workflow/experiment configuration file (speci- + # fied by EXPT_DEFAULT_CONFIG_FN and located in the shell script di- + # rectory specified by USHDIR) to the experiment directory and rena- + # ming it to the name specified by GLOBAL_VAR_DEFNS_FN. + # + # 2) Resetting the default variable values in this file to their current + # values. This is necessary because these variables may have been + # reset by the user-specified configuration file (if one exists in + # USHDIR) and/or by this setup script, e.g. because predef_domain is + # set to a valid non-empty value. + # + # 3) Appending to the variable definitions file any new variables intro- + # duced in this setup script that may be needed by the scripts that + # perform the various tasks in the workflow (and which source the va- + # riable defintions file). + # + # First, set the full path to the variable definitions file and copy the + # default configuration script into it. + # + #----------------------------------------------------------------------- + # + + # update dictionary with globals() values + update_dict = {k: globals()[k] for k in cfg_d.keys() if k in globals() } + cfg_d.update(update_dict) + + # write the updated default dictionary + global GLOBAL_VAR_DEFNS_FP + GLOBAL_VAR_DEFNS_FP=os.path.join(EXPTDIR,GLOBAL_VAR_DEFNS_FN) + all_lines=cfg_to_shell_str(cfg_d) + with open(GLOBAL_VAR_DEFNS_FP,'w') as f: + msg = f""" # + #----------------------------------------------------------------------- + #----------------------------------------------------------------------- + # Section 1: + # This section contains (most of) the primary experiment variables, i.e. + # those variables that are defined in the default configuration file + # (config_defaults.sh) and that can be reset via the user-specified + # experiment configuration file (config.sh). 
+ #----------------------------------------------------------------------- + #----------------------------------------------------------------------- + # + """ + f.write(dedent(msg)) + f.write(all_lines) + + # print info message + msg=dedent(f''' + Before updating default values of experiment variables to user-specified + values, the variable \"line_list\" contains: + + ''') + + msg +=dedent(f''' + {all_lines}''') + + print_info_msg(msg,verbose=DEBUG) + # + # print info message + # + print_info_msg(f''' + Generating the global experiment variable definitions file specified by + GLOBAL_VAR_DEFNS_FN: + GLOBAL_VAR_DEFNS_FN = \"{GLOBAL_VAR_DEFNS_FN}\" + Full path to this file is: + GLOBAL_VAR_DEFNS_FP = \"{GLOBAL_VAR_DEFNS_FP}\" + For more detailed information, set DEBUG to \"TRUE\" in the experiment + configuration file (\"{EXPT_CONFIG_FN}\").''') + + # + #----------------------------------------------------------------------- + # + # Append additional variable definitions (and comments) to the variable + # definitions file. These variables have been set above using the vari- + # ables in the default and local configuration scripts. These variables + # are needed by various tasks/scripts in the workflow. + # + #----------------------------------------------------------------------- + # + msg = f""" + # + #----------------------------------------------------------------------- + #----------------------------------------------------------------------- + # Section 2: + # This section defines variables that have been derived from the primary + # set of experiment variables above (we refer to these as \"derived\" or + # \"secondary\" variables). + #----------------------------------------------------------------------- + #----------------------------------------------------------------------- + # + + # + #----------------------------------------------------------------------- + # + # Full path to workflow (re)launch script, its log file, and the line + # that gets added to the cron table to launch this script if the flag + # USE_CRON_TO_RELAUNCH is set to \"TRUE\". + # + #----------------------------------------------------------------------- + # + WFLOW_LAUNCH_SCRIPT_FP='{WFLOW_LAUNCH_SCRIPT_FP}' + WFLOW_LAUNCH_LOG_FP='{WFLOW_LAUNCH_LOG_FP}' + CRONTAB_LINE='{CRONTAB_LINE}' + # + #----------------------------------------------------------------------- + # + # Directories. 
+ # + #----------------------------------------------------------------------- + # + SR_WX_APP_TOP_DIR='{SR_WX_APP_TOP_DIR}' + HOMErrfs='{HOMErrfs}' + USHDIR='{USHDIR}' + SCRIPTSDIR='{SCRIPTSDIR}' + JOBSDIR='{JOBSDIR}' + SORCDIR='{SORCDIR}' + SRC_DIR='{SRC_DIR}' + PARMDIR='{PARMDIR}' + MODULES_DIR='{MODULES_DIR}' + EXECDIR='{EXECDIR}' + FIXam='{FIXam}' + FIXclim='{FIXclim}' + FIXLAM='{FIXLAM}' + FIXgsm='{FIXgsm}' + FIXaer='{FIXaer}' + FIXlut='{FIXlut}' + COMROOT='{COMROOT}' + COMOUT_BASEDIR='{COMOUT_BASEDIR}' + TEMPLATE_DIR='{TEMPLATE_DIR}' + VX_CONFIG_DIR='{VX_CONFIG_DIR}' + METPLUS_CONF='{METPLUS_CONF}' + MET_CONFIG='{MET_CONFIG}' + UFS_WTHR_MDL_DIR='{UFS_WTHR_MDL_DIR}' + UFS_UTILS_DIR='{UFS_UTILS_DIR}' + SFC_CLIMO_INPUT_DIR='{SFC_CLIMO_INPUT_DIR}' + TOPO_DIR='{TOPO_DIR}' + UPP_DIR='{UPP_DIR}' + + EXPTDIR='{EXPTDIR}' + LOGDIR='{LOGDIR}' + CYCLE_BASEDIR='{CYCLE_BASEDIR}' + GRID_DIR='{GRID_DIR}' + OROG_DIR='{OROG_DIR}' + SFC_CLIMO_DIR='{SFC_CLIMO_DIR}' + + NDIGITS_ENSMEM_NAMES='{NDIGITS_ENSMEM_NAMES}' + ENSMEM_NAMES={list_to_str(ENSMEM_NAMES)} + FV3_NML_ENSMEM_FPS={list_to_str(FV3_NML_ENSMEM_FPS)} + # + #----------------------------------------------------------------------- + # + # Files. + # + #----------------------------------------------------------------------- + # + GLOBAL_VAR_DEFNS_FP='{GLOBAL_VAR_DEFNS_FP}' + + DATA_TABLE_FN='{DATA_TABLE_FN}' + DIAG_TABLE_FN='{DIAG_TABLE_FN}' + FIELD_TABLE_FN='{FIELD_TABLE_FN}' + MODEL_CONFIG_FN='{MODEL_CONFIG_FN}' + NEMS_CONFIG_FN='{NEMS_CONFIG_FN}' + + DATA_TABLE_TMPL_FN='{DATA_TABLE_TMPL_FN}' + DIAG_TABLE_TMPL_FN='{DIAG_TABLE_TMPL_FN}' + FIELD_TABLE_TMPL_FN='{FIELD_TABLE_TMPL_FN}' + MODEL_CONFIG_TMPL_FN='{MODEL_CONFIG_TMPL_FN}' + NEMS_CONFIG_TMPL_FN='{NEMS_CONFIG_TMPL_FN}' + + DATA_TABLE_TMPL_FP='{DATA_TABLE_TMPL_FP}' + DIAG_TABLE_TMPL_FP='{DIAG_TABLE_TMPL_FP}' + FIELD_TABLE_TMPL_FP='{FIELD_TABLE_TMPL_FP}' + FV3_NML_BASE_SUITE_FP='{FV3_NML_BASE_SUITE_FP}' + FV3_NML_YAML_CONFIG_FP='{FV3_NML_YAML_CONFIG_FP}' + FV3_NML_BASE_ENS_FP='{FV3_NML_BASE_ENS_FP}' + MODEL_CONFIG_TMPL_FP='{MODEL_CONFIG_TMPL_FP}' + NEMS_CONFIG_TMPL_FP='{NEMS_CONFIG_TMPL_FP}' + + CCPP_PHYS_SUITE_FN='{CCPP_PHYS_SUITE_FN}' + CCPP_PHYS_SUITE_IN_CCPP_FP='{CCPP_PHYS_SUITE_IN_CCPP_FP}' + CCPP_PHYS_SUITE_FP='{CCPP_PHYS_SUITE_FP}' + + FIELD_DICT_FN='{FIELD_DICT_FN}' + FIELD_DICT_IN_UWM_FP='{FIELD_DICT_IN_UWM_FP}' + FIELD_DICT_FP='{FIELD_DICT_FP}' + + DATA_TABLE_FP='{DATA_TABLE_FP}' + FIELD_TABLE_FP='{FIELD_TABLE_FP}' + FV3_NML_FN='{FV3_NML_FN}' # This may not be necessary... + FV3_NML_FP='{FV3_NML_FP}' + NEMS_CONFIG_FP='{NEMS_CONFIG_FP}' + + FV3_EXEC_FP='{FV3_EXEC_FP}' + + LOAD_MODULES_RUN_TASK_FP='{LOAD_MODULES_RUN_TASK_FP}' + + THOMPSON_MP_CLIMO_FN='{THOMPSON_MP_CLIMO_FN}' + THOMPSON_MP_CLIMO_FP='{THOMPSON_MP_CLIMO_FP}' + # + #----------------------------------------------------------------------- + # + # Flag for creating relative symlinks (as opposed to absolute ones). + # + #----------------------------------------------------------------------- + # + RELATIVE_LINK_FLAG='{RELATIVE_LINK_FLAG}' + # + #----------------------------------------------------------------------- + # + # Parameters that indicate whether or not various parameterizations are + # included in and called by the physics suite. 
+ # + #----------------------------------------------------------------------- + # + SDF_USES_RUC_LSM='{type_to_str(SDF_USES_RUC_LSM)}' + SDF_USES_THOMPSON_MP='{type_to_str(SDF_USES_THOMPSON_MP)}' + # + #----------------------------------------------------------------------- + # + # Grid configuration parameters needed regardless of grid generation + # method used. + # + #----------------------------------------------------------------------- + # + GTYPE='{GTYPE}' + TILE_RGNL='{TILE_RGNL}' + NH0='{NH0}' + NH3='{NH3}' + NH4='{NH4}' + + LON_CTR='{LON_CTR}' + LAT_CTR='{LAT_CTR}' + NX='{NX}' + NY='{NY}' + NHW='{NHW}' + STRETCH_FAC='{STRETCH_FAC}' + + RES_IN_FIXLAM_FILENAMES='{RES_IN_FIXLAM_FILENAMES}' + # + # If running the make_grid task, CRES will be set to a null string during + # the grid generation step. It will later be set to an actual value after + # the make_grid task is complete. + # + CRES='{CRES}'""" + with open(GLOBAL_VAR_DEFNS_FP,'a') as f: + f.write(dedent(msg)) + # + #----------------------------------------------------------------------- + # + # Append to the variable definitions file the defintions of grid parame- + # ters that are specific to the grid generation method used. + # + #----------------------------------------------------------------------- + # + if GRID_GEN_METHOD == "GFDLgrid": + + msg=f""" + # + #----------------------------------------------------------------------- + # + # Grid configuration parameters for a regional grid generated from a + # global parent cubed-sphere grid. This is the method originally + # suggested by GFDL since it allows GFDL's nested grid generator to be + # used to generate a regional grid. However, for large regional domains, + # it results in grids that have an unacceptably large range of cell sizes + # (i.e. ratio of maximum to minimum cell size is not sufficiently close + # to 1). + # + #----------------------------------------------------------------------- + # + ISTART_OF_RGNL_DOM_WITH_WIDE_HALO_ON_T6SG='{ISTART_OF_RGNL_DOM_WITH_WIDE_HALO_ON_T6SG}' + IEND_OF_RGNL_DOM_WITH_WIDE_HALO_ON_T6SG='{IEND_OF_RGNL_DOM_WITH_WIDE_HALO_ON_T6SG}' + JSTART_OF_RGNL_DOM_WITH_WIDE_HALO_ON_T6SG='{JSTART_OF_RGNL_DOM_WITH_WIDE_HALO_ON_T6SG}' + JEND_OF_RGNL_DOM_WITH_WIDE_HALO_ON_T6SG='{JEND_OF_RGNL_DOM_WITH_WIDE_HALO_ON_T6SG}'""" + with open(GLOBAL_VAR_DEFNS_FP,'a') as f: + f.write(dedent(msg)) + + elif GRID_GEN_METHOD == "ESGgrid": + + msg=f""" + # + #----------------------------------------------------------------------- + # + # Grid configuration parameters for a regional grid generated independently + # of a global parent grid. This method was developed by Jim Purser of + # EMC and results in very uniform grids (i.e. ratio of maximum to minimum + # cell size is very close to 1). + # + #----------------------------------------------------------------------- + # + DEL_ANGLE_X_SG='{DEL_ANGLE_X_SG}' + DEL_ANGLE_Y_SG='{DEL_ANGLE_Y_SG}' + NEG_NX_OF_DOM_WITH_WIDE_HALO='{NEG_NX_OF_DOM_WITH_WIDE_HALO}' + NEG_NY_OF_DOM_WITH_WIDE_HALO='{NEG_NY_OF_DOM_WITH_WIDE_HALO}' + PAZI='{PAZI or ''}'""" + with open(GLOBAL_VAR_DEFNS_FP,'a') as f: + f.write(dedent(msg)) + # + #----------------------------------------------------------------------- + # + # Continue appending variable definitions to the variable definitions + # file. 
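For context, the append pattern used above (and in the sections that follow) is simply a dedented f-string written to the variable-definitions file. A minimal standalone sketch, with a hypothetical output file name and illustrative grid values, looks like:

```
import os
from textwrap import dedent

# Hypothetical output file and illustrative grid values.
var_defns_fp = os.path.join("/tmp", "var_defns_example.sh")
DEL_ANGLE_X_SG, DEL_ANGLE_Y_SG = 0.0134891, 0.0134891

msg = f"""
    #
    # Example grid parameters (illustrative values only).
    #
    DEL_ANGLE_X_SG='{DEL_ANGLE_X_SG}'
    DEL_ANGLE_Y_SG='{DEL_ANGLE_Y_SG}'
    """
with open(var_defns_fp, "a") as f:
    f.write(dedent(msg))
```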
+ # + #----------------------------------------------------------------------- + # + msg = f""" + # + #----------------------------------------------------------------------- + # + # Flag in the \"{MODEL_CONFIG_FN}\" file for coupling the ocean model to + # the weather model. + # + #----------------------------------------------------------------------- + # + CPL='{type_to_str(CPL)}' + # + #----------------------------------------------------------------------- + # + # Name of the ozone parameterization. The value this gets set to depends + # on the CCPP physics suite being used. + # + #----------------------------------------------------------------------- + # + OZONE_PARAM='{OZONE_PARAM}' + # + #----------------------------------------------------------------------- + # + # If USE_USER_STAGED_EXTRN_FILES is set to \"FALSE\", this is the system + # directory in which the workflow scripts will look for the files generated + # by the external model specified in EXTRN_MDL_NAME_ICS. These files will + # be used to generate the input initial condition and surface files for + # the FV3-LAM. + # + #----------------------------------------------------------------------- + # + EXTRN_MDL_SYSBASEDIR_ICS='{EXTRN_MDL_SYSBASEDIR_ICS}' + # + #----------------------------------------------------------------------- + # + # If USE_USER_STAGED_EXTRN_FILES is set to \"FALSE\", this is the system + # directory in which the workflow scripts will look for the files generated + # by the external model specified in EXTRN_MDL_NAME_LBCS. These files + # will be used to generate the input lateral boundary condition files for + # the FV3-LAM. + # + #----------------------------------------------------------------------- + # + EXTRN_MDL_SYSBASEDIR_LBCS='{EXTRN_MDL_SYSBASEDIR_LBCS}' + # + #----------------------------------------------------------------------- + # + # Shift back in time (in units of hours) of the starting time of the ex- + # ternal model specified in EXTRN_MDL_NAME_LBCS. + # + #----------------------------------------------------------------------- + # + EXTRN_MDL_LBCS_OFFSET_HRS='{EXTRN_MDL_LBCS_OFFSET_HRS}' + # + #----------------------------------------------------------------------- + # + # Boundary condition update times (in units of forecast hours). Note that + # LBC_SPEC_FCST_HRS is an array, even if it has only one element. + # + #----------------------------------------------------------------------- + # + LBC_SPEC_FCST_HRS={list_to_str(LBC_SPEC_FCST_HRS)} + # + #----------------------------------------------------------------------- + # + # The number of cycles for which to make forecasts and the list of + # starting dates/hours of these cycles. + # + #----------------------------------------------------------------------- + # + NUM_CYCLES='{NUM_CYCLES}' + ALL_CDATES={list_to_str(ALL_CDATES)} + # + #----------------------------------------------------------------------- + # + # Parameters that determine whether FVCOM data will be used, and if so, + # their location. + # + # If USE_FVCOM is set to \"TRUE\", then FVCOM data (in the file FVCOM_FILE + # located in the directory FVCOM_DIR) will be used to update the surface + # boundary conditions during the initial conditions generation task + # (MAKE_ICS_TN). + # + #----------------------------------------------------------------------- + # + USE_FVCOM='{type_to_str(USE_FVCOM)}' + FVCOM_DIR='{FVCOM_DIR}' + FVCOM_FILE='{FVCOM_FILE}' + # + #----------------------------------------------------------------------- + # + # Computational parameters. 
+ # + #----------------------------------------------------------------------- + # + NCORES_PER_NODE='{NCORES_PER_NODE}' + PE_MEMBER01='{PE_MEMBER01}' + # + #----------------------------------------------------------------------- + # + # IF DO_SPP is set to "TRUE", N_VAR_SPP specifies the number of physics + # parameterizations that are perturbed with SPP. If DO_LSM_SPP is set to + # "TRUE", N_VAR_LNDP specifies the number of LSM parameters that are + # perturbed. LNDP_TYPE determines the way LSM perturbations are employed + # and FHCYC_LSM_SPP_OR_NOT sets FHCYC based on whether LSM perturbations + # are turned on or not. + # + #----------------------------------------------------------------------- + # + N_VAR_SPP='{N_VAR_SPP}' + N_VAR_LNDP='{N_VAR_LNDP}' + LNDP_TYPE='{LNDP_TYPE}' + FHCYC_LSM_SPP_OR_NOT='{FHCYC_LSM_SPP_OR_NOT}' + """ + + with open(GLOBAL_VAR_DEFNS_FP,'a') as f: + f.write(dedent(msg)) + + # export all vars + export_vars() + + # + #----------------------------------------------------------------------- + # + # Check validity of parameters in one place, here in the end. + # + #----------------------------------------------------------------------- + # + + # update dictionary with globals() values + update_dict = {k: globals()[k] for k in cfg_d.keys() if k in globals() } + cfg_d.update(update_dict) + + # loop through cfg_d and check validity of params + cfg_v = load_config_file("valid_param_vals.yaml") + for k,v in cfg_d.items(): + if v == None: + continue + vkey = 'valid_vals_' + k + if (vkey in cfg_v) and not (v in cfg_v[vkey]): + print_err_msg_exit(f''' + The variable {k}={v} in {EXPT_DEFAULT_CONFIG_FN} or {EXPT_CONFIG_FN} does not have + a valid value. Possible values are: + {k} = {cfg_v[vkey]}''') + + # + #----------------------------------------------------------------------- + # + # Print message indicating successful completion of script. + # + #----------------------------------------------------------------------- + # + print_info_msg(f''' + ======================================================================== + Function setup() in \"{os.path.basename(__file__)}\" completed successfully!!! + ========================================================================''') + +# +#----------------------------------------------------------------------- +# +# Call the function defined above. +# +#----------------------------------------------------------------------- +# +if __name__ == "__main__": + setup() + diff --git a/ush/setup.sh b/ush/setup.sh index 01470963d..a5e0f15fd 100755 --- a/ush/setup.sh +++ b/ush/setup.sh @@ -515,7 +515,7 @@ One or more fix file directories have not been specified for this machine: FIXlut = \"${FIXlut:-\"\"} TOPO_DIR = \"${TOPO_DIR:-\"\"} SFC_CLIMO_INPUT_DIR = \"${SFC_CLIMO_INPUT_DIR:-\"\"} - FIXLAM_NCO_BASEDIR = \"${FIXLAM_NCO_BASEDIR:-\"\"} + DOMAIN_PREGEN_BASEDIR = \"${DOMAIN_PREGEN_BASEDIR:-\"\"} You can specify the missing location(s) in ${machine_file}" fi @@ -523,7 +523,7 @@ fi # #----------------------------------------------------------------------- # -# Set the names of the build and workflow environment files (if not +# Set the names of the build and workflow module files (if not # already specified by the user). These are the files that need to be # sourced before building the component SRW App codes and running various # workflow scripts, respectively. 
@@ -531,8 +531,8 @@ fi #----------------------------------------------------------------------- # machine=$(echo_lowercase ${MACHINE}) -WFLOW_ENV_FN=${WFLOW_ENV_FN:-"wflow_${machine}.env"} -BUILD_ENV_FN=${BUILD_ENV_FN:-"build_${machine}_${COMPILER}.env"} +WFLOW_MOD_FN=${WFLOW_MOD_FN:-"wflow_${machine}"} +BUILD_MOD_FN=${BUILD_MOD_FN:-"build_${machine}_${COMPILER}"} # #----------------------------------------------------------------------- # @@ -1141,7 +1141,7 @@ check_for_preexist_dir_file "$EXPTDIR" "${PREEXISTING_DIR_METHOD}" # is not placed directly under COMROOT but several directories further # down. More specifically, for a cycle starting at yyyymmddhh, it is at # -# $COMROOT/$NET/$envir/$RUN.$yyyymmdd/$hh +# $COMROOT/$NET/$model_ver/$RUN.$yyyymmdd/$hh # # Below, we set COMROOT in terms of PTMP as COMROOT="$PTMP/com". COMOROOT # is not used by the workflow in community mode. @@ -1151,10 +1151,13 @@ check_for_preexist_dir_file "$EXPTDIR" "${PREEXISTING_DIR_METHOD}" # from the RUN_POST_TN task will be placed, i.e. it is the cycle-independent # portion of the RUN_POST_TN task's output directory. It is given by # -# $COMROOT/$NET/$envir +# $COMROOT/$NET/$model_ver # # COMOUT_BASEDIR is not used by the workflow in community mode. # +# POST_OUTPUT_DOMAIN_NAME: +# The PREDEF_GRID_NAME is set by default. +# #----------------------------------------------------------------------- # LOGDIR="${EXPTDIR}/log" @@ -1164,20 +1167,28 @@ FIXclim="${EXPTDIR}/fix_clim" FIXLAM="${EXPTDIR}/fix_lam" if [ "${RUN_ENVIR}" = "nco" ]; then - - CYCLE_BASEDIR="$STMP/tmpnwprd/$RUN" + CYCLE_BASEDIR="${STMP}/tmpnwprd/${RUN}" check_for_preexist_dir_file "${CYCLE_BASEDIR}" "${PREEXISTING_DIR_METHOD}" - COMROOT="$PTMP/com" - COMOUT_BASEDIR="$COMROOT/$NET/$envir" + COMROOT="${PTMP}/com" + COMOUT_BASEDIR="${COMROOT}/${NET}/${model_ver}" check_for_preexist_dir_file "${COMOUT_BASEDIR}" "${PREEXISTING_DIR_METHOD}" - else - CYCLE_BASEDIR="$EXPTDIR" COMROOT="" COMOUT_BASEDIR="" +fi +POST_OUTPUT_DOMAIN_NAME="${POST_OUTPUT_DOMAIN_NAME:-${PREDEF_GRID_NAME}}" +if [ -z "${POST_OUTPUT_DOMAIN_NAME}" ]; then + print_err_msg_exit "\ +The domain name used in naming the run_post output files (POST_OUTPUT_DOMAIN_NAME) +has not been set: + POST_OUTPUT_DOMAIN_NAME = \"${POST_OUTPUT_DOMAIN_NAME}\" +If this experiment is not using a predefined grid (i.e. if PREDEF_GRID_NAME +is set to a null string), POST_OUTPUT_DOMAIN_NAME must be set in the SRW +App's configuration file (\"${EXPT_CONFIG_FN}\")." fi +POST_OUTPUT_DOMAIN_NAME=$(echo_lowercase ${POST_OUTPUT_DOMAIN_NAME}) # #----------------------------------------------------------------------- # @@ -1216,11 +1227,22 @@ fi # dot_ccpp_phys_suite_or_null=".${CCPP_PHYS_SUITE}" -DATA_TABLE_TMPL_FN="${DATA_TABLE_FN}" -DIAG_TABLE_TMPL_FN="${DIAG_TABLE_FN}${dot_ccpp_phys_suite_or_null}" -FIELD_TABLE_TMPL_FN="${FIELD_TABLE_FN}${dot_ccpp_phys_suite_or_null}" -MODEL_CONFIG_TMPL_FN="${MODEL_CONFIG_FN}" -NEMS_CONFIG_TMPL_FN="${NEMS_CONFIG_FN}" +# Names of input files that the forecast model (ufs-weather-model) expects +# to read in. These should only be changed if the input file names in the +# forecast model code are changed. 
+#---------------------------------- +DATA_TABLE_FN="data_table" +DIAG_TABLE_FN="diag_table" +FIELD_TABLE_FN="field_table" +MODEL_CONFIG_FN="model_configure" +NEMS_CONFIG_FN="nems.configure" +#---------------------------------- + +DATA_TABLE_TMPL_FN="${DATA_TABLE_TMPL_FN:-${DATA_TABLE_FN}}" +DIAG_TABLE_TMPL_FN="${DIAG_TABLE_TMPL_FN:-${DIAG_TABLE_FN}}${dot_ccpp_phys_suite_or_null}" +FIELD_TABLE_TMPL_FN="${FIELD_TABLE_TMPL_FN:-${FIELD_TABLE_FN}}${dot_ccpp_phys_suite_or_null}" +MODEL_CONFIG_TMPL_FN="${MODEL_CONFIG_TMPL_FN:-${MODEL_CONFIG_FN}}" +NEMS_CONFIG_TMPL_FN="${NEMS_CONFIG_TMPL_FN:-${NEMS_CONFIG_FN}}" DATA_TABLE_TMPL_FP="${TEMPLATE_DIR}/${DATA_TABLE_TMPL_FN}" DIAG_TABLE_TMPL_FP="${TEMPLATE_DIR}/${DIAG_TABLE_TMPL_FN}" @@ -1345,14 +1367,15 @@ USE_USER_STAGED_EXTRN_FILES=$(boolify $USE_USER_STAGED_EXTRN_FILES) # if [ "${USE_USER_STAGED_EXTRN_FILES}" = "TRUE" ]; then - if [ ! -d "${EXTRN_MDL_SOURCE_BASEDIR_ICS}" ]; then + # Check for the base directory up to the first templated field. + if [ ! -d "$(dirname ${EXTRN_MDL_SOURCE_BASEDIR_ICS%%\$*})" ]; then print_err_msg_exit "\ The directory (EXTRN_MDL_SOURCE_BASEDIR_ICS) in which the user-staged external model files for generating ICs should be located does not exist: EXTRN_MDL_SOURCE_BASEDIR_ICS = \"${EXTRN_MDL_SOURCE_BASEDIR_ICS}\"" fi - if [ ! -d "${EXTRN_MDL_SOURCE_BASEDIR_LBCS}" ]; then + if [ ! -d "$(dirname ${EXTRN_MDL_SOURCE_BASEDIR_LBCS%%\$*})" ]; then print_err_msg_exit "\ The directory (EXTRN_MDL_SOURCE_BASEDIR_LBCS) in which the user-staged external model files for generating LBCs should be located does not exist: @@ -1393,6 +1416,21 @@ fi # #----------------------------------------------------------------------- # +# Make sure that DO_ENSEMBLE is set to TRUE when running ensemble vx. +# +#----------------------------------------------------------------------- +# +if [ "${DO_ENSEMBLE}" = "FALSE" ] && [ "${RUN_TASK_VX_ENSGRID}" = "TRUE" -o \ + "${RUN_TASK_VX_ENSPOINT}" = "TRUE" ]; then + print_err_msg_exit "\ +Ensemble verification can not be run unless running in ensemble mode: + DO_ENSEMBLE = \"${DO_ENSEMBLE}\" + RUN_TASK_VX_ENSGRID = \"${RUN_TASK_VX_ENSGRID}\" + RUN_TASK_VX_ENSPOINT = \"${RUN_TASK_VX_ENSPOINT}\"" +fi +# +#----------------------------------------------------------------------- +# # Set the full path to the forecast model executable. # #----------------------------------------------------------------------- @@ -1452,7 +1490,7 @@ LOAD_MODULES_RUN_TASK_FP="$USHDIR/load_modules_run_task.sh" # if [ "${RUN_ENVIR}" = "nco" ]; then - nco_fix_dir="${FIXLAM_NCO_BASEDIR}/${PREDEF_GRID_NAME}" + nco_fix_dir="${DOMAIN_PREGEN_BASEDIR}/${PREDEF_GRID_NAME}" if [ ! -d "${nco_fix_dir}" ]; then print_err_msg_exit "\ The directory (nco_fix_dir) that should contain the pregenerated grid, @@ -1468,11 +1506,11 @@ orography, and surface climatology files does not exist: When RUN_ENVIR is set to \"nco\", the workflow assumes that pregenerated grid files already exist in the directory - \${FIXLAM_NCO_BASEDIR}/\${PREDEF_GRID_NAME} + \${DOMAIN_PREGEN_BASEDIR}/\${PREDEF_GRID_NAME} where - FIXLAM_NCO_BASEDIR = \"${FIXLAM_NCO_BASEDIR}\" + DOMAIN_PREGEN_BASEDIR = \"${DOMAIN_PREGEN_BASEDIR}\" PREDEF_GRID_NAME = \"${PREDEF_GRID_NAME}\" Thus, the MAKE_GRID_TN task must not be run (i.e. 
RUN_TASK_MAKE_GRID must @@ -1506,11 +1544,11 @@ Reset values are: msg=" When RUN_ENVIR is set to \"nco\", the workflow assumes that pregenerated orography files already exist in the directory - \${FIXLAM_NCO_BASEDIR}/\${PREDEF_GRID_NAME} + \${DOMAIN_PREGEN_BASEDIR}/\${PREDEF_GRID_NAME} where - FIXLAM_NCO_BASEDIR = \"${FIXLAM_NCO_BASEDIR}\" + DOMAIN_PREGEN_BASEDIR = \"${DOMAIN_PREGEN_BASEDIR}\" PREDEF_GRID_NAME = \"${PREDEF_GRID_NAME}\" Thus, the MAKE_OROG_TN task must not be run (i.e. RUN_TASK_MAKE_OROG must @@ -1545,11 +1583,11 @@ Reset values are: When RUN_ENVIR is set to \"nco\", the workflow assumes that pregenerated surface climatology files already exist in the directory - \${FIXLAM_NCO_BASEDIR}/\${PREDEF_GRID_NAME} + \${DOMAIN_PREGEN_BASEDIR}/\${PREDEF_GRID_NAME} where - FIXLAM_NCO_BASEDIR = \"${FIXLAM_NCO_BASEDIR}\" + DOMAIN_PREGEN_BASEDIR = \"${DOMAIN_PREGEN_BASEDIR}\" PREDEF_GRID_NAME = \"${PREDEF_GRID_NAME}\" Thus, the MAKE_SFC_CLIMO_TN task must not be run (i.e. RUN_TASK_MAKE_SFC_CLIMO @@ -1576,8 +1614,7 @@ one above. Reset values are: fi - if [ "${RUN_TASK_VX_GRIDSTAT}" = "TRUE" ] || \ - [ "${RUN_TASK_VX_GRIDSTAT}" = "FALSE" ]; then + if [ "${RUN_TASK_VX_GRIDSTAT}" = "TRUE" ]; then msg=" When RUN_ENVIR is set to \"nco\", it is assumed that the verification @@ -1596,8 +1633,7 @@ Reset value is:" fi - if [ "${RUN_TASK_VX_POINTSTAT}" = "TRUE" ] || \ - [ "${RUN_TASK_VX_POINTSTAT}" = "FALSE" ]; then + if [ "${RUN_TASK_VX_POINTSTAT}" = "TRUE" ]; then msg=" When RUN_ENVIR is set to \"nco\", it is assumed that the verification @@ -1616,8 +1652,7 @@ Reset value is:" fi - if [ "${RUN_TASK_VX_ENSGRID}" = "TRUE" ] || \ - [ "${RUN_TASK_VX_ENSGRID}" = "FALSE" ]; then + if [ "${RUN_TASK_VX_ENSGRID}" = "TRUE" ]; then msg=" When RUN_ENVIR is set to \"nco\", it is assumed that the verification @@ -2011,7 +2046,6 @@ SUB_HOURLY_POST is NOT available with Inline Post yet." fi fi - check_var_valid_value "QUILTING" "valid_vals_QUILTING" QUILTING=$(boolify $QUILTING) @@ -2141,7 +2175,6 @@ GLOBAL_VAR_DEFNS_FP="$EXPTDIR/${GLOBAL_VAR_DEFNS_FN}" # variable definitions file. # #----------------------------------------------------------------------- - # print_info_msg " Creating list of default experiment variable definitions..." @@ -2429,6 +2462,12 @@ FV3_NML_ENSMEM_FPS=${fv3_nml_ensmem_fps_str} # GLOBAL_VAR_DEFNS_FP='${GLOBAL_VAR_DEFNS_FP}' +DATA_TABLE_FN='${DATA_TABLE_FN}' +DIAG_TABLE_FN='${DIAG_TABLE_FN}' +FIELD_TABLE_FN='${FIELD_TABLE_FN}' +MODEL_CONFIG_FN='${MODEL_CONFIG_FN}' +NEMS_CONFIG_FN='${NEMS_CONFIG_FN}' + DATA_TABLE_TMPL_FN='${DATA_TABLE_TMPL_FN}' DIAG_TABLE_TMPL_FN='${DIAG_TABLE_TMPL_FN}' FIELD_TABLE_TMPL_FN='${FIELD_TABLE_TMPL_FN}' diff --git a/ush/templates/FV3LAM_wflow.xml b/ush/templates/FV3LAM_wflow.xml index 48e53d81a..8dff08780 100644 --- a/ush/templates/FV3LAM_wflow.xml +++ b/ush/templates/FV3LAM_wflow.xml @@ -22,6 +22,7 @@ Parameters needed by the job scheduler. + @@ -184,6 +185,9 @@ MODULES_RUN_TASK_FP script. {%- endif %} {{ wtime_make_grid }} &NCORES_PER_NODE; + {%- if machine in ["GAEA"] %} + &SLURM_NATIVE_CMD; + {%- endif %} &MAKE_GRID_TN; &LOGDIR;/&MAKE_GRID_TN;.log @@ -207,6 +211,9 @@ MODULES_RUN_TASK_FP script. {%- endif %} {{ wtime_make_orog }} &NCORES_PER_NODE; + {%- if machine in ["GAEA"] %} + &SLURM_NATIVE_CMD; + {%- endif %} &MAKE_OROG_TN; &LOGDIR;/&MAKE_OROG_TN;.log @@ -234,6 +241,9 @@ MODULES_RUN_TASK_FP script. 
{{ nnodes_make_sfc_climo }}:ppn={{ ppn_make_sfc_climo }} {{ wtime_make_sfc_climo }} &NCORES_PER_NODE; + {%- if machine in ["GAEA"] %} + &SLURM_NATIVE_CMD; + {%- endif %} &MAKE_SFC_CLIMO_TN; &LOGDIR;/&MAKE_SFC_CLIMO_TN;.log @@ -271,6 +281,9 @@ MODULES_RUN_TASK_FP script. {{ nnodes_get_extrn_ics }}:ppn={{ ppn_get_extrn_ics }} {{ wtime_get_extrn_ics }} &NCORES_PER_NODE; + {%- if machine in ["GAEA"] %} + &SLURM_NATIVE_CMD; + {%- endif %} &GET_EXTRN_ICS_TN; &LOGDIR;/&GET_EXTRN_ICS_TN;_@Y@m@d@H.log @@ -278,7 +291,6 @@ MODULES_RUN_TASK_FP script. PDY@Y@m@d CDATE@Y@m@d@H CYCLE_DIR&CYCLE_BASEDIR;/@Y@m@d@H - EXTRN_MDL_NAME{{ extrn_mdl_name_ics }} ICS_OR_LBCSICS @@ -298,6 +310,9 @@ MODULES_RUN_TASK_FP script. {{ nnodes_get_extrn_lbcs }}:ppn={{ ppn_get_extrn_lbcs }} {{ wtime_get_extrn_lbcs }} &NCORES_PER_NODE; + {%- if machine in ["GAEA"] %} + &SLURM_NATIVE_CMD; + {%- endif %} &GET_EXTRN_LBCS_TN; &LOGDIR;/&GET_EXTRN_LBCS_TN;_@Y@m@d@H.log @@ -305,7 +320,6 @@ MODULES_RUN_TASK_FP script. PDY@Y@m@d CDATE@Y@m@d@H CYCLE_DIR&CYCLE_BASEDIR;/@Y@m@d@H - EXTRN_MDL_NAME{{ extrn_mdl_name_lbcs }} ICS_OR_LBCSLBCS @@ -332,6 +346,9 @@ MODULES_RUN_TASK_FP script. {{ nnodes_make_ics }}:ppn={{ ppn_make_ics }} {{ wtime_make_ics }} &NCORES_PER_NODE; + {%- if machine in ["GAEA"] %} + &SLURM_NATIVE_CMD; + {%- endif %} &MAKE_ICS_TN;{{ uscore_ensmem_name }} &LOGDIR;/&MAKE_ICS_TN;{{ uscore_ensmem_name }}_@Y@m@d@H.log @@ -376,6 +393,9 @@ MODULES_RUN_TASK_FP script. {{ nnodes_make_lbcs }}:ppn={{ ppn_make_lbcs }} {{ wtime_make_lbcs }} &NCORES_PER_NODE; + {%- if machine in ["GAEA"] %} + &SLURM_NATIVE_CMD; + {%- endif %} &MAKE_LBCS_TN;{{ uscore_ensmem_name }} &LOGDIR;/&MAKE_LBCS_TN;{{ uscore_ensmem_name }}_@Y@m@d@H.log @@ -423,6 +443,9 @@ MODULES_RUN_TASK_FP script. {%- else %} {{ nnodes_run_fcst }}:ppn={{ ppn_run_fcst }} &NCORES_PER_NODE; + {%- endif %} + {%- if machine in ["GAEA"] %} + &SLURM_NATIVE_CMD; {%- endif %} {{ wtime_run_fcst }} &RUN_FCST_TN;{{ uscore_ensmem_name }} @@ -475,6 +498,9 @@ later below for other output times. {{ nnodes_run_post }}:ppn={{ ppn_run_post }} {{ wtime_run_post }} &NCORES_PER_NODE; + {%- if machine in ["GAEA"] %} + &SLURM_NATIVE_CMD; + {%- endif %} &RUN_POST_TN;{{ uscore_ensmem_name }}_f#fhr##fmn# &LOGDIR;/&RUN_POST_TN;{{ uscore_ensmem_name }}_f#fhr##fmn#_@Y@m@d@H.log @@ -529,6 +555,9 @@ for other output times. {{ nnodes_run_post }}:ppn={{ ppn_run_post }} {{ wtime_run_post }} &NCORES_PER_NODE; + {%- if machine in ["GAEA"] %} + &SLURM_NATIVE_CMD; + {%- endif %} &RUN_POST_TN;{{ uscore_ensmem_name }}_f#fhr##fmn# &LOGDIR;/&RUN_POST_TN;{{ uscore_ensmem_name }}_f#fhr##fmn#_@Y@m@d@H.log @@ -588,6 +617,9 @@ always zero). {{ nnodes_run_post }}:ppn={{ ppn_run_post }} {{ wtime_run_post }} &NCORES_PER_NODE; + {%- if machine in ["GAEA"] %} + &SLURM_NATIVE_CMD; + {%- endif %} {%- if sub_hourly_post %} &RUN_POST_TN;{{ uscore_ensmem_name }}_f#fhr##fmn# &LOGDIR;/&RUN_POST_TN;{{ uscore_ensmem_name }}_f#fhr##fmn#_@Y@m@d@H.log @@ -656,6 +688,9 @@ the tag to be identical to the ones above for other output times. {{ nnodes_run_post }}:ppn={{ ppn_run_post }} {{ wtime_run_post }} &NCORES_PER_NODE; + {%- if machine in ["GAEA"] %} + &SLURM_NATIVE_CMD; + {%- endif %} &RUN_POST_TN;{{ uscore_ensmem_name }}_f#fhr##fmn# &LOGDIR;/&RUN_POST_TN;{{ uscore_ensmem_name }}_f#fhr##fmn#_@Y@m@d@H.log @@ -699,6 +734,9 @@ the tag to be identical to the ones above for other output times. 
{{ nnodes_get_obs_ccpa }}:ppn={{ ppn_get_obs_ccpa }} {{ wtime_get_obs_ccpa }} &NCORES_PER_NODE; + {%- if machine in ["GAEA"] %} + &SLURM_NATIVE_CMD; + {%- endif %} &GET_OBS_CCPA_TN; &LOGDIR;/&GET_OBS_CCPA_TN;_@Y@m@d@H.log @@ -729,6 +767,9 @@ the tag to be identical to the ones above for other output times. {{ nnodes_get_obs_mrms }}:ppn={{ ppn_get_obs_mrms }} {{ wtime_get_obs_mrms }} &NCORES_PER_NODE; + {%- if machine in ["GAEA"] %} + &SLURM_NATIVE_CMD; + {%- endif %} &GET_OBS_MRMS_TN; &LOGDIR;/&GET_OBS_MRMS_TN;_@Y@m@d@H.log @@ -760,6 +801,9 @@ the tag to be identical to the ones above for other output times. {{ nnodes_get_obs_ndas }}:ppn={{ ppn_get_obs_ndas }} {{ wtime_get_obs_ndas }} &NCORES_PER_NODE; + {%- if machine in ["GAEA"] %} + &SLURM_NATIVE_CMD; + {%- endif %} &GET_OBS_NDAS_TN; &LOGDIR;/&GET_OBS_NDAS_TN;_@Y@m@d@H.log @@ -786,6 +830,9 @@ the tag to be identical to the ones above for other output times. {{ nnodes_vx_gridstat }}:ppn={{ ppn_vx_gridstat }} {{ wtime_vx_gridstat }} &NCORES_PER_NODE; + {%- if machine in ["GAEA"] %} + &SLURM_NATIVE_CMD; + {%- endif %} &VX_GRIDSTAT_TN; &LOGDIR;/&VX_GRIDSTAT_TN;{{ uscore_ensmem_name }}_@Y@m@d@H.log @@ -836,6 +883,9 @@ the tag to be identical to the ones above for other output times. {{ nnodes_vx_gridstat }}:ppn={{ ppn_vx_gridstat }} {{ wtime_vx_gridstat }} &NCORES_PER_NODE; + {%- if machine in ["GAEA"] %} + &SLURM_NATIVE_CMD; + {%- endif %} &VX_GRIDSTAT_REFC_TN; &LOGDIR;/&VX_GRIDSTAT_REFC_TN;{{ uscore_ensmem_name }}_@Y@m@d@H.log @@ -885,6 +935,9 @@ the tag to be identical to the ones above for other output times. {{ nnodes_vx_gridstat }}:ppn={{ ppn_vx_gridstat }} {{ wtime_vx_gridstat }} &NCORES_PER_NODE; + {%- if machine in ["GAEA"] %} + &SLURM_NATIVE_CMD; + {%- endif %} &VX_GRIDSTAT_RETOP_TN; &LOGDIR;/&VX_GRIDSTAT_RETOP_TN;{{ uscore_ensmem_name }}_@Y@m@d@H.log @@ -934,6 +987,9 @@ the tag to be identical to the ones above for other output times. {{ nnodes_vx_gridstat }}:ppn={{ ppn_vx_gridstat }} {{ wtime_vx_gridstat }} &NCORES_PER_NODE; + {%- if machine in ["GAEA"] %} + &SLURM_NATIVE_CMD; + {%- endif %} &VX_GRIDSTAT_03h_TN; &LOGDIR;/&VX_GRIDSTAT_03h_TN;{{ uscore_ensmem_name }}_@Y@m@d@H.log @@ -969,6 +1025,9 @@ the tag to be identical to the ones above for other output times. {{ nnodes_vx_gridstat }}:ppn={{ ppn_vx_gridstat }} {{ wtime_vx_gridstat }} &NCORES_PER_NODE; + {%- if machine in ["GAEA"] %} + &SLURM_NATIVE_CMD; + {%- endif %} &VX_GRIDSTAT_06h_TN; &LOGDIR;/&VX_GRIDSTAT_06h_TN;{{ uscore_ensmem_name }}_@Y@m@d@H.log @@ -1004,6 +1063,9 @@ the tag to be identical to the ones above for other output times. {{ nnodes_vx_gridstat }}:ppn={{ ppn_vx_gridstat }} {{ wtime_vx_gridstat }} &NCORES_PER_NODE; + {%- if machine in ["GAEA"] %} + &SLURM_NATIVE_CMD; + {%- endif %} &VX_GRIDSTAT_24h_TN; &LOGDIR;/&VX_GRIDSTAT_24h_TN;{{ uscore_ensmem_name }}_@Y@m@d@H.log @@ -1039,6 +1101,9 @@ the tag to be identical to the ones above for other output times. {{ nnodes_vx_pointstat }}:ppn={{ ppn_vx_pointstat }} {{ wtime_vx_pointstat }} &NCORES_PER_NODE; + {%- if machine in ["GAEA"] %} + &SLURM_NATIVE_CMD; + {%- endif %} &VX_POINTSTAT_TN; &LOGDIR;/&VX_POINTSTAT_TN;{{ uscore_ensmem_name }}_@Y@m@d@H.log @@ -1091,6 +1156,9 @@ the tag to be identical to the ones above for other output times. 
{{ nnodes_vx_ensgrid }}:ppn={{ ppn_vx_ensgrid }} {{ wtime_vx_ensgrid }} &NCORES_PER_NODE; + {%- if machine in ["GAEA"] %} + &SLURM_NATIVE_CMD; + {%- endif %} &VX_ENSGRID_TN; &LOGDIR;/&VX_ENSGRID_TN;_@Y@m@d@H.log @@ -1123,6 +1191,9 @@ the tag to be identical to the ones above for other output times. {{ nnodes_vx_ensgrid }}:ppn={{ ppn_vx_ensgrid }} {{ wtime_vx_ensgrid }} &NCORES_PER_NODE; + {%- if machine in ["GAEA"] %} + &SLURM_NATIVE_CMD; + {%- endif %} &VX_ENSGRID_03h_TN; &LOGDIR;/&VX_ENSGRID_03h_TN;_@Y@m@d@H.log @@ -1155,6 +1226,9 @@ the tag to be identical to the ones above for other output times. {{ nnodes_vx_ensgrid }}:ppn={{ ppn_vx_ensgrid }} {{ wtime_vx_ensgrid }} &NCORES_PER_NODE; + {%- if machine in ["GAEA"] %} + &SLURM_NATIVE_CMD; + {%- endif %} &VX_ENSGRID_06h_TN; &LOGDIR;/&VX_ENSGRID_06h_TN;_@Y@m@d@H.log @@ -1187,6 +1261,9 @@ the tag to be identical to the ones above for other output times. {{ nnodes_vx_ensgrid }}:ppn={{ ppn_vx_ensgrid }} {{ wtime_vx_ensgrid }} &NCORES_PER_NODE; + {%- if machine in ["GAEA"] %} + &SLURM_NATIVE_CMD; + {%- endif %} &VX_ENSGRID_24h_TN; &LOGDIR;/&VX_ENSGRID_24h_TN;_@Y@m@d@H.log @@ -1218,6 +1295,9 @@ the tag to be identical to the ones above for other output times. {{ nnodes_vx_ensgrid }}:ppn={{ ppn_vx_ensgrid }} {{ wtime_vx_ensgrid }} &NCORES_PER_NODE; + {%- if machine in ["GAEA"] %} + &SLURM_NATIVE_CMD; + {%- endif %} &VX_ENSGRID_REFC_TN; &LOGDIR;/&VX_ENSGRID_REFC_TN;_@Y@m@d@H.log @@ -1248,6 +1328,9 @@ the tag to be identical to the ones above for other output times. {{ nnodes_vx_ensgrid }}:ppn={{ ppn_vx_ensgrid }} {{ wtime_vx_ensgrid }} &NCORES_PER_NODE; + {%- if machine in ["GAEA"] %} + &SLURM_NATIVE_CMD; + {%- endif %} &VX_ENSGRID_RETOP_TN; &LOGDIR;/&VX_ENSGRID_RETOP_TN;_@Y@m@d@H.log @@ -1277,6 +1360,9 @@ the tag to be identical to the ones above for other output times. {{ nnodes_vx_ensgrid_mean }}:ppn={{ ppn_vx_ensgrid_mean }} {{ wtime_vx_ensgrid_mean }} &NCORES_PER_NODE; + {%- if machine in ["GAEA"] %} + &SLURM_NATIVE_CMD; + {%- endif %} &VX_ENSGRID_MEAN_TN; &LOGDIR;/&VX_ENSGRID_MEAN_TN;_@Y@m@d@H.log @@ -1308,6 +1394,9 @@ the tag to be identical to the ones above for other output times. {{ nnodes_vx_ensgrid_prob }}:ppn={{ ppn_vx_ensgrid_prob }} {{ wtime_vx_ensgrid_prob }} &NCORES_PER_NODE; + {%- if machine in ["GAEA"] %} + &SLURM_NATIVE_CMD; + {%- endif %} &VX_ENSGRID_PROB_TN; &LOGDIR;/&VX_ENSGRID_PROB_TN;_@Y@m@d@H.log @@ -1339,6 +1428,9 @@ the tag to be identical to the ones above for other output times. {{ nnodes_vx_ensgrid_mean }}:ppn={{ ppn_vx_ensgrid_mean }} {{ wtime_vx_ensgrid_mean }} &NCORES_PER_NODE; + {%- if machine in ["GAEA"] %} + &SLURM_NATIVE_CMD; + {%- endif %} &VX_ENSGRID_MEAN_03h_TN; &LOGDIR;/&VX_ENSGRID_MEAN_03h_TN;_@Y@m@d@H.log @@ -1370,6 +1462,9 @@ the tag to be identical to the ones above for other output times. {{ nnodes_vx_ensgrid_prob }}:ppn={{ ppn_vx_ensgrid_prob }} {{ wtime_vx_ensgrid_prob }} &NCORES_PER_NODE; + {%- if machine in ["GAEA"] %} + &SLURM_NATIVE_CMD; + {%- endif %} &VX_ENSGRID_PROB_03h_TN; &LOGDIR;/&VX_ENSGRID_PROB_03h_TN;_@Y@m@d@H.log @@ -1402,6 +1497,9 @@ the tag to be identical to the ones above for other output times. {{ nnodes_vx_ensgrid_mean }}:ppn={{ ppn_vx_ensgrid_mean }} {{ wtime_vx_ensgrid_mean }} &NCORES_PER_NODE; + {%- if machine in ["GAEA"] %} + &SLURM_NATIVE_CMD; + {%- endif %} &VX_ENSGRID_MEAN_06h_TN; &LOGDIR;/&VX_ENSGRID_MEAN_06h_TN;_@Y@m@d@H.log @@ -1433,6 +1531,9 @@ the tag to be identical to the ones above for other output times. 
{{ nnodes_vx_ensgrid_prob }}:ppn={{ ppn_vx_ensgrid_prob }} {{ wtime_vx_ensgrid_prob }} &NCORES_PER_NODE; + {%- if machine in ["GAEA"] %} + &SLURM_NATIVE_CMD; + {%- endif %} &VX_ENSGRID_PROB_06h_TN; &LOGDIR;/&VX_ENSGRID_PROB_06h_TN;_@Y@m@d@H.log @@ -1465,6 +1566,9 @@ the tag to be identical to the ones above for other output times. {{ nnodes_vx_ensgrid_mean }}:ppn={{ ppn_vx_ensgrid_mean }} {{ wtime_vx_ensgrid_mean }} &NCORES_PER_NODE; + {%- if machine in ["GAEA"] %} + &SLURM_NATIVE_CMD; + {%- endif %} &VX_ENSGRID_MEAN_24h_TN; &LOGDIR;/&VX_ENSGRID_MEAN_24h_TN;_@Y@m@d@H.log @@ -1496,6 +1600,9 @@ the tag to be identical to the ones above for other output times. {{ nnodes_vx_ensgrid_prob }}:ppn={{ ppn_vx_ensgrid_prob }} {{ wtime_vx_ensgrid_prob }} &NCORES_PER_NODE; + {%- if machine in ["GAEA"] %} + &SLURM_NATIVE_CMD; + {%- endif %} &VX_ENSGRID_PROB_24h_TN; &LOGDIR;/&VX_ENSGRID_PROB_24h_TN;_@Y@m@d@H.log @@ -1527,6 +1634,9 @@ the tag to be identical to the ones above for other output times. {{ nnodes_vx_ensgrid_prob }}:ppn={{ ppn_vx_ensgrid_prob }} {{ wtime_vx_ensgrid_prob }} &NCORES_PER_NODE; + {%- if machine in ["GAEA"] %} + &SLURM_NATIVE_CMD; + {%- endif %} &VX_ENSGRID_PROB_REFC_TN; &LOGDIR;/&VX_ENSGRID_PROB_REFC_TN;_@Y@m@d@H.log @@ -1557,6 +1667,9 @@ the tag to be identical to the ones above for other output times. {{ nnodes_vx_ensgrid_prob }}:ppn={{ ppn_vx_ensgrid_prob }} {{ wtime_vx_ensgrid_prob }} &NCORES_PER_NODE; + {%- if machine in ["GAEA"] %} + &SLURM_NATIVE_CMD; + {%- endif %} &VX_ENSGRID_PROB_RETOP_TN; &LOGDIR;/&VX_ENSGRID_PROB_RETOP_TN;_@Y@m@d@H.log @@ -1588,6 +1701,9 @@ the tag to be identical to the ones above for other output times. {{ nnodes_vx_enspoint }}:ppn={{ ppn_vx_enspoint }} {{ wtime_vx_enspoint }} &NCORES_PER_NODE; + {%- if machine in ["GAEA"] %} + &SLURM_NATIVE_CMD; + {%- endif %} &VX_ENSPOINT_TN; &LOGDIR;/&VX_ENSPOINT_TN;_@Y@m@d@H.log @@ -1616,6 +1732,9 @@ the tag to be identical to the ones above for other output times. {{ nnodes_vx_enspoint_mean }}:ppn={{ ppn_vx_enspoint_mean }} {{ wtime_vx_enspoint_mean }} &NCORES_PER_NODE; + {%- if machine in ["GAEA"] %} + &SLURM_NATIVE_CMD; + {%- endif %} &VX_ENSPOINT_MEAN_TN; &LOGDIR;/&VX_ENSPOINT_MEAN_TN;_@Y@m@d@H.log @@ -1644,6 +1763,9 @@ the tag to be identical to the ones above for other output times. 
{{ nnodes_vx_enspoint_prob }}:ppn={{ ppn_vx_enspoint_prob }} {{ wtime_vx_enspoint_prob }} &NCORES_PER_NODE; + {%- if machine in ["GAEA"] %} + &SLURM_NATIVE_CMD; + {%- endif %} &VX_ENSPOINT_PROB_TN; &LOGDIR;/&VX_ENSPOINT_PROB_TN;_@Y@m@d@H.log diff --git a/ush/templates/data_locations.yml b/ush/templates/data_locations.yml index 23354e171..ef5d49c8e 100644 --- a/ush/templates/data_locations.yml +++ b/ush/templates/data_locations.yml @@ -64,7 +64,12 @@ FV3GFS: - gfs.t{hh}z.sfcanl.nemsio fcst: - gfs.t{hh}z.atmf{fcst_hr:03d}.nemsio - - gfs.t{hh}z.sfcf{fcst_hr:03d}.nemsio + netcdf: + anl: + - gfs.t{hh}z.atmanl.nc + - gfs.t{hh}z.sfcanl.nc + fcst: + - gfs.t{hh}z.atmf{fcst_hr:03d}.nc hpss: protocol: htar archive_path: @@ -103,16 +108,41 @@ FV3GFS: file_names: <<: *gfs_file_names +GSMGFS: + hpss: + protocol: htar + archive_path: + - /NCEPPROD/hpssprod/runhistory/rh{yyyy}/{yyyymm}/{yyyymmdd} + archive_internal_dir: + anl: + - ./ + fcst: + - /gpfs/hps/nco/ops/com/gfs/prod/gfs.{yyyymmdd} + archive_file_names: + anl: + - gpfs_hps_nco_ops_com_gfs_prod_gfs.{yyyymmddhh}.anl.tar + fcst: + - gpfs_hps_nco_ops_com_gfs_prod_gfs.{yyyymmddhh}.sigma.tar + file_names: + anl: + - gfs.t{hh}z.atmanl.nemsio + - gfs.t{hh}z.sfcanl.nemsio + fcst: + - gfs.t{hh}z.atmf{fcst_hr:03d}.nemsio + RAP: hpss: protocol: htar archive_format: tar archive_path: - /NCEPPROD/hpssprod/runhistory/rh{yyyy}/{yyyymm}/{yyyymmdd} + - /NCEPPROD/hpssprod/runhistory/rh{yyyy}/{yyyymm}/{yyyymmdd} archive_internal_dir: - ./ + - ./ archive_file_names: # RAP forecasts are binned into 6 hour tar files. + - gpfs_hps_nco_ops_com_rap_prod_rap.{yyyymmdd}{bin6}.wrf.tar - com_rap_prod_rap.{yyyymmdd}{bin6}.wrf.tar file_names: &rap_file_names anl: @@ -125,58 +155,31 @@ RAP: file_names: <<: *rap_file_names -RAPx: - hpss: - protocol: htar - archive_format: zip - archive_path: - - /BMC/fdr/Permanent/{yyyy}/{mm}/{dd}/data/fsl/rap/full/wrfnat - archive_file_names: - # RAPx bins two cycles togehter, and named by the lower even value - # of the cycle hour. - - '{yyyymmdd}{hh_even}00.zip' - file_names: - anl: - - '{yy}{jjj}{hh}00{fcst_hr:02d}00' - fcst: - - '{yy}{jjj}{hh}00{fcst_hr:02d}00' - HRRR: hpss: protocol: htar archive_format: tar archive_path: - /NCEPPROD/hpssprod/runhistory/rh{yyyy}/{yyyymm}/{yyyymmdd} + - /NCEPPROD/hpssprod/runhistory/rh{yyyy}/{yyyymm}/{yyyymmdd} archive_internal_dir: - ./ + - ./ archive_file_names: # HRRR forecasts are binned into 6 hour tar files. 
- - com_hrrr_prod_hrrr.{yyyymmdd}_conus{bin6}.wrfnatdng.tar + - gpfs_hps_nco_ops_com_hrrr_prod_hrrr.{yyyymmdd}_conus{bin6}.wrf.tar + - com_hrrr_prod_hrrr.{yyyymmdd}_conus{bin6}.wrf.tar file_names: &hrrr_file_names anl: - - hrrr.t{hh}z.wrfnatf{fcst_hr:02d}.grib2 + - hrrr.t{hh}z.wrfprsf{fcst_hr:02d}.grib2 fcst: - - hrrr.t{hh}z.wrfnatf{fcst_hr:02d}.grib2 + - hrrr.t{hh}z.wrfprsf{fcst_hr:02d}.grib2 aws: protocol: download url: https://noaa-hrrr-bdp-pds.s3.amazonaws.com/hrrr.{yyyymmdd}/conus/ file_names: <<: *hrrr_file_names -HRRRx: - hpss: - protocol: htar - archive_format: zip - archive_path: - - /BMC/fdr/Permanent/{yyyy}/{mm}/{dd}/data/fsl/hrrr/conus/wrfnat - archive_file_names: - - '{yyyymmddhh}00.zip' - file_names: - anl: - - '{yy}{jjj}{hh}00{fcst_hr:02d}00' - fcst: - - '{yy}{jjj}{hh}00{fcst_hr:02d}00' - NAM: hpss: protocol: htar @@ -184,10 +187,10 @@ NAM: archive_path: - /NCEPPROD/hpssprod/runhistory/rh{yyyy}/{yyyymm}/{yyyymmdd} archive_file_names: - - com_nam_prod_nam.{yyyymmddhh}.bgrid.tar + - com_nam_prod_nam.{yyyymmddhh}.awphys{fcst_hr:02d}.tar file_names: anl: - - nam.t{hh}z.bgrdsf{fcst_hr:03d}.tm00 + - nam.t{hh}z.awphys{fcst_hr:02d}.tm00.grib2 fcst: - - nam.t{hh}z.bgrdsf{fcst_hr:03d} + - nam.t{hh}z.awphys{fcst_hr:02d}.tm00.grib2 diff --git a/ush/templates/parm/metplus/EnsembleStat_APCP01h.conf b/ush/templates/parm/metplus/EnsembleStat_APCP01h.conf index 23f6adae9..33599d9f7 100644 --- a/ush/templates/parm/metplus/EnsembleStat_APCP01h.conf +++ b/ush/templates/parm/metplus/EnsembleStat_APCP01h.conf @@ -221,7 +221,7 @@ STAGING_DIR = {OUTPUT_BASE}/stage/APCP_01 # FCST_ENSEMBLE_STAT_INPUT_TEMPLATE - comma separated list of ensemble members # or a single line, - filename wildcard characters may be used, ? or *. -FCST_ENSEMBLE_STAT_INPUT_TEMPLATE = mem*/postprd/{ENV[NET]}.t{init?fmt=%H}z.prslevf{lead?fmt=%HHH}.tm{init?fmt=%H}.grib2 +FCST_ENSEMBLE_STAT_INPUT_TEMPLATE = mem*/postprd/{ENV[NET]}.t{init?fmt=%H}z.prslev.f{lead?fmt=%HHH}.{ENV[POST_OUTPUT_DOMAIN_NAME]}.grib2 # Template to look for point observations. # Example precip24_2010010112.nc diff --git a/ush/templates/parm/metplus/EnsembleStat_APCP03h.conf b/ush/templates/parm/metplus/EnsembleStat_APCP03h.conf index d67380ad8..2691dc2dc 100644 --- a/ush/templates/parm/metplus/EnsembleStat_APCP03h.conf +++ b/ush/templates/parm/metplus/EnsembleStat_APCP03h.conf @@ -256,8 +256,8 @@ STAGING_DIR = {OUTPUT_BASE}/stage/APCP_03 [filename_templates] # Need to have PCPCombine output data to individual member directories. -FCST_PCP_COMBINE_INPUT_TEMPLATE = {custom?fmt=%s}/postprd/{ENV[NET]}.t{init?fmt=%H}z.prslevf{lead?fmt=%HHH}.tm{init?fmt=%H}.grib2 -FCST_PCP_COMBINE_OUTPUT_TEMPLATE = {custom?fmt=%s}/metprd/pcp_combine/{ENV[NET]}.t{init?fmt=%H}z.prslevf{lead?fmt=%HHH}.tm{init?fmt=%H}_a03h +FCST_PCP_COMBINE_INPUT_TEMPLATE = {custom?fmt=%s}/postprd/{ENV[NET]}.t{init?fmt=%H}z.prslev.f{lead?fmt=%HHH}.{ENV[POST_OUTPUT_DOMAIN_NAME]}.grib2 +FCST_PCP_COMBINE_OUTPUT_TEMPLATE = {custom?fmt=%s}/metprd/pcp_combine/{ENV[NET]}.t{init?fmt=%H}z.prslev.f{lead?fmt=%HHH}.{ENV[POST_OUTPUT_DOMAIN_NAME]}_a03h # Input and output template for obs pcp-combine files OBS_PCP_COMBINE_INPUT_TEMPLATE = {OBS_PCP_COMBINE_INPUT_DIR}/{valid?fmt=%Y%m%d}/ccpa.t{valid?fmt=%H}z.01h.hrap.conus.gb2 @@ -266,7 +266,7 @@ OBS_PCP_COMBINE_OUTPUT_TEMPLATE = {OBS_PCP_COMBINE_OUTPUT_DIR}/{valid?fmt=%Y%m%d # FCST_ENSEMBLE_STAT_INPUT_TEMPLATE - comma separated list of ensemble members # or a single line, - filename wildcard characters may be used, ? or *. 
-FCST_ENSEMBLE_STAT_INPUT_TEMPLATE = membegin_end_incr(1,{ENV[NUM_ENS_MEMBERS]},1,{ENV[NUM_PAD]})/metprd/pcp_combine/{ENV[NET]}.t{init?fmt=%H}z.prslevf{lead?fmt=%HHH}.tm{init?fmt=%H}_a03h +FCST_ENSEMBLE_STAT_INPUT_TEMPLATE = membegin_end_incr(1,{ENV[NUM_ENS_MEMBERS]},1,{ENV[NUM_PAD]})/metprd/pcp_combine/{ENV[NET]}.t{init?fmt=%H}z.prslev.f{lead?fmt=%HHH}.{ENV[POST_OUTPUT_DOMAIN_NAME]}_a03h # Template to look for point observations. # Example precip24_2010010112.nc diff --git a/ush/templates/parm/metplus/EnsembleStat_APCP06h.conf b/ush/templates/parm/metplus/EnsembleStat_APCP06h.conf index ae2f61f44..8527e2363 100644 --- a/ush/templates/parm/metplus/EnsembleStat_APCP06h.conf +++ b/ush/templates/parm/metplus/EnsembleStat_APCP06h.conf @@ -257,8 +257,8 @@ STAGING_DIR = {OUTPUT_BASE}/stage/APCP_01 [filename_templates] # Need to have PCPCombine output data to individual member directories. -FCST_PCP_COMBINE_INPUT_TEMPLATE = {custom?fmt=%s}/postprd/{ENV[NET]}.t{init?fmt=%H}z.prslevf{lead?fmt=%HHH}.tm{init?fmt=%H}.grib2 -FCST_PCP_COMBINE_OUTPUT_TEMPLATE = {custom?fmt=%s}/metprd/pcp_combine/{ENV[NET]}.t{init?fmt=%H}z.prslevf{lead?fmt=%HHH}.tm{init?fmt=%H}_a06h +FCST_PCP_COMBINE_INPUT_TEMPLATE = {custom?fmt=%s}/postprd/{ENV[NET]}.t{init?fmt=%H}z.prslev.f{lead?fmt=%HHH}.{ENV[POST_OUTPUT_DOMAIN_NAME]}.grib2 +FCST_PCP_COMBINE_OUTPUT_TEMPLATE = {custom?fmt=%s}/metprd/pcp_combine/{ENV[NET]}.t{init?fmt=%H}z.prslev.f{lead?fmt=%HHH}.{ENV[POST_OUTPUT_DOMAIN_NAME]}_a06h # Input and output template for obs pcp-combine files OBS_PCP_COMBINE_INPUT_TEMPLATE = {OBS_PCP_COMBINE_INPUT_DIR}/{valid?fmt=%Y%m%d}/ccpa.t{valid?fmt=%H}z.01h.hrap.conus.gb2 @@ -267,7 +267,7 @@ OBS_PCP_COMBINE_OUTPUT_TEMPLATE = {OBS_PCP_COMBINE_OUTPUT_DIR}/{valid?fmt=%Y%m%d # FCST_ENSEMBLE_STAT_INPUT_TEMPLATE - comma separated list of ensemble members # or a single line, - filename wildcard characters may be used, ? or *. -FCST_ENSEMBLE_STAT_INPUT_TEMPLATE = membegin_end_incr(1,{ENV[NUM_ENS_MEMBERS]},1,{ENV[NUM_PAD]})/metprd/pcp_combine/{ENV[NET]}.t{init?fmt=%H}z.prslevf{lead?fmt=%HHH}.tm{init?fmt=%H}_a06h +FCST_ENSEMBLE_STAT_INPUT_TEMPLATE = membegin_end_incr(1,{ENV[NUM_ENS_MEMBERS]},1,{ENV[NUM_PAD]})/metprd/pcp_combine/{ENV[NET]}.t{init?fmt=%H}z.prslev.f{lead?fmt=%HHH}.{ENV[POST_OUTPUT_DOMAIN_NAME]}_a06h # Template to look for point observations. # Example precip24_2010010112.nc diff --git a/ush/templates/parm/metplus/EnsembleStat_APCP24h.conf b/ush/templates/parm/metplus/EnsembleStat_APCP24h.conf index 0b1d175af..c3ed6a3a9 100644 --- a/ush/templates/parm/metplus/EnsembleStat_APCP24h.conf +++ b/ush/templates/parm/metplus/EnsembleStat_APCP24h.conf @@ -257,8 +257,8 @@ STAGING_DIR = {OUTPUT_BASE}/stage/APCP_01 [filename_templates] # Need to have PCPCombine output data to individual member directories. 
-FCST_PCP_COMBINE_INPUT_TEMPLATE = {custom?fmt=%s}/postprd/{ENV[NET]}.t{init?fmt=%H}z.prslevf{lead?fmt=%HHH}.tm{init?fmt=%H}.grib2 -FCST_PCP_COMBINE_OUTPUT_TEMPLATE = {custom?fmt=%s}/metprd/pcp_combine/{ENV[NET]}.t{init?fmt=%H}z.prslevf{lead?fmt=%HHH}.tm{init?fmt=%H}_a24h +FCST_PCP_COMBINE_INPUT_TEMPLATE = {custom?fmt=%s}/postprd/{ENV[NET]}.t{init?fmt=%H}z.prslev.f{lead?fmt=%HHH}.{ENV[POST_OUTPUT_DOMAIN_NAME]}.grib2 +FCST_PCP_COMBINE_OUTPUT_TEMPLATE = {custom?fmt=%s}/metprd/pcp_combine/{ENV[NET]}.t{init?fmt=%H}z.prslev.f{lead?fmt=%HHH}.{ENV[POST_OUTPUT_DOMAIN_NAME]}_a24h # Input and output template for obs pcp-combine files OBS_PCP_COMBINE_INPUT_TEMPLATE = {OBS_PCP_COMBINE_INPUT_DIR}/{valid?fmt=%Y%m%d}/ccpa.t{valid?fmt=%H}z.01h.hrap.conus.gb2 @@ -267,7 +267,7 @@ OBS_PCP_COMBINE_OUTPUT_TEMPLATE = {OBS_PCP_COMBINE_OUTPUT_DIR}/{valid?fmt=%Y%m%d # FCST_ENSEMBLE_STAT_INPUT_TEMPLATE - comma separated list of ensemble members # or a single line, - filename wildcard characters may be used, ? or *. -FCST_ENSEMBLE_STAT_INPUT_TEMPLATE = membegin_end_incr(1,{ENV[NUM_ENS_MEMBERS]},1,{ENV[NUM_PAD]})/metprd/pcp_combine/{ENV[NET]}.t{init?fmt=%H}z.prslevf{lead?fmt=%HHH}.tm{init?fmt=%H}_a24h +FCST_ENSEMBLE_STAT_INPUT_TEMPLATE = membegin_end_incr(1,{ENV[NUM_ENS_MEMBERS]},1,{ENV[NUM_PAD]})/metprd/pcp_combine/{ENV[NET]}.t{init?fmt=%H}z.prslev.f{lead?fmt=%HHH}.{ENV[POST_OUTPUT_DOMAIN_NAME]}_a24h # Template to look for point observations. # Example precip24_2010010112.nc diff --git a/ush/templates/parm/metplus/EnsembleStat_REFC.conf b/ush/templates/parm/metplus/EnsembleStat_REFC.conf index 989341a55..123af0f0a 100644 --- a/ush/templates/parm/metplus/EnsembleStat_REFC.conf +++ b/ush/templates/parm/metplus/EnsembleStat_REFC.conf @@ -220,7 +220,7 @@ STAGING_DIR = {OUTPUT_BASE}/stage/REFC # FCST_ENSEMBLE_STAT_INPUT_TEMPLATE - comma separated list of ensemble members # or a single line, - filename wildcard characters may be used, ? or *. -FCST_ENSEMBLE_STAT_INPUT_TEMPLATE = mem*/postprd/{ENV[NET]}.t{init?fmt=%H}z.prslevf{lead?fmt=%HHH}.tm{init?fmt=%H}.grib2 +FCST_ENSEMBLE_STAT_INPUT_TEMPLATE = mem*/postprd/{ENV[NET]}.t{init?fmt=%H}z.prslev.f{lead?fmt=%HHH}.{ENV[POST_OUTPUT_DOMAIN_NAME]}.grib2 # Template to look for point observations. # Example precip24_2010010112.nc diff --git a/ush/templates/parm/metplus/EnsembleStat_RETOP.conf b/ush/templates/parm/metplus/EnsembleStat_RETOP.conf index b0c235a2e..4fcbc9bcf 100644 --- a/ush/templates/parm/metplus/EnsembleStat_RETOP.conf +++ b/ush/templates/parm/metplus/EnsembleStat_RETOP.conf @@ -223,7 +223,7 @@ STAGING_DIR = {OUTPUT_BASE}/stage/RETOP # FCST_ENSEMBLE_STAT_INPUT_TEMPLATE - comma separated list of ensemble members # or a single line, - filename wildcard characters may be used, ? or *. -FCST_ENSEMBLE_STAT_INPUT_TEMPLATE = mem*/postprd/{ENV[NET]}.t{init?fmt=%H}z.prslevf{lead?fmt=%HHH}.tm{init?fmt=%H}.grib2 +FCST_ENSEMBLE_STAT_INPUT_TEMPLATE = mem*/postprd/{ENV[NET]}.t{init?fmt=%H}z.prslev.f{lead?fmt=%HHH}.{ENV[POST_OUTPUT_DOMAIN_NAME]}.grib2 # Template to look for point observations. 
# Example precip24_2010010112.nc diff --git a/ush/templates/parm/metplus/EnsembleStat_conus_sfc.conf b/ush/templates/parm/metplus/EnsembleStat_conus_sfc.conf index ecdc7f1ac..d66066422 100644 --- a/ush/templates/parm/metplus/EnsembleStat_conus_sfc.conf +++ b/ush/templates/parm/metplus/EnsembleStat_conus_sfc.conf @@ -93,6 +93,9 @@ ENSEMBLE_STAT_OUTPUT_PREFIX = {MODEL}_ADPSFC_{OBTYPE} PB2NC_CONFIG_FILE = {PARM_BASE}/met_config/PB2NCConfig_wrapped ENSEMBLE_STAT_CONFIG_FILE = {PARM_BASE}/met_config/EnsembleStatConfig_wrapped +ENSEMBLE_STAT_OBS_QUALITY_INC = 0, 1, 2, 3, 9 +#ENSEMBLE_STAT_OBS_QUALITY_EXC = + # if True, pb2nc will skip processing a file if the output already exists # used to speed up runs and reduce redundancy PB2NC_SKIP_IF_OUTPUT_EXISTS = True @@ -322,7 +325,7 @@ PB2NC_OUTPUT_TEMPLATE = prepbufr.ndas.{valid?fmt=%Y%m%d%H}.nc # or a single line, - filename wildcard characters may be used, ? or *. FCST_ENSEMBLE_STAT_INPUT_TEMPLATE = - mem*/postprd/{ENV[NET]}.t{init?fmt=%H}z.prslevf{lead?fmt=%HHH}.tm{init?fmt=%H}.grib2 + mem*/postprd/{ENV[NET]}.t{init?fmt=%H}z.prslev.f{lead?fmt=%HHH}.{ENV[POST_OUTPUT_DOMAIN_NAME]}.grib2 # Template to look for point observations. # Example precip24_2010010112.nc diff --git a/ush/templates/parm/metplus/EnsembleStat_upper_air.conf b/ush/templates/parm/metplus/EnsembleStat_upper_air.conf index d25241e93..072926a3b 100644 --- a/ush/templates/parm/metplus/EnsembleStat_upper_air.conf +++ b/ush/templates/parm/metplus/EnsembleStat_upper_air.conf @@ -93,6 +93,9 @@ ENSEMBLE_STAT_OUTPUT_PREFIX = {MODEL}_ADPUPA_{OBTYPE} PB2NC_CONFIG_FILE = {PARM_BASE}/met_config/PB2NCConfig_wrapped ENSEMBLE_STAT_CONFIG_FILE = {PARM_BASE}/met_config/EnsembleStatConfig_wrapped +ENSEMBLE_STAT_OBS_QUALITY_INC = 0, 1, 2, 3, 9 +#ENSEMBLE_STAT_OBS_QUALITY_EXC = + # if True, pb2nc will skip processing a file if the output already exists # used to speed up runs and reduce redundancy PB2NC_SKIP_IF_OUTPUT_EXISTS = True @@ -391,7 +394,7 @@ PB2NC_OUTPUT_TEMPLATE = prepbufr.ndas.{valid?fmt=%Y%m%d%H}.nc # or a single line, - filename wildcard characters may be used, ? or *. FCST_ENSEMBLE_STAT_INPUT_TEMPLATE = - mem*/postprd/{ENV[NET]}.t{init?fmt=%H}z.prslevf{lead?fmt=%HHH}.tm{init?fmt=%H}.grib2 + mem*/postprd/{ENV[NET]}.t{init?fmt=%H}z.prslev.f{lead?fmt=%HHH}.{ENV[POST_OUTPUT_DOMAIN_NAME]}.grib2 # Template to look for point observations. 
# Example precip24_2010010112.nc diff --git a/ush/templates/parm/metplus/GridStat_APCP01h.conf b/ush/templates/parm/metplus/GridStat_APCP01h.conf index 240462dac..2c6aac65a 100644 --- a/ush/templates/parm/metplus/GridStat_APCP01h.conf +++ b/ush/templates/parm/metplus/GridStat_APCP01h.conf @@ -253,7 +253,7 @@ STAGING_DIR = {OUTPUT_BASE}/stage/APCP_01h [filename_templates] # Template to look for forecast input to GridStat relative to FCST_GRID_STAT_INPUT_DIR -FCST_GRID_STAT_INPUT_TEMPLATE = {ENV[NET]}.t{init?fmt=%H}z.prslevf{lead?fmt=%HHH}.tm{init?fmt=%H}.grib2 +FCST_GRID_STAT_INPUT_TEMPLATE = {ENV[NET]}.t{init?fmt=%H}z.prslev.f{lead?fmt=%HHH}.{ENV[POST_OUTPUT_DOMAIN_NAME]}.grib2 # Template to look for observation input to GridStat relative to OBS_GRID_STAT_INPUT_DIR OBS_GRID_STAT_INPUT_TEMPLATE = {valid?fmt=%Y%m%d}/ccpa.t{valid?fmt=%H}z.01h.hrap.conus.gb2 diff --git a/ush/templates/parm/metplus/GridStat_APCP03h.conf b/ush/templates/parm/metplus/GridStat_APCP03h.conf index f6d629021..c01780617 100644 --- a/ush/templates/parm/metplus/GridStat_APCP03h.conf +++ b/ush/templates/parm/metplus/GridStat_APCP03h.conf @@ -283,7 +283,7 @@ STAGING_DIR = {OUTPUT_BASE}/stage/APCP_03h [filename_templates] # Template to look for forecast input to PCPCombine and GridStat relative to FCST_GRID_STAT_INPUT_DIR -FCST_PCP_COMBINE_INPUT_TEMPLATE = {ENV[NET]}.t{init?fmt=%H}z.prslevf{lead?fmt=%HHH}.tm{init?fmt=%H}.grib2 +FCST_PCP_COMBINE_INPUT_TEMPLATE = {ENV[NET]}.t{init?fmt=%H}z.prslev.f{lead?fmt=%HHH}.{ENV[POST_OUTPUT_DOMAIN_NAME]}.grib2 FCST_GRID_STAT_INPUT_TEMPLATE = {FCST_PCP_COMBINE_OUTPUT_TEMPLATE} # Template to look for observation input to PCPCombine and GridStat relative to OBS_GRID_STAT_INPUT_DIR @@ -291,7 +291,7 @@ OBS_PCP_COMBINE_INPUT_TEMPLATE = {valid?fmt=%Y%m%d}/ccpa.t{valid?fmt=%H}z.01h.hr OBS_GRID_STAT_INPUT_TEMPLATE = {OBS_PCP_COMBINE_OUTPUT_TEMPLATE} # Optional subdirectories relative to GRID_STAT_OUTPUT_DIR to write output from PCPCombine and GridStat -FCST_PCP_COMBINE_OUTPUT_TEMPLATE = {ENV[NET]}.t{init?fmt=%H}z.prslevf{lead?fmt=%HHH}.tm{init?fmt=%H}_a{level?fmt=%HH}h +FCST_PCP_COMBINE_OUTPUT_TEMPLATE = {ENV[NET]}.t{init?fmt=%H}z.prslev.f{lead?fmt=%HHH}.{ENV[POST_OUTPUT_DOMAIN_NAME]}_a{level?fmt=%HH}h OBS_PCP_COMBINE_OUTPUT_TEMPLATE = {valid?fmt=%Y%m%d}/ccpa.t{valid?fmt=%H}z.hrap.conus.gb2_a{level?fmt=%HH}h GRID_STAT_OUTPUT_TEMPLATE = metprd/grid_stat diff --git a/ush/templates/parm/metplus/GridStat_APCP06h.conf b/ush/templates/parm/metplus/GridStat_APCP06h.conf index 7c8d69d1a..0e4d66531 100644 --- a/ush/templates/parm/metplus/GridStat_APCP06h.conf +++ b/ush/templates/parm/metplus/GridStat_APCP06h.conf @@ -283,7 +283,7 @@ STAGING_DIR = {OUTPUT_BASE}/stage/APCP_06h [filename_templates] # Template to look for forecast input to PCPCombine and GridStat relative to FCST_GRID_STAT_INPUT_DIR -FCST_PCP_COMBINE_INPUT_TEMPLATE = {ENV[NET]}.t{init?fmt=%H}z.prslevf{lead?fmt=%HHH}.tm{init?fmt=%H}.grib2 +FCST_PCP_COMBINE_INPUT_TEMPLATE = {ENV[NET]}.t{init?fmt=%H}z.prslev.f{lead?fmt=%HHH}.{ENV[POST_OUTPUT_DOMAIN_NAME]}.grib2 FCST_GRID_STAT_INPUT_TEMPLATE = {FCST_PCP_COMBINE_OUTPUT_TEMPLATE} # Template to look for observation input to PCPCombine and GridStat relative to OBS_GRID_STAT_INPUT_DIR @@ -291,7 +291,7 @@ OBS_PCP_COMBINE_INPUT_TEMPLATE = {valid?fmt=%Y%m%d}/ccpa.t{valid?fmt=%H}z.01h.hr OBS_GRID_STAT_INPUT_TEMPLATE = {OBS_PCP_COMBINE_OUTPUT_TEMPLATE} # Optional subdirectories relative to GRID_STAT_OUTPUT_DIR to write output from PCPCombine and GridStat -FCST_PCP_COMBINE_OUTPUT_TEMPLATE = 
{ENV[NET]}.t{init?fmt=%H}z.prslevf{lead?fmt=%HHH}.tm{init?fmt=%H}_a{level?fmt=%HH}h +FCST_PCP_COMBINE_OUTPUT_TEMPLATE = {ENV[NET]}.t{init?fmt=%H}z.prslev.f{lead?fmt=%HHH}.{ENV[POST_OUTPUT_DOMAIN_NAME]}_a{level?fmt=%HH}h OBS_PCP_COMBINE_OUTPUT_TEMPLATE = {valid?fmt=%Y%m%d}/ccpa.t{valid?fmt=%H}z.hrap.conus.gb2_a{level?fmt=%HH}h GRID_STAT_OUTPUT_TEMPLATE = metprd/grid_stat diff --git a/ush/templates/parm/metplus/GridStat_APCP24h.conf b/ush/templates/parm/metplus/GridStat_APCP24h.conf index bcccfe8e1..7ae700cf3 100644 --- a/ush/templates/parm/metplus/GridStat_APCP24h.conf +++ b/ush/templates/parm/metplus/GridStat_APCP24h.conf @@ -283,7 +283,7 @@ STAGING_DIR = {OUTPUT_BASE}/stage/APCP_24h [filename_templates] # Template to look for forecast input to PCPCombine and GridStat relative to FCST_GRID_STAT_INPUT_DIR -FCST_PCP_COMBINE_INPUT_TEMPLATE = {ENV[NET]}.t{init?fmt=%H}z.prslevf{lead?fmt=%HHH}.tm{init?fmt=%H}.grib2 +FCST_PCP_COMBINE_INPUT_TEMPLATE = {ENV[NET]}.t{init?fmt=%H}z.prslev.f{lead?fmt=%HHH}.{ENV[POST_OUTPUT_DOMAIN_NAME]}.grib2 FCST_GRID_STAT_INPUT_TEMPLATE = {FCST_PCP_COMBINE_OUTPUT_TEMPLATE} # Template to look for observation input to PCPCombine and GridStat relative to OBS_GRID_STAT_INPUT_DIR @@ -291,7 +291,7 @@ OBS_PCP_COMBINE_INPUT_TEMPLATE = {valid?fmt=%Y%m%d}/ccpa.t{valid?fmt=%H}z.01h.hr OBS_GRID_STAT_INPUT_TEMPLATE = {OBS_PCP_COMBINE_OUTPUT_TEMPLATE} # Optional subdirectories relative to GRID_STAT_OUTPUT_DIR to write output from PCPCombine and GridStat -FCST_PCP_COMBINE_OUTPUT_TEMPLATE = {ENV[NET]}.t{init?fmt=%H}z.prslevf{lead?fmt=%HHH}.tm{init?fmt=%H}_a{level?fmt=%HH}h +FCST_PCP_COMBINE_OUTPUT_TEMPLATE = {ENV[NET]}.t{init?fmt=%H}z.prslev.f{lead?fmt=%HHH}.{ENV[POST_OUTPUT_DOMAIN_NAME]}_a{level?fmt=%HH}h OBS_PCP_COMBINE_OUTPUT_TEMPLATE = {valid?fmt=%Y%m%d}/ccpa.t{valid?fmt=%H}z.hrap.conus.gb2_a{level?fmt=%HH}h GRID_STAT_OUTPUT_TEMPLATE = metprd/grid_stat diff --git a/ush/templates/parm/metplus/GridStat_REFC.conf b/ush/templates/parm/metplus/GridStat_REFC.conf index 7eade0cb1..027723470 100644 --- a/ush/templates/parm/metplus/GridStat_REFC.conf +++ b/ush/templates/parm/metplus/GridStat_REFC.conf @@ -263,7 +263,7 @@ STAGING_DIR = {OUTPUT_BASE}/stage/REFC [filename_templates] # Template to look for forecast input to GridStat relative to FCST_GRID_STAT_INPUT_DIR -FCST_GRID_STAT_INPUT_TEMPLATE = {ENV[NET]}.t{init?fmt=%H}z.prslevf{lead?fmt=%HHH}.tm{init?fmt=%H}.grib2 +FCST_GRID_STAT_INPUT_TEMPLATE = {ENV[NET]}.t{init?fmt=%H}z.prslev.f{lead?fmt=%HHH}.{ENV[POST_OUTPUT_DOMAIN_NAME]}.grib2 # Template to look for observation input to GridStat relative to OBS_GRID_STAT_INPUT_DIR OBS_GRID_STAT_INPUT_TEMPLATE = {valid?fmt=%Y%m%d}/MergedReflectivityQCComposite_00.50_{valid?fmt=%Y%m%d}-{valid?fmt=%H}0000.grib2 diff --git a/ush/templates/parm/metplus/GridStat_RETOP.conf b/ush/templates/parm/metplus/GridStat_RETOP.conf index 4ef239d36..0b0029211 100644 --- a/ush/templates/parm/metplus/GridStat_RETOP.conf +++ b/ush/templates/parm/metplus/GridStat_RETOP.conf @@ -263,7 +263,7 @@ STAGING_DIR = {OUTPUT_BASE}/stage/RETOP [filename_templates] # Template to look for forecast input to GridStat relative to FCST_GRID_STAT_INPUT_DIR -FCST_GRID_STAT_INPUT_TEMPLATE = {ENV[NET]}.t{init?fmt=%H}z.prslevf{lead?fmt=%HHH}.tm{init?fmt=%H}.grib2 +FCST_GRID_STAT_INPUT_TEMPLATE = {ENV[NET]}.t{init?fmt=%H}z.prslev.f{lead?fmt=%HHH}.{ENV[POST_OUTPUT_DOMAIN_NAME]}.grib2 # Template to look for observation input to GridStat relative to OBS_GRID_STAT_INPUT_DIR OBS_GRID_STAT_INPUT_TEMPLATE = 
{valid?fmt=%Y%m%d}/EchoTop_18_00.50_{valid?fmt=%Y%m%d}-{valid?fmt=%H%M%S}.grib2 diff --git a/ush/templates/parm/metplus/PointStat_conus_sfc.conf b/ush/templates/parm/metplus/PointStat_conus_sfc.conf index 1558c4aa9..b3a2755a5 100644 --- a/ush/templates/parm/metplus/PointStat_conus_sfc.conf +++ b/ush/templates/parm/metplus/PointStat_conus_sfc.conf @@ -91,8 +91,7 @@ PB2NC_TIME_SUMMARY_TYPES = min, max, range, mean, stdev, median, p80 # or the value of the environment variable METPLUS_PARM_BASE if set POINT_STAT_CONFIG_FILE ={PARM_BASE}/met_config/PointStatConfig_wrapped - -#POINT_STAT_OBS_QUALITY_INC = 1, 2, 3 +POINT_STAT_OBS_QUALITY_INC = 0, 1, 2, 3, 9 #POINT_STAT_OBS_QUALITY_EXC = POINT_STAT_CLIMO_MEAN_TIME_INTERP_METHOD = NEAREST @@ -274,7 +273,7 @@ PB2NC_INPUT_TEMPLATE = prepbufr.ndas.{valid?fmt=%Y%m%d%H} PB2NC_OUTPUT_TEMPLATE = prepbufr.ndas.{valid?fmt=%Y%m%d%H}.nc # Template to look for forecast input to PointStat relative to FCST_POINT_STAT_INPUT_DIR -FCST_POINT_STAT_INPUT_TEMPLATE = {ENV[NET]}.t{init?fmt=%H}z.prslevf{lead?fmt=%HHH}.tm{init?fmt=%H}.grib2 +FCST_POINT_STAT_INPUT_TEMPLATE = {ENV[NET]}.t{init?fmt=%H}z.prslev.f{lead?fmt=%HHH}.{ENV[POST_OUTPUT_DOMAIN_NAME]}.grib2 # Template to look for observation input to PointStat relative to OBS_POINT_STAT_INPUT_DIR OBS_POINT_STAT_INPUT_TEMPLATE = prepbufr.ndas.{valid?fmt=%Y%m%d%H}.nc diff --git a/ush/templates/parm/metplus/PointStat_conus_sfc_mean.conf b/ush/templates/parm/metplus/PointStat_conus_sfc_mean.conf index db3b3f932..20e2d7392 100644 --- a/ush/templates/parm/metplus/PointStat_conus_sfc_mean.conf +++ b/ush/templates/parm/metplus/PointStat_conus_sfc_mean.conf @@ -91,8 +91,7 @@ PB2NC_TIME_SUMMARY_TYPES = min, max, range, mean, stdev, median, p80 # or the value of the environment variable METPLUS_PARM_BASE if set POINT_STAT_CONFIG_FILE ={PARM_BASE}/met_config/PointStatConfig_wrapped - -#POINT_STAT_OBS_QUALITY_INC = 1, 2, 3 +POINT_STAT_OBS_QUALITY_INC = 0, 1, 2, 3, 9 #POINT_STAT_OBS_QUALITY_EXC = POINT_STAT_CLIMO_MEAN_TIME_INTERP_METHOD = NEAREST diff --git a/ush/templates/parm/metplus/PointStat_conus_sfc_prob.conf b/ush/templates/parm/metplus/PointStat_conus_sfc_prob.conf index a1754261e..f28d02244 100644 --- a/ush/templates/parm/metplus/PointStat_conus_sfc_prob.conf +++ b/ush/templates/parm/metplus/PointStat_conus_sfc_prob.conf @@ -91,8 +91,7 @@ PB2NC_TIME_SUMMARY_TYPES = min, max, range, mean, stdev, median, p80 # or the value of the environment variable METPLUS_PARM_BASE if set POINT_STAT_CONFIG_FILE ={PARM_BASE}/met_config/PointStatConfig_wrapped - -#POINT_STAT_OBS_QUALITY_INC = 1, 2, 3 +POINT_STAT_OBS_QUALITY_INC = 0, 1, 2, 3, 9 #POINT_STAT_OBS_QUALITY_EXC = POINT_STAT_CLIMO_MEAN_TIME_INTERP_METHOD = NEAREST diff --git a/ush/templates/parm/metplus/PointStat_upper_air.conf b/ush/templates/parm/metplus/PointStat_upper_air.conf index e21f2566c..6110122c5 100644 --- a/ush/templates/parm/metplus/PointStat_upper_air.conf +++ b/ush/templates/parm/metplus/PointStat_upper_air.conf @@ -91,8 +91,7 @@ PB2NC_TIME_SUMMARY_TYPES = min, max, range, mean, stdev, median, p80 # or the value of the environment variable METPLUS_PARM_BASE if set POINT_STAT_CONFIG_FILE ={PARM_BASE}/met_config/PointStatConfig_wrapped - -#POINT_STAT_OBS_QUALITY_INC = 1, 2, 3 +POINT_STAT_OBS_QUALITY_INC = 0, 1, 2, 3, 9 #POINT_STAT_OBS_QUALITY_EXC = POINT_STAT_CLIMO_MEAN_TIME_INTERP_METHOD = NEAREST @@ -274,7 +273,7 @@ PB2NC_INPUT_TEMPLATE = prepbufr.ndas.{valid?fmt=%Y%m%d%H} PB2NC_OUTPUT_TEMPLATE = prepbufr.ndas.{valid?fmt=%Y%m%d%H}.nc # Template to look for forecast 
input to PointStat relative to FCST_POINT_STAT_INPUT_DIR -FCST_POINT_STAT_INPUT_TEMPLATE = {ENV[NET]}.t{init?fmt=%H}z.prslevf{lead?fmt=%HHH}.tm{init?fmt=%H}.grib2 +FCST_POINT_STAT_INPUT_TEMPLATE = {ENV[NET]}.t{init?fmt=%H}z.prslev.f{lead?fmt=%HHH}.{ENV[POST_OUTPUT_DOMAIN_NAME]}.grib2 # Template to look for observation input to PointStat relative to OBS_POINT_STAT_INPUT_DIR OBS_POINT_STAT_INPUT_TEMPLATE = prepbufr.ndas.{valid?fmt=%Y%m%d%H}.nc diff --git a/ush/templates/parm/metplus/PointStat_upper_air_mean.conf b/ush/templates/parm/metplus/PointStat_upper_air_mean.conf index 515253683..a2b655136 100644 --- a/ush/templates/parm/metplus/PointStat_upper_air_mean.conf +++ b/ush/templates/parm/metplus/PointStat_upper_air_mean.conf @@ -91,8 +91,7 @@ PB2NC_TIME_SUMMARY_TYPES = min, max, range, mean, stdev, median, p80 # or the value of the environment variable METPLUS_PARM_BASE if set POINT_STAT_CONFIG_FILE ={PARM_BASE}/met_config/PointStatConfig_wrapped - -#POINT_STAT_OBS_QUALITY_INC = 1, 2, 3 +POINT_STAT_OBS_QUALITY_INC = 0, 1, 2, 3, 9 #POINT_STAT_OBS_QUALITY_EXC = POINT_STAT_CLIMO_MEAN_TIME_INTERP_METHOD = NEAREST diff --git a/ush/templates/parm/metplus/PointStat_upper_air_prob.conf b/ush/templates/parm/metplus/PointStat_upper_air_prob.conf index 0a15f6f1f..0212f79d4 100644 --- a/ush/templates/parm/metplus/PointStat_upper_air_prob.conf +++ b/ush/templates/parm/metplus/PointStat_upper_air_prob.conf @@ -91,8 +91,7 @@ PB2NC_TIME_SUMMARY_TYPES = min, max, range, mean, stdev, median, p80 # or the value of the environment variable METPLUS_PARM_BASE if set POINT_STAT_CONFIG_FILE ={PARM_BASE}/met_config/PointStatConfig_wrapped - -#POINT_STAT_OBS_QUALITY_INC = 1, 2, 3 +POINT_STAT_OBS_QUALITY_INC = 0, 1, 2, 3, 9 #POINT_STAT_OBS_QUALITY_EXC = POINT_STAT_CLIMO_MEAN_TIME_INTERP_METHOD = NEAREST diff --git a/ush/test_data/RRFS_CONUS_3km/C3357.facsf.tile7.halo0.nc b/ush/test_data/RRFS_CONUS_3km/C3357.facsf.tile7.halo0.nc new file mode 100644 index 000000000..e69de29bb diff --git a/ush/test_data/RRFS_CONUS_3km/C3357.facsf.tile7.halo4.nc b/ush/test_data/RRFS_CONUS_3km/C3357.facsf.tile7.halo4.nc new file mode 100644 index 000000000..e69de29bb diff --git a/ush/test_data/RRFS_CONUS_3km/C3357.maximum_snow_albedo.tile7.halo0.nc b/ush/test_data/RRFS_CONUS_3km/C3357.maximum_snow_albedo.tile7.halo0.nc new file mode 100644 index 000000000..e69de29bb diff --git a/ush/test_data/RRFS_CONUS_3km/C3357.maximum_snow_albedo.tile7.halo4.nc b/ush/test_data/RRFS_CONUS_3km/C3357.maximum_snow_albedo.tile7.halo4.nc new file mode 100644 index 000000000..e69de29bb diff --git a/ush/test_data/RRFS_CONUS_3km/C3357.slope_type.tile7.halo0.nc b/ush/test_data/RRFS_CONUS_3km/C3357.slope_type.tile7.halo0.nc new file mode 100644 index 000000000..e69de29bb diff --git a/ush/test_data/RRFS_CONUS_3km/C3357.slope_type.tile7.halo4.nc b/ush/test_data/RRFS_CONUS_3km/C3357.slope_type.tile7.halo4.nc new file mode 100644 index 000000000..e69de29bb diff --git a/ush/test_data/RRFS_CONUS_3km/C3357.snowfree_albedo.tile7.halo0.nc b/ush/test_data/RRFS_CONUS_3km/C3357.snowfree_albedo.tile7.halo0.nc new file mode 100644 index 000000000..e69de29bb diff --git a/ush/test_data/RRFS_CONUS_3km/C3357.snowfree_albedo.tile7.halo4.nc b/ush/test_data/RRFS_CONUS_3km/C3357.snowfree_albedo.tile7.halo4.nc new file mode 100644 index 000000000..e69de29bb diff --git a/ush/test_data/RRFS_CONUS_3km/C3357.soil_type.tile7.halo0.nc b/ush/test_data/RRFS_CONUS_3km/C3357.soil_type.tile7.halo0.nc new file mode 100644 index 000000000..e69de29bb diff --git 
a/ush/test_data/RRFS_CONUS_3km/C3357.soil_type.tile7.halo4.nc b/ush/test_data/RRFS_CONUS_3km/C3357.soil_type.tile7.halo4.nc new file mode 100644 index 000000000..e69de29bb diff --git a/ush/test_data/RRFS_CONUS_3km/C3357.substrate_temperature.tile7.halo0.nc b/ush/test_data/RRFS_CONUS_3km/C3357.substrate_temperature.tile7.halo0.nc new file mode 100644 index 000000000..e69de29bb diff --git a/ush/test_data/RRFS_CONUS_3km/C3357.substrate_temperature.tile7.halo4.nc b/ush/test_data/RRFS_CONUS_3km/C3357.substrate_temperature.tile7.halo4.nc new file mode 100644 index 000000000..e69de29bb diff --git a/ush/test_data/RRFS_CONUS_3km/C3357.vegetation_greenness.tile7.halo0.nc b/ush/test_data/RRFS_CONUS_3km/C3357.vegetation_greenness.tile7.halo0.nc new file mode 100644 index 000000000..e69de29bb diff --git a/ush/test_data/RRFS_CONUS_3km/C3357.vegetation_greenness.tile7.halo4.nc b/ush/test_data/RRFS_CONUS_3km/C3357.vegetation_greenness.tile7.halo4.nc new file mode 100644 index 000000000..e69de29bb diff --git a/ush/test_data/RRFS_CONUS_3km/C3357.vegetation_type.tile7.halo0.nc b/ush/test_data/RRFS_CONUS_3km/C3357.vegetation_type.tile7.halo0.nc new file mode 100644 index 000000000..e69de29bb diff --git a/ush/test_data/RRFS_CONUS_3km/C3357.vegetation_type.tile7.halo4.nc b/ush/test_data/RRFS_CONUS_3km/C3357.vegetation_type.tile7.halo4.nc new file mode 100644 index 000000000..e69de29bb diff --git a/ush/test_data/RRFS_CONUS_3km/C3357_grid.tile7.halo3.nc b/ush/test_data/RRFS_CONUS_3km/C3357_grid.tile7.halo3.nc new file mode 100644 index 000000000..e69de29bb diff --git a/ush/test_data/RRFS_CONUS_3km/C3357_grid.tile7.halo4.nc b/ush/test_data/RRFS_CONUS_3km/C3357_grid.tile7.halo4.nc new file mode 100644 index 000000000..e69de29bb diff --git a/ush/test_data/RRFS_CONUS_3km/C3357_grid.tile7.halo6.nc b/ush/test_data/RRFS_CONUS_3km/C3357_grid.tile7.halo6.nc new file mode 100644 index 000000000..e69de29bb diff --git a/ush/test_data/RRFS_CONUS_3km/C3357_mosaic.halo3.nc b/ush/test_data/RRFS_CONUS_3km/C3357_mosaic.halo3.nc new file mode 100644 index 000000000..e69de29bb diff --git a/ush/test_data/RRFS_CONUS_3km/C3357_mosaic.halo4.nc b/ush/test_data/RRFS_CONUS_3km/C3357_mosaic.halo4.nc new file mode 100644 index 000000000..e69de29bb diff --git a/ush/test_data/RRFS_CONUS_3km/C3357_mosaic.halo6.nc b/ush/test_data/RRFS_CONUS_3km/C3357_mosaic.halo6.nc new file mode 100644 index 000000000..e69de29bb diff --git a/ush/test_data/RRFS_CONUS_3km/C3357_oro_data.tile7.halo0.nc b/ush/test_data/RRFS_CONUS_3km/C3357_oro_data.tile7.halo0.nc new file mode 100644 index 000000000..e69de29bb diff --git a/ush/test_data/RRFS_CONUS_3km/C3357_oro_data.tile7.halo4.nc b/ush/test_data/RRFS_CONUS_3km/C3357_oro_data.tile7.halo4.nc new file mode 100644 index 000000000..e69de29bb diff --git a/ush/test_data/RRFS_CONUS_3km/C3357_oro_data_ls.tile7.halo0.nc b/ush/test_data/RRFS_CONUS_3km/C3357_oro_data_ls.tile7.halo0.nc new file mode 100644 index 000000000..e69de29bb diff --git a/ush/test_data/RRFS_CONUS_3km/C3357_oro_data_ss.tile7.halo0.nc b/ush/test_data/RRFS_CONUS_3km/C3357_oro_data_ss.tile7.halo0.nc new file mode 100644 index 000000000..e69de29bb diff --git a/ush/test_data/suite_FV3_GSD_SAR.xml b/ush/test_data/suite_FV3_GSD_SAR.xml new file mode 100644 index 000000000..f2b38d577 --- /dev/null +++ b/ush/test_data/suite_FV3_GSD_SAR.xml @@ -0,0 +1,85 @@ + + + + + + + GFS_time_vary_pre + GFS_rrtmg_setup + GFS_rad_time_vary + GFS_phys_time_vary + + + + + GFS_suite_interstitial_rad_reset + sgscloud_radpre + GFS_rrtmg_pre + GFS_radiation_surface + rrtmg_sw_pre + 
rrtmg_sw + rrtmg_sw_post + rrtmg_lw_pre + rrtmg_lw + sgscloud_radpost + rrtmg_lw_post + GFS_rrtmg_post + + + + + GFS_suite_interstitial_phys_reset + GFS_suite_stateout_reset + get_prs_fv3 + GFS_suite_interstitial_1 + GFS_surface_generic_pre + GFS_surface_composites_pre + dcyc2t3 + GFS_surface_composites_inter + GFS_suite_interstitial_2 + + + + sfc_diff + GFS_surface_loop_control_part1 + sfc_nst_pre + sfc_nst + sfc_nst_post + lsm_ruc + GFS_surface_loop_control_part2 + + + + GFS_surface_composites_post + sfc_diag + sfc_diag_post + GFS_surface_generic_post + mynnedmf_wrapper + GFS_GWD_generic_pre + cires_ugwp + cires_ugwp_post + GFS_GWD_generic_post + GFS_suite_stateout_update + ozphys_2015 + h2ophys + get_phi_fv3 + + GFS_suite_interstitial_3 + GFS_suite_interstitial_4 + + GFS_MP_generic_pre + mp_thompson_pre + mp_thompson + mp_thompson_post + GFS_MP_generic_post + maximum_hourly_diagnostics + phys_tend + + + + + GFS_stochastics + + + + diff --git a/ush/valid_param_vals.sh b/ush/valid_param_vals.sh index 1ea3a86b8..69c08984e 100644 --- a/ush/valid_param_vals.sh +++ b/ush/valid_param_vals.sh @@ -4,7 +4,7 @@ valid_vals_RUN_ENVIR=("nco" "community") valid_vals_VERBOSE=("TRUE" "true" "YES" "yes" "FALSE" "false" "NO" "no") valid_vals_DEBUG=("TRUE" "true" "YES" "yes" "FALSE" "false" "NO" "no") -valid_vals_MACHINE=("WCOSS_DELL_P3" "HERA" "ORION" "JET" "ODIN" "CHEYENNE" "STAMPEDE" "LINUX" "MACOS" "NOAACLOUD" "SINGULARITY") +valid_vals_MACHINE=("WCOSS_DELL_P3" "HERA" "ORION" "JET" "ODIN" "CHEYENNE" "STAMPEDE" "LINUX" "MACOS" "NOAACLOUD" "SINGULARITY" "GAEA") valid_vals_SCHED=("slurm" "pbspro" "lsf" "lsfcray" "none") valid_vals_FCST_MODEL=("ufs-weather-model" "fv3gfs_aqm") valid_vals_WORKFLOW_MANAGER=("rocoto" "none") @@ -12,6 +12,9 @@ valid_vals_PREDEF_GRID_NAME=( \ "RRFS_CONUS_25km" \ "RRFS_CONUS_13km" \ "RRFS_CONUS_3km" \ +"RRFS_CONUScompact_25km" \ +"RRFS_CONUScompact_13km" \ +"RRFS_CONUScompact_3km" \ "RRFS_SUBCONUS_3km" \ "RRFS_AK_13km" \ "RRFS_AK_3km" \ @@ -27,6 +30,8 @@ valid_vals_PREDEF_GRID_NAME=( \ "GSD_HRRR_AK_50km" \ "RRFS_NA_13km" \ "RRFS_NA_3km" \ +"SUBCONUS_Ind_3km" \ +"WoFS_3km" \ ) valid_vals_CCPP_PHYS_SUITE=( \ "FV3_GFS_2017_gfdlmp" \ @@ -37,7 +42,7 @@ valid_vals_CCPP_PHYS_SUITE=( \ "FV3_RRFS_v1beta" \ "FV3_RRFS_v1alpha" \ "FV3_HRRR" \ -) +) valid_vals_GFDLgrid_RES=("48" "96" "192" "384" "768" "1152" "3072") valid_vals_EXTRN_MDL_NAME_ICS=("GSMGFS" "FV3GFS" "RAP" "HRRR" "NAM") valid_vals_EXTRN_MDL_NAME_LBCS=("GSMGFS" "FV3GFS" "RAP" "HRRR" "NAM") diff --git a/ush/valid_param_vals.yaml b/ush/valid_param_vals.yaml new file mode 100644 index 000000000..efadce608 --- /dev/null +++ b/ush/valid_param_vals.yaml @@ -0,0 +1,85 @@ +# +# Define valid values for various global experiment/workflow variables. 
+# +valid_vals_RUN_ENVIR: ["nco", "community"] +valid_vals_VERBOSE: [True, False] +valid_vals_DEBUG: [True, False] +valid_vals_MACHINE: ["WCOSS_DELL_P3", "HERA", "ORION", "JET", "ODIN", "CHEYENNE", "STAMPEDE", "LINUX", "MACOS", "NOAACLOUD", "SINGULARITY", "GAEA"] +valid_vals_SCHED: ["slurm", "pbspro", "lsf", "lsfcray", "none"] +valid_vals_FCST_MODEL: ["ufs-weather-model", "fv3gfs_aqm"] +valid_vals_WORKFLOW_MANAGER: ["rocoto", "none"] +valid_vals_PREDEF_GRID_NAME: [ +"RRFS_CONUS_25km", +"RRFS_CONUS_13km", +"RRFS_CONUS_3km", +"RRFS_CONUScompact_25km", +"RRFS_CONUScompact_13km", +"RRFS_CONUScompact_3km", +"RRFS_SUBCONUS_3km", +"RRFS_AK_13km", +"RRFS_AK_3km", +"CONUS_25km_GFDLgrid", +"CONUS_3km_GFDLgrid", +"EMC_AK", +"EMC_HI", +"EMC_PR", +"EMC_GU", +"GSL_HAFSV0.A_25km", +"GSL_HAFSV0.A_13km", +"GSL_HAFSV0.A_3km", +"GSD_HRRR_AK_50km", +"RRFS_NA_13km", +"RRFS_NA_3km", +"SUBCONUS_Ind_3km", +"WoFS_3km" +] +valid_vals_CCPP_PHYS_SUITE: [ +"FV3_GFS_2017_gfdlmp", +"FV3_GFS_2017_gfdlmp_regional", +"FV3_GFS_v15p2", +"FV3_GFS_v15_thompson_mynn_lam3km", +"FV3_GFS_v16", +"FV3_RRFS_v1beta", +"FV3_RRFS_v1alpha", +"FV3_HRRR" +] +valid_vals_GFDLgrid_RES: [48, 96, 192, 384, 768, 1152, 3072] +valid_vals_EXTRN_MDL_NAME_ICS: ["GSMGFS", "FV3GFS", "RAP", "HRRR", "NAM"] +valid_vals_EXTRN_MDL_NAME_LBCS: ["GSMGFS", "FV3GFS", "RAP", "HRRR", "NAM"] +valid_vals_USE_USER_STAGED_EXTRN_FILES: [True, False] +valid_vals_FV3GFS_FILE_FMT_ICS: ["nemsio", "grib2", "netcdf"] +valid_vals_FV3GFS_FILE_FMT_LBCS: ["nemsio", "grib2", "netcdf"] +valid_vals_GRID_GEN_METHOD: ["GFDLgrid", "ESGgrid"] +valid_vals_PREEXISTING_DIR_METHOD: ["delete", "rename", "quit"] +valid_vals_GTYPE: ["regional"] +valid_vals_WRTCMP_output_grid: ["rotated_latlon", "lambert_conformal", "regional_latlon"] +valid_vals_RUN_TASK_MAKE_GRID: [True, False] +valid_vals_RUN_TASK_MAKE_OROG: [True, False] +valid_vals_RUN_TASK_MAKE_SFC_CLIMO: [True, False] +valid_vals_RUN_TASK_RUN_POST: [True, False] +valid_vals_WRITE_DOPOST: [True, False] +valid_vals_RUN_TASK_VX_GRIDSTAT: [True, False] +valid_vals_RUN_TASK_VX_POINTSTAT: [True, False] +valid_vals_RUN_TASK_VX_ENSGRID: [True, False] +valid_vals_RUN_TASK_VX_ENSPOINT: [True, False] +valid_vals_QUILTING: [True, False] +valid_vals_PRINT_ESMF: [True, False] +valid_vals_USE_CRON_TO_RELAUNCH: [True, False] +valid_vals_DOT_OR_USCORE: [".", "_"] +valid_vals_NOMADS: [True, False] +valid_vals_NOMADS_file_type: ["GRIB2", "grib2", "NEMSIO", "nemsio"] +valid_vals_DO_ENSEMBLE: [True, False] +valid_vals_USE_CUSTOM_POST_CONFIG_FILE: [True, False] +valid_vals_USE_CRTM: [True, False] +valid_vals_DO_SHUM: [True, False] +valid_vals_DO_SPPT: [True, False] +valid_vals_DO_SPP: [True, False] +valid_vals_DO_LSM_SPP: [True, False] +valid_vals_DO_SKEB: [True, False] +valid_vals_USE_ZMTNBLCK: [True, False] +valid_vals_USE_FVCOM: [True, False] +valid_vals_FVCOM_WCSTART: ["warm", "WARM", "cold", "COLD"] +valid_vals_COMPILER: ["intel", "gnu"] +valid_vals_SUB_HOURLY_POST: [True, False] +valid_vals_DT_SUBHOURLY_POST_MNTS: [0, 1, 2, 3, 4, 5, 6, 10, 12, 15, 20, 30] +valid_vals_USE_MERRA_CLIMO: [True, False] diff --git a/ush/wrappers/qsub_job.sh b/ush/wrappers/qsub_job.sh index 797590509..15ec7eb14 100755 --- a/ush/wrappers/qsub_job.sh +++ b/ush/wrappers/qsub_job.sh @@ -1,11 +1,12 @@ #!/bin/sh -#PBS -A P48503002 +#PBS -A XXXXXXXXX #PBS -q regular #PBS -l select=1:mpiprocs=24:ncpus=24 +##PBS -l select=1:mpiprocs=1:ncpus=1 # USE FOR MET VERIFICATION #PBS -l walltime=02:30:00 #PBS -N run_make_grid -#PBS -j oe -o 
diff --git a/ush/wrappers/qsub_job.sh b/ush/wrappers/qsub_job.sh
index 797590509..15ec7eb14 100755
--- a/ush/wrappers/qsub_job.sh
+++ b/ush/wrappers/qsub_job.sh
@@ -1,11 +1,12 @@
 #!/bin/sh
-#PBS -A P48503002
+#PBS -A XXXXXXXXX
 #PBS -q regular
 #PBS -l select=1:mpiprocs=24:ncpus=24
+##PBS -l select=1:mpiprocs=1:ncpus=1 # USE FOR MET VERIFICATION
 #PBS -l walltime=02:30:00
 #PBS -N run_make_grid
-#PBS -j oe -o /glade/scratch/carson/ufs/expt_dirs/test_1/log/run_make_grid.log
-cd /glade/scratch/carson/ufs/expt_dirs/test_1
+#PBS -j oe -o /path/to/exptdir/log/run_make_grid.log # NEED TO SET
+cd /path/to/exptdir # NEED TO SET
 set -x

 # source /etc/profile.d/modules.sh
@@ -31,3 +32,15 @@ module load esmf/8.0.0
 ##module load esmf/8.0.0_bs50
 #
 ./run_make_grid.sh
+#
+#
+# Additional modules are needed for MET verification jobs
+#
+#module use /glade/p/ral/jntp/MET/MET_releases/modulefiles
+#module load met/10.0.0
+#ncar_pylib /glade/p/ral/jntp/UFS_SRW_app/ncar_pylib/python_graphics
+
+#./run_pointvx.sh # Run grid-to-point deterministic vx
+#./run_gridvx.sh # Run grid-stat deterministic vx
+#./run_pointensvx.sh # Run grid-to-point ensemble/probabilistic vx
+#./run_gridensvx.sh # Run grid-to-grid ensemble/probabilistic vx
diff --git a/ush/wrappers/run_gridensvx.sh b/ush/wrappers/run_gridensvx.sh
new file mode 100755
index 000000000..d6a3c76af
--- /dev/null
+++ b/ush/wrappers/run_gridensvx.sh
@@ -0,0 +1,22 @@
+#!/bin/sh
+
+# Stand-alone script to run grid-to-grid ensemble verification
+export GLOBAL_VAR_DEFNS_FP="${EXPTDIR}/var_defns.sh"
+set -x
+source ${GLOBAL_VAR_DEFNS_FP}
+export CDATE=${DATE_FIRST_CYCL}${CYCL_HRS}
+export CYCLE_DIR=${EXPTDIR}/${CDATE}
+export cyc=${CYCL_HRS}
+export PDY=${DATE_FIRST_CYCL}
+export OBS_DIR=${MRMS_OBS_DIR} # CCPA_OBS_DIR MRMS_OBS_DIR
+export VAR="REFC" # APCP REFC RETOP
+export ACCUM="" # 01 03 06 24 --> leave empty for REFC and RETOP
+
+export FHR=`echo $(seq 0 ${ACCUM} ${FCST_LEN_HRS}) | cut -d" " -f2-`
+
+${JOBSDIR}/JREGIONAL_RUN_VX_ENSGRID
+
+${JOBSDIR}/JREGIONAL_RUN_VX_ENSGRID_MEAN
+
+${JOBSDIR}/JREGIONAL_RUN_VX_ENSGRID_PROB
+
diff --git a/ush/wrappers/run_gridvx.sh b/ush/wrappers/run_gridvx.sh
new file mode 100755
index 000000000..a96383f6e
--- /dev/null
+++ b/ush/wrappers/run_gridvx.sh
@@ -0,0 +1,19 @@
+#!/bin/sh
+
+# Stand-alone script to run grid-to-grid verification
+export GLOBAL_VAR_DEFNS_FP="${EXPTDIR}/var_defns.sh"
+set -x
+source ${GLOBAL_VAR_DEFNS_FP}
+export CDATE=${DATE_FIRST_CYCL}${CYCL_HRS}
+export CYCLE_DIR=${EXPTDIR}/${CDATE}
+export cyc=${CYCL_HRS}
+export PDY=${DATE_FIRST_CYCL}
+export SLASH_ENSMEM_SUBDIR="" # When running with do_ensemble = true, need to run for each member, e.g., "/mem1"
+export OBS_DIR=${CCPA_OBS_DIR} # CCPA_OBS_DIR MRMS_OBS_DIR
+export VAR="APCP" # APCP REFC RETOP
+export ACCUM="06" # 01 03 06 24 --> leave empty for REFC and RETOP
+
+export FHR=`echo $(seq 0 ${ACCUM} ${FCST_LEN_HRS}) | cut -d" " -f2-`
+
+${JOBSDIR}/JREGIONAL_RUN_VX_GRIDSTAT
+
diff --git a/ush/wrappers/run_pointensvx.sh b/ush/wrappers/run_pointensvx.sh
new file mode 100755
index 000000000..142d9ec83
--- /dev/null
+++ b/ush/wrappers/run_pointensvx.sh
@@ -0,0 +1,20 @@
+#!/bin/sh
+
+# Stand-alone script to run grid-to-point ensemble verification
+export GLOBAL_VAR_DEFNS_FP="${EXPTDIR}/var_defns.sh"
+set -x
+source ${GLOBAL_VAR_DEFNS_FP}
+export CDATE=${DATE_FIRST_CYCL}${CYCL_HRS}
+export CYCLE_DIR=${EXPTDIR}/${CDATE}
+export cyc=${CYCL_HRS}
+export PDY=${DATE_FIRST_CYCL}
+export OBS_DIR=${NDAS_OBS_DIR}
+
+export FHR=`echo $(seq 0 1 ${FCST_LEN_HRS})`
+
+${JOBSDIR}/JREGIONAL_RUN_VX_ENSPOINT
+
+${JOBSDIR}/JREGIONAL_RUN_VX_ENSPOINT_MEAN
+
+${JOBSDIR}/JREGIONAL_RUN_VX_ENSPOINT_PROB
+
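The FHR line in the gridded-vx wrappers above, `FHR=` with `seq 0 ${ACCUM} ${FCST_LEN_HRS} | cut -d" " -f2-`, steps through the forecast by the accumulation interval and drops the leading forecast hour 0 (presumably because no accumulation is available at the initial time). A rough Python equivalent, with made-up ACCUM and FCST_LEN_HRS values, just to show the resulting list:

```
# Rough, illustrative equivalent of the wrappers' FHR construction; the
# ACCUM and FCST_LEN_HRS values here are made up for the example.
ACCUM = 6          # accumulation interval in hours; the scripts use "" (hourly) for REFC/RETOP
FCST_LEN_HRS = 36  # forecast length in hours

step = ACCUM if ACCUM else 1
FHR = " ".join(str(h) for h in range(step, FCST_LEN_HRS + 1, step))
print(FHR)  # "6 12 18 24 30 36"
```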
diff --git a/ush/wrappers/run_pointvx.sh b/ush/wrappers/run_pointvx.sh
new file mode 100755
index 000000000..527c16f37
--- /dev/null
+++ b/ush/wrappers/run_pointvx.sh
@@ -0,0 +1,17 @@
+#!/bin/sh
+
+# Stand-alone script to run grid-to-point verification
+export GLOBAL_VAR_DEFNS_FP="${EXPTDIR}/var_defns.sh"
+set -x
+source ${GLOBAL_VAR_DEFNS_FP}
+export CDATE=${DATE_FIRST_CYCL}${CYCL_HRS}
+export CYCLE_DIR=${EXPTDIR}/${CDATE}
+export cyc=${CYCL_HRS}
+export PDY=${DATE_FIRST_CYCL}
+export SLASH_ENSMEM_SUBDIR="" # When running with do_ensemble = true, need to run for each member, e.g., "/mem1"
+export OBS_DIR=${NDAS_OBS_DIR}
+
+export FHR=`echo $(seq 0 1 ${FCST_LEN_HRS})`
+
+${JOBSDIR}/JREGIONAL_RUN_VX_POINTSTAT
+
diff --git a/ush/wrappers/sq_job.sh b/ush/wrappers/sq_job.sh
index 394bfabe2..a9fa3c180 100755
--- a/ush/wrappers/sq_job.sh
+++ b/ush/wrappers/sq_job.sh
@@ -1,11 +1,12 @@
 #!/bin/sh
-#SBATCH -e /scratch1/BMC/gmtb/Laurie.Carson/expt_dirs/test_2/log/run_make_grid.log
-#SBATCH --account=gmtb
+#SBATCH -e /path/to/exptdir/log/run_make_grid.log # NEED TO SET
+#SBATCH --account=XXXXXXXXX
 #SBATCH --qos=batch
 #SBATCH --ntasks=48
+##SBATCH --ntasks=1 # USE FOR MET VERIFICATION
 #SBATCH --time=20
 #SBATCH --job-name="run_make_grid"
-cd /scratch1/BMC/gmtb/Laurie.Carson/expt_dirs/test_2
+cd /path/to/exptdir # NEED TO SET
 set -x

 . /apps/lmod/lmod/init/sh
@@ -41,3 +42,17 @@ module load miniconda3
 conda activate regional_workflow

 ./run_make_grid.sh
+#
+#
+# Additional modules are needed for MET verification jobs
+#
+#module use -a /contrib/anaconda/modulefiles
+#module load intel/18.0.5.274
+#module load anaconda/latest
+#module use -a /contrib/met/modulefiles/
+#module load met/10.0.0
+
+#./run_pointvx.sh # Run grid-to-point deterministic vx
+#./run_gridvx.sh # Run grid-stat deterministic vx
+#./run_pointensvx.sh # Run grid-to-point ensemble/probabilistic vx
+#./run_gridensvx.sh # Run grid-to-grid ensemble/probabilistic vx