From 8573ba8ab196c1e357a101462b16bd92128461b1 Mon Sep 17 00:00:00 2001 From: TRasmussen <33480590+TillRasmussen@users.noreply.github.com> Date: Thu, 16 Nov 2023 22:12:07 +0100 Subject: [PATCH] New 1d evp solver (#895) * New 1d evp solver * Small changes incl timer names and inclued private/publice in ice_dyn_core1d * fixed bug on gnu debug * moved halo update to evp1d, added deallocation, fixed bug * fixed deallocation dyn_evp1d * bugfix deallocate * Remove gather strintx and strinty * removed 4 test with evp1d and c/cd grid * Update of evp1d implementation - Rename halo_HTE_HTN to global_ext_halo and move into ice_grid.F90 - Generalize global_ext_halo to work with any nghost size (was hardcoded for nghost=1) - Remove argument from dyn_evp1d_init, change to "use" of global grid variables - rename pgl_global_ext to save_ghte_ghtn - Update allocation of G_HTE, G_HTN - Add dealloc_grid to deallocate G_HTE and G_HTN at end of initialization - Add calls to dealloc_grid to all CICE_InitMod.F90 subroutines - Make dimension of evp1d arguments implicit size more consistently - Clean up indentation and formatting a bit * Clean up trailing blanks * resolved name conflicts * 1d grid var name change --------- Co-authored-by: apcraig --- cicecore/cicedyn/dynamics/ice_dyn_core1d.F90 | 671 ++++++ cicecore/cicedyn/dynamics/ice_dyn_evp.F90 | 794 ++++--- cicecore/cicedyn/dynamics/ice_dyn_evp1d.F90 | 1467 +++++++++++++ cicecore/cicedyn/dynamics/ice_dyn_evp_1d.F90 | 1921 ----------------- cicecore/cicedyn/dynamics/ice_dyn_shared.F90 | 15 +- cicecore/cicedyn/dynamics/ice_dyn_vp.F90 | 12 +- .../cicedyn/dynamics/ice_transport_remap.F90 | 13 +- cicecore/cicedyn/general/ice_init.F90 | 25 +- .../infrastructure/comm/mpi/ice_boundary.F90 | 131 +- .../infrastructure/comm/mpi/ice_timers.F90 | 52 +- .../comm/serial/ice_boundary.F90 | 130 +- .../infrastructure/comm/serial/ice_timers.F90 | 84 +- cicecore/cicedyn/infrastructure/ice_grid.F90 | 203 +- .../drivers/direct/hadgem3/CICE_InitMod.F90 | 3 +- .../direct/nemo_concepts/CICE_InitMod.F90 | 3 +- cicecore/drivers/mct/cesm1/CICE_InitMod.F90 | 3 +- cicecore/drivers/nuopc/cmeps/CICE_InitMod.F90 | 4 +- cicecore/drivers/nuopc/dmi/CICE_FinalMod.F90 | 14 +- cicecore/drivers/nuopc/dmi/CICE_InitMod.F90 | 15 +- cicecore/drivers/nuopc/dmi/CICE_RunMod.F90 | 51 +- .../drivers/standalone/cice/CICE_InitMod.F90 | 4 +- .../unittest/gridavgchk/CICE_InitMod.F90 | 3 +- .../drivers/unittest/halochk/CICE_InitMod.F90 | 3 +- .../drivers/unittest/opticep/CICE_InitMod.F90 | 3 +- .../drivers/unittest/sumchk/CICE_InitMod.F90 | 3 +- .../scripts/machines/Macros.freya_intel | 6 +- configuration/scripts/tests/omp_suite.ts | 4 - 27 files changed, 2799 insertions(+), 2838 deletions(-) create mode 100644 cicecore/cicedyn/dynamics/ice_dyn_core1d.F90 create mode 100644 cicecore/cicedyn/dynamics/ice_dyn_evp1d.F90 delete mode 100644 cicecore/cicedyn/dynamics/ice_dyn_evp_1d.F90 diff --git a/cicecore/cicedyn/dynamics/ice_dyn_core1d.F90 b/cicecore/cicedyn/dynamics/ice_dyn_core1d.F90 new file mode 100644 index 000000000..f3f71b490 --- /dev/null +++ b/cicecore/cicedyn/dynamics/ice_dyn_core1d.F90 @@ -0,0 +1,671 @@ +!=============================================================================== +! Copyright (C) 2023, Intel Corporation +! Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: +! 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. +! 2. 
Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. +! 3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. +! THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS “AS IS” AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. +!=============================================================================== + +!=============================================================================== +! +! Elastic-viscous-plastic sea ice dynamics model +! Computes ice velocity and deformation +! +! See: +! +! Hunke, E. C., and J. K. Dukowicz (1997). An elastic-viscous-plastic model +! for sea ice dynamics. J. Phys. Oceanogr., 27, 1849-1867. +! +! Hunke, E. C. (2001). Viscous-Plastic Sea Ice Dynamics with the EVP Model: +! Linearization Issues. J. Comput. Phys., 170, 18-38. +! +! Hunke, E. C., and J. K. Dukowicz (2002). The Elastic-Viscous-Plastic +! Sea Ice Dynamics Model in General Orthogonal Curvilinear Coordinates +! on a Sphere - Incorporation of Metric Terms. Mon. Weather Rev., +! 130, 1848-1865. +! +! Hunke, E. C., and J. K. Dukowicz (2003). The sea ice momentum +! equation in the free drift regime. Los Alamos Tech. Rep. LA-UR-03-2219. +! +! Hibler, W. D. (1979). A dynamic thermodynamic sea ice model. J. Phys. +! Oceanogr., 9, 817-846. +! +! Bouillon, S., T. Fichefet, V. Legat and G. Madec (2013). The +! elastic-viscous-plastic method revisited. Ocean Model., 71, 2-12. +! +! author: Elizabeth C. Hunke, LANL +! +! 2003: Vectorized by Clifford Chen (Fujitsu) and William Lipscomb (LANL) +! 2004: Block structure added by William Lipscomb +! 2005: Removed boundary calls for stress arrays (WHL) +! 2006: Streamlined for efficiency by Elizabeth Hunke +! Converted to free source form (F90) +!=============================================================================== +! 2023: Intel +! Refactored for SIMD code generation +! Refactored to reduce memory footprint +! Refactored to support explicit inlining +! Refactored the OpenMP parallelization (classic loop inlined w. scoping) +! Refactored to support OpenMP GPU offloading +! Refactored to allow private subroutines in stress to become pure +!=============================================================================== +!=============================================================================== +! 2023: DMI +! Updated to match requirements from CICE +!=============================================================================== +! 
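! Note (condensed illustration of the refactoring described above): every
! computational kernel in this module uses one compact OpenMP pattern, selected
! at compile time by the cpp macro _OPENMP_TARGET, e.g.
!
!   #ifdef _OPENMP_TARGET
!   !$omp target teams distribute parallel do          (GPU offload)
!   #else
!   !$omp parallel do schedule(runtime) &
!   !$omp default(none) private(...) shared(...)       (explicitly scoped CPU threading)
!   #endif
!   do iw = lb, ub
!      if (skipme(iw)) cycle                            (masked points are skipped)
!      ...
!   enddo
!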
module is based on benchmark test v2c + +module ice_dyn_core1d + + use ice_dyn_shared, only: e_factor, epp2i, capping + use ice_constants, only: c1 + + implicit none + private + + public :: stress_1d, stepu_1d, calc_diag_1d + contains + + ! arguments ------------------------------------------------------------------ + subroutine stress_1d (ee, ne, se, lb, ub, & + uvel, vvel, dxT, dyT, skipme, strength, & + hte, htn, htem1, htnm1, & + stressp_1, stressp_2, stressp_3, stressp_4, & + stressm_1, stressm_2, stressm_3, stressm_4, & + stress12_1, stress12_2, stress12_3, stress12_4, & + str1, str2, str3, str4, str5, str6, str7, str8) + + use ice_kinds_mod + use ice_constants , only: p027, p055, p111, p166, c1p5, & + p222, p25, p333, p5 + + use ice_dyn_shared, only: arlx1i, denom1, revp, & + deltaminEVP, visc_replpress + ! + implicit none + ! arguments ------------------------------------------------------------------ + integer (kind=int_kind), intent(in) :: lb,ub + integer (kind=int_kind), dimension(:), intent(in), contiguous :: ee,ne,se + logical (kind=log_kind), dimension(:), intent(in), contiguous :: skipme + real (kind=dbl_kind), dimension(:), intent(in), contiguous :: & + strength , & ! ice strength (N/m) + uvel , & ! x-component of velocity (m/s) + vvel , & ! y-component of velocity (m/s) + dxT , & ! width of T-cell through the middle (m) + dyT , & ! height of T-cell through the middle (m) + hte , & + htn , & + htem1 , & + htnm1 + + real (kind=dbl_kind), dimension(:), intent(inout), contiguous :: & + stressp_1, stressp_2, stressp_3, stressp_4 , & ! sigma11+sigma22 + stressm_1, stressm_2, stressm_3, stressm_4 , & ! sigma11-sigma22 + stress12_1,stress12_2,stress12_3,stress12_4 ! sigma12 + + real (kind=dbl_kind), dimension(:), intent(inout), contiguous :: & + str1,str2,str3,str4,str5,str6,str7,str8 + + ! local variables + integer (kind=int_kind) :: iw + + real (kind=dbl_kind) :: & + divune, divunw, divuse, divusw , & ! divergence + tensionne, tensionnw, tensionse, tensionsw, & ! tension + shearne, shearnw, shearse, shearsw , & ! shearing + Deltane, Deltanw, Deltase, Deltasw , & ! Delt + zetax2ne, zetax2nw, zetax2se, zetax2sw , & ! 2 x zeta (bulk visc) + etax2ne, etax2nw, etax2se, etax2sw , & ! 2 x eta (shear visc) + rep_prsne, rep_prsnw, rep_prsse, rep_prssw, & ! 
replacement pressure + ssigpn, ssigps, ssigpe, ssigpw , & + ssigmn, ssigms, ssigme, ssigmw , & + ssig12n, ssig12s, ssig12e, ssig12w , & + ssigp1, ssigp2, ssigm1, ssigm2, ssig121, ssig122, & + csigpne, csigpnw, csigpse, csigpsw , & + csigmne, csigmnw, csigmse, csigmsw , & + csig12ne, csig12nw, csig12se, csig12sw , & + str12ew, str12we, str12ns, str12sn , & + strp_tmp, strm_tmp + + real (kind=dbl_kind) :: & + tmp_uvel_ee, tmp_vvel_se, tmp_vvel_ee, & + tmp_vvel_ne, tmp_uvel_ne, tmp_uvel_se, & + tmp_uvel_cc, tmp_vvel_cc, tmp_dxT, tmp_dyT, & + tmp_cxp, tmp_cyp, tmp_cxm, tmp_cym, & + tmp_strength, tmp_DminTarea, tmparea, & + tmp_dxhy, tmp_dyhx + + character(len=*), parameter :: subname = '(stress_1d)' + +#ifdef _OPENMP_TARGET + !$omp target teams distribute parallel do +#else + !$omp parallel do schedule(runtime) & + !$omp default(none) & + !$omp private(iw, divune, divunw, divuse, divusw , & + !$omp tensionne, tensionnw, tensionse, tensionsw , & + !$omp shearne, shearnw, shearse, shearsw , & + !$omp Deltane, Deltanw, Deltase, Deltasw , & + !$omp zetax2ne, zetax2nw, zetax2se, zetax2sw , & + !$omp etax2ne, etax2nw, etax2se, etax2sw , & + !$omp rep_prsne, rep_prsnw, rep_prsse, rep_prssw , & + !$omp ssigpn, ssigps, ssigpe, ssigpw , & + !$omp ssigmn, ssigms, ssigme, ssigmw , & + !$omp ssig12n, ssig12s, ssig12e, ssig12w, ssigp1 , & + !$omp ssigp2, ssigm1, ssigm2, ssig121, ssig122 , & + !$omp csigpne, csigpnw, csigpse, csigpsw , & + !$omp csigmne, csigmnw, csigmse, csigmsw , & + !$omp csig12ne, csig12nw, csig12se, csig12sw , & + !$omp str12ew, str12we, str12ns, str12sn , & + !$omp strp_tmp, strm_tmp , & + !$omp tmp_uvel_ee, tmp_vvel_se, tmp_vvel_ee , & + !$omp tmp_vvel_ne, tmp_uvel_ne, tmp_uvel_se , & + !$omp tmp_uvel_cc, tmp_vvel_cc, tmp_dxT, tmp_dyT , & + !$omp tmp_cxp, tmp_cyp, tmp_cxm, tmp_cym , & + !$omp tmp_strength, tmp_DminTarea, tmparea , & + !$omp tmp_dxhy, tmp_dyhx) & + !$omp shared(uvel,vvel,dxT,dyT,htn,hte,htnm1,htem1 , & + !$omp str1,str2,str3,str4,str5,str6,str7,str8 , & + !$omp stressp_1,stressp_2,stressp_3,stressp_4 , & + !$omp stressm_1,stressm_2,stressm_3,stressm_4 , & + !$omp stress12_1,stress12_2,stress12_3,stress12_4, & + !$omp deltaminEVP, arlx1i, denom1, e_factor , & + !$omp epp2i, capping, & + !$omp skipme,strength,ee,se,ne,lb,ub,revp) +#endif + + do iw = lb, ub + if (skipme(iw)) cycle + ! divergence = e_11 + e_22 + tmp_uvel_cc = uvel(iw) + tmp_vvel_cc = vvel(iw) + tmp_uvel_ee = uvel(ee(iw)) + tmp_vvel_se = vvel(se(iw)) + tmp_vvel_ee = vvel(ee(iw)) + tmp_vvel_ne = vvel(ne(iw)) + tmp_uvel_ne = uvel(ne(iw)) + tmp_uvel_se = uvel(se(iw)) + tmp_dxT = dxT(iw) + tmp_dyT = dyT(iw) + tmp_cxp = c1p5 * htn(iw) - p5 * htnm1(iw) + tmp_cyp = c1p5 * hte(iw) - p5 * htem1(iw) + tmp_cxm = -(c1p5 * htnm1(iw) - p5 * htn(iw)) + tmp_cym = -(c1p5 * htem1(iw) - p5 * hte(iw)) + tmp_strength = strength(iw) + tmparea = dxT(iw) * dyT(iw) ! necessary to split calc of DminTarea. Otherwize not binary identical + tmp_DminTarea = deltaminEVP * tmparea + tmp_dxhy = p5 * (hte(iw) - htem1(iw)) + tmp_dyhx = p5 * (htn(iw) - htnm1(iw)) + + !-------------------------------------------------------------------------- + ! 
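      ! Note: htem1/htnm1 are the HTE/HTN values of the neighbouring cell to the
      ! west/south (i.e. HTW/HTS of the current cell), so the terms above
      ! reproduce the 2D metric factors used by strain_rates_1d:
      !   cyp = 1.5*HTE - 0.5*HTW,  cxp = 1.5*HTN - 0.5*HTS,
      !   cym = 0.5*HTE - 1.5*HTW,  cxm = 0.5*HTN - 1.5*HTS,
      ! and tmp_dxhy = 0.5*(HTE-HTW), tmp_dyhx = 0.5*(HTN-HTS).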
strain rates - NOTE these are actually strain rates * area (m^2/s) + !-------------------------------------------------------------------------- + call strain_rates_1d (tmp_uvel_cc, tmp_vvel_cc, & + tmp_uvel_ee, tmp_vvel_ee, & + tmp_uvel_se, tmp_vvel_se, & + tmp_uvel_ne, tmp_vvel_ne, & + tmp_dxT , tmp_dyT , & + tmp_cxp , tmp_cyp , & + tmp_cxm , tmp_cym , & + divune , divunw , & + divuse , divusw , & + tensionne , tensionnw , & + tensionse , tensionsw , & + shearne , shearnw , & + shearse, shearsw , & + Deltane, Deltanw , & + Deltase, Deltasw ) + + !-------------------------------------------------------------------------- + ! viscosities and replacement pressure + !-------------------------------------------------------------------------- + call visc_replpress (tmp_strength, tmp_DminTarea, Deltane, & + zetax2ne, etax2ne, rep_prsne) + + call visc_replpress (tmp_strength, tmp_DminTarea, Deltanw, & + zetax2nw, etax2nw, rep_prsnw) + + call visc_replpress (tmp_strength, tmp_DminTarea, Deltasw, & + zetax2sw, etax2sw, rep_prssw) + + call visc_replpress (tmp_strength, tmp_DminTarea, Deltase, & + zetax2se, etax2se, rep_prsse) + + !-------------------------------------------------------------------------- + ! the stresses ! kg/s^2 + ! (1) northeast, (2) northwest, (3) southwest, (4) southeast + !-------------------------------------------------------------------------- + ! NOTE: for comp. efficiency 2 x zeta and 2 x eta are used in the code + + stressp_1 (iw) = (stressp_1 (iw)*(c1-arlx1i*revp) & + + arlx1i*(zetax2ne*divune - rep_prsne)) * denom1 + stressp_2 (iw) = (stressp_2 (iw)*(c1-arlx1i*revp) & + + arlx1i*(zetax2nw*divunw - rep_prsnw)) * denom1 + stressp_3 (iw) = (stressp_3 (iw)*(c1-arlx1i*revp)& + + arlx1i*(zetax2sw*divusw - rep_prssw)) * denom1 + stressp_4 (iw) = (stressp_4 (iw)*(c1-arlx1i*revp) & + + arlx1i*(zetax2se*divuse - rep_prsse)) * denom1 + + stressm_1 (iw) = (stressm_1 (iw)*(c1-arlx1i*revp) & + + arlx1i*etax2ne*tensionne) * denom1 + stressm_2 (iw) = (stressm_2 (iw)*(c1-arlx1i*revp) & + + arlx1i*etax2nw*tensionnw) * denom1 + stressm_3 (iw) = (stressm_3 (iw)*(c1-arlx1i*revp) & + + arlx1i*etax2sw*tensionsw) * denom1 + stressm_4 (iw) = (stressm_4 (iw)*(c1-arlx1i*revp) & + + arlx1i*etax2se*tensionse) * denom1 + + stress12_1(iw) = (stress12_1(iw)*(c1-arlx1i*revp) & + + arlx1i*p5*etax2ne*shearne) * denom1 + stress12_2(iw) = (stress12_2(iw)*(c1-arlx1i*revp) & + + arlx1i*p5*etax2nw*shearnw) * denom1 + stress12_3(iw) = (stress12_3(iw)*(c1-arlx1i*revp) & + + arlx1i*p5*etax2sw*shearsw) * denom1 + stress12_4(iw) = (stress12_4(iw)*(c1-arlx1i*revp) & + + arlx1i*p5*etax2se*shearse) * denom1 + + !-------------------------------------------------------------------------- + ! combinations of the stresses for the momentum equation ! 
kg/s^2 + !-------------------------------------------------------------------------- + ssigpn = stressp_1(iw) + stressp_2(iw) + ssigps = stressp_3(iw) + stressp_4(iw) + ssigpe = stressp_1(iw) + stressp_4(iw) + ssigpw = stressp_2(iw) + stressp_3(iw) + ssigp1 =(stressp_1(iw) + stressp_3(iw))*p055 + ssigp2 =(stressp_2(iw) + stressp_4(iw))*p055 + + ssigmn = stressm_1(iw) + stressm_2(iw) + ssigms = stressm_3(iw) + stressm_4(iw) + ssigme = stressm_1(iw) + stressm_4(iw) + ssigmw = stressm_2(iw) + stressm_3(iw) + ssigm1 =(stressm_1(iw) + stressm_3(iw))*p055 + ssigm2 =(stressm_2(iw) + stressm_4(iw))*p055 + + ssig12n = stress12_1(iw) + stress12_2(iw) + ssig12s = stress12_3(iw) + stress12_4(iw) + ssig12e = stress12_1(iw) + stress12_4(iw) + ssig12w = stress12_2(iw) + stress12_3(iw) + ssig121 =(stress12_1(iw) + stress12_3(iw))*p111 + ssig122 =(stress12_2(iw) + stress12_4(iw))*p111 + + csigpne = p111*stressp_1(iw) + ssigp2 + p027*stressp_3(iw) + csigpnw = p111*stressp_2(iw) + ssigp1 + p027*stressp_4(iw) + csigpsw = p111*stressp_3(iw) + ssigp2 + p027*stressp_1(iw) + csigpse = p111*stressp_4(iw) + ssigp1 + p027*stressp_2(iw) + + csigmne = p111*stressm_1(iw) + ssigm2 + p027*stressm_3(iw) + csigmnw = p111*stressm_2(iw) + ssigm1 + p027*stressm_4(iw) + csigmsw = p111*stressm_3(iw) + ssigm2 + p027*stressm_1(iw) + csigmse = p111*stressm_4(iw) + ssigm1 + p027*stressm_2(iw) + + csig12ne = p222*stress12_1(iw) + ssig122 & + + p055*stress12_3(iw) + csig12nw = p222*stress12_2(iw) + ssig121 & + + p055*stress12_4(iw) + csig12sw = p222*stress12_3(iw) + ssig122 & + + p055*stress12_1(iw) + csig12se = p222*stress12_4(iw) + ssig121 & + + p055*stress12_2(iw) + + str12ew = p5*tmp_dxt*(p333*ssig12e + p166*ssig12w) + str12we = p5*tmp_dxt*(p333*ssig12w + p166*ssig12e) + str12ns = p5*tmp_dyt*(p333*ssig12n + p166*ssig12s) + str12sn = p5*tmp_dyt*(p333*ssig12s + p166*ssig12n) + + !-------------------------------------------------------------------------- + ! for dF/dx (u momentum) + !-------------------------------------------------------------------------- + strp_tmp = p25*tmp_dyT*(p333*ssigpn + p166*ssigps) + strm_tmp = p25*tmp_dyT*(p333*ssigmn + p166*ssigms) + + ! northeast (i,j) + str1(iw) = -strp_tmp - strm_tmp - str12ew & + +tmp_dxhy*(-csigpne + csigmne) + tmp_dyhx*csig12ne + + ! northwest (i+1,j) + str2(iw) = strp_tmp + strm_tmp - str12we & + +tmp_dxhy*(-csigpnw + csigmnw) + tmp_dyhx*csig12nw + + strp_tmp = p25*tmp_dyT*(p333*ssigps + p166*ssigpn) + strm_tmp = p25*tmp_dyT*(p333*ssigms + p166*ssigmn) + + ! southeast (i,j+1) + str3(iw) = -strp_tmp - strm_tmp + str12ew & + +tmp_dxhy*(-csigpse + csigmse) + tmp_dyhx*csig12se + + ! southwest (i+1,j+1) + str4(iw) = strp_tmp + strm_tmp + str12we & + +tmp_dxhy*(-csigpsw + csigmsw) + tmp_dyhx*csig12sw + + !-------------------------------------------------------------------------- + ! for dF/dy (v momentum) + !-------------------------------------------------------------------------- + strp_tmp = p25*tmp_dxT*(p333*ssigpe + p166*ssigpw) + strm_tmp = p25*tmp_dxT*(p333*ssigme + p166*ssigmw) + + ! northeast (i,j) + str5(iw) = -strp_tmp + strm_tmp - str12ns & + -tmp_dyhx*(csigpne + csigmne) + tmp_dxhy*csig12ne + + ! southeast (i,j+1) + str6(iw) = strp_tmp - strm_tmp - str12sn & + -tmp_dyhx*(csigpse + csigmse) + tmp_dxhy*csig12se + + strp_tmp = p25*tmp_dxT*(p333*ssigpw + p166*ssigpe) + strm_tmp = p25*tmp_dxT*(p333*ssigmw + p166*ssigme) + + ! northwest (i+1,j) + str7(iw) = -strp_tmp + strm_tmp + str12ns & + -tmp_dyhx*(csigpnw + csigmnw) + tmp_dxhy*csig12nw + + ! 
southwest (i+1,j+1) + str8(iw) = strp_tmp - strm_tmp + str12sn & + -tmp_dyhx*(csigpsw + csigmsw) + tmp_dxhy*csig12sw + enddo +#ifdef _OPENMP_TARGET + !$omp end target teams distribute parallel do +#else + !$omp end parallel do +#endif + end subroutine stress_1d + + !============================================================================= + ! Compute strain rates + ! + ! author: Elizabeth C. Hunke, LANL + ! + ! 2019: subroutine created by Philippe Blain, ECCC + subroutine strain_rates_1d (tmp_uvel_cc, tmp_vvel_cc, & + tmp_uvel_ee, tmp_vvel_ee, & + tmp_uvel_se, tmp_vvel_se, & + tmp_uvel_ne, tmp_vvel_ne, & + dxT , dyT , & + cxp , cyp , & + cxm , cym , & + divune , divunw , & + divuse , divusw , & + tensionne , tensionnw , & + tensionse , tensionsw , & + shearne , shearnw , & + shearse , shearsw , & + Deltane , Deltanw , & + Deltase , Deltasw ) + + use ice_kinds_mod + + real (kind=dbl_kind), intent(in) :: & + tmp_uvel_ee, tmp_vvel_ee, tmp_uvel_se, tmp_vvel_se, & + tmp_uvel_cc, tmp_vvel_cc, tmp_uvel_ne, tmp_vvel_ne + + real (kind=dbl_kind), intent(in) :: & + dxT , & ! width of T-cell through the middle (m) + dyT , & ! height of T-cell through the middle (m) + cyp , & ! 1.5*HTE - 0.5*HTW + cxp , & ! 1.5*HTN - 0.5*HTS + cym , & ! 0.5*HTE - 1.5*HTW + cxm ! 0.5*HTN - 1.5*HTS + + real (kind=dbl_kind), intent(out):: & ! at each corner : + divune, divunw, divuse, divusw , & ! divergence + tensionne, tensionnw, tensionse, tensionsw, & ! tension + shearne, shearnw, shearse, shearsw , & ! shearing + Deltane, Deltanw, Deltase, Deltasw ! Delta + + character(len=*), parameter :: subname = '(strain_rates_1d)' + + !----------------------------------------------------------------------------- + ! strain rates + ! NOTE these are actually strain rates * area (m^2/s) + !----------------------------------------------------------------------------- + + ! divergence = e_11 + e_22 + divune = cyp*tmp_uvel_cc - dyT*tmp_uvel_ee & + + cxp*tmp_vvel_cc - dxT*tmp_vvel_se + + divunw = cym*tmp_uvel_ee + dyT*tmp_uvel_cc & + + cxp*tmp_vvel_ee - dxT*tmp_vvel_ne + + divusw = cym*tmp_uvel_ne + dyT*tmp_uvel_se & + + cxm*tmp_vvel_ne + dxT*tmp_vvel_ee + + divuse = cyp*tmp_uvel_se - dyT*tmp_uvel_ne & + + cxm*tmp_vvel_se + dxT*tmp_vvel_cc + + ! tension strain rate = e_11 - e_22 + tensionne = -cym*tmp_uvel_cc - dyT*tmp_uvel_ee & + +cxm*tmp_vvel_cc + dxT*tmp_vvel_se + + tensionnw = -cyp*tmp_uvel_ee + dyT*tmp_uvel_cc& + +cxm*tmp_vvel_ee + dxT*tmp_vvel_ne + + tensionsw = -cyp*tmp_uvel_ne + dyT*tmp_uvel_se & + +cxp*tmp_vvel_ne - dxT*tmp_vvel_ee + + tensionse = -cym*tmp_uvel_se - dyT*tmp_uvel_ne & + +cxp*tmp_vvel_se - dxT*tmp_vvel_cc + + ! shearing strain rate = 2*e_12 + shearne = -cym*tmp_vvel_cc - dyT*tmp_vvel_ee & + -cxm*tmp_uvel_cc - dxT*tmp_uvel_se + + shearnw = -cyp*tmp_vvel_ee + dyT*tmp_vvel_cc & + -cxm*tmp_uvel_ee - dxT*tmp_uvel_ne + + shearsw = -cyp*tmp_vvel_ne + dyT*tmp_vvel_se & + -cxp*tmp_uvel_ne + dxT*tmp_uvel_ee + + shearse = -cym*tmp_vvel_se - dyT*tmp_vvel_ne & + -cxp*tmp_uvel_se + dxT*tmp_uvel_cc + + ! Delta (in the denominator of zeta, eta) + Deltane = sqrt(divune**2 + e_factor*(tensionne**2 + shearne**2)) + Deltanw = sqrt(divunw**2 + e_factor*(tensionnw**2 + shearnw**2)) + Deltasw = sqrt(divusw**2 + e_factor*(tensionsw**2 + shearsw**2)) + Deltase = sqrt(divuse**2 + e_factor*(tensionse**2 + shearse**2)) + + end subroutine strain_rates_1d + + !============================================================================= + ! Calculation of the surface stresses + ! 
Integration of the momentum equation to find velocity (u,v) + ! author: Elizabeth C. Hunke, LANL + subroutine stepu_1d (lb , ub , & + Cw , aiX , & + uocn , vocn , & + waterx , watery , & + forcex , forcey , & + umassdti , fm , & + uarear , & + uvel_init, vvel_init, & + uvel , vvel , & + str1 , str2 , & + str3 , str4 , & + str5 , str6 , & + str7 , str8 , & + nw , sw , & + sse , skipme , & + Tbu, Cb, rhow) + + use ice_kinds_mod + use ice_dyn_shared, only: brlx, revp, u0, cosw, sinw + implicit none + + ! arguments ------------------------------------------------------------------ + integer(kind=int_kind), intent(in) :: lb,ub + integer(kind=int_kind), intent(in), dimension(:), contiguous :: nw,sw,sse + logical(kind=log_kind), intent(in), dimension(:), contiguous :: skipme + real (kind=dbl_kind), intent(in), dimension(:), contiguous :: & + Tbu, & ! coefficient for basal stress (N/m^2) + uvel_init,& ! x-component of velocity (m/s), beginning of timestep + vvel_init,& ! y-component of velocity (m/s), beginning of timestep + aiX, & ! ice fraction on u-grid + waterx, & ! for ocean stress calculation, x (m/s) + watery, & ! for ocean stress calculation, y (m/s) + forcex, & ! work array: combined atm stress and ocn tilt, x + forcey, & ! work array: combined atm stress and ocn tilt, y + Umassdti, & ! mass of U-cell/dt (kg/m^2 s) + uocn, & ! ocean current, x-direction (m/s) + vocn, & ! ocean current, y-direction (m/s) + fm, & ! Coriolis param. * mass in U-cell (kg/s) + uarear, & ! 1/uarea + Cw + + real (kind=dbl_kind),dimension(:), intent(in), contiguous :: & + str1,str2,str3,str4,str5,str6,str7,str8 + real (kind=dbl_kind),dimension(:), intent(inout), contiguous :: & + uvel, vvel + ! basal stress coefficient + real (kind=dbl_kind),dimension(:), intent(out), contiguous :: Cb + + real (kind=dbl_kind), intent(in) :: rhow + + ! local variables + integer (kind=int_kind) :: iw + + real (kind=dbl_kind) ::& + uold, vold , & ! old-time uvel, vvel + vrel , & ! relative ice-ocean velocity + cca,ccb,ab2,cc1,cc2,& ! intermediate variables + taux, tauy, & ! part of ocean stress term + strintx, strinty ! internal strength, changed to scalar and calculated after + real (kind=dbl_kind) :: & + tmp_str2_nw,tmp_str3_sse,tmp_str4_sw, & + tmp_str6_sse,tmp_str7_nw,tmp_str8_sw + + character(len=*), parameter :: subname = '(stepu_1d)' + + !----------------------------------------------------------------------------- + ! integrate the momentum equation + !----------------------------------------------------------------------------- +#ifdef _OPENMP_TARGET + !$omp target teams distribute parallel do +#else + !$omp parallel do schedule(runtime) & + !$omp default(none) & + !$omp private(iw, tmp_str2_nw,tmp_str3_sse,tmp_str4_sw, & + !$omp tmp_str6_sse,tmp_str7_nw,tmp_str8_sw, & + !$omp vrel, uold, vold, taux, tauy, cca, ccb, ab2, & + !$omp cc1, cc2,strintx, strinty) & + !$omp shared(uvel,vvel,str1,str2,str3,str4,str5,str6,str7,str8, & + !$omp Cb,nw,sw,sse,skipme,Tbu,uvel_init,vvel_init, & + !$omp aiX,waterx,watery,forcex,forcey,Umassdti,uocn,vocn,fm,uarear, & + !$omp Cw,lb,ub,brlx, revp, rhow) +#endif + do iw = lb, ub + if (skipme(iw)) cycle + + uold = uvel(iw) + vold = vvel(iw) + + ! (magnitude of relative ocean current)*rhow*drag*aice + vrel = aiX(iw)*rhow*Cw(iw)*sqrt((uocn(iw)-uold)**2+(vocn(iw)-vold)**2) + ! ice/ocean stress + taux = vrel*waterx(iw) ! NOTE this is not the entire + tauy = vrel*watery(iw) ! ocn stress term + + Cb(iw) = Tbu(iw) / (sqrt(uold**2 + vold**2) + u0) ! for basal stress + + ! 
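      ! Note: the update below is the closed-form solution of the 2x2 linear
      ! system that the semi-implicit momentum step reduces to at each point,
      !   cca*u - ccb*v = cc1
      !   ccb*u + cca*v = cc2
      ! with determinant ab2 = cca**2 + ccb**2, so that
      !   u = (cca*cc1 + ccb*cc2)/ab2   and   v = (cca*cc2 - ccb*cc1)/ab2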
revp = 0 for classic evp, 1 for revised evp + cca = (brlx + revp)*umassdti(iw) + vrel * cosw + Cb(iw) ! kg/m^2 s + ccb = fm(iw) + sign(c1,fm(iw)) * vrel * sinw ! kg/m^2 s + + ab2 = cca**2 + ccb**2 + + tmp_str2_nw = str2(nw(iw)) + tmp_str3_sse = str3(sse(iw)) + tmp_str4_sw = str4(sw(iw)) + tmp_str6_sse = str6(sse(iw)) + tmp_str7_nw = str7(nw(iw)) + tmp_str8_sw = str8(sw(iw)) + + ! divergence of the internal stress tensor + strintx = uarear(iw)*(str1(iw)+tmp_str2_nw+tmp_str3_sse+tmp_str4_sw) + strinty = uarear(iw)*(str5(iw)+tmp_str6_sse+tmp_str7_nw+tmp_str8_sw) + + ! finally, the velocity components + cc1 = strintx + forcex(iw) + taux & + + umassdti(iw)*(brlx*uold + revp*uvel_init(iw)) + cc2 = strinty + forcey(iw) + tauy & + + umassdti(iw)*(brlx*vold + revp*vvel_init(iw)) + uvel(iw) = (cca*cc1 + ccb*cc2) / ab2 ! m/s + vvel(iw) = (cca*cc2 - ccb*cc1) / ab2 + + ! calculate seabed stress component for outputs + ! only needed on last iteration. + enddo + +#ifdef _OPENMP_TARGET + !$omp end target teams distribute parallel do +#else + !$omp end parallel do +#endif + end subroutine stepu_1d + + !============================================================================= + ! calculates strintx and strinty if needed + subroutine calc_diag_1d (lb , ub , & + uarear , skipme , & + str1 , str2 , & + str3 , str4 , & + str5 , str6 , & + str7 , str8 , & + nw , sw , & + sse , & + strintx, strinty) + + use ice_kinds_mod + + real (kind=dbl_kind),dimension(:), intent(in), contiguous :: & + str1,str2,str3,str4,str5,str6,str7,str8 + real (kind=dbl_kind),dimension(:), intent(inout), contiguous :: & + strintx, strinty + + integer(kind=int_kind), intent(in) :: lb,ub + integer(kind=int_kind), intent(in), dimension(:), contiguous :: nw,sw,sse + logical(kind=log_kind), intent(in), dimension(:), contiguous :: skipme + real (kind=dbl_kind), intent(in), dimension(:), contiguous :: uarear + + ! local variables + integer (kind=int_kind) :: iw + real (kind=dbl_kind) :: & + tmp_str2_nw,tmp_str3_sse,tmp_str4_sw, & + tmp_str6_sse,tmp_str7_nw,tmp_str8_sw + + character(len=*), parameter :: subname = '(calc_diag_1d)' + +#ifdef _OPENMP_TARGET + !$omp target teams distribute parallel do +#else + !$omp parallel do schedule(runtime) & + !$omp default(none) & + !$omp private(iw, tmp_str2_nw,tmp_str3_sse,tmp_str4_sw, & + !$omp tmp_str6_sse,tmp_str7_nw,tmp_str8_sw) & + !$omp shared(strintx,strinty,str1,str2,str3,str4,str5,str6,str7,str8, & + !$omp nw,sw,sse,skipme, uarear, lb,ub) +#endif + + do iw = lb, ub + if (skipme(iw)) cycle + + tmp_str2_nw = str2(nw(iw)) + tmp_str3_sse = str3(sse(iw)) + tmp_str4_sw = str4(sw(iw)) + tmp_str6_sse = str6(sse(iw)) + tmp_str7_nw = str7(nw(iw)) + tmp_str8_sw = str8(sw(iw)) + + ! divergence of the internal stress tensor + strintx(iw) = uarear(iw)*(str1(iw)+tmp_str2_nw+tmp_str3_sse+tmp_str4_sw) + strinty(iw) = uarear(iw)*(str5(iw)+tmp_str6_sse+tmp_str7_nw+tmp_str8_sw) + enddo + +#ifdef _OPENMP_TARGET + !$omp end target teams distribute parallel do +#else + !$omp end parallel do +#endif + + end subroutine calc_diag_1d + +end module ice_dyn_core1d diff --git a/cicecore/cicedyn/dynamics/ice_dyn_evp.F90 b/cicecore/cicedyn/dynamics/ice_dyn_evp.F90 index a24c8f57d..ee832e447 100644 --- a/cicecore/cicedyn/dynamics/ice_dyn_evp.F90 +++ b/cicecore/cicedyn/dynamics/ice_dyn_evp.F90 @@ -119,19 +119,24 @@ module ice_dyn_evp ! Elastic-viscous-plastic dynamics driver ! 
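! The 1d kernels are selected at run time through the evp_algorithm option
! tested below; a sketch of the corresponding namelist entries, assuming the
! standard CICE dynamics_nml group (kdyn = 1 selects EVP and the default
! algorithm is 'standard_2d'):
!
!   &dynamics_nml
!     kdyn          = 1
!     evp_algorithm = 'shared_mem_1d'
!   /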
subroutine init_evp - use ice_blocks, only: nx_block, ny_block - use ice_domain_size, only: max_blocks - use ice_grid, only: grid_ice + use ice_blocks, only: nx_block, ny_block, nghost + use ice_domain_size, only: max_blocks, nx_global, ny_global + use ice_grid, only: grid_ice, dyT, dxT, uarear, tmask, G_HTE, G_HTN use ice_calendar, only: dt_dyn - use ice_dyn_shared, only: init_dyn_shared + use ice_dyn_shared, only: init_dyn_shared, evp_algorithm + use ice_dyn_evp1d, only: dyn_evp1d_init !allocate c and cd grid var. Follow structucre of eap integer (int_kind) :: ierr - character(len=*), parameter :: subname = '(alloc_dyn_evp)' + character(len=*), parameter :: subname = '(init_evp)' call init_dyn_shared(dt_dyn) + if (evp_algorithm == "shared_mem_1d" ) then + call dyn_evp1d_init + endif + allocate( uocnU (nx_block,ny_block,max_blocks), & ! i ocean current (m/s) vocnU (nx_block,ny_block,max_blocks), & ! j ocean current (m/s) ss_tltxU (nx_block,ny_block,max_blocks), & ! sea surface slope, x-direction (m/m) @@ -196,6 +201,7 @@ subroutine init_evp end subroutine init_evp +!======================================================================= #ifdef CICE_IN_NEMO ! Wind stress is set during this routine from the values supplied ! via NEMO (unless calc_strair is true). These values are supplied @@ -241,14 +247,13 @@ subroutine evp (dt) uvelE, vvelE, divu, shear, & aice_init, aice0, aicen, vicen, strength use ice_timers, only: timer_dynamics, timer_bound, & - ice_timer_start, ice_timer_stop, timer_evp_1d, timer_evp_2d - use ice_dyn_evp_1d, only: ice_dyn_evp_1d_copyin, ice_dyn_evp_1d_kernel, & - ice_dyn_evp_1d_copyout + ice_timer_start, ice_timer_stop, timer_evp use ice_dyn_shared, only: evp_algorithm, stack_fields, unstack_fields, & DminTarea, visc_method, deformations, deformationsC_T, deformationsCD_T, & strain_rates_U, & iceTmask, iceUmask, iceEmask, iceNmask, & dyn_haloUpdate, fld2, fld3, fld4 + use ice_dyn_evp1d, only: dyn_evp1d_run real (kind=dbl_kind), intent(in) :: & dt ! time step @@ -793,40 +798,23 @@ subroutine evp (dt) endif - if (evp_algorithm == "shared_mem_1d" ) then + call ice_timer_start(timer_evp) - if (trim(grid_type) == 'tripole') then - call abort_ice(trim(subname)//' & - & Kernel not tested on tripole grid. Set evp_algorithm=standard_2d') - endif + if (grid_ice == "B") then - call ice_timer_start(timer_evp_1d) - call ice_dyn_evp_1d_copyin( & - nx_block,ny_block,nblocks,nx_global+2*nghost,ny_global+2*nghost, & - iceTmask, iceUmask, & - cdn_ocnU,aiU,uocnU,vocnU,forcexU,forceyU,TbU, & - umassdti,fmU,uarear,tarear,strintxU,strintyU,uvel_init,vvel_init,& - strength,uvel,vvel,dxT,dyT, & - stressp_1 ,stressp_2, stressp_3, stressp_4, & - stressm_1 ,stressm_2, stressm_3, stressm_4, & - stress12_1,stress12_2,stress12_3,stress12_4 ) - call ice_dyn_evp_1d_kernel() - call ice_dyn_evp_1d_copyout( & - nx_block,ny_block,nblocks,nx_global+2*nghost,ny_global+2*nghost, & -!strocn uvel,vvel, strocnxU,strocnyU, strintxU,strintyU, & - uvel,vvel, strintxU,strintyU, & - stressp_1, stressp_2, stressp_3, stressp_4, & - stressm_1, stressm_2, stressm_3, stressm_4, & - stress12_1,stress12_2,stress12_3,stress12_4, & - divu,rdg_conv,rdg_shear,shear,taubxU,taubyU ) - call ice_timer_stop(timer_evp_1d) - - else ! 
evp_algorithm == standard_2d (Standard CICE) - - call ice_timer_start(timer_evp_2d) + if (evp_algorithm == "shared_mem_1d" ) then - if (grid_ice == "B") then + call dyn_evp1d_run(stressp_1 , stressp_2, stressp_3 , stressp_4 , & + stressm_1 , stressm_2 , stressm_3 , stressm_4 , & + stress12_1, stress12_2, stress12_3, stress12_4, & + strength , & + cdn_ocnU , aiu , uocnU , vocnU , & + waterxU , wateryU , forcexU , forceyU , & + umassdti , fmU , strintxU , strintyU , & + Tbu , taubxU , taubyU , uvel , & + vvel , icetmask , iceUmask) + else ! evp_algorithm == standard_2d (Standard CICE) do ksub = 1,ndte ! subcycling !$OMP PARALLEL DO PRIVATE(iblk,strtmp) SCHEDULE(runtime) @@ -851,7 +839,7 @@ subroutine evp (dt) stressm_3 (:,:,iblk), stressm_4 (:,:,iblk), & stress12_1(:,:,iblk), stress12_2(:,:,iblk), & stress12_3(:,:,iblk), stress12_4(:,:,iblk), & - strtmp (:,:,:) ) + strtmp (:,:,:)) !----------------------------------------------------------------- ! momentum equation @@ -881,406 +869,405 @@ subroutine evp (dt) uvel, vvel) enddo ! sub cycling + endif ! evp algorithm + + !----------------------------------------------------------------- + ! save quantities for mechanical redistribution + !----------------------------------------------------------------- + + !$OMP PARALLEL DO PRIVATE(iblk) SCHEDULE(runtime) + do iblk = 1, nblocks + call deformations (nx_block , ny_block , & + icellT (iblk), & + indxTi (:,iblk), indxTj (:,iblk), & + uvel (:,:,iblk), vvel (:,:,iblk), & + dxT (:,:,iblk), dyT (:,:,iblk), & + cxp (:,:,iblk), cyp (:,:,iblk), & + cxm (:,:,iblk), cym (:,:,iblk), & + tarear (:,:,iblk), & + shear (:,:,iblk), divu (:,:,iblk), & + rdg_conv(:,:,iblk), rdg_shear(:,:,iblk) ) + enddo + !$OMP END PARALLEL DO + + elseif (grid_ice == "C") then + + do ksub = 1,ndte ! subcycling + + !$OMP PARALLEL DO PRIVATE(iblk) + do iblk = 1, nblocks !----------------------------------------------------------------- - ! save quantities for mechanical redistribution + ! strain rates at U point + ! NOTE these are actually strain rates * area (m^2/s) !----------------------------------------------------------------- + call strain_rates_U (nx_block , ny_block , & + icellU (iblk), & + indxUi (:,iblk), indxUj (:,iblk), & + uvelE (:,:,iblk), vvelE (:,:,iblk), & + uvelN (:,:,iblk), vvelN (:,:,iblk), & + uvel (:,:,iblk), vvel (:,:,iblk), & + dxE (:,:,iblk), dyN (:,:,iblk), & + dxU (:,:,iblk), dyU (:,:,iblk), & + ratiodxN(:,:,iblk), ratiodxNr(:,:,iblk), & + ratiodyE(:,:,iblk), ratiodyEr(:,:,iblk), & + epm (:,:,iblk), npm (:,:,iblk), & + divergU (:,:,iblk), tensionU (:,:,iblk), & + shearU (:,:,iblk), deltaU (:,:,iblk) ) + + enddo ! iblk + !$OMP END PARALLEL DO - !$OMP PARALLEL DO PRIVATE(iblk) SCHEDULE(runtime) + ! 
calls ice_haloUpdate, controls bundles and masks + call dyn_haloUpdate (halo_info, halo_info_mask, & + field_loc_NEcorner, field_type_scalar, & + shearU) + + !$OMP PARALLEL DO PRIVATE(iblk) do iblk = 1, nblocks - call deformations (nx_block , ny_block , & - icellT (iblk), & - indxTi (:,iblk), indxTj (:,iblk), & - uvel (:,:,iblk), vvel (:,:,iblk), & - dxT (:,:,iblk), dyT (:,:,iblk), & - cxp (:,:,iblk), cyp (:,:,iblk), & - cxm (:,:,iblk), cym (:,:,iblk), & - tarear (:,:,iblk), & - shear (:,:,iblk), divu (:,:,iblk), & - rdg_conv(:,:,iblk), rdg_shear(:,:,iblk) ) + call stressC_T (nx_block , ny_block , & + icellT (iblk), & + indxTi (:,iblk), indxTj (:,iblk), & + uvelE (:,:,iblk), vvelE (:,:,iblk), & + uvelN (:,:,iblk), vvelN (:,:,iblk), & + dxN (:,:,iblk), dyE (:,:,iblk), & + dxT (:,:,iblk), dyT (:,:,iblk), & + uarea (:,:,iblk), DminTarea (:,:,iblk), & + strength (:,:,iblk), shearU (:,:,iblk), & + zetax2T (:,:,iblk), etax2T (:,:,iblk), & + stresspT (:,:,iblk), stressmT (:,:,iblk), & + stress12T (:,:,iblk)) + enddo !$OMP END PARALLEL DO + ! calls ice_haloUpdate, controls bundles and masks + call dyn_haloUpdate (halo_info, halo_info_mask, & + field_loc_center, field_type_scalar, & + zetax2T, etax2T, stresspT, stressmT) - elseif (grid_ice == "C") then - - do ksub = 1,ndte ! subcycling + if (visc_method == 'avg_strength') then + call grid_average_X2Y('S', strength, 'T', strengthU, 'U') + elseif (visc_method == 'avg_zeta') then + call grid_average_X2Y('S', etax2T , 'T', etax2U , 'U') + endif - !$OMP PARALLEL DO PRIVATE(iblk) - do iblk = 1, nblocks + !$OMP PARALLEL DO PRIVATE(iblk) + do iblk = 1, nblocks + call stressC_U (nx_block , ny_block , & + icellU (iblk), & + indxUi (:,iblk), indxUj (:,iblk), & + uarea (:,:,iblk), & + etax2U (:,:,iblk), deltaU (:,:,iblk), & + strengthU (:,:,iblk), shearU (:,:,iblk), & + stress12U (:,:,iblk)) + enddo + !$OMP END PARALLEL DO - !----------------------------------------------------------------- - ! strain rates at U point - ! NOTE these are actually strain rates * area (m^2/s) - !----------------------------------------------------------------- - call strain_rates_U (nx_block , ny_block , & - icellU (iblk), & - indxUi (:,iblk), indxUj (:,iblk), & - uvelE (:,:,iblk), vvelE (:,:,iblk), & - uvelN (:,:,iblk), vvelN (:,:,iblk), & - uvel (:,:,iblk), vvel (:,:,iblk), & - dxE (:,:,iblk), dyN (:,:,iblk), & - dxU (:,:,iblk), dyU (:,:,iblk), & - ratiodxN(:,:,iblk), ratiodxNr(:,:,iblk), & - ratiodyE(:,:,iblk), ratiodyEr(:,:,iblk), & - epm (:,:,iblk), npm (:,:,iblk), & - divergU (:,:,iblk), tensionU (:,:,iblk), & - shearU (:,:,iblk), deltaU (:,:,iblk) ) + ! calls ice_haloUpdate, controls bundles and masks + call dyn_haloUpdate (halo_info , halo_info_mask, & + field_loc_NEcorner, field_type_scalar, & + stress12U) - enddo ! iblk - !$OMP END PARALLEL DO + !$OMP PARALLEL DO PRIVATE(iblk) + do iblk = 1, nblocks - ! 
calls ice_haloUpdate, controls bundles and masks - call dyn_haloUpdate (halo_info, halo_info_mask, & - field_loc_NEcorner, field_type_scalar, & - shearU) + call div_stress_Ex (nx_block , ny_block , & + icellE (iblk), & + indxEi (:,iblk), indxEj (:,iblk), & + dxE (:,:,iblk), dyE (:,:,iblk), & + dxU (:,:,iblk), dyT (:,:,iblk), & + earear (:,:,iblk) , & + stresspT (:,:,iblk), stressmT (:,:,iblk), & + stress12U (:,:,iblk), strintxE (:,:,iblk) ) + + call div_stress_Ny (nx_block , ny_block , & + icellN (iblk), & + indxNi (:,iblk), indxNj (:,iblk), & + dxN (:,:,iblk), dyN (:,:,iblk), & + dxT (:,:,iblk), dyU (:,:,iblk), & + narear (:,:,iblk) , & + stresspT (:,:,iblk), stressmT (:,:,iblk), & + stress12U (:,:,iblk), strintyN (:,:,iblk) ) - !$OMP PARALLEL DO PRIVATE(iblk) - do iblk = 1, nblocks - call stressC_T (nx_block , ny_block , & - icellT (iblk), & - indxTi (:,iblk), indxTj (:,iblk), & - uvelE (:,:,iblk), vvelE (:,:,iblk), & - uvelN (:,:,iblk), vvelN (:,:,iblk), & - dxN (:,:,iblk), dyE (:,:,iblk), & - dxT (:,:,iblk), dyT (:,:,iblk), & - uarea (:,:,iblk), DminTarea (:,:,iblk), & - strength (:,:,iblk), shearU (:,:,iblk), & - zetax2T (:,:,iblk), etax2T (:,:,iblk), & - stresspT (:,:,iblk), stressmT (:,:,iblk), & - stress12T (:,:,iblk)) + enddo + !$OMP END PARALLEL DO - enddo - !$OMP END PARALLEL DO + !$OMP PARALLEL DO PRIVATE(iblk) + do iblk = 1, nblocks - ! calls ice_haloUpdate, controls bundles and masks - call dyn_haloUpdate (halo_info, halo_info_mask, & - field_loc_center, field_type_scalar, & - zetax2T, etax2T, stresspT, stressmT) - - if (visc_method == 'avg_strength') then - call grid_average_X2Y('S', strength, 'T', strengthU, 'U') - elseif (visc_method == 'avg_zeta') then - call grid_average_X2Y('S', etax2T , 'T', etax2U , 'U') - endif + call stepu_C (nx_block , ny_block , & ! u, E point + icellE (iblk), Cdn_ocnE (:,:,iblk), & + indxEi (:,iblk), indxEj (:,iblk), & + aiE (:,:,iblk), & + uocnE (:,:,iblk), vocnE (:,:,iblk), & + waterxE (:,:,iblk), forcexE (:,:,iblk), & + emassdti (:,:,iblk), fmE (:,:,iblk), & + strintxE (:,:,iblk), taubxE (:,:,iblk), & + uvelE_init(:,:,iblk), & + uvelE (:,:,iblk), vvelE (:,:,iblk), & + TbE (:,:,iblk)) - !$OMP PARALLEL DO PRIVATE(iblk) - do iblk = 1, nblocks - call stressC_U (nx_block , ny_block , & - icellU (iblk), & - indxUi (:,iblk), indxUj (:,iblk), & - uarea (:,:,iblk), & - etax2U (:,:,iblk), deltaU (:,:,iblk), & - strengthU (:,:,iblk), shearU (:,:,iblk), & - stress12U (:,:,iblk)) - enddo - !$OMP END PARALLEL DO + call stepv_C (nx_block, ny_block, & ! v, N point + icellN (iblk), Cdn_ocnN (:,:,iblk), & + indxNi (:,iblk), indxNj (:,iblk), & + aiN (:,:,iblk), & + uocnN (:,:,iblk), vocnN (:,:,iblk), & + wateryN (:,:,iblk), forceyN (:,:,iblk), & + nmassdti (:,:,iblk), fmN (:,:,iblk), & + strintyN (:,:,iblk), taubyN (:,:,iblk), & + vvelN_init(:,:,iblk), & + uvelN (:,:,iblk), vvelN (:,:,iblk), & + TbN (:,:,iblk)) + enddo + !$OMP END PARALLEL DO - ! calls ice_haloUpdate, controls bundles and masks - call dyn_haloUpdate (halo_info , halo_info_mask, & - field_loc_NEcorner, field_type_scalar, & - stress12U) + ! 
calls ice_haloUpdate, controls bundles and masks + call dyn_haloUpdate (halo_info, halo_info_mask, & + field_loc_Eface, field_type_vector, & + uvelE) + call dyn_haloUpdate (halo_info, halo_info_mask, & + field_loc_Nface, field_type_vector, & + vvelN) - !$OMP PARALLEL DO PRIVATE(iblk) - do iblk = 1, nblocks + call grid_average_X2Y('A', uvelE, 'E', uvelN, 'N') + call grid_average_X2Y('A', vvelN, 'N', vvelE, 'E') + uvelN(:,:,:) = uvelN(:,:,:)*npm(:,:,:) + vvelE(:,:,:) = vvelE(:,:,:)*epm(:,:,:) - call div_stress_Ex (nx_block , ny_block , & - icellE (iblk), & - indxEi (:,iblk), indxEj (:,iblk), & - dxE (:,:,iblk), dyE (:,:,iblk), & - dxU (:,:,iblk), dyT (:,:,iblk), & - earear (:,:,iblk) , & - stresspT (:,:,iblk), stressmT (:,:,iblk), & - stress12U (:,:,iblk), strintxE (:,:,iblk) ) - - call div_stress_Ny (nx_block , ny_block , & - icellN (iblk), & - indxNi (:,iblk), indxNj (:,iblk), & - dxN (:,:,iblk), dyN (:,:,iblk), & - dxT (:,:,iblk), dyU (:,:,iblk), & - narear (:,:,iblk) , & - stresspT (:,:,iblk), stressmT (:,:,iblk), & - stress12U (:,:,iblk), strintyN (:,:,iblk) ) + ! calls ice_haloUpdate, controls bundles and masks + call dyn_haloUpdate (halo_info, halo_info_mask, & + field_loc_Nface, field_type_vector, & + uvelN) + call dyn_haloUpdate (halo_info, halo_info_mask, & + field_loc_Eface, field_type_vector, & + vvelE) - enddo - !$OMP END PARALLEL DO + call grid_average_X2Y('S', uvelE, 'E', uvel, 'U') + call grid_average_X2Y('S', vvelN, 'N', vvel, 'U') - !$OMP PARALLEL DO PRIVATE(iblk) - do iblk = 1, nblocks + uvel(:,:,:) = uvel(:,:,:)*uvm(:,:,:) + vvel(:,:,:) = vvel(:,:,:)*uvm(:,:,:) + ! U fields at NE corner + ! calls ice_haloUpdate, controls bundles and masks + call dyn_haloUpdate (halo_info, halo_info_mask, & + field_loc_NEcorner, field_type_vector, & + uvel, vvel) - call stepu_C (nx_block , ny_block , & ! u, E point - icellE (iblk), Cdn_ocnE (:,:,iblk), & - indxEi (:,iblk), indxEj (:,iblk), & - aiE (:,:,iblk), & - uocnE (:,:,iblk), vocnE (:,:,iblk), & - waterxE (:,:,iblk), forcexE (:,:,iblk), & - emassdti (:,:,iblk), fmE (:,:,iblk), & - strintxE (:,:,iblk), taubxE (:,:,iblk), & - uvelE_init(:,:,iblk), & - uvelE (:,:,iblk), vvelE (:,:,iblk), & - TbE (:,:,iblk)) + enddo ! subcycling - call stepv_C (nx_block, ny_block, & ! v, N point - icellN (iblk), Cdn_ocnN (:,:,iblk), & - indxNi (:,iblk), indxNj (:,iblk), & - aiN (:,:,iblk), & - uocnN (:,:,iblk), vocnN (:,:,iblk), & - wateryN (:,:,iblk), forceyN (:,:,iblk), & - nmassdti (:,:,iblk), fmN (:,:,iblk), & - strintyN (:,:,iblk), taubyN (:,:,iblk), & - vvelN_init(:,:,iblk), & - uvelN (:,:,iblk), vvelN (:,:,iblk), & - TbN (:,:,iblk)) - enddo - !$OMP END PARALLEL DO - - ! calls ice_haloUpdate, controls bundles and masks - call dyn_haloUpdate (halo_info, halo_info_mask, & - field_loc_Eface, field_type_vector, & - uvelE) - call dyn_haloUpdate (halo_info, halo_info_mask, & - field_loc_Nface, field_type_vector, & - vvelN) - - call grid_average_X2Y('A', uvelE, 'E', uvelN, 'N') - call grid_average_X2Y('A', vvelN, 'N', vvelE, 'E') - uvelN(:,:,:) = uvelN(:,:,:)*npm(:,:,:) - vvelE(:,:,:) = vvelE(:,:,:)*epm(:,:,:) + !----------------------------------------------------------------- + ! save quantities for mechanical redistribution + !----------------------------------------------------------------- - ! 
calls ice_haloUpdate, controls bundles and masks - call dyn_haloUpdate (halo_info, halo_info_mask, & - field_loc_Nface, field_type_vector, & - uvelN) - call dyn_haloUpdate (halo_info, halo_info_mask, & - field_loc_Eface, field_type_vector, & - vvelE) - - call grid_average_X2Y('S', uvelE, 'E', uvel, 'U') - call grid_average_X2Y('S', vvelN, 'N', vvel, 'U') - - uvel(:,:,:) = uvel(:,:,:)*uvm(:,:,:) - vvel(:,:,:) = vvel(:,:,:)*uvm(:,:,:) - ! U fields at NE corner - ! calls ice_haloUpdate, controls bundles and masks - call dyn_haloUpdate (halo_info, halo_info_mask, & - field_loc_NEcorner, field_type_vector, & - uvel, vvel) + !$OMP PARALLEL DO PRIVATE(iblk) SCHEDULE(runtime) + do iblk = 1, nblocks + call deformationsC_T (nx_block , ny_block , & + icellT (iblk), & + indxTi (:,iblk), indxTj (:,iblk), & + uvelE (:,:,iblk), vvelE (:,:,iblk), & + uvelN (:,:,iblk), vvelN (:,:,iblk), & + dxN (:,:,iblk), dyE (:,:,iblk), & + dxT (:,:,iblk), dyT (:,:,iblk), & + tarear (:,:,iblk), uarea (:,:,iblk), & + shearU (:,:,iblk), & + shear (:,:,iblk), divu (:,:,iblk), & + rdg_conv(:,:,iblk), rdg_shear(:,:,iblk)) + enddo + !$OMP END PARALLEL DO - enddo ! subcycling + elseif (grid_ice == "CD") then - !----------------------------------------------------------------- - ! save quantities for mechanical redistribution - !----------------------------------------------------------------- + do ksub = 1,ndte ! subcycling - !$OMP PARALLEL DO PRIVATE(iblk) SCHEDULE(runtime) + !$OMP PARALLEL DO PRIVATE(iblk) do iblk = 1, nblocks - call deformationsC_T (nx_block , ny_block , & - icellT (iblk), & - indxTi (:,iblk), indxTj (:,iblk), & - uvelE (:,:,iblk), vvelE (:,:,iblk), & - uvelN (:,:,iblk), vvelN (:,:,iblk), & - dxN (:,:,iblk), dyE (:,:,iblk), & - dxT (:,:,iblk), dyT (:,:,iblk), & - tarear (:,:,iblk), uarea (:,:,iblk), & - shearU (:,:,iblk), & - shear (:,:,iblk), divu (:,:,iblk), & - rdg_conv(:,:,iblk), rdg_shear(:,:,iblk)) + call stressCD_T (nx_block , ny_block , & + icellT (iblk), & + indxTi (:,iblk), indxTj (:,iblk), & + uvelE (:,:,iblk), vvelE (:,:,iblk), & + uvelN (:,:,iblk), vvelN (:,:,iblk), & + dxN (:,:,iblk), dyE (:,:,iblk), & + dxT (:,:,iblk), dyT (:,:,iblk), & + DminTarea (:,:,iblk), & + strength (:,:,iblk), & + zetax2T (:,:,iblk), etax2T (:,:,iblk), & + stresspT (:,:,iblk), stressmT (:,:,iblk), & + stress12T (:,:,iblk) ) + enddo !$OMP END PARALLEL DO - elseif (grid_ice == "CD") then + ! calls ice_haloUpdate, controls bundles and masks + call dyn_haloUpdate (halo_info, halo_info_mask, & + field_loc_center, field_type_scalar, & + zetax2T, etax2T) - do ksub = 1,ndte ! subcycling + if (visc_method == 'avg_strength') then + call grid_average_X2Y('S', strength, 'T', strengthU, 'U') + elseif (visc_method == 'avg_zeta') then + call grid_average_X2Y('S', zetax2T , 'T', zetax2U , 'U') + call grid_average_X2Y('S', etax2T , 'T', etax2U , 'U') + endif - !$OMP PARALLEL DO PRIVATE(iblk) - do iblk = 1, nblocks - call stressCD_T (nx_block , ny_block , & - icellT (iblk), & - indxTi (:,iblk), indxTj (:,iblk), & - uvelE (:,:,iblk), vvelE (:,:,iblk), & - uvelN (:,:,iblk), vvelN (:,:,iblk), & - dxN (:,:,iblk), dyE (:,:,iblk), & - dxT (:,:,iblk), dyT (:,:,iblk), & - DminTarea (:,:,iblk), & - strength (:,:,iblk), & - zetax2T (:,:,iblk), etax2T (:,:,iblk), & - stresspT (:,:,iblk), stressmT (:,:,iblk), & - stress12T (:,:,iblk) ) + !$OMP PARALLEL DO PRIVATE(iblk) + do iblk = 1, nblocks + !----------------------------------------------------------------- + ! strain rates at U point + ! 
NOTE these are actually strain rates * area (m^2/s) + !----------------------------------------------------------------- + call strain_rates_U (nx_block , ny_block , & + icellU (iblk), & + indxUi (:,iblk), indxUj (:,iblk), & + uvelE (:,:,iblk), vvelE (:,:,iblk), & + uvelN (:,:,iblk), vvelN (:,:,iblk), & + uvel (:,:,iblk), vvel (:,:,iblk), & + dxE (:,:,iblk), dyN (:,:,iblk), & + dxU (:,:,iblk), dyU (:,:,iblk), & + ratiodxN (:,:,iblk), ratiodxNr(:,:,iblk), & + ratiodyE (:,:,iblk), ratiodyEr(:,:,iblk), & + epm (:,:,iblk), npm (:,:,iblk), & + divergU (:,:,iblk), tensionU (:,:,iblk), & + shearU (:,:,iblk), DeltaU (:,:,iblk) ) + + call stressCD_U (nx_block , ny_block , & + icellU (iblk), & + indxUi (:,iblk), indxUj (:,iblk), & + uarea (:,:,iblk), & + zetax2U (:,:,iblk), etax2U (:,:,iblk), & + strengthU(:,:,iblk), & + divergU (:,:,iblk), tensionU (:,:,iblk), & + shearU (:,:,iblk), DeltaU (:,:,iblk), & + stresspU (:,:,iblk), stressmU (:,:,iblk), & + stress12U(:,:,iblk)) + enddo + !$OMP END PARALLEL DO - enddo - !$OMP END PARALLEL DO + ! calls ice_haloUpdate, controls bundles and masks + call dyn_haloUpdate (halo_info, halo_info_mask, & + field_loc_center, field_type_scalar, & + stresspT, stressmT, stress12T) + call dyn_haloUpdate (halo_info, halo_info_mask, & + field_loc_NEcorner,field_type_scalar, & + stresspU, stressmU, stress12U) - ! calls ice_haloUpdate, controls bundles and masks - call dyn_haloUpdate (halo_info, halo_info_mask, & - field_loc_center, field_type_scalar, & - zetax2T, etax2T) - - if (visc_method == 'avg_strength') then - call grid_average_X2Y('S', strength, 'T', strengthU, 'U') - elseif (visc_method == 'avg_zeta') then - call grid_average_X2Y('S', zetax2T , 'T', zetax2U , 'U') - call grid_average_X2Y('S', etax2T , 'T', etax2U , 'U') - endif + !$OMP PARALLEL DO PRIVATE(iblk) + do iblk = 1, nblocks - !$OMP PARALLEL DO PRIVATE(iblk) - do iblk = 1, nblocks - !----------------------------------------------------------------- - ! strain rates at U point - ! 
NOTE these are actually strain rates * area (m^2/s) - !----------------------------------------------------------------- - call strain_rates_U (nx_block , ny_block , & - icellU (iblk), & - indxUi (:,iblk), indxUj (:,iblk), & - uvelE (:,:,iblk), vvelE (:,:,iblk), & - uvelN (:,:,iblk), vvelN (:,:,iblk), & - uvel (:,:,iblk), vvel (:,:,iblk), & - dxE (:,:,iblk), dyN (:,:,iblk), & - dxU (:,:,iblk), dyU (:,:,iblk), & - ratiodxN (:,:,iblk), ratiodxNr(:,:,iblk), & - ratiodyE (:,:,iblk), ratiodyEr(:,:,iblk), & - epm (:,:,iblk), npm (:,:,iblk), & - divergU (:,:,iblk), tensionU (:,:,iblk), & - shearU (:,:,iblk), DeltaU (:,:,iblk) ) - - call stressCD_U (nx_block , ny_block , & - icellU (iblk), & - indxUi (:,iblk), indxUj (:,iblk), & - uarea (:,:,iblk), & - zetax2U (:,:,iblk), etax2U (:,:,iblk), & - strengthU(:,:,iblk), & - divergU (:,:,iblk), tensionU (:,:,iblk), & - shearU (:,:,iblk), DeltaU (:,:,iblk), & - stresspU (:,:,iblk), stressmU (:,:,iblk), & - stress12U(:,:,iblk)) - enddo - !$OMP END PARALLEL DO + call div_stress_Ex (nx_block , ny_block , & + icellE (iblk), & + indxEi (:,iblk), indxEj (:,iblk), & + dxE (:,:,iblk), dyE (:,:,iblk), & + dxU (:,:,iblk), dyT (:,:,iblk), & + earear (:,:,iblk) , & + stresspT (:,:,iblk), stressmT (:,:,iblk), & + stress12U (:,:,iblk), strintxE (:,:,iblk) ) + + call div_stress_Ey (nx_block , ny_block , & + icellE (iblk), & + indxEi (:,iblk), indxEj (:,iblk), & + dxE (:,:,iblk), dyE (:,:,iblk), & + dxU (:,:,iblk), dyT (:,:,iblk), & + earear (:,:,iblk) , & + stresspU (:,:,iblk), stressmU (:,:,iblk), & + stress12T (:,:,iblk), strintyE (:,:,iblk) ) + + call div_stress_Nx (nx_block , ny_block , & + icellN (iblk), & + indxNi (:,iblk), indxNj (:,iblk), & + dxN (:,:,iblk), dyN (:,:,iblk), & + dxT (:,:,iblk), dyU (:,:,iblk), & + narear (:,:,iblk) , & + stresspU (:,:,iblk), stressmU (:,:,iblk), & + stress12T (:,:,iblk), strintxN (:,:,iblk) ) + + call div_stress_Ny (nx_block , ny_block , & + icellN (iblk), & + indxNi (:,iblk), indxNj (:,iblk), & + dxN (:,:,iblk), dyN (:,:,iblk), & + dxT (:,:,iblk), dyU (:,:,iblk), & + narear (:,:,iblk) , & + stresspT (:,:,iblk), stressmT (:,:,iblk), & + stress12U (:,:,iblk), strintyN (:,:,iblk) ) - ! 
calls ice_haloUpdate, controls bundles and masks - call dyn_haloUpdate (halo_info, halo_info_mask, & - field_loc_center, field_type_scalar, & - stresspT, stressmT, stress12T) - call dyn_haloUpdate (halo_info, halo_info_mask, & - field_loc_NEcorner,field_type_scalar, & - stresspU, stressmU, stress12U) - - !$OMP PARALLEL DO PRIVATE(iblk) - do iblk = 1, nblocks + enddo + !$OMP END PARALLEL DO - call div_stress_Ex (nx_block , ny_block , & - icellE (iblk), & - indxEi (:,iblk), indxEj (:,iblk), & - dxE (:,:,iblk), dyE (:,:,iblk), & - dxU (:,:,iblk), dyT (:,:,iblk), & - earear (:,:,iblk) , & - stresspT (:,:,iblk), stressmT (:,:,iblk), & - stress12U (:,:,iblk), strintxE (:,:,iblk) ) - - call div_stress_Ey (nx_block , ny_block , & - icellE (iblk), & - indxEi (:,iblk), indxEj (:,iblk), & - dxE (:,:,iblk), dyE (:,:,iblk), & - dxU (:,:,iblk), dyT (:,:,iblk), & - earear (:,:,iblk) , & - stresspU (:,:,iblk), stressmU (:,:,iblk), & - stress12T (:,:,iblk), strintyE (:,:,iblk) ) - - call div_stress_Nx (nx_block , ny_block , & - icellN (iblk), & - indxNi (:,iblk), indxNj (:,iblk), & - dxN (:,:,iblk), dyN (:,:,iblk), & - dxT (:,:,iblk), dyU (:,:,iblk), & - narear (:,:,iblk) , & - stresspU (:,:,iblk), stressmU (:,:,iblk), & - stress12T (:,:,iblk), strintxN (:,:,iblk) ) - - call div_stress_Ny (nx_block , ny_block , & - icellN (iblk), & - indxNi (:,iblk), indxNj (:,iblk), & - dxN (:,:,iblk), dyN (:,:,iblk), & - dxT (:,:,iblk), dyU (:,:,iblk), & - narear (:,:,iblk) , & - stresspT (:,:,iblk), stressmT (:,:,iblk), & - stress12U (:,:,iblk), strintyN (:,:,iblk) ) + !$OMP PARALLEL DO PRIVATE(iblk) + do iblk = 1, nblocks - enddo - !$OMP END PARALLEL DO + call stepuv_CD (nx_block , ny_block , & ! E point + icellE (iblk), Cdn_ocnE (:,:,iblk), & + indxEi (:,iblk), indxEj (:,iblk), & + aiE (:,:,iblk), & + uocnE (:,:,iblk), vocnE (:,:,iblk), & + waterxE (:,:,iblk), wateryE (:,:,iblk), & + forcexE (:,:,iblk), forceyE (:,:,iblk), & + emassdti (:,:,iblk), fmE (:,:,iblk), & + strintxE (:,:,iblk), strintyE (:,:,iblk), & + taubxE (:,:,iblk), taubyE (:,:,iblk), & + uvelE_init(:,:,iblk), vvelE_init(:,:,iblk), & + uvelE (:,:,iblk), vvelE (:,:,iblk), & + TbE (:,:,iblk)) + + call stepuv_CD (nx_block , ny_block , & ! N point + icellN (iblk), Cdn_ocnN (:,:,iblk), & + indxNi (:,iblk), indxNj (:,iblk), & + aiN (:,:,iblk), & + uocnN (:,:,iblk), vocnN (:,:,iblk), & + waterxN (:,:,iblk), wateryN (:,:,iblk), & + forcexN (:,:,iblk), forceyN (:,:,iblk), & + nmassdti (:,:,iblk), fmN (:,:,iblk), & + strintxN (:,:,iblk), strintyN (:,:,iblk), & + taubxN (:,:,iblk), taubyN (:,:,iblk), & + uvelN_init(:,:,iblk), vvelN_init(:,:,iblk), & + uvelN (:,:,iblk), vvelN (:,:,iblk), & + TbN (:,:,iblk)) + enddo + !$OMP END PARALLEL DO - !$OMP PARALLEL DO PRIVATE(iblk) - do iblk = 1, nblocks + ! calls ice_haloUpdate, controls bundles and masks + call dyn_haloUpdate (halo_info, halo_info_mask, & + field_loc_Eface, field_type_vector, & + uvelE, vvelE) + call dyn_haloUpdate (halo_info, halo_info_mask, & + field_loc_Nface, field_type_vector, & + uvelN, vvelN) - call stepuv_CD (nx_block , ny_block , & ! 
E point - icellE (iblk), Cdn_ocnE (:,:,iblk), & - indxEi (:,iblk), indxEj (:,iblk), & - aiE (:,:,iblk), & - uocnE (:,:,iblk), vocnE (:,:,iblk), & - waterxE (:,:,iblk), wateryE (:,:,iblk), & - forcexE (:,:,iblk), forceyE (:,:,iblk), & - emassdti (:,:,iblk), fmE (:,:,iblk), & - strintxE (:,:,iblk), strintyE (:,:,iblk), & - taubxE (:,:,iblk), taubyE (:,:,iblk), & - uvelE_init(:,:,iblk), vvelE_init(:,:,iblk), & - uvelE (:,:,iblk), vvelE (:,:,iblk), & - TbE (:,:,iblk)) - - call stepuv_CD (nx_block , ny_block , & ! N point - icellN (iblk), Cdn_ocnN (:,:,iblk), & - indxNi (:,iblk), indxNj (:,iblk), & - aiN (:,:,iblk), & - uocnN (:,:,iblk), vocnN (:,:,iblk), & - waterxN (:,:,iblk), wateryN (:,:,iblk), & - forcexN (:,:,iblk), forceyN (:,:,iblk), & - nmassdti (:,:,iblk), fmN (:,:,iblk), & - strintxN (:,:,iblk), strintyN (:,:,iblk), & - taubxN (:,:,iblk), taubyN (:,:,iblk), & - uvelN_init(:,:,iblk), vvelN_init(:,:,iblk), & - uvelN (:,:,iblk), vvelN (:,:,iblk), & - TbN (:,:,iblk)) - enddo - !$OMP END PARALLEL DO + call grid_average_X2Y('S', uvelE, 'E', uvel, 'U') + call grid_average_X2Y('S', vvelN, 'N', vvel, 'U') - ! calls ice_haloUpdate, controls bundles and masks - call dyn_haloUpdate (halo_info, halo_info_mask, & - field_loc_Eface, field_type_vector, & - uvelE, vvelE) - call dyn_haloUpdate (halo_info, halo_info_mask, & - field_loc_Nface, field_type_vector, & - uvelN, vvelN) - - call grid_average_X2Y('S', uvelE, 'E', uvel, 'U') - call grid_average_X2Y('S', vvelN, 'N', vvel, 'U') - - uvel(:,:,:) = uvel(:,:,:)*uvm(:,:,:) - vvel(:,:,:) = vvel(:,:,:)*uvm(:,:,:) + uvel(:,:,:) = uvel(:,:,:)*uvm(:,:,:) + vvel(:,:,:) = vvel(:,:,:)*uvm(:,:,:) ! U fields at NE corner ! calls ice_haloUpdate, controls bundles and masks - call dyn_haloUpdate (halo_info, halo_info_mask, & - field_loc_NEcorner, field_type_vector, & - uvel, vvel) + call dyn_haloUpdate (halo_info, halo_info_mask, & + field_loc_NEcorner, field_type_vector, & + uvel, vvel) - enddo ! subcycling + enddo ! subcycling - !----------------------------------------------------------------- - ! save quantities for mechanical redistribution - !----------------------------------------------------------------- + !----------------------------------------------------------------- + ! save quantities for mechanical redistribution + !----------------------------------------------------------------- - !$OMP PARALLEL DO PRIVATE(iblk) - do iblk = 1, nblocks - call deformationsCD_T (nx_block , ny_block , & - icellT (iblk), & - indxTi (:,iblk), indxTj (:,iblk), & - uvelE (:,:,iblk), vvelE (:,:,iblk), & - uvelN (:,:,iblk), vvelN (:,:,iblk), & - dxN (:,:,iblk), dyE (:,:,iblk), & - dxT (:,:,iblk), dyT (:,:,iblk), & - tarear (:,:,iblk), & - shear (:,:,iblk), divu (:,:,iblk), & - rdg_conv(:,:,iblk), rdg_shear(:,:,iblk)) - enddo - !$OMP END PARALLEL DO - endif ! grid_ice + !$OMP PARALLEL DO PRIVATE(iblk) + do iblk = 1, nblocks + call deformationsCD_T (nx_block , ny_block , & + icellT (iblk), & + indxTi (:,iblk), indxTj (:,iblk), & + uvelE (:,:,iblk), vvelE (:,:,iblk), & + uvelN (:,:,iblk), vvelN (:,:,iblk), & + dxN (:,:,iblk), dyE (:,:,iblk), & + dxT (:,:,iblk), dyT (:,:,iblk), & + tarear (:,:,iblk), & + shear (:,:,iblk), divu (:,:,iblk), & + rdg_conv(:,:,iblk), rdg_shear(:,:,iblk)) + enddo + !$OMP END PARALLEL DO + endif ! grid_ice - call ice_timer_stop(timer_evp_2d) - endif ! 
evp_algorithm + call ice_timer_stop(timer_evp) if (maskhalo_dyn) then call ice_HaloDestroy(halo_info_mask) @@ -1439,7 +1426,7 @@ subroutine stress (nx_block, ny_block, & stress12_3, stress12_4, & str ) - use ice_dyn_shared, only: strain_rates, visc_replpress, capping + use ice_dyn_shared, only: strain_rates, visc_replpress integer (kind=int_kind), intent(in) :: & nx_block, ny_block, & ! block dimensions @@ -1532,16 +1519,16 @@ subroutine stress (nx_block, ny_block, & !----------------------------------------------------------------- call visc_replpress (strength(i,j), DminTarea(i,j), Deltane, & - zetax2ne, etax2ne, rep_prsne, capping) + zetax2ne, etax2ne, rep_prsne) call visc_replpress (strength(i,j), DminTarea(i,j), Deltanw, & - zetax2nw, etax2nw, rep_prsnw, capping) + zetax2nw, etax2nw, rep_prsnw) call visc_replpress (strength(i,j), DminTarea(i,j), Deltasw, & - zetax2sw, etax2sw, rep_prssw, capping) + zetax2sw, etax2sw, rep_prssw) call visc_replpress (strength(i,j), DminTarea(i,j), Deltase, & - zetax2se, etax2se, rep_prsse, capping) + zetax2se, etax2se, rep_prsse) !----------------------------------------------------------------- ! the stresses ! kg/s^2 @@ -1549,7 +1536,6 @@ subroutine stress (nx_block, ny_block, & !----------------------------------------------------------------- ! NOTE: for comp. efficiency 2 x zeta and 2 x eta are used in the code - stressp_1 (i,j) = (stressp_1 (i,j)*(c1-arlx1i*revp) & + arlx1i*(zetax2ne*divune - rep_prsne)) * denom1 stressp_2 (i,j) = (stressp_2 (i,j)*(c1-arlx1i*revp) & @@ -1736,7 +1722,7 @@ subroutine stressC_T (nx_block, ny_block , & stresspT , stressmT , & stress12T) - use ice_dyn_shared, only: strain_rates_T, capping, & + use ice_dyn_shared, only: strain_rates_T, & visc_replpress, e_factor integer (kind=int_kind), intent(in) :: & @@ -1829,7 +1815,7 @@ subroutine stressC_T (nx_block, ny_block , & !----------------------------------------------------------------- call visc_replpress (strength(i,j), DminTarea(i,j), DeltaT, & - zetax2T (i,j), etax2T(i,j), rep_prsT, capping) + zetax2T (i,j), etax2T(i,j), rep_prsT) !----------------------------------------------------------------- ! the stresses ! kg/s^2 @@ -1872,7 +1858,7 @@ subroutine stressC_U (nx_block , ny_block ,& stress12U) use ice_dyn_shared, only: visc_replpress, & - visc_method, deltaminEVP, capping + visc_method, deltaminEVP integer (kind=int_kind), intent(in) :: & nx_block, ny_block, & ! block dimensions @@ -1928,7 +1914,7 @@ subroutine stressC_U (nx_block , ny_block ,& ! only need etax2U here, but other terms are calculated with etax2U ! minimal extra calculations here even though it seems like there is call visc_replpress (strengthU(i,j), DminUarea, deltaU(i,j), & - lzetax2U , letax2U , lrep_prsU , capping) + lzetax2U , letax2U , lrep_prsU) stress12U(i,j) = (stress12U(i,j)*(c1-arlx1i*revp) & + arlx1i*p5*letax2U*shearU(i,j)) * denom1 enddo @@ -1956,7 +1942,7 @@ subroutine stressCD_T (nx_block, ny_block , & stresspT, stressmT , & stress12T) - use ice_dyn_shared, only: strain_rates_T, capping, & + use ice_dyn_shared, only: strain_rates_T, & visc_replpress integer (kind=int_kind), intent(in) :: & @@ -2026,7 +2012,7 @@ subroutine stressCD_T (nx_block, ny_block , & !----------------------------------------------------------------- call visc_replpress (strength(i,j), DminTarea(i,j), DeltaT(i,j), & - zetax2T (i,j), etax2T(i,j), rep_prsT , capping) + zetax2T (i,j), etax2T(i,j), rep_prsT) !----------------------------------------------------------------- ! the stresses ! 
kg/s^2 @@ -2065,7 +2051,7 @@ subroutine stressCD_U (nx_block, ny_block, & stress12U) use ice_dyn_shared, only: visc_replpress, & - visc_method, deltaminEVP, capping + visc_method, deltaminEVP integer (kind=int_kind), intent(in) :: & nx_block, ny_block, & ! block dimensions @@ -2123,7 +2109,7 @@ subroutine stressCD_U (nx_block, ny_block, & ! only need etax2U here, but other terms are calculated with etax2U ! minimal extra calculations here even though it seems like there is call visc_replpress (strengthU(i,j), DminUarea, deltaU(i,j), & - lzetax2U , letax2U , lrep_prsU , capping) + lzetax2U , letax2U , lrep_prsU ) endif !----------------------------------------------------------------- diff --git a/cicecore/cicedyn/dynamics/ice_dyn_evp1d.F90 b/cicecore/cicedyn/dynamics/ice_dyn_evp1d.F90 new file mode 100644 index 000000000..223ef2849 --- /dev/null +++ b/cicecore/cicedyn/dynamics/ice_dyn_evp1d.F90 @@ -0,0 +1,1467 @@ +! Module for 1d evp dynamics +! Mimics the 2d B grid solver +! functions in this module includes conversion from 1d to 2d and vice versa. +! cpp flag _OPENMP_TARGET is for gpu. Otherwize optimized for cpu +! FIXME: For now it allocates all water point, which in most cases could be avoided. +!=============================================================================== +! Created by Till Rasmussen (DMI), Mads Hvid Ribergaard (DMI), and Jacob W. Poulsen, Intel + +module ice_dyn_evp1d + + !- modules ------------------------------------------------------------------- + use ice_kinds_mod + use ice_blocks, only: nx_block, ny_block, nghost + use ice_constants + use ice_communicate, only: my_task, master_task + use ice_domain_size, only: max_blocks, nx_global, ny_global + use ice_fileunits, only: nu_diag + use ice_exit, only: abort_ice + + !- directives ---------------------------------------------------------------- + implicit none + private + + !- public routines ----------------------------------------------------------- + public :: dyn_evp1d_init, dyn_evp1d_run, dyn_evp1d_finalize + + !- private routines ---------------------------------------------------------- + + !- private vars -------------------------------------------------------------- + ! nx and ny are module variables for arrays after gather (G_*) Dimension according to CICE is + ! nx_global+2*nghost, ny_global+2*nghost + ! nactive are number of active points (both t and u). navel is number of active + integer(kind=int_kind), save :: nx, ny, nActive, navel, nallocated + + ! indexes + integer(kind=int_kind), allocatable, dimension(:,:) :: iwidx + logical(kind=log_kind), allocatable, dimension(:) :: skipTcell,skipUcell + integer(kind=int_kind), allocatable, dimension(:) :: ee,ne,se,nw,sw,sse ! arrays for neighbour points + integer(kind=int_kind), allocatable, dimension(:) :: indxti, indxtj, indxTij + + ! 1D arrays to allocate + + ! Grid + real (kind=dbl_kind), allocatable, dimension(:) :: & + HTE_1d,HTN_1d, HTEm1_1d,HTNm1_1d, dxT_1d, dyT_1d, uarear_1d + + ! time varying + real(kind=dbl_kind) , allocatable, dimension(:) :: & + cdn_ocn,aiu,uocn,vocn,waterxU,wateryU,forcexU,forceyU,umassdti,fmU, & + strintxU,strintyU,uvel_init,vvel_init, strength, uvel, vvel, & + stressp_1, stressp_2, stressp_3, stressp_4, stressm_1, stressm_2, & + stressm_3, stressm_4, stress12_1, stress12_2, stress12_3, stress12_4, & + str1, str2, str3, str4, str5, str6, str7, str8, Tbu, Cb + + ! 
halo updates for circular domains + integer(kind=int_kind), allocatable, dimension(:) :: & + halo_parent_outer_east , halo_parent_outer_west , & + halo_parent_outer_north, halo_parent_outer_south, & + halo_inner_east , halo_inner_west , & + halo_inner_north , halo_inner_south + + ! number of halo points (same for inner and outer) + integer(kind=int_kind) :: & + n_inner_east, n_inner_west, n_inner_north, n_inner_south + +!============================================================================= + contains +!============================================================================= +! module public subroutines +! In addition all water points are assumed to be active and allocated thereafter. +!============================================================================= + + subroutine dyn_evp1d_init + + use ice_grid, only: G_HTE, G_HTN + + implicit none + + ! local variables + + real(kind=dbl_kind) , allocatable, dimension(:,:) :: G_dyT, G_dxT, G_uarear + logical(kind=log_kind), allocatable, dimension(:,:) :: G_tmask + + integer(kind=int_kind) :: ios, ierr + + character(len=*), parameter :: subname = '(dyn_evp1d_init)' + + nx=nx_global+2*nghost + ny=ny_global+2*nghost + + allocate(G_dyT(nx,ny),G_dxT(nx,ny),G_uarear(nx,ny),G_tmask(nx,ny),stat=ierr) + if (ierr/=0) then + call abort_ice(subname//' ERROR: allocating', file=__FILE__, line=__LINE__) + endif + + ! gather from blks to global + call gather_static(G_uarear, G_dxT, G_dyT, G_tmask) + + ! calculate number of water points (T and U). Only needed for the static version + ! tmask in ocean/ice + if (my_task == master_task) then + call calc_nActiveTU(G_tmask,nActive) + call evp1d_alloc_static_na(nActive) + call calc_2d_indices_init(nActive, G_tmask) + call calc_navel(nActive, navel) + call evp1d_alloc_static_navel(navel) + call numainit(1,nActive,navel) + call convert_2d_1d_init(nActive,G_HTE, G_HTN, G_uarear, G_dxT, G_dyT) + call evp1d_alloc_static_halo() + endif + + deallocate(G_dyT,G_dxT,G_uarear,G_tmask,stat=ierr) + if (ierr/=0) then + call abort_ice(subname//' ERROR: deallocating', file=__FILE__, line=__LINE__) + endif + + end subroutine dyn_evp1d_init + +!============================================================================= + + subroutine dyn_evp1d_run(L_stressp_1 , L_stressp_2 , L_stressp_3 , L_stressp_4 , & + L_stressm_1 , L_stressm_2 , L_stressm_3 , L_stressm_4 , & + L_stress12_1, L_stress12_2, L_stress12_3, L_stress12_4, & + L_strength, & + L_cdn_ocn , L_aiu , L_uocn , L_vocn , & + L_waterxU , L_wateryU , L_forcexU , L_forceyU , & + L_umassdti , L_fmU , L_strintxU , L_strintyU , & + L_Tbu , L_taubxU , L_taubyU , L_uvel , & + L_vvel , L_icetmask , L_iceUmask) + + use ice_dyn_shared, only : ndte + use ice_dyn_core1d, only : stress_1d, stepu_1d, calc_diag_1d + use ice_timers , only : ice_timer_start, ice_timer_stop, timer_evp1dcore + + use icepack_intfc , only : icepack_query_parameters, icepack_warnings_flush, & + icepack_warnings_aborted + + implicit none + + ! 
nx_block, ny_block, max_blocks + real(kind=dbl_kind) , dimension(:,:,:), intent(inout) :: & + L_stressp_1 , L_stressp_2 , L_stressp_3 , L_stressp_4 , & + L_stressm_1 , L_stressm_2 , L_stressm_3 , L_stressm_4 , & + L_stress12_1, L_stress12_2, L_stress12_3, L_stress12_4, & + L_strintxU , L_strintyU , L_uvel , L_vvel , & + L_taubxU , L_taubyU + real(kind=dbl_kind) , dimension(:,:,:), intent(in) :: & + L_strength , & + L_cdn_ocn , L_aiu , L_uocn , L_vocn , & + L_waterxU , L_wateryU , L_forcexU , L_forceyU, & + L_umassdti , L_fmU , L_Tbu + logical(kind=log_kind), dimension(:,:,:), intent(in) :: & + L_iceUmask , L_iceTmask + + ! local variables + + ! nx, ny + real(kind=dbl_kind), dimension(nx,ny) :: & + G_stressp_1 , G_stressp_2 , G_stressp_3 , G_stressp_4 , & + G_stressm_1 , G_stressm_2 , G_stressm_3 , G_stressm_4 , & + G_stress12_1, G_stress12_2, G_stress12_3, G_stress12_4, & + G_strength, & + G_cdn_ocn , G_aiu , G_uocn , G_vocn , & + G_waterxU , G_wateryU , G_forcexU , G_forceyU , & + G_umassdti , G_fmU , G_strintxU , G_strintyU , & + G_Tbu , G_uvel , G_vvel , G_taubxU , & + G_taubyU ! G_taubxU and G_taubyU are post processed from Cb + logical(kind=log_kind), dimension (nx,ny) :: & + G_iceUmask , G_iceTmask + + character(len=*), parameter :: subname = '(dyn_evp1d_run)' + + integer(kind=int_kind) :: ksub + + real (kind=dbl_kind) :: rhow + + ! From 3d to 2d on master task + call gather_dyn(L_stressp_1 , L_stressp_2 , L_stressp_3 , L_stressp_4 , & + L_stressm_1 , L_stressm_2 , L_stressm_3 , L_stressm_4 , & + L_stress12_1, L_stress12_2, L_stress12_3, L_stress12_4, & + L_strength, & + L_cdn_ocn , L_aiu , L_uocn , L_vocn , & + L_waterxU , L_wateryU , L_forcexU , L_forceyU , & + L_umassdti , L_fmU , & + L_Tbu , L_uvel , L_vvel , & + L_icetmask , L_iceUmask , & + G_stressp_1 , G_stressp_2 , G_stressp_3 , G_stressp_4 , & + G_stressm_1 , G_stressm_2 , G_stressm_3 , G_stressm_4 , & + G_stress12_1, G_stress12_2, G_stress12_3, G_stress12_4, & + G_strength , & + G_cdn_ocn , G_aiu , G_uocn , G_vocn , & + G_waterxU , G_wateryU , G_forcexU , G_forceyU , & + G_umassdti , G_fmU , & + G_Tbu , G_uvel , G_vvel , & + G_iceTmask, G_iceUmask) + + if (my_task == master_task) then + call set_skipMe(G_iceTmask, G_iceUmask,nActive) + ! Map from 2d to 1d + call convert_2d_1d_dyn(nActive, & + G_stressp_1 , G_stressp_2 , G_stressp_3 , G_stressp_4, & + G_stressm_1 , G_stressm_2 , G_stressm_3 , G_stressm_4, & + G_stress12_1, G_stress12_2, G_stress12_3, G_stress12_4, & + G_strength, & + G_cdn_ocn , G_aiu , G_uocn , G_vocn , & + G_waterxU , G_wateryU , G_forcexU , G_forceyU , & + G_umassdti , G_fmU , & + G_Tbu , G_uvel , G_vvel) + + call calc_halo_parent(Nactive,navel) + + ! map from cpu to gpu (to) and back. + ! This could be optimized considering which variables change from time step to time step + ! and which are constant. + ! 
in addition initialization of Cb and str1, str2, str3, str4, str5, str6, str7, str8 + call icepack_query_parameters(rhow_out=rhow) + call icepack_warnings_flush(nu_diag) + if (icepack_warnings_aborted()) call abort_ice(error_message=subname, & + file=__FILE__, line=__LINE__) + + call ice_timer_start(timer_evp1dcore) +#ifdef _OPENMP_TARGET + !$omp target data map(to: ee, ne, se, nw, sw, sse, skipUcell, skipTcell,& + !$omp strength, dxT_1d, dyT_1d, HTE_1d,HTN_1d,HTEm1_1d, & + !$omp HTNm1_1d,forcexU, forceyU, umassdti, fmU, & + !$omp uarear_1d,uvel_init, vvel_init, Tbu, Cb, & + !$omp str1, str2, str3, str4, str5, str6, str7, str8, & + !$omp cdn_ocn, aiu, uocn, vocn, waterxU, wateryU, rhow & + !$omp map(tofrom: uvel,vvel, & + !$omp stressp_1, stressp_2, stressp_3, stressp_4, & + !$omp stressm_1, stressm_2, stressm_3, stressm_4, & + !$omp stress12_1,stress12_2,stress12_3,stress12_4) + !$omp target update to(arlx1i,denom1,capping,deltaminEVP,e_factor,epp2i,brlx) +#endif + ! initialization of str? in order to avoid influence from old time steps + str1(1:navel)=c0 + str2(1:navel)=c0 + str3(1:navel)=c0 + str4(1:navel)=c0 + str5(1:navel)=c0 + str6(1:navel)=c0 + str7(1:navel)=c0 + str8(1:navel)=c0 + + do ksub = 1,ndte ! subcycling + call stress_1d (ee, ne, se, 1, nActive, & + uvel, vvel, dxT_1d, dyT_1d, skipTcell, strength, & + HTE_1d, HTN_1d, HTEm1_1d, HTNm1_1d, & + stressp_1, stressp_2, stressp_3, stressp_4, & + stressm_1, stressm_2, stressm_3, stressm_4, & + stress12_1, stress12_2, stress12_3, stress12_4, & + str1, str2, str3, str4, str5, str6, str7, str8) + + call stepu_1d (1, nActive, cdn_ocn, aiu, uocn, vocn, & + waterxU, wateryU, forcexU, forceyU, umassdti, fmU, uarear_1d, & + uvel_init, vvel_init, uvel, vvel, & + str1, str2, str3, str4, str5, str6, str7, str8, & + nw, sw, sse, skipUcell, Tbu, Cb, rhow) + call evp1d_halo_update() + enddo + ! This can be skipped if diagnostics of strintx and strinty is not needed + ! They will either both be calculated or not. + call calc_diag_1d(1 , nActive , & + uarear_1d, skipUcell, & + str1 , str2 , & + str3 , str4 , & + str5 , str6 , & + str7 , str8 , & + nw , sw , & + sse , & + strintxU, strintyU) + + call ice_timer_stop(timer_evp1dcore) + +#ifdef _OPENMP_TARGET + !$omp end target data +#endif + ! Map results back to 2d + call convert_1d_2d_dyn(nActive, navel, & + G_stressp_1 , G_stressp_2 , G_stressp_3 , G_stressp_4 , & + G_stressm_1 , G_stressm_2 , G_stressm_3 , G_stressm_4 , & + G_stress12_1, G_stress12_2, G_stress12_3, G_stress12_4, & + G_strength , & + G_cdn_ocn , G_aiu , G_uocn , G_vocn , & + G_waterxU , G_wateryU , G_forcexU , G_forceyU , & + G_umassdti , G_fmU , G_strintxU , G_strintyU , & + G_Tbu , G_uvel , G_vvel , G_taubxU , & + G_taubyU) + + endif ! master_task + + call scatter_dyn(L_stressp_1 , L_stressp_2 , L_stressp_3 , L_stressp_4 , & + L_stressm_1 , L_stressm_2 , L_stressm_3 , L_stressm_4 , & + L_stress12_1, L_stress12_2, L_stress12_3, L_stress12_4, & + L_strintxU , L_strintyU , L_uvel , L_vvel , & + L_taubxU , L_taubyU , & + G_stressp_1 , G_stressp_2 , G_stressp_3 , G_stressp_4 , & + G_stressm_1 , G_stressm_2 , G_stressm_3 , G_stressm_4 , & + G_stress12_1, G_stress12_2, G_stress12_3, G_stress12_4, & + G_strintxU , G_strintyU , G_uvel , G_vvel , & + G_taubxU , G_taubyU) + ! calculate number of active points. allocate if initial or if array size should increase + ! call calc_nActiveTU(iceTmask_log,nActive, iceUmask) + ! if (nActiveold ==0) then ! first + ! call evp_1d_alloc(nActive, nActive,nx,ny) + ! nactiveold=nActive+buf1d ! 
allocate + ! call init_unionTU(nx, ny, iceTmask_log,iceUmask) + ! else if (nactiveold < nActive) then + ! write(nu_diag,*) 'Warning nActive is bigger than old allocation. Need to re allocate' + ! call evp_1d_dealloc() ! only deallocate if not first time step + ! call evp_1d_alloc(nActive, nActive,nx,ny) + ! nactiveold=nActive+buf1d ! allocate + ! call init_unionTU(nx, ny, iceTmask_log,iceUmask) + ! endif + ! call cp_2dto1d(nActive) + ! FIXME THIS IS THE LOGIC FOR RE ALLOCATION IF NEEDED + ! call add_1d(nx, ny, natmp, iceTmask_log, iceUmask, ts) + + end subroutine dyn_evp1d_run + +!============================================================================= + + subroutine dyn_evp1d_finalize() + implicit none + + character(len=*), parameter :: subname = '(dyn_evp1d_finalize)' + + if (my_task == master_task) then + write(nu_diag,*) 'Close evp 1d log' + endif + + end subroutine dyn_evp1d_finalize + +!============================================================================= + + subroutine evp1d_alloc_static_na(na0) + implicit none + + integer(kind=int_kind), intent(in) :: na0 + integer(kind=int_kind) :: ierr + character(len=*), parameter :: subname = '(evp1d_alloc_static_na)' + + allocate(skipTcell(1:na0), & + skipUcell(1:na0), & + iwidx(1:nx,1:ny), & + stat=ierr) + + if (ierr/=0) then + call abort_ice(subname//' ERROR: allocating', file=__FILE__, line=__LINE__) + endif + + + allocate(indxTi(1:na0), & + indxTj(1:na0), & + stat=ierr) + + if (ierr/=0) then + call abort_ice(subname//' ERROR: allocating', file=__FILE__, line=__LINE__) + endif + + allocate(ee(1:na0) , & + ne(1:na0) , & + se(1:na0) , & + nw(1:na0) , & + sw(1:na0) , & + sse(1:na0), & + stat=ierr) + + if (ierr/=0) then + call abort_ice(subname//' ERROR: allocating', file=__FILE__, line=__LINE__) + endif + + allocate( HTE_1d (1:na0), & + HTN_1d (1:na0), & + HTEm1_1d (1:na0), & + HTNm1_1d (1:na0), & + dxT_1d (1:na0), & + dyT_1d (1:na0), & + strength (1:na0), & + stressp_1 (1:na0), & + stressp_2 (1:na0), & + stressp_3 (1:na0), & + stressp_4 (1:na0), & + stressm_1 (1:na0), & + stressm_2 (1:na0), & + stressm_3 (1:na0), & + stressm_4 (1:na0), & + stress12_1(1:na0), & + stress12_2(1:na0), & + stress12_3(1:na0), & + stress12_4(1:na0), & + stat=ierr) + + if (ierr/=0) then + call abort_ice(subname//' ERROR: allocating', file=__FILE__, line=__LINE__) + endif + + allocate(cdn_ocn (1:na0), aiu (1:na0), & + uocn (1:na0), vocn (1:na0), & + waterxU (1:na0), wateryU (1:na0), & + forcexU (1:na0), forceyU (1:na0), & + umassdti (1:na0), fmU (1:na0), & + uarear_1d(1:na0), & + strintxU (1:na0), strintyU (1:na0), & + Tbu (1:na0), Cb (1:na0), & + uvel_init(1:na0), vvel_init(1:na0), & + stat=ierr) + + if (ierr/=0) then + call abort_ice(subname//' ERROR: allocating', file=__FILE__, line=__LINE__) + endif + + end subroutine evp1d_alloc_static_na + +!============================================================================= + + subroutine evp1d_alloc_static_navel(navel0) + implicit none + + integer(kind=int_kind), intent(in) :: navel0 + integer(kind=int_kind) :: ierr + character(len=*), parameter :: subname = '(evp1d_alloc_static_na)' + + allocate(str1(1:navel0) , str2(1:navel0), str3(1:navel0), & + str4(1:navel0) , str5(1:navel0), str6(1:navel0), & + str7(1:navel0) , str8(1:navel0), & + indxTij(1:navel0), uvel(1:navel0), vvel(1:navel0), & + stat=ierr) + + if (ierr/=0) then + call abort_ice(subname//' ERROR: allocating', file=__FILE__, line=__LINE__) + endif + + end subroutine evp1d_alloc_static_navel + 
+!=============================================================================
+
+   subroutine evp1d_alloc_static_halo()
+
+      implicit none
+      integer(kind=int_kind) :: ierr
+      character(len=*), parameter :: subname = '(evp1d_alloc_static_halo)'
+
+      ! allocation of arrays to use for the halo
+      ! These are the size of one of the dimensions of the global grid, but they could be
+      ! reduced in size since only the active U points are used.
+      ! Points to send data from are in the "inner" vectors. Data in outer points are named "outer"
+
+      allocate(halo_inner_east (ny), halo_inner_west (ny), &
+               halo_inner_north(nx), halo_inner_south(nx), &
+               stat=ierr)
+
+      if (ierr/=0) then
+         call abort_ice(subname//' ERROR: allocating', file=__FILE__, line=__LINE__)
+      endif
+
+      allocate(halo_parent_outer_east (ny), halo_parent_outer_west (ny), &
+               halo_parent_outer_north(nx), halo_parent_outer_south(nx), &
+               stat=ierr)
+
+      if (ierr/=0) then
+         call abort_ice(subname//' ERROR: allocating', file=__FILE__, line=__LINE__)
+      endif
+
+   end subroutine evp1d_alloc_static_halo
+
+!=============================================================================
+
+   subroutine calc_nActiveTU(Tmask,na0, Umask)
+
+      ! Calculate number of active points with a given mask.
+
+      implicit none
+      logical(kind=log_kind), intent(in) :: Tmask(:,:)
+      logical(kind=log_kind), optional, intent(in) :: Umask(:,:)
+      integer(kind=int_kind), intent(out) :: na0
+      integer(kind=int_kind) :: i,j
+      character(len=*), parameter :: subname = '(calc_nActiveTU)'
+
+      na0=0
+      if (present(Umask)) then
+         do i=1+nghost,nx
+            do j=1+nghost,ny
+               if ((Tmask(i,j)) .or. (Umask(i,j))) then
+                  na0=na0+1
+               endif
+            enddo
+         enddo
+      else
+         do i=1+nghost,nx
+            do j=1+nghost,ny
+               if (Tmask(i,j)) then
+                  na0=na0+1
+               endif
+            enddo
+         enddo
+      endif
+
+   end subroutine calc_nActiveTU
+
+!=============================================================================
+
+   subroutine set_skipMe(iceTmask, iceUmask,na0)
+
+      implicit none
+
+      logical(kind=log_kind), intent(in) :: iceTmask(:,:), iceUmask(:,:)
+      integer(kind=int_kind), intent(in) :: na0
+      integer(kind=int_kind) :: iw, i, j, niw
+      character(len=*), parameter :: subname = '(set_skipMe)'
+
+      skipUcell=.false.
+      skipTcell=.false.
+      niw=0
+      ! first count
+      do iw=1, na0
+         i = indxti(iw)
+         j = indxtj(iw)
+         if ( iceTmask(i,j) .or. iceUmask(i,j)) then
+            niw=niw+1
+         endif
+         if (.not. (iceTmask(i,j))) skipTcell(iw)=.true.
+         if (.not. (iceUmask(i,j))) skipUcell(iw)=.true.
+         if (i == nx) skipUcell(iw)=.true.
+         if (j == ny) skipUcell(iw)=.true.
+      enddo
+      ! write(nu_diag,*) 'number of points and Active points', na0, niw
+
+   end subroutine set_skipMe
+
+!=============================================================================
+
+   subroutine calc_2d_indices_init(na0, Tmask)
+      ! All points are active. Need to find neighbors.
+      ! This should include the deselection of u points.
+
+      implicit none
+
+      integer(kind=int_kind), intent(in) :: na0
+      ! nx, ny
+      logical(kind=log_kind), dimension(:,:), intent(in) :: Tmask
+
+      ! local variables
+
+      integer(kind=int_kind) :: i, j, Nmaskt
+      character(len=*), parameter :: subname = '(calc_2d_indices_init)'
+
+      indxti(:) = 0
+      indxtj(:) = 0
+      Nmaskt = 0
+      ! 
NOTE: T mask includes northern and eastern ghost cells + do j = 1 + nghost, ny + do i = 1 + nghost, nx + if (Tmask(i,j)) then + Nmaskt = Nmaskt + 1 + indxti(Nmaskt) = i + indxtj(Nmaskt) = j + end if + end do + end do + + end subroutine calc_2d_indices_init + +!============================================================================= + + subroutine union(x, y, xdim, ydim, xy, nxy) + + ! Find union (xy) of two sorted integer vectors (x and y), i.e. + ! combined values of the two vectors with no repetitions + implicit none + integer(kind=int_kind), intent(in) :: xdim, ydim + integer(kind=int_kind), intent(in) :: x(1:xdim), y(1:ydim) + integer(kind=int_kind), intent(out) :: xy(1:xdim + ydim) + integer(kind=int_kind), intent(out) :: nxy + + ! local variables + + integer(kind=int_kind) :: i, j, k + character(len=*), parameter :: subname = '(union)' + + i = 1 + j = 1 + k = 1 + do while (i <= xdim .and. j <= ydim) + if (x(i) < y(j)) then + xy(k) = x(i) + i = i + 1 + else if (x(i) > y(j)) then + xy(k) = y(j) + j = j + 1 + else + xy(k) = x(i) + i = i + 1 + j = j + 1 + endif + k = k + 1 + enddo + + ! the rest + do while (i <= xdim) + xy(k) = x(i) + i = i + 1 + k = k + 1 + enddo + do while (j <= ydim) + xy(k) = y(j) + j = j + 1 + k = k + 1 + enddo + nxy = k - 1 + + end subroutine union + +!============================================================================= + + subroutine gather_static(G_uarear, G_dxT, G_dyT, G_Tmask) + + ! In standalone distrb_info is an integer. Not needed anyway + use ice_communicate, only : master_task + use ice_gather_scatter, only : gather_global_ext + use ice_domain, only : distrb_info + use ice_grid, only: dyT, dxT, uarear, tmask + implicit none + + ! nx, ny + real(kind=dbl_kind) , dimension(:,:), intent(out) :: G_uarear, G_dxT, G_dyT + logical(kind=log_kind), dimension(:,:), intent(out) :: G_Tmask + + character(len=*), parameter :: subname = '(gather_static)' + + ! copy from distributed I_* to G_* + call gather_global_ext(G_uarear, uarear, master_task, distrb_info) + call gather_global_ext(G_dxT , dxT , master_task, distrb_info) + call gather_global_ext(G_dyT , dyT , master_task, distrb_info) + call gather_global_ext(G_Tmask , Tmask , master_task, distrb_info) + + end subroutine gather_static + +!============================================================================= + + subroutine gather_dyn(L_stressp_1 , L_stressp_2 , L_stressp_3 , L_stressp_4 , & + L_stressm_1 , L_stressm_2 , L_stressm_3 , L_stressm_4 , & + L_stress12_1, L_stress12_2, L_stress12_3,L_stress12_4 , & + L_strength , & + L_cdn_ocn , L_aiu , L_uocn , L_vocn , & + L_waterxU , L_wateryU , L_forcexU , L_forceyU , & + L_umassdti , L_fmU , & + L_Tbu , L_uvel , L_vvel , & + L_icetmask , L_iceUmask , & + G_stressp_1 , G_stressp_2 , G_stressp_3 , G_stressp_4 , & + G_stressm_1 , G_stressm_2 , G_stressm_3 , G_stressm_4 , & + G_stress12_1, G_stress12_2, G_stress12_3, G_stress12_4, & + G_strength, & + G_cdn_ocn , G_aiu , G_uocn , G_vocn , & + G_waterxU , G_wateryU , G_forcexU , G_forceyU , & + G_umassdti , G_fmU , & + G_Tbu , G_uvel , G_vvel , & + G_iceTmask, G_iceUmask) + + use ice_communicate, only : master_task + use ice_gather_scatter, only : gather_global_ext + use ice_domain, only : distrb_info + implicit none + + ! 
nx_block, ny_block, max_blocks + real(kind=dbl_kind) , dimension(:,:,:), intent(in) :: & + L_stressp_1 , L_stressp_2 , L_stressp_3 , L_stressp_4 , & + L_stressm_1 , L_stressm_2 , L_stressm_3 , L_stressm_4 , & + L_stress12_1, L_stress12_2, L_stress12_3, L_stress12_4, & + L_strength , & + L_cdn_ocn , L_aiu , L_uocn , L_vocn , & + L_waterxU , L_wateryU , L_forcexU , L_forceyU , & + L_umassdti , L_fmU , & + L_Tbu , L_uvel , L_vvel + logical(kind=log_kind), dimension(:,:,:), intent(in) :: & + L_iceUmask , L_iceTmask + + ! nx, ny + real(kind=dbl_kind) , dimension(:,:), intent(out) :: & + G_stressp_1 , G_stressp_2 , G_stressp_3 , G_stressp_4 , & + G_stressm_1 , G_stressm_2 , G_stressm_3 , G_stressm_4 , & + G_stress12_1, G_stress12_2, G_stress12_3, G_stress12_4, & + G_strength, & + G_cdn_ocn , G_aiu , G_uocn , G_vocn , & + G_waterxU , G_wateryU , G_forcexU , G_forceyU , & + G_umassdti , G_fmU , & + G_Tbu , G_uvel , G_vvel + logical(kind=log_kind), dimension(:,:), intent(out) :: & + G_iceUmask , G_iceTmask + + character(len=*), parameter :: subname = '(gather_dyn)' + + ! copy from distributed I_* to G_* + call gather_global_ext(G_stressp_1 , L_stressp_1, master_task, distrb_info,c0) + call gather_global_ext(G_stressp_2 , L_stressp_2, master_task, distrb_info,c0) + call gather_global_ext(G_stressp_3 , L_stressp_3, master_task, distrb_info,c0) + call gather_global_ext(G_stressp_4 , L_stressp_4, master_task, distrb_info,c0) + + call gather_global_ext(G_stressm_1 , L_stressm_1, master_task, distrb_info,c0) + call gather_global_ext(G_stressm_2 , L_stressm_2, master_task, distrb_info,c0) + call gather_global_ext(G_stressm_3 , L_stressm_3, master_task, distrb_info,c0) + call gather_global_ext(G_stressm_4 , L_stressm_4, master_task, distrb_info,c0) + + call gather_global_ext(G_stress12_1, L_stress12_1, master_task, distrb_info,c0) + call gather_global_ext(G_stress12_2, L_stress12_2, master_task, distrb_info,c0) + call gather_global_ext(G_stress12_3, L_stress12_3, master_task, distrb_info,c0) + call gather_global_ext(G_stress12_4, L_stress12_4, master_task, distrb_info,c0) + call gather_global_ext(G_strength , L_strength , master_task, distrb_info,c0) + + call gather_global_ext(G_cdn_ocn , L_cdn_ocn , master_task, distrb_info) + call gather_global_ext(G_aiu , L_aiu , master_task, distrb_info) + call gather_global_ext(G_uocn , L_uocn , master_task, distrb_info) + call gather_global_ext(G_vocn , L_vocn , master_task, distrb_info) + + call gather_global_ext(G_waterxU , L_waterxU , master_task, distrb_info) + call gather_global_ext(G_wateryU , L_wateryU , master_task, distrb_info) + call gather_global_ext(G_forcexU , L_forcexU , master_task, distrb_info) + call gather_global_ext(G_forceyU , L_forceyU , master_task, distrb_info) + + call gather_global_ext(G_umassdti , L_umassdti , master_task, distrb_info) + call gather_global_ext(G_fmU , L_fmU , master_task, distrb_info) + + call gather_global_ext(G_Tbu , L_Tbu , master_task, distrb_info) + call gather_global_ext(G_uvel , L_uvel , master_task, distrb_info,c0) + call gather_global_ext(G_vvel , L_vvel , master_task, distrb_info,c0) + call gather_global_ext(G_iceTmask , L_iceTmask , master_task, distrb_info) + call gather_global_ext(G_iceUmask , L_iceUmask , master_task, distrb_info) + + end subroutine gather_dyn + +!============================================================================= + + subroutine scatter_dyn(L_stressp_1 , L_stressp_2 , L_stressp_3 , L_stressp_4 , & + L_stressm_1 , L_stressm_2 , L_stressm_3 , L_stressm_4 , & + L_stress12_1, 
L_stress12_2, L_stress12_3, L_stress12_4, & + L_strintxU , L_strintyU , L_uvel , L_vvel , & + L_taubxU , L_taubyU , & + G_stressp_1 , G_stressp_2 , G_stressp_3 , G_stressp_4, & + G_stressm_1 , G_stressm_2 , G_stressm_3 , G_stressm_4, & + G_stress12_1, G_stress12_2, G_stress12_3, G_stress12_4, & + G_strintxU , G_strintyU , G_uvel , G_vvel , & + G_taubxU , G_taubyU ) + + use ice_communicate, only : master_task + use ice_gather_scatter, only : scatter_global_ext + use ice_domain, only : distrb_info + implicit none + + ! nx_block, ny_block, max_blocks + real(kind=dbl_kind), dimension(:,:,:), intent(out) :: & + L_stressp_1 , L_stressp_2 , L_stressp_3 , L_stressp_4 , & + L_stressm_1 , L_stressm_2 , L_stressm_3 , L_stressm_4 , & + L_stress12_1, L_stress12_2, L_stress12_3, L_stress12_4, & + L_strintxU , L_strintyU , L_uvel , L_vvel , & + L_taubxU , L_taubyU + + ! nx, ny + real(kind=dbl_kind), dimension(:,:), intent(in) :: & + G_stressp_1 , G_stressp_2 , G_stressp_3 , G_stressp_4 , & + G_stressm_1 , G_stressm_2 , G_stressm_3 , G_stressm_4 , & + G_stress12_1, G_stress12_2, G_stress12_3, G_stress12_4, & + G_strintxU , G_strintyU , G_uvel , G_vvel , & + G_taubxU , G_taubyU + + character(len=*), parameter :: subname = '(scatter_dyn)' + + call scatter_global_ext(L_stressp_1, G_stressp_1, master_task, distrb_info) + call scatter_global_ext(L_stressp_2, G_stressp_2, master_task, distrb_info) + call scatter_global_ext(L_stressp_3, G_stressp_3, master_task, distrb_info) + call scatter_global_ext(L_stressp_4, G_stressp_4, master_task, distrb_info) + + call scatter_global_ext(L_stressm_1, G_stressm_1, master_task, distrb_info) + call scatter_global_ext(L_stressm_2, G_stressm_2, master_task, distrb_info) + call scatter_global_ext(L_stressm_3, G_stressm_3, master_task, distrb_info) + call scatter_global_ext(L_stressm_4, G_stressm_4, master_task, distrb_info) + + call scatter_global_ext(L_stress12_1, G_stress12_1, master_task, distrb_info) + call scatter_global_ext(L_stress12_2, G_stress12_2, master_task, distrb_info) + call scatter_global_ext(L_stress12_3, G_stress12_3, master_task, distrb_info) + call scatter_global_ext(L_stress12_4, G_stress12_4, master_task, distrb_info) + + call scatter_global_ext(L_strintxU , G_strintxU , master_task, distrb_info) + call scatter_global_ext(L_strintyU , G_strintyU , master_task, distrb_info) + call scatter_global_ext(L_uvel , G_uvel , master_task, distrb_info) + call scatter_global_ext(L_vvel , G_vvel , master_task, distrb_info) + call scatter_global_ext(L_taubxU , G_taubxU , master_task, distrb_info) + call scatter_global_ext(L_taubyU , G_taubyU , master_task, distrb_info) + + end subroutine scatter_dyn + +!============================================================================= + + subroutine convert_2d_1d_init(na0, G_HTE, G_HTN, G_uarear, G_dxT, G_dyT) + + implicit none + + integer(kind=int_kind), intent(in) :: na0 + real (kind=dbl_kind), dimension(:, :), intent(in) :: G_HTE, G_HTN, G_uarear, G_dxT, G_dyT + + ! local variables + + integer(kind=int_kind) :: iw, lo, up, j, i + integer(kind=int_kind), dimension(1:na0) :: & + Iin, Iee, Ine, Ise, Inw, Isw, Isse + + integer(kind=int_kind), dimension(1:7 * na0) :: util1, util2 + + character(len=*), parameter :: subname = '(convert_2d_1d_init)' + + ! calculate additional 1D indices used for finite differences + do iw = 1, na0 + ! get 2D indices + i = indxti(iw) + j = indxtj(iw) + ! calculate 1D indices + Iin(iw) = i + (j - 1) * nx ! ( 0, 0) target point + Iee(iw) = i - 1 + (j - 1) * nx ! 
(-1, 0) + Ine(iw) = i - 1 + (j - 2) * nx ! (-1,-1) + Ise(iw) = i + (j - 2) * nx ! ( 0,-1) + Inw(iw) = i + 1 + (j - 1) * nx ! (+1, 0) + Isw(iw) = i + 1 + (j - 0) * nx ! (+1,+1) + Isse(iw) = i + (j - 0) * nx ! ( 0,+1) + end do + + ! find number of points needed for finite difference calculations + call union(Iin, Iee, na0, na0, util1,i ) + call union(util1, Ine, i, na0, util2, j ) + call union(util2, Ise, j, na0, util1, i ) + call union(util1, Inw, i, na0, util2, j ) + call union(util2, Isw, j, na0, util1, i ) + call union(util1, Isse, i, na0, util2, navel) + + ! index vector with sorted target points + do iw = 1, na0 + indxTij(iw) = Iin(iw) + end do + ! sorted additional points + call setdiff(util2, Iin, navel, na0, util1, j) + do iw = na0 + 1, navel + indxTij(iw) = util1(iw - na0) + end do + + ! indices for additional points needed for uvel and vvel + call findXinY(Iee, indxTij, na0, navel, ee) + call findXinY(Ine, indxTij, na0, navel, ne) + call findXinY(Ise, indxTij, na0, navel, se) + call findXinY(Inw, indxTij, na0, navel, nw) + call findXinY(Isw, indxTij, na0, navel, sw) + call findXinY(Isse, indxTij, na0, navel, sse) + !tar i$OMP PARALLEL PRIVATE(iw, lo, up, j, i) + ! write 1D arrays from 2D arrays (target points) + !tar call domp_get_domain(1, na0, lo, up) + lo=1 + up=na0 + do iw = 1, na0 + ! get 2D indices + i = indxti(iw) + j = indxtj(iw) + ! map + uarear_1d(iw) = G_uarear(i, j) + dxT_1d(iw) = G_dxT(i, j) + dyT_1d(iw) = G_dyT(i, j) + HTE_1d(iw) = G_HTE(i, j) + HTN_1d(iw) = G_HTN(i, j) + HTEm1_1d(iw) = G_HTE(i - 1, j) + HTNm1_1d(iw) = G_HTN(i, j - 1) + end do + + end subroutine convert_2d_1d_init + +!============================================================================= + + subroutine convert_2d_1d_dyn(na0 , & + G_stressp_1 , G_stressp_2 , G_stressp_3 , G_stressp_4 , & + G_stressm_1 , G_stressm_2 , G_stressm_3 , G_stressm_4 , & + G_stress12_1, G_stress12_2, G_stress12_3, G_stress12_4, & + G_strength , G_cdn_ocn , G_aiu , G_uocn , & + G_vocn , G_waterxU , G_wateryU , G_forcexU , & + G_forceyU , G_umassdti , G_fmU , G_Tbu , & + G_uvel , G_vvel ) + + implicit none + + integer(kind=int_kind), intent(in) :: na0 + + ! nx, ny + real(kind=dbl_kind), dimension(:, :), intent(in) :: & + G_stressp_1 , G_stressp_2 , G_stressp_3 , G_stressp_4, & + G_stressm_1 , G_stressm_2 , G_stressm_3 , G_stressm_4, & + G_stress12_1, G_stress12_2, G_stress12_3,G_stress12_4, & + G_strength , G_cdn_ocn , G_aiu , G_uocn , & + G_vocn , G_waterxU , G_wateryU , G_forcexU , & + G_forceyU , G_umassdti , G_fmU , G_Tbu , & + G_uvel , G_vvel + + integer(kind=int_kind) :: lo, up, iw, i, j + character(len=*), parameter :: subname = '(convert_2d_1d_dyn)' + + lo=1 + up=na0 + do iw = 1, na0 + ! get 2D indices + i = indxti(iw) + j = indxtj(iw) + ! 
map + stressp_1(iw) = G_stressp_1(i, j) + stressp_2(iw) = G_stressp_2(i, j) + stressp_3(iw) = G_stressp_3(i, j) + stressp_4(iw) = G_stressp_4(i, j) + stressm_1(iw) = G_stressm_1(i, j) + stressm_2(iw) = G_stressm_2(i, j) + stressm_3(iw) = G_stressm_3(i, j) + stressm_4(iw) = G_stressm_4(i, j) + stress12_1(iw) = G_stress12_1(i, j) + stress12_2(iw) = G_stress12_2(i, j) + stress12_3(iw) = G_stress12_3(i, j) + stress12_4(iw) = G_stress12_4(i, j) + strength(iw) = G_strength(i,j) + cdn_ocn(iw) = G_cdn_ocn(i, j) + aiu(iw) = G_aiu(i, j) + uocn(iw) = G_uocn(i, j) + vocn(iw) = G_vocn(i, j) + waterxU(iw) = G_waterxU(i, j) + wateryU(iw) = G_wateryU(i, j) + forcexU(iw) = G_forcexU(i, j) + forceyU(iw) = G_forceyU(i, j) + umassdti(iw) = G_umassdti(i, j) + fmU(iw) = G_fmU(i, j) + strintxU(iw) = C0 + strintyU(iw) = C0 + Tbu(iw) = G_Tbu(i, j) + Cb(iw) = c0 + uvel(iw) = G_uvel(i,j) + vvel(iw) = G_vvel(i,j) + uvel_init(iw) = G_uvel(i,j) + vvel_init(iw) = G_vvel(i,j) + end do + + ! Halos can potentially have values of u and v + do iw=na0+1,navel + j = int((indxTij(iw) - 1) / (nx)) + 1 + i = indxTij(iw) - (j - 1) * nx + uvel(iw)=G_uvel(i,j) + vvel(iw)=G_vvel(i,j) + end do + + end subroutine convert_2d_1d_dyn + +!============================================================================= + + subroutine convert_1d_2d_dyn(na0 , navel0 , & + G_stressp_1 , G_stressp_2 , G_stressp_3 , G_stressp_4 , & + G_stressm_1 , G_stressm_2 , G_stressm_3 , G_stressm_4 , & + G_stress12_1, G_stress12_2, G_stress12_3, G_stress12_4, & + G_strength, & + G_cdn_ocn , G_aiu , G_uocn , G_vocn , & + G_waterxU , G_wateryU , G_forcexU , G_forceyU , & + G_umassdti , G_fmU , G_strintxU , G_strintyU , & + G_Tbu , G_uvel , G_vvel , G_taubxU , & + G_taubyU) + + implicit none + + integer(kind=int_kind), intent(in) :: na0, navel0 + ! nx, ny + real(kind=dbl_kind), dimension(:, :), intent(inout) :: & + G_stressp_1 , G_stressp_2 , G_stressp_3 , G_stressp_4 , & + G_stressm_1 , G_stressm_2 , G_stressm_3 , G_stressm_4 , & + G_stress12_1, G_stress12_2, G_stress12_3, G_stress12_4, & + G_strength, & + G_cdn_ocn , G_aiu , G_uocn , G_vocn , & + G_waterxU , G_wateryU , G_forcexU , G_forceyU , & + G_umassdti , G_fmU , G_strintxU , G_strintyU , & + G_Tbu , G_uvel , G_vvel , G_taubxU , & + G_taubyU + + integer(kind=int_kind) :: lo, up, iw, i, j + character(len=*), parameter :: subname = '(convert_1d_2d_dyn)' + + lo=1 + up=na0 + do iw = lo, up + ! get 2D indices + i = indxti(iw) + j = indxtj(iw) + ! map to 2d + G_stressp_1 (i,j) = stressp_1(iw) + G_stressp_2 (i,j) = stressp_2(iw) + G_stressp_3 (i,j) = stressp_3(iw) + G_stressp_4 (i,j) = stressp_4(iw) + G_stressm_1 (i,j) = stressm_1(iw) + G_stressm_2 (i,j) = stressm_2(iw) + G_stressm_3 (i,j) = stressm_3(iw) + G_stressm_4 (i,j) = stressm_4(iw) + G_stress12_1(i,j) = stress12_1(iw) + G_stress12_2(i,j) = stress12_2(iw) + G_stress12_3(i,j) = stress12_3(iw) + G_stress12_4(i,j) = stress12_4(iw) + G_strintxU(i,j) = strintxU(iw) + G_strintyU(i,j) = strintyU (iw) + G_taubxU(i,j) = -uvel(iw)*Cb(iw) + G_taubyU(i,j) = -vvel(iw)*Cb(iw) + G_uvel(i,j) = uvel(iw) + G_vvel(i,j) = vvel(iw) + end do + + do iw=na0+1,navel0 + j = int((indxTij(iw) - 1) / (nx)) + 1 + i = indxTij(iw) - (j - 1) * nx + G_uvel(i,j) = uvel(iw) + G_vvel(i,j) = vvel(iw) + end do + + end subroutine convert_1d_2d_dyn + +!======================================================================= + + subroutine setdiff(x, y, lvecx, lvecy,xy, nxy) + ! Find element (xy) of two sorted integer vectors (x and y) that + ! 
are in x, but not in y, or in y, but not in x + + implicit none + + integer(kind=int_kind), intent(in) :: lvecx,lvecy + integer(kind=int_kind), intent(in) :: x(1:lvecx), y(1:lvecy) + integer(kind=int_kind), intent(out) :: xy(1:lvecx + lvecy) + integer(kind=int_kind), intent(out) :: nxy + + ! local variables + + integer(kind=int_kind) :: i, j, k + + character(len=*), parameter :: subname = '(setdiff)' + + i = 1 + j = 1 + k = 1 + do while (i <= lvecx .and. j <= lvecy) + if (x(i) < y(j)) then + xy(k) = x(i) + i = i + 1 + k = k + 1 + else if (x(i) > y(j)) then + xy(k) = y(j) + j = j + 1 + k = k + 1 + else + i = i + 1 + j = j + 1 + end if + end do + + ! the rest + do while (i <= lvecx) + xy(k) = x(i) + i = i + 1 + k = k + 1 + end do + do while (j <= lvecy) + xy(k) = y(j) + j = j + 1 + k = k + 1 + end do + nxy = k - 1 + + end subroutine setdiff + +!======================================================================= + + subroutine findXinY(x, y, lvecx, lvecy, indx) + ! Find indx vector so that x(1:na) = y(indx(1:na)) + ! + ! Conditions: + ! * EVERY item in x is found in y + ! * x(1:lvecx) is a sorted integer vector + ! * y(1:lvecy) consists of two sorted integer vectors: + ! [y(1:lvecx); y(lvecy + 1:lvecx)] + ! * lvecy >= lvecx + + implicit none + + integer (kind=int_kind), intent(in) :: lvecx, lvecy + integer (kind=int_kind), intent(in) :: x(1:lvecx), y(1:lvecy) + integer (kind=int_kind), intent(out) :: indx(1:lvecx) + + ! local variables + + integer (kind=int_kind) :: i, j1, j2 + + character(len=*), parameter :: subname = '(findXinY)' + + i = 1 + j1 = 1 + j2 = lvecx + 1 + do while (i <= lvecx) + if (x(i) == y(j1)) then + indx(i) = j1 + i = i + 1 + j1 = j1 + 1 + else if (x(i) == y(j2)) then + indx(i) = j2 + i = i + 1 + j2 = j2 + 1 + else if (x(i) > y(j1)) then + j1 = j1 + 1 + else if (x(i) > y(j2)) then + j2 = j2 + 1 + else + stop + end if + end do + + end subroutine findXinY + +!======================================================================= + + subroutine calc_navel(na0, navel0) + ! Calculate number of active points, including halo points + + implicit none + + integer(kind=int_kind), intent(in) :: na0 + integer(kind=int_kind), intent(out) :: navel0 + + ! local variables + + integer(kind=int_kind) :: iw, i, j + integer(kind=int_kind), dimension(1:na0) :: & + Iin, Iee, Ine, Ise, Inw, Isw, Isse, indi, indj + + integer(kind=int_kind), dimension(1:7 * na0) :: util1, util2 + + character(len=*), parameter :: subname = '(calc_navel)' + + ! calculate additional 1D indices used for finite differences + do iw = 1, na0 + ! get 2D indices + i = indxti(iw) + j = indxtj(iw) + + ! calculate 1D indices + Iin(iw) = i + (j - 1) * nx ! ( 0, 0) target point + Iee(iw) = i - 1 + (j - 1) * nx ! (-1, 0) + Ine(iw) = i - 1 + (j - 2) * nx ! (-1, -1) + Ise(iw) = i + (j - 2) * nx ! ( 0, -1) + Inw(iw) = i + 1 + (j - 1) * nx ! (+1, 0) + Isw(iw) = i + 1 + (j - 0) * nx ! (+1, +1) + Isse(iw) = i + (j - 0) * nx ! ( 0, +1) + end do + + ! 
find number of points needed for finite difference calculations + call union(Iin , Iee , na0, na0, util1, i ) + call union(util1, Ine , i , na0, util2, j ) + call union(util2, Ise , j , na0, util1, i ) + call union(util1, Inw , i , na0, util2, j ) + call union(util2, Isw , j , na0, util1, i ) + call union(util1, Isse, i , na0, util2, navel0) + + end subroutine calc_navel + +!======================================================================= + + subroutine numainit(lo,up,uu) + + implicit none + integer(kind=int_kind),intent(in) :: lo,up,uu + integer(kind=int_kind) :: iw + character(len=*), parameter :: subname = '(numainit)' + + !$omp parallel do schedule(runtime) private(iw) + do iw = lo,up + skipTcell(iw)=.false. + skipUcell(iw)=.false. + ee(iw)=0 + ne(iw)=0 + se(iw)=0 + nw(iw)=0 + sw(iw)=0 + sse(iw)=0 + aiu(iw)=c0 + Cb(iw)=c0 + cdn_ocn(iw)=c0 + dxT_1d(iw)=c0 + dyT_1d(iw)=c0 + fmU(iw)=c0 + forcexU(iw)=c0 + forceyU(iw)=c0 + HTE_1d(iw)=c0 + HTEm1_1d(iw)=c0 + HTN_1d(iw)=c0 + HTNm1_1d(iw)=c0 + strength(iw)= c0 + stress12_1(iw)=c0 + stress12_2(iw)=c0 + stress12_3(iw)=c0 + stress12_4(iw)=c0 + stressm_1(iw)=c0 + stressm_2(iw)=c0 + stressm_3(iw)=c0 + stressm_4(iw)=c0 + stressp_1(iw)=c0 + stressp_2(iw)=c0 + stressp_3(iw)=c0 + stressp_4(iw)=c0 + strintxU(iw)= c0 + strintyU(iw)= c0 + Tbu(iw)=c0 + uarear_1d(iw)=c0 + umassdti(iw)=c0 + uocn(iw)=c0 + uvel_init(iw)=c0 + uvel(iw)=c0 + vocn(iw)=c0 + vvel_init(iw)=c0 + vvel(iw)=c0 + waterxU(iw)=c0 + wateryU(iw)=c0 + enddo + !$omp end parallel do + !$omp parallel do schedule(runtime) private(iw) + do iw = lo,uu + uvel(iw)=c0 + vvel(iw)=c0 + str1(iw)=c0 + str2(iw)=c0 + str3(iw)=c0 + str4(iw)=c0 + str5(iw)=c0 + str6(iw)=c0 + str7(iw)=c0 + str8(iw)=c0 + enddo + !$omp end parallel do + + end subroutine numainit + +!======================================================================= + + subroutine evp1d_halo_update() + + implicit none + integer(kind=int_kind) :: iw + + character(len=*), parameter :: subname = '(evp1d_halo_update)' + +!TILL !$omp parallel do schedule(runtime) private(iw) + do iw = 1, n_inner_east + uvel(halo_parent_outer_east(iw)) = uvel(halo_inner_east(iw)) + vvel(halo_parent_outer_east(iw)) = vvel(halo_inner_east(iw)) + end do +! western halo + do iw = 1, n_inner_west + uvel(halo_parent_outer_west(iw)) = uvel(halo_inner_west(iw)) + vvel(halo_parent_outer_west(iw)) = vvel(halo_inner_west(iw)) + end do + do iw = 1, n_inner_south + uvel(halo_parent_outer_south(iw)) = uvel(halo_inner_south(iw)) + vvel(halo_parent_outer_south(iw)) = vvel(halo_inner_south(iw)) + end do +! western halo + do iw = 1, n_inner_north + uvel(halo_parent_outer_north(iw)) = uvel(halo_inner_north(iw)) + vvel(halo_parent_outer_north(iw)) = vvel(halo_inner_north(iw)) + end do + + end subroutine evp1d_halo_update + +!======================================================================= + + subroutine calc_halo_parent(na0,navel0) + ! splits the global domain in east and west boundary and find the inner (within) the domain and the outer (outside the domain) + ! Implementation for circular boundaries. This means that mathes between the opposite directions must be found + ! E.g. inner_west and outer_east + ! Till Rasmussen, DMI 2023 + + use ice_domain, only: ew_boundary_type, ns_boundary_type + implicit none + + integer(kind=int_kind), intent(in) :: na0, navel0 + + ! local variables + + ! Indexes, Directions are east, weast, north and south + ! This is done to reduce the search windows. + ! Iw runs from 1 to navel and the one to keep in the end + ! 
Iw_inner_{direction} contains the indexes for + + integer(kind=int_kind) :: & + iw, n_outer_east, n_outer_west, n_outer_south, n_outer_north + + integer(kind=int_kind) :: i, j, ifind, jfind ! 2d index. ifind and jfind are points on the boundary + + integer(kind=int_kind), dimension(ny) :: & + halo_outer_east, halo_outer_west, & + ind_inner_west , ind_inner_east + + integer(kind=int_kind), dimension(nx) :: & + halo_outer_south, halo_outer_north, & + ind_inner_south , ind_inner_north + + character(len=*), parameter :: subname = '(calc_halo_parent)' + + !----------------------------------------------------------------- + ! Indices for halo update: + ! 0: no halo point + ! >0: index for halo point parent, related to indij vector + ! + ! TODO: Implement for nghost > 1 + ! TODO: Implement for tripole grids + !----------------------------------------------------------------- + halo_inner_west(:) = 0 + halo_inner_east(:) = 0 + halo_inner_south(:) = 0 + halo_inner_north(:) = 0 + + halo_outer_west(:) = 0 + halo_outer_east(:) = 0 + halo_outer_south(:) = 0 + halo_outer_north(:) = 0 + + ind_inner_west(:) = 0 + ind_inner_east(:) = 0 + ind_inner_south(:) = 0 + ind_inner_north(:) = 0 + + halo_parent_outer_east(:)=0 + halo_parent_outer_west(:)=0 + halo_parent_outer_north(:)=0 + halo_parent_outer_south(:)=0 + ! Index inner boundary + n_inner_north=0 + n_inner_south=0 + n_inner_east=0 + n_inner_west=0 + ! Index outer boundary + n_outer_east=0 + n_outer_west=0 + n_outer_north=0 + n_outer_south=0 + !TILL SHOULD CHANGE TO 1D + do iw = 1, na0 + j = int((indxTij(iw) - 1) / (nx)) + 1 + i = indxTij(iw) - (j - 1) * nx + ! All four boundaries find points internally that are within the domain and next to the boundary + ! This can in principle be moved to previos loops that connects i and j to 1d index. + ! ifind is i value on the halo to find. + ! Some parts assume nghost = 1 + ! INNER EAST + if (trim(ew_boundary_type) == 'cyclic') then + if ((.not. skipUcell(iw)) .and. (i==nx-nghost)) then + n_inner_east=n_inner_east+1 + ifind = 1 + ind_inner_east(n_inner_east) = ifind + (j - 1) * nx + halo_inner_east(n_inner_east) = iw + else if ((.not. skipUcell(iw)) .and. (i==1+nghost)) then + n_inner_west=n_inner_west+1 + ifind = nx + ind_inner_west(n_inner_west) = ifind + (j - 1) * nx + halo_inner_west(n_inner_west) = iw + endif + endif + if (trim(ns_boundary_type) == 'cyclic') then + if ((.not. skipUcell(iw)) .and. (j==1+nghost)) then + n_inner_south=n_inner_south+1 + jfind = ny + ind_inner_south(n_inner_south) = i + (jfind - 1) * nx + halo_inner_south(n_inner_south) = iw + else if ((.not. skipUcell(iw)) .and. (j==ny-nghost)) then + n_inner_north=n_inner_north+1 + jfind = 1 + ind_inner_north(n_inner_north) = i + (jfind - 1) * nx + halo_inner_north(n_inner_north) = iw + endif + endif + ! Finds all halos points on western halo WEST + if (i == 1) then + n_outer_west=n_outer_west+1 + halo_outer_west(n_outer_west)= iw + endif + ! Simiilar on East + if (i == nx ) then + n_outer_east=n_outer_east+1 + halo_outer_east(n_outer_east)=iw + endif + ! Finds all halos points on western halo WEST + if (j == 1) then + n_outer_south=n_outer_south+1 + halo_outer_south(n_outer_south)= iw + endif + ! Simiilar on East + if (j == ny ) then + n_outer_north=n_outer_north+1 + halo_outer_north(n_outer_north)=iw + endif + end do + + ! outer halo also needs points that are not active + do iw = na0+1, navel0 + j = int((indxTij(iw) - 1) / (nx)) + 1 + i = indxTij(iw) - (j - 1) * nx + ! 
outer halo west + if (i == 1) then + n_outer_west=n_outer_west+1 + halo_outer_west(n_outer_west)= iw + endif + ! outer halo east + if (i == nx ) then + n_outer_east=n_outer_east+1 + halo_outer_east(n_outer_east)=iw + endif + ! outer halo south + if (j == 1) then + n_outer_south=n_outer_south+1 + halo_outer_south(n_outer_south)= iw + endif + ! outer halo north + if (j == ny ) then + n_outer_north=n_outer_north+1 + halo_outer_north(n_outer_north)=iw + endif + end do + ! Search is now reduced to a search between two reduced vectors for each boundary + ! This runs through each boundary and matches + ! number of active points for halo east and west (count of active u cells within the domain. + ! reduce outer array to only match inner arrays + ! East West + if (trim(ew_boundary_type) == 'cyclic') then + do i=1,n_inner_west + do j=1,n_outer_east + if (ind_inner_west(i) == indxTij(halo_outer_east(j))) then + halo_parent_outer_west(i)=halo_outer_east(j) + endif + end do + end do + + do i=1,n_inner_east + do j=1,n_outer_west + if (ind_inner_east(i) == indxTij(halo_outer_west(j))) then + halo_parent_outer_east(i)=halo_outer_west(j) + endif + end do + end do + endif + if (trim(ns_boundary_type) == 'cyclic') then + do i=1,n_inner_south + do j=1,n_outer_north + if (ind_inner_south(i) == indxTij(halo_outer_north(j))) then + halo_parent_outer_south(i)=halo_outer_north(j) + endif + end do + end do + + do i=1,n_inner_north + do j=1,n_outer_south + if (ind_inner_north(i) == indxTij(halo_outer_south(j))) then + halo_parent_outer_north(i)=halo_outer_south(j) + endif + end do + end do + endif + + end subroutine calc_halo_parent + +!======================================================================= + +end module ice_dyn_evp1d + diff --git a/cicecore/cicedyn/dynamics/ice_dyn_evp_1d.F90 b/cicecore/cicedyn/dynamics/ice_dyn_evp_1d.F90 deleted file mode 100644 index b7daab0a0..000000000 --- a/cicecore/cicedyn/dynamics/ice_dyn_evp_1d.F90 +++ /dev/null @@ -1,1921 +0,0 @@ -!======================================================================= -! -! Elastic-viscous-plastic sea ice dynamics model (1D implementations) -! Computes ice velocity and deformation -! -! authors: Jacob Weismann Poulsen, DMI -! 
Mads Hvid Ribergaard, DMI - -module ice_dyn_evp_1d - - use ice_kinds_mod - use ice_fileunits, only : nu_diag - use ice_exit, only : abort_ice - use icepack_intfc, only : icepack_query_parameters - use icepack_intfc, only : icepack_warnings_flush, & - icepack_warnings_aborted - - implicit none - private - public :: ice_dyn_evp_1d_copyin, ice_dyn_evp_1d_copyout, & - ice_dyn_evp_1d_kernel - - integer(kind=int_kind) :: NA_len, NAVEL_len, domp_iam, domp_nt -#if defined (_OPENMP) - real(kind=dbl_kind) :: rdomp_iam, rdomp_nt - !$OMP THREADPRIVATE(domp_iam, domp_nt, rdomp_iam, rdomp_nt) -#endif - logical(kind=log_kind), dimension(:), allocatable :: skiptcell, skipucell - integer(kind=int_kind), dimension(:), allocatable :: ee, ne, se, & - nw, sw, sse, indi, indj, indij, halo_parent - real(kind=dbl_kind), dimension(:), allocatable :: cdn_ocn, aiu, & - uocn, vocn, forcex, forcey, Tbu, tarear, umassdti, fm, uarear, & - strintx, strinty, uvel_init, vvel_init, strength, uvel, vvel, & - dxT, dyT, stressp_1, stressp_2, stressp_3, stressp_4, stressm_1, & - stressm_2, stressm_3, stressm_4, stress12_1, stress12_2, & - stress12_3, stress12_4, divu, rdg_conv, rdg_shear, shear, taubx, & - tauby, str1, str2, str3, str4, str5, str6, str7, str8, HTE, HTN, & - HTEm1, HTNm1 - integer, parameter :: JPIM = selected_int_kind(9) - - interface evp1d_stress - module procedure stress_iter - module procedure stress_last - end interface - - interface evp1d_stepu - module procedure stepu_iter - module procedure stepu_last - end interface - -!======================================================================= - -contains - -!======================================================================= - - subroutine domp_init -#if defined (_OPENMP) - - use omp_lib, only : omp_get_thread_num, omp_get_num_threads -#endif - - implicit none - - character(len=*), parameter :: subname = '(domp_init)' - - !$OMP PARALLEL DEFAULT(none) -#if defined (_OPENMP) - domp_iam = omp_get_thread_num() - rdomp_iam = real(domp_iam, dbl_kind) - domp_nt = omp_get_num_threads() - rdomp_nt = real(domp_nt, dbl_kind) -#else - domp_iam = 0 - domp_nt = 1 -#endif - !$OMP END PARALLEL - - end subroutine domp_init - -!======================================================================= - - subroutine domp_get_domain(lower, upper, d_lower, d_upper) -#if defined (_OPENMP) - - use omp_lib, only : omp_in_parallel - use ice_constants, only : p5 -#endif - - implicit none - - integer(kind=JPIM), intent(in) :: lower, upper - integer(kind=JPIM), intent(out) :: d_lower, d_upper - - ! local variables -#if defined (_OPENMP) - - real(kind=dbl_kind) :: dlen -#endif - - character(len=*), parameter :: subname = '(domp_get_domain)' - - ! proper action in "null" case - if (upper <= 0 .or. upper < lower) then - d_lower = 0 - d_upper = -1 - return - end if - - ! 
proper action in serial case - d_lower = lower - d_upper = upper -#if defined (_OPENMP) - - if (omp_in_parallel()) then - dlen = real((upper - lower + 1), dbl_kind) - d_lower = lower + floor(((rdomp_iam * dlen + p5) / rdomp_nt), JPIM) - d_upper = lower - 1 + floor(((rdomp_iam * dlen + dlen + p5) / rdomp_nt), JPIM) - end if -#endif - - end subroutine domp_get_domain - -!======================================================================= - - subroutine stress_iter(NA_len, ee, ne, se, lb, ub, uvel, vvel, dxT, & - dyT, hte, htn, htem1, htnm1, strength, stressp_1, stressp_2, & - stressp_3, stressp_4, stressm_1, stressm_2, stressm_3, & - stressm_4, stress12_1, stress12_2, stress12_3, stress12_4, str1, & - str2, str3, str4, str5, str6, str7, str8, skiptcell) - - use ice_kinds_mod - use ice_constants, only : p027, p055, p111, p166, p222, p25, & - p333, p5, c1p5, c1 - use ice_dyn_shared, only : ecci, denom1, arlx1i, Ktens, revp, & - deltaminEVP - - implicit none - - integer(kind=int_kind), intent(in) :: NA_len, lb, ub - integer(kind=int_kind), dimension(:), intent(in), contiguous :: & - ee, ne, se - real(kind=dbl_kind), dimension(:), intent(in), contiguous :: & - strength, uvel, vvel, dxT, dyT, hte, htn, htem1, htnm1 - logical(kind=log_kind), intent(in), dimension(:) :: skiptcell - real(kind=dbl_kind), dimension(:), intent(inout), contiguous :: & - stressp_1, stressp_2, stressp_3, stressp_4, stressm_1, & - stressm_2, stressm_3, stressm_4, stress12_1, stress12_2, & - stress12_3, stress12_4 - real(kind=dbl_kind), dimension(:), intent(out), contiguous :: & - str1, str2, str3, str4, str5, str6, str7, str8 - - ! local variables - - integer(kind=int_kind) :: iw, il, iu - real(kind=dbl_kind) :: divune, divunw, divuse, divusw, & - tensionne, tensionnw, tensionse, tensionsw, shearne, shearnw, & - shearse, shearsw, Deltane, Deltanw, Deltase, Deltasw, c0ne, & - c0nw, c0se, c0sw, c1ne, c1nw, c1se, c1sw, ssigpn, ssigps, & - ssigpe, ssigpw, ssigmn, ssigms, ssigme, ssigmw, ssig12n, & - ssig12s, ssig12e, ssig12w, ssigp1, ssigp2, ssigm1, ssigm2, & - ssig121, ssig122, csigpne, csigpnw, csigpse, csigpsw, & - csigmne, csigmnw, csigmse, csigmsw, csig12ne, csig12nw, & - csig12se, csig12sw, str12ew, str12we, str12ns, str12sn, & - strp_tmp, strm_tmp, tmp_uvel_ee, tmp_vvel_se, tmp_vvel_ee, & - tmp_vvel_ne, tmp_uvel_ne, tmp_uvel_se, dxhy, dyhx, cxp, cyp, & - cxm, cym, tmparea, DminTarea - - character(len=*), parameter :: subname = '(stress_iter)' - -#ifdef _OPENACC - !$acc parallel & - !$acc present(ee, ne, se, strength, uvel, vvel, dxT, dyT, hte, & - !$acc htn, htem1, htnm1, str1, str2, str3, str4, str5, str6, & - !$acc str7, str8, stressp_1, stressp_2, stressp_3, stressp_4, & - !$acc stressm_1, stressm_2, stressm_3, stressm_4, stress12_1, & - !$acc stress12_2, stress12_3, stress12_4, skiptcell) - !$acc loop - do iw = 1, NA_len -#else - call domp_get_domain(lb, ub, il, iu) - do iw = il, iu -#endif - - if (skiptcell(iw)) cycle - - tmparea = dxT(iw) * dyT(iw) ! necessary to split calc of DminTarea. Otherwize not binary identical - DminTarea = deltaminEVP * tmparea - dxhy = p5 * (hte(iw) - htem1(iw)) - dyhx = p5 * (htn(iw) - htnm1(iw)) - cxp = c1p5 * htn(iw) - p5 * htnm1(iw) - cyp = c1p5 * hte(iw) - p5 * htem1(iw) - cxm = -(c1p5 * htnm1(iw) - p5 * htn(iw)) - cym = -(c1p5 * htem1(iw) - p5 * hte(iw)) - - !-------------------------------------------------------------- - ! strain rates - ! 
NOTE: these are actually strain rates * area (m^2/s) - !-------------------------------------------------------------- - - tmp_uvel_ne = uvel(ne(iw)) - tmp_uvel_se = uvel(se(iw)) - tmp_uvel_ee = uvel(ee(iw)) - - tmp_vvel_ee = vvel(ee(iw)) - tmp_vvel_se = vvel(se(iw)) - tmp_vvel_ne = vvel(ne(iw)) - ! divergence = e_11 + e_22 - divune = cyp * uvel(iw) - dyT(iw) * tmp_uvel_ee & - + cxp * vvel(iw) - dxT(iw) * tmp_vvel_se - divunw = cym * tmp_uvel_ee + dyT(iw) * uvel(iw) & - + cxp * tmp_vvel_ee - dxT(iw) * tmp_vvel_ne - divusw = cym * tmp_uvel_ne + dyT(iw) * tmp_uvel_se & - + cxm * tmp_vvel_ne + dxT(iw) * tmp_vvel_ee - divuse = cyp * tmp_uvel_se - dyT(iw) * tmp_uvel_ne & - + cxm * tmp_vvel_se + dxT(iw) * vvel(iw) - - ! tension strain rate = e_11 - e_22 - tensionne = -cym * uvel(iw) - dyT(iw) * tmp_uvel_ee & - + cxm * vvel(iw) + dxT(iw) * tmp_vvel_se - tensionnw = -cyp * tmp_uvel_ee + dyT(iw) * uvel(iw) & - + cxm * tmp_vvel_ee + dxT(iw) * tmp_vvel_ne - tensionsw = -cyp * tmp_uvel_ne + dyT(iw) * tmp_uvel_se & - + cxp * tmp_vvel_ne - dxT(iw) * tmp_vvel_ee - tensionse = -cym * tmp_uvel_se - dyT(iw) * tmp_uvel_ne & - + cxp * tmp_vvel_se - dxT(iw) * vvel(iw) - - ! shearing strain rate = 2 * e_12 - shearne = -cym * vvel(iw) - dyT(iw) * tmp_vvel_ee & - - cxm * uvel(iw) - dxT(iw) * tmp_uvel_se - shearnw = -cyp * tmp_vvel_ee + dyT(iw) * vvel(iw) & - - cxm * tmp_uvel_ee - dxT(iw) * tmp_uvel_ne - shearsw = -cyp * tmp_vvel_ne + dyT(iw) * tmp_vvel_se & - - cxp * tmp_uvel_ne + dxT(iw) * tmp_uvel_ee - shearse = -cym * tmp_vvel_se - dyT(iw) * tmp_vvel_ne & - - cxp * tmp_uvel_se + dxT(iw) * uvel(iw) - - ! Delta (in the denominator of zeta and eta) - Deltane = sqrt(divune**2 + ecci * (tensionne**2 + shearne**2)) - Deltanw = sqrt(divunw**2 + ecci * (tensionnw**2 + shearnw**2)) - Deltasw = sqrt(divusw**2 + ecci * (tensionsw**2 + shearsw**2)) - Deltase = sqrt(divuse**2 + ecci * (tensionse**2 + shearse**2)) - - !-------------------------------------------------------------- - ! replacement pressure/Delta (kg/s) - ! save replacement pressure for principal stress calculation - !-------------------------------------------------------------- - - c0ne = strength(iw) / max(Deltane, DminTarea) - c0nw = strength(iw) / max(Deltanw, DminTarea) - c0sw = strength(iw) / max(Deltasw, DminTarea) - c0se = strength(iw) / max(Deltase, DminTarea) - - c1ne = c0ne * arlx1i - c1nw = c0nw * arlx1i - c1sw = c0sw * arlx1i - c1se = c0se * arlx1i - - c0ne = c1ne * ecci - c0nw = c1nw * ecci - c0sw = c1sw * ecci - c0se = c1se * ecci - - !-------------------------------------------------------------- - ! the stresses (kg/s^2) - ! 
(1) northeast, (2) northwest, (3) southwest, (4) southeast - !-------------------------------------------------------------- - - stressp_1(iw) = (stressp_1(iw) * (c1 - arlx1i * revp) & - + c1ne * (divune * (c1 + Ktens) - Deltane * (c1 - Ktens))) * denom1 - stressp_2(iw) = (stressp_2(iw) * (c1 - arlx1i * revp) & - + c1nw * (divunw * (c1 + Ktens) - Deltanw * (c1 - Ktens))) * denom1 - stressp_3(iw) = (stressp_3(iw) * (c1 - arlx1i * revp) & - + c1sw * (divusw * (c1 + Ktens) - Deltasw * (c1 - Ktens))) * denom1 - stressp_4(iw) = (stressp_4(iw) * (c1 - arlx1i * revp) & - + c1se * (divuse * (c1 + Ktens) - Deltase * (c1 - Ktens))) * denom1 - - stressm_1(iw) = (stressm_1(iw) * (c1 - arlx1i * revp) + c0ne * tensionne * (c1 + Ktens)) * denom1 - stressm_2(iw) = (stressm_2(iw) * (c1 - arlx1i * revp) + c0nw * tensionnw * (c1 + Ktens)) * denom1 - stressm_3(iw) = (stressm_3(iw) * (c1 - arlx1i * revp) + c0sw * tensionsw * (c1 + Ktens)) * denom1 - stressm_4(iw) = (stressm_4(iw) * (c1 - arlx1i * revp) + c0se * tensionse * (c1 + Ktens)) * denom1 - - stress12_1(iw) = (stress12_1(iw) * (c1 - arlx1i * revp) + c0ne * shearne * p5 * (c1 + Ktens)) * denom1 - stress12_2(iw) = (stress12_2(iw) * (c1 - arlx1i * revp) + c0nw * shearnw * p5 * (c1 + Ktens)) * denom1 - stress12_3(iw) = (stress12_3(iw) * (c1 - arlx1i * revp) + c0sw * shearsw * p5 * (c1 + Ktens)) * denom1 - stress12_4(iw) = (stress12_4(iw) * (c1 - arlx1i * revp) + c0se * shearse * p5 * (c1 + Ktens)) * denom1 - - !-------------------------------------------------------------- - ! combinations of the stresses for the momentum equation - ! (kg/s^2) - !-------------------------------------------------------------- - - ssigpn = stressp_1(iw) + stressp_2(iw) - ssigps = stressp_3(iw) + stressp_4(iw) - ssigpe = stressp_1(iw) + stressp_4(iw) - ssigpw = stressp_2(iw) + stressp_3(iw) - ssigp1 = (stressp_1(iw) + stressp_3(iw)) * p055 - ssigp2 = (stressp_2(iw) + stressp_4(iw)) * p055 - - ssigmn = stressm_1(iw) + stressm_2(iw) - ssigms = stressm_3(iw) + stressm_4(iw) - ssigme = stressm_1(iw) + stressm_4(iw) - ssigmw = stressm_2(iw) + stressm_3(iw) - ssigm1 = (stressm_1(iw) + stressm_3(iw)) * p055 - ssigm2 = (stressm_2(iw) + stressm_4(iw)) * p055 - - ssig12n = stress12_1(iw) + stress12_2(iw) - ssig12s = stress12_3(iw) + stress12_4(iw) - ssig12e = stress12_1(iw) + stress12_4(iw) - ssig12w = stress12_2(iw) + stress12_3(iw) - ssig121 = (stress12_1(iw) + stress12_3(iw)) * p111 - ssig122 = (stress12_2(iw) + stress12_4(iw)) * p111 - - csigpne = p111 * stressp_1(iw) + ssigp2 + p027 * stressp_3(iw) - csigpnw = p111 * stressp_2(iw) + ssigp1 + p027 * stressp_4(iw) - csigpsw = p111 * stressp_3(iw) + ssigp2 + p027 * stressp_1(iw) - csigpse = p111 * stressp_4(iw) + ssigp1 + p027 * stressp_2(iw) - - csigmne = p111 * stressm_1(iw) + ssigm2 + p027 * stressm_3(iw) - csigmnw = p111 * stressm_2(iw) + ssigm1 + p027 * stressm_4(iw) - csigmsw = p111 * stressm_3(iw) + ssigm2 + p027 * stressm_1(iw) - csigmse = p111 * stressm_4(iw) + ssigm1 + p027 * stressm_2(iw) - - csig12ne = p222 * stress12_1(iw) + ssig122 + p055 * stress12_3(iw) - csig12nw = p222 * stress12_2(iw) + ssig121 + p055 * stress12_4(iw) - csig12sw = p222 * stress12_3(iw) + ssig122 + p055 * stress12_1(iw) - csig12se = p222 * stress12_4(iw) + ssig121 + p055 * stress12_2(iw) - - str12ew = p5 * dxT(iw) * (p333 * ssig12e + p166 * ssig12w) - str12we = p5 * dxT(iw) * (p333 * ssig12w + p166 * ssig12e) - str12ns = p5 * dyT(iw) * (p333 * ssig12n + p166 * ssig12s) - str12sn = p5 * dyT(iw) * (p333 * ssig12s + p166 * ssig12n) - - 
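      ! The str1-str4 (u momentum) and str5-str8 (v momentum) terms computed
      ! below are this T cell's contributions to the divergence of the internal
      ! stress at the four surrounding U points; stepu_iter/stepu_last gather
      ! them through the nw, sw and sse neighbour indices and scale by uarear
      ! to form the strintx and strinty forcing terms.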
!-------------------------------------------------------------- - ! for dF/dx (u momentum) - !-------------------------------------------------------------- - - strp_tmp = p25 * dyT(iw) * (p333 * ssigpn + p166 * ssigps) - strm_tmp = p25 * dyT(iw) * (p333 * ssigmn + p166 * ssigms) - - ! northeast (i,j) - str1(iw) = -strp_tmp - strm_tmp - str12ew & - + dxhy * (-csigpne + csigmne) + dyhx * csig12ne - - ! northwest (i+1,j) - str2(iw) = strp_tmp + strm_tmp - str12we & - + dxhy * (-csigpnw + csigmnw) + dyhx * csig12nw - - strp_tmp = p25 * dyT(iw) * (p333 * ssigps + p166 * ssigpn) - strm_tmp = p25 * dyT(iw) * (p333 * ssigms + p166 * ssigmn) - - ! southeast (i,j+1) - str3(iw) = -strp_tmp - strm_tmp + str12ew & - + dxhy * (-csigpse + csigmse) + dyhx * csig12se - - ! southwest (i+1,j+1) - str4(iw) = strp_tmp + strm_tmp + str12we & - + dxhy * (-csigpsw + csigmsw) + dyhx * csig12sw - - !-------------------------------------------------------------- - ! for dF/dy (v momentum) - !-------------------------------------------------------------- - - strp_tmp = p25 * dxT(iw) * (p333 * ssigpe + p166 * ssigpw) - strm_tmp = p25 * dxT(iw) * (p333 * ssigme + p166 * ssigmw) - - ! northeast (i,j) - str5(iw) = -strp_tmp + strm_tmp - str12ns & - - dyhx * (csigpne + csigmne) + dxhy * csig12ne - - ! southeast (i,j+1) - str6(iw) = strp_tmp - strm_tmp - str12sn & - - dyhx * (csigpse + csigmse) + dxhy * csig12se - - strp_tmp = p25 * dxT(iw) * (p333 * ssigpw + p166 * ssigpe) - strm_tmp = p25 * dxT(iw) * (p333 * ssigmw + p166 * ssigme) - - ! northwest (i+1,j) - str7(iw) = -strp_tmp + strm_tmp + str12ns & - - dyhx * (csigpnw + csigmnw) + dxhy * csig12nw - - ! southwest (i+1,j+1) - str8(iw) = strp_tmp - strm_tmp + str12sn & - - dyhx * (csigpsw + csigmsw) + dxhy * csig12sw - - end do -#ifdef _OPENACC - !$acc end parallel -#endif - - end subroutine stress_iter - -!======================================================================= - - subroutine stress_last(NA_len, ee, ne, se, lb, ub, uvel, vvel, dxT, & - dyT, hte, htn, htem1, htnm1, strength, stressp_1, stressp_2, & - stressp_3, stressp_4, stressm_1, stressm_2, stressm_3, & - stressm_4, stress12_1, stress12_2, stress12_3, stress12_4, str1, & - str2, str3, str4, str5, str6, str7, str8, skiptcell, tarear, divu, & - rdg_conv, rdg_shear, shear) - - use ice_kinds_mod - use ice_constants, only : p027, p055, p111, p166, p222, p25, & - p333, p5, c1p5, c1, c0 - use ice_dyn_shared, only : ecci, denom1, arlx1i, Ktens, revp,& - deltaminEVP - - implicit none - - integer(kind=int_kind), intent(in) :: NA_len, lb, ub - integer(kind=int_kind), dimension(:), intent(in), contiguous :: & - ee, ne, se - real(kind=dbl_kind), dimension(:), intent(in), contiguous :: & - strength, uvel, vvel, dxT, dyT, hte, htn, htem1, htnm1, tarear - logical(kind=log_kind), intent(in), dimension(:) :: skiptcell - real(kind=dbl_kind), dimension(:), intent(inout), contiguous :: & - stressp_1, stressp_2, stressp_3, stressp_4, stressm_1, & - stressm_2, stressm_3, stressm_4, stress12_1, stress12_2, & - stress12_3, stress12_4 - real(kind=dbl_kind), dimension(:), intent(out), contiguous :: & - str1, str2, str3, str4, str5, str6, str7, str8, divu, & - rdg_conv, rdg_shear, shear - - ! 
local variables - - integer(kind=int_kind) :: iw, il, iu - real(kind=dbl_kind) :: divune, divunw, divuse, divusw, & - tensionne, tensionnw, tensionse, tensionsw, shearne, shearnw, & - shearse, shearsw, Deltane, Deltanw, Deltase, Deltasw, c0ne, & - c0nw, c0se, c0sw, c1ne, c1nw, c1se, c1sw, ssigpn, ssigps, & - ssigpe, ssigpw, ssigmn, ssigms, ssigme, ssigmw, ssig12n, & - ssig12s, ssig12e, ssig12w, ssigp1, ssigp2, ssigm1, ssigm2, & - ssig121, ssig122, csigpne, csigpnw, csigpse, csigpsw, & - csigmne, csigmnw, csigmse, csigmsw, csig12ne, csig12nw, & - csig12se, csig12sw, str12ew, str12we, str12ns, str12sn, & - strp_tmp, strm_tmp, tmp_uvel_ee, tmp_vvel_se, tmp_vvel_ee, & - tmp_vvel_ne, tmp_uvel_ne, tmp_uvel_se, dxhy, dyhx, cxp, cyp, & - cxm, cym, tmparea, DminTarea - - character(len=*), parameter :: subname = '(stress_last)' - -#ifdef _OPENACC - !$acc parallel & - !$acc present(ee, ne, se, strength, uvel, vvel, dxT, dyT, hte, & - !$acc htn, htem1, htnm1, str1, str2, str3, str4, str5, str6, & - !$acc str7, str8, stressp_1, stressp_2, stressp_3, stressp_4, & - !$acc stressm_1, stressm_2, stressm_3, stressm_4, stress12_1, & - !$acc stress12_2, stress12_3, stress12_4, tarear, divu, & - !$acc rdg_conv, rdg_shear, shear, skiptcell) - !$acc loop - do iw = 1, NA_len -#else - call domp_get_domain(lb, ub, il, iu) - do iw = il, iu -#endif - - if (skiptcell(iw)) cycle - - tmparea = dxT(iw) * dyT(iw) ! necessary to split calc of DminTarea. Otherwize not binary identical - DminTarea = deltaminEVP * tmparea - dxhy = p5 * (hte(iw) - htem1(iw)) - dyhx = p5 * (htn(iw) - htnm1(iw)) - cxp = c1p5 * htn(iw) - p5 * htnm1(iw) - cyp = c1p5 * hte(iw) - p5 * htem1(iw) - cxm = -(c1p5 * htnm1(iw) - p5 * htn(iw)) - cym = -(c1p5 * htem1(iw) - p5 * hte(iw)) - - !-------------------------------------------------------------- - ! strain rates - ! NOTE: these are actually strain rates * area (m^2/s) - !-------------------------------------------------------------- - - tmp_uvel_ne = uvel(ne(iw)) - tmp_uvel_se = uvel(se(iw)) - tmp_uvel_ee = uvel(ee(iw)) - - tmp_vvel_ee = vvel(ee(iw)) - tmp_vvel_se = vvel(se(iw)) - tmp_vvel_ne = vvel(ne(iw)) - - ! divergence = e_11 + e_22 - divune = cyp * uvel(iw) - dyT(iw) * tmp_uvel_ee & - + cxp * vvel(iw) - dxT(iw) * tmp_vvel_se - divunw = cym * tmp_uvel_ee + dyT(iw) * uvel(iw) & - + cxp * tmp_vvel_ee - dxT(iw) * tmp_vvel_ne - divusw = cym * tmp_uvel_ne + dyT(iw) * tmp_uvel_se & - + cxm * tmp_vvel_ne + dxT(iw) * tmp_vvel_ee - divuse = cyp * tmp_uvel_se - dyT(iw) * tmp_uvel_ne & - + cxm * tmp_vvel_se + dxT(iw) * vvel(iw) - - ! tension strain rate = e_11 - e_22 - tensionne = -cym * uvel(iw) - dyT(iw) * tmp_uvel_ee & - + cxm * vvel(iw) + dxT(iw) * tmp_vvel_se - tensionnw = -cyp * tmp_uvel_ee + dyT(iw) * uvel(iw) & - + cxm * tmp_vvel_ee + dxT(iw) * tmp_vvel_ne - tensionsw = -cyp * tmp_uvel_ne + dyT(iw) * tmp_uvel_se & - + cxp * tmp_vvel_ne - dxT(iw) * tmp_vvel_ee - tensionse = -cym * tmp_uvel_se - dyT(iw) * tmp_uvel_ne & - + cxp * tmp_vvel_se - dxT(iw) * vvel(iw) - - ! shearing strain rate = 2 * e_12 - shearne = -cym * vvel(iw) - dyT(iw) * tmp_vvel_ee & - - cxm * uvel(iw) - dxT(iw) * tmp_uvel_se - shearnw = -cyp * tmp_vvel_ee + dyT(iw) * vvel(iw) & - - cxm * tmp_uvel_ee - dxT(iw) * tmp_uvel_ne - shearsw = -cyp * tmp_vvel_ne + dyT(iw) * tmp_vvel_se & - - cxp * tmp_uvel_ne + dxT(iw) * tmp_uvel_ee - shearse = -cym * tmp_vvel_se - dyT(iw) * tmp_vvel_ne & - - cxp * tmp_uvel_se + dxT(iw) * uvel(iw) - - ! 
Delta (in the denominator of zeta and eta) - Deltane = sqrt(divune**2 + ecci * (tensionne**2 + shearne**2)) - Deltanw = sqrt(divunw**2 + ecci * (tensionnw**2 + shearnw**2)) - Deltasw = sqrt(divusw**2 + ecci * (tensionsw**2 + shearsw**2)) - Deltase = sqrt(divuse**2 + ecci * (tensionse**2 + shearse**2)) - - !-------------------------------------------------------------- - ! on last subcycle, save quantities for mechanical - ! redistribution - !-------------------------------------------------------------- - - divu(iw) = p25 * (divune + divunw + divuse + divusw) * tarear(iw) - rdg_conv(iw) = -min(divu(iw), c0) ! TODO: Could move outside the entire kernel - rdg_shear(iw) = p5 * (p25 * (Deltane + Deltanw + Deltase + Deltasw) * tarear(iw) - abs(divu(iw))) - - ! diagnostic only - ! shear = sqrt(tension**2 + shearing**2) - shear(iw) = p25 * tarear(iw) * sqrt((tensionne + tensionnw + tensionse + tensionsw)**2 & - + (shearne + shearnw + shearse + shearsw)**2) - - !-------------------------------------------------------------- - ! replacement pressure/Delta (kg/s) - ! save replacement pressure for principal stress calculation - !-------------------------------------------------------------- - - c0ne = strength(iw) / max(Deltane, DminTarea) - c0nw = strength(iw) / max(Deltanw, DminTarea) - c0sw = strength(iw) / max(Deltasw, DminTarea) - c0se = strength(iw) / max(Deltase, DminTarea) - - c1ne = c0ne * arlx1i - c1nw = c0nw * arlx1i - c1sw = c0sw * arlx1i - c1se = c0se * arlx1i - - c0ne = c1ne * ecci - c0nw = c1nw * ecci - c0sw = c1sw * ecci - c0se = c1se * ecci - - !-------------------------------------------------------------- - ! the stresses (kg/s^2) - ! (1) northeast, (2) northwest, (3) southwest, (4) southeast - !-------------------------------------------------------------- - - stressp_1(iw) = (stressp_1(iw) * (c1 - arlx1i * revp) & - + c1ne * (divune * (c1 + Ktens) - Deltane * (c1 - Ktens))) * denom1 - stressp_2(iw) = (stressp_2(iw) * (c1 - arlx1i * revp) & - + c1nw * (divunw * (c1 + Ktens) - Deltanw * (c1 - Ktens))) * denom1 - stressp_3(iw) = (stressp_3(iw) * (c1 - arlx1i * revp) & - + c1sw * (divusw * (c1 + Ktens) - Deltasw * (c1 - Ktens))) * denom1 - stressp_4(iw) = (stressp_4(iw) * (c1 - arlx1i * revp) & - + c1se * (divuse * (c1 + Ktens) - Deltase * (c1 - Ktens))) * denom1 - - stressm_1(iw) = (stressm_1(iw) * (c1 - arlx1i * revp) + c0ne * tensionne * (c1 + Ktens)) * denom1 - stressm_2(iw) = (stressm_2(iw) * (c1 - arlx1i * revp) + c0nw * tensionnw * (c1 + Ktens)) * denom1 - stressm_3(iw) = (stressm_3(iw) * (c1 - arlx1i * revp) + c0sw * tensionsw * (c1 + Ktens)) * denom1 - stressm_4(iw) = (stressm_4(iw) * (c1 - arlx1i * revp) + c0se * tensionse * (c1 + Ktens)) * denom1 - - stress12_1(iw) = (stress12_1(iw) * (c1 - arlx1i * revp) + c0ne * shearne * p5 * (c1 + Ktens)) * denom1 - stress12_2(iw) = (stress12_2(iw) * (c1 - arlx1i * revp) + c0nw * shearnw * p5 * (c1 + Ktens)) * denom1 - stress12_3(iw) = (stress12_3(iw) * (c1 - arlx1i * revp) + c0sw * shearsw * p5 * (c1 + Ktens)) * denom1 - stress12_4(iw) = (stress12_4(iw) * (c1 - arlx1i * revp) + c0se * shearse * p5 * (c1 + Ktens)) * denom1 - - !-------------------------------------------------------------- - ! combinations of the stresses for the momentum equation - ! 
(kg/s^2) - !-------------------------------------------------------------- - - ssigpn = stressp_1(iw) + stressp_2(iw) - ssigps = stressp_3(iw) + stressp_4(iw) - ssigpe = stressp_1(iw) + stressp_4(iw) - ssigpw = stressp_2(iw) + stressp_3(iw) - ssigp1 = (stressp_1(iw) + stressp_3(iw)) * p055 - ssigp2 = (stressp_2(iw) + stressp_4(iw)) * p055 - - ssigmn = stressm_1(iw) + stressm_2(iw) - ssigms = stressm_3(iw) + stressm_4(iw) - ssigme = stressm_1(iw) + stressm_4(iw) - ssigmw = stressm_2(iw) + stressm_3(iw) - ssigm1 = (stressm_1(iw) + stressm_3(iw)) * p055 - ssigm2 = (stressm_2(iw) + stressm_4(iw)) * p055 - - ssig12n = stress12_1(iw) + stress12_2(iw) - ssig12s = stress12_3(iw) + stress12_4(iw) - ssig12e = stress12_1(iw) + stress12_4(iw) - ssig12w = stress12_2(iw) + stress12_3(iw) - ssig121 = (stress12_1(iw) + stress12_3(iw)) * p111 - ssig122 = (stress12_2(iw) + stress12_4(iw)) * p111 - - csigpne = p111 * stressp_1(iw) + ssigp2 + p027 * stressp_3(iw) - csigpnw = p111 * stressp_2(iw) + ssigp1 + p027 * stressp_4(iw) - csigpsw = p111 * stressp_3(iw) + ssigp2 + p027 * stressp_1(iw) - csigpse = p111 * stressp_4(iw) + ssigp1 + p027 * stressp_2(iw) - - csigmne = p111 * stressm_1(iw) + ssigm2 + p027 * stressm_3(iw) - csigmnw = p111 * stressm_2(iw) + ssigm1 + p027 * stressm_4(iw) - csigmsw = p111 * stressm_3(iw) + ssigm2 + p027 * stressm_1(iw) - csigmse = p111 * stressm_4(iw) + ssigm1 + p027 * stressm_2(iw) - - csig12ne = p222 * stress12_1(iw) + ssig122 + p055 * stress12_3(iw) - csig12nw = p222 * stress12_2(iw) + ssig121 + p055 * stress12_4(iw) - csig12sw = p222 * stress12_3(iw) + ssig122 + p055 * stress12_1(iw) - csig12se = p222 * stress12_4(iw) + ssig121 + p055 * stress12_2(iw) - - str12ew = p5 * dxT(iw) * (p333 * ssig12e + p166 * ssig12w) - str12we = p5 * dxT(iw) * (p333 * ssig12w + p166 * ssig12e) - str12ns = p5 * dyT(iw) * (p333 * ssig12n + p166 * ssig12s) - str12sn = p5 * dyT(iw) * (p333 * ssig12s + p166 * ssig12n) - - !-------------------------------------------------------------- - ! for dF/dx (u momentum) - !-------------------------------------------------------------- - - strp_tmp = p25 * dyT(iw) * (p333 * ssigpn + p166 * ssigps) - strm_tmp = p25 * dyT(iw) * (p333 * ssigmn + p166 * ssigms) - - ! northeast (i,j) - str1(iw) = -strp_tmp - strm_tmp - str12ew & - + dxhy * (-csigpne + csigmne) + dyhx * csig12ne - - ! northwest (i+1,j) - str2(iw) = strp_tmp + strm_tmp - str12we & - + dxhy * (-csigpnw + csigmnw) + dyhx * csig12nw - - strp_tmp = p25 * dyT(iw) * (p333 * ssigps + p166 * ssigpn) - strm_tmp = p25 * dyT(iw) * (p333 * ssigms + p166 * ssigmn) - - ! southeast (i,j+1) - str3(iw) = -strp_tmp - strm_tmp + str12ew & - + dxhy * (-csigpse + csigmse) + dyhx * csig12se - - ! southwest (i+1,j+1) - str4(iw) = strp_tmp + strm_tmp + str12we & - + dxhy * (-csigpsw + csigmsw) + dyhx * csig12sw - - !-------------------------------------------------------------- - ! for dF/dy (v momentum) - !-------------------------------------------------------------- - - strp_tmp = p25 * dxT(iw) * (p333 * ssigpe + p166 * ssigpw) - strm_tmp = p25 * dxT(iw) * (p333 * ssigme + p166 * ssigmw) - - ! northeast (i,j) - str5(iw) = -strp_tmp + strm_tmp - str12ns & - - dyhx * (csigpne + csigmne) + dxhy * csig12ne - - ! southeast (i,j+1) - str6(iw) = strp_tmp - strm_tmp - str12sn & - - dyhx * (csigpse + csigmse) + dxhy * csig12se - - strp_tmp = p25 * dxT(iw) * (p333 * ssigpw + p166 * ssigpe) - strm_tmp = p25 * dxT(iw) * (p333 * ssigmw + p166 * ssigme) - - ! 
northwest (i+1,j) - str7(iw) = -strp_tmp + strm_tmp + str12ns & - - dyhx * (csigpnw + csigmnw) + dxhy * csig12nw - - ! southwest (i+1,j+1) - str8(iw) = strp_tmp - strm_tmp + str12sn & - - dyhx * (csigpsw + csigmsw) + dxhy * csig12sw - - end do -#ifdef _OPENACC - !$acc end parallel -#endif - - end subroutine stress_last - -!======================================================================= - - subroutine stepu_iter(NA_len, rhow, lb, ub, Cw, aiu, uocn, vocn, & - forcex, forcey, umassdti, fm, uarear, Tbu, uvel_init, vvel_init, & - uvel, vvel, str1, str2, str3, str4, str5, str6, str7, str8, nw, & - sw, sse, skipucell) - - use ice_kinds_mod - use ice_constants, only : c0, c1 - use ice_dyn_shared, only : brlx, revp, u0, cosw, sinw - - implicit none - - integer(kind=int_kind), intent(in) :: NA_len, lb, ub - real(kind=dbl_kind), intent(in) :: rhow - logical(kind=log_kind), intent(in), dimension(:) :: skipucell - integer(kind=int_kind), dimension(:), intent(in), contiguous :: & - nw, sw, sse - real(kind=dbl_kind), dimension(:), intent(in), contiguous :: & - uvel_init, vvel_init, aiu, forcex, forcey, umassdti, Tbu, & - uocn, vocn, fm, uarear, Cw, str1, str2, str3, str4, str5, & - str6, str7, str8 - real(kind=dbl_kind), dimension(:), intent(inout), contiguous :: & - uvel, vvel - - ! local variables - - integer(kind=int_kind) :: iw, il, iu - real(kind=dbl_kind) :: uold, vold, vrel, cca, ccb, ab2, cc1, & - cc2, taux, tauy, Cb, tmp_str2_nw, tmp_str3_sse, tmp_str4_sw, & - tmp_str6_sse, tmp_str7_nw, tmp_str8_sw, waterx, watery, & - tmp_strintx, tmp_strinty - - character(len=*), parameter :: subname = '(stepu_iter)' - -#ifdef _OPENACC - !$acc parallel & - !$acc present(Cw, aiu, uocn, vocn, forcex, forcey, umassdti, fm, & - !$acc uarear, Tbu, uvel_init, vvel_init, nw, sw, sse, skipucell, & - !$acc str1, str2, str3, str4, str5, str6, str7, str8, uvel, & - !$acc vvel) - !$acc loop - do iw = 1, NA_len -#else - call domp_get_domain(lb, ub, il, iu) - do iw = il, iu -#endif - - if (skipucell(iw)) cycle - - uold = uvel(iw) - vold = vvel(iw) - - vrel = aiu(iw) * rhow * Cw(iw) * sqrt((uocn(iw) - uold)**2 + (vocn(iw) - vold)**2) - - waterx = uocn(iw) * cosw - vocn(iw) * sinw * sign(c1, fm(iw)) - watery = vocn(iw) * cosw + uocn(iw) * sinw * sign(c1, fm(iw)) - - taux = vrel * waterx - tauy = vrel * watery - - Cb = Tbu(iw) / (sqrt(uold**2 + vold**2) + u0) - - cca = (brlx + revp) * umassdti(iw) + vrel * cosw + Cb - ccb = fm(iw) + sign(c1, fm(iw)) * vrel * sinw - - ab2 = cca**2 + ccb**2 - - tmp_str2_nw = str2(nw(iw)) - tmp_str3_sse = str3(sse(iw)) - tmp_str4_sw = str4(sw(iw)) - tmp_str6_sse = str6(sse(iw)) - tmp_str7_nw = str7(nw(iw)) - tmp_str8_sw = str8(sw(iw)) - - tmp_strintx = uarear(iw) * (str1(iw) + tmp_str2_nw + tmp_str3_sse + tmp_str4_sw) - tmp_strinty = uarear(iw) * (str5(iw) + tmp_str6_sse + tmp_str7_nw + tmp_str8_sw) - - cc1 = tmp_strintx + forcex(iw) + taux & - + umassdti(iw) * (brlx * uold + revp * uvel_init(iw)) - cc2 = tmp_strinty + forcey(iw) + tauy & - + umassdti(iw) * (brlx * vold + revp * vvel_init(iw)) - - uvel(iw) = (cca * cc1 + ccb * cc2) / ab2 - vvel(iw) = (cca * cc2 - ccb * cc1) / ab2 - - end do -#ifdef _OPENACC - !$acc end parallel -#endif - - end subroutine stepu_iter - -!======================================================================= - - subroutine stepu_last(NA_len, rhow, lb, ub, Cw, aiu, uocn, vocn, & - forcex, forcey, umassdti, fm, uarear, Tbu, uvel_init, vvel_init, & - uvel, vvel, str1, str2, str3, str4, str5, str6, str7, str8, nw, & - sw, sse, skipucell, strintx, strinty, 
taubx, tauby) - - use ice_kinds_mod - use ice_constants, only : c0, c1 - use ice_dyn_shared, only : brlx, revp, u0, cosw, sinw - - implicit none - - integer(kind=int_kind), intent(in) :: NA_len, lb, ub - real(kind=dbl_kind), intent(in) :: rhow - logical(kind=log_kind), intent(in), dimension(:) :: skipucell - integer(kind=int_kind), dimension(:), intent(in), contiguous :: & - nw, sw, sse - real(kind=dbl_kind), dimension(:), intent(in), contiguous :: & - uvel_init, vvel_init, aiu, forcex, forcey, umassdti, Tbu, & - uocn, vocn, fm, uarear, Cw, str1, str2, str3, str4, str5, & - str6, str7, str8 - real(kind=dbl_kind), dimension(:), intent(inout), contiguous :: & - uvel, vvel, strintx, strinty, taubx, tauby - - ! local variables - - integer(kind=int_kind) :: iw, il, iu - real(kind=dbl_kind) :: uold, vold, vrel, cca, ccb, ab2, cc1, & - cc2, taux, tauy, Cb, tmp_str2_nw, tmp_str3_sse, tmp_str4_sw, & - tmp_str6_sse, tmp_str7_nw, tmp_str8_sw, waterx, watery - - character(len=*), parameter :: subname = '(stepu_last)' - -#ifdef _OPENACC - !$acc parallel & - !$acc present(Cw, aiu, uocn, vocn, forcex, forcey, umassdti, fm, & - !$acc uarear, Tbu, uvel_init, vvel_init, nw, sw, sse, skipucell, & - !$acc str1, str2, str3, str4, str5, str6, str7, str8, uvel, & - !$acc vvel, strintx, strinty, taubx, tauby) - !$acc loop - do iw = 1, NA_len -#else - call domp_get_domain(lb, ub, il, iu) - do iw = il, iu -#endif - - if (skipucell(iw)) cycle - - uold = uvel(iw) - vold = vvel(iw) - - vrel = aiu(iw) * rhow * Cw(iw) * sqrt((uocn(iw) - uold)**2 + (vocn(iw) - vold)**2) - - waterx = uocn(iw) * cosw - vocn(iw) * sinw * sign(c1, fm(iw)) - watery = vocn(iw) * cosw + uocn(iw) * sinw * sign(c1, fm(iw)) - - taux = vrel * waterx - tauy = vrel * watery - - Cb = Tbu(iw) / (sqrt(uold**2 + vold**2) + u0) - - cca = (brlx + revp) * umassdti(iw) + vrel * cosw + Cb - ccb = fm(iw) + sign(c1, fm(iw)) * vrel * sinw - - ab2 = cca**2 + ccb**2 - - tmp_str2_nw = str2(nw(iw)) - tmp_str3_sse = str3(sse(iw)) - tmp_str4_sw = str4(sw(iw)) - tmp_str6_sse = str6(sse(iw)) - tmp_str7_nw = str7(nw(iw)) - tmp_str8_sw = str8(sw(iw)) - - strintx(iw) = uarear(iw) * (str1(iw) + tmp_str2_nw + tmp_str3_sse + tmp_str4_sw) - strinty(iw) = uarear(iw) * (str5(iw) + tmp_str6_sse + tmp_str7_nw + tmp_str8_sw) - - cc1 = strintx(iw) + forcex(iw) + taux & - + umassdti(iw) * (brlx * uold + revp * uvel_init(iw)) - cc2 = strinty(iw) + forcey(iw) + tauy & - + umassdti(iw) * (brlx * vold + revp * vvel_init(iw)) - - uvel(iw) = (cca * cc1 + ccb * cc2) / ab2 - vvel(iw) = (cca * cc2 - ccb * cc1) / ab2 - - ! calculate seabed stress component for outputs - taubx(iw) = -uvel(iw) * Tbu(iw) / (sqrt(uold**2 + vold**2) + u0) - tauby(iw) = -vvel(iw) * Tbu(iw) / (sqrt(uold**2 + vold**2) + u0) - - end do -#ifdef _OPENACC - !$acc end parallel -#endif - - end subroutine stepu_last - -!======================================================================= - - subroutine evp1d_halo_update(NAVEL_len, lb, ub, uvel, vvel, & - halo_parent) - - use ice_kinds_mod - - implicit none - - integer(kind=int_kind), intent(in) :: NAVEL_len, lb, ub - integer(kind=int_kind), dimension(:), intent(in), contiguous :: & - halo_parent - real(kind=dbl_kind), dimension(:), intent(inout), contiguous :: & - uvel, vvel - - ! 
local variables - - integer (kind=int_kind) :: iw, il, iu - - character(len=*), parameter :: subname = '(evp1d_halo_update)' - -#ifdef _OPENACC - !$acc parallel & - !$acc present(uvel, vvel) - !$acc loop - do iw = 1, NAVEL_len - if (halo_parent(iw) == 0) cycle - uvel(iw) = uvel(halo_parent(iw)) - vvel(iw) = vvel(halo_parent(iw)) - end do - !$acc end parallel -#else - call domp_get_domain(lb, ub, il, iu) - do iw = il, iu - if (halo_parent(iw) == 0) cycle - uvel(iw) = uvel(halo_parent(iw)) - vvel(iw) = vvel(halo_parent(iw)) - end do - call domp_get_domain(ub + 1, NAVEL_len, il, iu) - do iw = il, iu - if (halo_parent(iw) == 0) cycle - uvel(iw) = uvel(halo_parent(iw)) - vvel(iw) = vvel(halo_parent(iw)) - end do -#endif - - end subroutine evp1d_halo_update - -!======================================================================= - - subroutine alloc1d(na) - - implicit none - - integer(kind=int_kind), intent(in) :: na - - ! local variables - - integer(kind=int_kind) :: ierr - - character(len=*), parameter :: subname = '(alloc1d)' - - allocate( & - ! helper indices for neighbours - indj(1:na), indi(1:na), ee(1:na), ne(1:na), se(1:na), & - nw(1:na), sw(1:na), sse(1:na), skipucell(1:na), & - skiptcell(1:na), & - ! grid distances and their "-1 neighbours" - HTE(1:na), HTN(1:na), HTEm1(1:na), HTNm1(1:na), & - ! T cells - strength(1:na), dxT(1:na), dyT(1:na), tarear(1:na), & - stressp_1(1:na), stressp_2(1:na), stressp_3(1:na), & - stressp_4(1:na), stressm_1(1:na), stressm_2(1:na), & - stressm_3(1:na), stressm_4(1:na), stress12_1(1:na), & - stress12_2(1:na), stress12_3(1:na), stress12_4(1:na), & - divu(1:na), rdg_conv(1:na), rdg_shear(1:na), shear(1:na), & - ! U cells - cdn_ocn(1:na), aiu(1:na), uocn(1:na), vocn(1:na), & - forcex(1:na), forcey(1:na), Tbu(1:na), umassdti(1:na), & - fm(1:na), uarear(1:na), strintx(1:na), strinty(1:na), & - uvel_init(1:na), vvel_init(1:na), taubx(1:na), tauby(1:na), & - ! error handling - stat=ierr) - - if (ierr /= 0) call abort_ice(subname & - // ' ERROR: could not allocate 1D arrays') - - end subroutine alloc1d - -!======================================================================= - - subroutine alloc1d_navel(navel) - - implicit none - - integer(kind=int_kind), intent(in) :: navel - - ! local variables - - integer(kind=int_kind) :: ierr - - character(len=*), parameter :: subname = '(alloc1d_navel)' - - allocate(uvel(1:navel), vvel(1:navel), indij(1:navel), & - halo_parent(1:navel), str1(1:navel), str2(1:navel), & - str3(1:navel), str4(1:navel), str5(1:navel), str6(1:navel), & - str7(1:navel), str8(1:navel), stat=ierr) - - if (ierr /= 0) call abort_ice(subname & - // ' ERROR: could not allocate 1D arrays') - - end subroutine alloc1d_navel - -!======================================================================= - - subroutine dealloc1d - - implicit none - - ! local variables - - integer(kind=int_kind) :: ierr - - character(len=*), parameter :: subname = '(dealloc1d)' - - deallocate( & - ! helper indices for neighbours - indj, indi, ee, ne, se, nw, sw, sse, skipucell, skiptcell, & - ! grid distances and their "-1 neighbours" - HTE, HTN, HTEm1, HTNm1, & - ! T cells - strength, dxT, dyT, tarear, stressp_1, stressp_2, stressp_3, & - stressp_4, stressm_1, stressm_2, stressm_3, stressm_4, & - stress12_1, stress12_2, stress12_3, stress12_4, str1, str2, & - str3, str4, str5, str6, str7, str8, divu, rdg_conv, & - rdg_shear, shear, & - ! 
U cells - cdn_ocn, aiu, uocn, vocn, forcex, forcey, Tbu, umassdti, fm, & - uarear, strintx, strinty, uvel_init, vvel_init, taubx, tauby, & - uvel, vvel, indij, halo_parent, & - ! error handling - stat=ierr) - - if (ierr /= 0) call abort_ice(subname & - // ' ERROR: could not deallocate 1D arrays') - - end subroutine dealloc1d - -!======================================================================= - - subroutine ice_dyn_evp_1d_copyin(nx, ny, nblk, nx_glob, ny_glob, & - I_iceTmask, I_iceUmask, I_cdn_ocn, I_aiu, I_uocn, I_vocn, & - I_forcex, I_forcey, I_Tbu, I_umassdti, I_fm, I_uarear, I_tarear, & - I_strintx, I_strinty, I_uvel_init, I_vvel_init, I_strength, & - I_uvel, I_vvel, I_dxT, I_dyT, I_stressp_1, I_stressp_2, & - I_stressp_3, I_stressp_4, I_stressm_1, I_stressm_2, I_stressm_3, & - I_stressm_4, I_stress12_1, I_stress12_2, I_stress12_3, & - I_stress12_4) - - use ice_gather_scatter, only : gather_global_ext - use ice_domain, only : distrb_info - use ice_communicate, only : my_task, master_task - use ice_grid, only : G_HTE, G_HTN - use ice_constants, only : c0 - - implicit none - - integer(int_kind), intent(in) :: nx, ny, nblk, nx_glob, ny_glob - logical(kind=log_kind), dimension(nx, ny, nblk), intent(in) :: & - I_iceTmask, I_iceUmask - real(kind=dbl_kind), dimension(nx, ny, nblk), intent(in) :: & - I_cdn_ocn, I_aiu, I_uocn, I_vocn, I_forcex, I_forcey, I_Tbu, & - I_umassdti, I_fm, I_uarear, I_tarear, I_strintx, I_strinty, & - I_uvel_init, I_vvel_init, I_strength, I_uvel, I_vvel, I_dxT, & - I_dyT, I_stressp_1, I_stressp_2, I_stressp_3, I_stressp_4, & - I_stressm_1, I_stressm_2, I_stressm_3, I_stressm_4, & - I_stress12_1, I_stress12_2, I_stress12_3, I_stress12_4 - - ! local variables - - logical(kind=log_kind), dimension(nx_glob, ny_glob) :: & - G_iceTmask, G_iceUmask - real(kind=dbl_kind), dimension(nx_glob, ny_glob) :: & - G_cdn_ocn, G_aiu, G_uocn, G_vocn, G_forcex, G_forcey, G_Tbu, & - G_umassdti, G_fm, G_uarear, G_tarear, G_strintx, G_strinty, & - G_uvel_init, G_vvel_init, G_strength, G_uvel, G_vvel, G_dxT, & - G_dyT, G_stressp_1, G_stressp_2, G_stressp_3, G_stressp_4, & - G_stressm_1, G_stressm_2, G_stressm_3, G_stressm_4, & - G_stress12_1, G_stress12_2, G_stress12_3, G_stress12_4 - - character(len=*), parameter :: & - subname = '(ice_dyn_evp_1d_copyin)' - - call gather_global_ext(G_iceTmask, I_iceTmask, master_task, distrb_info ) - call gather_global_ext(G_iceUmask, I_iceUmask, master_task, distrb_info ) - call gather_global_ext(G_cdn_ocn, I_cdn_ocn, master_task, distrb_info ) - call gather_global_ext(G_aiu, I_aiu, master_task, distrb_info ) - call gather_global_ext(G_uocn, I_uocn, master_task, distrb_info ) - call gather_global_ext(G_vocn, I_vocn, master_task, distrb_info ) - call gather_global_ext(G_forcex, I_forcex, master_task, distrb_info ) - call gather_global_ext(G_forcey, I_forcey, master_task, distrb_info ) - call gather_global_ext(G_Tbu, I_Tbu, master_task, distrb_info ) - call gather_global_ext(G_umassdti, I_umassdti, master_task, distrb_info ) - call gather_global_ext(G_fm, I_fm, master_task, distrb_info ) - call gather_global_ext(G_uarear, I_uarear, master_task, distrb_info ) - call gather_global_ext(G_tarear, I_tarear, master_task, distrb_info ) - call gather_global_ext(G_strintx, I_strintx, master_task, distrb_info ) - call gather_global_ext(G_strinty, I_strinty, master_task, distrb_info ) - call gather_global_ext(G_uvel_init, I_uvel_init, master_task, distrb_info ) - call gather_global_ext(G_vvel_init, I_vvel_init, master_task, distrb_info ) - call 
gather_global_ext(G_strength, I_strength, master_task, distrb_info ) - call gather_global_ext(G_uvel, I_uvel, master_task, distrb_info, c0) - call gather_global_ext(G_vvel, I_vvel, master_task, distrb_info, c0) - call gather_global_ext(G_dxT, I_dxT, master_task, distrb_info ) - call gather_global_ext(G_dyT, I_dyT, master_task, distrb_info ) - call gather_global_ext(G_stressp_1, I_stressp_1, master_task, distrb_info ) - call gather_global_ext(G_stressp_2, I_stressp_2, master_task, distrb_info ) - call gather_global_ext(G_stressp_3, I_stressp_3, master_task, distrb_info ) - call gather_global_ext(G_stressp_4, I_stressp_4, master_task, distrb_info ) - call gather_global_ext(G_stressm_1, I_stressm_1, master_task, distrb_info ) - call gather_global_ext(G_stressm_2, I_stressm_2, master_task, distrb_info ) - call gather_global_ext(G_stressm_3, I_stressm_3, master_task, distrb_info ) - call gather_global_ext(G_stressm_4, I_stressm_4, master_task, distrb_info ) - call gather_global_ext(G_stress12_1, I_stress12_1, master_task, distrb_info ) - call gather_global_ext(G_stress12_2, I_stress12_2, master_task, distrb_info ) - call gather_global_ext(G_stress12_3, I_stress12_3, master_task, distrb_info ) - call gather_global_ext(G_stress12_4, I_stress12_4, master_task, distrb_info ) - - ! all calculations id done on master task - if (my_task == master_task) then - ! find number of active points and allocate 1D vectors - call calc_na(nx_glob, ny_glob, NA_len, G_iceTmask, G_iceUmask) - call alloc1d(NA_len) - call calc_2d_indices(nx_glob, ny_glob, NA_len, G_iceTmask, G_iceUmask) - call calc_navel(nx_glob, ny_glob, NA_len, NAVEL_len) - call alloc1d_navel(NAVEL_len) - ! initialize OpenMP. FIXME: ought to be called from main - call domp_init() - !$OMP PARALLEL DEFAULT(shared) - call numainit(1, NA_len, NAVEL_len) - !$OMP END PARALLEL - ! map 2D arrays to 1D arrays - call convert_2d_1d(nx_glob, ny_glob, NA_len, NAVEL_len, & - G_HTE, G_HTN, G_cdn_ocn, G_aiu, G_uocn, G_vocn, G_forcex, & - G_forcey, G_Tbu, G_umassdti, G_fm, G_uarear, G_tarear, & - G_strintx, G_strinty, G_uvel_init, G_vvel_init, & - G_strength, G_uvel, G_vvel, G_dxT, G_dyT, G_stressp_1, & - G_stressp_2, G_stressp_3, G_stressp_4, G_stressm_1, & - G_stressm_2, G_stressm_3, G_stressm_4, G_stress12_1, & - G_stress12_2, G_stress12_3, G_stress12_4) - call calc_halo_parent(nx_glob, ny_glob, NA_len, NAVEL_len, G_iceTmask) - end if - - end subroutine ice_dyn_evp_1d_copyin - -!======================================================================= - - subroutine ice_dyn_evp_1d_copyout(nx, ny, nblk, nx_glob, ny_glob, & - I_uvel, I_vvel, I_strintx, I_strinty, I_stressp_1, I_stressp_2, & - I_stressp_3, I_stressp_4, I_stressm_1, I_stressm_2, I_stressm_3, & - I_stressm_4, I_stress12_1, I_stress12_2, I_stress12_3, & - I_stress12_4, I_divu, I_rdg_conv, I_rdg_shear, I_shear, I_taubx, & - I_tauby) - - use ice_constants, only : c0 - use ice_gather_scatter, only : scatter_global_ext - use ice_domain, only : distrb_info - use ice_communicate, only : my_task, master_task - - implicit none - - integer(int_kind), intent(in) :: nx, ny, nblk, nx_glob, ny_glob - real(dbl_kind), dimension(nx, ny, nblk), intent(out) :: I_uvel, & - I_vvel, I_strintx, I_strinty, I_stressp_1, I_stressp_2, & - I_stressp_3, I_stressp_4, I_stressm_1, I_stressm_2, & - I_stressm_3, I_stressm_4, I_stress12_1, I_stress12_2, & - I_stress12_3, I_stress12_4, I_divu, I_rdg_conv, I_rdg_shear, & - I_shear, I_taubx, I_tauby - - ! 
local variables - - integer(int_kind) :: iw, lo, up, j, i - real(dbl_kind), dimension(nx_glob, ny_glob) :: G_uvel, G_vvel, & - G_strintx, G_strinty, G_stressp_1, G_stressp_2, G_stressp_3, & - G_stressp_4, G_stressm_1, G_stressm_2, G_stressm_3, & - G_stressm_4, G_stress12_1, G_stress12_2, G_stress12_3, & - G_stress12_4, G_divu, G_rdg_conv, G_rdg_shear, G_shear, & - G_taubx, G_tauby - - character(len=*), parameter :: & - subname = '(ice_dyn_evp_1d_copyout)' - - ! remap 1D arrays into 2D arrays - if (my_task == master_task) then - - G_uvel = c0 - G_vvel = c0 - G_strintx = c0 - G_strinty = c0 - G_stressp_1 = c0 - G_stressp_2 = c0 - G_stressp_3 = c0 - G_stressp_4 = c0 - G_stressm_1 = c0 - G_stressm_2 = c0 - G_stressm_3 = c0 - G_stressm_4 = c0 - G_stress12_1 = c0 - G_stress12_2 = c0 - G_stress12_3 = c0 - G_stress12_4 = c0 - G_divu = c0 - G_rdg_conv = c0 - G_rdg_shear = c0 - G_shear = c0 - G_taubx = c0 - G_tauby = c0 - - !$OMP PARALLEL PRIVATE(iw, lo, up, j, i) - call domp_get_domain(1, NA_len, lo, up) - do iw = lo, up - ! get 2D indices - i = indi(iw) - j = indj(iw) - ! remap - G_strintx(i, j) = strintx(iw) - G_strinty(i, j) = strinty(iw) - G_stressp_1(i, j) = stressp_1(iw) - G_stressp_2(i, j) = stressp_2(iw) - G_stressp_3(i, j) = stressp_3(iw) - G_stressp_4(i, j) = stressp_4(iw) - G_stressm_1(i, j) = stressm_1(iw) - G_stressm_2(i, j) = stressm_2(iw) - G_stressm_3(i, j) = stressm_3(iw) - G_stressm_4(i, j) = stressm_4(iw) - G_stress12_1(i, j) = stress12_1(iw) - G_stress12_2(i, j) = stress12_2(iw) - G_stress12_3(i, j) = stress12_3(iw) - G_stress12_4(i, j) = stress12_4(iw) - G_divu(i, j) = divu(iw) - G_rdg_conv(i, j) = rdg_conv(iw) - G_rdg_shear(i, j) = rdg_shear(iw) - G_shear(i, j) = shear(iw) - G_taubx(i, j) = taubx(iw) - G_tauby(i, j) = tauby(iw) - G_uvel(i, j) = uvel(iw) - G_vvel(i, j) = vvel(iw) - end do - call domp_get_domain(NA_len + 1, NAVEL_len, lo, up) - do iw = lo, up - ! get 2D indices - j = int((indij(iw) - 1) / (nx_glob)) + 1 - i = indij(iw) - (j - 1) * nx_glob - ! remap - G_uvel(i, j) = uvel(iw) - G_vvel(i, j) = vvel(iw) - end do - !$OMP END PARALLEL - - call dealloc1d() - - end if - - ! 
scatter data on all tasks - call scatter_global_ext(I_uvel, G_uvel, master_task, distrb_info) - call scatter_global_ext(I_vvel, G_vvel, master_task, distrb_info) - call scatter_global_ext(I_strintx, G_strintx, master_task, distrb_info) - call scatter_global_ext(I_strinty, G_strinty, master_task, distrb_info) - call scatter_global_ext(I_stressp_1, G_stressp_1, master_task, distrb_info) - call scatter_global_ext(I_stressp_2, G_stressp_2, master_task, distrb_info) - call scatter_global_ext(I_stressp_3, G_stressp_3, master_task, distrb_info) - call scatter_global_ext(I_stressp_4, G_stressp_4, master_task, distrb_info) - call scatter_global_ext(I_stressm_1, G_stressm_1, master_task, distrb_info) - call scatter_global_ext(I_stressm_2, G_stressm_2, master_task, distrb_info) - call scatter_global_ext(I_stressm_3, G_stressm_3, master_task, distrb_info) - call scatter_global_ext(I_stressm_4, G_stressm_4, master_task, distrb_info) - call scatter_global_ext(I_stress12_1, G_stress12_1, master_task, distrb_info) - call scatter_global_ext(I_stress12_2, G_stress12_2, master_task, distrb_info) - call scatter_global_ext(I_stress12_3, G_stress12_3, master_task, distrb_info) - call scatter_global_ext(I_stress12_4, G_stress12_4, master_task, distrb_info) - call scatter_global_ext(I_divu, G_divu, master_task, distrb_info) - call scatter_global_ext(I_rdg_conv, G_rdg_conv, master_task, distrb_info) - call scatter_global_ext(I_rdg_shear, G_rdg_shear, master_task, distrb_info) - call scatter_global_ext(I_shear, G_shear, master_task, distrb_info) - call scatter_global_ext(I_taubx, G_taubx, master_task, distrb_info) - call scatter_global_ext(I_tauby, G_tauby, master_task, distrb_info) - - end subroutine ice_dyn_evp_1d_copyout - -!======================================================================= - - subroutine ice_dyn_evp_1d_kernel - - use ice_constants, only : c0 - use ice_dyn_shared, only : ndte - use ice_communicate, only : my_task, master_task - - implicit none - - ! local variables - - real(kind=dbl_kind) :: rhow - integer(kind=int_kind) :: ksub - - character(len=*), parameter :: & - subname = '(ice_dyn_evp_1d_kernel)' - - ! all calculations is done on master task - if (my_task == master_task) then - - ! read constants - call icepack_query_parameters(rhow_out = rhow) - call icepack_warnings_flush(nu_diag) - if (icepack_warnings_aborted()) then - call abort_ice(error_message=subname, file=__FILE__, & - line=__LINE__) - end if - - if (ndte < 2) call abort_ice(subname & - // ' ERROR: ndte must be 2 or higher for this kernel') - - ! tcraig, turn off the OMP directives here, Jan, 2022 - ! This produces non bit-for-bit results with different thread counts. - ! Seems like there isn't an opportunity for safe threading here ??? 
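      ! Subcycling layout (resolved through the evp1d_stress/evp1d_stepu
      ! generic interfaces):
      !   ksub = 1, ndte-1 : stress_iter -> stepu_iter -> evp1d_halo_update
      !   ksub = ndte      : stress_last -> stepu_last -> evp1d_halo_update,
      !                      which additionally return divu, rdg_conv,
      !                      rdg_shear, shear, strintx, strinty, taubx and
      !                      tauby for ridging and diagnostics.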
- !$XXXOMP PARALLEL PRIVATE(ksub) - do ksub = 1, ndte - 1 - call evp1d_stress(NA_len, ee, ne, se, 1, NA_len, uvel, & - vvel, dxT, dyT, hte, htn, htem1, htnm1, strength, & - stressp_1, stressp_2, stressp_3, stressp_4, stressm_1, & - stressm_2, stressm_3, stressm_4, stress12_1, & - stress12_2, stress12_3, stress12_4, str1, str2, str3, & - str4, str5, str6, str7, str8, skiptcell) - !$XXXOMP BARRIER - call evp1d_stepu(NA_len, rhow, 1, NA_len, cdn_ocn, aiu, & - uocn, vocn, forcex, forcey, umassdti, fm, uarear, Tbu, & - uvel_init, vvel_init, uvel, vvel, str1, str2, str3, & - str4, str5, str6, str7, str8, nw, sw, sse, skipucell) - !$XXXOMP BARRIER - call evp1d_halo_update(NAVEL_len, 1, NA_len, uvel, vvel, & - halo_parent) - !$XXXOMP BARRIER - end do - - call evp1d_stress(NA_len, ee, ne, se, 1, NA_len, uvel, vvel, & - dxT, dyT, hte, htn, htem1, htnm1, strength, stressp_1, & - stressp_2, stressp_3, stressp_4, stressm_1, stressm_2, & - stressm_3, stressm_4, stress12_1, stress12_2, stress12_3, & - stress12_4, str1, str2, str3, str4, str5, str6, str7, & - str8, skiptcell, tarear, divu, rdg_conv, rdg_shear, shear) - !$XXXOMP BARRIER - call evp1d_stepu(NA_len, rhow, 1, NA_len, cdn_ocn, aiu, uocn, & - vocn, forcex, forcey, umassdti, fm, uarear, Tbu, & - uvel_init, vvel_init, uvel, vvel, str1, str2, str3, str4, & - str5, str6, str7, str8, nw, sw, sse, skipucell, strintx, & - strinty, taubx, tauby) - !$XXXOMP BARRIER - call evp1d_halo_update(NAVEL_len, 1, NA_len, uvel, vvel, & - halo_parent) - !$XXXOMP END PARALLEL - - end if ! master task - - end subroutine ice_dyn_evp_1d_kernel - -!======================================================================= - - subroutine calc_na(nx, ny, na, iceTmask, iceUmask) - ! Calculate number of active points - - use ice_blocks, only : nghost - - implicit none - - integer(kind=int_kind), intent(in) :: nx, ny - logical(kind=log_kind), dimension(nx, ny), intent(in) :: & - iceTmask, iceUmask - integer(kind=int_kind), intent(out) :: na - - ! local variables - - integer(kind=int_kind) :: i, j - - character(len=*), parameter :: subname = '(calc_na)' - - na = 0 - ! NOTE: T mask includes northern and eastern ghost cells - do j = 1 + nghost, ny - do i = 1 + nghost, nx - if (iceTmask(i,j) .or. iceUmask(i,j)) na = na + 1 - end do - end do - - end subroutine calc_na - -!======================================================================= - - subroutine calc_2d_indices(nx, ny, na, iceTmask, iceUmask) - - use ice_blocks, only : nghost - - implicit none - - integer(kind=int_kind), intent(in) :: nx, ny, na - logical(kind=log_kind), dimension(nx, ny), intent(in) :: & - iceTmask, iceUmask - - ! local variables - - integer(kind=int_kind) :: i, j, Nmaskt - - character(len=*), parameter :: subname = '(calc_2d_indices)' - - skipucell(:) = .false. - skiptcell(:) = .false. - indi = 0 - indj = 0 - Nmaskt = 0 - ! NOTE: T mask includes northern and eastern ghost cells - do j = 1 + nghost, ny - do i = 1 + nghost, nx - if (iceTmask(i,j) .or. iceUmask(i,j)) then - Nmaskt = Nmaskt + 1 - indi(Nmaskt) = i - indj(Nmaskt) = j - if (.not. iceTmask(i,j)) skiptcell(Nmaskt) = .true. - if (.not. iceUmask(i,j)) skipucell(Nmaskt) = .true. - ! NOTE: U mask does not include northern and eastern - ! ghost cells. Skip northern and eastern ghost cells - if (i == nx) skipucell(Nmaskt) = .true. - if (j == ny) skipucell(Nmaskt) = .true. 
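            ! Any point flagged skipucell is excluded from the velocity update
            ! in stepu_iter/stepu_last; the northern and eastern ghost cells
            ! forced to .true. here are instead filled by evp1d_halo_update
            ! wherever a halo_parent is defined.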
- end if - end do - end do - - end subroutine calc_2d_indices - -!======================================================================= - - subroutine calc_navel(nx_block, ny_block, na, navel) - ! Calculate number of active points, including halo points - - implicit none - - integer(kind=int_kind), intent(in) :: nx_block, ny_block, na - integer(kind=int_kind), intent(out) :: navel - - ! local variables - - integer(kind=int_kind) :: iw, i, j - integer(kind=int_kind), dimension(1:na) :: Iin, Iee, Ine, Ise, & - Inw, Isw, Isse - integer(kind=int_kind), dimension(1:7 * na) :: util1, util2 - - character(len=*), parameter :: subname = '(calc_navel)' - - ! calculate additional 1D indices used for finite differences - do iw = 1, na - ! get 2D indices - i = indi(iw) - j = indj(iw) - ! calculate 1D indices - Iin(iw) = i + (j - 1) * nx_block ! ( 0, 0) target point - Iee(iw) = i - 1 + (j - 1) * nx_block ! (-1, 0) - Ine(iw) = i - 1 + (j - 2) * nx_block ! (-1, -1) - Ise(iw) = i + (j - 2) * nx_block ! ( 0, -1) - Inw(iw) = i + 1 + (j - 1) * nx_block ! (+1, 0) - Isw(iw) = i + 1 + (j - 0) * nx_block ! (+1, +1) - Isse(iw) = i + (j - 0) * nx_block ! ( 0, +1) - end do - - ! find number of points needed for finite difference calculations - call union(Iin, Iee, na, na, util1, i ) - call union(util1, Ine, i, na, util2, j ) - call union(util2, Ise, j, na, util1, i ) - call union(util1, Inw, i, na, util2, j ) - call union(util2, Isw, j, na, util1, i ) - call union(util1, Isse, i, na, util2, navel) - - end subroutine calc_navel - -!======================================================================= - - subroutine convert_2d_1d(nx, ny, na, navel, I_HTE, I_HTN, & - I_cdn_ocn, I_aiu, I_uocn, I_vocn, I_forcex, I_forcey, I_Tbu, & - I_umassdti, I_fm, I_uarear, I_tarear, I_strintx, I_strinty, & - I_uvel_init, I_vvel_init, I_strength, I_uvel, I_vvel, I_dxT, & - I_dyT, I_stressp_1, I_stressp_2, I_stressp_3, I_stressp_4, & - I_stressm_1, I_stressm_2, I_stressm_3, I_stressm_4, & - I_stress12_1, I_stress12_2, I_stress12_3, I_stress12_4) - - implicit none - - integer(kind=int_kind), intent(in) :: nx, ny, na, navel - real (kind=dbl_kind), dimension(nx, ny), intent(in) :: I_HTE, & - I_HTN, I_cdn_ocn, I_aiu, I_uocn, I_vocn, I_forcex, I_forcey, & - I_Tbu, I_umassdti, I_fm, I_uarear, I_tarear, I_strintx, & - I_strinty, I_uvel_init, I_vvel_init, I_strength, I_uvel, & - I_vvel, I_dxT, I_dyT, I_stressp_1, I_stressp_2, I_stressp_3, & - I_stressp_4, I_stressm_1, I_stressm_2, I_stressm_3, & - I_stressm_4, I_stress12_1, I_stress12_2, I_stress12_3, & - I_stress12_4 - - ! local variables - - integer(kind=int_kind) :: iw, lo, up, j, i, nachk - integer(kind=int_kind), dimension(1:na) :: Iin, Iee, Ine, Ise, & - Inw, Isw, Isse - integer(kind=int_kind), dimension(1:7 * na) :: util1, util2 - - character(len=*), parameter :: subname = '(convert_2d_1d)' - - ! calculate additional 1D indices used for finite differences - do iw = 1, na - ! get 2D indices - i = indi(iw) - j = indj(iw) - ! calculate 1D indices - Iin(iw) = i + (j - 1) * nx ! ( 0, 0) target point - Iee(iw) = i - 1 + (j - 1) * nx ! (-1, 0) - Ine(iw) = i - 1 + (j - 2) * nx ! (-1,-1) - Ise(iw) = i + (j - 2) * nx ! ( 0,-1) - Inw(iw) = i + 1 + (j - 1) * nx ! (+1, 0) - Isw(iw) = i + 1 + (j - 0) * nx ! (+1,+1) - Isse(iw) = i + (j - 0) * nx ! ( 0,+1) - end do - - ! 
find number of points needed for finite difference calculations - call union(Iin, Iee, na, na, util1, i ) - call union(util1, Ine, i, na, util2, j ) - call union(util2, Ise, j, na, util1, i ) - call union(util1, Inw, i, na, util2, j ) - call union(util2, Isw, j, na, util1, i ) - call union(util1, Isse, i, na, util2, nachk) - - ! index vector with sorted target points - do iw = 1, na - indij(iw) = Iin(iw) - end do - - ! sorted additional points - call setdiff(util2, Iin, navel, na, util1, j) - do iw = na + 1, navel - indij(iw) = util1(iw - na) - end do - - ! indices for additional points needed for uvel and vvel - call findXinY(Iee, indij, na, navel, ee) - call findXinY(Ine, indij, na, navel, ne) - call findXinY(Ise, indij, na, navel, se) - call findXinY(Inw, indij, na, navel, nw) - call findXinY(Isw, indij, na, navel, sw) - call findXinY(Isse, indij, na, navel, sse) - - !$OMP PARALLEL PRIVATE(iw, lo, up, j, i) - ! write 1D arrays from 2D arrays (target points) - call domp_get_domain(1, na, lo, up) - do iw = lo, up - ! get 2D indices - i = indi(iw) - j = indj(iw) - ! map - uvel(iw) = I_uvel(i, j) - vvel(iw) = I_vvel(i, j) - cdn_ocn(iw) = I_cdn_ocn(i, j) - aiu(iw) = I_aiu(i, j) - uocn(iw) = I_uocn(i, j) - vocn(iw) = I_vocn(i, j) - forcex(iw) = I_forcex(i, j) - forcey(iw) = I_forcey(i, j) - Tbu(iw) = I_Tbu(i, j) - umassdti(iw) = I_umassdti(i, j) - fm(iw) = I_fm(i, j) - tarear(iw) = I_tarear(i, j) - uarear(iw) = I_uarear(i, j) - strintx(iw) = I_strintx(i, j) - strinty(iw) = I_strinty(i, j) - uvel_init(iw) = I_uvel_init(i, j) - vvel_init(iw) = I_vvel_init(i, j) - strength(iw) = I_strength(i, j) - dxT(iw) = I_dxT(i, j) - dyT(iw) = I_dyT(i, j) - stressp_1(iw) = I_stressp_1(i, j) - stressp_2(iw) = I_stressp_2(i, j) - stressp_3(iw) = I_stressp_3(i, j) - stressp_4(iw) = I_stressp_4(i, j) - stressm_1(iw) = I_stressm_1(i, j) - stressm_2(iw) = I_stressm_2(i, j) - stressm_3(iw) = I_stressm_3(i, j) - stressm_4(iw) = I_stressm_4(i, j) - stress12_1(iw) = I_stress12_1(i, j) - stress12_2(iw) = I_stress12_2(i, j) - stress12_3(iw) = I_stress12_3(i, j) - stress12_4(iw) = I_stress12_4(i, j) - HTE(iw) = I_HTE(i, j) - HTN(iw) = I_HTN(i, j) - HTEm1(iw) = I_HTE(i - 1, j) - HTNm1(iw) = I_HTN(i, j - 1) - end do - ! write 1D arrays from 2D arrays (additional points) - call domp_get_domain(na + 1, navel, lo, up) - do iw = lo, up - ! get 2D indices - j = int((indij(iw) - 1) / (nx)) + 1 - i = indij(iw) - (j - 1) * nx - ! map - uvel(iw) = I_uvel(i, j) - vvel(iw) = I_vvel(i, j) - end do - !$OMP END PARALLEL - - end subroutine convert_2d_1d - -!======================================================================= - - subroutine calc_halo_parent(nx, ny, na, navel, I_iceTmask) - - implicit none - - integer(kind=int_kind), intent(in) :: nx, ny, na, navel - logical(kind=log_kind), dimension(nx, ny), intent(in) :: & - I_iceTmask - - ! local variables - - integer(kind=int_kind) :: iw, i, j - integer(kind=int_kind), dimension(1:navel) :: Ihalo - - character(len=*), parameter :: subname = '(calc_halo_parent)' - - !----------------------------------------------------------------- - ! Indices for halo update: - ! 0: no halo point - ! >0: index for halo point parent, related to indij vector - ! - ! TODO: Implement for nghost > 1 - ! TODO: Implement for tripole grids - !----------------------------------------------------------------- - - Ihalo(:) = 0 - halo_parent(:) = 0 - - do iw = 1, navel - j = int((indij(iw) - 1) / (nx)) + 1 - i = indij(iw) - (j - 1) * nx - ! if within ghost zone - if (i == nx .and. 
I_iceTmask(2, j) ) Ihalo(iw) = 2 + (j - 1) * nx - if (i == 1 .and. I_iceTmask(nx - 1, j) ) Ihalo(iw) = (nx - 1) + (j - 1) * nx - if (j == ny .and. I_iceTmask(i, 2) ) Ihalo(iw) = i + nx - if (j == 1 .and. I_iceTmask(i, ny - 1) ) Ihalo(iw) = i + (ny - 2) * nx - end do - - ! relate halo indices to indij vector - call findXinY_halo(Ihalo, indij, navel, navel, halo_parent) - - end subroutine calc_halo_parent - -!======================================================================= - - subroutine union(x, y, nx, ny, xy, nxy) - ! Find union (xy) of two sorted integer vectors (x and y), i.e. - ! combined values of the two vectors with no repetitions - - implicit none - - integer(int_kind), intent(in) :: nx, ny - integer(int_kind), intent(in) :: x(1:nx), y(1:ny) - integer(int_kind), intent(out) :: xy(1:nx + ny) - integer(int_kind), intent(out) :: nxy - - ! local variables - - integer(int_kind) :: i, j, k - - character(len=*), parameter :: subname = '(union)' - - i = 1 - j = 1 - k = 1 - do while (i <= nx .and. j <= ny) - if (x(i) < y(j)) then - xy(k) = x(i) - i = i + 1 - else if (x(i) > y(j)) then - xy(k) = y(j) - j = j + 1 - else - xy(k) = x(i) - i = i + 1 - j = j + 1 - end if - k = k + 1 - end do - - ! the rest - do while (i <= nx) - xy(k) = x(i) - i = i + 1 - k = k + 1 - end do - do while (j <= ny) - xy(k) = y(j) - j = j + 1 - k = k + 1 - end do - nxy = k - 1 - - end subroutine union - -!======================================================================= - - subroutine setdiff(x, y, nx, ny, xy, nxy) - ! Find element (xy) of two sorted integer vectors (x and y) that - ! are in x, but not in y, or in y, but not in x - - implicit none - - integer(kind=int_kind), intent(in) :: nx, ny - integer(kind=int_kind), intent(in) :: x(1:nx), y(1:ny) - integer(kind=int_kind), intent(out) :: xy(1:nx + ny) - integer(kind=int_kind), intent(out) :: nxy - - ! local variables - - integer(kind=int_kind) :: i, j, k - - character(len=*), parameter :: subname = '(setdiff)' - - i = 1 - j = 1 - k = 1 - do while (i <= nx .and. j <= ny) - if (x(i) < y(j)) then - xy(k) = x(i) - i = i + 1 - k = k + 1 - else if (x(i) > y(j)) then - xy(k) = y(j) - j = j + 1 - k = k + 1 - else - i = i + 1 - j = j + 1 - end if - end do - - ! the rest - do while (i <= nx) - xy(k) = x(i) - i = i + 1 - k = k + 1 - end do - do while (j <= ny) - xy(k) = y(j) - j = j + 1 - k = k + 1 - end do - nxy = k - 1 - - end subroutine setdiff - -!======================================================================== - - subroutine findXinY(x, y, nx, ny, indx) - ! Find indx vector so that x(1:na) = y(indx(1:na)) - ! - ! Conditions: - ! * EVERY item in x is found in y - ! * x(1:nx) is a sorted integer vector - ! * y(1:ny) consists of two sorted integer vectors: - ! [y(1:nx); y(nx + 1:ny)] - ! * ny >= nx - ! - ! Return: indx(1:na) - - implicit none - - integer (kind=int_kind), intent(in) :: nx, ny - integer (kind=int_kind), intent(in) :: x(1:nx), y(1:ny) - integer (kind=int_kind), intent(out) :: indx(1:nx) - - ! 
local variables - - integer (kind=int_kind) :: i, j1, j2 - - character(len=*), parameter :: subname = '(findXinY)' - - i = 1 - j1 = 1 - j2 = nx + 1 - do while (i <= nx) - if (x(i) == y(j1)) then - indx(i) = j1 - i = i + 1 - j1 = j1 + 1 - else if (x(i) == y(j2)) then - indx(i) = j2 - i = i + 1 - j2 = j2 + 1 - else if (x(i) > y(j1)) then - j1 = j1 + 1 - else if (x(i) > y(j2)) then - j2 = j2 + 1 - else - call abort_ice(subname & - // ': ERROR: conditions not met') - end if - end do - - end subroutine findXinY - -!======================================================================= - - subroutine findXinY_halo(x, y, nx, ny, indx) - ! Find indx vector so that x(1:na) = y(indx(1:na)) - ! - ! Conditions: - ! * EVERY item in x is found in y, - ! except for x == 0, where indx = 0 is returned - ! * x(1:nx) is a non-sorted integer vector - ! * y(1:ny) is a sorted integer vector - ! * ny >= nx - ! - ! Return: indx(1:na) - - implicit none - - integer (kind=int_kind), intent(in) :: nx, ny - integer (kind=int_kind), intent(in) :: x(1:nx), y(1:ny) - integer (kind=int_kind), intent(out) :: indx(1:nx) - - ! local variables - - integer (kind=int_kind) :: i, j1, nloop - - character(len=*), parameter :: subname = '(findXinY_halo)' - - nloop = 1 - i = 1 - j1 = int((ny + 1) / 2) ! initial guess in the middle - do while (i <= nx) - if (x(i) == 0) then - indx(i) = 0 - i = i + 1 - nloop = 1 - else if (x(i) == y(j1)) then - indx(i) = j1 - i = i + 1 - j1 = j1 + 1 - ! initial guess in the middle - if (j1 > ny) j1 = int((ny + 1) / 2) - nloop = 1 - else if (x(i) < y(j1)) then - j1 = 1 - else if (x(i) > y(j1)) then - j1 = j1 + 1 - if (j1 > ny) then - j1 = 1 - nloop = nloop + 1 - if (nloop > 2) then - ! stop for infinite loop. This check should not be - ! necessary for halo - call abort_ice(subname // ' ERROR: too many loops') - end if - end if - end if - end do - - end subroutine findXinY_halo - -!======================================================================= - - subroutine numainit(l, u, uu) - - use ice_constants, only : c0 - - implicit none - - integer(kind=int_kind), intent(in) :: l, u, uu - - ! 
local variables - - integer(kind=int_kind) :: lo, up - - character(len=*), parameter :: subname = '(numainit)' - - call domp_get_domain(l, u, lo, up) - ee(lo:up) = 0 - ne(lo:up) = 0 - se(lo:up) = 0 - sse(lo:up) = 0 - nw(lo:up) = 0 - sw(lo:up) = 0 - halo_parent(lo:up) = 0 - strength(lo:up) = c0 - uvel(lo:up) = c0 - vvel(lo:up) = c0 - uvel_init(lo:up) = c0 - vvel_init(lo:up) = c0 - uocn(lo:up) = c0 - vocn(lo:up) = c0 - dxT(lo:up) = c0 - dyT(lo:up) = c0 - HTE(lo:up) = c0 - HTN(lo:up) = c0 - HTEm1(lo:up) = c0 - HTNm1(lo:up) = c0 - stressp_1(lo:up) = c0 - stressp_2(lo:up) = c0 - stressp_3(lo:up) = c0 - stressp_4(lo:up) = c0 - stressm_1(lo:up) = c0 - stressm_2(lo:up) = c0 - stressm_3(lo:up) = c0 - stressm_4(lo:up) = c0 - stress12_1(lo:up) = c0 - stress12_2(lo:up) = c0 - stress12_3(lo:up) = c0 - stress12_4(lo:up) = c0 - tarear(lo:up) = c0 - Tbu(lo:up) = c0 - taubx(lo:up) = c0 - tauby(lo:up) = c0 - divu(lo:up) = c0 - rdg_conv(lo:up) = c0 - rdg_shear(lo:up) = c0 - shear(lo:up) = c0 - str1(lo:up) = c0 - str2(lo:up) = c0 - str3(lo:up) = c0 - str4(lo:up) = c0 - str5(lo:up) = c0 - str6(lo:up) = c0 - str7(lo:up) = c0 - str8(lo:up) = c0 - - call domp_get_domain(u + 1, uu, lo, up) - halo_parent(lo:up) = 0 - uvel(lo:up) = c0 - vvel(lo:up) = c0 - str1(lo:up) = c0 - str2(lo:up) = c0 - str3(lo:up) = c0 - str4(lo:up) = c0 - str5(lo:up) = c0 - str6(lo:up) = c0 - str7(lo:up) = c0 - str8(lo:up) = c0 - - end subroutine numainit - -!======================================================================= - -end module ice_dyn_evp_1d diff --git a/cicecore/cicedyn/dynamics/ice_dyn_shared.F90 b/cicecore/cicedyn/dynamics/ice_dyn_shared.F90 index 69e552730..9dbeaf1a7 100644 --- a/cicecore/cicedyn/dynamics/ice_dyn_shared.F90 +++ b/cicecore/cicedyn/dynamics/ice_dyn_shared.F90 @@ -1451,10 +1451,10 @@ subroutine seabed_stress_factor_prob (nx_block, ny_block, & character(len=*), parameter :: subname = '(seabed_stress_factor_prob)' - call icepack_query_parameters(rhow_out=rhow, rhoi_out=rhoi) - call icepack_query_parameters(gravit_out=gravit) - call icepack_query_parameters(pi_out=pi) - call icepack_query_parameters(puny_out=puny) + call icepack_query_parameters(rhow_out=rhow, rhoi_out=rhoi,gravit_out=gravit,pi_out=pi,puny_out=puny) + call icepack_warnings_flush(nu_diag) + if (icepack_warnings_aborted()) call abort_ice(error_message=subname, & + file=__FILE__, line=__LINE__) Tbt=c0 @@ -2302,16 +2302,15 @@ end subroutine strain_rates_U ! by combining tensile strength and a parameterization for grounded ridges. ! J. Geophys. Res. Oceans, 121, 7354-7368. - subroutine visc_replpress(strength, DminArea, Delta, & - zetax2, etax2, rep_prs, capping) + subroutine visc_replpress(strength, DminArea, Delta, & + zetax2, etax2, rep_prs) real (kind=dbl_kind), intent(in):: & strength, & ! DminArea ! real (kind=dbl_kind), intent(in):: & - Delta , & ! - capping ! + Delta real (kind=dbl_kind), intent(out):: & zetax2 , & ! 
bulk viscosity diff --git a/cicecore/cicedyn/dynamics/ice_dyn_vp.F90 b/cicecore/cicedyn/dynamics/ice_dyn_vp.F90 index 32971c5b6..58589f8d7 100644 --- a/cicecore/cicedyn/dynamics/ice_dyn_vp.F90 +++ b/cicecore/cicedyn/dynamics/ice_dyn_vp.F90 @@ -1221,20 +1221,16 @@ subroutine calc_zeta_dPr (nx_block, ny_block, & call visc_replpress (strength(i,j) , DminTarea(i,j) , & Deltane , zetax2 (i,j,1), & - etax2 (i,j,1), rep_prs (i,j,1), & - capping) + etax2 (i,j,1), rep_prs (i,j,1)) call visc_replpress (strength(i,j) , DminTarea(i,j) , & Deltanw , zetax2 (i,j,2), & - etax2 (i,j,2), rep_prs (i,j,2), & - capping) + etax2 (i,j,2), rep_prs (i,j,2)) call visc_replpress (strength(i,j) , DminTarea(i,j) , & Deltasw , zetax2 (i,j,3), & - etax2 (i,j,3), rep_prs (i,j,3), & - capping) + etax2 (i,j,3), rep_prs (i,j,3)) call visc_replpress (strength(i,j) , DminTarea(i,j) , & Deltase , zetax2 (i,j,4), & - etax2 (i,j,4), rep_prs (i,j,4), & - capping) + etax2 (i,j,4), rep_prs (i,j,4)) !----------------------------------------------------------------- ! the stresses ! kg/s^2 diff --git a/cicecore/cicedyn/dynamics/ice_transport_remap.F90 b/cicecore/cicedyn/dynamics/ice_transport_remap.F90 index ee0a3d083..dd59efc87 100644 --- a/cicecore/cicedyn/dynamics/ice_transport_remap.F90 +++ b/cicecore/cicedyn/dynamics/ice_transport_remap.F90 @@ -647,11 +647,12 @@ subroutine horizontal_remap (dt, ntrace, & endif ! nghost ! tcraig, this OMP loop sometimes fails with cce/14.0.3, compiler bug?? - !$OMP PARALLEL DO PRIVATE(iblk,i,j,ilo,ihi,jlo,jhi,this_block,n, & - !$OMP edgearea_e,edgearea_n,edge,iflux,jflux, & - !$OMP xp,yp,indxing,indxjng,mflxe,mflxn, & - !$OMP mtflxe,mtflxn,triarea,istop,jstop,l_stop) & - !$OMP SCHEDULE(runtime) + ! TILL I can trigger the same with ifort (IFORT) 18.0.0 20170811 +!TILL !$OMP PARALLEL DO PRIVATE(iblk,i,j,ilo,ihi,jlo,jhi,this_block,n, & +!TILL !$OMP edgearea_e,edgearea_n,edge,iflux,jflux, & +!TILL !$OMP xp,yp,indxing,indxjng,mflxe,mflxn, & +!TILL !$OMP mtflxe,mtflxn,triarea,istop,jstop,l_stop) & +!TILL !$OMP SCHEDULE(runtime) do iblk = 1, nblocks l_stop = .false. @@ -865,7 +866,7 @@ subroutine horizontal_remap (dt, ntrace, & enddo ! n enddo ! iblk - !$OMP END PARALLEL DO +!TILL !$OMP END PARALLEL DO end subroutine horizontal_remap diff --git a/cicecore/cicedyn/general/ice_init.F90 b/cicecore/cicedyn/general/ice_init.F90 index 4e1a50f44..75c5a03cf 100644 --- a/cicecore/cicedyn/general/ice_init.F90 +++ b/cicecore/cicedyn/general/ice_init.F90 @@ -105,7 +105,7 @@ subroutine input_data grid_ocn, grid_ocn_thrm, grid_ocn_dynu, grid_ocn_dynv, & grid_atm, grid_atm_thrm, grid_atm_dynu, grid_atm_dynv, & dxrect, dyrect, dxscale, dyscale, scale_dxdy, & - lonrefrect, latrefrect, pgl_global_ext + lonrefrect, latrefrect, save_ghte_ghtn use ice_dyn_shared, only: ndte, kdyn, revised_evp, yield_curve, & evp_algorithm, visc_method, & seabed_stress, seabed_stress_method, & @@ -375,7 +375,7 @@ subroutine input_data ndte = 120 ! subcycles per dynamics timestep: ndte=dt_dyn/dte evp_algorithm = 'standard_2d' ! EVP kernel (standard_2d=standard cice evp; shared_mem_1d=1d shared memory and no mpi elasticDamp = 0.36_dbl_kind ! coefficient for calculating the parameter E - pgl_global_ext = .false. ! if true, init primary grid lengths (global ext.) + save_ghte_ghtn = .false. ! if true, save global hte and htn (global ext.) brlx = 300.0_dbl_kind ! revised_evp values. Otherwise overwritten in ice_dyn_shared arlx = 300.0_dbl_kind ! revised_evp values. Otherwise overwritten in ice_dyn_shared revised_evp = .false. ! 
if true, use revised procedure for evp dynamics @@ -963,7 +963,6 @@ subroutine input_data call broadcast_scalar(ndte, master_task) call broadcast_scalar(evp_algorithm, master_task) call broadcast_scalar(elasticDamp, master_task) - call broadcast_scalar(pgl_global_ext, master_task) call broadcast_scalar(brlx, master_task) call broadcast_scalar(arlx, master_task) call broadcast_scalar(revised_evp, master_task) @@ -1258,6 +1257,10 @@ subroutine input_data abort_list = trim(abort_list)//":5" endif + if (kdyn == 1 .and. evp_algorithm == 'shared_mem_1d') then + save_ghte_ghtn = .true. + endif + if (kdyn == 2 .and. revised_evp) then if (my_task == master_task) then write(nu_diag,*) subname//' WARNING: revised_evp = T with EAP dynamics' @@ -1296,10 +1299,10 @@ subroutine input_data endif if (grid_ice == 'C' .or. grid_ice == 'CD') then - if (kdyn > 1) then + if (kdyn > 1 .or. (kdyn == 1 .and. evp_algorithm /= 'standard_2d')) then if (my_task == master_task) then - write(nu_diag,*) subname//' ERROR: grid_ice = C | CD only supported with kdyn<=1 (evp or off)' - write(nu_diag,*) subname//' ERROR: kdyn and grid_ice inconsistency' + write(nu_diag,*) subname//' ERROR: grid_ice = C | CD only supported with kdyn=1 and evp_algorithm=standard_2d' + write(nu_diag,*) subname//' ERROR: kdyn and/or evp_algorithm and grid_ice inconsistency' endif abort_list = trim(abort_list)//":46" endif @@ -1312,6 +1315,15 @@ subroutine input_data endif endif + if (evp_algorithm == 'shared_mem_1d' .and. & + grid_type == 'tripole') then + if (my_task == master_task) then + write(nu_diag,*) subname//' ERROR: evp_algorithm=shared_mem_1d is not tested for gridtype=tripole' + write(nu_diag,*) subname//' ERROR: change evp_algorithm to standard_2d' + endif + abort_list = trim(abort_list)//":49" + endif + capping = -9.99e30 if (kdyn == 1 .or. kdyn == 3) then if (capping_method == 'max') then @@ -1833,7 +1845,6 @@ subroutine input_data tmpstr2 = ' : standard 2d EVP solver' elseif (evp_algorithm == 'shared_mem_1d') then tmpstr2 = ' : vectorized 1d EVP solver' - pgl_global_ext = .true. else tmpstr2 = ' : unknown value' endif diff --git a/cicecore/cicedyn/infrastructure/comm/mpi/ice_boundary.F90 b/cicecore/cicedyn/infrastructure/comm/mpi/ice_boundary.F90 index 2a7d68c11..a33e050b9 100644 --- a/cicecore/cicedyn/infrastructure/comm/mpi/ice_boundary.F90 +++ b/cicecore/cicedyn/infrastructure/comm/mpi/ice_boundary.F90 @@ -113,8 +113,7 @@ module ice_boundary ice_HaloUpdate, & ice_HaloUpdate_stress, & ice_HaloExtrapolate, & - ice_HaloDestroy, & - primary_grid_lengths_global_ext + ice_HaloDestroy interface ice_HaloUpdate ! generic interface module procedure ice_HaloUpdate2DR8, & @@ -7164,134 +7163,8 @@ subroutine ice_HaloDestroy(halo) call abort_ice(subname,' ERROR: deallocating') return endif -end subroutine ice_HaloDestroy - -!*********************************************************************** - - subroutine primary_grid_lengths_global_ext( & - ARRAY_O, ARRAY_I, ew_boundary_type, ns_boundary_type) - -! This subroutine adds ghost cells to global primary grid lengths array -! ARRAY_I and outputs result to array ARRAY_O - - use ice_constants, only: c0 - use ice_domain_size, only: nx_global, ny_global - - real (kind=dbl_kind), dimension(:,:), intent(in) :: & - ARRAY_I - - character (*), intent(in) :: & - ew_boundary_type, ns_boundary_type - - real (kind=dbl_kind), dimension(:,:), intent(out) :: & - ARRAY_O - -!----------------------------------------------------------------------- -! -! local variables -! 
-!----------------------------------------------------------------------- - - integer (kind=int_kind) :: & - ii, io, ji, jo - - character(len=*), parameter :: & - subname = '(primary_grid_lengths_global_ext)' - -!----------------------------------------------------------------------- -! -! add ghost cells to global primary grid lengths array -! -!----------------------------------------------------------------------- - - if (trim(ns_boundary_type) == 'tripole' .or. & - trim(ns_boundary_type) == 'tripoleT') then - call abort_ice(subname//' ERROR: '//ns_boundary_type & - //' boundary type not implemented for configuration') - endif - - do jo = 1,ny_global+2*nghost - ji = -nghost + jo - - !*** Southern ghost cells - - if (ji < 1) then - select case (trim(ns_boundary_type)) - case ('cyclic') - ji = ji + ny_global - case ('open') - ji = nghost - jo + 1 - case ('closed') - ji = 0 - case default - call abort_ice( & - subname//' ERROR: unknown north-south boundary type') - end select - endif - !*** Northern ghost cells - - if (ji > ny_global) then - select case (trim(ns_boundary_type)) - case ('cyclic') - ji = ji - ny_global - case ('open') - ji = 2 * ny_global - ji + 1 - case ('closed') - ji = 0 - case default - call abort_ice( & - subname//' ERROR: unknown north-south boundary type') - end select - endif - - do io = 1,nx_global+2*nghost - ii = -nghost + io - - !*** Western ghost cells - - if (ii < 1) then - select case (trim(ew_boundary_type)) - case ('cyclic') - ii = ii + nx_global - case ('open') - ii = nghost - io + 1 - case ('closed') - ii = 0 - case default - call abort_ice( & - subname//' ERROR: unknown east-west boundary type') - end select - endif - - !*** Eastern ghost cells - - if (ii > nx_global) then - select case (trim(ew_boundary_type)) - case ('cyclic') - ii = ii - nx_global - case ('open') - ii = 2 * nx_global - ii + 1 - case ('closed') - ii = 0 - case default - call abort_ice( & - subname//' ERROR: unknown east-west boundary type') - end select - endif - - if (ii == 0 .or. ji == 0) then - ARRAY_O(io, jo) = c0 - else - ARRAY_O(io, jo) = ARRAY_I(ii, ji) - endif - - enddo - enddo - -!----------------------------------------------------------------------- - - end subroutine primary_grid_lengths_global_ext +end subroutine ice_HaloDestroy !*********************************************************************** diff --git a/cicecore/cicedyn/infrastructure/comm/mpi/ice_timers.F90 b/cicecore/cicedyn/infrastructure/comm/mpi/ice_timers.F90 index baab6f49b..23968f39a 100644 --- a/cicecore/cicedyn/infrastructure/comm/mpi/ice_timers.F90 +++ b/cicecore/cicedyn/infrastructure/comm/mpi/ice_timers.F90 @@ -65,8 +65,8 @@ module ice_timers timer_bundbound, &! boundary updates bundling timer_bgc, &! biogeochemistry timer_forcing, &! forcing - timer_evp_1d, &! timer only loop - timer_evp_2d, &! timer including conversion 1d/2d + timer_evp1dcore, &! timer only loop + timer_evp, &! timer including conversion 1d/2d timer_updstate ! update state ! timer_updstate, &! update state ! timer_tmp1, &! 
for temporary timings @@ -177,34 +177,34 @@ subroutine init_ice_timers nullify(all_timers(n)%block_accum_time) end do - call get_ice_timer(timer_total, 'Total', nblocks,distrb_info%nprocs) - call get_ice_timer(timer_step, 'TimeLoop', nblocks,distrb_info%nprocs) - call get_ice_timer(timer_dynamics, 'Dynamics', nblocks,distrb_info%nprocs) - call get_ice_timer(timer_advect, 'Advection',nblocks,distrb_info%nprocs) - call get_ice_timer(timer_column, 'Column', nblocks,distrb_info%nprocs) - call get_ice_timer(timer_thermo, 'Thermo', nblocks,distrb_info%nprocs) - call get_ice_timer(timer_sw, 'Shortwave',nblocks,distrb_info%nprocs) + call get_ice_timer(timer_total , 'Total' ,nblocks,distrb_info%nprocs) + call get_ice_timer(timer_step , 'TimeLoop' ,nblocks,distrb_info%nprocs) + call get_ice_timer(timer_dynamics , 'Dynamics' ,nblocks,distrb_info%nprocs) + call get_ice_timer(timer_advect , 'Advection' ,nblocks,distrb_info%nprocs) + call get_ice_timer(timer_column , 'Column' ,nblocks,distrb_info%nprocs) + call get_ice_timer(timer_thermo , 'Thermo' ,nblocks,distrb_info%nprocs) + call get_ice_timer(timer_sw , 'Shortwave' ,nblocks,distrb_info%nprocs) ! call get_ice_timer(timer_ponds, 'Meltponds',nblocks,distrb_info%nprocs) - call get_ice_timer(timer_ridge, 'Ridging', nblocks,distrb_info%nprocs) + call get_ice_timer(timer_ridge , 'Ridging' ,nblocks,distrb_info%nprocs) ! call get_ice_timer(timer_catconv, 'Cat Conv', nblocks,distrb_info%nprocs) - call get_ice_timer(timer_fsd, 'FloeSize', nblocks,distrb_info%nprocs) - call get_ice_timer(timer_couple, 'Coupling', nblocks,distrb_info%nprocs) - call get_ice_timer(timer_readwrite,'ReadWrite',nblocks,distrb_info%nprocs) - call get_ice_timer(timer_diags, 'Diags ',nblocks,distrb_info%nprocs) - call get_ice_timer(timer_hist, 'History ',nblocks,distrb_info%nprocs) - call get_ice_timer(timer_bound, 'Bound', nblocks,distrb_info%nprocs) - call get_ice_timer(timer_bundbound,'Bundbound',nblocks,distrb_info%nprocs) - call get_ice_timer(timer_bgc, 'BGC', nblocks,distrb_info%nprocs) - call get_ice_timer(timer_forcing, 'Forcing', nblocks,distrb_info%nprocs) + call get_ice_timer(timer_fsd , 'FloeSize' ,nblocks,distrb_info%nprocs) + call get_ice_timer(timer_couple , 'Coupling' ,nblocks,distrb_info%nprocs) + call get_ice_timer(timer_readwrite , 'ReadWrite' ,nblocks,distrb_info%nprocs) + call get_ice_timer(timer_diags , 'Diags ' ,nblocks,distrb_info%nprocs) + call get_ice_timer(timer_hist , 'History ' ,nblocks,distrb_info%nprocs) + call get_ice_timer(timer_bound , 'Bound' ,nblocks,distrb_info%nprocs) + call get_ice_timer(timer_bundbound , 'Bundbound' ,nblocks,distrb_info%nprocs) + call get_ice_timer(timer_bgc , 'BGC' ,nblocks,distrb_info%nprocs) + call get_ice_timer(timer_forcing , 'Forcing' ,nblocks,distrb_info%nprocs) #if (defined CESMCOUPLED) - call get_ice_timer(timer_cplrecv, 'Cpl-recv', nblocks,distrb_info%nprocs) - call get_ice_timer(timer_rcvsnd, 'Rcv->Snd', nblocks,distrb_info%nprocs) - call get_ice_timer(timer_cplsend, 'Cpl-Send', nblocks,distrb_info%nprocs) - call get_ice_timer(timer_sndrcv, 'Snd->Rcv', nblocks,distrb_info%nprocs) + call get_ice_timer(timer_cplrecv , 'Cpl-recv' ,nblocks,distrb_info%nprocs) + call get_ice_timer(timer_rcvsnd , 'Rcv->Snd' ,nblocks,distrb_info%nprocs) + call get_ice_timer(timer_cplsend , 'Cpl-Send' ,nblocks,distrb_info%nprocs) + call get_ice_timer(timer_sndrcv , 'Snd->Rcv' ,nblocks,distrb_info%nprocs) #endif - call get_ice_timer(timer_evp_1d, '1d-evp', nblocks,distrb_info%nprocs) - call get_ice_timer(timer_evp_2d, '2d-evp', 
nblocks,distrb_info%nprocs) - call get_ice_timer(timer_updstate, 'UpdState', nblocks,distrb_info%nprocs) + call get_ice_timer(timer_evp1dcore , 'evp1dcore' ,nblocks,distrb_info%nprocs) + call get_ice_timer(timer_evp , 'evp' ,nblocks,distrb_info%nprocs) + call get_ice_timer(timer_updstate , 'UpdState' ,nblocks,distrb_info%nprocs) ! call get_ice_timer(timer_tmp1, 'tmp1', nblocks,distrb_info%nprocs) ! call get_ice_timer(timer_tmp2, 'tmp2', nblocks,distrb_info%nprocs) ! call get_ice_timer(timer_tmp3, 'tmp3', nblocks,distrb_info%nprocs) diff --git a/cicecore/cicedyn/infrastructure/comm/serial/ice_boundary.F90 b/cicecore/cicedyn/infrastructure/comm/serial/ice_boundary.F90 index faeaf3227..b9ac8fe33 100644 --- a/cicecore/cicedyn/infrastructure/comm/serial/ice_boundary.F90 +++ b/cicecore/cicedyn/infrastructure/comm/serial/ice_boundary.F90 @@ -68,8 +68,7 @@ module ice_boundary ice_HaloUpdate, & ice_HaloUpdate_stress, & ice_HaloExtrapolate, & - ice_HaloDestroy, & - primary_grid_lengths_global_ext + ice_HaloDestroy interface ice_HaloUpdate ! generic interface module procedure ice_HaloUpdate2DR8, & @@ -4912,133 +4911,6 @@ subroutine ice_HaloDestroy(halo) end subroutine ice_HaloDestroy -!*********************************************************************** - - subroutine primary_grid_lengths_global_ext( & - ARRAY_O, ARRAY_I, ew_boundary_type, ns_boundary_type) - -! This subroutine adds ghost cells to global primary grid lengths array -! ARRAY_I and outputs result to array ARRAY_O - - use ice_constants, only: c0 - use ice_domain_size, only: nx_global, ny_global - - real (kind=dbl_kind), dimension(:,:), intent(in) :: & - ARRAY_I - - character (*), intent(in) :: & - ew_boundary_type, ns_boundary_type - - real (kind=dbl_kind), dimension(:,:), intent(out) :: & - ARRAY_O - -!----------------------------------------------------------------------- -! -! local variables -! -!----------------------------------------------------------------------- - - integer (kind=int_kind) :: & - ii, io, ji, jo - - character(len=*), parameter :: & - subname = '(primary_grid_lengths_global_ext)' - -!----------------------------------------------------------------------- -! -! add ghost cells to global primary grid lengths array -! -!----------------------------------------------------------------------- - - if (trim(ns_boundary_type) == 'tripole' .or. 
& - trim(ns_boundary_type) == 'tripoleT') then - call abort_ice(subname//' ERROR: '//ns_boundary_type & - //' boundary type not implemented for configuration') - endif - - do jo = 1,ny_global+2*nghost - ji = -nghost + jo - - !*** Southern ghost cells - - if (ji < 1) then - select case (trim(ns_boundary_type)) - case ('cyclic') - ji = ji + ny_global - case ('open') - ji = nghost - jo + 1 - case ('closed') - ji = 0 - case default - call abort_ice( & - subname//' ERROR: unknown north-south boundary type') - end select - endif - - !*** Northern ghost cells - - if (ji > ny_global) then - select case (trim(ns_boundary_type)) - case ('cyclic') - ji = ji - ny_global - case ('open') - ji = 2 * ny_global - ji + 1 - case ('closed') - ji = 0 - case default - call abort_ice( & - subname//' ERROR: unknown north-south boundary type') - end select - endif - - do io = 1,nx_global+2*nghost - ii = -nghost + io - - !*** Western ghost cells - - if (ii < 1) then - select case (trim(ew_boundary_type)) - case ('cyclic') - ii = ii + nx_global - case ('open') - ii = nghost - io + 1 - case ('closed') - ii = 0 - case default - call abort_ice( & - subname//' ERROR: unknown east-west boundary type') - end select - endif - - !*** Eastern ghost cells - - if (ii > nx_global) then - select case (trim(ew_boundary_type)) - case ('cyclic') - ii = ii - nx_global - case ('open') - ii = 2 * nx_global - ii + 1 - case ('closed') - ii = 0 - case default - call abort_ice( & - subname//' ERROR: unknown east-west boundary type') - end select - endif - - if (ii == 0 .or. ji == 0) then - ARRAY_O(io, jo) = c0 - else - ARRAY_O(io, jo) = ARRAY_I(ii, ji) - endif - - enddo - enddo - -!----------------------------------------------------------------------- - - end subroutine primary_grid_lengths_global_ext - !*********************************************************************** end module ice_boundary diff --git a/cicecore/cicedyn/infrastructure/comm/serial/ice_timers.F90 b/cicecore/cicedyn/infrastructure/comm/serial/ice_timers.F90 index bbe2fd4d1..690030201 100644 --- a/cicecore/cicedyn/infrastructure/comm/serial/ice_timers.F90 +++ b/cicecore/cicedyn/infrastructure/comm/serial/ice_timers.F90 @@ -37,28 +37,28 @@ module ice_timers ! public timers !----------------------------------------------------------------------- - integer (int_kind), public :: & - timer_total, &! total time - timer_step, &! time stepping - timer_dynamics, &! dynamics - timer_advect, &! horizontal advection - timer_column, &! column - timer_thermo, &! thermodynamics - timer_sw, &! radiative transfer - timer_ponds, &! melt ponds - timer_ridge, &! ridging - timer_catconv, &! category conversions - timer_fsd, &! floe size distribution - timer_couple, &! coupling - timer_readwrite, &! read/write - timer_diags, &! diagnostics/history - timer_hist, &! diagnostics/history - timer_bound, &! boundary updates - timer_bundbound, &! boundary updates - timer_bgc, &! biogeochemistry - timer_forcing, &! forcing - timer_evp_1d, &! timer only loop - timer_evp_2d, &! timer including conversion 1d/2d + integer (int_kind), public :: & + timer_total , &! total time + timer_step , &! time stepping + timer_dynamics , &! dynamics + timer_advect , &! horizontal advection + timer_column , &! column + timer_thermo , &! thermodynamics + timer_sw , &! radiative transfer + timer_ponds , &! melt ponds + timer_ridge , &! ridging + timer_catconv , &! category conversions + timer_fsd , &! floe size distribution + timer_couple , &! coupling + timer_readwrite , &! read/write + timer_diags , &! 
diagnostics/history + timer_hist , &! diagnostics/history + timer_bound , &! boundary updates + timer_bundbound , &! boundary updates + timer_bgc , &! biogeochemistry + timer_forcing , &! forcing + timer_evp1dcore , &! timer only loop + timer_evp , &! timer including conversion 1d/2d timer_updstate ! update state ! timer_updstate, &! update state ! timer_tmp1, &! for temporary timings @@ -191,28 +191,28 @@ subroutine init_ice_timers nullify(all_timers(n)%block_accum_time) end do - call get_ice_timer(timer_total, 'Total', nblocks,distrb_info%nprocs) - call get_ice_timer(timer_step, 'TimeLoop', nblocks,distrb_info%nprocs) - call get_ice_timer(timer_dynamics, 'Dynamics', nblocks,distrb_info%nprocs) - call get_ice_timer(timer_advect, 'Advection',nblocks,distrb_info%nprocs) - call get_ice_timer(timer_column, 'Column', nblocks,distrb_info%nprocs) - call get_ice_timer(timer_thermo, 'Thermo', nblocks,distrb_info%nprocs) - call get_ice_timer(timer_sw, 'Shortwave',nblocks,distrb_info%nprocs) + call get_ice_timer(timer_total , 'Total' ,nblocks,distrb_info%nprocs) + call get_ice_timer(timer_step , 'TimeLoop' ,nblocks,distrb_info%nprocs) + call get_ice_timer(timer_dynamics , 'Dynamics' ,nblocks,distrb_info%nprocs) + call get_ice_timer(timer_advect , 'Advection' ,nblocks,distrb_info%nprocs) + call get_ice_timer(timer_column , 'Column' ,nblocks,distrb_info%nprocs) + call get_ice_timer(timer_thermo , 'Thermo' ,nblocks,distrb_info%nprocs) + call get_ice_timer(timer_sw , 'Shortwave' ,nblocks,distrb_info%nprocs) ! call get_ice_timer(timer_ponds, 'Meltponds',nblocks,distrb_info%nprocs) - call get_ice_timer(timer_ridge, 'Ridging', nblocks,distrb_info%nprocs) + call get_ice_timer(timer_ridge , 'Ridging' ,nblocks,distrb_info%nprocs) ! call get_ice_timer(timer_catconv, 'Cat Conv', nblocks,distrb_info%nprocs) - call get_ice_timer(timer_fsd, 'FloeSize', nblocks,distrb_info%nprocs) - call get_ice_timer(timer_couple, 'Coupling', nblocks,distrb_info%nprocs) - call get_ice_timer(timer_readwrite,'ReadWrite',nblocks,distrb_info%nprocs) - call get_ice_timer(timer_diags, 'Diags ',nblocks,distrb_info%nprocs) - call get_ice_timer(timer_hist, 'History ',nblocks,distrb_info%nprocs) - call get_ice_timer(timer_bound, 'Bound', nblocks,distrb_info%nprocs) - call get_ice_timer(timer_bundbound,'Bundbound',nblocks,distrb_info%nprocs) - call get_ice_timer(timer_bgc, 'BGC', nblocks,distrb_info%nprocs) - call get_ice_timer(timer_forcing, 'Forcing', nblocks,distrb_info%nprocs) - call get_ice_timer(timer_evp_1d, '1d-evp', nblocks,distrb_info%nprocs) - call get_ice_timer(timer_evp_2d, '2d-evp', nblocks,distrb_info%nprocs) - call get_ice_timer(timer_updstate, 'UpdState', nblocks,distrb_info%nprocs) + call get_ice_timer(timer_fsd , 'FloeSize' ,nblocks,distrb_info%nprocs) + call get_ice_timer(timer_couple , 'Coupling' ,nblocks,distrb_info%nprocs) + call get_ice_timer(timer_readwrite , 'ReadWrite' ,nblocks,distrb_info%nprocs) + call get_ice_timer(timer_diags , 'Diags ' ,nblocks,distrb_info%nprocs) + call get_ice_timer(timer_hist , 'History ' ,nblocks,distrb_info%nprocs) + call get_ice_timer(timer_bound , 'Bound' ,nblocks,distrb_info%nprocs) + call get_ice_timer(timer_bundbound , 'Bundbound' ,nblocks,distrb_info%nprocs) + call get_ice_timer(timer_bgc , 'BGC' ,nblocks,distrb_info%nprocs) + call get_ice_timer(timer_forcing , 'Forcing' ,nblocks,distrb_info%nprocs) + call get_ice_timer(timer_evp1dcore , 'evp1dcore' ,nblocks,distrb_info%nprocs) + call get_ice_timer(timer_evp , 'evp' ,nblocks,distrb_info%nprocs) + call get_ice_timer(timer_updstate , 
'UpdState' ,nblocks,distrb_info%nprocs) ! call get_ice_timer(timer_tmp1, 'tmp1', nblocks,distrb_info%nprocs) ! call get_ice_timer(timer_tmp2, 'tmp2', nblocks,distrb_info%nprocs) ! call get_ice_timer(timer_tmp3, 'tmp3', nblocks,distrb_info%nprocs) diff --git a/cicecore/cicedyn/infrastructure/ice_grid.F90 b/cicecore/cicedyn/infrastructure/ice_grid.F90 index 5473ebeae..ef2db8a11 100644 --- a/cicecore/cicedyn/infrastructure/ice_grid.F90 +++ b/cicecore/cicedyn/infrastructure/ice_grid.F90 @@ -24,13 +24,17 @@ module ice_grid use ice_kinds_mod use ice_broadcast, only: broadcast_scalar, broadcast_array - use ice_boundary, only: ice_HaloUpdate, ice_HaloExtrapolate, & - primary_grid_lengths_global_ext + use ice_boundary, only: ice_HaloUpdate, ice_HaloExtrapolate + use ice_constants, only: c0, c1, c1p5, c2, c4, c20, c360, & + p5, p25, radius, cm_to_m, m_to_cm, & + field_loc_center, field_loc_NEcorner, field_loc_Nface, field_loc_Eface, & + field_type_scalar, field_type_vector, field_type_angle use ice_communicate, only: my_task, master_task use ice_blocks, only: block, get_block, nx_block, ny_block, nghost use ice_domain_size, only: nx_global, ny_global, max_blocks use ice_domain, only: blocks_ice, nblocks, halo_info, distrb_info, & - ew_boundary_type, ns_boundary_type, init_domain_distribution + ew_boundary_type, ns_boundary_type, init_domain_distribution, & + close_boundaries use ice_fileunits, only: nu_diag, nu_grid, nu_kmt, & get_fileunit, release_fileunit, flush_fileunit use ice_gather_scatter, only: gather_global, scatter_global @@ -44,8 +48,9 @@ module ice_grid implicit none private - public :: init_grid1, init_grid2, grid_average_X2Y, & - alloc_grid, makemask, grid_neighbor_min, grid_neighbor_max + public :: init_grid1, init_grid2, grid_average_X2Y, makemask, & + alloc_grid, dealloc_grid, & + grid_neighbor_min, grid_neighbor_max character (len=char_len_long), public :: & grid_format , & ! file format ('bin'=binary or 'nc'=netcdf) @@ -180,7 +185,7 @@ module ice_grid logical (kind=log_kind), public :: & use_bathymetry, & ! flag for reading in bathymetry_file - pgl_global_ext, & ! flag for init primary grid lengths (global ext.) + save_ghte_ghtn, & ! flag for saving global hte and htn during initialization scale_dxdy ! flag to apply scale factor to vary dx/dy in rectgrid logical (kind=log_kind), dimension (:,:,:), allocatable, public :: & @@ -288,7 +293,7 @@ subroutine alloc_grid mse (2,2,nx_block,ny_block,max_blocks), & msw (2,2,nx_block,ny_block,max_blocks), & stat=ierr) - if (ierr/=0) call abort_ice(subname//'ERROR: Out of memory') + if (ierr/=0) call abort_ice(subname//'ERROR: Out of memory1') if (grid_ice == 'CD' .or. grid_ice == 'C') then allocate( & @@ -297,21 +302,46 @@ subroutine alloc_grid ratiodxNr(nx_block,ny_block,max_blocks), & ratiodyEr(nx_block,ny_block,max_blocks), & stat=ierr) - if (ierr/=0) call abort_ice(subname//'ERROR: Out of memory') + if (ierr/=0) call abort_ice(subname//'ERROR: Out of memory2') endif - if (pgl_global_ext) then - allocate( & - G_HTE(nx_global+2*nghost, ny_global+2*nghost), & ! length of eastern edge of T-cell (global ext.) - G_HTN(nx_global+2*nghost, ny_global+2*nghost), & ! length of northern edge of T-cell (global ext.) - stat=ierr) - if (ierr/=0) call abort_ice(subname//'ERROR: Out of memory') + if (save_ghte_ghtn) then + if (my_task == master_task) then + allocate( & + G_HTE(nx_global+2*nghost, ny_global+2*nghost), & ! length of eastern edge of T-cell (global ext.) + G_HTN(nx_global+2*nghost, ny_global+2*nghost), & ! 
length of northern edge of T-cell (global ext.) + stat=ierr) + else + allocate( & + G_HTE(1,1), & ! needed for debug checks + G_HTN(1,1), & ! never used in code + stat=ierr) + endif + if (ierr/=0) call abort_ice(subname//'ERROR: Out of memory3') endif end subroutine alloc_grid !======================================================================= +! +! DeAllocate space for variables no longer needed after initialization +! + subroutine dealloc_grid + + integer (int_kind) :: ierr + + character(len=*), parameter :: subname = '(dealloc_grid)' + + if (save_ghte_ghtn) then + deallocate(G_HTE, G_HTN, stat=ierr) + if (ierr/=0) call abort_ice(subname//'ERROR: Dealloc error1') + endif + + end subroutine dealloc_grid + +!======================================================================= + ! Distribute blocks across processors. The distribution is optimized ! based on latitude and topography, contained in the ULAT and KMT arrays. ! @@ -319,10 +349,6 @@ end subroutine alloc_grid subroutine init_grid1 - use ice_blocks, only: nx_block, ny_block - use ice_broadcast, only: broadcast_array - use ice_constants, only: c1 - integer (kind=int_kind) :: & fid_grid, & ! file id for netCDF grid file fid_kmt ! file id for netCDF kmt file @@ -445,11 +471,6 @@ end subroutine init_grid1 subroutine init_grid2 - use ice_blocks, only: get_block, block, nx_block, ny_block - use ice_constants, only: c0, c1, c2, p5, p25, c1p5, & - field_loc_center, field_loc_NEcorner, field_loc_Nface, field_loc_Eface, & - field_type_scalar, field_type_vector, field_type_angle - use ice_domain_size, only: max_blocks #if defined (_OPENMP) use OMP_LIB #endif @@ -800,12 +821,6 @@ end subroutine init_grid2 subroutine popgrid - use ice_blocks, only: nx_block, ny_block - use ice_constants, only: c0, c1, p5, & - field_loc_center, field_loc_NEcorner, & - field_type_scalar, field_type_angle - use ice_domain_size, only: max_blocks - integer (kind=int_kind) :: & i, j, iblk, & ilo,ihi,jlo,jhi ! beginning and end of physical domain @@ -919,11 +934,6 @@ end subroutine popgrid subroutine popgrid_nc - use ice_blocks, only: nx_block, ny_block - use ice_constants, only: c0, c1, & - field_loc_center, field_loc_NEcorner, & - field_type_scalar, field_type_angle - use ice_domain_size, only: max_blocks #ifdef USE_NETCDF use netcdf #endif @@ -1090,11 +1100,7 @@ end subroutine popgrid_nc subroutine latlongrid -! use ice_boundary - use ice_domain_size use ice_scam, only : scmlat, scmlon, single_column - use ice_constants, only: c0, c1, p5, p25, & - field_loc_center, field_type_scalar, radius #ifdef USE_NETCDF use netcdf #endif @@ -1374,10 +1380,6 @@ end subroutine latlongrid subroutine rectgrid - use ice_constants, only: c0, c1, c2, radius, cm_to_m, & - field_loc_center, field_loc_NEcorner, field_type_scalar - use ice_domain, only: close_boundaries - integer (kind=int_kind) :: & i, j, & imid, jmid @@ -1573,8 +1575,6 @@ subroutine rectgrid_scale_dxdy ! generate a variable spaced rectangluar grid. ! extend spacing from center of grid outward. 
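!-----------------------------------------------------------------------
! Editor's sketch (illustrative only, not part of the patch): the
! alloc_grid/dealloc_grid changes above allocate the full global
! G_HTE/G_HTN arrays only on the master task, give the other tasks 1x1
! dummies, and free everything once initialization is done.  A minimal
! standalone analogue of that pattern; all names here are hypothetical.
!-----------------------------------------------------------------------
program demo_master_only_alloc
   implicit none
   integer, parameter :: nx_global = 100, ny_global = 80, nghost = 1
   logical :: is_master
   real, allocatable :: g_fld(:,:)
   integer :: ierr

   is_master = .true.                  ! stand-in for (my_task == master_task)
   if (is_master) then
      allocate(g_fld(nx_global+2*nghost, ny_global+2*nghost), stat=ierr)
   else
      allocate(g_fld(1,1), stat=ierr)  ! tiny dummy so the array is always defined
   end if
   if (ierr /= 0) stop 'allocation failed'

   ! ... fill and use g_fld during initialization only ...

   deallocate(g_fld, stat=ierr)        ! mirrors dealloc_grid after init
   if (ierr /= 0) stop 'deallocation failed'
end program demo_master_only_alloc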
- use ice_constants, only: c0, c1, c2, radius, cm_to_m, & - field_loc_center, field_loc_NEcorner, field_type_scalar integer (kind=int_kind) :: & i, j, iblk, & @@ -1738,8 +1738,6 @@ end subroutine rectgrid_scale_dxdy subroutine grid_boxislands_kmt (work) - use ice_constants, only: c0, c1, c20 - real (kind=dbl_kind), dimension(:,:), intent(inout) :: work integer (kind=int_kind) :: & @@ -1873,11 +1871,6 @@ end subroutine grid_boxislands_kmt subroutine cpomgrid - use ice_blocks, only: nx_block, ny_block - use ice_constants, only: c0, c1, m_to_cm, & - field_loc_NEcorner, field_type_scalar - use ice_domain_size, only: max_blocks - integer (kind=int_kind) :: & i, j, iblk, & ilo,ihi,jlo,jhi ! beginning and end of physical domain @@ -1979,10 +1972,6 @@ end subroutine cpomgrid subroutine primary_grid_lengths_HTN(work_g) - use ice_constants, only: p25, p5, c2, cm_to_m, & - field_loc_center, field_loc_NEcorner, & - field_loc_Nface, field_type_scalar - real (kind=dbl_kind), dimension(:,:) :: work_g ! global array holding HTN ! local variables @@ -2018,10 +2007,14 @@ subroutine primary_grid_lengths_HTN(work_g) work_g2(i,j) = p5*(work_g(i,j) + work_g(ip1,j)) ! dxU enddo enddo - endif - if (pgl_global_ext) then - call primary_grid_lengths_global_ext( & - G_HTN, work_g, ew_boundary_type, ns_boundary_type) + if (save_ghte_ghtn) then + do j = 1, ny_global + do i = 1,nx_global + G_HTN(i+nghost,j+nghost) = work_g(i,j) + enddo + enddo + call global_ext_halo(G_HTN) + endif endif call scatter_global(HTN, work_g, master_task, distrb_info, & field_loc_Nface, field_type_scalar) @@ -2084,10 +2077,6 @@ end subroutine primary_grid_lengths_HTN subroutine primary_grid_lengths_HTE(work_g) - use ice_constants, only: p25, p5, c2, cm_to_m, & - field_loc_center, field_loc_NEcorner, & - field_loc_Eface, field_type_scalar - real (kind=dbl_kind), dimension(:,:) :: work_g ! global array holding HTE ! local variables @@ -2126,10 +2115,14 @@ subroutine primary_grid_lengths_HTE(work_g) work_g2(i,ny_global) = c2*work_g(i,ny_global-1) - work_g(i,ny_global-2) ! dyU enddo endif - endif - if (pgl_global_ext) then - call primary_grid_lengths_global_ext( & - G_HTE, work_g, ew_boundary_type, ns_boundary_type) + if (save_ghte_ghtn) then + do j = 1, ny_global + do i = 1, nx_global + G_HTE(i+nghost,j+nghost) = work_g(i,j) + enddo + enddo + call global_ext_halo(G_HTE) + endif endif call scatter_global(HTE, work_g, master_task, distrb_info, & field_loc_Eface, field_type_scalar) @@ -2186,6 +2179,48 @@ end subroutine primary_grid_lengths_HTE !======================================================================= +! This subroutine fills ghost cells in global extended grid + + subroutine global_ext_halo(array) + + real (kind=dbl_kind), dimension(:,:), intent(inout) :: & + array ! extended global grid size nx+2*nghost, ny+2*nghost + ! 
nghost+1:nghost+nx_global and nghost+1:nghost+ny_global filled on entry + + integer (kind=int_kind) :: n + + character(len=*), parameter :: subname = '(global_ext_halo)' + + do n = 1,nghost + if (ns_boundary_type =='cyclic') then + array(:,n) = array(:,ny_global+n) + array(:,ny_global+nghost+n) = array(:,nghost+n) + elseif (ns_boundary_type == 'open') then + array(:,n) = array(:,nghost+1) + array(:,ny_global+nghost+n) = array(:,ny_global+nghost) + else + array(:,n) = c0 + array(:,ny_global+nghost+n) = c0 + endif + enddo + + do n = 1,nghost + if (ew_boundary_type =='cyclic') then + array(n ,:) = array(nx_global+n,:) + array(nx_global+nghost+n,:) = array(nghost+n ,:) + elseif (ew_boundary_type == 'open') then + array(n ,:) = array(nghost+1 ,:) + array(nx_global+nghost+n,:) = array(nx_global+nghost,:) + else + array(n ,:) = c0 + array(nx_global+nghost+n,:) = c0 + endif + enddo + + end subroutine global_ext_halo + +!======================================================================= + ! Sets the boundary values for the T cell land mask (hm) and ! makes the logical land masks for T and U cells (tmask, umask) ! and N and E cells (nmask, emask). @@ -2195,10 +2230,6 @@ end subroutine primary_grid_lengths_HTE subroutine makemask - use ice_constants, only: c0, p5, c1p5, & - field_loc_center, field_loc_NEcorner, field_type_scalar, & - field_loc_Nface, field_loc_Eface - integer (kind=int_kind) :: & i, j, iblk, & ilo,ihi,jlo,jhi ! beginning and end of physical domain @@ -2349,10 +2380,6 @@ end subroutine makemask subroutine Tlatlon - use ice_constants, only: c0, c1, c1p5, c2, c4, p5, & - field_loc_center, field_loc_Nface, field_loc_Eface, & - field_type_scalar - integer (kind=int_kind) :: & i, j, iblk , & ! horizontal indices ilo,ihi,jlo,jhi ! beginning and end of physical domain @@ -3025,8 +3052,6 @@ end subroutine grid_average_X2Y_1f subroutine grid_average_X2YS(dir,work1,wght1,mask1,work2) - use ice_constants, only: c0 - character(len=*) , intent(in) :: & dir @@ -3256,8 +3281,6 @@ end subroutine grid_average_X2YS subroutine grid_average_X2YA(dir,work1,wght1,work2) - use ice_constants, only: c0 - character(len=*) , intent(in) :: & dir @@ -3486,8 +3509,6 @@ end subroutine grid_average_X2YA subroutine grid_average_X2YF(dir,work1,wght1,work2,wght2) - use ice_constants, only: c0, p25, p5 - character(len=*) , intent(in) :: & dir @@ -3690,8 +3711,6 @@ end subroutine grid_average_X2YF subroutine grid_average_X2Y_2(dir,work1a,wght1a,mask1a,work1b,wght1b,mask1b,work2) - use ice_constants, only: c0 - character(len=*) , intent(in) :: & dir @@ -3902,11 +3921,6 @@ end function grid_neighbor_max subroutine gridbox_corners - use ice_blocks, only: nx_block, ny_block - use ice_constants, only: c0, c2, c360, & - field_loc_NEcorner, field_type_scalar - use ice_domain_size, only: max_blocks - integer (kind=int_kind) :: & i,j,iblk,icorner,& ! index counters ilo,ihi,jlo,jhi ! beginning and end of physical domain @@ -4098,11 +4112,6 @@ end subroutine gridbox_corners subroutine gridbox_edges - use ice_blocks, only: nx_block, ny_block - use ice_constants, only: c0, c2, c360, & - field_loc_NEcorner, field_type_scalar - use ice_domain_size, only: max_blocks - integer (kind=int_kind) :: & i,j,iblk,icorner,& ! index counters ilo,ihi,jlo,jhi ! 
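!-----------------------------------------------------------------------
! Editor's sketch (illustrative only, not part of the patch): the
! global_ext_halo routine above fills the nghost ghost rows/columns of
! the extended (nx_global+2*nghost) x (ny_global+2*nghost) array from
! the interior, depending on the boundary type.  A standalone toy with
! nghost=1 and cyclic boundaries in both directions; all names here are
! hypothetical.
!-----------------------------------------------------------------------
program demo_global_ext_halo
   implicit none
   integer, parameter :: nghost = 1, nxg = 3, nyg = 2
   real    :: a(nxg+2*nghost, nyg+2*nghost)
   integer :: i, j, n

   a = -1.0
   ! interior (nghost+1:nghost+nxg, nghost+1:nghost+nyg) is filled on entry
   do j = 1, nyg
      do i = 1, nxg
         a(i+nghost, j+nghost) = real(10*i + j)
      end do
   end do

   ! cyclic north-south fill, mirroring the ns_boundary_type=='cyclic' branch
   do n = 1, nghost
      a(:, n)            = a(:, nyg+n)
      a(:, nyg+nghost+n) = a(:, nghost+n)
   end do
   ! cyclic east-west fill, mirroring the ew_boundary_type=='cyclic' branch
   do n = 1, nghost
      a(n, :)            = a(nxg+n, :)
      a(nxg+nghost+n, :) = a(nghost+n, :)
   end do

   do j = nyg+2*nghost, 1, -1          ! print with north at the top
      print '(5f7.1)', a(:, j)
   end do
end program demo_global_ext_halo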
beginning and end of physical domain @@ -4398,11 +4407,6 @@ end subroutine gridbox_edges subroutine gridbox_verts(work_g,vbounds) - use ice_blocks, only: nx_block, ny_block - use ice_constants, only: c0, c2, & - field_loc_NEcorner, field_type_scalar - use ice_domain_size, only: max_blocks - real (kind=dbl_kind), dimension(:,:), intent(in) :: & work_g @@ -4517,8 +4521,6 @@ end subroutine gridbox_verts subroutine get_bathymetry - use ice_constants, only: c0 - integer (kind=int_kind) :: & i, j, k, iblk ! loop indices @@ -4710,7 +4712,6 @@ subroutine read_seabedstress_bathy ! use module use ice_read_write - use ice_constants, only: field_loc_center, field_type_scalar ! local variables integer (kind=int_kind) :: & diff --git a/cicecore/drivers/direct/hadgem3/CICE_InitMod.F90 b/cicecore/drivers/direct/hadgem3/CICE_InitMod.F90 index 4efb13c52..3f87f2ca8 100644 --- a/cicecore/drivers/direct/hadgem3/CICE_InitMod.F90 +++ b/cicecore/drivers/direct/hadgem3/CICE_InitMod.F90 @@ -80,7 +80,7 @@ subroutine cice_init get_forcing_atmo, get_forcing_ocn, alloc_forcing, get_wave_spec use ice_forcing_bgc, only: get_forcing_bgc, get_atm_bgc, & faero_data, faero_default, alloc_forcing_bgc - use ice_grid, only: init_grid1, init_grid2, alloc_grid + use ice_grid, only: init_grid1, init_grid2, alloc_grid, dealloc_grid use ice_history, only: init_hist, accum_hist use ice_restart_shared, only: restart, runid, runtype use ice_init, only: input_data, init_state @@ -213,6 +213,7 @@ subroutine cice_init call init_flux_atm ! initialize atmosphere fluxes sent to coupler call init_flux_ocn ! initialize ocean fluxes sent to coupler + call dealloc_grid ! deallocate temporary grid arrays if (write_ic) call accum_hist(dt) ! write initial conditions end subroutine cice_init diff --git a/cicecore/drivers/direct/nemo_concepts/CICE_InitMod.F90 b/cicecore/drivers/direct/nemo_concepts/CICE_InitMod.F90 index 69ecd4c91..7e2308f20 100644 --- a/cicecore/drivers/direct/nemo_concepts/CICE_InitMod.F90 +++ b/cicecore/drivers/direct/nemo_concepts/CICE_InitMod.F90 @@ -80,7 +80,7 @@ subroutine cice_init get_forcing_atmo, get_forcing_ocn, alloc_forcing, get_wave_spec use ice_forcing_bgc, only: get_forcing_bgc, get_atm_bgc, & faero_data, faero_default, alloc_forcing_bgc - use ice_grid, only: init_grid1, init_grid2, alloc_grid + use ice_grid, only: init_grid1, init_grid2, alloc_grid, dealloc_grid use ice_history, only: init_hist, accum_hist use ice_restart_shared, only: restart, runid, runtype use ice_init, only: input_data, init_state @@ -215,6 +215,7 @@ subroutine cice_init call init_flux_atm ! initialize atmosphere fluxes sent to coupler call init_flux_ocn ! initialize ocean fluxes sent to coupler + call dealloc_grid ! 
deallocate temporary grid arrays end subroutine cice_init !======================================================================= diff --git a/cicecore/drivers/mct/cesm1/CICE_InitMod.F90 b/cicecore/drivers/mct/cesm1/CICE_InitMod.F90 index 3c5907c54..419dbacc9 100644 --- a/cicecore/drivers/mct/cesm1/CICE_InitMod.F90 +++ b/cicecore/drivers/mct/cesm1/CICE_InitMod.F90 @@ -82,7 +82,7 @@ subroutine cice_init(mpicom_ice) get_forcing_atmo, get_forcing_ocn, get_wave_spec, init_snowtable use ice_forcing_bgc, only: get_forcing_bgc, get_atm_bgc, & faero_default, alloc_forcing_bgc, fiso_default - use ice_grid, only: init_grid1, init_grid2, alloc_grid + use ice_grid, only: init_grid1, init_grid2, alloc_grid, dealloc_grid use ice_history, only: init_hist, accum_hist use ice_restart_shared, only: restart, runtype use ice_init, only: input_data, init_state @@ -241,6 +241,7 @@ subroutine cice_init(mpicom_ice) if (write_ic) call accum_hist(dt) ! write initial conditions + call dealloc_grid ! deallocate temporary grid arrays if (my_task == master_task) then call ice_memusage_print(nu_diag,subname//':end') endif diff --git a/cicecore/drivers/nuopc/cmeps/CICE_InitMod.F90 b/cicecore/drivers/nuopc/cmeps/CICE_InitMod.F90 index 2ebcc696a..0c6bc9949 100644 --- a/cicecore/drivers/nuopc/cmeps/CICE_InitMod.F90 +++ b/cicecore/drivers/nuopc/cmeps/CICE_InitMod.F90 @@ -32,7 +32,7 @@ subroutine cice_init1() use ice_init , only: input_data use ice_init_column , only: input_zbgc, count_tracers - use ice_grid , only: init_grid1, alloc_grid + use ice_grid , only: init_grid1, alloc_grid, dealloc_grid use ice_domain , only: init_domain_blocks use ice_arrays_column , only: alloc_arrays_column use ice_state , only: alloc_state @@ -201,6 +201,8 @@ subroutine cice_init2() call init_flux_atm ! initialize atmosphere fluxes sent to coupler call init_flux_ocn ! initialize ocean fluxes sent to coupler + call dealloc_grid ! deallocate temporary grid arrays + end subroutine cice_init2 !======================================================================= diff --git a/cicecore/drivers/nuopc/dmi/CICE_FinalMod.F90 b/cicecore/drivers/nuopc/dmi/CICE_FinalMod.F90 index 9f32875e1..27d01f110 100644 --- a/cicecore/drivers/nuopc/dmi/CICE_FinalMod.F90 +++ b/cicecore/drivers/nuopc/dmi/CICE_FinalMod.F90 @@ -35,12 +35,12 @@ subroutine CICE_Finalize character(len=*), parameter :: subname = '(CICE_Finalize)' - !------------------------------------------------------------------- - ! stop timers and print timer info - !------------------------------------------------------------------- + !------------------------------------------------------------------- + ! stop timers and print timer info + !------------------------------------------------------------------- call ice_timer_stop(timer_total) ! stop timing entire run - call ice_timer_print_all(stats=.false.) ! print timing information + call ice_timer_print_all(stats=timer_stats) ! print timing information call icepack_warnings_flush(nu_diag) if (icepack_warnings_aborted()) call abort_ice(error_message=subname, & @@ -55,9 +55,9 @@ subroutine CICE_Finalize !echmod if (nu_diag /= 6) close (nu_diag) ! diagnostic output call release_all_fileunits - !------------------------------------------------------------------- - ! quit MPI - !------------------------------------------------------------------- + !------------------------------------------------------------------- + ! 
quit MPI + !------------------------------------------------------------------- #ifndef coupled #ifndef CICE_DMI diff --git a/cicecore/drivers/nuopc/dmi/CICE_InitMod.F90 b/cicecore/drivers/nuopc/dmi/CICE_InitMod.F90 index 147bdf7df..4577113f1 100644 --- a/cicecore/drivers/nuopc/dmi/CICE_InitMod.F90 +++ b/cicecore/drivers/nuopc/dmi/CICE_InitMod.F90 @@ -87,7 +87,7 @@ subroutine cice_init(mpi_comm) get_forcing_atmo, get_forcing_ocn, get_wave_spec, init_snowtable use ice_forcing_bgc, only: get_forcing_bgc, get_atm_bgc, & faero_default, alloc_forcing_bgc, fiso_default - use ice_grid, only: init_grid1, init_grid2, alloc_grid + use ice_grid, only: init_grid1, init_grid2, alloc_grid, dealloc_grid use ice_history, only: init_hist, accum_hist use ice_restart_shared, only: restart, runtype use ice_init, only: input_data, init_state @@ -259,6 +259,7 @@ subroutine cice_init(mpi_comm) if (write_ic) call accum_hist(dt) ! write initial conditions + call dealloc_grid ! deallocate temporary grid arrays if (my_task == master_task) then call ice_memusage_print(nu_diag,subname//':end') endif @@ -277,7 +278,6 @@ subroutine init_restart use ice_domain_size, only: ncat, n_iso, n_aero, nfsd, nslyr use ice_dyn_eap, only: read_restart_eap use ice_dyn_shared, only: kdyn - use ice_flux, only: Tf use ice_grid, only: tmask use ice_init, only: ice_ic use ice_init_column, only: init_age, init_FY, init_lvl, init_snowtracers, & @@ -292,7 +292,8 @@ subroutine init_restart restart_iso, read_restart_iso, & restart_aero, read_restart_aero, & restart_hbrine, read_restart_hbrine, & - restart_zsal, restart_bgc + restart_bgc + use ice_flux, only: Tf use ice_restart_driver, only: restartfile use ice_restart_shared, only: runtype, restart use ice_state ! almost everything @@ -303,7 +304,7 @@ subroutine init_restart logical(kind=log_kind) :: & tr_iage, tr_FY, tr_lvl, tr_pond_lvl, & tr_pond_topo, tr_snow, tr_fsd, tr_iso, tr_aero, tr_brine, & - skl_bgc, z_tracers, solve_zsal + skl_bgc, z_tracers integer(kind=int_kind) :: & ntrcr integer(kind=int_kind) :: & @@ -319,7 +320,7 @@ subroutine init_restart file=__FILE__, line=__LINE__) call icepack_query_parameters(skl_bgc_out=skl_bgc, & - z_tracers_out=z_tracers, solve_zsal_out=solve_zsal) + z_tracers_out=z_tracers) call icepack_query_tracer_flags(tr_iage_out=tr_iage, tr_FY_out=tr_FY, & tr_lvl_out=tr_lvl, tr_pond_lvl_out=tr_pond_lvl, & tr_pond_topo_out=tr_pond_topo, tr_aero_out=tr_aero, tr_brine_out=tr_brine, & @@ -465,8 +466,6 @@ subroutine init_restart if (trim(runtype) == 'continue') then if (tr_brine) & restart_hbrine = .true. - if (solve_zsal) & - restart_zsal = .true. if (skl_bgc .or. z_tracers) & restart_bgc = .true. endif @@ -476,7 +475,7 @@ subroutine init_restart if (tr_brine .and. restart_hbrine) call read_restart_hbrine endif - if (solve_zsal .or. skl_bgc .or. z_tracers) then ! biogeochemistry + if (skl_bgc .or. z_tracers) then ! 
biogeochemistry if (tr_fsd) then write (nu_diag,*) 'FSD implementation incomplete for use with BGC' call icepack_warnings_flush(nu_diag) diff --git a/cicecore/drivers/nuopc/dmi/CICE_RunMod.F90 b/cicecore/drivers/nuopc/dmi/CICE_RunMod.F90 index 77bb7738e..897f62eea 100644 --- a/cicecore/drivers/nuopc/dmi/CICE_RunMod.F90 +++ b/cicecore/drivers/nuopc/dmi/CICE_RunMod.F90 @@ -15,11 +15,13 @@ module CICE_RunMod use ice_kinds_mod + use ice_communicate, only: my_task, master_task use ice_fileunits, only: nu_diag use ice_arrays_column, only: oceanmixed_ice use ice_constants, only: c0, c1 use ice_constants, only: field_loc_center, field_type_scalar use ice_exit, only: abort_ice + use ice_memusage, only: ice_memusage_print use icepack_intfc, only: icepack_warnings_flush, icepack_warnings_aborted use icepack_intfc, only: icepack_max_iso, icepack_max_aero use icepack_intfc, only: icepack_query_parameters @@ -43,7 +45,7 @@ module CICE_RunMod subroutine CICE_Run(stop_now_cpl) - use ice_calendar, only: istep, istep1, dt, stop_now, advance_timestep + use ice_calendar, only: dt, stop_now, advance_timestep use ice_forcing, only: get_forcing_atmo, get_forcing_ocn, & get_wave_spec use ice_forcing_bgc, only: get_forcing_bgc, get_atm_bgc, & @@ -74,9 +76,9 @@ subroutine CICE_Run(stop_now_cpl) file=__FILE__, line=__LINE__) #ifndef CICE_IN_NEMO - !-------------------------------------------------------------------- - ! timestep loop - !-------------------------------------------------------------------- + !-------------------------------------------------------------------- + ! timestep loop + !-------------------------------------------------------------------- #ifndef CICE_DMI timeLoop: do #endif @@ -147,7 +149,7 @@ subroutine ice_step use ice_boundary, only: ice_HaloUpdate use ice_calendar, only: dt, dt_dyn, ndtd, diagfreq, write_restart, istep use ice_diagnostics, only: init_mass_diags, runtime_diags, debug_model, debug_ice - use ice_diagnostics_bgc, only: hbrine_diags, zsal_diags, bgc_diags + use ice_diagnostics_bgc, only: hbrine_diags, bgc_diags use ice_domain, only: halo_info, nblocks use ice_dyn_eap, only: write_restart_eap use ice_dyn_shared, only: kdyn, kridge @@ -181,7 +183,7 @@ subroutine ice_step logical (kind=log_kind) :: & tr_iage, tr_FY, tr_lvl, tr_fsd, tr_snow, & tr_pond_lvl, tr_pond_topo, tr_brine, tr_iso, tr_aero, & - calc_Tsfc, skl_bgc, solve_zsal, z_tracers, wave_spec + calc_Tsfc, skl_bgc, z_tracers, wave_spec character(len=*), parameter :: subname = '(ice_step)' @@ -195,8 +197,7 @@ subroutine ice_step endif call icepack_query_parameters(calc_Tsfc_out=calc_Tsfc, skl_bgc_out=skl_bgc, & - solve_zsal_out=solve_zsal, z_tracers_out=z_tracers, ktherm_out=ktherm, & - wave_spec_out=wave_spec) + z_tracers_out=z_tracers, ktherm_out=ktherm, wave_spec_out=wave_spec) call icepack_query_tracer_flags(tr_iage_out=tr_iage, tr_FY_out=tr_FY, & tr_lvl_out=tr_lvl, tr_pond_lvl_out=tr_pond_lvl, & tr_pond_topo_out=tr_pond_topo, tr_brine_out=tr_brine, tr_aero_out=tr_aero, & @@ -226,10 +227,9 @@ subroutine ice_step call step_prep - !$OMP PARALLEL DO PRIVATE(iblk) - do iblk = 1, nblocks - - if (ktherm >= 0) then + if (ktherm >= 0) then + !$OMP PARALLEL DO PRIVATE(iblk) SCHEDULE(runtime) + do iblk = 1, nblocks !----------------------------------------------------------------- ! scale radiation fields @@ -267,10 +267,9 @@ subroutine ice_step call debug_ice (iblk, plabeld) endif - endif ! ktherm > 0 - - enddo ! iblk - !$OMP END PARALLEL DO + enddo + !$OMP END PARALLEL DO + endif ! ktherm > 0 ! 
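!-----------------------------------------------------------------------
! Editor's sketch (illustrative only, not part of the patch): the change
! above hoists the loop-invariant "ktherm >= 0" test outside the OpenMP
! parallel region and requests SCHEDULE(runtime) so the schedule can be
! chosen at run time via OMP_SCHEDULE.  A minimal analogue of that
! restructuring; all names here are hypothetical.
!-----------------------------------------------------------------------
program demo_omp_hoist
   implicit none
   integer, parameter :: nblocks = 8
   integer :: iblk, ktherm
   real    :: work(nblocks)

   ktherm = 1
   work   = 0.0
   if (ktherm >= 0) then               ! evaluated once, outside the loop
      !$OMP PARALLEL DO PRIVATE(iblk) SCHEDULE(runtime)
      do iblk = 1, nblocks
         work(iblk) = real(iblk)**2
      end do
      !$OMP END PARALLEL DO
   end if
   print *, 'sum of work:', sum(work)
end program demo_omp_hoist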
clean up, update tendency diagnostics offset = dt @@ -300,7 +299,7 @@ subroutine ice_step endif ! ridging - !$OMP PARALLEL DO PRIVATE(iblk) + !$OMP PARALLEL DO PRIVATE(iblk) SCHEDULE(runtime) do iblk = 1, nblocks if (kridge > 0) call step_dyn_ridge (dt_dyn, ndtd, iblk) enddo @@ -334,9 +333,11 @@ subroutine ice_step !----------------------------------------------------------------- if (tr_snow) then ! advanced snow physics + !$OMP PARALLEL DO PRIVATE(iblk) SCHEDULE(runtime) do iblk = 1, nblocks call step_snow (dt, iblk) enddo + !$OMP END PARALLEL DO call update_state (dt) ! clean up endif @@ -384,9 +385,11 @@ subroutine ice_step call ice_timer_start(timer_diags) ! diagnostics if (mod(istep,diagfreq) == 0) then call runtime_diags(dt) ! log file - if (solve_zsal) call zsal_diags if (skl_bgc .or. z_tracers) call bgc_diags if (tr_brine) call hbrine_diags + if (my_task == master_task) then + call ice_memusage_print(nu_diag,subname) + endif endif call ice_timer_stop(timer_diags) ! diagnostics @@ -406,13 +409,12 @@ subroutine ice_step if (tr_fsd) call write_restart_fsd if (tr_iso) call write_restart_iso if (tr_aero) call write_restart_aero - if (solve_zsal .or. skl_bgc .or. z_tracers) & + if (skl_bgc .or. z_tracers) & call write_restart_bgc if (tr_brine) call write_restart_hbrine if (kdyn == 2) call write_restart_eap call final_restart endif - call ice_timer_stop(timer_readwrite) ! reading/writing end subroutine ice_step @@ -426,7 +428,7 @@ end subroutine ice_step subroutine coupling_prep (iblk) use ice_arrays_column, only: alvdfn, alidfn, alvdrn, alidrn, & - albicen, albsnon, albpndn, apeffn, fzsal_g, fzsal, snowfracn + albicen, albsnon, albpndn, apeffn, snowfracn use ice_blocks, only: nx_block, ny_block, get_block, block use ice_domain, only: blocks_ice use ice_calendar, only: dt, nstreams @@ -441,7 +443,7 @@ subroutine coupling_prep (iblk) fsens, flat, fswabs, flwout, evap, Tref, Qref, & scale_fluxes, frzmlt_init, frzmlt use ice_flux_bgc, only: faero_ocn, fiso_ocn, Qref_iso, fiso_evap, & - fzsal_ai, fzsal_g_ai, flux_bio, flux_bio_ai + flux_bio, flux_bio_ai use ice_grid, only: tmask use ice_state, only: aicen, aice #ifdef CICE_IN_NEMO @@ -592,8 +594,6 @@ subroutine coupling_prep (iblk) fsalt_ai (i,j,iblk) = fsalt (i,j,iblk) fhocn_ai (i,j,iblk) = fhocn (i,j,iblk) fswthru_ai(i,j,iblk) = fswthru(i,j,iblk) - fzsal_ai (i,j,iblk) = fzsal (i,j,iblk) - fzsal_g_ai(i,j,iblk) = fzsal_g(i,j,iblk) if (nbtrcr > 0) then do k = 1, nbtrcr @@ -639,8 +639,7 @@ subroutine coupling_prep (iblk) faero_ocn(:,:,:,iblk), & alvdr (:,:,iblk), alidr (:,:,iblk), & alvdf (:,:,iblk), alidf (:,:,iblk), & - fzsal (:,:,iblk), fzsal_g (:,:,iblk), & - flux_bio (:,:,1:nbtrcr,iblk), & + flux_bio =flux_bio (:,:,1:nbtrcr,iblk), & Qref_iso =Qref_iso (:,:,:,iblk), & fiso_evap=fiso_evap(:,:,:,iblk), & fiso_ocn =fiso_ocn (:,:,:,iblk)) diff --git a/cicecore/drivers/standalone/cice/CICE_InitMod.F90 b/cicecore/drivers/standalone/cice/CICE_InitMod.F90 index 38000446a..a48bdda30 100644 --- a/cicecore/drivers/standalone/cice/CICE_InitMod.F90 +++ b/cicecore/drivers/standalone/cice/CICE_InitMod.F90 @@ -82,7 +82,7 @@ subroutine cice_init get_forcing_atmo, get_forcing_ocn, get_wave_spec, init_snowtable use ice_forcing_bgc, only: get_forcing_bgc, get_atm_bgc, & faero_default, alloc_forcing_bgc, fiso_default - use ice_grid, only: init_grid1, init_grid2, alloc_grid + use ice_grid, only: init_grid1, init_grid2, alloc_grid, dealloc_grid use ice_history, only: init_hist, accum_hist use ice_restart_shared, only: restart, runtype use ice_init, only: 
input_data, init_state @@ -243,6 +243,8 @@ subroutine cice_init call init_flux_atm ! initialize atmosphere fluxes sent to coupler call init_flux_ocn ! initialize ocean fluxes sent to coupler + call dealloc_grid ! deallocate temporary grid arrays + if (my_task == master_task) then call ice_memusage_print(nu_diag,subname//':end') endif diff --git a/cicecore/drivers/unittest/gridavgchk/CICE_InitMod.F90 b/cicecore/drivers/unittest/gridavgchk/CICE_InitMod.F90 index 38000446a..cb1241a5e 100644 --- a/cicecore/drivers/unittest/gridavgchk/CICE_InitMod.F90 +++ b/cicecore/drivers/unittest/gridavgchk/CICE_InitMod.F90 @@ -82,7 +82,7 @@ subroutine cice_init get_forcing_atmo, get_forcing_ocn, get_wave_spec, init_snowtable use ice_forcing_bgc, only: get_forcing_bgc, get_atm_bgc, & faero_default, alloc_forcing_bgc, fiso_default - use ice_grid, only: init_grid1, init_grid2, alloc_grid + use ice_grid, only: init_grid1, init_grid2, alloc_grid, dealloc_grid use ice_history, only: init_hist, accum_hist use ice_restart_shared, only: restart, runtype use ice_init, only: input_data, init_state @@ -243,6 +243,7 @@ subroutine cice_init call init_flux_atm ! initialize atmosphere fluxes sent to coupler call init_flux_ocn ! initialize ocean fluxes sent to coupler + call dealloc_grid ! deallocate temporary grid arrays if (my_task == master_task) then call ice_memusage_print(nu_diag,subname//':end') endif diff --git a/cicecore/drivers/unittest/halochk/CICE_InitMod.F90 b/cicecore/drivers/unittest/halochk/CICE_InitMod.F90 index 38000446a..cb1241a5e 100644 --- a/cicecore/drivers/unittest/halochk/CICE_InitMod.F90 +++ b/cicecore/drivers/unittest/halochk/CICE_InitMod.F90 @@ -82,7 +82,7 @@ subroutine cice_init get_forcing_atmo, get_forcing_ocn, get_wave_spec, init_snowtable use ice_forcing_bgc, only: get_forcing_bgc, get_atm_bgc, & faero_default, alloc_forcing_bgc, fiso_default - use ice_grid, only: init_grid1, init_grid2, alloc_grid + use ice_grid, only: init_grid1, init_grid2, alloc_grid, dealloc_grid use ice_history, only: init_hist, accum_hist use ice_restart_shared, only: restart, runtype use ice_init, only: input_data, init_state @@ -243,6 +243,7 @@ subroutine cice_init call init_flux_atm ! initialize atmosphere fluxes sent to coupler call init_flux_ocn ! initialize ocean fluxes sent to coupler + call dealloc_grid ! deallocate temporary grid arrays if (my_task == master_task) then call ice_memusage_print(nu_diag,subname//':end') endif diff --git a/cicecore/drivers/unittest/opticep/CICE_InitMod.F90 b/cicecore/drivers/unittest/opticep/CICE_InitMod.F90 index 38000446a..cb1241a5e 100644 --- a/cicecore/drivers/unittest/opticep/CICE_InitMod.F90 +++ b/cicecore/drivers/unittest/opticep/CICE_InitMod.F90 @@ -82,7 +82,7 @@ subroutine cice_init get_forcing_atmo, get_forcing_ocn, get_wave_spec, init_snowtable use ice_forcing_bgc, only: get_forcing_bgc, get_atm_bgc, & faero_default, alloc_forcing_bgc, fiso_default - use ice_grid, only: init_grid1, init_grid2, alloc_grid + use ice_grid, only: init_grid1, init_grid2, alloc_grid, dealloc_grid use ice_history, only: init_hist, accum_hist use ice_restart_shared, only: restart, runtype use ice_init, only: input_data, init_state @@ -243,6 +243,7 @@ subroutine cice_init call init_flux_atm ! initialize atmosphere fluxes sent to coupler call init_flux_ocn ! initialize ocean fluxes sent to coupler + call dealloc_grid ! 
+      call dealloc_grid      ! deallocate temporary grid arrays
       if (my_task == master_task) then
          call ice_memusage_print(nu_diag,subname//':end')
       endif
diff --git a/cicecore/drivers/unittest/sumchk/CICE_InitMod.F90 b/cicecore/drivers/unittest/sumchk/CICE_InitMod.F90
index 38000446a..cb1241a5e 100644
--- a/cicecore/drivers/unittest/sumchk/CICE_InitMod.F90
+++ b/cicecore/drivers/unittest/sumchk/CICE_InitMod.F90
@@ -82,7 +82,7 @@ subroutine cice_init
          get_forcing_atmo, get_forcing_ocn, get_wave_spec, init_snowtable
       use ice_forcing_bgc, only: get_forcing_bgc, get_atm_bgc, &
          faero_default, alloc_forcing_bgc, fiso_default
-      use ice_grid, only: init_grid1, init_grid2, alloc_grid
+      use ice_grid, only: init_grid1, init_grid2, alloc_grid, dealloc_grid
       use ice_history, only: init_hist, accum_hist
       use ice_restart_shared, only: restart, runtype
       use ice_init, only: input_data, init_state
@@ -243,6 +243,7 @@ subroutine cice_init
       call init_flux_atm     ! initialize atmosphere fluxes sent to coupler
       call init_flux_ocn     ! initialize ocean fluxes sent to coupler

+      call dealloc_grid      ! deallocate temporary grid arrays
       if (my_task == master_task) then
          call ice_memusage_print(nu_diag,subname//':end')
       endif
diff --git a/configuration/scripts/machines/Macros.freya_intel b/configuration/scripts/machines/Macros.freya_intel
index f40ca4e23..b31264990 100644
--- a/configuration/scripts/machines/Macros.freya_intel
+++ b/configuration/scripts/machines/Macros.freya_intel
@@ -21,14 +21,14 @@ CFLAGS     := -c -O2 -fp-model precise
 # Additional flags
 FIXEDFLAGS := -132
 FREEFLAGS  := -FR
-FFLAGS     := -fp-model source -convert big_endian -assume byterecl -ftz -traceback -no-wrap-margin
+FFLAGS     := -convert big_endian -assume byterecl
 #-xHost

 ifeq ($(ICE_BLDDEBUG), true)
-  FFLAGS   += -O0 -g -check -fpe0 -ftrapuv -fp-model except -check noarg_temp_created
+  FFLAGS   += -O0 -g -check -fpe0 -ftrapuv -fp-model except -check noarg_temp_created -fp-model source -ftz -traceback -no-wrap-margin
 # -heap-arrays 1024
 else
-  FFLAGS   += -O2
+  FFLAGS   += -O3 -xCORE-AVX512 -qopt-zmm-usage=high -finline-functions -finline -parallel
 endif
 LD         := $(FC)
 LDFLAGS    := $(FFLAGS) -v
diff --git a/configuration/scripts/tests/omp_suite.ts b/configuration/scripts/tests/omp_suite.ts
index 57effbe75..c640f49d0 100644
--- a/configuration/scripts/tests/omp_suite.ts
+++ b/configuration/scripts/tests/omp_suite.ts
@@ -32,7 +32,6 @@ smoke    gx3    8x4    diag1,reprosum,run10day,gridc
 smoke    gx3    6x2    alt01,reprosum,run10day,gridc
 smoke    gx3    8x2    alt02,reprosum,run10day,gridc
 #smoke   gx3   12x2    alt03,droundrobin,reprosum,run10day,gridc
-smoke    gx3    4x4    alt04,reprosum,run10day,gridc
 smoke    gx3    4x4    alt05,reprosum,run10day,gridc
 smoke    gx3    8x2    alt06,reprosum,run10day,gridc
 smoke    gx3    7x2    alt07,reprosum,run10day,gridc
@@ -58,7 +57,6 @@ smoke    gx3    8x4    diag1,reprosum,run10day,gridcd
 smoke    gx3    6x2    alt01,reprosum,run10day,gridcd
 smoke    gx3    8x2    alt02,reprosum,run10day,gridcd
 #smoke   gx3   12x2    alt03,droundrobin,reprosum,run10day,gridcd
-smoke    gx3    4x4    alt04,reprosum,run10day,gridcd
 smoke    gx3    4x4    alt05,reprosum,run10day,gridcd
 smoke    gx3    8x2    alt06,reprosum,run10day,gridcd
 smoke    gx3    7x2    alt07,reprosum,run10day,gridcd
@@ -113,7 +111,6 @@ smoke    gx3    4x1    diag1,reprosum,run10day,cmplogrest,thread,grid
 smoke    gx3    8x1    alt01,reprosum,run10day,cmplogrest,thread,gridc    smoke_gx3_6x2_alt01_gridc_reprosum_run10day
 smoke    gx3    8x1    alt02,reprosum,run10day,cmplogrest,thread,gridc    smoke_gx3_8x2_alt02_gridc_reprosum_run10day
 #smoke   gx3    8x1    alt03,reprosum,run10day,cmplogrest,thread,gridc    smoke_gx3_12x2_alt03_droundrobin_gridc_reprosum_run10day
-smoke    gx3    8x1    alt04,reprosum,run10day,cmplogrest,thread,gridc    smoke_gx3_4x4_alt04_gridc_reprosum_run10day
 smoke    gx3    8x1    alt05,reprosum,run10day,cmplogrest,thread,gridc    smoke_gx3_4x4_alt05_gridc_reprosum_run10day
 smoke    gx3    8x1    alt06,reprosum,run10day,cmplogrest,thread,gridc    smoke_gx3_8x2_alt06_gridc_reprosum_run10day
 smoke    gx3    8x1    alt07,reprosum,run10day,cmplogrest,thread,gridc    smoke_gx3_7x2_alt07_gridc_reprosum_run10day
@@ -141,7 +138,6 @@ smoke    gx3    4x1    diag1,reprosum,run10day,cmplogrest,thread,grid
 smoke    gx3    8x1    alt01,reprosum,run10day,cmplogrest,thread,gridcd   smoke_gx3_6x2_alt01_gridcd_reprosum_run10day
 smoke    gx3    8x1    alt02,reprosum,run10day,cmplogrest,thread,gridcd   smoke_gx3_8x2_alt02_gridcd_reprosum_run10day
 #smoke   gx3    8x1    alt03,reprosum,run10day,cmplogrest,thread,gridcd   smoke_gx3_12x2_alt03_droundrobin_gridcd_reprosum_run10day
-smoke    gx3    8x1    alt04,reprosum,run10day,cmplogrest,thread,gridcd   smoke_gx3_4x4_alt04_gridcd_reprosum_run10day
 smoke    gx3    8x1    alt05,reprosum,run10day,cmplogrest,thread,gridcd   smoke_gx3_4x4_alt05_gridcd_reprosum_run10day
 smoke    gx3    8x1    alt06,reprosum,run10day,cmplogrest,thread,gridcd   smoke_gx3_8x2_alt06_gridcd_reprosum_run10day
 smoke    gx3    8x1    alt07,reprosum,run10day,cmplogrest,thread,gridcd   smoke_gx3_7x2_alt07_gridcd_reprosum_run10day