(very complicated) ROMS_SWAN_nesting blow up issue #320

Open
IvanPris opened this issue Sep 16, 2024 · 14 comments

Comments

@IvanPris

I am new to COAWST. I am now working on a project: ROMS_SWAN coupled with nesting (4 nested layers in total).

ROMS and SWAN share the same grids, and there are 4 nested layers in total. The largest domain covers -5 to 50 degrees latitude and 100 to 150 degrees longitude.

The complicated part is that when I run ROMS and SWAN separately, they both run smoothly (144 forecast hours in total) without any blow-up. I have also looked very carefully at the output files from the two models (the qck files for ROMS and the SWAN netCDF files converted from the mat files), and everything is quite normal for ALL 4 nested layers (there are no points with extreme values).

However, when I run the coupled ROMS_SWAN case, it blows up after about 1 forecast hour. Looking at the ROMS QCK output, I can see that in the bottom left corner of the largest domain the zeta values are horribly large, reaching almost 100 metres after 1 forecast hour, and u and v are also large (increasing rapidly to 20 metres per second before the blow-up). Another thing I noticed is that, besides that point, there is also a small region near the boundary of my SECOND largest domain (a boundary again). In the corresponding region of the largest domain's qck file, the values are already quite large compared with other points (around 2-3 metres per second, while other "normal" points generally sit between -1 and 1). Since this region lies near the boundary of the second largest domain, I think that is why the values get amplified 3-4 times there, reaching 8-10 metres per second in the second largest domain.

I have done quite a lot of searching about this online, and I have tried the following methods:

  1. masking (masking those grid points with large zeta, u and v)
    --> Result: when I mask the "blow-up point", the run lasts a few minutes longer, but then another point blows up (I think the u and v at those points have actually been increasing continuously from the beginning; they did not blow up in the previous run only because other points blew up BEFORE them). One finding is that those points are mostly located near the boundaries of the nested layers.

  2. adding a sponge layer near the boundary: increasing the viscosity and diffusivity near the boundaries
    --> Result: no big difference at all, it still blew up

  3. Smoothing the bathymetry (an rx0 check is sketched below)
    I did this because I noticed that, apart from being near a boundary, what the blow-up points have in common is very steep bathymetry, e.g. near the coasts of the Philippines and the Ryukyu Islands. I tried to smooth the bathymetry using matlab scripts.

--> Result: Yes, it prevents some points from blowing up, but other points still blow up (especially near the Ryukyu Islands, where I think the bathymetry is still quite steep), even though I have already smoothed it heavily (down to rx0 = 0.04).
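
For reference, a minimal MATLAB sketch of the rx0 check (the Beckmann-Haidvogel stiffness ratio) that this smoothing targets; 'my_grid.nc' is a placeholder name for the ROMS grid file:

  % read bathymetry and land/sea mask from a ROMS grid file (placeholder name)
  h    = ncread('my_grid.nc','h');
  mask = ncread('my_grid.nc','mask_rho');
  h(mask < 0.5) = NaN;                      % ignore land points

  % rx0 = |h1 - h2| / (h1 + h2) between adjacent wet cells, xi and eta directions
  rx0_x = abs(diff(h,1,1)) ./ (h(1:end-1,:) + h(2:end,:));
  rx0_y = abs(diff(h,1,2)) ./ (h(:,1:end-1) + h(:,2:end));
  rx0   = max(max(rx0_x,[],'all','omitnan'), max(rx0_y,[],'all','omitnan'));
  fprintf('max rx0 = %.3f\n', rx0);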

What I don't understand is:

A. May I know why the two models are fine when I run them separately, without any abnormal values, but blow up as soon as I couple them?

B. Since I suspect it might be related to the coupling, I tried to increase the coupling interval between the two models in coupling*.in (from 1800 s to 18000 s for both OCE2WAV and WAV2OCE). I thought this would reduce the exchange of information between the two models (please correct me if I am wrong, as I am new to COAWST). However, it still runs to the 1st forecast hour and then blows up. I don't understand this: with a coupling interval of 18000 seconds the models should barely exchange anything within the first hour, yet the result is the same.

I would be very grateful if anyone who has encountered a similar issue could share their experience or give me some hints. Thank you very much.

@jcwarner-usgs
Collaborator

-sounds like it might be the interpolation weights file.
can you post an image of the blow-up?

-how did you create the scrip weights file?

-can you run Tools/mfiles/mtools/plot_scrip_weights.m?

@IvanPris
Author

  1. My blow-up images:
    This is my largest domain:
    ocean_qck_A_zeta
    ocean_qck_A_u_sur_eastward
    ocean_qck_A_v_sur_northward

As you can see, the values in the bottom left corner are extremely high, leading to a blow-up.

On the other hand, this is my second domain; as you can see, the values near the Ryukyu Islands are also extreme (they keep increasing/decreasing), and these will be the subsequent blow-up points:
ocean_qck_B_zeta
ocean_qck_B_u_sur_eastward
ocean_qck_B_v_sur_northward

  2. For the scrip weights file:

I created it using SCRIP_COAWST (no errors popped up):
++++++++++++++++++++++++++++++++++++++++++++++++++++++
NGRIDS_ROMS=4,
NGRIDS_SWAN=4,
NGRIDS_WW3=0,
NGRIDS_WRF=0,
NGRIDS_HYD=0,

! 3) Enter name of the ROMS grid file(s):
ROMS_GRIDS(1)='PROJ_DIR/ROMS/input/A_grid_4nest.nc',
ROMS_GRIDS(2)='PROJ_DIR/ROMS/input/B_grid_4nest.nc',
ROMS_GRIDS(3)='PROJ_DIR/ROMS/input/C_grid_4nest.nc',
ROMS_GRIDS(4)='PROJ_DIR/ROMS/input/D_grid_4nest.nc',

! 4) Enter SWAN information:
! -the name(s) of the SWAN grid file(s) for coords and bathy.
! -the size of the SWAN grids (full number of center points), and
! -if the swan grids are Spherical(set cartesian=0) or
! Cartesian(set cartesian=1).
! NUMX: xi_rho, NUMY: eta_rho

SWAN_COORD(1)='PROJ_DIR/SWAN/swan_coord_A_4nest.grd',
SWAN_COORD(2)='PROJ_DIR/SWAN/swan_coord_B_4nest.grd',
SWAN_COORD(3)='PROJ_DIR/SWAN/swan_coord_C_4nest.grd',
SWAN_COORD(4)='PROJ_DIR/SWAN/swan_coord_D_4nest.grd',

SWAN_BATH(1)='PROJ_DIR/SWAN/swan_bathy_A_4nest.bot',
SWAN_BATH(2)='PROJ_DIR/SWAN/swan_bathy_B_4nest.bot',
SWAN_BATH(3)='PROJ_DIR/SWAN/swan_bathy_C_4nest.bot',
SWAN_BATH(4)='PROJ_DIR/SWAN/swan_bathy_D_4nest.bot',

SWAN_NUMX(1)=201,
SWAN_NUMY(1)=221,
CARTESIAN(1)=0,

SWAN_NUMX(2)=245,
SWAN_NUMY(2)=149,
CARTESIAN(2)=0,

SWAN_NUMX(3)=127,
SWAN_NUMY(3)=127,
CARTESIAN(3)=0,

SWAN_NUMX(4)=237,
SWAN_NUMY(4)=242,
CARTESIAN(4)=0,

++++++++++++++++++++++++++++++++++++++++++++++++++++++
3. After I ran plot_scrip_weights.m, I got these:
[four plot_scrip_weights figures attached]
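
As an additional sanity check beyond the plots, a minimal MATLAB sketch along these lines can be run on the weights file. 'scrip_weights.nc' is a placeholder name, and the variable names below are the standard SCRIP ones (src_address, dst_address, remap_matrix); adapt them to whatever ncdisp shows for the SCRIP_COAWST output:

  fn  = 'scrip_weights.nc';                  % placeholder file name
  dst = double(ncread(fn,'dst_address'));    % destination cell index per link
  w   = ncread(fn,'remap_matrix');           % interpolation weights per link
  if size(w,1) ~= numel(dst), w = w.'; end   % orientation differs between writers
  wsum = accumarray(dst, w(:,1));            % sum of first-order weights per destination cell
  fprintf('weight sums: min %.3f, max %.3f\n', min(wsum(wsum>0)), max(wsum));
  % sums far from 1 flag destination cells where the interpolation is suspect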

I don't know why independent runs of ROMS and SWAN are both perfectly fine while the coupled run blows up.

Thank you very much for your help.

@simion1232006

simion1232006 commented Sep 17, 2024

I encountered a similar issue when coupling ROMS with WW3. I am not sure whether this will work for you, but you could try removing all masking from the SWAN model. It is not a solution, but it can test whether the problem is caused by wrong interpolation of data between ROMS and SWAN due to the land masking. (A quick consistency check is sketched below.)
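
A minimal MATLAB sketch of such a check; all file names and the 9999 EXCEPTION value are assumptions, so use your own file names and whatever EXC value your INPGRID BOTTOM line declares:

  mask_rho = ncread('my_grid.nc','mask_rho');        % ROMS mask: 1 = water, 0 = land
  bot = load('my_swan_bathy.bot');                   % plain ASCII SWAN depth table
  bot = bot.';                                       % assumes one eta row per line; transpose to xi-by-eta
  swan_wet = bot < 9000;                             % anything not the EXC flag counts as water
  mismatch = (mask_rho > 0.5) ~= swan_wet;
  fprintf('%d points masked differently in ROMS and SWAN\n', nnz(mismatch));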

@jcwarner-usgs
Collaborator

In the lower left of the big grid, what do the waves look like? Something has to be driving that flow.
What are your bc's?
Can you post your .h, ocean.in, and INPUT files?

@IvanPris
Author

I am very sorry for my late reply.

Thank you for all your help and advice.

These are my files: (coupled ROMS_SWAN + 4nest)
log.txt

sandy.h.2way_noavg.txt

ocean.in.txt

Thank you once again for your help.

@jcwarner-usgs
Collaborator

-I can't see the log.txt, can you post that again?

-for the BC's you have, for example:
LBC(isUbar) == Gra Gra Gra Gra
Nes Nes Nes Nes
! Fla Fla Fla Fla \ ! 2D U-momentum
! Fla Fla Fla Fla
! Nes Nes Nes Nes \ ! 2D U-momentum, Grid 2
Nes Nes Nes Nes \ ! 2D U-momentum, Grid 3
Nes Nes Nes Nes ! 2D U-momentum, Grid 3
there are \ characters missing. Did the parser get that info correctly? It would show up in the log.txt.

-what is this: # define TWO_WAY

-can you try with this off: # define PRESS_COMPENSATE

  • you don't need this: # define TS_MPDATA

-do not use this: # define NEARSHORE_MELLOR08
THIS might be the problem. Do not use MELLOR.

  • you want to use

#define CHARNOK

#define CRAIG_BANNER

or

#define TKE_WAVEDISS

#define ZOS_HSIG
I suggest you use the first 2 (CHARNOK + CRAIG_BANNER) and turn off the second set (TKE_WAVEDISS, ZOS_HSIG).

@IvanPris
Author

Thank you for your help.

Following your advice, I made the modifications in the sandy.h file and recompiled.

But the blow-up still persists, again in the lower left corner of the largest domain (zeta reaching over 100 m, with large u and v values that lead to the blow-up), and also in the region near the Ryukyu Islands in the second domain (u and v keep increasing there, and those will likely be the subsequent blow-up points). Like the previous run, the model still blows up after roughly 1.5 hours.

log.txt

ocean.in (2).txt

sandy_ivan.h.txt

@IvanPris
Author

These are the input files for SWAN:
swan_omfs_hkw_4nest.in.txt
swan_omfs_nscs_4nest.in.txt
swan_omfs_scs_pac_4nest.in.txt
swan_omfs_shk_4nest.in.txt

Thank you very much for your help.

@jcwarner-usgs
Collaborator

For grid 1 (the largest), what do the waves look like near the time of the blow-up, especially near the lower left corner?
Do those waves look different from the SWAN-only case?

@IvanPris
Author

I am sorry for my late reply.
For the waves in the ROMS_SWAN coupled case (values in the bottom left corner are 0.7-0.8):
image

For the SWAN-only case (values in the bottom left corner are also 0.7-0.8):
image

Therefore, I think there is not much difference between the ROMS_SWAN and SWAN-only cases.

@IvanPris
Author

IvanPris commented Sep 23, 2024

Some new updates:

I have made another attempt which significantly increases the forecast time before the blow-up (the model can now run for almost 44 hours before blowing up). The changes I made are:

  1. I have masked the bottom left corner of the largest domain (as it does not appear to matter for the forecast)
    image

  2. I have switched to #define ONE_WAY in sandy.h (instead of TWO_WAY) and recompiled.
    ONE_WAY means the parent grid passes info to the child grids but the child grids do NOT feed info back to the parent grid.

TWO_WAY means the child grids also exchange info with the parent grid.

After I made these changes, the model can run a forecast of nearly 44 hours before it blows up. This time it blew up at a different point in the largest grid: the u velocity suddenly becomes very large, while the v velocity and zeta values stay normal. The blow-up point (the tiny red dot in the diagram below) is located near Hainan.
image
image
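
A minimal MATLAB sketch of how such a point can be located from the quick file and, if needed, masked; the file and variable names are assumptions, so check your own output with ncdisp first:

  qck  = 'ocean_qck_A.nc';                   % placeholder quick-file name
  u    = ncread(qck,'u_sur');                % surface u, xi_u x eta_u x time (name is an assumption)
  last = squeeze(u(:,:,end));                % last record before the blow-up
  [umax, idx] = max(abs(last(:)), [], 'omitnan');
  [i, j] = ind2sub(size(last), idx);
  fprintf('max |u_sur| = %.2f m/s at u-point (i,j) = (%d,%d)\n', umax, i, j);
  % (i,j) indexes the u-grid; the neighbouring rho-points are (i,j) and (i+1,j)

  % optional, after backing up the grid file: mask the offending rho-point
  % m = ncread('my_grid.nc','mask_rho'); m(i,j) = 0; ncwrite('my_grid.nc','mask_rho',m);
  % the u/v/psi masks then need to be rebuilt from mask_rho before the grid is reused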

Please find the following input files and log.txt:
log.txt
ocean.in (3).txt
sandy_ivan.h (2).txt

If so, may I know what I should do to prevent the blow-up? Should I mask the point near Hainan (e.g. as in the sketch above)?

Thank you for your help.

@jcwarner-usgs
Collaborator

[image: SWAN wave field, 31 Aug 0100]
This image is concerning. The SWAN waves are not moving cleanly between the grids when coupled.
For SWAN, you don't need to make a new wind file for each grid.
I would recommend that you make 1 wind file at the best resolution, and make copies of the wind file for each grid.

Your SWAN dt is 10 min for each grid. That might be large for the smaller grids.

I am not sure about that max current. Maybe the channel needs to be wider or deeper.

What do the ocean surface currents look like in the coupled run at 31 Aug 0100?
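
A minimal MATLAB sketch for that check; the file name, variable names, and record index are placeholders, so pick the record nearest 31 Aug 0100 from ocean_time:

  qck = 'ocean_qck_A.nc';                        % placeholder coupled-run quick file
  rec = 25;                                      % placeholder record index near 31 Aug 0100
  u = ncread(qck,'u_sur',[1 1 rec],[Inf Inf 1]); % surface currents (names are assumptions)
  v = ncread(qck,'v_sur',[1 1 rec],[Inf Inf 1]);
  lon = ncread('my_grid.nc','lon_rho');
  lat = ncread('my_grid.nc','lat_rho');
  ur = nan(size(lon)); vr = nan(size(lon));      % average u,v onto rho points
  ur(2:end-1,:) = 0.5*(u(1:end-1,:) + u(2:end,:));
  vr(:,2:end-1) = 0.5*(v(:,1:end-1) + v(:,2:end));
  pcolor(lon,lat,hypot(ur,vr)); shading flat; colorbar
  title('surface current speed (m/s), coupled run');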

@IvanPris
Author

IvanPris commented Sep 24, 2024

Thank you very much for your reply and help.

Some updates:

For the current blow-up near Hainan, I tried decreasing the timestep (in ocean.in) for the largest grid from 240 to 120, and now the model successfully runs for the full forecast period (i.e. 144 hours, which is my target).

Though the blow-up issue is more or less resolved, there is another issue: the values in the ROMS output are too large. For example, for the largest domain, the zeta (free surface) in ROMS_SWAN_4nest (left-hand side of the image below) is larger than in my ROMS_WW3 run with no nesting (right-hand side of the image below).
image

I am not sure whether this is the cause: since the largest grid exchanges info with the inner child grids, the zeta in the child grids also becomes too large, much larger than the observed values (in the plot below, the black line is the observation and the blue line is the output from ROMS_SWAN_4nest, innermost grid).

image
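
To quantify the offset, something like this minimal MATLAB sketch can help; the file names, variable names, and gauge position are placeholders:

  out  = 'ocean_qck_D.nc';                        % placeholder innermost-grid output file
  lon  = ncread('my_inner_grid.nc','lon_rho');
  lat  = ncread('my_inner_grid.nc','lat_rho');
  lon0 = 114.2; lat0 = 22.3;                      % placeholder gauge position
  [~, idx] = min((lon(:)-lon0).^2 + (lat(:)-lat0).^2);
  [i, j] = ind2sub(size(lon), idx);
  zeta = squeeze(ncread(out,'zeta',[i j 1],[1 1 Inf]));
  t    = ncread(out,'ocean_time');                % seconds since the run's reference time
  plot(t/3600, zeta, 'b'); hold on; grid on
  % plot(t_obs/3600, zeta_obs, 'k')               % overlay the observed record here
  xlabel('forecast hour'); ylabel('\zeta (m)');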

May I know whether there is any way to resolve this issue?

Thank you

@IvanPris
Author

Also, I have noticed some strange features:

There are two "curves" originating from the coastlines (east of the Philippines and also near the Ryukyu Islands), moving in the direction of the arrows in the image below (u surface velocity):

image

May I know whether this is related to SWAN (I notice a similar pattern in the SWAN output; is it because SWAN is designed and best suited for nearshore simulation, and it is now being applied to the open ocean in my largest grid)? Are there any ways to remedy this issue?
