Replace forcing for gx1 #249
The JRA-55 forcing has been implemented. What remains is to retire the CORE-II based forcing. I recommend removing it entirely from the code (forcing routines would still be available via github) and from the standard forcing data download (keep a link to it, but don't put it in the mass-download file). This transition also includes QC tests, to make sure they still work as advertised. @rallard77, didn't you document runs showing differences between old and new forcing? If so, please link them here. If not, that needs to be added to this task. Thx |
Would it be possible to make new gx3 forcing from JRA-55, too? |
And maybe tx1 as well? |
jra55-example1.pptx |
@rallard77 @eclare108213 I also updated the link in the release table for 6.0.2: You should be able to edit these wiki pages yourself if you would like, or you can let me know changes to make. Sorry I hadn't finished it before the release. I'd intended to do it that day, but got behind. For action items, @eclare108213 I want to clarify: |
What I'd like is to move users off of COREII completely, in favor of JRA55, but I don't think we should remove the COREII data completely from the ftp site, for backwards compatibility. I'm not sure what the best way to do this is. We can certainly stop serving COREII unless it is specifically requested (i.e. don't package it as part of the standard forcing data). And I agree, don't change the ftp until the code and script changes are ready. |
@eclare108213 Great, thanks for clarifying. I agree it makes sense to support just one in the master version of the code and maybe, somewhere at the bottom of the wiki, link to a COREII tarball that includes just that data and nothing else. I just looked - the CORE data are 27G (uncompressed) compared to 47G for JRA55, so it'll be nice to get these out of the "all" tarball, but it's not too crazy big. Have you assigned anyone to the removal through the code and tests? |
At the moment, it's assigned to @duvivier and @rallard77 :) |
@eclare108213 What's the status on this? I know we've updated the physical forcing files and released most of those, but we still need to change the defaults in the code to use JRA55 instead of COREII, is that right? Should we add COREII forcing to our deprecated features list with a date TBD? |
The forcing is already scheduled on the deprecated code wiki: |
Well, I feel like an idiot now. Ignore previous comment. |
No problem! We're both working on a scheduled holiday... that's the idiocy. |
An agenda and cursory notes from this morning's meeting to discuss the next steps in the JRA-55 transition have been posted.

One thing we didn't discuss is what configuration to use for the spinup and "standard" test runs for which we will provide sample output. The current JRA-55 plots used default gx1 namelist settings. Does anyone out there have a preferred CICE parameter/parameterization configuration for JRA-55 forcing, or should we stick with the defaults? Since the Consortium is providing this data for testing purposes only, I'm okay with using the defaults, although I'd like to at least review them to make sure they are physically sensible before doing the spinup. I guess we'd know after the spinup if they are way out of kilter.

Attendees: @eclare108213 @dabail10 @rallard77 @daveh150

Related issues: |
I personally think we should just use the default namelist configuration. |
@dabail10 @rallard77 @daveh150 @apcraig @rgrumbine Per yesterday's meeting notes:
We will combine efforts to convert to JRA forcing, incorporate NOAA metrics, and improve our testing coverage as discussed in last month's meeting. Action: Elizabeth will schedule a series of meetings to coordinate this effort.

I propose that we use the Thursday 2 pm MT time slot for the next month (not Thanksgiving) to work through some aspects of this effort. Would that work for all of you? I'm hoping that we could do this in a few joint meetings with additional 'homework' in between. Here's what I'm thinking - open to other ideas - and maybe some of this is already done. What's missing?

homework prior to Nov 19:
* run standard tests for current and new forcing data on as many platforms as possible, and create timeseries plots (each person in the meeting should be able to look at their own tests)
* review default namelist configuration

Nov 19
* compare basic output from JRA-55 and bulk forcing for the standard tests
* review code coverage to see what major things are missing
* decide whether to change any namelist values for 'production' runs
* discuss Bob's metrics and decide which ones we can use to compare the gx1 production runs

homework Nov 20-Dec 3
* add tests or issues for specific code coverage needs
* test new scripts for extending JRA forcing years, create extended dataset
* run equivalent gx1 production cases for JRA and CORE forcing (how many years? who will be responsible for this?)

Dec 3
* review gx1 results
* discuss how to implement/apply Bob's metrics

homework Dec 4-10
* implement/apply Bob's metrics as discussed
* create a wiki page comparing the gx1 production runs

Dec 10
* share results with the full team
* decide when to deprecate old forcing

homework Dec 10-17+
* update CICE sample output https://github.com/CICE-Consortium/CICE/wiki/CICE-6.0.2-Sample-Output
* release code (or this can wait for a release next year)

December 17
* meet again if needed
|
Elizabeth,
This sounds reasonable. I should be able to make the Nov 19 meeting.
Rick
|
I can make the Nov 19 meeting as well. |
I was on vacation, but will be catching up. The Dec 4-10 homework should be good. Look for emails. |
It looks like use_leap_years is true for the JRA55 forcing configurations. I don't have a lot of confidence in that option, but maybe others do. The new time manager will carefully test leap and no leap capabilities but that's not ready yet. We should make sure the calendar, forcing, and model are behaving properly over time and especially across leap days when we do longer runs using the JRA55 forcing. Maybe that has already been done and checked, but I just want to call it out if it hasn't. |
#343 is partly why I don't have much confidence in the leap calendar. Two things have to work. First, the CICE time manager has to handle the leap calendar properly: changing numbers of days per year, timestep counts, restart dates, and all the rest. Second, reading the forcing has to work. Does the CICE calendar have to match the forcing calendar? Can the CICE calendar be set differently from the forcing calendar? I have many open questions, but I was not involved in debugging the leap year problems in #343, which is why I'm unclear on the exact status. It's not clear everything is sorted out in #343, but maybe it is. |
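One quick consistency check between the calendar and the forcing is simply counting records: for 3-hourly JRA-55 data (8 records/day), a leap year holds one extra day's worth. The sketch below is illustrative only; the function names are hypothetical, not actual CICE routines.

```python
# Hypothetical sanity check for 3-hourly annual forcing files:
# the record count must match the calendar the model is using.

def is_leap(year: int) -> bool:
    """Gregorian leap-year rule."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

def expected_records(year: int, use_leap_years: bool, per_day: int = 8) -> int:
    """Records a 3-hourly annual forcing file should contain."""
    days = 366 if (use_leap_years and is_leap(year)) else 365
    return days * per_day

# A leap-calendar model reading a no-leap forcing file (or vice versa)
# would be off by one day's worth of records in leap years:
print(expected_records(2008, use_leap_years=True))   # 2928
print(expected_records(2008, use_leap_years=False))  # 2920
```

A mismatch like this across a leap day is exactly the kind of drift a longer JRA55 run would expose.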
No, everything is not sorted out - I debugged a little bit and came to the conclusion that it was better if this issue was investigated thoroughly when the time manager is overhauled. |
From #533, To-do:
Notes from global/ (these are hemispheric values produced by print_global=T): We should turn on the brine tracer for these tests. When it's off, maybe we shouldn't print this diagnostic.
Notes from point/ (these are values for particular lat/lon points produced by print_point=T):
The following issue may be more about the physical representation than diagnostic calculations. From the point plots:
|
See also #533 |
I'm struggling to download the gx1 and tx1 forcing - the files are too big. Since most of our testing uses 1 year or less of the data, maybe we should break up the big JRA forcing files into 1-year chunks? The other 4 years could also be put on zenodo (the full 5-year files would remain there for people who can get them), and we could also offer the scripts that make the data files, once they are ready, in case downloading them is a problem. Thoughts? |
When I was downloading the files, they were already in 1-year chunks, which was manageable for me. I do favor making things accessible to people with less bandwidth or computing power. Annual files aren't that much smaller than the 5-year files, so I'll suggest a larger number of monthly files instead. In my experience distributing data for marine branch products, monthly works better for people than quarterly. |
Where are you downloading from Elizabeth? It does look like we have all five years in a single tar file for the CORE forcing. What is strange is that the JRA file (18GB) is much smaller than the CORE file (38GB). I think the JRA is only a single year. I'll do some digging. |
I was downloading from zenodo, using the links on the github input data pages. I wasn't downloading the CORE forcing, since I already have that. I was able to get the gx3 JRA data onto my laptop (wireless), and all of it onto badger at LANL (using an actual internet cable), so I think I'm good to go for now. But it was frustrating - the big JRA downloads kept hanging. I'm wondering if we need to make it a little more user-friendly, but if users aren't complaining, maybe we don't need to worry about it. Internet here is notoriously bad. I think the JRA data is 5 years per file. |
I believe we were trying to strike a balance between having files that were too big vs. having to download a bunch of files and possibly losing track of what you'd downloaded. |
Agreed. There was also the issue of Travis downloading the forcing. |
On Zenodo it looks like the gx3 data is 3.9GB, and the gx1 data is 39.8 GB. The gx1 in particular seems very large for one file. gx3 I would consider large for downloading to my work laptop, but I don’t do any coding on the laptop.
We could break the files up into one-year segments. For gx1 that is about 7.5 GB per year, so 5 files, one per year, seems reasonable. Monthly files would result in 60 files (12/year for 5 years). That seems like too much.
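The annual split above amounts to mapping each year to a contiguous range of record indices in the 5-year file; those ranges could then drive a hyperslab extraction (e.g. NCO's `ncks -d time,i0,i1`) on the real netCDF forcing. A minimal sketch, assuming 3-hourly records and an illustrative 2005 start date:

```python
# Sketch: map each year in a multi-year 3-hourly record axis to its
# (first, last) record indices, inclusive. Dates and spacing are
# illustrative, not taken from the actual forcing files.
from datetime import datetime, timedelta

def annual_chunks(start: datetime, n_records: int, step_hours: int = 3):
    """Group record indices by calendar year."""
    chunks = {}
    for i in range(n_records):
        year = (start + timedelta(hours=i * step_hours)).year
        first, _ = chunks.get(year, (i, i))
        chunks[year] = (first, i)
    return chunks

# Five years of 3-hourly records starting 2005-01-01 00:00. datetime
# follows the real Gregorian calendar, so 2008 gets a leap day:
chunks = annual_chunks(datetime(2005, 1, 1), n_records=8 * (4 * 365 + 366))
for year, (i0, i1) in sorted(chunks.items()):
    print(year, i0, i1, i1 - i0 + 1)
```

Each printed range could become one `ncks` call producing a single-year file.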
|
Travis (moving to GitHub workflows) can be quickly changed to download any number of files. It's hardwired into the script, and in the end it doesn't matter much whether it's 1 file or 100. For the CI testing, the main thing that might be useful would be to create a reduced dataset, if possible, to support it. But for now, the CI testing just downloads some gx3 datasets (there is no gx1 testing in CI), so it's working pretty well. |
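Since the CI script just enumerates files to fetch, switching from one 5-year tarball to per-year files is only a change to the list it builds. A hedged sketch; the host, directory layout, and filename pattern below are placeholders, not the real Zenodo/input-data paths:

```python
# Hypothetical enumeration of per-year forcing files for a CI download
# step. BASE and the naming pattern are made up for illustration.
BASE = "https://example.org/CICE_data/forcing"  # placeholder host

def forcing_urls(grid: str, years: range, tag: str = "JRA55") -> list:
    """One URL per annual forcing file for the given grid."""
    return [f"{BASE}/{grid}/{tag}_{grid}_{y}.tar.gz" for y in years]

urls = forcing_urls("gx3", range(2005, 2010))
for u in urls:
    print(u)  # a CI job would fetch each with curl/wget
```

Whether that's 1 URL or 100, the download loop itself is unchanged.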
I forgot that it was only gx3 in Travis testing. I will go ahead and add single year files. This will give users flexibility. |
I think many of the issues here have been addressed, including the questions about the calendar and JRA55 forcing; I believe those problems have been sorted out. There are still some outstanding science questions; see the checklist above. The main point is that we have generated gx3, gx1, and tx1 JRA55 forcing datasets and have now carried out longer runs. We're still fine-tuning those and looking at the science. Should we close this and start a new issue about the JRA55 results? |
I am going to close this. |
Prepare a new forcing data set for gx1, to replace the 'LYq' option that currently includes a mixture of COREII and other forcing with calculated radiation fields. See #237.