Comparing different models using the same NDBC buoys #2381
-
Hello, I am trying to compare different models using the same set of NDBC buoys, but when I plot the results they have different values of OBAR. In each model's Point-Stat config file I use the same MASK_LLPNT lat/lon thresholds, the same sid_inc buoy list, and the same duplicate-flag setting. Am I missing something? I want to make sure that both sets of statistics are computed using the same set of NDBC buoy observations. What else can I specify to make sure they only use the same set of buoy obs in the "ONA" area? Thanks,
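For readers landing here later, the settings described above would look roughly like the following in a METplus PointStat wrapper config. This is a minimal sketch reconstructed from the options named later in this thread; the buoy IDs and the "ONA" lat/lon bounds are made-up placeholders, not the values actually used:

```
# Hypothetical sketch of the settings discussed in this thread.
# The station IDs and lat/lon thresholds below are placeholders.

# Lat/lon point-masking region named "ONA"
POINT_STAT_MASK_LLPNT = {name = "ONA"; lat_thresh = >=30&&<=45; lon_thresh = >=-80&&<=-55;}

# Restrict obs to a fixed buoy list and de-duplicate, passed straight
# through to the MET config since there are no dedicated wrapper options
POINT_STAT_MET_CONFIG_OVERRIDES = sid_inc = ["44005", "44008", "44011"]; duplicate_flag = UNIQUE;
```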
-
Hi Deanna,
Thank you for your question. With what you've provided, the observation dataset should be restricted to only those buoys falling between your provided lat and lon thresholds, plus the sid_inc list you provided via OVERRIDES. It's odd to hear that you're receiving a different OBAR value, though.
Two pieces of information may help here: first, how many matched pairs are you ending up with in each run? I'm wondering whether one of the models you're comparing the NDBC data to has a different resolution, resulting in more matched pairs and thus a different OBAR.
Second, how large a difference are you seeing in your values? If it's a change of one or two in the last digit, we might be looking at a rounding difference. Any more than that and it's definitely a change in the calculation.
A note on the configuration options you listed: POINT_STAT_DUPLICATE_FLAG does not exist as a PointStat option; it will default to "None" from the PointStat default config file in the METplus wrappers, and you'll need to override it as shown below.
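The override goes through POINT_STAT_MET_CONFIG_OVERRIDES, the wrappers' pass-through for raw MET config entries:

```
POINT_STAT_MET_CONFIG_OVERRIDES = duplicate_flag = UNIQUE;
```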
-
Hello John,
Thank you for replying so quickly. The two models do have very different resolutions (a global 25 km versus a regional "ONA" 5 km model), but the difference in OBAR is sometimes more than 1 meter when comparing wave heights - it's too big to be a rounding difference.
I looked at the number of MPRs for each model, and below is an example of what I see (it changes with matched hour, forecast lead time, etc.). All of these runs use the same list of buoys in the sid_inc override.
Significant wave height MPRs:

| Verification mask | 5 km model ("ONA" domain) | 25 km model (global domain) |
|---|---|---|
| FULL | 87 | 91 |
| ONA | 87 | 63 |

Peak period MPRs:

| Verification mask | 5 km model | 25 km model |
|---|---|---|
| FULL | 65 | 69 |
| ONA | 65 | 49 |
I thought that model resolution would not matter, since each model is interpolated to the buoy locations, so just one model value would be compared to each buoy. Is this correct? If not, how would I get that result?
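To make the interpolation question concrete: in the METplus PointStat wrapper, the interpolation applied at each observation location is controlled by settings like the sketch below. These two lines are an assumption matching the bilinear setup described in the next reply, not a copy of the actual config:

```
# Assumed interpolation settings (bilinear over the 4 surrounding
# gridpoints), consistent with the next reply in this thread
POINT_STAT_INTERP_TYPE_METHOD = BILIN
POINT_STAT_INTERP_TYPE_WIDTH = 2
```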
Thank you for the duplicate_flag correction; I'll fix that now.
Best,
Deanna
-
Thank you for the extra info, Deanna; your numbers confirmed my suspicions about why you were seeing different OBAR values. Take a look at the Appendix C definition of OBAR in the MET User's Guide. You can see that the two components of the measure are the observation values, o_i, and the number of matched pairs, n (not to be confused with the total number of observations). As a result, OBAR will change as the number of matched pairs changes, so your changing OBAR values are a reflection of your changing MPR counts.
However, while we now know why OBAR changes, the bigger issue still exists of why you're seeing changing MPRs. I agree with your assessment: setting the included SID list should stop any other observation points from being used. And a bilinear interpolation uses the 4 closest gridpoints for each of the point values, so you should have a set number of observations that you should always be matching to. You're also using MASK_LLPNT, which effectively does more thresholding of the observations.
Can you please send along your latest log file, along with a copy of the configuration file you used? I'll try to set something up with some data that mimics your setup and see if I can replicate the behavior. The log file will also help show me whether any of the settings you're using aren't being translated to MET correctly.
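For reference, the definition mentioned above is just the mean of the observation values over the matched pairs:

$$\mathrm{OBAR} = \bar{o} = \frac{1}{n}\sum_{i=1}^{n} o_i$$

so two runs that retain different numbers of matched pairs will in general report different OBAR values, even over the same buoy network.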
-
@DeannaSpindler-NOAA I wanted to check in to see if this problem has been resolved, as it has been some time since you opened this Discussion. If it has been resolved, please select a reply as the answer and I'll go ahead and mark the issue as resolved.
-
Hello John,
Yes, you answered my question. I tried to view the issue on GitHub to mark it, but I am not able to see any of the DTC/MET pages right now. I'll try to come back to it, but if you want to mark the issue as resolved, that is fine with me.
Thanks,
Deanna
I wanted to migrate some of the discussion we've had over email to this Discussion:
Thank you for supplying this data. I was able to take a quick look through the Hera directories and should have what I need to see what's happening. I'll try to get back to you tomorrow on my findings.
I may end up finding the solution through log files, though; it seems like all 95 of the buoy locations make it through the initial read-in process, and then, through mismatches of levels between the obs and fcst, masking regions, etc., the remaining matched pairs end up different.
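If anyone wants to dig through logs the same way, one knob worth knowing about (a sketch, assuming the runs go through the METplus wrappers) is the MET log verbosity, which makes the point_stat log record much more detail about which observations are discarded and why:

```
# Raise the verbosity of the underlying MET tools; equivalent to passing
# "-v 3" when running point_stat directly on the command line
LOG_MET_VERBOSITY = 3
```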
Are you expecting certain MPRs across model comparisons to be the same? For example, in significant wave height verification ar…