To assess the performance of the benchmark AR model, I generated forecasts for the validation datasets. The model with slope and temperature as covariates performed very poorly, even at the sites with lots of data.
The RMSEs for predicting DO mean are high across all sites, with an average of ~3.6.
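For reference, a minimal sketch of how the per-site RMSEs could be summarized from the validation predictions. The data frame layout and column names (`site_id`, `do_mean_obs`, `do_mean_pred`) are assumptions for illustration, not the pipeline's actual outputs.

```python
# Minimal sketch: per-site RMSE for DO-mean forecasts on the validation set.
# Column names ("site_id", "do_mean_obs", "do_mean_pred") are assumptions.
import numpy as np
import pandas as pd


def rmse(obs, pred):
    """Root mean squared error, ignoring missing observations."""
    resid = np.asarray(obs, dtype=float) - np.asarray(pred, dtype=float)
    return float(np.sqrt(np.nanmean(resid ** 2)))


def summarize_validation(preds: pd.DataFrame) -> pd.Series:
    """RMSE of predicted vs. observed mean DO for each validation site."""
    per_site = preds.groupby("site_id").apply(
        lambda g: rmse(g["do_mean_obs"], g["do_mean_pred"])
    )
    print(f"mean RMSE across sites: {per_site.mean():.2f}")
    return per_site
```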
I think the performance is worse than the single-site fits because the model has to estimate one set of parameters pooled across all of these sites, so it cannot tune them to any individual site, even when that site has lots of data.

Given how poorly this performs, I don't think it is worth trying to improve it with different covariates or further tuning.
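To make the pooled-vs-single-site contrast concrete, here is a hedged sketch: one coefficient set estimated from all sites stacked together versus a separate fit per site. The AR(1)-plus-covariates form and the column names (`site_id`, `do_mean`, `slope`, `temperature`) are illustrative assumptions, not the benchmark model's actual specification.

```python
# Sketch of pooled vs. per-site fits for an AR(1) model with slope and
# temperature covariates. Column names and model form are assumptions.
import numpy as np
import pandas as pd


def design_matrix(df: pd.DataFrame) -> tuple[np.ndarray, np.ndarray]:
    """Build [intercept, lagged DO, slope, temperature] predictors and DO target."""
    do_lag = df["do_mean"].shift(1)  # lag computed within one site's record
    X = np.column_stack([
        np.ones(len(df)),
        do_lag,
        df["slope"],
        df["temperature"],
    ])
    y = df["do_mean"].to_numpy(dtype=float)
    keep = ~np.isnan(X).any(axis=1) & ~np.isnan(y)
    return X[keep], y[keep]


def fit_coeffs(df: pd.DataFrame) -> np.ndarray:
    """Least-squares coefficients for one site's data."""
    X, y = design_matrix(df)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta


def compare_pooled_vs_site(all_sites: pd.DataFrame) -> pd.DataFrame:
    """Pooled fit stacks every site's rows into a single regression;
    per-site fits estimate separate coefficients from each site alone."""
    parts = [design_matrix(grp) for _, grp in all_sites.groupby("site_id")]
    X_all = np.vstack([X for X, _ in parts])
    y_all = np.concatenate([y for _, y in parts])
    beta_pooled, *_ = np.linalg.lstsq(X_all, y_all, rcond=None)

    rows = {"pooled": beta_pooled}
    for site, grp in all_sites.groupby("site_id"):
        rows[site] = fit_coeffs(grp)
    names = ["intercept", "do_lag1", "slope", "temperature"]
    return pd.DataFrame(rows, index=names).T
```

The pooled row has to compromise across all sites, which is the effect described above: even a data-rich site gets coefficients that are not optimized for it.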