first commit for joinsteep=50 #623
Conversation
Perhaps I should revert to the previous big values (1000) and small values (10) to see which had the biggest impact.
The changes all look good, but I'm having second thoughts about the value of making all the join functions the same. I think there are at least 3 distinct cases that might benefit from separate values (a generic join is sketched after this list):
- join functions used for piecewise linear functions like the harvest control rules and the hockey stick stock-recruit function. Ideally the transitions would be steep so that the resulting patterns are as linear as possible and you don't get mismatches from the expectation as discussed in [Bug]: ForeCatch not quite equal to OFLCatch * buffer #485 and [Refactor]: make use of join functions more consistent #486. Changing the values should only impact models where the population crosses one of the joins in a way that impacts the likelihood. Thus, changes to the join in the HCR shouldn't impact estimation because it's only applied in the forecast.
- join functions used for selectivity, e.g. smooth transition from ascending limb to flat top of double normal. I don't see any clear benefit from having those joins be steeper as we would expect the selectivity curves to be pretty smooth. Furthermore, making a change here will likely have a small impact on the large majority of models which use double normal selectivity.
- join functions that relate to hitting limits like max F or the crash penalty. I don't know that it matters how steep these are. A join that is too steep may hinder estimation for models that hit such bounds, but the MLE values may not be affected as long as the estimation leaves the region of parameter space that caused the problem (though the different search path may lead to slightly different values).
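For readers less familiar with these functions, the block below is a minimal stand-alone sketch of a logistic join (the breakpoint, branch values, and function names are illustrative assumptions, not the actual Stock Synthesis source). It shows how the steepness value controls how closely the blended curve tracks the piecewise target just past the breakpoint.

```cpp
#include <cmath>
#include <cstdio>

// Illustrative logistic join: ~0 well below the breakpoint x0, ~1 well above.
double join(double x, double x0, double steep) {
    return 1.0 / (1.0 + std::exp(-steep * (x - x0)));
}

// Blend two branches (e.g. ascending limb vs. flat top of a selectivity curve).
double blend(double x, double x0, double steep, double low, double high) {
    double j = join(x, x0, steep);
    return (1.0 - j) * low + j * high;
}

int main() {
    // Just above the breakpoint (x0 = 0), a steeper join is closer to the
    // piecewise target, which here switches fully from 0.2 to 1.0 at x0.
    for (double steep : {10.0, 50.0, 1000.0}) {
        std::printf("steep = %6.0f  blend(x0 + 0.05) = %.4f\n",
                    steep, blend(0.05, 0.0, steep, 0.2, 1.0));
    }
    return 0;
}
```

With steep = 1000 the blend is essentially at the target (1.0) immediately past the breakpoint, while steep = 10 still returns roughly 0.7, which is the kind of corner-rounding the first bullet is concerned about.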
I manually ran the run-ss3-with-est GitHub action to see the impact of the joiner change on all models. (I'm not sure why this action isn't run every time; perhaps we turned it off to save time?) The files in the artifact from the action are in this zip: The one model that failed the test is "three_area_nomove", where ForeCatch_2010_se increased from 1.28173e+03 to 2.44708e+03. For future reference, the comparison is done by this R function https://github.com/nmfs-ost/ss3-test-models/blob/main/.github/r_scripts/compare.R: for most quantities a relative change of more than 1% fails the test, but there are a few exceptions, like max gradient, where an absolute change of more than 0.001 causes a failure.
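For future readers, a minimal sketch of that comparison rule (not the actual compare.R code; the function name here is made up) might look like:

```cpp
#include <cmath>

// Sketch of the rule described above: most quantities fail on a relative
// change greater than 1%; a few, such as the maximum gradient, fail on an
// absolute change greater than 0.001.
bool fails_comparison(double oldVal, double newVal, bool useAbsoluteTolerance) {
    double diff = std::fabs(newVal - oldVal);
    if (useAbsoluteTolerance) {
        return diff > 0.001;
    }
    return diff / std::fabs(oldVal) > 0.01;  // assumes oldVal != 0
}

int main() {
    // The failing case above: ForeCatch_2010_se went from 1.28173e+03 to
    // 2.44708e+03, a ~91% relative change, so the test fails.
    return fails_comparison(1.28173e3, 2.44708e3, false) ? 0 : 1;
}
```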
I agree with all you are saying and like your logic about the categories. I remain uncomfortable that so many values end up with small changes based on ill-informed differences (20 vs. 30 vs. 40). I changed the instances that were previously 1000 and 10 back to those values but still find differences. Now I am curious which one is making the biggest impact, but I don't want to go through the tedious exercise of changing them one at a time. Shelving this for now; I will do the fix for #622 separately.
Concisely describe what has been changed/addressed in the pull request.
Replace all of the individual join function steepness constants with a single constant, termed joinsteep, with a value of 50. The previous values ranged from 10 to 1000.
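As a hedged illustration of the kind of change this describes (the identifiers, the old literal values, and the function shown are made up for illustration, not quotes from the Stock Synthesis source):

```cpp
#include <cmath>

// After the change: one shared steepness constant used by every join.
const double joinsteep = 50.0;

double join(double x, double x0) {
    // Before the change, each call site carried its own literal, e.g.
    //   1.0 / (1.0 + std::exp(-10.0 * (x - x0)))   in one place and
    //   1.0 / (1.0 + std::exp(-1000.0 * (x - x0))) in another.
    return 1.0 / (1.0 + std::exp(-joinsteep * (x - x0)));
}

int main() { return join(1.0, 0.0) > 0.5 ? 0 : 1; }
```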
What tests have been done?
see comparisons in the issue
Where are the relevant files?
What tests/review still need to be done?
Is there an input change for users to Stock Synthesis?
Additional information (optional).