Is your feature request related to a problem? Please describe.
When sample weights are applied to a learner classifier, up-weighting one class is reflected in predicted probabilities that are higher than the rates observed in the unweighted data.
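A minimal illustration of the effect, using plain scikit-learn (the synthetic data, classifier choice, and 5x weight are just assumptions for demonstration):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Imbalanced binary data: roughly 10% positives.
X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=0)

# Up-weight the positive class 5x during fitting.
sample_weight = np.where(y == 1, 5.0, 1.0)
clf = RandomForestClassifier(random_state=0).fit(X, y, sample_weight=sample_weight)

# The mean predicted probability is pulled well above the observed rate.
print("mean predicted P(y=1): ", clf.predict_proba(X)[:, 1].mean())
print("observed positive rate:", y.mean())
```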
Describe the solution you'd like
There are two aspects to an ideal solution:
1. Whenever weights are used with a classifier and a simulation is run, a warning should be displayed noting this, and noting that uncalibrated probabilities may not align with the observed rates for the positive class.
2. Post-training calibration could be added as a toggled option on the ClassifierPipelineDF class, defaulting to true but able to be turned off if desired. This would help ensure that, even with weights applied during learning, the probabilities shown in the simulation outputs align reasonably well with those observed in the data (see the sketch after this list).
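A minimal sketch of what such a default-on calibration step could do, using plain scikit-learn's CalibratedClassifierCV rather than ClassifierPipelineDF itself; the three-way split, the isotonic method, and the use of `cv="prefit"` (calibrate an already-fitted classifier without refitting it) are assumptions about how this might be wired up, not the library's actual API:

```python
import numpy as np
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=4000, weights=[0.9, 0.1], random_state=0)

# Three-way split: train with weights, calibrate on unweighted held-out data,
# then evaluate on a separate test set.
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.5, random_state=0)
X_calib, X_test, y_calib, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

# Fit with the positive class up-weighted, which skews predict_proba.
base = RandomForestClassifier(random_state=0)
base.fit(X_train, y_train, sample_weight=np.where(y_train == 1, 5.0, 1.0))

# Post-hoc calibration on unweighted data realigns the probabilities with
# the observed rates.
calibrated = CalibratedClassifierCV(base, method="isotonic", cv="prefit")
calibrated.fit(X_calib, y_calib)

print("raw mean P(y=1):       ", base.predict_proba(X_test)[:, 1].mean())
print("calibrated mean P(y=1):", calibrated.predict_proba(X_test)[:, 1].mean())
print("observed positive rate:", y_test.mean())
```

If calibration were toggled on by default inside the pipeline, simulation outputs would be based on the calibrated probabilities unless the user explicitly opted out.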
Describe alternatives you've considered
None.
Additional context
None.