Guess / Not Guess optimization #144
papipopapu started this conversation in Ideas
Replies: 1 comment 1 reply
-
I don't know if this is even possible, so I marked this as an idea.
I am working on a binary classification model, and I was wondering if there is a way to change the modelling process so that the model can sometimes just "not guess". It would be great if the model could fit the ground truth perfectly, but since in my case that is not possible (there is a lot of noise), it would be much better if the model could distinguish the instances it will probably get wrong from those it is more likely to get right.
For the model to be optimized this way, I thought the way to go was to change the loss function so that it penalizes "not guessing" only to some degree. Maybe it is necessary to somehow turn this into a multiclass classification problem where "not guessing" is an option, but I don't know how I would go about that, since my ground truth labels are already purely binary.
I tried doing some research, but I don't even know what to search for, so I came here looking for your input, since you have proven to be so helpful to us newbies. Thank you!
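A minimal sketch of the loss modification described above, assuming a PyTorch model with three output logits where the third one means "not guessing" (the function name and the `abstain_credit` value are illustrative; the formulation follows the "Deep Gamblers" loss of Ziyin et al., NeurIPS 2019):

```python
import torch
import torch.nn.functional as F

def gamblers_loss(logits, targets, abstain_credit=0.5):
    """Cross-entropy variant with a 'not guessing' option (illustrative sketch).

    logits:  (N, 3) tensor; columns 0 and 1 are the binary classes,
             column 2 is the extra 'abstain' class.
    targets: (N,) tensor with the original 0/1 ground-truth labels.
    abstain_credit: in (0, 1); probability mass placed on 'abstain' counts
             as this fraction of a correct answer, so abstaining is cheaper
             than a confident wrong guess but dearer than a right one.
    """
    probs = F.softmax(logits, dim=1)
    p_true = probs[torch.arange(targets.shape[0]), targets]  # P(true class)
    p_abstain = probs[:, 2]                                  # P('not guessing')
    # Partial credit for abstaining; the clamp guards against log(0).
    return -torch.log((p_true + abstain_credit * p_abstain).clamp_min(1e-12)).mean()

# Toy usage: the second sample puts most of its mass on 'abstain',
# so it is penalized, but far less than an outright wrong prediction.
logits = torch.tensor([[3.0, 0.1, 0.2],    # confident, correct
                       [0.1, 0.2, 3.0]])   # abstains
targets = torch.tensor([0, 1])
print(gamblers_loss(logits, targets))
```

At inference time you would act on a prediction only when the abstain column does not win the argmax.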
-
Hey there! You might want to look into a few different things:
The second one is a big research topic, so I'd start by searching "out of distribution detection in machine learning" and go from there. Though perhaps I've interpreted your question incorrectly; if so, please let me know. PS: if you have an example of the ideal inputs and outputs of your model, that would be great.
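As a concrete post-hoc baseline for the "not guess" behaviour (separate from the training-time loss sketched above), one can apply Chow's classic reject rule to any already-trained probabilistic classifier: answer only when the predicted probability is confident enough. A minimal sketch; the function name and the 0.75 threshold are illustrative, and `proba_pos` can come from e.g. scikit-learn's `predict_proba(X)[:, 1]`:

```python
import numpy as np

def predict_with_rejection(proba_pos, threshold=0.75):
    """Chow-style reject rule on top of a trained binary classifier.

    proba_pos: array of P(class 1) for each instance.
    Returns 0/1 predictions, with -1 meaning 'not guessing'.
    """
    proba_pos = np.asarray(proba_pos, dtype=float)
    confidence = np.maximum(proba_pos, 1.0 - proba_pos)  # prob. of argmax class
    preds = (proba_pos >= 0.5).astype(int)
    preds[confidence < threshold] = -1  # abstain when the model is unsure
    return preds

# Only the first and last inputs are confident enough to answer.
print(predict_with_rejection([0.95, 0.55, 0.48, 0.10]))  # -> [ 1 -1 -1  0]
```

Sweeping the threshold traces out a risk-coverage trade-off, which is the standard way selective-prediction methods are compared.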