Implementation of BETA BCO Ranking systems #329
Spoke with Hadley, and we had some ideas for how to represent the scores in the data model. I've been implementing scores in the biomarker project for scoring "trustworthy" biomarkers, and a few things we've done there have made the scores easier to track:
```json
{
  "score": 3.4,
  "score_info": {
    "contributions": [
      { "c": "first_pmid", "w": 1, "f": 1 },
      { "c": "other_pmid", "w": 0.2, "f": 7 },
      { "c": "first_source", "w": 1, "f": 1 },
      { "c": "other_source", "w": 0.1, "f": 0 },
      { "c": "generic_condition_pen", "w": -4, "f": 0 },
      { "c": "loinc", "w": 1, "f": 0 }
    ],
    "formula": "sum(w*f)",
    "variables": {
      "w": "weight",
      "c": "condition",
      "f": "frequency"
    }
  }
}
```

This shows that the score was calculated as the sum of each weight times its frequency. For example, the first PMID associated with the biomarker carries a weight of 1, each additional PMID carries a weight of 0.2, and so on. So the calculation for this score was (1 × 1) + (0.2 × 7) + (1 × 1) + (0.1 × 0) + (-4 × 0) + (1 × 0) = 3.4.
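To make the formula concrete, here is a minimal Python sketch of evaluating `sum(w*f)` over the `contributions` list; `compute_score` is an illustrative name, not an existing `bco_api` function.

```python
def compute_score(score_info: dict) -> float:
    """Evaluate the declared sum(w*f) formula over the contributions list."""
    total = sum(item["w"] * item["f"] for item in score_info["contributions"])
    # Round to avoid floating-point noise (e.g. 0.2 * 7 == 1.4000000000000001).
    return round(total, 2)

score_info = {
    "contributions": [
        {"c": "first_pmid", "w": 1, "f": 1},
        {"c": "other_pmid", "w": 0.2, "f": 7},
        {"c": "first_source", "w": 1, "f": 1},
        {"c": "other_source", "w": 0.1, "f": 0},
        {"c": "generic_condition_pen", "w": -4, "f": 0},
        {"c": "loinc", "w": 1, "f": 0},
    ],
    "formula": "sum(w*f)",
    "variables": {"w": "weight", "c": "condition", "f": "frequency"},
}

print(compute_score(score_info))  # 3.4
```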
Write a FAQ on how the ranking system works, criteria, etc.?

FAQ created in #446
Implement the ideas from #328 into the BCO scoring function (`bco_api/biocompute/services.py`, lines 599 to 621 at 456d002).
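As a rough illustration of how the contribution-based model might be folded into such a scoring function, here is a hypothetical sketch; `score_bco`, `WEIGHTS`, and the `frequencies` argument are illustrative assumptions that mirror the example above, not the actual code in `services.py`.

```python
# Weights mirror the example contributions above; the real weights and
# condition names would come from the scoring function in services.py.
WEIGHTS = {
    "first_pmid": 1,
    "other_pmid": 0.2,
    "first_source": 1,
    "other_source": 0.1,
    "generic_condition_pen": -4,
    "loinc": 1,
}

def score_bco(frequencies: dict) -> dict:
    """Build a score plus its score_info breakdown from observed frequencies."""
    contributions = [
        {"c": cond, "w": w, "f": frequencies.get(cond, 0)}
        for cond, w in WEIGHTS.items()
    ]
    score = round(sum(item["w"] * item["f"] for item in contributions), 2)
    return {
        "score": score,
        "score_info": {
            "contributions": contributions,
            "formula": "sum(w*f)",
            "variables": {"w": "weight", "c": "condition", "f": "frequency"},
        },
    }

# Reproduces the example: one first PMID, seven additional PMIDs, one source.
print(score_bco({"first_pmid": 1, "other_pmid": 7, "first_source": 1})["score"])  # 3.4
```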