Hi,
Thanks for the great package!
I was trying to understand the default prior placed on the noise standard deviation (I suppose that is what the last few hyperparameters in the code represent?).
When I played around with the scale parameter of the implemented horseshoe prior, the distribution and the generated samples do not seem to match.
Here is some code to generate a couple of plots: you can see that while the histogram of the generated samples peaks between -3 and -2, the log probability of the distribution still peaks at 0.
import matplotlib.pyplot as plt
import numpy as np
from moe.optimal_learning.python.base_prior import HorseshoePrior

horseshoe = HorseshoePrior(0.1)  # horseshoe prior with scale 0.1
x = horseshoe.sample_from_prior(1000)  # 1000 samples (in log space, I believe)

# Histogram of the samples: it peaks between -3 and -2.
plt.hist(x, bins=20)
plt.show()

# Log probability evaluated at the sample locations: it peaks at 0 instead.
plt.plot(x, horseshoe.lnprob(x), '.')
plt.show()
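To see the mismatch more clearly, the log probability can also be evaluated on a regular grid instead of at the (unsorted) sample locations. This is just a quick sketch, assuming lnprob accepts an array of values in the same log space as the samples:

# Continuing from the snippet above.
grid = np.linspace(x.min(), x.max(), 500)  # regular grid over the sample range
plt.plot(grid, horseshoe.lnprob(grid))  # log probability along the grid
plt.axvline(np.median(x), linestyle='--')  # median of the samples, for comparison
plt.xlabel('log noise standard deviation')
plt.ylabel('lnprob')
plt.show()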
I am worried this may influence the behavior of the algorithm.
Any thoughts, or suggestions for an alternative prior for the noise variance?
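For concreteness, the kind of alternative I had in mind is something like a log-normal prior on the noise variance, keeping the same sample_from_prior/lnprob interface as the priors in base_prior. This is only a rough sketch; the class name and parameterization are made up and not part of the package:

import numpy as np
import scipy.stats as sps

class LognormalNoisePrior(object):
    """Hypothetical log-normal prior on the noise variance, stored in log space."""

    def __init__(self, sigma):
        self.sigma = sigma  # standard deviation of log(noise variance)

    def sample_from_prior(self, n_samples):
        # If the variance is log-normal, its log is N(0, sigma^2).
        return np.random.normal(loc=0.0, scale=self.sigma, size=(n_samples, 1))

    def lnprob(self, theta):
        # Log density of N(0, sigma^2) at the log-space value theta.
        return sps.norm.logpdf(theta, loc=0.0, scale=self.sigma)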