Constrained optimization #10
Comments
The first point is indeed an issue; that is currently wrong. I disagree that we should automatically detect when no feasible points exist and switch to PoF only; I'd leave that out of the library. It's possible to first run a BayesianOptimizer with PoF, then run one with gamma. It would be interesting, though, to support some sort of stopping criterion; that would be helpful in this scenario. I deliberately included only SciPy optimization instead of the TF optimizers. I don't see a specific reason to support penalty functions, although I am open to being convinced otherwise. For white-box equality constraints, the constraints can be avoided by transforming the domain first (a minimal sketch of that idea follows below). PoF works quite badly with equality constraints, so better options are welcome there.
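To illustrate the domain transform for white-box *linear* equality constraints, here is a minimal numpy/scipy sketch (an illustration only, not GPflowOpt API): eliminate A x = b by optimizing over the null space of A.

```python
import numpy as np
from scipy.optimize import minimize

# Eliminate the white-box equality constraint A x = b by reparameterizing:
# x = x_part + N z, with x_part a particular solution and N a null-space
# basis of A. The optimizer works in the unconstrained, lower-dim z-space.
A = np.array([[1.0, 1.0, 1.0]])       # example constraint: x1 + x2 + x3 = 1
b = np.array([1.0])

x_part = np.linalg.pinv(A) @ b        # particular solution of A x = b
_, _, Vt = np.linalg.svd(A)
N = Vt[A.shape[0]:].T                 # columns span the null space of A

def objective(x):                     # stand-in for the expensive objective
    return np.sum((x - 0.3) ** 2)

res = minimize(lambda z: objective(x_part + N @ z), np.zeros(N.shape[1]))
x_opt = x_part + N @ res.x            # satisfies A x_opt = b exactly
```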
It might be a dumb question, but how are TensorFlow's gradient tensors passed to SciPy for optimization purposes?
See https://github.com/GPflow/GPflow/blob/master/GPflow/model.py: in `_optimize_np` the objective function is used in `result = minimize(...)`; with `jac=True` the Jacobian is assumed to be returned together with the objective value.
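For reference, this is plain SciPy behaviour: with `jac=True`, `minimize` expects the callable to return the objective value and its gradient together, so one compiled TensorFlow evaluation can serve both. A standalone toy example:

```python
import numpy as np
from scipy.optimize import minimize

def objective_with_grad(x):
    # In GPflow this pair comes from one TensorFlow evaluation of the
    # likelihood and its gradient; a simple quadratic stands in here.
    f = np.sum(x ** 2)
    g = 2.0 * x
    return f, g

# jac=True: the gradient is bundled with the objective value.
res = minimize(objective_with_grad, np.array([3.0, -1.5]),
               jac=True, method='L-BFGS-B')
print(res.x)   # -> close to [0, 0]
```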
@nknudde Thank you, I will take a more careful look into GPflow.
Ah, I see. It's because `evaluate()` is decorated with AutoFlow, so it's automatically compiled.
Not sure if you were referring to the SciPy optimization of the model likelihood (as described by @nknudde) or the optimization of the acquisition function, which applies a similar principle. The actual construction of the gradient of the acquisition function occurs in […]. Thank you for pointing this out, it's something I should mention in the documentation.
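The same principle written out in bare TensorFlow 1.x graph mode (a toy illustration only, not the actual GPflowOpt code): the gradient tensor is constructed once with `tf.gradients`, and each SciPy iteration evaluates value and gradient in a single session call.

```python
import numpy as np
import tensorflow as tf            # TensorFlow 1.x, graph mode
from scipy.optimize import minimize

x = tf.placeholder(tf.float64, shape=[2])
acq = -tf.reduce_sum(tf.square(x - 1.0))     # toy acquisition function
grad = tf.gradients(acq, x)[0]               # symbolic gradient, built once

sess = tf.Session()

def neg_acq_with_grad(x_np):
    # SciPy minimizes, so negate; one session.run yields value + gradient.
    a, g = sess.run([acq, grad], feed_dict={x: x_np})
    return -a, -g

res = minimize(neg_acq_with_grad, np.zeros(2), jac=True, method='L-BFGS-B')
```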
Makes sense. I think it should be well documented that in `BayesianOptimizer` the acquisition function (and underlying GPflow model) is updated with new data, so users know they can reuse their own reference to the acquisition function in a subsequent sampling stage (possibly with a new hybrid acquisition function).
Agreed, let's just rely on the SciPy optimizers.
… be aware of which points are feasible or not.
Would you mind providing a small example of inequality constraints as they currently stand? My domain is invariant to position, e.g. [1, 2, 3] and [2, 3, 1] are the same solution. Is there a better way to model this in BO than a > b and b > c?
The problem is that, unlike GP hyperparameters, real-world input points are highly irregular in their settings. Examples are: […]
I think it would be better to separate the package into a lower level and a higher level, so that advanced users can bypass `Domain` (in the higher level) and specify constraints and optimization methods explicitly (in the lower level). I've created a gist to illustrate my point: […]. In particular, the high-level functionality can still be used even if the low-level functionality is extended. This is somewhat similar to TensorFlow's design.
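As a toy illustration of that layering (all names hypothetical, invented for this comment, not an actual API): the high-level entry point merely assembles arguments for the low-level one, so either can be called directly.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical low level: constraints and method are passed explicitly.
def optimize_low_level(acquisition, x0, constraints=(), method='SLSQP'):
    return minimize(acquisition, x0, method=method,
                    constraints=list(constraints))

# Hypothetical high level: a Domain-like dict supplies those pieces.
def optimize_high_level(acquisition, domain):
    cons = [{'type': 'ineq', 'fun': g} for g in domain.get('constraints', [])]
    x0 = np.asarray(domain['lower'], dtype=float)
    return optimize_low_level(acquisition, x0, cons)

# Advanced users call optimize_low_level directly; others go via the domain.
domain = {'lower': [0.0, 0.0], 'constraints': [lambda x: 1.0 - x[0] - x[1]]}
res = optimize_high_level(lambda x: np.sum((x - 1.0) ** 2), domain)
```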
Hi @mccajm, thank you for your interest in GPflowOpt. With regard to constraints, there are two distinct use cases: […]
Regarding your position-invariant domain: your solution would definitely be valid. Although no simple way exists yet, you can probably already achieve this by implementing your own optimizer (i.e., based on the SciPyOptimizer), which you then use to optimize the acquisition function in BayesianOptimizer. An alternative solution: I guess your domain is symmetric, so you can probably also come up with a transform to a domain of lower dimensionality which covers all your function values (see figure 1 in https://arxiv.org/pdf/1310.6740.pdf, which is related to your problem, but in a black-box setting). You can then optimize on this reduced domain; a minimal sketch of that transform follows below. Some thoughts on the implementation of white-box constraints in GPflowOpt: […]
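The sorting transform mentioned above, as a minimal numpy sketch (hypothetical helper name, not GPflowOpt API): map every candidate to its sorted representative, so the model and optimizer effectively work on the reduced region x1 <= x2 <= x3.

```python
import numpy as np

def to_canonical(X):
    # [1, 2, 3] and [2, 3, 1] denote the same solution, so sort each row
    # to pick one representative per permutation orbit.
    return np.sort(np.atleast_2d(X), axis=1)

def objective(X):
    X = to_canonical(X)     # the surrogate only ever sees sorted points
    return np.sum(X ** 2, axis=1, keepdims=True)
```

The sorted points satisfy a <= b <= c by construction, so the ordering constraints from your question hold automatically instead of being imposed on the optimizer.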
For expensive constraints we can take a look at Gelbart's dissertation, section 2.6: it summarizes a number of constrained acquisition functions proposed over the years. At the moment we have only implemented the Probability of Feasibility.
Recall that we do not intend to support a wide range of methods; I'd only include others if there is a specific reason for it.
@yangnw The current plan is to store the (cheap) constraints in the Domain object. This can include coefficient matrices (for white-box linear constraints) or function handles (for cheap black-box constraints). The Domain object is passed to an Optimizer, so the constraints are available there. I think this is still low-level enough. Of course, for really specific constraints (e.g., symmetry) a custom Optimizer class must be implemented.
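A rough sketch of that plan (hypothetical class and attribute names, not the final GPflowOpt interface): the Domain object carries cheap constraint handles, and an Optimizer queries them.

```python
import numpy as np

class ConstrainedDomain:
    """Hypothetical: box bounds plus cheap constraint function handles."""
    def __init__(self, lower, upper, constraints=()):
        self.lower, self.upper = np.asarray(lower), np.asarray(upper)
        # each constraint is a callable g(x), feasible when g(x) >= 0
        self.constraints = list(constraints)

    def is_feasible(self, x):
        return all(g(x) >= 0 for g in self.constraints)

# a cheap white-box linear constraint x1 + x2 <= 1 as a function handle:
domain = ConstrainedDomain([0, 0], [1, 1],
                           constraints=[lambda x: 1.0 - x[0] - x[1]])
```

An Optimizer receiving such an object could translate `constraints` directly into SciPy constraint dictionaries.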
This issue keeps track of the support for constraints in GPflowOpt.
Support for expensive constraints will initially be added using the Probability of Feasibility (PoF), with the acquisition function defined as gamma(x) = EI(x) * PoF(x).
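For concreteness, a numpy/scipy sketch of that acquisition (my own writing of the standard formulas, not the GPflowOpt implementation), assuming GP posterior means and variances for the objective f and a constraint c, with feasibility defined as c(x) <= 0:

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu_f, var_f, f_best):
    # Standard EI for minimization; f_best should be the best *feasible*
    # observation so far.
    s = np.sqrt(var_f)
    z = (f_best - mu_f) / s
    return (f_best - mu_f) * norm.cdf(z) + s * norm.pdf(z)

def probability_of_feasibility(mu_c, var_c):
    # PoF(x) = P(c(x) <= 0) under the constraint model's Gaussian posterior.
    return norm.cdf(-mu_c / np.sqrt(var_c))

def gamma(mu_f, var_f, f_best, mu_c, var_c):
    # gamma(x) = EI(x) * PoF(x)
    return expected_improvement(mu_f, var_f, f_best) * \
           probability_of_feasibility(mu_c, var_c)
```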
Paying attention to: […]
Other constraints, which can be cheaply evaluated, are passed through to the optimizer, although I'm not sure about the support (and performance) of constraints in SciPy. I believe the TF optimizers do not directly support constraints; of course, adding a penalty to the objective function (or loss) is always possible.
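On SciPy's side, constraint support does exist: SLSQP handles both `'ineq'` and `'eq'` constraints (COBYLA only `'ineq'`), passed as dictionaries. A minimal example:

```python
import numpy as np
from scipy.optimize import minimize

# Cheap inequality constraint passed straight to the optimizer:
# feasible iff x1 + x2 <= 1, expressed as fun(x) >= 0.
cons = [{'type': 'ineq', 'fun': lambda x: 1.0 - x[0] - x[1]}]

res = minimize(lambda x: np.sum((x - 1.0) ** 2),  # acquisition stand-in
               np.array([0.0, 0.0]),
               method='SLSQP',                    # supports 'ineq' and 'eq'
               bounds=[(0, 1), (0, 1)],
               constraints=cons)
print(res.x)   # -> roughly [0.5, 0.5], on the constraint boundary
```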
@javdrher should we make a distinction between equality and inequality constraints? Equality constraints might not work well with PoF, if at all. At first sight, expected violation is an option, but I'll have to check the literature again for the standard approaches. It is not needed for version 0.1.0.
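For reference, the expected violation of an equality constraint c(x) = 0 has a closed form under a Gaussian posterior, EV(x) = E|c(x)|, the mean of a folded normal (my sketch from the textbook formula, not library code):

```python
import numpy as np
from scipy.stats import norm

def expected_violation(mu_c, var_c):
    # E|c(x)| for c(x) ~ N(mu_c, var_c): mean of a folded normal.
    s = np.sqrt(var_c)
    return (s * np.sqrt(2.0 / np.pi) * np.exp(-mu_c**2 / (2.0 * var_c))
            + mu_c * (1.0 - 2.0 * norm.cdf(-mu_c / s)))
```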