Constraints #2548
Comments
Hi @Fa20, the evaluation function should not need any knowledge of the constraints. That will be handled in our modeling layer. Its only job is to return the metric results for a given parameterization, regardless of whether that parameterization violates the parameter constraints (in theory it should never be passed such a parameterization) or the results violate the outcome constraints (which they may). I can help you out more with
@danielcohenlive Thanks. Could you please correct the above code? 1- I just want to try to handle these constraints correctly
If this is what you really want to do, one obvious thing to fix is first defining your x1, x2, ... variables:
and so on. |
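Concretely, the "define your variables first" advice might look like the following minimal sketch (the sample parameterization dict is a placeholder for what `ax_client.get_next_trial()` would return):

```python
# Hypothetical parameterization, standing in for the dict that
# ax_client.get_next_trial() would return.
parameterization = {"x1": 0.1, "x2": 0.2, "x3": 0.3, "x4": 0.4}

# Define the variables before using them in any constraint expressions.
x1 = parameterization.get("x1")
x2 = parameterization.get("x2")
x3 = parameterization.get("x3")
x4 = parameterization.get("x4")
```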
@Fa20 my advice on @bernardbeckerman's comment you referenced is that returning partial data will work for enforcing parameter constraints. It's a little more risky for outcome constraints because you're approximating what the modeling layer would do on modeled data, and this is raw data. |
Also, this isn't a fully supported feature of Ax, so use at your own discretion. It may not be supported in the future. |
@danielcohenlive but evaluating the objectives in my case is really expensive, since I need to run a simulation for each set of parameters, even those that don't satisfy the outcome constraints. Using the above will help by calculating the objective function only after checking the constraints. Is there another way to do this, rather than calculating the objectives for all the iterations?
@Fa20 there is no supported way to do more complicated outcome constraints than what is shown in the tutorial. You're welcome to use @bernardbeckerman's solution, but it won't have long term support. |
@danielcohenlive the problem in my case relates to evaluating the objective functions: each evaluation requires more than 5 hours, so filtering to keep only the parameterizations that satisfy the outcomes would save a lot of time. 2- What do you mean by "it won't have long-term support"? Does this mean it won't be usable in a new version? 3- Can this way of filtering lead to solutions that are not optimal, but close to optimal?
We don't have any plans to remove it, but we also don't have tests to ensure this continues to work. So I definitely wouldn't design a system around this mechanism. But if it helps you run a few experiments in the short term you can use it. I would actually say there is a more supported way of doing this. Rewrite the optimization loop to look like
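The code that accompanied this comment didn't survive in the thread; below is a sketch of the loop shape being described, with a stub in place of the real AxClient calls and a hypothetical parameter constraint, so the control flow (skip the expensive evaluation when the parameter constraints fail) is visible:

```python
def satisfies_param_constraints(params):
    # Hypothetical parameter constraint: x1 + x2 <= 2.0.
    return params["x1"] + params["x2"] <= 2.0

def evaluate(params):
    # Stands in for the expensive simulation.
    return {"objective": (params["x1"] ** 2, 0.0)}

completed, failed = [], []
# These candidates stand in for ax_client.get_next_trial() suggestions.
candidates = [{"x1": 0.5, "x2": 0.5}, {"x1": 1.5, "x2": 1.0}]
for trial_index, params in enumerate(candidates):
    if not satisfies_param_constraints(params):
        failed.append(trial_index)  # i.e. ax_client.log_trial_failure(trial_index=trial_index)
        continue  # never run the expensive evaluation for an infeasible point
    # i.e. ax_client.complete_trial(trial_index=trial_index, raw_data=evaluate(params))
    completed.append((trial_index, evaluate(params)))
```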
I would say that is probably what it will do. |
@danielcohenlive using the above code still requires evaluating the objective functions for each iteration, and only then does the if statement check the condition. This would evaluate the objective functions first, or did I understand wrongly?
This is how your code would fit into the function:
Note: there are some syntax errors in your code so it won't run, but I'll leave that to you to correct. Maybe it's intended as pseudocode.
I think there's some autocorrect happening here, but you could separate out the evaluation of parameters from outcomes if you wanted to save compute/time.
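Separating the cheap outcome from the expensive simulation could be sketched like this (the `l2norm` name and the 1.25 threshold mirror the outcome constraint discussed in this thread; the function bodies and sample values are illustrative):

```python
import math

def cheap_outcomes(params):
    # l2norm depends only on the parameters, so it can be computed
    # without running the simulation.
    l2 = math.sqrt(sum(v ** 2 for v in params.values()))
    return {"l2norm": (l2, 0.0)}

def expensive_outcomes(params):
    # Stands in for the multi-hour simulation.
    return {"objective": (sum(params.values()), 0.0)}

params = {f"x{i + 1}": 0.4 for i in range(6)}
outcome = cheap_outcomes(params)
if outcome["l2norm"][0] <= 1.25:
    # Only now pay for the expensive evaluation.
    outcome.update(expensive_outcomes(params))
```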
@danielcohenlive thank you so much. I will run the code with the above update, but regarding the evaluate function: what will it return, since l2norm and the other conditions are in satisfies_params?
@Fa20 l2norm should be an outcome and not in satisfies_params.
There's been lots of code in this issue :) I can't answer that because I don't know which code you're referring to. |
@danielcohenlive

```python
import math

import numpy as np

from ax.service.ax_client import AxClient, ObjectiveProperties
from ax.utils.measurement.synthetic_functions import hartmann6
from ax.utils.notebook.plotting import init_notebook_plotting, render
from ax.core.types import TEvaluationOutcome, TParameterization

init_notebook_plotting()

ax_client = AxClient()
ax_client.create_experiment(
    name="hartmann_test_experiment",
    parameters=[
        {
            "name": "x1",
            "type": "range",
            "bounds": [0.0, 1.0],
            "value_type": "float",  # Optional, defaults to inference from type of "bounds".
            "log_scale": False,  # Optional, defaults to False.
        },
        {"name": "x2", "type": "range", "bounds": [0.0, 1.0]},
        {"name": "x3", "type": "range", "bounds": [0.0, 1.0]},
        {"name": "x4", "type": "range", "bounds": [0.0, 1.0]},
        {"name": "x5", "type": "range", "bounds": [0.0, 1.0]},
        {"name": "x6", "type": "range", "bounds": [0.0, 1.0]},
    ],
    objectives={"hartmann6": ObjectiveProperties(minimize=True)},
    # parameter_constraints=["x1 + x2 <= 2.0"],  # Optional.
    # outcome_constraints=["l2norm <= 1.25"],  # Optional.
)


def evaluate(parameterization: TParameterization) -> TEvaluationOutcome:
    x = np.array([parameterization.get(f"x{i + 1}") for i in range(6)])
    # In our case, standard error is 0, since we are computing a synthetic function.
    return {"hartmann6": (hartmann6(x), 0.0), "l2norm": (np.sqrt((x**2).sum()), 0.0)}


def satisfies_param_constraints(parameterization: TParameterization) -> bool:
    x1 = parameterization.get("x1")
    x2 = parameterization.get("x2")
    x3 = parameterization.get("x3")
    x4 = parameterization.get("x4")
    condition1 = (x1 + x2 <= 2.0) and (x2 - 5 * x1 <= 0)
    l_b = np.sin(x1) * (0.5 / 12) * math.pi * x3
    u_b = np.sin(x2) * (0.8 / x4) * math.pi * x3
    check = l_b < x2 < u_b
    return condition1 and check


def satisfies_outcome_constraints(outcome: TEvaluationOutcome) -> bool:
    # Check the constraint on the already-computed l2norm metric instead of
    # referencing an undefined `parameterization` variable.
    l2norm = outcome["l2norm"][0]
    return l2norm <= 1.25


for i in range(25):
    parameterization, trial_index = ax_client.get_next_trial()
    if not satisfies_param_constraints(parameterization=parameterization):
        ax_client.log_trial_failure(trial_index=trial_index)
        continue  # skip the expensive evaluation for infeasible parameters
    outcome = evaluate(parameterization)
    if satisfies_outcome_constraints(outcome=outcome):
        # Local evaluation here can be replaced with deployment to external system.
        ax_client.complete_trial(trial_index=trial_index, raw_data=outcome)
    else:
        ax_client.log_trial_failure(trial_index=trial_index)
```
I think the most obvious point here that I can spot is the use of
Thanks for helping out @Abrikosoff! @Fa20 if it violates both param and outcome constraints that would be a problem. You could modify the loop to look like
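The modified loop referenced here didn't render in the thread; a sketch of the likely shape (AxClient calls shown as comments, constraints hypothetical) that fails the trial on either kind of violation:

```python
import math

def satisfies_param_constraints(params):
    return params["x1"] + params["x2"] <= 2.0  # hypothetical parameter constraint

def satisfies_outcome_constraints(outcome):
    return outcome["l2norm"][0] <= 1.25  # hypothetical outcome constraint

def evaluate(params):
    # Stands in for the expensive simulation; here it just returns l2norm.
    l2 = math.sqrt(sum(v ** 2 for v in params.values()))
    return {"l2norm": (l2, 0.0)}

results = {}
# These candidates stand in for ax_client.get_next_trial() suggestions.
for trial_index, params in enumerate([{"x1": 0.3, "x2": 0.4}, {"x1": 1.9, "x2": 0.9}]):
    if not satisfies_param_constraints(params):
        results[trial_index] = "failed"  # i.e. ax_client.log_trial_failure(...)
        continue  # don't evaluate a point that violates parameter constraints
    outcome = evaluate(params)
    if satisfies_outcome_constraints(outcome):
        results[trial_index] = "completed"  # i.e. ax_client.complete_trial(..., raw_data=outcome)
    else:
        results[trial_index] = "failed"  # outcome constraint violated
```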
In fact, without that
Hello Ax team,
is it correct to set the constraints as in the code below, and to update the evaluation function to take these conditions on the outcome and parameter constraints into account?