constrains #2548

Open
Fa20 opened this issue Jun 27, 2024 · 16 comments
Labels: question (Further information is requested)

Comments

Fa20 commented Jun 27, 2024

Hello Ax team,

Is it correct to set the constraints as in the code below, and to update the evaluation function so that it accounts for the outcome and parameter constraints?

ax_client.create_experiment(
    name="hartmann_test_experiment",
    parameters=[
        {
            "name": "x1",
            "type": "range",
            "bounds": [0.0, 1.0],
            "value_type": "float",  # Optional, defaults to inference from type of "bounds".
            "log_scale": False,  # Optional, defaults to False.
        },
        {
            "name": "x2",
            "type": "range",
            "bounds": [0.0, 1.0],
        },
        {
            "name": "x3",
            "type": "range",
            "bounds": [0.0, 1.0],
        },
        {
            "name": "x4",
            "type": "range",
            "bounds": [0.0, 1.0],
        },
        {
            "name": "x5",
            "type": "range",
            "bounds": [0.0, 1.0],
        },
        {
            "name": "x6",
            "type": "range",
            "bounds": [0.0, 1.0],
        },
    ],
    objectives={"hartmann6": ObjectiveProperties(minimize=True)},
    parameter_constraints=["x1 + x2 <= 2.0", "x2 - 5*x1 <= 0"],
    outcome_constraints=["l2norm <= 1.25", "l_check <= -0.0001", "upper_check <= -0.0001"],
)

import numpy as np


def evaluate(parameterization):
    x = np.array([parameterization.get(f"x{i+1}") for i in range(6)])
    
    condition1=(x1 + x2 <= 2.0)&(x2 -5*x1<= 0)
    l_bound = (
            np.sin(x1) 
            * (0.5 / 12)
            * math.pi
            * x3
        )
    u_b = (
            np.sin(x2)
            * (0.8 /x4)
            * math.pi
            * x3
        )
    
    check = (l_b < x2) & (
            x2 < u_b)
    l_check = l_b - x2
    upper_check = x2 - u_b
     
    l2norm=(np.sqrt((x**2).sum())            
                
    if condition1 and check and l2norm <= 1.25:
        return {"hartmann6": (hartmann6(x), 0.0), ("l2norm":l2norm, 0.0), ("check":check, 0.0)}
    else:
            return { ("l2norm":l2norm, 0.0)}
@danielcohenlive

Hi @Fa20, the evaluation function should not need any knowledge of the constraints; that is handled in our modeling layer. Its only job is to return the metric results for a given parameterization, regardless of whether that parameterization violates the parameter constraints (in theory it should never be passed such a parameterization) or the results violate the outcome constraints (which they may). I can help you out more with evaluate(), but I don't understand exactly what you're trying to do, and the code references variables before they're defined.
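
For reference, a minimal evaluate() in that spirit (a sketch based on the Hartmann6 tutorial code that appears later in this thread) computes and returns only the metrics:

import numpy as np
from ax.utils.measurement.synthetic_functions import hartmann6

def evaluate(parameterization):
    # Gather the six parameters; no constraint checks belong here.
    x = np.array([parameterization.get(f"x{i + 1}") for i in range(6)])
    # Standard error is 0.0 since these synthetic metrics are deterministic.
    return {"hartmann6": (hartmann6(x), 0.0), "l2norm": (np.sqrt((x**2).sum()), 0.0)}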

Fa20 commented Jun 28, 2024

@danielcohenlive Thanks. Could you please correct the above code?

1- I just want to handle these constraints correctly.
I did it like this example: #2460 (comment)

@Abrikosoff

@danielcohenlive Thanks. Could you please correct the above code?

1- I just want to handle these constraints correctly. I did it like this example: #2460 (comment)

If this is what you really want to do, one obvious thing to fix is to first define your x1, x2, ... variables:

x1 = parameterization.get("x1")
x2 = parameterization.get("x2")

and so on.
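
For all six parameters, a compact equivalent (a sketch, assuming the keys are x1 through x6) is:

x1, x2, x3, x4, x5, x6 = (parameterization.get(f"x{i}") for i in range(1, 7))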

@danielcohenlive

@Fa20 my advice on @bernardbeckerman's comment you referenced is that returning partial data will work for enforcing parameter constraints. It's a little riskier for outcome constraints, because you're approximating what the modeling layer would do on modeled data, and this is raw data.
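
A rough sketch of that partial-data pattern, following the shape of the evaluate() above (not a supported API; the metric names match the experiment definition in this thread): the expensive objective is simply omitted when the raw outcome already violates the constraint.

import numpy as np
from ax.utils.measurement.synthetic_functions import hartmann6

def evaluate(parameterization):
    x = np.array([parameterization.get(f"x{i + 1}") for i in range(6)])
    l2norm = np.sqrt((x**2).sum())
    if l2norm <= 1.25:
        # Feasible: report the objective along with the constraint metric.
        return {"hartmann6": (hartmann6(x), 0.0), "l2norm": (l2norm, 0.0)}
    # Infeasible: report only the violated outcome metric, skipping the
    # expensive objective evaluation.
    return {"l2norm": (l2norm, 0.0)}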

@danielcohenlive

Also, this isn't a fully supported feature of Ax, so use it at your own discretion. It may not be supported in the future.

Fa20 commented Jun 28, 2024

@danielcohenlive But evaluating the objectives in my case is really expensive, since I need to run a simulation for each set of parameters, even those that do not satisfy the outcome constraints. Using the above would help by calculating the objective function only after checking the constraints. Is there another way to do this, rather than calculating the objectives for all iterations?

@danielcohenlive

@Fa20 there is no supported way to do more complicated outcome constraints than what is shown in the tutorial. You're welcome to use @bernardbeckerman's solution, but it won't have long-term support.
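
For reference, the supported form from the tutorial is the string-based outcome_constraints argument passed at experiment creation, bounding a metric that evaluate() reports:

ax_client.create_experiment(
    ...,  # parameters and objectives as above
    outcome_constraints=["l2norm <= 1.25"],
)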

Fa20 commented Jun 28, 2024

@danielcohenlive The problem in my case relates to evaluating the objective functions: each evaluation takes more than 5 hours, so filtering to evaluate only the points that satisfy the outcomes would save a lot of time.

2- What do you mean it won't have long-term support? Does this mean it can no longer be used in a new version?

3- Can this way of filtering lead to solutions that are not optimal but close to optimal?

@danielcohenlive

2- What do you mean it won't have long-term support? Does this mean it can no longer be used in a new version?

We don't have any plans to remove it, but we also don't have tests to ensure it continues to work, so I definitely wouldn't design a system around this mechanism. But if it helps you run a few experiments in the short term, you can use it.

I would actually say there is a more supported way of doing this. Rewrite the optimization loop to look like this:

from ax.core.types import (
    TEvaluationOutcome,
    TParameterization,
)

def satisfies_additional_constraints(parameterization: TParameterization, outcome: TEvaluationOutcome) -> bool:
   # [your constraint logic here]
   ...

for i in range(25):
    parameterization, trial_index = ax_client.get_next_trial()
    outcome = evaluate(parameterization)
    if satisfies_additional_constraints(parameterization=parameterization, outcome=outcome):
        # Local evaluation here can be replaced with deployment to external system.
        ax_client.complete_trial(trial_index=trial_index, raw_data=outcome)  # reuse the computed outcome
    else:
        ax_client.log_trial_failure(trial_index=trial_index)

3- Can this way of filtering lead to solutions that are not optimal but close to optimal?

I would say that is probably what it will do.

Fa20 commented Jun 28, 2024

@danielcohenlive Using the above code still requires evaluating the objective functions for each iteration, and only then does the if statement check the condition. So the objectives would still need to be evaluated first, or have I understood it wrong?
2- Could you please show how this can be used with the code I provided above?
Thanks

@danielcohenlive

2- Could you please show how this can be used with the code I provided above?

This is how your code would fit into the function:

def satisfies_additional_constraints(parameterization: TParameterization, outcome: TEvaluationOutcome) -> bool:
    x = np.array([parameterization.get(f"x{i+1}") for i in range(6)])
    
    condition1=(x1 + x2 <= 2.0)&(x2 -5*x1<= 0)
    l_bound = (
            np.sin(x1) 
            * (0.5 / 12)
            * math.pi
            * x3
        )
    u_b = (
            np.sin(x2)
            * (0.8 /x4)
            * math.pi
            * x3
        )
    
    check = (l_b < x2) & (
            x2 < u_b)
    l_check = l_b - x2
    upper_check = x2 - u_b
    # l2norm may alternately come from `outcome["l2norm"]` here
    l2norm=(np.sqrt((x**2).sum())            
                
    return condition1 and check and l2norm <= 1.25

Note: there are some syntax errors in your code, so it won't run as-is, but I'll leave that to you to correct. Maybe it's intended as pseudocode.
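
For completeness, a corrected sketch of that function (assuming l_bound and l_b name the same lower bound, and that l2norm could alternately be read from outcome["l2norm"]):

import math
import numpy as np
from ax.core.types import TEvaluationOutcome, TParameterization

def satisfies_additional_constraints(
    parameterization: TParameterization, outcome: TEvaluationOutcome
) -> bool:
    x = np.array([parameterization.get(f"x{i + 1}") for i in range(6)])
    x1, x2, x3, x4 = x[:4]
    condition1 = (x1 + x2 <= 2.0) and (x2 - 5 * x1 <= 0)
    l_b = np.sin(x1) * (0.5 / 12) * math.pi * x3  # lower bound on x2
    u_b = np.sin(x2) * (0.8 / x4) * math.pi * x3  # upper bound on x2
    check = l_b < x2 < u_b
    l2norm = np.sqrt((x**2).sum())
    return condition1 and check and l2norm <= 1.25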

So the objectives would still need to be evaluated first, or have I understood it wrong?

You could separate out the evaluation of the parameters from the outcomes if you want to save compute/time:

def satisfies_param_constraints(parameterization: TParameterization) -> bool:
   # [your constraint logic here]
   ...


def satisfies_outcome_constraints(outcome: TEvaluationOutcome) -> bool:
   # [your outcome constraint logic here]
   ...

for i in range(25):
    parameterization, trial_index = ax_client.get_next_trial()
    if not satisfies_param_constraints(parameterization=parameterization):
        ax_client.log_trial_failure(trial_index=trial_index)
    outcome = evaluate(parameterization)
    if satisfies_outcome_constraints(outcome=outcome):
        # Local evaluation here can be replaced with deployment to external system.
        ax_client.complete_trial(trial_index=trial_index, raw_data=outcome)  # reuse the computed outcome
    else:
        ax_client.log_trial_failure(trial_index=trial_index)

Fa20 commented Jun 28, 2024

@danielcohenlive Thank you so much. I will run the code with the above update, but regarding the evaluate function: what should it return, since l2norm and the other conditions are in satisfies_param_constraints?
2- Will the parameter constraints stay the same as I defined them in the code I shared, and likewise the outcome constraints, when we create the experiment?

@danielcohenlive

@Fa20 l2norm should be an outcome, not part of satisfies_param_constraints.

2- Will the parameter constraints stay the same as I defined them in the code I shared, and likewise the outcome constraints, when we create the experiment?

There's been lots of code in this issue :) I can't answer that because I don't know which code you're referring to.
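
Concretely, since evaluate() reports each metric as a (mean, standard-error) tuple, the outcome check can read the reported l2norm instead of recomputing it from the parameters (a sketch assuming that raw-data format):

def satisfies_outcome_constraints(outcome: TEvaluationOutcome) -> bool:
    l2norm_mean, _sem = outcome["l2norm"]  # (mean, standard error)
    return l2norm_mean <= 1.25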

Fa20 commented Jun 29, 2024

@danielcohenlive
I updated the code based on the above recommendation but got the error described below. Could you please help?

import math
from math import *

import numpy as np

from ax.service.ax_client import AxClient, ObjectiveProperties
from ax.utils.measurement.synthetic_functions import hartmann6
from ax.utils.notebook.plotting import init_notebook_plotting, render
from ax.core.types import (
    TEvaluationOutcome,
    TParameterization,
)

init_notebook_plotting()
ax_client = AxClient()
ax_client.create_experiment(
    name="hartmann_test_experiment",
    parameters=[
        {
            "name": "x1",
            "type": "range",
            "bounds": [0.0, 1.0],
            "value_type": "float",  # Optional, defaults to inference from type of "bounds".
            "log_scale": False,  # Optional, defaults to False.
        },
        {
            "name": "x2",
            "type": "range",
            "bounds": [0.0, 1.0],
        },
        {
            "name": "x3",
            "type": "range",
            "bounds": [0.0, 1.0],
        },
        {
            "name": "x4",
            "type": "range",
            "bounds": [0.0, 1.0],
        },
        {
            "name": "x5",
            "type": "range",
            "bounds": [0.0, 1.0],
        },
        {
            "name": "x6",
            "type": "range",
            "bounds": [0.0, 1.0],
        },
    ],
    objectives={"hartmann6": ObjectiveProperties(minimize=True)},
    #parameter_constraints=["x1 + x2 <= 2.0"],  # Optional.
    #outcome_constraints=["l2norm <= 1.25"],  # Optional.
)


def evaluate(parameterization):
    x = np.array([parameterization.get(f"x{i+1}") for i in range(6)])
    # In our case, standard error is 0, since we are computing a synthetic function.
    return {"hartmann6": (hartmann6(x), 0.0), "l2norm": (np.sqrt((x**2).sum()), 0.0)}
def satisfies_param_constraints(parameterization: TParameterization) -> bool:
    x1 = parameterization.get("x1")
    x2 = parameterization.get("x2")
    x3 = parameterization.get("x3")
    x4 = parameterization.get("x4")
    
    condition1=(x1 + x2 <= 2.0)&(x2 -5*x1<= 0)
    l_b = (
            np.sin(x1) 
            * (0.5 / 12)
            * math.pi
            * x3
        )
    u_b = (
            np.sin(x2)
            * (0.8 /x4)
            * math.pi
            * x3
        )
    
    check = (l_b < x2) & (
            x2 < u_b)
    
               
                
    return condition1 and check 

def satisfies_outcome_constraints( outcome: TEvaluationOutcome) -> bool:
        x = np.array([parameterization.get(f"x{i+1}") for i in range(6)])
        l2norm=(np.sqrt((x**2).sum())  )          
                
        return l2norm <= 1.25
for i in range(25):
    parameterization, trial_index = ax_client.get_next_trial()
    if not satisfies_param_constraints(parameterization=parameterization):
        ax_client.log_trial_failure(trial_index=trial_index)
    outcome = evaluate(parameterization)
    if satisfies_outcome_constraints(outcome=outcome):
        # Local evaluation here can be replaced with deployment to external system.
        ax_client.complete_trial(trial_index=trial_index, raw_data=evaluate(parameterization))
    else:
        ax_client.log_trial_failure(trial_index=trial_index)
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
Cell In[1], line 103
    101     ax_client.complete_trial(trial_index=trial_index, raw_data=evaluate(parameterization))
    102 else:
--> 103     ax_client.log_trial_failure(trial_index=trial_index)

File ~\anaconda3\Lib\site-packages\ax\service\ax_client.py:816, in AxClient.log_trial_failure(self, trial_index, metadata)
    809 """Mark that the given trial has failed while running.
    810 
    811 Args:
    812     trial_index: Index of trial within the experiment.
    813     metadata: Additional metadata to track about this run.
    814 """
    815 trial = self.experiment.trials[trial_index]
--> 816 trial.mark_failed()
    817 logger.info(f"Registered failure of trial {trial_index}.")
    818 if metadata is not None:

File ~\anaconda3\Lib\site-packages\ax\core\base_trial.py:725, in BaseTrial.mark_failed(self, reason, unsafe)
    717 """Mark trial as failed.
    718 
    719 Args:
   (...)
    722     The trial instance.
    723 """
    724 if not unsafe and self._status != TrialStatus.RUNNING:
--> 725     raise ValueError("Can only mark failed a trial that is currently running.")
    727 self._failed_reason = reason
    728 self._status = TrialStatus.FAILED

ValueError: Can only mark failed a trial that is currently running.

Abrikosoff commented Jun 29, 2024

@danielcohenlive I updated the code based on the above recommendation but got the error described below. Could you please help?

I think the most obvious point I can spot here is the use of log_trial_failure twice in the loop, which seems strange to me. What's probably happening is that an arm which fails the first check is passed to log_trial_failure, and then reaches the second log_trial_failure, at which point it has already been marked as failed and is no longer running.

@danielcohenlive
Copy link

Thanks for helping out @Abrikosoff! @Fa20, if a parameterization violates both the parameter and the outcome constraints, that would be a problem. You could modify the loop to look like this:

for i in range(25):
    parameterization, trial_index = ax_client.get_next_trial()
    if not satisfies_param_constraints(parameterization=parameterization):
        ax_client.log_trial_failure(trial_index=trial_index)
        continue  # ADDED: skip evaluation and the second status update
    outcome = evaluate(parameterization)
    if satisfies_outcome_constraints(outcome=outcome):
        # Local evaluation here can be replaced with deployment to external system.
        ax_client.complete_trial(trial_index=trial_index, raw_data=outcome)  # reuse the computed outcome
    else:
        ax_client.log_trial_failure(trial_index=trial_index)

In fact, without that continue, checking the param constraints first won't save any compute, because the outcomes would still be evaluated even when the parameter constraints fail.

@danielcohenlive danielcohenlive self-assigned this Jul 1, 2024
@danielcohenlive danielcohenlive added the question Further information is requested label Jul 1, 2024