
Using OMLT block (Linear Tree) to estimate derivative w.r.t. time leads to error when applying pyomo.environ.TransformationFactory #165

Open
tomasvanoyen opened this issue Nov 6, 2024 · 9 comments

Comments

@tomasvanoyen

Hi,

first of all let me thank you for this nice work.

I am trying to use the library to provide a data-driven surrogate model of the time derivative of a quantity (say, speed) and to use this to optimize the total cost of a route, considering also that the cost of accelerating depends on time.

As such, I am defining the following:

```python
# imports assumed from the OMLT linear-tree example notebook
import numpy as np
import pyomo.environ as pyo
import pyomo.dae as pyo_dae
from omlt import OmltBlock
from omlt.linear_tree import LinearTreeGDPFormulation

t_sim = np.array(range(0, 100))
model1 = pyo.ConcreteModel()
model1.t = pyo_dae.ContinuousSet(initialize=t_sim)
model1.speed = pyo.Var(model1.t, bounds=(0, 40))
model1.modelled_acceleration = pyo.Var(model1.t, bounds=(-10, 10))
model1.forc = pyo.Var(model1.t, bounds=(0, 10))
model1.cost_petrol = pyo.Var(model1.t)
model1.speed_dot = pyo_dae.DerivativeVar(model1.speed, wrt=model1.t)
model1.lt = OmltBlock()
# ltmodel being a LinearTreeDefinition of a regression model
formulation1_lt = LinearTreeGDPFormulation(ltmodel, transformation="hull")
model1.lt.build_formulation(formulation1_lt)

model1.connection_constrain_1 = pyo.Constraint(
    model1.t, rule=lambda m, t: m.forc[t] == model1.lt.inputs[0]
)
model1.connection_constrain_2 = pyo.Constraint(
    model1.t, rule=lambda m, t: m.modelled_acceleration[t] == model1.lt.outputs[0]
)
model1.speed_dot_ode = pyo.Constraint(
    model1.t, rule=lambda m, t: m.speed_dot[t] == m.modelled_acceleration[t] - 1
)
model1.speed_min = pyo.Constraint(
    range(0, len(t_sim)), rule=lambda m, k: m.speed[t_sim[k]] >= 10
)
model1.control = sum(model1.forc[t_sim[k]] for k in range(0, len(t_sim)))
model1.obj = pyo.Objective(
    expr=model1.control,
    sense=pyo.minimize,
)
```

However, then applying the TransformationFactory

```python
pyo.TransformationFactory("dae.collocation").apply_to(model1, nfe=len(t_sim))
```

produces the error below.

Could you provide any guidance on how to resolve this error?

Thanks in advance!

Tomas


```
TypeError                                 Traceback (most recent call last)
Cell In[37], line 3
      1 pyo.TransformationFactory(
      2     "dae.collocation"
----> 3 ).apply_to(model1, nfe=len(t_sim))

File ~/mambaforge/envs/fs_ems_ops/lib/python3.11/site-packages/pyomo/core/base/transformation.py:77, in Transformation.apply_to(self, model, **kwds)
     75 if not hasattr(model, '_transformation_data'):
     76     model._transformation_data = TransformationData()
---> 77 reverse_token = self._apply_to(model, **kwds)
     78 timer.report()
     80 return reverse_token

File ~/mambaforge/envs/fs_ems_ops/lib/python3.11/site-packages/pyomo/dae/plugins/colloc.py:464, in Collocation_Discretization_Transformation._apply_to(self, instance, **kwds)
    461 elif self._scheme_name == 'LAGRANGE-LEGENDRE':
    462     self._get_legendre_constants(currentds)
--> 464 self._transformBlock(instance, currentds)

File ~/mambaforge/envs/fs_ems_ops/lib/python3.11/site-packages/pyomo/dae/plugins/colloc.py:501, in Collocation_Discretization_Transformation._transformBlock(self, block, currentds)
    498 disc_info['afinal'] = self._afinal[currentds]
    499 disc_info['scheme'] = self._scheme_name
--> 501 expand_components(block)
    503 for d in block.component_objects(DerivativeVar, descend_into=True):
    504     dsets = d.get_continuousset_list()

File ~/mambaforge/envs/fs_ems_ops/lib/python3.11/site-packages/pyomo/dae/misc.py:124, in expand_components(block)
    118 # Record the missing BlockData before expanding components. This is for
    119 # the case where a ContinuousSet indexed Block is used in a Constraint.
    120 # If the Constraint is expanded before the Block then the missing
    121 # BlockData will be added to the indexed Block but will not be
    122 # constructed correctly.
    123 for blk in block.component_objects(Block, descend_into=True):
--> 124     missing_idx = set(blk.index_set()) - set(blk._data.keys())
    125     if missing_idx:
    126         blk._dae_missing_idx = missing_idx

File ~/mambaforge/envs/fs_ems_ops/lib/python3.11/site-packages/pyomo/core/base/set.py:572, in SetData.__iter__(self)
    564 def __iter__(self) -> Iterator[typing.Any]:
    565     """Iterate over the set members
    566
    567     Raises AttributeError for non-finite sets. This must be
    (...)
    570     underlying indexing set).
    571     """
--> 572 raise TypeError(
    573     "'%s' object is not iterable (non-finite Set '%s' "
    574     "is not iterable)" % (self.__class__.__name__, self.name)
    575 )

TypeError: 'GlobalSet' object is not iterable (non-finite Set 'NonNegativeIntegers' is not iterable)
```

@rmisener
Member

rmisener commented Nov 6, 2024

@bammari -- Possible to take a look at this one?

@tomasvanoyen
Author

Hi @rmisener @bammari ,

Thanks for this nice library.

Yet I am still wondering how to solve my problem. Any take on this? Any guidance on how to implement or circumvent this?
Or is it just not possible, and should we switch to a different method?

Thanks in any case!

Regards,

Tomas

@bammari
Collaborator

bammari commented Nov 13, 2024

@tomasvanoyen Thank you for raising this issue! Another student and I are looking into this and we will respond shortly. Thank you.

@bammari
Collaborator

bammari commented Nov 15, 2024

Hi @emma58! I'm hoping I can get your input here. I believe that because we're using Pyomo.GDP for the linear tree formulations, we get this error when a NonNegativeIntegers set is introduced during the transformation. I haven't had the opportunity to look into this further, but is there a way around it?

@tomasvanoyen You can get around this error by following Emma's solution below!

Please let me know if you have any additional questions in the meantime.

Bashar

@emma58
Contributor

emma58 commented Nov 15, 2024

@tomasvanoyen, @bammari is correct that the DAE transformation is getting tripped up by what the gdp.hull transformation is doing. You can fix this by calling dae.collocation first: when you construct LinearTreeGDPFormulation, set transformation="custom". This will give you a GDP formulation of the linear model tree rather than a MILP. Then, after you've built the model, call:

```python
pyo.TransformationFactory("dae.collocation").apply_to(model1, nfe=len(t_sim))
pyo.TransformationFactory("gdp.hull").apply_to(model1)
```

@tomasvanoyen
Author

Hi @emma58, @bammari thank you for your response.

I am stuck in second gear and therefore haven't had the chance yet to try your solution.

I hope to get back to you by Friday.

Thanks again,
Tomas

@tomasvanoyen
Author

Dear @emma58 and @bammari ,

yes indeed, setting transformation="custom", building the model, and only afterwards calling dae.collocation followed by gdp.hull allows one to tackle a time-continuous problem (e.g. model1.t = pyo_dae.ContinuousSet(initialize=t_sim)).

Now, I am attempting to find an optimal solution for my problem.

Thanks!

Tomas

@tomasvanoyen
Author

tomasvanoyen commented Dec 4, 2024

Dear @emma58 and @bammari,

indeed, your suggestion allows me to proceed with setting up the optimization problem. However, it seems that I am not able to apply this to my problem.

As mentioned, I am trying to use the library to provide a data-driven surrogate model of the time derivative of a quantity (say $C$):

$\frac{\partial C}{\partial t} = C(t) + F(t), $

and use this to optimize the total cost over time where

  1. the cost of $F$ depends on time;
  2. $C$ needs to remain within certain limits.

We considered two approaches:

  1. make a surrogate model of $\frac{\partial C}{\partial t}$;
  2. make a surrogate model of $C_{n+1}$, considering a first-order discretization of the equation above (made explicit just below).
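
For concreteness, a first-order (forward Euler) discretization of the equation above, assuming a uniform time step $\Delta t$, reads

$C_{n+1} = C_n + \Delta t \, \left( C_n + F_n \right),$

so the surrogate in option 2 predicts $C_{n+1}$ from the current state and forcing.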

We follow the recipe in linear_tree_formulations.ipynb:

```python
# imports assumed from the linear_tree_formulations.ipynb example
import numpy as np
import omlt
import pyomo.environ as pyo
import pyomo.dae as pyo_dae
from lineartree import LinearTreeRegressor
from sklearn.linear_model import LinearRegression
from omlt import OmltBlock
from omlt.linear_tree import LinearTreeDefinition, LinearTreeGDPFormulation

# Build the linear-tree model
regr = LinearTreeRegressor(
    LinearRegression(),
    criterion="mse",
    max_bins=120,
    min_samples_leaf=20,
    max_depth=8,
)

# Data needs to be in an array and reshaped
x_scaled = df[["x1_scaled", "x2_scaled"]].to_numpy()  # .reshape(-1, 1)
y_scaled = df["y_scaled"].to_numpy().reshape(-1, 1)

x = df[["x1_scaled", "x2_scaled"]].values
y = df["y_scaled"].values.reshape(-1, 1)

# train the linear tree on the scaled data
history1 = regr.fit(x, y)

# create an omlt scaling object
scaler = omlt.scaling.OffsetScaling(
    offset_inputs=[mean_data["x1"], mean_data["x2"]],
    factor_inputs=[std_data["x1"], std_data["x2"]],
    offset_outputs=[mean_data["y"]],
    factor_outputs=[std_data["y"]],
)

# create the input bounds. note that key 0 corresponds to input 0 and that
# we also scale the input bounds
input_bounds = {
    0: (
        (min(df["x1"]) - mean_data["x1"]) / std_data["x1"],
        (max(df["x1"]) - mean_data["x1"]) / std_data["x1"],
    ),
    1: (
        (min(df["x2"]) - mean_data["x2"]) / std_data["x2"],
        (max(df["x2"]) - mean_data["x2"]) / std_data["x2"],
    ),
}

T0 = 25  # initial condition (written as C0 in an earlier draft)

t_sim = np.array(range(0, 1000))
setpoint_sim = np.array([T0 + 10] * len(t_sim))
p_cost = np.sin(2.0 * np.pi * np.array(range(0, 1000)) / 1000) + 2

## make the model (option 2)
m = pyo.ConcreteModel()
m.t = pyo_dae.ContinuousSet(initialize=t_sim)

m.x1 = pyo.Var(m.t)
m.x2 = pyo.Var(m.t)

m.y = pyo.Var(m.t)
m.p_cost = pyo.Var(m.t)

m.x1[0] = T0  # initial value

ltmodel = LinearTreeDefinition(
    regr,
    scaling_object=scaler,
    scaled_input_bounds=input_bounds,
)
m.lt = OmltBlock()
formulation1_lt = LinearTreeGDPFormulation(ltmodel, transformation="custom")
m.lt.build_formulation(formulation1_lt)

m.connection_constrain_1 = pyo.Constraint(
    m.t, rule=lambda m, t: m.x1[t] == m.lt.inputs[0]
)
m.connection_constrain_2 = pyo.Constraint(
    m.t, rule=lambda m, t: m.x2[t] == m.lt.inputs[1]
)
m.connection_constrain_3 = pyo.Constraint(
    m.t, rule=lambda m, t: m.y[t] == m.lt.outputs[0]
)

m.temp_constrain_min = pyo.Constraint(
    range(0, len(t_sim)), rule=lambda m, k: m.y[t_sim[k]] >= setpoint_sim[k]
)
m.temp_constrain_max = pyo.Constraint(
    range(0, len(t_sim)), rule=lambda m, k: m.y[t_sim[k]] <= setpoint_sim[k] + 10
)

# input specifications
m.p_cost_constrain = pyo.Constraint(
    range(0, len(t_sim)), rule=lambda m, k: m.p_cost[t_sim[k]] == p_cost[k]
)

m.ls_control = sum(
    m.p_cost[t_sim[k]] * m.x2[t_sim[k]] for k in range(0, len(t_sim))
)

m.obj = pyo.Objective(expr=m.ls_control, sense=pyo.minimize)
pyo.TransformationFactory("dae.collocation").apply_to(m, nfe=len(t_sim))
pyo.TransformationFactory("gdp.hull").apply_to(m)
pyo.SolverFactory("ipopt").solve(m).write()
```

This leads to

```
message from solver: Ipopt 3.14.16: Converged to a locally infeasible point. Problem may be infeasible.
```

Increasing the input bounds (beyond values related to the standard deviation):

```python
input_bounds = {
    0: (-50, 50),
    1: (-50, 50),
}
```

does lead to a solution. However, this solution is a trivial one (a straight line in time) which does not account for the fact that the cost changes in time.

## make the model (option 1)

Following option 2, we change the above to connect the derivative with respect to time to the surrogate model:

```python
m.y = pyo_dae.DerivativeVar(m.x2, wrt=m.t)

m.connection_constrain_3 = pyo.Constraint(
    m.t, rule=lambda m, t: m.y[t] == m.lt.outputs[0]
)
```

This leads to

```
WARNING: Loading a SolverResults object with a warning status into
model.name="unknown";
  - termination condition: other
  - message from solver: Too few degrees of freedom (rethrown)!
Solver:
  • Status: warning
    Message: Too few degrees of freedom (rethrown)!
    Termination condition: other
```

Altogether, I seem to be stuck applying this approach to my problem.

With respect to option 1, apparently it doesn't work like that, even though it appears to be the intuitive way in my opinion.

With respect to option 2, I wonder whether the surrogate model is simply not suitable for this approach, and whether adopting a neural network would help. In any case, it is strange that I need to widen the input bounds in order to find a solution.

I can provide a notebook offline to exemplify the problem better, if requested.

Looking forward to any comments or suggestions.

Kind regards and thanks in any case.

Tomas

@tomasvanoyen
Author

Dear @emma58 and @bammari,

just a ping to ask whether there are any thoughts on this issue.

Please note that the answer "this will not work with our library" is also valid, as it allows the issue to be closed.

Kind regards and thanks in any case.

Tomas
