API reference¶
The full public surface, autogenerated from docstrings.
Top-level¶
polar-high — Python library for building indexed linear and mixed-integer programs in polars.
Problem ¶
LP container. Generic — no flextool-specific knowledge.
set_solver_options ¶
Store HiGHS options to be applied in solve(). Pass None
to clear. Keys are HiGHS canonical option names (presolve,
solver, parallel, time_limit, etc.); values must be
already coerced to the type HiGHS expects (str/int/float/bool).
Unknown keys are tolerated (a warning is emitted at solve time).
add_cstr ¶
add_cstr(name: str, *, over: DataFrame | None = None, sense: str, lhs_terms: dict[str, Var | Expr | Param | int | float], rhs_terms: dict[str, Var | Expr | Param | int | float] | None = None) -> None
Add a constraint of the form Σ lhs_terms sense Σ rhs_terms.
Each term entry is either:
- a Var or Expr — variable contribution, or
- a Param, int or float — constant contribution.
The engine sorts variables and constants out per side, builds
(lhs_var − rhs_var) sense (rhs_const − lhs_const), and adds
the row to highspy at solve time. Labels (the dict keys) are
used in row names and diagnostics.
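The rearrangement into (lhs_var − rhs_var) sense (rhs_const − lhs_const) can be sketched in plain Python (the constraint and numbers are made up, not drawn from the library):

```python
# Hypothetical constraint: (x + 3)  <=  (y + 10).
# add_cstr moves variables left and constants right:
# (x - y) <= (10 - 3), i.e. x - y <= 7 — the same feasible set.
lhs_const, rhs_const = 3.0, 10.0
for x, y in [(0.0, 0.0), (7.0, 0.0), (7.1, 0.0), (2.5, -4.5)]:
    original = (x + lhs_const) <= (y + rhs_const)
    normalized = (x - y) <= (rhs_const - lhs_const)
    assert original == normalized  # both forms accept/reject the same points
```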
cstr_names ¶
All constraint family names currently registered, in declaration order. Useful for emission audits and debugging.
cstrs_named ¶
Return constraint metadata records matching name.
An exact-name match returns the single record; otherwise a prefix
match returns every record whose name starts with name + "_"
(so passing "minimum_uptime" returns both
minimum_uptime_linear and minimum_uptime_integer).
Each :class:CstrRecord carries:
* name: full registered name of the constraint family;
* over: the polars DataFrame of axis tuples (len(over)
is the row count);
* proto: the underlying _CstrProto (expr, sense,
rhs) for advanced introspection.
cstr_row_count ¶
Total LP-row count across all constraint families matching
name (exact or prefix; see :meth:cstrs_named). Returns 0
when no families match — letting callers distinguish "absent"
from "empty" without exception handling. A scalar constraint
(over=None) counts as one row.
add_obj_constant ¶
Accumulate a constant into the objective offset. HiGHS adds
this to the reported getObjectiveValue() after solve, so it
shows up in Solution.obj even though no decision variable
carries it. Used for pure-Param objective terms like the §8.1
existing-entity fixed cost.
solve ¶
solve(*, options: dict | None = None, keep_solver: bool = False, streaming: bool = True) -> Solution
Solve the LP and return a :class:Solution.
Parameters¶
options
Per-call HiGHS options dict (overrides set_solver_options).
keep_solver
When True, the live HiGHS instance is kept on the returned
:class:Solution so callers can inspect it post-solve (e.g.
sol.highs.writeModel("model.mps")). Default False —
the C-side LP storage is released as soon as primal/dual/
objective have been extracted.
streaming
When True (default), columns are added once via addCols
and each constraint family is emitted to HiGHS via addRows
immediately after its COO triples are built; the family's local
arrays then go out of scope before the next family is processed.
This caps peak memory at one family's COO + the running HiGHS
LP. When False, the entire model is assembled into a single
:class:highspy.HighsLp and loaded via passModel —
numerically identical results either way; False is mostly
useful for benchmarking the legacy path.
peek_lp_ranges ¶
Build the LP into numpy arrays and return coefficient ranges, WITHOUT running HiGHS.
Returns a dict with these keys:
* 'matrix', 'cost', 'col_bound', 'row_bound' —
(abs_min, abs_max) of the finite, non-zero, non-infinity
values, or None when empty.
* When top_k > 0, also includes 'matrix_smallest',
'matrix_largest', 'cost_smallest', 'cost_largest',
'col_bound_smallest', 'col_bound_largest',
'row_bound_smallest', 'row_bound_largest' — lists of
(abs_value, col_name, row_name_or_None) triples
(row_name is None for cost / col_bound; col_name
is None for row_bound).
The returned ranges are exactly what HiGHS would see during its "Coefficient ranges" diagnostic.
Notes¶
- Uses the non-streaming build path (single HighsLp construction).
- Cost: scans col_obj (per-column objective coefficients).
- Matrix: scans sorted_v (CSC values, with parallel sorted_r row indices + starts col offsets for name lookup when top_k > 0).
- Col bounds: scans col-bound arrays, filtered to finite + |v| < kHighsInf + v != 0.
- Row bounds (RHS): same filter on row-bound arrays.
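The range filter can be mimicked with numpy — a sketch of the (abs_min, abs_max) computation, not the library's code; the kHighsInf stand-in value is an assumption:

```python
import numpy as np

K_HIGHS_INF = 1e30  # stand-in for kHighsInf (assumed value)

def coef_range(values):
    """(abs_min, abs_max) of the finite, non-zero, non-infinity entries, or None."""
    a = np.abs(np.asarray(values, dtype=float))
    a = a[np.isfinite(a) & (a != 0.0) & (a < K_HIGHS_INF)]
    if a.size == 0:
        return None
    return float(a.min()), float(a.max())

print(coef_range([0.0, -2.5, 1e-4, float("inf"), 3.0]))  # (0.0001, 3.0)
print(coef_range([0.0, float("inf")]))                   # None
```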
Var ¶
Var(name: str, dims: tuple[str, ...], frame: DataFrame, lower: float = 0.0, upper: float = float('inf'), integer: bool = False)
A variable family. frame carries columns *dims, col_id.
Var.frame stays an eager polars DataFrame — it's small (one row
per LP column), produced once in :meth:Problem.add_var, and
consumed by both flextool integration (v.frame["col_id"].unique())
and Problem.solve (col_id → bound/name lookups). Algebra ops on
Var lazify on the fly so the resulting _Term is lazy.
Param ¶
Param(dims: tuple[str, ...], frame: DataFrame | LazyFrame, name: str | None = None, _sources: list[tuple[Param, int]] | None = None)
A parameter table. frame carries columns *dims, value.
Stored internally as a polars.LazyFrame so that chained algebra
ops (Param * Param, Param + Param etc.) defer materialization
until a consumer reads .frame or the engine collects in
Problem.solve. The .frame property caches the eager
DataFrame on first read — flextool reads .frame.rename(...)
repeatedly off the same Param so we want that to be cheap.
name (optional) is a logical Param identifier (e.g. "p_inflow").
It is opt-in metadata used by :class:WarmProblem's Param-tracked
auto-update (declare_mutable / update_param). When unset,
Params are anonymous and carry no tracking overhead.
_sources records constituent named Params for composite results
of Param * Param or Param / Param. Each entry is
(param_name, dims_tuple, direction) where direction is +1 if the
Param contributes to the numerator and -1 if to the denominator.
Anonymous-only chains have _sources is None.
Expr ¶
A sum of terms (decision-variable contributions).
The terms can have different open-dim sets — they're concatenated,
not broadcast. Broadcasting happens once, at constraint emission,
via a join to the constraint's over= row index.
WarmProblem ¶
Warm-update wrapper around a :class:Problem.
Build a :class:Problem as usual (add_var, add_cstr,
set_objective). Wrap with :class:WarmProblem, then alternate
update_* calls and solve() calls — the LP is built ONCE and
only the changed coefficients / RHS values are pushed to HiGHS
between solves.
Typical rolling-horizon usage::
wp = WarmProblem(p)
sol_0 = wp.solve()
for r in range(1, n_rolls):
wp.update_rhs("balance", demand_param_for_roll[r])
wp.update_obj_coef("v_flow", cost_param_for_roll[r])
sol_r = wp.solve()
The update_* calls are O(rows_or_cols_in_family); the solve()
benefits from HiGHS's hot-start (basis is preserved across calls).
problem
property
¶
The underlying :class:Problem.
Useful for diagnostics that need to inspect the un-built LP —
e.g. :meth:Problem.peek_lp_ranges to read coefficient ranges
before the first :meth:solve triggers the build.
update_rhs ¶
Replace the RHS of constraint family cstr_name with values
drawn from new_param.
new_param may be a :class:Param whose dims match a subset
of the constraint's over= axis (broadcasting to the rest), a
scalar (broadcast to all rows), or a numpy array (positional —
length must equal the family's row count, in the order of the
original over frame).
update_obj_coef ¶
Replace the objective coefficient on every column of var_name.
Assumes the objective contribution from var_name is exactly
coef[*dims] * var[*dims] for some coef Param; this method
OVERWRITES that coefficient via h.changeColsCost. If the
objective also has contributions from this variable through more
complex algebra (e.g. var * unitsize * slope), the update is
still valid as long as new_param carries the full product —
the caller is responsible for collapsing multi-Param products
ahead of the call.
This DOES NOT touch the cost coefficients of other variables.
update_obj_coef_array ¶
Array-form of :meth:update_obj_coef.
dim_tuples is a list of dim-value tuples (one per cell) for
variable var_name; each tuple must have one entry per dim
in the var's declared signature. values is a same-length
numpy array of new objective coefficients.
The columns are resolved positionally: values[k] becomes the
new objective coefficient on the column whose dim-tuple is
dim_tuples[k]. Vectorised — a single changeColsCost
call regardless of cell count.
fix_cols ¶
Fix the listed columns of var_name to the given values.
For each (dim_tuple, value) pair, sets both the column's
lower and upper bound to value (so the LP has no choice
but to set the column at that level). Used by the Lagrangian
primal-recovery step ("fix-and-resolve"). Vectorised — single
changeColsBounds call.
update_coef ¶
Update a single (row, col) coefficient in the constraint
matrix. Use :meth:row_id_of_cstr / :meth:col_id_of_var to
resolve indices semantically.
declare_mutable ¶
Declare a set of :class:Param names whose values should be
tracked into LP cells, so :meth:update_param can later push
new values into the live HiGHS instance via changeCoeff.
MUST be called BEFORE the first :meth:solve. Tracking is
opt-in: Params not declared here pay no bookkeeping cost.
Pass the same names that the Params carry on their .name
field — typically the FlexData attribute name ("p_inflow",
"p_penalty_up" etc.).
update_param ¶
Replace the values of a tracked Param. Every LP cell whose
coefficient was originally a function of param_name is
re-computed from the new Param's values and pushed via
h.changeCoeff.
new_param must be either a scalar (broadcast to all tracked
cells) or a :class:Param whose dim signature matches the
signature recorded for that Param at build time.
Raises if param_name was not in :meth:declare_mutable's
list (silent corruption is worse than a hard error).
col_id_of_var ¶
Return the col_id(s) for a variable.
dims=None returns every col_id in the variable's family
(numpy array, ordered by the var's declaration order).
dims as a tuple of dim values returns the single col_id for
that one cell (a python int). dims as a dict
{dim_name: value} is a partial filter — returns a numpy
array of the matching col_ids.
row_id_of_cstr ¶
Return the row_id(s) for a constraint family. Mirrors
:meth:col_id_of_var.
solve ¶
Solve the LP. First call builds the LP from scratch (same
pipeline as :meth:Problem.solve); subsequent calls just run
HiGHS again on the (possibly updated) live model.
options is honoured on the FIRST solve only — subsequent
solves use the same HiGHS instance. To change options on a
rebuilt LP, drop the WarmProblem and create a new one.
Solution ¶
Solution(*, optimal: bool, obj: float, col_value: ndarray, row_dual: ndarray, col_names: list[str], row_names: list[str], vars: dict[str, Var], col_dual: ndarray | None = None, highs: Highs | None = None)
Read-only view of the solved LP. Look up variable values by
name; values come back as a polars frame (*dims, value).
value_wide ¶
value_wide(var_name: str, time_dims: tuple[str, ...] = ('d', 't'), solve_name: str | None = None) -> pl.DataFrame
Wide-form, flextool-compatible: time dims become rows, the remaining dims are encoded as a tuple-stringified column header.
For a 2-d variable like vq_state_up(n, d, t):
long : rows = (n, d, t, value)
wide : rows = (d, t) + one column per n (header = "west").
For a 5-d variable like v_flow(p, source, sink, d, t):
wide : rows = (d, t) + one column per (p, source, sink),
header = "('coal_plant', 'coal_market', 'west')"
to match flextool's MultiIndex parquet round-trip.
If solve_name is given, prepend a constant solve column
for fuller flextool-output compatibility.
constraint_dual ¶
Per-row dual values for a named constraint. Returns a frame
(over_dims..., dual) if the constraint had over= rows,
else a single-row scalar frame (dual,).
CstrRecord ¶
Read-only metadata for a registered constraint family.
Returned by :meth:Problem.cstrs_named for emission-introspection
tests. proto carries the LHS Expr, sense and rhs
structures; most callers only need over (whose height is the
row count) and name.
CouplingEntry
dataclass
¶
One participant in a :class:CouplingSpec. dim_tuples has
one tuple per coupling cell; entries in one CouplingSpec must
share length (entry-i tuple-k pairs with entry-j tuple-k under
the same λ_k).
CouplingSpec
dataclass
¶
A linear coupling family across subproblems: per cell k,
Σ_e coef_e · x[entries[e].cols[k]] = rhs[k]. rhs is a
scalar or an array sized to the cell count; default 0.
LagrangianProblem ¶
Lagrangian decomposition driver. Build N :class:Problems,
list the cross-subproblem :class:CouplingSpecs, then call
LagrangianProblem(subproblems, couplings).solve(...).
solve ¶
solve(*, max_iters: int = 100, tol: float = 1.0, step: float = 1.0, initial_lambda: float = 0.0, min_iters: int = 1, primal_tail: int | None = None) -> LagrangianSolution
Run the dual-subgradient loop.
step / √k is the diminishing step on iter k.
initial_lambda is a non-zero seed (breaks trivial 0-flow
equilibria). min_iters floors the iteration count so the
early-termination test can't fire on iter 1. primal_tail
defaults to max(20, max_iters//4).
LagrangianSolution
dataclass
¶
LagrangianSolution(converged: bool, iterations: int, total_objective: float, report_kind: str, subproblem_objectives: list[float], iteration_log: list[dict], final_lambdas: list[ndarray], primal_recovery: list[ndarray] = list(), best_dual_total: float = 0.0, recovered_total: float = 0.0)
Result bundle from :meth:LagrangianProblem.solve.
total_objective is the chosen reported total; report_kind
is "best_dual" (always for now — best LB across iters).
final_lambdas and primal_recovery are ordered like
LagrangianProblem.couplings. The trailing iteration_log
entry has iter == -1 and carries report_kind / dual / primal
summary fields.
Sum ¶
Aggregate an Expr. over lists the dims to sum out; the
remaining dims become the term's open dims. where is an index
frame that pre-filters the term frames (inner join on shared
columns) before the group-by-sum.
Sum(expr) with over=None collapses every open dim — useful
for a scalar (objective term, single-row constraint).
Where ¶
Inner-join an Expr against frame. Two effects in one op:
- Filter — rows of the term whose shared-column values don't appear in frame are dropped (e.g. Where(v_flow, wind_only) keeps only the wind rows).
- Map — any columns of frame that the term doesn't already carry become new open dims of the resulting term (e.g. Where(v_flow, flow_to_n), where flow_to_n has columns (p, source, sink, n), adds n so the term can be bound to a constraint indexed by (n, t)).
Lag ¶
Return an Expr that, for each (carry_dims, time_dim) in
lag_frame, references var at (carry_dims, lag_col).
Used for shifting variables in time, e.g. for storage state-change:
v_state[n, d, t] - v_state[n, d, t_prev]
= v_state - Lag(v_state, dtttdt, "t", "t_prev_within_timeset")
lag_frame carries the (d, t, t_prev) lookup; carry_dims are
the columns shared between var and lag_frame other than the
time dim itself (typically d).
Modules¶
polar_high.engine ¶
Generic polars-backed LP kernel.
Three primitives — Var, Param, Sum — and one container
(Problem). Knows nothing about flextool, energy systems, or any
specific model. A constraint is built as either:
- an Expr produced by overloaded operators (v <= cap, Sum(...) >= rhs, lhs.eq(rhs)), passed positionally to Problem.add_cstr; or
- a labelled terms dict, summed across all entries, with an explicit sense and rhs. Use this when a constraint is naturally a sum of named contributions (storage transitions, sink flow, source flow, slack — like flextool's nodeBalance_eq).
A variable is a polars frame (*dims, col_id) — one LP column per
row. A parameter is a polars frame (*dims, value). Var * Param
joins on shared dims and emits an Expr term (*union_dims, col_id,
coef). Sum(expr, over=…) group-by-sums one or more dims; the
remaining dims become the constraint's row dims when the term is bound
to over= at add_cstr time.
Param ¶
Param(dims: tuple[str, ...], frame: DataFrame | LazyFrame, name: str | None = None, _sources: list[tuple[Param, int]] | None = None)
A parameter table. frame carries columns *dims, value.
Stored internally as a polars.LazyFrame so that chained algebra
ops (Param * Param, Param + Param etc.) defer materialization
until a consumer reads .frame or the engine collects in
Problem.solve. The .frame property caches the eager
DataFrame on first read — flextool reads .frame.rename(...)
repeatedly off the same Param so we want that to be cheap.
name (optional) is a logical Param identifier (e.g. "p_inflow").
It is opt-in metadata used by :class:WarmProblem's Param-tracked
auto-update (declare_mutable / update_param). When unset,
Params are anonymous and carry no tracking overhead.
_sources records constituent named Params for composite results
of Param * Param or Param / Param. Each entry is
(param_name, dims_tuple, direction) where direction is +1 if the
Param contributes to the numerator and -1 if to the denominator.
Anonymous-only chains have _sources is None.
Var ¶
Var(name: str, dims: tuple[str, ...], frame: DataFrame, lower: float = 0.0, upper: float = float('inf'), integer: bool = False)
A variable family. frame carries columns *dims, col_id.
Var.frame stays an eager polars DataFrame — it's small (one row
per LP column), produced once in :meth:Problem.add_var, and
consumed by both flextool integration (v.frame["col_id"].unique())
and Problem.solve (col_id → bound/name lookups). Algebra ops on
Var lazify on the fly so the resulting _Term is lazy.
Expr ¶
A sum of terms (decision-variable contributions).
The terms can have different open-dim sets — they're concatenated,
not broadcast. Broadcasting happens once, at constraint emission,
via a join to the constraint's over= row index.
CstrRecord ¶
Read-only metadata for a registered constraint family.
Returned by :meth:Problem.cstrs_named for emission-introspection
tests. proto carries the LHS Expr, sense and rhs
structures; most callers only need over (whose height is the
row count) and name.
Problem ¶
LP container. Generic — no flextool-specific knowledge.
set_solver_options ¶
Store HiGHS options to be applied in solve(). Pass None
to clear. Keys are HiGHS canonical option names (presolve,
solver, parallel, time_limit, etc.); values must be
already coerced to the type HiGHS expects (str/int/float/bool).
Unknown keys are tolerated (a warning is emitted at solve time).
add_cstr ¶
add_cstr(name: str, *, over: DataFrame | None = None, sense: str, lhs_terms: dict[str, Var | Expr | Param | int | float], rhs_terms: dict[str, Var | Expr | Param | int | float] | None = None) -> None
Add a constraint of the form Σ lhs_terms sense Σ rhs_terms.
Each term entry is either:
- a Var or Expr — variable contribution, or
- a Param, int or float — constant contribution.
The engine sorts variables and constants out per side, builds
(lhs_var − rhs_var) sense (rhs_const − lhs_const), and adds
the row to highspy at solve time. Labels (the dict keys) are
used in row names and diagnostics.
cstr_names ¶
All constraint family names currently registered, in declaration order. Useful for emission audits and debugging.
cstrs_named ¶
Return constraint metadata records matching name.
An exact-name match returns the single record; otherwise a prefix
match returns every record whose name starts with name + "_"
(so passing "minimum_uptime" returns both
minimum_uptime_linear and minimum_uptime_integer).
Each :class:CstrRecord carries:
* name: full registered name of the constraint family;
* over: the polars DataFrame of axis tuples (len(over)
is the row count);
* proto: the underlying _CstrProto (expr, sense,
rhs) for advanced introspection.
cstr_row_count ¶
Total LP-row count across all constraint families matching
name (exact or prefix; see :meth:cstrs_named). Returns 0
when no families match — letting callers distinguish "absent"
from "empty" without exception handling. A scalar constraint
(over=None) counts as one row.
add_obj_constant ¶
Accumulate a constant into the objective offset. HiGHS adds
this to the reported getObjectiveValue() after solve, so it
shows up in Solution.obj even though no decision variable
carries it. Used for pure-Param objective terms like the §8.1
existing-entity fixed cost.
solve ¶
solve(*, options: dict | None = None, keep_solver: bool = False, streaming: bool = True) -> Solution
Solve the LP and return a :class:Solution.
Parameters¶
options
Per-call HiGHS options dict (overrides set_solver_options).
keep_solver
When True, the live HiGHS instance is kept on the returned
:class:Solution so callers can inspect it post-solve (e.g.
sol.highs.writeModel("model.mps")). Default False —
the C-side LP storage is released as soon as primal/dual/
objective have been extracted.
streaming
When True (default), columns are added once via addCols
and each constraint family is emitted to HiGHS via addRows
immediately after its COO triples are built; the family's local
arrays then go out of scope before the next family is processed.
This caps peak memory at one family's COO + the running HiGHS
LP. When False, the entire model is assembled into a single
:class:highspy.HighsLp and loaded via passModel —
numerically identical results either way; False is mostly
useful for benchmarking the legacy path.
peek_lp_ranges ¶
Build the LP into numpy arrays and return coefficient ranges, WITHOUT running HiGHS.
Returns a dict with these keys:
* 'matrix', 'cost', 'col_bound', 'row_bound' —
(abs_min, abs_max) of the finite, non-zero, non-infinity
values, or None when empty.
* When top_k > 0, also includes 'matrix_smallest',
'matrix_largest', 'cost_smallest', 'cost_largest',
'col_bound_smallest', 'col_bound_largest',
'row_bound_smallest', 'row_bound_largest' — lists of
(abs_value, col_name, row_name_or_None) triples
(row_name is None for cost / col_bound; col_name
is None for row_bound).
The returned ranges are exactly what HiGHS would see during its "Coefficient ranges" diagnostic.
Notes¶
- Uses the non-streaming build path (single HighsLp construction).
- Cost: scans col_obj (per-column objective coefficients).
- Matrix: scans sorted_v (CSC values, with parallel sorted_r row indices + starts col offsets for name lookup when top_k > 0).
- Col bounds: scans col-bound arrays, filtered to finite + |v| < kHighsInf + v != 0.
- Row bounds (RHS): same filter on row-bound arrays.
Solution ¶
Solution(*, optimal: bool, obj: float, col_value: ndarray, row_dual: ndarray, col_names: list[str], row_names: list[str], vars: dict[str, Var], col_dual: ndarray | None = None, highs: Highs | None = None)
Read-only view of the solved LP. Look up variable values by
name; values come back as a polars frame (*dims, value).
value_wide ¶
value_wide(var_name: str, time_dims: tuple[str, ...] = ('d', 't'), solve_name: str | None = None) -> pl.DataFrame
Wide-form, flextool-compatible: time dims become rows, the remaining dims are encoded as a tuple-stringified column header.
For a 2-d variable like vq_state_up(n, d, t):
long : rows = (n, d, t, value)
wide : rows = (d, t) + one column per n (header = "west").
For a 5-d variable like v_flow(p, source, sink, d, t):
wide : rows = (d, t) + one column per (p, source, sink),
header = "('coal_plant', 'coal_market', 'west')"
to match flextool's MultiIndex parquet round-trip.
If solve_name is given, prepend a constant solve column
for fuller flextool-output compatibility.
constraint_dual ¶
Per-row dual values for a named constraint. Returns a frame
(over_dims..., dual) if the constraint had over= rows,
else a single-row scalar frame (dual,).
WarmProblem ¶
Warm-update wrapper around a :class:Problem.
Build a :class:Problem as usual (add_var, add_cstr,
set_objective). Wrap with :class:WarmProblem, then alternate
update_* calls and solve() calls — the LP is built ONCE and
only the changed coefficients / RHS values are pushed to HiGHS
between solves.
Typical rolling-horizon usage::
wp = WarmProblem(p)
sol_0 = wp.solve()
for r in range(1, n_rolls):
wp.update_rhs("balance", demand_param_for_roll[r])
wp.update_obj_coef("v_flow", cost_param_for_roll[r])
sol_r = wp.solve()
The update_* calls are O(rows_or_cols_in_family); the solve()
benefits from HiGHS's hot-start (basis is preserved across calls).
problem
property
¶
The underlying :class:Problem.
Useful for diagnostics that need to inspect the un-built LP —
e.g. :meth:Problem.peek_lp_ranges to read coefficient ranges
before the first :meth:solve triggers the build.
update_rhs ¶
Replace the RHS of constraint family cstr_name with values
drawn from new_param.
new_param may be a :class:Param whose dims match a subset
of the constraint's over= axis (broadcasting to the rest), a
scalar (broadcast to all rows), or a numpy array (positional —
length must equal the family's row count, in the order of the
original over frame).
update_obj_coef ¶
Replace the objective coefficient on every column of var_name.
Assumes the objective contribution from var_name is exactly
coef[*dims] * var[*dims] for some coef Param; this method
OVERWRITES that coefficient via h.changeColsCost. If the
objective also has contributions from this variable through more
complex algebra (e.g. var * unitsize * slope), the update is
still valid as long as new_param carries the full product —
the caller is responsible for collapsing multi-Param products
ahead of the call.
This DOES NOT touch the cost coefficients of other variables.
update_obj_coef_array ¶
Array-form of :meth:update_obj_coef.
dim_tuples is a list of dim-value tuples (one per cell) for
variable var_name; each tuple must have one entry per dim
in the var's declared signature. values is a same-length
numpy array of new objective coefficients.
The columns are resolved positionally: values[k] becomes the
new objective coefficient on the column whose dim-tuple is
dim_tuples[k]. Vectorised — a single changeColsCost
call regardless of cell count.
fix_cols ¶
Fix the listed columns of var_name to the given values.
For each (dim_tuple, value) pair, sets both the column's
lower and upper bound to value (so the LP has no choice
but to set the column at that level). Used by the Lagrangian
primal-recovery step ("fix-and-resolve"). Vectorised — single
changeColsBounds call.
update_coef ¶
Update a single (row, col) coefficient in the constraint
matrix. Use :meth:row_id_of_cstr / :meth:col_id_of_var to
resolve indices semantically.
declare_mutable ¶
Declare a set of :class:Param names whose values should be
tracked into LP cells, so :meth:update_param can later push
new values into the live HiGHS instance via changeCoeff.
MUST be called BEFORE the first :meth:solve. Tracking is
opt-in: Params not declared here pay no bookkeeping cost.
Pass the same names that the Params carry on their .name
field — typically the FlexData attribute name ("p_inflow",
"p_penalty_up" etc.).
update_param ¶
Replace the values of a tracked Param. Every LP cell whose
coefficient was originally a function of param_name is
re-computed from the new Param's values and pushed via
h.changeCoeff.
new_param must be either a scalar (broadcast to all tracked
cells) or a :class:Param whose dim signature matches the
signature recorded for that Param at build time.
Raises if param_name was not in :meth:declare_mutable's
list (silent corruption is worse than a hard error).
col_id_of_var ¶
Return the col_id(s) for a variable.
dims=None returns every col_id in the variable's family
(numpy array, ordered by the var's declaration order).
dims as a tuple of dim values returns the single col_id for
that one cell (a python int). dims as a dict
{dim_name: value} is a partial filter — returns a numpy
array of the matching col_ids.
row_id_of_cstr ¶
Return the row_id(s) for a constraint family. Mirrors
:meth:col_id_of_var.
solve ¶
Solve the LP. First call builds the LP from scratch (same
pipeline as :meth:Problem.solve); subsequent calls just run
HiGHS again on the (possibly updated) live model.
options is honoured on the FIRST solve only — subsequent
solves use the same HiGHS instance. To change options on a
rebuilt LP, drop the WarmProblem and create a new one.
Lag ¶
Return an Expr that, for each (carry_dims, time_dim) in
lag_frame, references var at (carry_dims, lag_col).
Used for shifting variables in time, e.g. for storage state-change:
v_state[n, d, t] - v_state[n, d, t_prev]
= v_state - Lag(v_state, dtttdt, "t", "t_prev_within_timeset")
lag_frame carries the (d, t, t_prev) lookup; carry_dims are
the columns shared between var and lag_frame other than the
time dim itself (typically d).
Where ¶
Inner-join an Expr against frame. Two effects in one op:
- Filter — rows of the term whose shared-column values don't appear in frame are dropped (e.g. Where(v_flow, wind_only) keeps only the wind rows).
- Map — any columns of frame that the term doesn't already carry become new open dims of the resulting term (e.g. Where(v_flow, flow_to_n), where flow_to_n has columns (p, source, sink, n), adds n so the term can be bound to a constraint indexed by (n, t)).
Sum ¶
Aggregate an Expr. over lists the dims to sum out; the
remaining dims become the term's open dims. where is an index
frame that pre-filters the term frames (inner join on shared
columns) before the group-by-sum.
Sum(expr) with over=None collapses every open dim — useful
for a scalar (objective term, single-row constraint).
polar_high.lagrangian ¶
Generic Lagrangian decomposition for coupled :class:Problems.
A domain-agnostic dual-subgradient driver for N independent LP subproblems linked by linear coupling constraints of the form
Σ_i coef_i · col_i = rhs.
Each :class:CouplingSpec carries a list of
(subproblem_idx, var_name, dim_tuple, coef) entries plus an
optional rhs (default 0). The most common use is the 2-entry
consensus coupling x_A == x_B with coefs +1 / -1, rhs 0.
Algorithm
- Bump each entry's column cost by coef · λ (relaxes the coupling residual into the objective).
- Solve every subproblem (warm-started after iter 1).
- Compute the residual Σ coef_i · x_i − rhs per cell.
- Subgradient step λ ← λ + (step / √k) · residual.
- Tail-window primal averaging → fix-and-resolve for a feasible primal upper bound; report the best dual (max Σ obj across iters) as the tight lower bound.
Knows nothing about half-flows or regions — that lives in the flextool-side wrapper.
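One iteration of the residual and λ update, sketched in numpy for the common 2-entry consensus coupling (all values invented for illustration):

```python
import numpy as np

# Consensus coupling x_A == x_B per cell: coefs +1 / -1, rhs 0.
x_a = np.array([5.0, 2.0])   # subproblem A's coupled column values
x_b = np.array([3.0, 2.0])   # subproblem B's coupled column values
lam = np.zeros(2)            # one multiplier per coupling cell
step, k = 1.0, 1

residual = (+1.0) * x_a + (-1.0) * x_b - 0.0   # Σ coef_i · x_i − rhs, per cell
lam = lam + (step / np.sqrt(k)) * residual     # λ ← λ + (step/√k) · residual
# Cell 0 disagrees (5 vs 3) so its λ moves; cell 1 agrees and stays at 0.
```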
CouplingEntry
dataclass
¶
One participant in a :class:CouplingSpec. dim_tuples has
one tuple per coupling cell; entries in one CouplingSpec must
share length (entry-i tuple-k pairs with entry-j tuple-k under
the same λ_k).
CouplingSpec
dataclass
¶
A linear coupling family across subproblems: per cell k,
Σ_e coef_e · x[entries[e].cols[k]] = rhs[k]. rhs is a
scalar or an array sized to the cell count; default 0.
LagrangianSolution
dataclass
¶
LagrangianSolution(converged: bool, iterations: int, total_objective: float, report_kind: str, subproblem_objectives: list[float], iteration_log: list[dict], final_lambdas: list[ndarray], primal_recovery: list[ndarray] = list(), best_dual_total: float = 0.0, recovered_total: float = 0.0)
Result bundle from :meth:LagrangianProblem.solve.
total_objective is the chosen reported total; report_kind
is "best_dual" (always for now — best LB across iters).
final_lambdas and primal_recovery are ordered like
LagrangianProblem.couplings. The trailing iteration_log
entry has iter == -1 and carries report_kind / dual / primal
summary fields.
LagrangianProblem ¶
Lagrangian decomposition driver. Build N :class:Problems,
list the cross-subproblem :class:CouplingSpecs, then call
LagrangianProblem(subproblems, couplings).solve(...).
solve ¶
solve(*, max_iters: int = 100, tol: float = 1.0, step: float = 1.0, initial_lambda: float = 0.0, min_iters: int = 1, primal_tail: int | None = None) -> LagrangianSolution
Run the dual-subgradient loop.
step / √k is the diminishing step on iter k.
initial_lambda is a non-zero seed (breaks trivial 0-flow
equilibria). min_iters floors the iteration count so the
early-termination test can't fire on iter 1. primal_tail
defaults to max(20, max_iters//4).