
API reference

The full public surface, autogenerated from docstrings.

Top-level

polar-high — Python library for building indexed linear and mixed-integer programs in polars.

Problem

Problem()

LP container. Generic — no flextool-specific knowledge.

set_solver_options

set_solver_options(options: dict | None) -> None

Store HiGHS options to be applied in solve(). Pass None to clear. Keys are HiGHS canonical option names (presolve, solver, parallel, time_limit, etc.); values must already be coerced to the type HiGHS expects (str/int/float/bool). Unknown keys are tolerated (a warning is emitted at solve time).

add_cstr

add_cstr(name: str, *, over: DataFrame | None = None, sense: str, lhs_terms: dict[str, Var | Expr | Param | int | float], rhs_terms: dict[str, Var | Expr | Param | int | float] | None = None) -> None

Add a constraint of the form Σ lhs_terms sense Σ rhs_terms.

Each term entry is either
  • a Var or Expr — variable contribution, or
  • a Param, int or float — constant contribution.

The engine separates variable and constant contributions on each side, builds (lhs_var − rhs_var) sense (rhs_const − lhs_const), and adds the row to highspy at solve time. Labels (the dict keys) are used in row names and diagnostics.

cstr_names

cstr_names() -> list[str]

All constraint family names currently registered, in declaration order. Useful for emission audits and debugging.

cstrs_named

cstrs_named(name: str) -> list[CstrRecord]

Return constraint metadata records matching name.

An exact-name match returns the single record; otherwise a prefix match returns every record whose name starts with name + "_" (so passing "minimum_uptime" returns both minimum_uptime_linear and minimum_uptime_integer).
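The matching rule can be sketched in plain Python (a hypothetical registry dict stands in for the internal constraint store):

```python
def matching_names(registry: dict, name: str) -> list:
    """Exact match wins; otherwise fall back to the name + "_" prefix rule."""
    if name in registry:
        return [name]
    prefix = name + "_"
    return [k for k in registry if k.startswith(prefix)]

registry = {"minimum_uptime_linear": 1, "minimum_uptime_integer": 2, "balance": 3}
matching_names(registry, "minimum_uptime")
# → ['minimum_uptime_linear', 'minimum_uptime_integer']
```

Note the trailing "_" in the prefix: "minimum_uptime" matches its sub-families but a family literally named "minimum_uptime" would be an exact hit instead.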

Each :class:CstrRecord carries:
  • name — full registered name of the constraint family;
  • over — the polars DataFrame of axis tuples (len(over) is the row count);
  • proto — the underlying _CstrProto (expr, sense, rhs) for advanced introspection.

cstr_row_count

cstr_row_count(name: str) -> int

Total LP-row count across all constraint families matching name (exact or prefix; see :meth:cstrs_named). Returns 0 when no families match — letting callers distinguish "absent" from "empty" without exception handling. A scalar constraint (over=None) counts as one row.

add_obj_constant

add_obj_constant(value: float) -> None

Accumulate a constant into the objective offset. HiGHS adds this to the reported getObjectiveValue() after solve, so it shows up in Solution.obj even though no decision variable carries it. Used for pure-Param objective terms like the §8.1 existing-entity fixed cost.

solve

solve(*, options: dict | None = None, keep_solver: bool = False, streaming: bool = True) -> Solution

Solve the LP and return a :class:Solution.

Parameters

  • options — Per-call HiGHS options dict (overrides set_solver_options).
  • keep_solver — When True, the live HiGHS instance is kept on the returned :class:Solution so callers can inspect it post-solve (e.g. sol.highs.writeModel("model.mps")). Default False — the C-side LP storage is released as soon as primal/dual/objective have been extracted.
  • streaming — When True (default), columns are added once via addCols and each constraint family is emitted to HiGHS via addRows immediately after its COO triples are built; the family's local arrays then go out of scope before the next family is processed. This caps peak memory at one family's COO plus the running HiGHS LP. When False, the entire model is assembled into a single :class:highspy.HighsLp and loaded via passModel — numerically identical results either way; False is mostly useful for benchmarking the legacy path.
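The streaming idea can be sketched without HiGHS at all — here `emit` stands in for the addRows hand-off, and the builders for per-family COO construction (all names hypothetical):

```python
def stream_families(family_builders, emit):
    """Emit each family's COO triples immediately so only one family's
    arrays are alive at a time; returns the peak triple count held."""
    peak = 0
    for build in family_builders:
        rows, cols, vals = build()      # this family's COO triples
        peak = max(peak, len(vals))
        emit(rows, cols, vals)          # addRows-equivalent hand-off
        del rows, cols, vals            # freed before the next family
    return peak

sink = []
peak = stream_families(
    [lambda: ([0, 0], [0, 1], [1.0, -1.0]), lambda: ([1], [0], [2.0])],
    lambda r, c, v: sink.append((r, c, v)),
)
# peak is 2 — never more than one family's triples in flight
```

The non-streaming path would instead concatenate every family's triples before a single pass to the solver, making peak memory proportional to the whole model.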

peek_lp_ranges

peek_lp_ranges(top_k: int = 0) -> dict[str, tuple[float, float] | None | list]

Build the LP into numpy arrays and return coefficient ranges, WITHOUT running HiGHS.

Returns a dict with these keys:
  • 'matrix', 'cost', 'col_bound', 'row_bound' — (abs_min, abs_max) of the finite, non-zero, non-infinity values, or None when empty.
  • When top_k > 0, also 'matrix_smallest', 'matrix_largest', 'cost_smallest', 'cost_largest', 'col_bound_smallest', 'col_bound_largest', 'row_bound_smallest', 'row_bound_largest' — lists of (abs_value, col_name, row_name_or_None) triples (row_name is None for cost / col_bound; col_name is None for row_bound).

The returned ranges are exactly what HiGHS would see during its "Coefficient ranges" diagnostic.

Notes
  • Uses the non-streaming build path (single HighsLp construction).
  • Cost: scans col_obj (per-column objective coefficients).
  • Matrix: scans sorted_v (CSC values, with parallel sorted_r row indices + starts col offsets for name lookup when top_k > 0).
  • Col bounds: scans col-bound arrays, filtered to finite + |v| < kHighsInf + v != 0.
  • Row bounds (RHS): same filter on row-bound arrays.

Var

Var(name: str, dims: tuple[str, ...], frame: DataFrame, lower: float = 0.0, upper: float = float('inf'), integer: bool = False)

A variable family. frame carries columns *dims, col_id.

Var.frame stays an eager polars DataFrame — it's small (one row per LP column), produced once in :meth:Problem.add_var, and consumed by both flextool integration (v.frame["col_id"].unique()) and Problem.solve (col_id → bound/name lookups). Algebra ops on Var lazify on the fly so the resulting _Term is lazy.

Param

Param(dims: tuple[str, ...], frame: DataFrame | LazyFrame, name: str | None = None, _sources: list[tuple[Param, int]] | None = None)

A parameter table. frame carries columns *dims, value.

Stored internally as a polars.LazyFrame so that chained algebra ops (Param * Param, Param + Param etc.) defer materialization until a consumer reads .frame or the engine collects in Problem.solve. The .frame property caches the eager DataFrame on first read — flextool reads .frame.rename(...) repeatedly off the same Param so we want that to be cheap.

name (optional) is a logical Param identifier (e.g. "p_inflow"). It is opt-in metadata used by :class:WarmProblem's Param-tracked auto-update (declare_mutable / update_param). When unset, Params are anonymous and carry no tracking overhead.

_sources records the constituent named Params of composite results built from Param * Param and Param / Param chains. Each entry is (param_name, dims_tuple, direction) where direction is +1 if the Param contributes to the numerator and -1 if to the denominator. Anonymous-only chains leave _sources as None.

frame property

frame: DataFrame

Eager DataFrame view; collects on first read, then caches.

Expr

Expr(terms: list[_Term])

A sum of terms (decision-variable contributions).

The terms can have different open-dim sets — they're concatenated, not broadcast. Broadcasting happens once, at constraint emission, via a join to the constraint's over= row index.

WarmProblem

WarmProblem(problem: Problem)

Warm-update wrapper around a :class:Problem.

Build a :class:Problem as usual (add_var, add_cstr, set_objective). Wrap with :class:WarmProblem, then alternate update_* calls and solve() calls — the LP is built ONCE and only the changed coefficients / RHS values are pushed to HiGHS between solves.

Typical rolling-horizon usage::

wp = WarmProblem(p)
sol_0 = wp.solve()
for r in range(1, n_rolls):
    wp.update_rhs("balance", demand_param_for_roll[r])
    wp.update_obj_coef("v_flow", cost_param_for_roll[r])
    sol_r = wp.solve()

The update_* calls are O(rows_or_cols_in_family); the solve() benefits from HiGHS's hot-start (basis is preserved across calls).

problem property

problem: Problem

The underlying :class:Problem.

Useful for diagnostics that need to inspect the un-built LP — e.g. :meth:Problem.peek_lp_ranges to read coefficient ranges before the first :meth:solve triggers the build.

update_rhs

update_rhs(cstr_name: str, new_param: Param | float | int) -> None

Replace the RHS of constraint family cstr_name with values drawn from new_param.

new_param may be a :class:Param whose dims match a subset of the constraint's over= axis (broadcasting to the rest), a scalar (broadcast to all rows), or a numpy array (positional — length must equal the family's row count, in the order of the original over frame).
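The broadcasting rules can be illustrated with a toy over frame — lists of dicts stand in for the polars frames, and the dict form stands in for a dim-subset Param:

```python
def expand_rhs(over_rows, new, dims=None):
    """Scalar → every row; mapping keyed on a dim subset → join-style broadcast."""
    if isinstance(new, (int, float)):
        return [float(new)] * len(over_rows)
    return [float(new[tuple(r[d] for d in dims)]) for r in over_rows]

over = [{"n": "west", "t": 1}, {"n": "west", "t": 2}, {"n": "east", "t": 1}]
expand_rhs(over, 5)                                            # [5.0, 5.0, 5.0]
expand_rhs(over, {("west",): 10, ("east",): 7}, dims=("n",))   # [10.0, 10.0, 7.0]
```

The positional ndarray form skips the join entirely — values are taken in the row order of the original over frame.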

update_obj_coef

update_obj_coef(var_name: str, new_param: Param | float | int) -> None

Replace the objective coefficient on every column of var_name.

Assumes the objective contribution from var_name is exactly coef[*dims] * var[*dims] for some coef Param; this method OVERWRITES that coefficient via h.changeColsCost. If the objective also has contributions from this variable through more complex algebra (e.g. var * unitsize * slope), the update is still valid as long as new_param carries the full product — the caller is responsible for collapsing multi-Param products ahead of the call.

This DOES NOT touch the cost coefficients of other variables.

update_obj_coef_array

update_obj_coef_array(var_name: str, dim_tuples: list[tuple], values: ndarray) -> None

Array-form of :meth:update_obj_coef.

dim_tuples is a list of dim-value tuples (one per cell) for variable var_name; each tuple must have one entry per dim in the var's declared signature. values is a same-length numpy array of new objective coefficients.

The columns are resolved positionally: values[k] becomes the new objective coefficient on the column whose dim-tuple is dim_tuples[k]. Vectorised — a single changeColsCost call regardless of cell count.
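The positional resolution amounts to the following sketch, where `cost` stands in for the HiGHS column-cost array and `col_of` for the variable's dim-tuple → col_id index (both hypothetical stand-ins):

```python
def apply_obj_coefs(cost, col_of, dim_tuples, values):
    """values[k] lands on the column whose dim-tuple is dim_tuples[k]."""
    for dim_tuple, v in zip(dim_tuples, values):
        cost[col_of[dim_tuple]] = v

cost = [0.0, 0.0, 0.0]
col_of = {("a", 1): 0, ("a", 2): 1, ("b", 1): 2}
apply_obj_coefs(cost, col_of, [("b", 1), ("a", 1)], [9.0, 4.0])
# cost == [4.0, 0.0, 9.0] — column ("a", 2) untouched
```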

fix_cols

fix_cols(var_name: str, dim_tuples: list[tuple], values: ndarray) -> None

Fix the listed columns of var_name to the given values.

For each (dim_tuple, value) pair, sets both the column's lower and upper bound to value (so the LP has no choice but to set the column at that level). Used by the Lagrangian primal-recovery step ("fix-and-resolve"). Vectorised — single changeColsBounds call.

update_coef

update_coef(row: int, col: int, value: float) -> None

Update a single (row, col) coefficient in the constraint matrix. Use :meth:row_id_of_cstr / :meth:col_id_of_var to resolve indices semantically.

declare_mutable

declare_mutable(*param_names: str) -> None

Declare a set of :class:Param names whose values should be tracked into LP cells, so :meth:update_param can later push new values into the live HiGHS instance via changeCoeff.

MUST be called BEFORE the first :meth:solve. Tracking is opt-in: Params not declared here pay no bookkeeping cost.

Pass the same names that the Params carry on their .name field — typically the FlexData attribute name ("p_inflow", "p_penalty_up" etc.).

update_param

update_param(param_name: str, new_param: Param | float | int) -> None

Replace the values of a tracked Param. Every LP cell whose coefficient was originally a function of param_name is re-computed from the new Param's values and pushed via h.changeCoeff.

new_param must be either a scalar (broadcast to all tracked cells) or a :class:Param whose dim signature matches the signature recorded for that Param at build time.

Raises if param_name was not in :meth:declare_mutable's list (silent corruption is worse than a hard error).

col_id_of_var

col_id_of_var(var_name: str, dims: tuple | dict | None = None) -> int | np.ndarray

Return the col_id(s) for a variable.

dims=None returns every col_id in the variable's family (numpy array, ordered by the var's declaration order). dims as a tuple of dim values returns the single col_id for that one cell (a python int). dims as a dict {dim_name: value} is a partial filter — returns a numpy array of the matching col_ids.
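The three lookup modes, mimicked on a toy (dims..., col_id) table (list-of-dicts stand-in for the variable's frame):

```python
def lookup_col_ids(frame, dim_names, dims=None):
    """None → whole family; tuple → single cell; dict → partial filter."""
    if dims is None:
        return [r["col_id"] for r in frame]
    if isinstance(dims, dict):
        return [r["col_id"] for r in frame
                if all(r[k] == v for k, v in dims.items())]
    (col_id,) = [r["col_id"] for r in frame
                 if tuple(r[k] for k in dim_names) == tuple(dims)]
    return col_id

frame = [{"n": "west", "t": 1, "col_id": 0},
         {"n": "west", "t": 2, "col_id": 1},
         {"n": "east", "t": 1, "col_id": 2}]
lookup_col_ids(frame, ("n", "t"))                 # [0, 1, 2]
lookup_col_ids(frame, ("n", "t"), ("east", 1))    # 2
lookup_col_ids(frame, ("n", "t"), {"n": "west"})  # [0, 1]
```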

row_id_of_cstr

row_id_of_cstr(cstr_name: str, axis: tuple | dict | None = None) -> int | np.ndarray

Return the row_id(s) for a constraint family. Mirrors :meth:col_id_of_var.

solve

solve(*, options: dict | None = None) -> Solution

Solve the LP. First call builds the LP from scratch (same pipeline as :meth:Problem.solve); subsequent calls just run HiGHS again on the (possibly updated) live model.

options is honoured on the FIRST solve only — subsequent solves use the same HiGHS instance. To change options on a rebuilt LP, drop the WarmProblem and create a new one.

Solution

Solution(*, optimal: bool, obj: float, col_value: ndarray, row_dual: ndarray, col_names: list[str], row_names: list[str], vars: dict[str, Var], col_dual: ndarray | None = None, highs: Highs | None = None)

Read-only view of the solved LP. Look up variable values by name; values come back as a polars frame (*dims, value).

value

value(var_name: str) -> pl.DataFrame

Long-form per-variable solution: (*dims, value).

value_wide

value_wide(var_name: str, time_dims: tuple[str, ...] = ('d', 't'), solve_name: str | None = None) -> pl.DataFrame

Wide-form, flextool-compatible: time dims become rows, the remaining dims are encoded as a tuple-stringified column header.

For a 3-d variable like vq_state_up(n, d, t):
  long : rows = (n, d, t, value)
  wide : rows = (d, t) + one column per n (header = "west").

For a 5-d variable like v_flow(p, source, sink, d, t):
  wide : rows = (d, t) + one column per (p, source, sink), header = "('coal_plant', 'coal_market', 'west')" to match flextool's MultiIndex parquet round-trip.

If solve_name is given, prepend a constant solve column for fuller flextool-output compatibility.
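The header-encoding rule can be sketched with plain dicts (a pure-Python stand-in for the polars pivot):

```python
def wide_headers(rows, time_dims=("d", "t")):
    """Non-time dims become the column header: the bare value for one dim,
    str(tuple) for several — matching the MultiIndex round-trip format."""
    table = {}
    for r in rows:
        key = tuple(r[d] for d in time_dims)
        rest = tuple(v for k, v in r.items()
                     if k not in time_dims and k != "value")
        header = rest[0] if len(rest) == 1 else str(rest)
        table.setdefault(key, {})[header] = r["value"]
    return table

rows = [{"p": "coal_plant", "source": "coal_market", "sink": "west",
         "d": "d1", "t": 1, "value": 3.5}]
wide_headers(rows)
# {('d1', 1): {"('coal_plant', 'coal_market', 'west')": 3.5}}
```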

constraint_dual

constraint_dual(name: str) -> pl.DataFrame

Per-row dual values for a named constraint. Returns a frame (over_dims..., dual) if the constraint had over= rows, else a single-row scalar frame (dual,).

CstrRecord

CstrRecord(name: str, over, proto: _CstrProto)

Read-only metadata for a registered constraint family.

Returned by :meth:Problem.cstrs_named for emission-introspection tests. proto carries the LHS Expr, sense and rhs structures; most callers only need over (whose height is the row count) and name.

CouplingEntry dataclass

CouplingEntry(subproblem_idx: int, var_name: str, dim_tuples: list[tuple], coef: float = 1.0)

One participant in a :class:CouplingSpec. dim_tuples has one tuple per coupling cell; the dim_tuples lists of all entries in one CouplingSpec must have the same length (entry i's k-th tuple pairs with entry j's k-th tuple under the same λ_k).

CouplingSpec dataclass

CouplingSpec(entries: list[CouplingEntry], rhs: float | ndarray = 0.0, key: object | None = None)

A linear coupling family across subproblems: per cell k, Σ_e coef_e · x[entries[e].cols[k]] = rhs[k]. rhs is a scalar or an array sized to the cell count; default 0.

LagrangianProblem

LagrangianProblem(subproblems: Sequence[Problem], couplings: Sequence[CouplingSpec])

Lagrangian decomposition driver. Build N :class:Problems, list the cross-subproblem :class:CouplingSpecs, then call LagrangianProblem(subproblems, couplings).solve(...).

solve

solve(*, max_iters: int = 100, tol: float = 1.0, step: float = 1.0, initial_lambda: float = 0.0, min_iters: int = 1, primal_tail: int | None = None) -> LagrangianSolution

Run the dual-subgradient loop.

step / √k is the diminishing step size on iteration k. initial_lambda is a non-zero seed (it breaks trivial zero-flow equilibria). min_iters floors the iteration count so the early-termination test can't fire on iteration 1. primal_tail defaults to max(20, max_iters//4) when None.

LagrangianSolution dataclass

LagrangianSolution(converged: bool, iterations: int, total_objective: float, report_kind: str, subproblem_objectives: list[float], iteration_log: list[dict], final_lambdas: list[ndarray], primal_recovery: list[ndarray] = list(), best_dual_total: float = 0.0, recovered_total: float = 0.0)

Result bundle from :meth:LagrangianProblem.solve.

total_objective is the chosen reported total; report_kind is "best_dual" (always for now — best LB across iters). final_lambdas and primal_recovery are ordered like LagrangianProblem.couplings. The trailing iteration_log entry has iter == -1 and carries report_kind / dual / primal summary fields.

Sum

Sum(expr, over: tuple[str, ...] | str | None = None, where: DataFrame | None = None) -> Expr

Aggregate an Expr. over lists the dims to sum out; the remaining dims become the term's open dims. where is an index frame that pre-filters the term frames (inner join on shared columns) before the group-by-sum.

Sum(expr) with over=None collapses every open dim — useful for a scalar (objective term, single-row constraint).
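A value-level analogue of the group-by-sum, with dicts standing in for the lazy polars frames:

```python
def sum_over(rows, over=None):
    """Sum out the dims in `over`; the remaining dims stay open.
    over=None collapses everything to a single scalar under key ()."""
    keep = ([] if over is None else
            [k for k in rows[0] if k != "value" and k not in over])
    acc = {}
    for r in rows:
        key = tuple(r[k] for k in keep)
        acc[key] = acc.get(key, 0.0) + r["value"]
    return acc

rows = [{"n": "west", "t": 1, "value": 2.0},
        {"n": "west", "t": 2, "value": 3.0},
        {"n": "east", "t": 1, "value": 5.0}]
sum_over(rows, over=("t",))   # {('west',): 5.0, ('east',): 5.0}
sum_over(rows)                # {(): 10.0}
```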

Where

Where(expr, frame: DataFrame) -> Expr

Inner-join an Expr against frame. Two effects in one op:

  • Filter — rows of the term whose shared-column values don't appear in frame are dropped (e.g. Where(v_flow, wind_only) keeps only the wind rows).
  • Map — any columns of frame that the term doesn't already carry become new open dims of the resulting term (e.g. Where(v_flow, flow_to_n) where flow_to_n has columns (p, source, sink, n) adds n so the term can be bound to a constraint indexed by (n, t)).
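Both effects fall out of a single inner join; a nested-loop sketch on list-of-dicts stand-ins:

```python
def where_join(term_rows, frame_rows):
    """Inner join on shared columns: non-matching term rows are dropped,
    and frame-only columns become new open dims on the survivors."""
    shared = set(term_rows[0]) & set(frame_rows[0])
    return [{**t, **f}
            for t in term_rows for f in frame_rows
            if all(t[c] == f[c] for c in shared)]

flow = [{"p": "wind_plant", "sink": "west", "coef": 1.0},
        {"p": "coal_plant", "sink": "west", "coef": 1.0}]
flow_to_n = [{"p": "wind_plant", "sink": "west", "n": "west"}]
where_join(flow, flow_to_n)
# [{'p': 'wind_plant', 'sink': 'west', 'coef': 1.0, 'n': 'west'}]
```

The coal row is filtered out (no match on p), and the surviving wind row gains the new open dim n.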

Lag

Lag(var, lag_frame: DataFrame, time_dim: str, lag_col: str) -> Expr

Return an Expr that, for each (carry_dims, time_dim) in lag_frame, references var at (carry_dims, lag_col).

Used for shifting variables in time, e.g. for storage state-change:

v_state[n, d, t] - v_state[n, d, t_prev]
    = v_state - Lag(v_state, dtttdt, "t", "t_prev_within_timeset")

lag_frame carries the (d, t, t_prev) lookup; carry_dims are the columns shared between var and lag_frame other than the time dim itself (typically d).
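With d as the only carry dim, the lookup amounts to the following sketch over a {(d, t): value} mapping (toy stand-ins for var and lag_frame):

```python
def lag_lookup(var_vals, lag_rows, time_dim="t", lag_col="t_prev"):
    """Each output cell (d, t) reads var's value at (d, t_prev)."""
    return {(r["d"], r[time_dim]): var_vals[(r["d"], r[lag_col])]
            for r in lag_rows}

state = {("d1", 1): 10.0, ("d1", 2): 7.0}
lag_frame = [{"d": "d1", "t": 2, "t_prev": 1}]
lag_lookup(state, lag_frame)   # {('d1', 2): 10.0}
```

Rows of lag_frame with no predecessor (e.g. the first step of a timeset) are simply absent, so the lagged term contributes nothing there.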

Modules

polar_high.engine

Generic polars-backed LP kernel.

Three primitives — Var, Param, Sum — and one container (Problem). Knows nothing about flextool, energy systems, or any specific model. A constraint is built as either:

  1. an Expr produced by overloaded operators (v <= cap, Sum(...) >= rhs, lhs.eq(rhs)), passed positionally to Problem.add_cstr; or

  2. a labelled terms dict, summed across all entries, with an explicit sense and rhs. Use this when a constraint is naturally a sum of named contributions (storage transitions, sink flow, source flow, slack — like flextool's nodeBalance_eq).

A variable is a polars frame (*dims, col_id) — one LP column per row. A parameter is a polars frame (*dims, value). Var * Param joins on shared dims and emits an Expr term (*union_dims, col_id, coef). Sum(expr, over=…) group-by-sums one or more dims; the remaining dims become the constraint's row dims when the term is bound to over= at add_cstr time.

Param

Param(dims: tuple[str, ...], frame: DataFrame | LazyFrame, name: str | None = None, _sources: list[tuple[Param, int]] | None = None)

A parameter table. frame carries columns *dims, value.

Stored internally as a polars.LazyFrame so that chained algebra ops (Param * Param, Param + Param etc.) defer materialization until a consumer reads .frame or the engine collects in Problem.solve. The .frame property caches the eager DataFrame on first read — flextool reads .frame.rename(...) repeatedly off the same Param so we want that to be cheap.

name (optional) is a logical Param identifier (e.g. "p_inflow"). It is opt-in metadata used by :class:WarmProblem's Param-tracked auto-update (declare_mutable / update_param). When unset, Params are anonymous and carry no tracking overhead.

_sources records constituent named Params for composite results of Param * Param / Param / Param. Each entry is (param_name, dims_tuple, direction) where direction is +1 if the Param contributes to the numerator and -1 if to the denominator. Anonymous-only chains have _sources is None.

frame property
frame: DataFrame

Eager DataFrame view; collects on first read, then caches.

Var

Var(name: str, dims: tuple[str, ...], frame: DataFrame, lower: float = 0.0, upper: float = float('inf'), integer: bool = False)

A variable family. frame carries columns *dims, col_id.

Var.frame stays an eager polars DataFrame — it's small (one row per LP column), produced once in :meth:Problem.add_var, and consumed by both flextool integration (v.frame["col_id"].unique()) and Problem.solve (col_id → bound/name lookups). Algebra ops on Var lazify on the fly so the resulting _Term is lazy.

Expr

Expr(terms: list[_Term])

A sum of terms (decision-variable contributions).

The terms can have different open-dim sets — they're concatenated, not broadcast. Broadcasting happens once, at constraint emission, via a join to the constraint's over= row index.

CstrRecord

CstrRecord(name: str, over, proto: _CstrProto)

Read-only metadata for a registered constraint family.

Returned by :meth:Problem.cstrs_named for emission-introspection tests. proto carries the LHS Expr, sense and rhs structures; most callers only need over (whose height is the row count) and name.

Problem

Problem()

LP container. Generic — no flextool-specific knowledge.

set_solver_options
set_solver_options(options: dict | None) -> None

Store HiGHS options to be applied in solve(). Pass None to clear. Keys are HiGHS canonical option names (presolve, solver, parallel, time_limit etc); values must be already coerced to the type HiGHS expects (str/int/float/bool). Unknown keys are tolerated (a warning is emitted at solve time).

add_cstr
add_cstr(name: str, *, over: DataFrame | None = None, sense: str, lhs_terms: dict[str, Var | Expr | Param | int | float], rhs_terms: dict[str, Var | Expr | Param | int | float] | None = None) -> None

Add a constraint of the form Σ lhs_terms sense Σ rhs_terms.

Each term entry is either
  • a Var or Expr — variable contribution, or
  • a Param, int or float — constant contribution.

The engine sorts variables and constants out per side, builds (lhs_var − rhs_var) sense (rhs_const − lhs_const), and adds the row to highspy at solve time. Labels (the dict keys) are used in row names and diagnostics.

cstr_names
cstr_names() -> list[str]

All constraint family names currently registered, in declaration order. Useful for emission audits and debugging.

cstrs_named
cstrs_named(name: str) -> list[CstrRecord]

Return constraint metadata records matching name.

An exact-name match returns the single record; otherwise a prefix match returns every record whose name starts with name + "_" (so passing "minimum_uptime" returns both minimum_uptime_linear and minimum_uptime_integer).

Each :class:CstrRecord carries: * name: full registered name of the constraint family; * over: the polars DataFrame of axis tuples (len(over) is the row count); * proto: the underlying _CstrProto (expr, sense, rhs) for advanced introspection.

cstr_row_count
cstr_row_count(name: str) -> int

Total LP-row count across all constraint families matching name (exact or prefix; see :meth:cstrs_named). Returns 0 when no families match — letting callers distinguish "absent" from "empty" without exception handling. A scalar constraint (over=None) counts as one row.

add_obj_constant
add_obj_constant(value: float) -> None

Accumulate a constant into the objective offset. HiGHS adds this to the reported getObjectiveValue() after solve, so it shows up in Solution.obj even though no decision variable carries it. Used for pure-Param objective terms like the §8.1 existing-entity fixed cost.

solve
solve(*, options: dict | None = None, keep_solver: bool = False, streaming: bool = True) -> Solution

Solve the LP and return a :class:Solution.

Parameters

options Per-call HiGHS options dict (overrides set_solver_options). keep_solver When True, the live HiGHS instance is kept on the returned :class:Solution so callers can inspect it post-solve (e.g. sol.highs.writeModel("model.mps")). Default False — the C-side LP storage is released as soon as primal/dual/ objective have been extracted. streaming When True (default), columns are added once via addCols and each constraint family is emitted to HiGHS via addRows immediately after its COO triples are built; the family's local arrays then go out of scope before the next family is processed. This caps peak memory at one family's COO + the running HiGHS LP. When False, the entire model is assembled into a single :class:highspy.HighsLp and loaded via passModel — numerically identical results either way; False is mostly useful for benchmarking the legacy path.

peek_lp_ranges
peek_lp_ranges(top_k: int = 0) -> dict[str, tuple[float, float] | None | list]

Build the LP into numpy arrays and return coefficient ranges, WITHOUT running HiGHS.

Returns a dict with these keys: * 'matrix', 'cost', 'col_bound', 'row_bound'(abs_min, abs_max) of the finite, non-zero, non-infinity values, or None when empty. * When top_k > 0, also includes 'matrix_smallest', 'matrix_largest', 'cost_smallest', 'cost_largest', 'col_bound_smallest', 'col_bound_largest', 'row_bound_smallest', 'row_bound_largest' — lists of (abs_value, col_name, row_name_or_None) triples (row_name is None for cost / col_bound; col_name is None for row_bound).

The returned ranges are exactly what HiGHS would see during its "Coefficient ranges" diagnostic.

Notes
  • Uses the non-streaming build path (single HighsLp construction).
  • Cost: scans col_obj (per-column objective coefficients).
  • Matrix: scans sorted_v (CSC values, with parallel sorted_r row indices + starts col offsets for name lookup when top_k > 0).
  • Col bounds: scans col-bound arrays, filtered to finite + |v| < kHighsInf + v != 0.
  • Row bounds (RHS): same filter on row-bound arrays.

Solution

Solution(*, optimal: bool, obj: float, col_value: ndarray, row_dual: ndarray, col_names: list[str], row_names: list[str], vars: dict[str, Var], col_dual: ndarray | None = None, highs: Highs | None = None)

Read-only view of the solved LP. Look up variable values by name; values come back as a polars frame (*dims, value).

value
value(var_name: str) -> pl.DataFrame

Long-form per-variable solution: (*dims, value).

value_wide
value_wide(var_name: str, time_dims: tuple[str, ...] = ('d', 't'), solve_name: str | None = None) -> pl.DataFrame

Wide-form, flextool-compatible: time dims become rows, the remaining dims are encoded as a tuple-stringified column header.

For a 2-d variable like vq_state_up(n, d, t): long : rows = (n, d, t, value) wide : rows = (d, t) + one column per n (header = "west").

For a 5-d variable like v_flow(p, source, sink, d, t): wide : rows = (d, t) + one column per (p, source, sink), header = "('coal_plant', 'coal_market', 'west')" to match flextool's MultiIndex parquet round-trip.

If solve_name is given, prepend a constant solve column for fuller flextool-output compatibility.

constraint_dual
constraint_dual(name: str) -> pl.DataFrame

Per-row dual values for a named constraint. Returns a frame (over_dims..., dual) if the constraint had over= rows, else a single-row scalar frame (dual,).

WarmProblem

WarmProblem(problem: Problem)

Warm-update wrapper around a :class:Problem.

Build a :class:Problem as usual (add_var, add_cstr, set_objective). Wrap with :class:WarmProblem, then alternate update_* calls and solve() calls — the LP is built ONCE and only the changed coefficients / RHS values are pushed to HiGHS between solves.

Typical rolling-horizon usage::

wp = WarmProblem(p)
sol_0 = wp.solve()
for r in range(1, n_rolls):
    wp.update_rhs("balance", demand_param_for_roll[r])
    wp.update_obj_coef("v_flow", cost_param_for_roll[r])
    sol_r = wp.solve()

The update_* calls are O(rows_or_cols_in_family); the solve() benefits from HiGHS's hot-start (basis is preserved across calls).

problem property
problem: Problem

The underlying :class:Problem.

Useful for diagnostics that need to inspect the un-built LP — e.g. :meth:Problem.peek_lp_ranges to read coefficient ranges before the first :meth:solve triggers the build.

update_rhs
update_rhs(cstr_name: str, new_param: Param | float | int) -> None

Replace the RHS of constraint family cstr_name with values drawn from new_param.

new_param may be a :class:Param whose dims match a subset of the constraint's over= axis (broadcasting to the rest), a scalar (broadcast to all rows), or a numpy array (positional — length must equal the family's row count, in the order of the original over frame).

update_obj_coef
update_obj_coef(var_name: str, new_param: Param | float | int) -> None

Replace the objective coefficient on every column of var_name.

Assumes the objective contribution from var_name is exactly coef[*dims] * var[*dims] for some coef Param; this method OVERWRITES that coefficient via h.changeColsCost. If the objective also has contributions from this variable through more complex algebra (e.g. var * unitsize * slope), the update is still valid as long as new_param carries the full product — the caller is responsible for collapsing multi-Param products ahead of the call.

This DOES NOT touch the cost coefficients of other variables.

update_obj_coef_array
update_obj_coef_array(var_name: str, dim_tuples: list[tuple], values: ndarray) -> None

Array-form of :meth:update_obj_coef.

dim_tuples is a list of dim-value tuples (one per cell) for variable var_name; each tuple must have one entry per dim in the var's declared signature. values is a same-length numpy array of new objective coefficients.

The columns are resolved positionally: values[k] becomes the new objective coefficient on the column whose dim-tuple is dim_tuples[k]. Vectorised — a single changeColsCost call regardless of cell count.

fix_cols
fix_cols(var_name: str, dim_tuples: list[tuple], values: ndarray) -> None

Fix the listed columns of var_name to the given values.

For each (dim_tuple, value) pair, sets both the column's lower and upper bound to value (so the LP has no choice but to set the column at that level). Used by the Lagrangian primal-recovery step ("fix-and-resolve"). Vectorised — single changeColsBounds call.

update_coef
update_coef(row: int, col: int, value: float) -> None

Update a single (row, col) coefficient in the constraint matrix. Use :meth:row_id_of_cstr / :meth:col_id_of_var to resolve indices semantically.

declare_mutable
declare_mutable(*param_names: str) -> None

Declare a set of :class:Param names whose values should be tracked into LP cells, so :meth:update_param can later push new values into the live HiGHS instance via changeCoeff.

MUST be called BEFORE the first :meth:solve. Tracking is opt-in: Params not declared here pay no bookkeeping cost.

Pass the same names that the Params carry on their .name field — typically the FlexData attribute name ("p_inflow", "p_penalty_up" etc.).

update_param
update_param(param_name: str, new_param: Param | float | int) -> None

Replace the values of a tracked Param. Every LP cell whose coefficient was originally a function of param_name is re-computed from the new Param's values and pushed via h.changeCoeff.

new_param must be either a scalar (broadcast to all tracked cells) or a :class:Param whose dim signature matches the signature recorded for that Param at build time.

Raises if param_name was not in :meth:declare_mutable's list (silent corruption is worse than a hard error).

col_id_of_var
col_id_of_var(var_name: str, dims: tuple | dict | None = None) -> int | np.ndarray

Return the col_id(s) for a variable.

dims=None returns every col_id in the variable's family (numpy array, ordered by the var's declaration order). dims as a tuple of dim values returns the single col_id for that one cell (a python int). dims as a dict {dim_name: value} is a partial filter — returns a numpy array of the matching col_ids.

row_id_of_cstr
row_id_of_cstr(cstr_name: str, axis: tuple | dict | None = None) -> int | np.ndarray

Return the row_id(s) for a constraint family. Mirrors :meth:col_id_of_var.

solve
solve(*, options: dict | None = None) -> Solution

Solve the LP. First call builds the LP from scratch (same pipeline as :meth:Problem.solve); subsequent calls just run HiGHS again on the (possibly updated) live model.

options is honoured on the FIRST solve only — subsequent solves use the same HiGHS instance. To change options on a rebuilt LP, drop the WarmProblem and create a new one.

Lag

Lag(var, lag_frame: DataFrame, time_dim: str, lag_col: str) -> Expr

Return an Expr that, for each (carry_dims, time_dim) in lag_frame, references var at (carry_dims, lag_col).

Used for shifting variables in time, e.g. for storage state-change:

v_state[n, d, t]  -  v_state[n, d, t_prev]

= v_state - Lag(v_state, dtttdt, "t", "t_prev_within_timeset")

lag_frame carries the (d, t, t_prev) lookup; carry_dims are the columns shared between var and lag_frame other than the time dim itself (typically d).

Where

Where(expr, frame: DataFrame) -> Expr

Inner-join an Expr against frame. Two effects in one op:

  • Filter — rows of the term whose shared-column values don't appear in frame are dropped (e.g. Where(v_flow, wind_only) keeps only the wind rows).
  • Map — any columns of frame that the term doesn't already carry become new open dims of the resulting term (e.g. Where(v_flow, flow_to_n) where flow_to_n has columns (p, source, sink, n) adds n so the term can be bound to a constraint indexed by (n, t)).
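Both effects fall out of one inner join. A plain-Python sketch (rows and dim names hypothetical):

```python
# Term rows keyed by (p, t); frame has columns (p, n): p filters, n maps.
term = {("wind", 1): 2.0, ("wind", 2): 3.0, ("gas", 1): 9.0}
flow_to_n = [("wind", "north")]            # rows of the join frame

joined = {}
for (p, t), v in term.items():
    for fp, n in flow_to_n:
        if p == fp:                        # filter: non-matching rows dropped
            joined[(p, n, t)] = v          # map: n becomes a new open dim
# joined == {("wind", "north", 1): 2.0, ("wind", "north", 2): 3.0}
```

The gas row disappears (filter), and every surviving row gains an n value (map), so the result can be bound to a constraint indexed by (n, t).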

Sum

Sum(expr, over: tuple[str, ...] | str | None = None, where: DataFrame | None = None) -> Expr

Aggregate an Expr. over lists the dims to sum out; the remaining dims become the term's open dims. where is an index frame that pre-filters the term frames (inner join on shared columns) before the group-by-sum.

Sum(expr) with over=None collapses every open dim — useful for a scalar (objective term, single-row constraint).
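A group-by-sum sketch of both call shapes (rows and dim names hypothetical):

```python
# Term rows keyed by (p, t).
term = {("wind", 1): 2.0, ("wind", 2): 3.0, ("gas", 1): 9.0}

# Sum(expr, over="p"): sum out p; t remains the open dim.
by_t = {}
for (p, t), v in term.items():
    by_t[t] = by_t.get(t, 0.0) + v
# by_t == {1: 11.0, 2: 3.0}

# Sum(expr) with over=None: collapse every open dim to a scalar.
total = sum(term.values())   # 14.0
```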

polar_high.lagrangian

Generic Lagrangian decomposition for coupled :class:Problems.

A domain-agnostic dual-subgradient driver for N independent LP subproblems linked by linear coupling constraints

Σ_i  coef_i · col_i  =  rhs

Each :class:CouplingSpec carries a list of (subproblem_idx, var_name, dim_tuple, coef) entries plus an optional rhs (default 0). The most common use is the 2-entry consensus coupling x_A == x_B with coefs +1 / -1, rhs 0.

Algorithm
  1. Bump each entry's column cost by coef · λ (relaxes the coupling residual into the objective).
  2. Solve every subproblem (warm-started after iter 1).
  3. Compute residual Σ coef_i · x_i − rhs per cell.
  4. Subgradient step λ ← λ + (step / √k) · residual.
  5. Tail-window primal averaging, then fix-and-resolve, yields a feasible primal upper bound; the best dual (max Σ obj across iters) is reported as the lower bound.
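Steps 1–4 can be sketched on a toy problem where each relaxed subproblem solves in closed form — a minimal numpy illustration of the loop, not the driver itself (costs, bounds, and the closed-form solve are assumptions for the sketch):

```python
import numpy as np

# Toy consensus coupling x_A == x_B (coefs +1/-1, rhs 0). Each relaxed
# subproblem min (c_i + coef_i*lam) * x over [0, 10] solves in closed form.
c = np.array([1.0, -2.0])        # subproblem costs
coefs = np.array([1.0, -1.0])    # coupling coefficients
ub, step, lam = 10.0, 1.0, 0.0
for k in range(1, 51):
    # steps 1-2: bump costs by coef*lam, solve each subproblem
    x = np.where(c + coefs * lam < 0, ub, 0.0)
    residual = float(coefs @ x)                  # step 3: coupling residual
    lam += (step / np.sqrt(k)) * residual        # step 4: subgradient update
# Once lam settles in the dual-optimal band, both x's agree and residual is 0.
```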

Knows nothing about half-flows or regions — that lives in the flextool-side wrapper.

CouplingEntry dataclass

CouplingEntry(subproblem_idx: int, var_name: str, dim_tuples: list[tuple], coef: float = 1.0)

One participant in a :class:CouplingSpec. dim_tuples has one tuple per coupling cell; all entries in one CouplingSpec must have dim_tuples of the same length, since tuple k of entry i pairs with tuple k of entry j under the same multiplier λ_k.

CouplingSpec dataclass

CouplingSpec(entries: list[CouplingEntry], rhs: float | ndarray = 0.0, key: object | None = None)

A linear coupling family across subproblems: per cell k, Σ_e coef_e · x[entries[e].dim_tuples[k]] = rhs[k]. rhs is a scalar or an array sized to the cell count; default 0.
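The per-cell residual for the common 2-entry consensus coupling, sketched with plain dicts standing in for subproblem solutions (values and dim tuples hypothetical):

```python
import numpy as np

# Subproblem solutions keyed by dim tuple; dim_tuples align cell-by-cell.
x_A = {("n1", 1): 4.0, ("n1", 2): 6.0}
x_B = {("n1", 1): 3.0, ("n1", 2): 6.0}
entries = [(x_A, [("n1", 1), ("n1", 2)], +1.0),   # (values, dim_tuples, coef)
           (x_B, [("n1", 1), ("n1", 2)], -1.0)]
rhs = 0.0

# residual[k] = sum_e coef_e * x[entries[e].dim_tuples[k]] - rhs
residual = np.array(
    [sum(coef * vals[tups[k]] for vals, tups, coef in entries) - rhs
     for k in range(2)])
# residual == array([1., 0.])  -- cell 0 disagrees, cell 1 is in consensus
```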

LagrangianSolution dataclass

LagrangianSolution(converged: bool, iterations: int, total_objective: float, report_kind: str, subproblem_objectives: list[float], iteration_log: list[dict], final_lambdas: list[ndarray], primal_recovery: list[ndarray] = list(), best_dual_total: float = 0.0, recovered_total: float = 0.0)

Result bundle from :meth:LagrangianProblem.solve.

total_objective is the chosen reported total; report_kind is "best_dual" (always for now — best LB across iters). final_lambdas and primal_recovery are ordered like LagrangianProblem.couplings. The trailing iteration_log entry has iter == -1 and carries report_kind / dual / primal summary fields.

LagrangianProblem

LagrangianProblem(subproblems: Sequence[Problem], couplings: Sequence[CouplingSpec])

Lagrangian decomposition driver. Build N :class:Problems, list the cross-subproblem :class:CouplingSpecs, then call LagrangianProblem(subproblems, couplings).solve(...).

solve
solve(*, max_iters: int = 100, tol: float = 1.0, step: float = 1.0, initial_lambda: float = 0.0, min_iters: int = 1, primal_tail: int | None = None) -> LagrangianSolution

Run the dual-subgradient loop.

step / √k is the diminishing step on iter k. initial_lambda is a non-zero seed (breaks trivial 0-flow equilibria). min_iters floors the iteration count so the early-termination test can't fire on iter 1. primal_tail defaults to max(20, max_iters//4).
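The schedule and the primal_tail default, computed directly from the documented formulas:

```python
import math

max_iters, step = 100, 1.0
steps = [step / math.sqrt(k) for k in range(1, 4)]   # iters 1..3
# steps == [1.0, 0.7071..., 0.5773...]
primal_tail = max(20, max_iters // 4)                # == 25
```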