Lagrangian decomposition¶
`LagrangianProblem` is a domain-agnostic dual-subgradient driver for
N independent LP subproblems linked by linear coupling constraints.
Building blocks¶
- `CouplingEntry` — one participant: `(subproblem_idx, var_name, dim_tuples, coef)`. One entry per subproblem that owns a coupled cell.
- `CouplingSpec` — one coupling family: a list of entries plus an optional `rhs` (default 0). The most common shape is the 2-entry consensus coupling `x_A == x_B` with coefs `+1/-1`.
- `LagrangianProblem` — assembles the subproblems and the couplings, runs the subgradient loop, and returns a `LagrangianSolution`.
Algorithm sketch¶
- Bump each entry's column cost by `coef · λ` (relaxes the coupling residual into the objective).
- Solve every subproblem (warm-started after iter 1 via `WarmProblem`).
- Compute the residual `Σ coef_i · x_i − rhs` per cell.
- Subgradient step `λ ← λ + (step / √k) · residual`.
- Tail-window primal averaging → fix-and-resolve for a feasible primal upper bound; report the best dual (max `Σ obj` across iters) as the tight lower bound.
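The loop above can be traced in plain Python on a two-subproblem consensus coupling (the same shape as the example below). This is an illustrative sketch, not the library's implementation: `solve_sub` and the scalar bookkeeping are stand-ins for the real subproblem solves, and warm starts and primal recovery are omitted.

```python
# Minimal subgradient sketch for the consensus example:
#   A: min flow_A s.t. flow_A >= 4;  B: min flow_B s.t. flow_B >= 2;
#   coupling flow_A - flow_B == 0 with multiplier lam.

def solve_sub(cost: float, floor: float, cap: float = 100.0) -> float:
    """min cost*x over floor <= x <= cap: the minimizer sits at the
    floor when the (dualized) cost is non-negative, else at the cap."""
    return floor if cost >= 0.0 else cap

lam = 0.0
best_dual = float("-inf")
for k in range(1, 201):
    # Bump each column cost by coef * lam (+1 for A, -1 for B).
    x_a = solve_sub(1.0 + lam, 4.0)
    x_b = solve_sub(1.0 - lam, 2.0)
    # Dual value = sum of dualized subproblem objectives (rhs is 0).
    best_dual = max(best_dual, (1.0 + lam) * x_a + (1.0 - lam) * x_b)
    residual = x_a - x_b                  # sum coef_i * x_i - rhs
    lam += (0.5 / k ** 0.5) * residual    # diminishing step / sqrt(k)

print(best_dual)  # reaches the documented dual bound 8.0
```

The dual function here is piecewise linear and peaks at `λ = 1`, where the relaxed objective equals the primal optimum of 8 — the same bound the full driver reports.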
Minimal example¶
Two subproblems each with a local demand floor, linked by a consensus
coupling that forces them to agree on a common flow value. The model
code is sourced from `tests/fixtures/lagrangian_example.py` and is
verified by `tests/test_lagrangian_example.py`.
Build the subproblems:
import polars as pl
import polar_high as ph
from polar_high import CouplingEntry, CouplingSpec, LagrangianProblem
# Sub-problem A: min flow_A s.t. flow_A >= 4, 0 <= flow_A <= 100
sub_a = ph.Problem()
idx_a = pl.DataFrame({"k": [0]})
x_a = sub_a.add_var("flow", "k", idx_a, lower=0.0, upper=100.0)
sub_a.add_cstr(
    "local_demand",
    over=None,
    sense=">=",
    lhs_terms={"flow": ph.Sum(x_a, over=("k",))},
    rhs_terms={"d": 4.0},
)
sub_a.set_objective(ph.Sum(x_a), sense="min")
# Sub-problem B: min flow_B s.t. flow_B >= 2, 0 <= flow_B <= 100
sub_b = ph.Problem()
idx_b = pl.DataFrame({"k": [0]})
x_b = sub_b.add_var("flow", "k", idx_b, lower=0.0, upper=100.0)
sub_b.add_cstr(
    "local_demand",
    over=None,
    sense=">=",
    lhs_terms={"flow": ph.Sum(x_b, over=("k",))},
    rhs_terms={"d": 2.0},
)
sub_b.set_objective(ph.Sum(x_b), sense="min")
Assemble and solve:
# Consensus coupling: flow_A == flow_B (coefs +1 / -1, rhs 0)
# dim_tuples lists the variable cells that participate — one tuple per
# cell, matching the variable's dim values in order.
coupling = CouplingSpec(
    entries=[
        CouplingEntry(0, "flow", [(0,)], coef=+1.0),
        CouplingEntry(1, "flow", [(0,)], coef=-1.0),
    ],
    rhs=0.0,
)
lp = LagrangianProblem([sub_a, sub_b], [coupling])
sol = lp.solve(max_iters=200, tol=1e-9, step=0.5, min_iters=20)
print(f"best dual: {sol.total_objective}") # 8.0
print(f"iterations: {sol.iterations}")
Inspect the result:
sol.total_objective # best dual bound (== 8.0 for this LP)
sol.final_lambdas # one array per coupling
sol.iteration_log # per-iter diagnostics
`dim_tuples` lists the variable cells that participate in the coupling —
one tuple per cell, with values matching the variable's dim columns in
declaration order. Here `"flow"` has a single dim `"k"` with one row
`(0,)`, so `dim_tuples=[(0,)]`.
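The same rule extends to variables with more dims or rows: each tuple carries the index-row values in declaration order, filtered to the cells you want to couple. A hedged sketch (the dim names and rows below are hypothetical, not part of the library):

```python
# Hypothetical index rows for a variable declared over dims ("node", "t").
rows = [("n1", 0), ("n1", 1), ("n2", 0)]

# Couple only the t == 0 cells: one tuple per cell, values in dim order.
dim_tuples = [r for r in rows if r[1] == 0]
print(dim_tuples)  # [('n1', 0), ('n2', 0)]
```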
When to use it¶
Lagrangian is worth the iterations when:
- Subproblems have block structure (each is much faster than the monolith);
- Coupling is small (few λ per cell) compared to total state;
- A loose primal bound is acceptable, or you can afford the tail primal-recovery resolve.
If the monolith already solves quickly, just solve the monolith.
See also¶
`tests/test_lagrangian.py` for closed-form parity tests on tiny synthetic LPs — the cleanest place to learn the API.