Learning to Segment for Vehicle Routing Problems
Wenbin Ouyang, Sirui Li, Yining Ma, Cathy Wu
Abstract
Iterative heuristics are widely recognized as state-of-the-art for Vehicle Routing Problems (VRPs). In this work, we exploit a critical observation: a large portion of the solution remains stable, i.e., unchanged across search iterations, causing redundant computations, especially for large-scale VRPs with long subtours. To address this, we pioneer the formal study of the First-Segment-Then-Aggregate (FSTA) decomposition technique to accelerate iterative solvers. FSTA preserves stable solution segments during the search, aggregates nodes within each segment into fixed hypernodes, and focuses the search only on unstable portions. Yet, a key challenge lies in identifying which segments should be aggregated. To this end, we introduce Learning-to-Segment (L2Seg), a novel neural framework to intelligently differentiate potentially stable and unstable portions for FSTA decomposition. We present three L2Seg variants: non-autoregressive (globally comprehensive but locally indiscriminate), autoregressive (locally refined but globally deficient), and their synergy. Empirical results on CVRP and VRPTW show that L2Seg accelerates state-of-the-art solvers by 2x to 7x. We further provide in-depth analysis showing why synergy achieves the best performance. Notably, L2Seg is compatible with traditional, learning-based, and hybrid solvers, while supporting various VRPs.
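The FSTA idea in the abstract can be illustrated with a toy sketch. This is not the paper's implementation; the route representation, the per-node `stable` mask, and the tuple-as-hypernode encoding are all illustrative assumptions. Consecutive stable nodes are collapsed into a single hypernode so an iterative solver only has to reason about the remaining unstable nodes.

```python
# Toy sketch of First-Segment-Then-Aggregate (FSTA) decomposition.
# Assumptions (not from the paper): a route is a list of node ids, and
# `stable` marks nodes predicted to stay unchanged across search
# iterations. Maximal runs of stable nodes are collapsed into one
# hypernode (a tuple), shrinking the effective problem size.

def aggregate_route(route, stable):
    """Collapse maximal runs of stable nodes into hypernodes (tuples)."""
    aggregated, run = [], []
    for node, is_stable in zip(route, stable):
        if is_stable:
            run.append(node)
        else:
            if run:                      # close the current stable segment
                aggregated.append(tuple(run))
                run = []
            aggregated.append(node)      # unstable node: left free for search
    if run:
        aggregated.append(tuple(run))
    return aggregated

def expand_route(aggregated):
    """Invert aggregation: flatten hypernodes back to the node sequence."""
    out = []
    for item in aggregated:
        if isinstance(item, tuple):
            out.extend(item)
        else:
            out.append(item)
    return out

route  = [1, 2, 3, 4, 5, 6]
stable = [True, True, False, True, True, True]
agg = aggregate_route(route, stable)
print(agg)                               # [(1, 2), 3, (4, 5, 6)]
assert expand_route(agg) == route
```

In this sketch, the search over the aggregated route touches 3 items instead of 6; the paper's 2x-7x speedups come from applying this reduction (with learned segment predictions) inside state-of-the-art iterative VRP solvers.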
L2Seg accelerates vehicle routing solvers 2-7x by learning to identify stable and unstable solution segments.
- Formalizes First-Segment-Then-Aggregate (FSTA) decomposition technique to accelerate iterative solvers
- Introduces Learning-to-Segment (L2Seg) neural framework to differentiate stable and unstable solution portions
- Presents three L2Seg variants: non-autoregressive, autoregressive, and their synergy
- Compatible with traditional, learning-based, and hybrid solvers, and supports various VRP variants
- Encoder-decoder neural networks
- Autoregressive models
- Non-autoregressive models
- Combinatorial optimization
- CVRP
- VRPTW
Limitations
- Not guaranteed to boost all VRP solvers across all VRP variants (from the paper)
Future directions
- Extend L2Seg to accelerate additional VRP solvers (from the paper)
- Apply L2Seg to more VRP variants and other combinatorial optimization problems (from the paper)
- Expand the synergy between autoregressive and non-autoregressive models to the broader NCO community (from the paper)
Author keywords
- Learning-Guided Optimization
- Vehicle Routing Problem
Related orals
On The Surprising Effectiveness of a Single Global Merging in Decentralized Learning
Shows decentralized learning with single global merging achieves convergence rates matching parallel SGD under data heterogeneity.
Non-Convex Federated Optimization under Cost-Aware Client Selection
Develops efficient federated optimization algorithm with cost-aware client selection achieving best communication and local complexity.
Fast Escape, Slow Convergence: Learning Dynamics of Phase Retrieval under Power-Law Data
Analyzes phase retrieval learning dynamics with anisotropic data, deriving explicit scaling laws and three-phase trajectories.
A Representer Theorem for Hawkes Processes via Penalized Least Squares Minimization
Representer theorem for Hawkes processes shows dual coefficients are analytically fixed to unity via penalized least squares.
Quantitative Bounds for Length Generalization in Transformers
Quantitative bounds show training length required for length generalization depends on periodicity, locality, alphabet size, and model norms.