A Representer Theorem for Hawkes Processes via Penalized Least Squares Minimization
A representer theorem for Hawkes processes shows that, under penalized least-squares estimation, the dual coefficients are analytically fixed to unity.
Optimization, learning theory, generalization bounds, convergence analysis, statistical learning.
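As a hedged sketch of the setting (the notation below is assumed for illustration, not taken from the paper): for events t_1 < ... < t_n with conditional intensity lambda_g(t) = mu + sum_{t_i < t} g(t - t_i) and g in an RKHS H with kernel K, the penalized least-squares problem and its representer-theorem expansion take the generic form

```latex
\hat g \;=\; \arg\min_{g \in \mathcal{H}} \;
  \int_0^T \lambda_g(t)^2 \, dt \;-\; 2 \sum_{i=1}^{n} \lambda_g(t_i)
  \;+\; \kappa \, \lVert g \rVert_{\mathcal{H}}^2,
\qquad
\hat g(\cdot) \;=\; \sum_{i < j} \alpha_{ij} \, K\!\left(\cdot,\; t_j - t_i\right),
```

and the summary's headline result is that the dual coefficients \alpha_{ij} are pinned analytically (to unity) rather than recovered from a large linear system.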
Theoretical bounds on the connectivity and diameter of the polyhedral complex reveal fundamental geometric properties of ReLU networks.
EBTs (Energy-Based Transformers) frame System 2 thinking as energy minimization, enabling reasoning to emerge at inference time across modalities.
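A minimal sketch of the inference pattern, using a stand-in convex quadratic energy rather than anything from the paper:

```python
import numpy as np

# Inference-time "thinking" as energy minimization: the prediction y is
# refined by gradient descent on an energy E(x, y). E here is a stand-in
# convex quadratic; in an EBT it would be a learned transformer scoring
# (x, y) jointly. All names below are illustrative.

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))
A = W @ W.T + np.eye(4)                      # positive definite -> convex energy

def energy(x, y):
    r = y - x
    return 0.5 * r @ A @ r

def grad_y(x, y):
    return A @ (y - x)

x = rng.normal(size=4)                       # conditioning input
y = np.zeros(4)                              # initial guess
lr = 1.0 / np.linalg.eigvalsh(A).max()       # safe step size for this quadratic
for _ in range(50):                          # more steps = more "thinking"
    y -= lr * grad_y(x, y)

print("final energy:", energy(x, y))         # decreases toward the minimum at y = x
```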
Analyzes phase retrieval learning dynamics with anisotropic data, deriving explicit scaling laws and three-phase trajectories.
Frozen-PINNs employ space-time separation with random features for fast, accurate PDE solving without gradient descent.
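A hedged illustration of the random-feature recipe on a 1D Poisson problem (a generic sketch, not the Frozen-PINNs code):

```python
import numpy as np

# Solve -u''(x) = f(x) on [0, 1] with u(0) = u(1) = 0 using frozen random
# tanh features: only the linear output weights are fit, by one
# least-squares solve, with no gradient descent.

rng = np.random.default_rng(1)
M = 200                                     # number of frozen random features
w = rng.normal(scale=10.0, size=M)          # frozen input weights
b = rng.uniform(-10.0, 10.0, size=M)        # frozen biases

def features(x):                            # phi_j(x) = tanh(w_j x + b_j)
    return np.tanh(np.outer(x, w) + b)

def features_xx(x):                         # second derivative of each feature
    t = np.tanh(np.outer(x, w) + b)
    return -2.0 * t * (1.0 - t**2) * w**2

x = np.linspace(0.0, 1.0, 100)              # collocation points
f = np.pi**2 * np.sin(np.pi * x)            # manufactured RHS; exact u = sin(pi x)

# One linear system: PDE residual rows stacked with boundary-condition rows.
A = np.vstack([-features_xx(x), features(np.array([0.0, 1.0]))])
rhs = np.concatenate([f, [0.0, 0.0]])
c, *_ = np.linalg.lstsq(A, rhs, rcond=None)

u_hat = features(x) @ c
print("max abs error:", np.abs(u_hat - np.sin(np.pi * x)).max())
```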
Shows the InfoNCE loss induces a Gaussian distribution in contrastive representations, providing a principled explanation for their observed Gaussianity.
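For reference, a minimal NumPy version of the InfoNCE loss the result concerns (the standard formulation; variable names are illustrative):

```python
import numpy as np

# InfoNCE: z1[i] and z2[i] are two views of sample i; each row is the
# positive for its counterpart, all other rows serve as negatives.
# tau is the usual temperature hyperparameter.

def info_nce(z1, z2, tau=0.1):
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)   # unit-sphere embeddings
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / tau                              # pairwise similarities
    logits -= logits.max(axis=1, keepdims=True)           # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))                   # positives on diagonal

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
print(info_nce(z + 0.05 * rng.normal(size=z.shape), z))   # aligned views -> low loss
```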
L2Seg accelerates vehicle routing solvers 2-7x by learning to identify stable and unstable solution segments.
Develops an efficient federated optimization algorithm with cost-aware client selection, achieving the best-known communication and local-computation complexity.
Shows that decentralized learning with a single global merging step achieves convergence rates matching parallel SGD under data heterogeneity.
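A toy sketch of the train-locally, merge-once pattern (an illustrative setup, not the paper's algorithm):

```python
import numpy as np

# K workers run plain SGD on heterogeneous local least-squares problems
# and average their models a single time at the end, instead of
# communicating every round.

rng = np.random.default_rng(0)
K, d, steps, lr = 8, 5, 2000, 0.01
w_true = rng.normal(size=d)

workers = []
for k in range(K):
    X = rng.normal(size=(100, d))                   # heterogeneous local data
    y = X @ w_true + 0.1 * rng.normal(size=100)
    w = np.zeros(d)
    for _ in range(steps):
        i = rng.integers(100)
        w -= lr * (X[i] @ w - y[i]) * X[i]          # local SGD step
    workers.append(w)

w_merged = np.mean(workers, axis=0)                 # one global merge
print("error after single merge:", np.linalg.norm(w_merged - w_true))
```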
MRT systematically stress-tests LLM agent monitoring, revealing that agent awareness dominates and that hybrid scaffolding enables weak-to-strong monitoring.
Analyzes how overparametrization shifts the BBP transition point in the loss landscape, reshaping its geometric properties.
Enforces convex output constraints via operator splitting, enabling fast solving of parametric optimization problems.
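One concrete splitting scheme in this spirit is Dykstra's alternating-projection method; the sketch below (a generic example, not the paper's solver) makes a raw output satisfy a box-plus-hyperplane constraint:

```python
import numpy as np

# Dykstra's algorithm projects a raw model output onto the intersection
# of the box [0, 1]^d and the hyperplane sum(y) = 1 by alternating
# projections with correction terms p, q.

def proj_box(y):                     # projection onto [0, 1]^d
    return np.clip(y, 0.0, 1.0)

def proj_hyperplane(y):              # projection onto {y : sum(y) = 1}
    return y + (1.0 - y.sum()) / y.size

def dykstra(y0, iters=100):
    y, p, q = y0.copy(), np.zeros_like(y0), np.zeros_like(y0)
    for _ in range(iters):
        z = proj_box(y + p); p = y + p - z
        y = proj_hyperplane(z + q); q = z + q - y
    return y

raw = np.array([1.4, -0.3, 0.6])     # unconstrained model output
y = dykstra(raw)
print(y, y.sum())                    # sums to 1; box-feasible up to tolerance
```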
Quantitative bounds show the training length required for length generalization depends on periodicity, locality, alphabet size, and model norms.