RefineStat: Efficient Exploration for Probabilistic Program Synthesis
Madhav Kanda, Shubham Ugare, Sasa Misailovic
Abstract
Probabilistic programming offers a powerful framework for modeling uncertainty, yet statistical model discovery in this domain entails navigating an immense search space under strict domain-specific constraints. When small language models are tasked with generating probabilistic programs, they frequently produce outputs that suffer from both syntactic and semantic errors, such as flawed inference constructs. Motivated by probabilistic programmers' domain expertise and debugging strategies, we introduce RefineStat, a language-model-driven framework that first enforces semantic constraints, ensuring synthesized programs contain valid distributions and well-formed parameters, and then applies diagnostic-aware refinement by resampling prior or likelihood components whenever reliability checks fail. We evaluate RefineStat on multiple probabilistic-programming code-generation tasks using smaller language models (SLMs) and find that it produces programs that are both syntactically sound and statistically reliable, often matching or surpassing those from closed-source large language models (e.g., OpenAI o3).
RefineStat enforces semantic constraints and applies diagnostic-aware refinement for synthesizing valid probabilistic programs from smaller language models.
- Framework separating probabilistic modeling into prior and likelihood fragments
- Semantic constraints ensuring valid distributions and well-formed parameters
- Diagnostic-aware refinement by resampling components when reliability checks fail
- Matches or surpasses closed-source LLM outputs on probabilistic program synthesis
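The semantic-constraint idea above can be illustrated with a minimal sketch. This is not the paper's implementation: the distribution names, parameter keys, and validity rules below are assumptions chosen for illustration. The sketch rejects candidate programs whose distribution parameters are ill-formed (for example, a non-positive scale) before any inference is attempted.

```python
# Illustrative sketch of semantic constraint enforcement (assumed rules,
# not RefineStat's actual checker). Each distribution maps to a predicate
# over its parameters.
PARAM_CONSTRAINTS = {
    "Normal":      lambda p: p.get("sigma", 1.0) > 0,
    "Exponential": lambda p: p.get("lam", 1.0) > 0,
    "Beta":        lambda p: p.get("alpha", 1.0) > 0 and p.get("beta", 1.0) > 0,
}

def is_well_formed(dist_name, params):
    """Return True only if the distribution is known and its parameters
    satisfy the corresponding constraint."""
    check = PARAM_CONSTRAINTS.get(dist_name)
    return check is not None and check(params)

# A prior with a negative scale is rejected; a valid Beta prior passes.
print(is_well_formed("Normal", {"mu": 0.0, "sigma": -1.0}))  # False
print(is_well_formed("Beta", {"alpha": 2.0, "beta": 2.0}))   # True
```

In a real pipeline, such checks would run over every distribution node in a generated program so that only well-formed candidates reach the (expensive) inference stage.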
- Language model guided search
- Semantic constraint enforcement
- Diagnostic-aware refinement
- Bayesian workflow integration
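Diagnostic-aware refinement can be sketched as a search over prior and likelihood fragments that reruns inference until reliability checks pass. The sketch below mocks the inference step and its diagnostics (real systems would compute R-hat and effective sample size from actual MCMC draws); the candidate pools and thresholds are assumptions for illustration, not the paper's configuration.

```python
# Illustrative refinement loop with mocked diagnostics. `run_inference`
# stands in for fitting the probabilistic program; in practice it would
# return convergence diagnostics from a real sampler.
def run_inference(prior, likelihood):
    """Mock fit: only one (prior, likelihood) pair 'converges' here."""
    good = (prior == "Normal" and likelihood == "Normal")
    return {"r_hat": 1.0 if good else 1.3, "ess": 800 if good else 50}

def passes_checks(diag, r_hat_max=1.01, ess_min=400):
    """Reliability check: R-hat near 1 and sufficient effective samples."""
    return diag["r_hat"] < r_hat_max and diag["ess"] > ess_min

def refine(prior_pool, lik_pool, max_tries=10):
    """Resample prior/likelihood components until checks pass or the
    try budget is exhausted."""
    tries = 0
    for prior in prior_pool:
        for lik in lik_pool:
            tries += 1
            if tries > max_tries:
                return None
            if passes_checks(run_inference(prior, lik)):
                return prior, lik
    return None

print(refine(["Cauchy", "Normal"], ["StudentT", "Normal"]))
# → ('Normal', 'Normal') under the mocked diagnostics above
```

The design point this illustrates is the separation of concerns in the bullets above: the search only swaps the prior or likelihood fragment that the diagnostics implicate, rather than regenerating the whole program.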
Limitations
- Framework does not include prior-predictive or posterior-predictive checks, relying on a subset of diagnostics
- Reported ELPD only partially reflects model adequacy in some cases
- Refinement strategy is effective but does not guarantee convergence to a globally optimal program
Future work
- Extend the framework to enforce arbitrary reliability criteria defined by domain experts
- Apply to other domains involving domain-specific languages beyond probabilistic programming
Author keywords
- Probabilistic Programming
- Constrained Generation
Related orals
Benchmarking Empirical Privacy Protection for Adaptations of Large Language Models
Benchmarks practical privacy risks in differential privacy-adapted LLMs, revealing that distribution shifts and model choice impact effectiveness.
Half-order Fine-Tuning for Diffusion Model: A Recursive Likelihood Ratio Optimizer
Proposes Recursive Likelihood Ratio optimizer for efficient fine-tuning of diffusion models with lower variance gradient estimation.
Invisible Safety Threat: Malicious Finetuning for LLM via Steganography
Demonstrates LLMs can be finetuned to generate harmful steganographically-hidden outputs while appearing benign to safety systems.
Reducing Belief Deviation in Reinforcement Learning for Active Reasoning of LLM Agents
Proposes T3 algorithm to detect belief deviation in LLM agents and truncate trajectories for improved reinforcement learning in active reasoning tasks.
Actions Speak Louder than Prompts: A Large-Scale Study of LLMs for Graph Inference
Large-scale study comparing LLM-graph interaction modes for node classification, finding code generation outperforms prompting on long-text and high-degree graphs.