mCLM: A Modular Chemical Language Model that Generates Functional and Makeable Molecules
Carl Edwards, Chi Han, Gawon Lee, Thao Nguyen, Sara Szymkuć, Chetan Kumar Prasad, Bowen Jin, Jiawei Han, Ying Diao, Ge Liu, Hao Peng, Bartosz Andrzej Grzybowski, Martin D. Burke, Heng Ji
We propose mCLM, a bilingual, modular Chemical-Language Model that understands both natural language descriptions of functions and molecular building blocks; it front-loads synthesizability while improving the functions of molecules in a principled manner.
Abstract
Despite their ability to understand chemical knowledge, large language models (LLMs) remain limited in their capacity to propose novel molecules with desired functions (e.g., drug-like properties). In addition, the molecules that LLMs propose can often be challenging to make, and are almost never compatible with automated synthesis approaches. To better enable the discovery of functional small molecules, LLMs need to learn a new molecular language that is more effective in predicting properties and inherently synced with automated synthesis technology. Current molecular LLMs are further limited by atom-based representations of molecules. In this paper, we argue that just as texts are tokenized into meaning-bearing (sub-)word tokens instead of characters, molecules should be tokenized at the level of functional building blocks, i.e., parts of molecules that bring unique functions and serve as effective building blocks for real-world automated laboratory synthesis. This motivates us to propose mCLM, a modular Chemical-Language Model comprising a bilingual language model that understands both natural language descriptions of functions and molecular blocks. mCLM front-loads synthesizability considerations while improving the predicted functions of molecules in a principled manner. Experiments on 430 FDA-approved drugs show that mCLM significantly improves chemical functions critical to determining drug potential. With only 3B parameters, mCLM also achieves improvements in synthetic accessibility relative to 7 other leading generative AI methods, including GPT-5. When tested on 122 out-of-distribution medicines using only building blocks/tokens compatible with automated modular synthesis, mCLM outperforms all baselines in property scores and synthetic accessibility. mCLM can also reason over multiple functions and iteratively self-improve to rescue drug candidates that failed late in clinical trials ("fallen angels").
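The abstract's central analogy is tokenizing molecules into building-block tokens rather than atoms, much as text is tokenized into subwords rather than characters. The toy sketch below illustrates this contrast with a greedy longest-match segmenter over a SMILES string; the block vocabulary and matching rule are invented for illustration only and are not mCLM's actual tokenizer.

```python
# Toy illustration of atom-level vs. block-level tokenization of a SMILES string.
# The block vocabulary below is hypothetical, not mCLM's real token set.

def atom_tokenize(smiles: str) -> list[str]:
    """Naive atom/character-level tokenization of a SMILES string."""
    tokens, i = [], 0
    while i < len(smiles):
        # Treat common two-letter elements (e.g. Cl, Br) as single tokens.
        if smiles[i:i + 2] in ("Cl", "Br"):
            tokens.append(smiles[i:i + 2])
            i += 2
        else:
            tokens.append(smiles[i])
            i += 1
    return tokens

def block_tokenize(smiles: str, block_vocab: list[str]) -> list[str]:
    """Greedy longest-match segmentation into building-block tokens,
    falling back to single characters where no block matches."""
    tokens, i = [], 0
    vocab = sorted(block_vocab, key=len, reverse=True)  # prefer longer blocks
    while i < len(smiles):
        for block in vocab:
            if smiles.startswith(block, i):
                tokens.append(block)
                i += len(block)
                break
        else:
            tokens.append(smiles[i])
            i += 1
    return tokens

# Hypothetical block vocabulary: a benzene ring and an amide fragment.
blocks = ["c1ccccc1", "C(=O)N"]
s = "Cc1ccccc1C(=O)N"
print(atom_tokenize(s))          # 15 atom-level tokens
print(block_tokenize(s, blocks)) # ['C', 'c1ccccc1', 'C(=O)N']
```

The block-level sequence is far shorter and each token corresponds to a chemically meaningful fragment, which is the property the paper argues makes such tokens better units for both property prediction and automated modular synthesis.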
mCLM uses modular chemical language combining natural language and molecular building blocks for function-aware synthesis.
- Bilingual language model jointly understanding natural language descriptions and molecular building block tokens
- Front-loads synthesizability by generating only molecules compatible with automated modular synthesis
- Demonstrates capability to improve drug properties and handle out-of-distribution medicines with synthesis-compatible building blocks
- Language modeling
- Bilingual training
- Molecular tokenization at functional building block level
- Transfer learning
- FDA-approved drugs
- out-of-distribution medicines
The authors did not state explicit limitations.
- Scale mCLM to larger backbones (from the paper)
- Incorporate multimodal knowledge from 2D/3D molecular structures, protein-ligand complexes, cell lines, and nucleic acid sequences (from the paper)
- Extend chemical reasoning to knowledge-gap filling and System 2 thinking for counterfactual reasoning and plausibility prediction (from the paper)
- Leverage physical constraints from simulation tools and chemical reaction knowledge bases (from the paper)
- Integrate into a comprehensive multi-agent, human-in-the-loop autonomous laboratory with iterative cycles of reasoning, proposal, synthesis, testing, and feedback (from the paper)
Author keywords
- molecule-language multimodality
- language model
- molecule tokenization
- molecule generation
Related orals
Benchmarking Empirical Privacy Protection for Adaptations of Large Language Models
Benchmarks practical privacy risks in differential-privacy-adapted LLMs, revealing that distribution shifts and model choice affect protection effectiveness.
Half-order Fine-Tuning for Diffusion Model: A Recursive Likelihood Ratio Optimizer
Proposes a Recursive Likelihood Ratio optimizer for efficient fine-tuning of diffusion models with lower-variance gradient estimation.
Invisible Safety Threat: Malicious Finetuning for LLM via Steganography
Demonstrates that LLMs can be finetuned to generate harmful, steganographically hidden outputs while appearing benign to safety systems.
Reducing Belief Deviation in Reinforcement Learning for Active Reasoning of LLM Agents
Proposes T3 algorithm to detect belief deviation in LLM agents and truncate trajectories for improved reinforcement learning in active reasoning tasks.
RefineStat: Efficient Exploration for Probabilistic Program Synthesis
RefineStat enforces semantic constraints and applies diagnostic-aware refinement for synthesizing valid probabilistic programs from smaller language models.