ICLR 2026 Orals

Modality-free Graph In-context Alignment

Wei Zhuo, Siqiang Luo

LLMs & Reasoning Thu, Apr 23 · 11:18 AM–11:28 AM · 203 A/B Avg rating: 6.00 (4–8)
Author-provided TL;DR

A pretraining framework that enables in-context learning on graph-structured data without modality assumptions.

Abstract

In-context learning (ICL) converts static encoders into task-conditioned reasoners, enabling adaptation to new data from just a few examples without updating pretrained parameters. This capability is essential for graph foundation models (GFMs) to approach LLM-level generality. Yet current GFMs struggle with cross-domain alignment, typically relying on modality-specific encoders that fail when graphs are pre-vectorized or raw data is inaccessible. In this paper, we introduce **M**odality-**F**ree **G**raph **I**n-context **A**lignment (MF-GIA), a framework that makes a pretrained graph encoder promptable for few-shot prediction across heterogeneous domains without modality assumptions. MF-GIA captures domain characteristics through gradient fingerprints, which parameterize lightweight transformations that align pre-encoded features and indexed labels into unified semantic spaces. During pretraining, a dual prompt-aware attention mechanism with an episodic objective learns to match queries against aligned support examples, establishing prompt-based reasoning capabilities. At inference, MF-GIA performs parameter-update-free adaptation using only a few-shot support set to trigger cross-domain alignment and enable immediate prediction on unseen domains. Experiments demonstrate that MF-GIA achieves superior few-shot performance across diverse graph domains and strong generalization to unseen domains. The code is available at https://github.com/JhuoW/MF-GIA.
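The matching step the abstract describes (scoring a query against a few aligned support examples and predicting without any parameter update) can be sketched minimally as follows. This is an illustrative assumption, not the authors' implementation: it uses plain cosine-similarity attention over pre-encoded support features, whereas MF-GIA learns a dual prompt-aware attention mechanism during pretraining.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def few_shot_predict(query, support_feats, support_labels, num_classes):
    """Attention-style matching of a query against a few-shot support set.

    Hypothetical sketch: weight each support example by its similarity
    to the query, then aggregate attention mass per class label.
    """
    # Cosine similarity between the query and each support embedding.
    q = query / np.linalg.norm(query)
    s = support_feats / np.linalg.norm(support_feats, axis=1, keepdims=True)
    weights = softmax(s @ q)
    # Sum attention weight per class to form a class-score vector.
    scores = np.zeros(num_classes)
    for w, y in zip(weights, support_labels):
        scores[y] += w
    return int(scores.argmax())

# Toy 2-way 2-shot episode over pre-vectorized features: no raw graph
# or text is needed, only embeddings plus indexed labels.
rng = np.random.default_rng(0)
support = np.vstack([rng.normal(1.0, 0.1, 4) for _ in range(2)] +
                    [rng.normal(-1.0, 0.1, 4) for _ in range(2)])
labels = [0, 0, 1, 1]
query = rng.normal(1.0, 0.1, 4)
print(few_shot_predict(query, support, labels, num_classes=2))  # -> 0
```

Note that prediction happens entirely in the forward pass: swapping in a support set from a new domain immediately changes the classifier, which is the parameter-update-free adaptation the paper targets.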

One-sentence summary · Auto-generated by claude-haiku-4-5-20251001

The MF-GIA framework makes a pretrained graph encoder capable of in-context learning across heterogeneous domains, using gradient fingerprints for alignment instead of modality-specific encoders.

Contributions · Auto-generated by claude-haiku-4-5-20251001
  • MF-GIA framework for graph foundation models using gradient fingerprints for domain alignment
  • Parameter-update-free adaptation enabling few-shot prediction across domains
  • Removes need for post-training fine-tuning and modality-specific conversions
Methods used · Auto-generated by claude-haiku-4-5-20251001
  • Graph neural networks
  • In-context learning
  • Gradient fingerprints
  • Prompt-based reasoning
Limitations (author-stated) · Auto-generated by claude-haiku-4-5-20251001

Authors did not state explicit limitations.

Future work (author-stated) · Auto-generated by claude-haiku-4-5-20251001
  • Couple gradient fingerprints with LLMs to generate semantic domain descriptions
  • Leverage fingerprints to automatically discover latent domain structure from unlabeled graph collections

Author keywords

  • Graph foundation model
  • In-context learning
  • Pretraining
