Dataset columns (type and observed range):
platform: string (1 class)
venue: string (4 classes)
year: int32 (2.02k–2.03k)
title: string (length 8–177)
abstract: string (length 310–3.08k)
keywords: string (length 0–613)
areas: string (152 classes)
tldr: string (length 0–281)
scores: list (length 0–8)
decision: string (21 classes)
authors: string (length 6–834)
author_ids: string (length 8–956)
cdate: string (976 classes)
url: string (length 41–45)
platform_id: string (length 9–13)
bibtex: string (length 228–1.26k)
figure_path: string (length 61–79)
figure_number: string (134 classes)
figure_caption: string (length 8–2.35k)
figure_context: string (length 0–20.2k)
figure_type: string (1 class)
confidence: float32 (0.85–1)
OpenReview
ICLR
2026
FreqKV: Key-Value Compression in Frequency Domain for Context Window Extension
Existing key-value (KV) cache compression methods for large language models (LLMs) often rely on token eviction, which risks losing critical local information in both long prefilling and decoding scenarios. When extrapolating beyond the pretrained context length, their performance degrades sharply on long-context benchmarks. Motivated by the observation in the frequency domain that the context information is concentrated in the low-frequency components, we propose FreqKV, a parameter-free and architecture-agnostic approach. It iteratively compresses the increasing KV cache in the frequency domain, allowing models to process lengthy contexts efficiently. With minimal training at 8K length, FreqKV extends the context window of LLaMA-2-7B up to 256K tokens while maintaining stable perplexity. Extensive experiments on both prefilling and decoding stages demonstrate that FreqKV enables robust context window extension and consistently outperforms existing KV cache compression methods, highlighting its effectiveness for both understanding and generation in long contexts.
Large Language Models, KV Compression, Context Extension
foundation or frontier models, including LLMs
This paper introduces FreqKV, an efficient context extension method that iteratively compresses key-value states in the frequency domain.
[ 4, 6, 4 ]
Accept (Poster)
Jushi Kai, Yixuan Wang, Boyi Zeng, Haoli Bai, Bo Jiang, Ziwei He, Zhouhan Lin
~Jushi_Kai1, ~Yixuan_Wang10, ~Boyi_Zeng2, ~Haoli_Bai2, ~Bo_Jiang2, ~Ziwei_He1, ~Zhouhan_Lin1
20250918
https://openreview.net/forum?id=wFSOtyvQ9d
wFSOtyvQ9d
@inproceedings{ kai2026freqkv, title={Freq{KV}: Key-Value Compression in Frequency Domain for Context Window Extension}, author={Jushi Kai and Yixuan Wang and Boyi Zeng and Haoli Bai and Bo Jiang and Ziwei He and Zhouhan Lin}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=wFSOtyvQ9d} }
OpenReview/ICLR/figures/2026/accept_poster/wFSOtyvQ9d/Figure3.png
3
Figure 3: The overview of our FreqKV. (a) The illustration of the frequency-domain compression. (b) The KV cache is compressed in an iterative manner to extend the context window. Sink tokens remain uncompressed throughout the process. The tokens after the sink tokens are compressed in the frequency domain, and subsequent tokens continue to enter the cache. When the cache is filled again, the compressed tokens and incoming tokens are compressed together.
<paragraph_1>To reduce redundancy in the key-value (KV) cache, we compress KV states in the frequency domain as shown in Figure 3a. Specifically, we apply a DCT along the sequence dimension to transform the KV cache into the frequency domain:</paragraph_1> <paragraph_2>Extending the context window of LLMs is fundamentally constrained by memory and computation cost. To address this, FreqKV employs an iterative compression strategy in the frequency domain that constrains the effective cache size while enabling processing of arbitrarily long sequences. The overall pipeline is illustrated in Figure 3b.</paragraph_2>
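A minimal sketch of the frequency-domain compression idea described above, using SciPy's DCT along the sequence dimension and keeping only the low-frequency coefficients. The cache shape, the keep ratio, and the inverse-DCT reconstruction step are illustrative assumptions, not FreqKV's exact procedure.

import numpy as np
from scipy.fft import dct, idct

def compress_kv(kv: np.ndarray, keep_ratio: float = 0.5) -> np.ndarray:
    """Compress a KV cache of shape (seq_len, num_heads, head_dim)
    by keeping only the low-frequency DCT coefficients along seq_len."""
    seq_len = kv.shape[0]
    n_keep = max(1, int(seq_len * keep_ratio))
    # DCT along the sequence dimension (axis 0)
    freq = dct(kv, type=2, norm="ortho", axis=0)
    # Low-frequency components carry most of the context information
    return freq[:n_keep]

def decompress_kv(freq_kv: np.ndarray, seq_len: int) -> np.ndarray:
    """Reconstruct an approximate KV cache by zero-padding the
    high-frequency coefficients and applying the inverse DCT."""
    pad = np.zeros((seq_len - freq_kv.shape[0],) + freq_kv.shape[1:], dtype=freq_kv.dtype)
    full = np.concatenate([freq_kv, pad], axis=0)
    return idct(full, type=2, norm="ortho", axis=0)

# Toy usage: a 64-token cache with 8 heads of dimension 16
kv_cache = np.random.randn(64, 8, 16).astype(np.float32)
compressed = compress_kv(kv_cache, keep_ratio=0.25)   # 16 frequency "tokens"
approx = decompress_kv(compressed, seq_len=64)
print(compressed.shape, np.mean((kv_cache - approx) ** 2))

Per the figure caption, FreqKV applies this compression iteratively whenever the cache fills, with sink tokens excluded from compression.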
diagram
0.899471
OpenReview
ICLR
2026
ThinkOmni: Lifting Textual Reasoning to Omni-modal Scenarios via Guidance Decoding
Omni-modal reasoning is essential for intelligent systems to understand and draw inferences from diverse data sources. While existing omni-modal large language models (OLLM) excel at perceiving diverse modalities, they lack the complex reasoning abilities of recent large reasoning models (LRM). However, enhancing the reasoning ability of OLLMs through additional training presents significant challenges, including the need for high-quality data, task-specific adaptation, and substantial computational costs. To address these limitations, we propose ThinkOmni, a training-free and data-free framework that lifts textual reasoning to omni-modal scenarios. ThinkOmni introduces two key components: 1) LRM-as-a-Guide, which leverages off-the-shelf LRMs to guide the OLLM decoding process; 2) Stepwise Contrastive Scaling, which adaptively balances perception and reasoning signals without manual hyperparameter tuning. Experiments on six multi-modal reasoning benchmarks demonstrate that ThinkOmni consistently delivers performance improvements, with main results achieving 70.2 on MathVista and 75.5 on MMAU. Overall, ThinkOmni offers a flexible and generalizable solution for omni-modal reasoning and provides new insights into the generalization and application of reasoning capabilities.
Omni-modal large language models, training-free guidance decoding, language model reasoning
applications to computer vision, audio, language, and other modalities
[ 6, 6, 6, 6 ]
Accept (Poster)
Yiran Guan, Sifan Tu, Dingkang Liang, Linghao Zhu, Jianzhong Ju, Zhenbo Luo, Jian Luan, Yuliang Liu, Xiang Bai
~Yiran_Guan1, ~Sifan_Tu2, ~Dingkang_Liang2, ~Linghao_Zhu1, ~Jianzhong_Ju1, ~Zhenbo_Luo2, ~Jian_Luan1, ~Yuliang_Liu2, ~Xiang_Bai1
20250917
https://openreview.net/forum?id=pMpCOjzwI1
pMpCOjzwI1
@inproceedings{ guan2026thinkomni, title={ThinkOmni: Lifting Textual Reasoning to Omni-modal Scenarios via Guidance Decoding}, author={Yiran Guan and Sifan Tu and Dingkang Liang and Linghao Zhu and Jianzhong Ju and Zhenbo Luo and Jian Luan and Yuliang Liu and Xiang Bai}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=pMpCOjzwI1} }
OpenReview/ICLR/figures/2026/accept_poster/pMpCOjzwI1/Figure3.png
3
Figure 3: Guidance decoding methods. “Guid.” denotes the guiding model, and “Amat.” denotes the amateur model.
<paragraph_1>In Contrastive Decoding (Fig. 3(a)), the contrastive pair is formed by comparing the responses to the same prompt from the original guiding model and an additional amateur model, with z+ set to zbase. In Visual Contrastive Decoding (Fig. 3(b)), the contrastive pair is created by applying different input conditions to the same model. Specifically, z− is obtained by adding Gaussian noise to the input image and then performing inference. In contrast to these approaches, ProxyTuning and ProxyThinker (Fig. 3(c)) construct contrastive pairs across different models within the same family, aiming to transfer behaviors from smaller guiding models to larger amateur models.</paragraph_1>
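A small sketch of the shared logit-combination step behind these guidance-decoding variants: a positive (guiding) pass and a negative (amateur or noised-input) pass produce next-token logits that are contrasted before sampling. The weighting formula, variable names, and greedy selection below are illustrative assumptions, not the exact formulations of CD, VCD, or ProxyThinker.

import numpy as np

def contrastive_logits(z_pos: np.ndarray, z_neg: np.ndarray, alpha: float = 1.0) -> np.ndarray:
    """Amplify the behavior of the positive (guiding) pass by subtracting
    the negative (amateur / noised-input) pass: z = z+ + alpha * (z+ - z-)."""
    return z_pos + alpha * (z_pos - z_neg)

def sample_next_token(z_pos: np.ndarray, z_neg: np.ndarray, alpha: float = 1.0) -> int:
    logits = contrastive_logits(z_pos, z_neg, alpha)
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return int(np.argmax(probs))  # greedy pick for simplicity

vocab = 10
z_base = np.random.randn(vocab)      # guiding model on the original prompt
z_amateur = np.random.randn(vocab)   # amateur model / noised-image pass
print(sample_next_token(z_base, z_amateur, alpha=0.5))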
diagram
0.93543
OpenReview
ICLR
2026
Task-Agnostic Amortized Multi-Objective Optimization
Balancing competing objectives is omnipresent across disciplines, from drug design to autonomous systems. Multi-objective Bayesian optimization is a promising solution for such expensive, black-box problems: it fits probabilistic surrogates and selects new designs via an acquisition function that balances exploration and exploitation. In practice, it requires tailored choices of surrogate and acquisition that rarely transfer to the next problem, is myopic when multi-step planning is often required, and adds refitting overhead, particularly in parallel or time-sensitive loops. We present TAMO, a fully amortized, universal policy for multi-objective black-box optimization. TAMO uses a transformer architecture that operates across varying input and objective dimensions, enabling pretraining on diverse corpora and transfer to new problems without retraining: at test time, the pretrained model proposes the next design with a single forward pass. We pretrain the policy with reinforcement learning to maximize cumulative hypervolume improvement over full trajectories, conditioning on the entire query history to approximate the Pareto frontier. Across synthetic benchmarks and real tasks, TAMO produces fast proposals, reducing proposal time by 50–1000× versus alternatives while matching or improving Pareto quality under tight evaluation budgets. These results show that transformers can perform multi-objective optimization entirely in-context, eliminating per-task surrogate fitting and acquisition engineering, and open a path to foundation-style, plug-and-play optimizers for scientific discovery workflows.
Multi-Objective Optimization, Bayesian Optimization, Transformers, Neural Processes
probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)
We introduce a fully amortized (surrogate model + acquisition function), dimension-agnostic policy for multi-objective optimization.
[ 6, 6, 8, 4 ]
Accept (Poster)
Xinyu Zhang, Conor Hassan, Julien Martinelli, Daolang Huang, Samuel Kaski
~Xinyu_Zhang41, ~Conor_Hassan1, ~Julien_Martinelli1, ~Daolang_Huang1, ~Samuel_Kaski1
20250920
https://openreview.net/forum?id=odmeUlWta8
odmeUlWta8
@inproceedings{ zhang2026taskagnostic, title={Task-Agnostic Amortized Multi-Objective Optimization}, author={Xinyu Zhang and Conor Hassan and Julien Martinelli and Daolang Huang and Samuel Kaski}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=odmeUlWta8} }
OpenReview/ICLR/figures/2026/accept_poster/odmeUlWta8/Figure2.png
2
Figure 2: Dimension-agnostic embedder for a single observation.
<paragraph_1>(I) Dimension-agnostic embedder. We apply learnable scalar-to-vector maps e_x : R → R^{d_e} and e_y : R → R^{d_e} dimension-wise, resulting in e_x = e_x(x) ∈ R^{d_τx × d_e} and e_y = e_y(y) ∈ R^{d_τy × d_e}. Both functions e_x and e_y are parameterized as feedforward neural networks. After L transformer layers on the concatenated tokens [e_x; e_y], we apply learnable dimension-specific positional tokens p_x ∈ R^{d_τx × d_e} and p_y ∈ R^{d_τy × d_e} element-wise and mean-pool across the d_τx + d_τy token axis to obtain a single representation E ∈ R^{d_e}. These positional tokens are randomly sampled for each batch from fixed pools of learned embeddings. We introduce the positional tokens to prevent the spurious symmetries over dimensionalities from a permutation-invariant set encoder, allowing the model to distinguish between features and objectives with the same values. During training, the embedder is applied to D_h and D_q to yield E_h and E_q for the optimization task, and to D_c and D_p to yield E_c and E_p for the prediction task. Each observation contributes O(1) tokens, so the cost scales with the number of observations, not with d_τx or d_τy. Figure 2 summarizes the embedder.</paragraph_1> <paragraph_2>Figure S16: Inference on GP examples (d_x = 2, d_y = 1), with query points proposed over 100 optimization steps (white circle, size increasing along with the number of queries).</paragraph_2> <paragraph_3>Figure S17: Inference on GP examples (d_x = 2, d_y = 2), with query points proposed over 100 optimization steps (white circles, size increasing along with the number of queries).</paragraph_3>
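A hedged PyTorch sketch of the dimension-agnostic embedder described above: each scalar feature and objective becomes a d_e-dimensional token, dimension-specific positional tokens drawn from fixed learned pools are combined with the tokens, and everything is mean-pooled into a single representation. The L transformer layers are omitted here, the positional tokens are simply added (the paper's exact element-wise application may differ), and all names and sizes are placeholders.

import torch
import torch.nn as nn

class DimAgnosticEmbedder(nn.Module):
    """Sketch: scalar-to-vector maps applied dimension-wise, positional tokens
    sampled from fixed learned pools, mean-pooling to one embedding.
    (The L transformer layers between embedding and pooling are omitted.)"""
    def __init__(self, d_e: int = 32, pool_size: int = 16):
        super().__init__()
        self.e_x = nn.Sequential(nn.Linear(1, d_e), nn.ReLU(), nn.Linear(d_e, d_e))
        self.e_y = nn.Sequential(nn.Linear(1, d_e), nn.ReLU(), nn.Linear(d_e, d_e))
        self.pos_x = nn.Parameter(torch.randn(pool_size, d_e))
        self.pos_y = nn.Parameter(torch.randn(pool_size, d_e))

    def forward(self, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        # x: (d_x,) features, y: (d_y,) objectives of a single observation
        tok_x = self.e_x(x.unsqueeze(-1))                     # (d_x, d_e)
        tok_y = self.e_y(y.unsqueeze(-1))                     # (d_y, d_e)
        ix = torch.randperm(self.pos_x.shape[0])[: x.shape[0]]
        iy = torch.randperm(self.pos_y.shape[0])[: y.shape[0]]
        tok_x = tok_x + self.pos_x[ix]                        # dimension-specific positions
        tok_y = tok_y + self.pos_y[iy]
        tokens = torch.cat([tok_x, tok_y], dim=0)             # (d_x + d_y, d_e)
        return tokens.mean(dim=0)                             # single embedding E in R^{d_e}

emb = DimAgnosticEmbedder()
E = emb(torch.randn(5), torch.randn(2))   # works for any d_x, d_y
print(E.shape)  # torch.Size([32])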
diagram
0.99614
OpenReview
ICLR
2026
DESIGNER: Design-Logic-Guided Multidisciplinary Data Synthesis for LLM Reasoning
Large language models (LLMs) perform strongly on many language tasks but still struggle with complex multi-step reasoning across disciplines. Existing reasoning datasets often lack disciplinary breadth, reasoning depth, and diversity, as well as guiding principles for question synthesis. We propose DESIGNER: a DESIGN-logic-guidEd Reasoning data synthesis pipeline that leverages naturally available, extensive raw documents to generate multidisciplinary questions. The central insight is the notion of Design Logic, a form of reusable meta-knowledge that encapsulates the structured process human experts use to transform knowledge into complex exam questions, enabling LLMs to generate new questions with the same complex reasoning patterns from entirely different source texts with explicit control over difficulty, diversity, and question types. We use LLMs to reverse-engineer and abstract over 120,000 Design Logics from existing questions across various disciplines. By designing a two-stage retrieve-and-generate mechanism to match these Design Logics with raw corpus, we synthesized two large-scale reasoning datasets that span 75 disciplines: DLR-Book (3.04 million questions from the book corpus) and DLR-Web (1.66 million questions from the web corpus). Data analysis indicates that the questions synthesized by our method exhibit greater difficulty and diversity compared to those in the baseline datasets. Supervised fine-tuning (SFT) on Qwen3 and Llama3 with our data substantially improves multidisciplinary reasoning and outperforms baseline datasets. Notably, by applying SFT on the base versions of these models using only our data, we even surpass their official final models that have undergone the full post-training.
Large Language Models, Data Synthesis, Synthetic Data, Reasoning, Post-Training, Supervised Fine-Tuning
datasets and benchmarks
[ 6, 4, 2, 8, 4 ]
Accept (Poster)
Weize Liu, Yongchi Zhao, Yijia Luo, Mingyu Xu, Jiaheng Liu, Yanan Li, Xiguo Hu, ZhiqiBai, Yuchi Xu, Wenbo Su, Bo Zheng
~Weize_Liu1, ~Yongchi_Zhao1, ~Yijia_Luo1, ~Mingyu_Xu3, ~Jiaheng_Liu1, ~Yanan_Li8, ~Xiguo_Hu1, ~ZhiqiBai1, ~Yuchi_Xu1, ~Wenbo_Su2, ~Bo_Zheng5
20250903
https://openreview.net/forum?id=SQVxBJhIrK
SQVxBJhIrK
@inproceedings{ liu2026designer, title={{DESIGNER}: Design-Logic-Guided Multidisciplinary Data Synthesis for {LLM} Reasoning}, author={Weize Liu and Yongchi Zhao and Yijia Luo and Mingyu Xu and Jiaheng Liu and Yanan Li and Xiguo Hu and ZhiqiBai and Yuchi Xu and Wenbo Su and Bo Zheng}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=SQVxBJhIrK} }
OpenReview/ICLR/figures/2026/accept_poster/SQVxBJhIrK/Figure2.png
2
Figure 2: The Design-Logic-Guided Multidisciplinary Data Synthesis Pipeline.
<paragraph_1>Specifically, our pipeline is illustrated in Figure 2. First, we process large-scale book and web corpora with multi-dimensional labeling and filtering (discipline, readability, educational value, reasoning depth) to construct a high-quality source material library. From a question bank of hundreds of millions, we cluster and sample a diverse set of difficult questions, from which an LLM reverse-engineers and abstracts over 120K structured Design Logics to construct a reusable Design Logic library. In question synthesis, we adopt a two-stage retrieve-and-generate mechanism: (1) vector similarity retrieves coarse candidate logics for each source document, and (2) an LLM performs a fine-grained evaluation to select the optimal logic and generates a reasoning question from the source document by strictly following its steps. This approach addresses the absence of guiding principles in prior data synthesis methods, enabling the automated generation of a large number of diverse and high-difficulty exam questions while reducing reliance on expensive manual creation.</paragraph_1> <paragraph_2>We curate three data sources for question synthesis: a proprietary question bank, a book corpus, and a web corpus, all aligned to a unified 75-discipline taxonomy (see Appendix A). Figure 2 (Phase 1) illustrates the overall data processing pipeline.</paragraph_2> <paragraph_3>Figure 2 (Phase 2 and Phase 3) illustrates the overall data synthesis pipeline.</paragraph_3>
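A rough sketch of the two-stage retrieve-and-generate matching described above: cosine-similarity retrieval of candidate Design Logics followed by an LLM-based fine-grained selection and question generation. The embedding vectors, the llm callable, and the prompt wording are placeholders, not the paper's implementation.

import numpy as np

def retrieve_candidates(doc_vec: np.ndarray, logic_vecs: np.ndarray, k: int = 5):
    """Stage 1: coarse retrieval of Design Logics by cosine similarity."""
    sims = logic_vecs @ doc_vec / (
        np.linalg.norm(logic_vecs, axis=1) * np.linalg.norm(doc_vec) + 1e-8)
    return np.argsort(-sims)[:k]

def synthesize_question(document, design_logics, doc_vec, logic_vecs, llm):
    """Stage 2: an LLM picks the best-matching logic and follows its steps
    to turn the source document into a reasoning question."""
    cand_ids = retrieve_candidates(doc_vec, logic_vecs)
    candidates = [design_logics[i] for i in cand_ids]
    best = llm(f"Pick the design logic best suited to this document.\n"
               f"Document: {document}\nCandidates: {candidates}")
    return llm(f"Following this design logic step by step, write one exam "
               f"question from the document.\nLogic: {best}\nDocument: {document}")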
diagram
0.99595
OpenReview
ICLR
2026
DESIGNER: Design-Logic-Guided Multidisciplinary Data Synthesis for LLM Reasoning
Large language models (LLMs) perform strongly on many language tasks but still struggle with complex multi-step reasoning across disciplines. Existing reasoning datasets often lack disciplinary breadth, reasoning depth, and diversity, as well as guiding principles for question synthesis. We propose DESIGNER: a DESIGN-logic-guidEd Reasoning data synthesis pipeline that leverages naturally available, extensive raw documents to generate multidisciplinary questions. The central insight is the notion of Design Logic, a form of reusable meta-knowledge that encapsulates the structured process human experts use to transform knowledge into complex exam questions, enabling LLMs to generate new questions with the same complex reasoning patterns from entirely different source texts with explicit control over difficulty, diversity, and question types. We use LLMs to reverse-engineer and abstract over 120,000 Design Logics from existing questions across various disciplines. By designing a two-stage retrieve-and-generate mechanism to match these Design Logics with raw corpus, we synthesized two large-scale reasoning datasets that span 75 disciplines: DLR-Book (3.04 million questions from the book corpus) and DLR-Web (1.66 million questions from the web corpus). Data analysis indicates that the questions synthesized by our method exhibit greater difficulty and diversity compared to those in the baseline datasets. Supervised fine-tuning (SFT) on Qwen3 and Llama3 with our data substantially improves multidisciplinary reasoning and outperforms baseline datasets. Notably, by applying SFT on the base versions of these models using only our data, we even surpass their official final models that have undergone the full post-training.
Large Language Models, Data Synthesis, Synthetic Data, Reasoning, Post-Training, Supervised Fine-Tuning
datasets and benchmarks
[ 6, 4, 2, 8, 4 ]
Accept (Poster)
Weize Liu, Yongchi Zhao, Yijia Luo, Mingyu Xu, Jiaheng Liu, Yanan Li, Xiguo Hu, ZhiqiBai, Yuchi Xu, Wenbo Su, Bo Zheng
~Weize_Liu1, ~Yongchi_Zhao1, ~Yijia_Luo1, ~Mingyu_Xu3, ~Jiaheng_Liu1, ~Yanan_Li8, ~Xiguo_Hu1, ~ZhiqiBai1, ~Yuchi_Xu1, ~Wenbo_Su2, ~Bo_Zheng5
20250903
https://openreview.net/forum?id=SQVxBJhIrK
SQVxBJhIrK
@inproceedings{ liu2026designer, title={{DESIGNER}: Design-Logic-Guided Multidisciplinary Data Synthesis for {LLM} Reasoning}, author={Weize Liu and Yongchi Zhao and Yijia Luo and Mingyu Xu and Jiaheng Liu and Yanan Li and Xiguo Hu and ZhiqiBai and Yuchi Xu and Wenbo Su and Bo Zheng}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=SQVxBJhIrK} }
OpenReview/ICLR/figures/2026/accept_poster/SQVxBJhIrK/Figure22.png
22
Figure 22: An example of the Design Logic for a Mathematics problem, showing the Mermaid source code (a) and the corresponding visual flowchart (b).
diagram
0.907912
OpenReview
ICLR
2026
Enhancing Multivariate Time Series Forecasting with Global Temporal Retrieval
Multivariate time series forecasting (MTSF) plays a vital role in numerous real-world applications, yet existing models remain constrained by their reliance on a limited historical context. This limitation prevents them from effectively capturing global periodic patterns that often span cycles significantly longer than the input horizon—despite such patterns carrying strong predictive signals. Naïve solutions, such as extending the historical window, lead to severe drawbacks, including overfitting, prohibitive computational costs, and redundant information processing. To address these challenges, we introduce the Global Temporal Retriever (GTR), a lightweight and plug-and-play module designed to extend any forecasting model’s temporal awareness beyond the immediate historical context. GTR maintains an adaptive global temporal embedding of the entire cycle and dynamically retrieves and aligns relevant global segments with the input sequence. By jointly modeling local and global dependencies through a 2D convolution and residual fusion, GTR effectively bridges short-term observations with long-term periodicity without altering the host model architecture. Extensive experiments on six real-world datasets demonstrate that GTR consistently delivers state-of-the-art performance across both short-term and long-term forecasting scenarios, while incurring minimal parameter and computational overhead. These results highlight GTR as an efficient and general solution for enhancing global periodicity modeling in MTSF tasks. Code is available at this repository: https://github.com/macovaseas/GTR.
Time-series forecasting, model plugins
learning on time series and dynamical systems
A lightweight, model-agnostic plug-and-play module for time-series forecasting models.
[ 6, 4, 4, 8 ]
Accept (Poster)
Fanpu Cao, Lu Dai, Jindong Han, Hui Xiong
~Fanpu_Cao1, ~Lu_Dai1, ~Jindong_Han1, ~Hui_Xiong1
20250915
https://openreview.net/forum?id=QUJBPSfyui
QUJBPSfyui
@inproceedings{ cao2026enhancing, title={Enhancing Multivariate Time Series Forecasting with Global Temporal Retrieval}, author={Fanpu Cao and Lu Dai and Jindong Han and Hui Xiong}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=QUJBPSfyui} }
OpenReview/ICLR/figures/2026/accept_poster/QUJBPSfyui/Figure2.png
2
Figure 2: Overview of the Global Temporal Retriever (GTR): a plug-and-play module compatible with any MTSF forecaster. GTR operates in three stages: (1) retrieves corresponding segments from global temporal embedding; (2) aligns them with the input and uses 2D convolution to jointly model local and global periodicity; (3) fuses the result with the original input via residual connection.
<paragraph_1>Method Overview. In this paper, we propose the Global Temporal Retriever (GTR) — a lightweight, plug-and-play module designed to extend a model’s temporal receptive field beyond the immediate input window. As illustrated in Figure 2, the proposed method operates in two phases: (1) The GTR module enhances global cyclic patterns by dynamically retrieving periodic information from the global temporal embedding, then fusing it with the input series through a linear transformation and 2D convolution (cf. Section 3.2). (2) The enhanced representation is subsequently processed by the backbone model (a multi-layer perceptron in this work, cf. Section 3.3) for final forecasting.</paragraph_1>
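A hedged PyTorch sketch of the GTR plug-in as described: a learnable global temporal embedding covering one full cycle is indexed at the phase of the current window, stacked with the input, passed through a 2D convolution, and fused back via a residual connection. Shapes, the phase-indexing scheme, and hyperparameters are assumptions.

import torch
import torch.nn as nn

class GTR(nn.Module):
    """Sketch: retrieve the phase-aligned slice of a learnable global
    temporal embedding, jointly convolve it with the input window, and
    add the result back to the input (residual fusion)."""
    def __init__(self, cycle_len: int, window: int, n_vars: int):
        super().__init__()
        self.global_emb = nn.Parameter(torch.zeros(n_vars, cycle_len))  # one full cycle
        self.conv = nn.Conv2d(in_channels=2, out_channels=1, kernel_size=3, padding=1)
        self.cycle_len = cycle_len
        self.window = window

    def forward(self, x: torch.Tensor, start_idx: int) -> torch.Tensor:
        # x: (batch, n_vars, window); start_idx: position of the window within the cycle
        idx = (torch.arange(self.window) + start_idx) % self.cycle_len
        g = self.global_emb[:, idx].unsqueeze(0).expand(x.shape[0], -1, -1)
        stacked = torch.stack([x, g], dim=1)          # (batch, 2, n_vars, window)
        fused = self.conv(stacked).squeeze(1)         # joint local/global modeling
        return x + fused                              # residual fusion, same shape as x

gtr = GTR(cycle_len=168, window=96, n_vars=7)
out = gtr(torch.randn(4, 7, 96), start_idx=30)   # then fed to any MTSF backbone
print(out.shape)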
diagram
0.993829
OpenReview
ICLR
2026
From EduVisBench to EduVisAgent: A Benchmark and Multi-Agent Framework for Reasoning-Driven Pedagogical Visualization
While foundation models (FMs), such as diffusion models and large vision-language models (LVLMs), have been widely applied in educational contexts, their ability to generate pedagogically effective visual explanations remains limited. Most existing approaches focus primarily on textual reasoning, overlooking the critical role of structured and interpretable visualizations in supporting conceptual understanding. To better assess the visual reasoning capabilities of FMs in educational settings, we introduce EduVisBench, a multi-domain, multi-level benchmark. EduVisBench features diverse STEM problem sets requiring visually grounded solutions, along with a fine-grained evaluation rubric informed by pedagogical theory. Our empirical analysis reveals that existing models frequently struggle with the inherent challenge of decomposing complex reasoning and translating it into visual representations aligned with human cognitive processes. To address these limitations, we propose EduVisAgent, a multi-agent collaborative framework that coordinates specialized agents for instructional planning, reasoning decomposition, metacognitive prompting, and visualization design. Experimental results show that EduVisAgent substantially outperforms all baselines, achieving a 40.2% improvement and delivering more educationally aligned visualizations.
education, agent, benchmark, llm, application, visualisation
datasets and benchmarks
[ 6, 2, 2, 6, 6 ]
Accept (Poster)
Haonian Ji, Shi Qiu, Siyang Xin, Siwei Han, Zhaorun Chen, Dake Zhang, Hongyi Wang, Huaxiu Yao
~Haonian_Ji1, ~Shi_Qiu2, ~Siyang_Xin1, ~Siwei_Han1, ~Zhaorun_Chen1, ~Dake_Zhang3, ~Hongyi_Wang1, ~Huaxiu_Yao1
20250918
https://openreview.net/forum?id=FVCpV04ZRe
FVCpV04ZRe
@inproceedings{ ji2026from, title={From EduVisBench to EduVisAgent: A Benchmark and Multi-Agent Framework for Reasoning-Driven Pedagogical Visualization}, author={Haonian Ji and Shi Qiu and Siyang Xin and Siwei Han and Zhaorun Chen and Dake Zhang and Hongyi Wang and Huaxiu Yao}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=FVCpV04ZRe} }
OpenReview/ICLR/figures/2026/accept_poster/FVCpV04ZRe/Figure4.png
4
Figure 4: Workflow for evaluation.
<paragraph_1>Evaluation Protocol. As shown in Figure 4, models are provided with a visualization prompt together with a question and are asked to generate visual outputs. To enable fair comparison across heterogeneous outputs, we first canonicalize every model result to a raster image prior to scoring. This standardization is a crucial step that ensures all systems are evaluated on a level playing field, independent of their native modality or file format, and prevents format-specific rendering artifacts from biasing the assessment. Visuals produced directly as SVG or PNG are used as-is. Web pages (HTML or Next.js) are rendered in a headless browser and captured as screenshots of the primary view; when lightweight interactivity is present (e.g., buttons, tabs, or toggles), we systematically traverse the reachable states and retain one representative screenshot per state. All resulting images are then evaluated by GPT-4o along five dimensions defined in Appendix A.2 to compute an overall performance score. Each dimension is rated on a 0-5 scale; the ratings are summed (0-25) and, when appropriate, normalized to a percentage to yield the final overall score.</paragraph_1>
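A trivial sketch of the score aggregation described above (five 0-5 rubric ratings summed to a 0-25 total and normalized to a percentage); the function name is a placeholder.

def overall_score(dim_ratings):
    """Combine five 0-5 rubric ratings into a 0-25 total and a percentage."""
    assert len(dim_ratings) == 5 and all(0 <= r <= 5 for r in dim_ratings)
    total = sum(dim_ratings)
    return total, 100.0 * total / 25.0

print(overall_score([4, 3, 5, 2, 4]))   # (18, 72.0)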
diagram
0.932038
OpenReview
ICLR
2026
A State-Transition Framework for Efficient LLM Reasoning
While Long Chain-of-Thought (CoT) reasoning significantly improves Large Language Models (LLMs) performance on complex reasoning tasks, the substantial computational and memory costs of generating long CoT sequences limit their efficiency and practicality. Existing studies usually enhance the reasoning efficiency of LLMs by compressing CoT sequences. However, this approach conflicts with test‑time scaling, limiting the reasoning capacity of LLMs. In this paper, we propose an efficient reasoning framework that models the reasoning process of LLMs as a state‑transition process. Specifically, we first apply a linear attention mechanism to estimate the LLM’s reasoning state, which records the historical reasoning information from previous reasoning steps. Then, based on the query prompt and the reasoning state, the LLM can efficiently perform the current reasoning step and update the state. With the linear attention, each token in the current reasoning step can directly retrieve relevant historical reasoning information from the reasoning state, without explicitly attending to tokens in previous reasoning steps. In this way, the computational complexity of attention is reduced from quadratic to linear, significantly improving the reasoning efficiency of LLMs. In addition, we propose a state-based reasoning strategy to mitigate the over-thinking issue caused by noisy reasoning steps. Extensive experiments across multiple datasets and model sizes demonstrate that our framework not only improves the reasoning efficiency of LLMs but also enhances their reasoning performance.
Large Language Models, reasoning, efficient reasoning
foundation or frontier models, including LLMs
[ 4, 6, 6, 6 ]
Accept (Poster)
Liang Zhang, Yu Zhao, Longyue Wang, Tianqi Shi, Weihua Luo, Kaifu Zhang, Jinsong Su
~Liang_Zhang9, ~Yu_Zhao1, ~Longyue_Wang3, ~Tianqi_Shi1, ~Weihua_Luo2, ~Kaifu_Zhang2, ~Jinsong_Su1
20250919
https://openreview.net/forum?id=Zz8ikW4uWG
Zz8ikW4uWG
@inproceedings{ zhang2026a, title={A State-Transition Framework for Efficient {LLM} Reasoning}, author={Liang Zhang and Yu Zhao and Longyue Wang and Tianqi Shi and Weihua Luo and Kaifu Zhang and Jinsong Su}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=Zz8ikW4uWG} }
OpenReview/ICLR/figures/2026/accept_poster/Zz8ikW4uWG/Figure4.png
4
Figure 4: (a) shows the computational and memory efficiency of our model and the base model. (b) and (c) present our model’s performance with different values of hyper-parameters β and αmax, respectively. These experiments are conducted on Qwen2.5-1.5B.
<paragraph_1>Analysis of Computational and Memory Costs. We conduct experiments to further compare the computational and memory efficiency of our model and the base model across varying CoT lengths. The experimental results are presented in Figure 4(a). Although our model exhibits similar reasoning efficiency to the base model for shorter CoT, it significantly surpasses the base model once the CoT length exceeds 4K. In particular, when the CoT length reaches 32K, our model achieves over 40% faster reasoning speed than the base model. Moreover, our model maintains a nearly constant memory usage across varying CoT lengths, whereas that of the base model increases linearly with CoT length. Theoretically, our model’s advantages in computational and memory efficiency would become even more significant when FlashAttention-2 is disabled.</paragraph_1> <paragraph_2>Analysis of Hyper-Parameters. We also investigate the impact of the two key hyper-parameters, β and αmax, on the performance of our model. As illustrated in Figure 4(b)–(c), our model exhibits low sensitivity to these two hyper-parameters. Meanwhile, our model attains the best performance when β and αmax are set to 0.2 and 0.4, respectively. We further analyze the choice of these two hyperparameter values as follows:</paragraph_2>
diagram
0.868907
OpenReview
ICLR
2026
STITCH: Simultaneous Thinking and Talking with Chunked Reasoning for Spoken Language Models
Spoken Language Models (SLMs) are designed to take speech inputs and produce spoken responses. However, current SLMs lack the ability to perform an internal, unspoken thinking process before responding. In contrast, humans typically engage in complex mental reasoning internally, enabling them to communicate ideas clearly and concisely. Thus, integrating an unspoken thought process into SLMs is highly desirable. While naively generating a complete chain-of-thought (CoT) reasoning before starting to talk can enable thinking for SLMs, this induces additional latency for the speech response, as the CoT reasoning can be arbitrarily long. To solve this issue, we propose STITCH, a novel generation method that alternates between the generation of unspoken reasoning chunks and spoken response chunks. Since the audio duration of a chunk of spoken response is much longer than the time to generate the tokens in a chunk of spoken response, we use the remaining free time to generate the unspoken reasoning tokens. When a chunk of audio is played to the user, the model continues to generate the next unspoken reasoning chunk, achieving simultaneous thinking and talking. Remarkably, STITCH matches the latency of baselines that cannot generate unspoken CoT by design while outperforming those baselines by 15% on math reasoning datasets; STITCH also performs equally well on non-reasoning datasets as those baseline models. Some animations and demonstrations are on the project page: https://d223302.github.io/STITCH.
spoken language model, reasoning, chain-of-thought
applications to computer vision, audio, language, and other modalities
[ 6, 4, 6, 4 ]
Accept (Poster)
Cheng-Han Chiang, Xiaofei Wang, Linjie Li, Chung-Ching Lin, Kevin Lin, Shujie LIU, Zhendong Wang, Zhengyuan Yang, Hung-yi Lee, Lijuan Wang
~Cheng-Han_Chiang1, ~Xiaofei_Wang9, ~Linjie_Li1, ~Chung-Ching_Lin2, ~Kevin_Lin3, ~Shujie_LIU1, ~Zhendong_Wang1, ~Zhengyuan_Yang1, ~Hung-yi_Lee2, ~Lijuan_Wang1
20250915
https://openreview.net/forum?id=5Z1eMhCeTb
5Z1eMhCeTb
@inproceedings{ chiang2026stitch, title={{STITCH}: Simultaneous Thinking and Talking with Chunked Reasoning for Spoken Language Models}, author={Cheng-Han Chiang and Xiaofei Wang and Linjie Li and Chung-Ching Lin and Kevin Lin and Shujie LIU and Zhendong Wang and Zhengyuan Yang and Hung-yi Lee and Lijuan Wang}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=5Z1eMhCeTb} }
OpenReview/ICLR/figures/2026/accept_poster/5Z1eMhCeTb/Figure2.png
2
Figure 2: Different generation methods explored in this paper. The arrow represents the timeline for the SLM to generate the tokens; this timeline should not be confused with the timeline on which the end user receives the audio, i.e., the upper timeline in Figure 1. We plot tokens of the same type in a chunk using the same color. (a) GLM-4-Voice: Interleaving between text and speech token chunks (Section 2). This is the design of the original interleaved SLMs. (b) TBS: Generating a complete reasoning span and then interleaving between text and speech token chunks (Section 3.1). (c) STITCH-R: Alternating between reasoning token chunks, text token chunks, and speech token chunks (Section 3.2). (d) STITCH-S: Alternating between text token chunks, speech token chunks, and reasoning token chunks (Section 3.3).
<paragraph_1>In the interleaved decoding paradigm, the SLM backbone model generates a chunk of text tokens and a chunk of speech tokens alternately. The text tokens serve as guidance for future speech tokens by transcribing what the speech tokens will say. For example, GLM-4-Voice (Zeng et al., 2024) interleaves between generating Ntext = 13 text tokens and Nspeech = 26 speech tokens. After a chunk of speech tokens is generated, it is immediately synthesized into audio by the speech decoder and streamed to the user, enabling low latency and real-time interaction. A figurative illustration of this output format is shown in Figure 2(a). When the chunks of text tokens are concatenated, they should correspond to the transcription of the speech tokens. The ratio of text tokens to speech tokens is carefully selected so that the text tokens always stay ahead of the speech tokens, ensuring that the content of the speech tokens has already appeared in previous text tokens. Once all the text tokens are generated, the model continues to generate the remaining speech tokens.</paragraph_1> <paragraph_2>To teach SLMs to operate in TBS, we construct the training data DTBS where each training instance has the form (x, z, y): x is the speech token sequence of the user input, z is the reasoning token sequence, and y = [t1 ◦ s1 ◦ t2 ◦ s2, · · · ] is the token sequence for the speech output that interleaves between Ntext text tokens (tj) and Nspeech speech tokens (sj) (the last text token chunk may have fewer than Ntext tokens, while the last speech token span can have more than Nspeech tokens); ◦ denotes concatenating two token sequences. We defer how we construct DTBS from existing datasets until Section 4.1. A figurative illustration of the target output for TBS is in Figure 2(b).</paragraph_2> <paragraph_3>STITCH-R realizes this "thinking when speaking" by alternating fixed-length (Nreason) partial reasoning spans, fixed-length (Ntext) text token spans, and fixed-length (Nspeech) speech token spans. The partial reasoning spans are for inner thinking, while the text and speech token spans are for the spoken response. Stitching the partial reasoning spans together forms a complete CoT reasoning. A figurative illustration of the output of STITCH-R is shown in Figure 2(c), and some samples generated by STITCH-R are shown in Table 5 in the Appendix. The "R" in STITCH-R stands for "reasoning first" since it generates a partial reasoning chunk before speaking; this is used to distinguish it from the "speaking first" STITCH-S that will be introduced in Section 3.3.</paragraph_3> <paragraph_4>To construct the training data for STITCH-R, we simply split the full reasoning CoT z in DTBS into chunks of Nreason tokens {z1, z2, · · · }, where each zi except the last chunk has Nreason tokens. Next, we interleave these chunks with the interleaved text-speech token sequence y = [t1 ◦ s1 ◦ t2 ◦ s2, · · · ] to create interleaved data of the form [z1 ◦ t1 ◦ s1 ◦ z2 ◦ t2 ◦ s2 ◦ · · · ], as shown in Figure 2(c). If the number of reasoning spans exceeds the number of text spans, this indicates that the reasoning token spans think slower than the text token spans, so we remove the sample from the training data.2 The model is fine-tuned to auto-regressively predict the interleaved reasoning-text-speech token spans using standard language modeling cross-entropy loss.</paragraph_4> <paragraph_5>To fully remove the latency of waiting for the first partial reasoning span, we propose an alternative generation pipeline that starts by generating the text and speech token chunks and then generates the first reasoning chunk; the model continues to interleave this generation pattern. We call this STITCH-S since it generates the speech response first; an illustrative figure is shown in Figure 2(d).</paragraph_5>
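A toy sketch of the STITCH-R data construction described above: the full reasoning CoT is split into chunks of Nreason tokens and interleaved with the text/speech chunk pairs, and samples whose reasoning outlasts the text chunks are dropped. The token representation and chunk sizes are placeholders.

def chunk(seq, size):
    return [seq[i:i + size] for i in range(0, len(seq), size)]

def build_stitch_r_sequence(z, text_speech_pairs, n_reason=16):
    """Interleave reasoning chunks z_i with (text, speech) chunk pairs:
    [z1, t1, s1, z2, t2, s2, ...]. Samples where the reasoning runs longer
    than the text chunks are dropped (the reasoning would 'think slower')."""
    z_chunks = chunk(z, n_reason)
    if len(z_chunks) > len(text_speech_pairs):
        return None  # discard this training sample
    out = []
    for i, (t, s) in enumerate(text_speech_pairs):
        if i < len(z_chunks):
            out.extend(z_chunks[i])   # unspoken reasoning chunk
        out.extend(t)                 # text chunk (N_text tokens)
        out.extend(s)                 # speech chunk (N_speech tokens)
    return out

# Toy tokens: a 40-token reasoning chain and three text/speech chunk pairs
z = [f"r{i}" for i in range(40)]
pairs = [([f"t{j}_{i}" for i in range(13)], [f"s{j}_{i}" for i in range(26)]) for j in range(3)]
print(build_stitch_r_sequence(z, pairs, n_reason=16)[:20])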
diagram
0.959533
OpenReview
ICLR
2026
Seeing Across Views: Benchmarking Spatial Reasoning of Vision-Language Models in Robotic Scenes
Vision-language models (VLMs) are essential to Embodied AI, enabling robots to perceive, reason, and act in complex environments. They also serve as the foundation for the recent Vision-Language-Action (VLA) models. Yet, most evaluations of VLMs focus on single-view settings, leaving their ability to integrate multi-view information largely underexplored. At the same time, multi-camera setups are increasingly standard in robotic platforms, as they provide complementary perspectives to mitigate occlusion and depth ambiguity. Whether VLMs can effectively leverage such multi-view inputs for robotic reasoning therefore remains an open question. To bridge this gap, we introduce MV-RoboBench, a benchmark specifically designed to evaluate the multi-view spatial reasoning capabilities of VLMs in robotic manipulation. MV-RoboBench consists of 1.7k manually curated QA items across eight subtasks, divided into two primary categories: spatial understanding and robotic execution. We evaluate a diverse set of existing VLMs, including both open-source and closed-source models, along with enhanced versions augmented by Chain-of-Thought (CoT)-inspired enhancements. The results show that state-of-the-art models remain far below human performance, underscoring the substantial challenges VLMs face in multi-view robotic perception. Additionally, our analysis uncovers two key findings: (i) spatial intelligence and robotic task reasoning are correlated in multi-view robotic scenarios; and (ii) strong performance on existing general-purpose single-view spatial understanding benchmarks does not reliably translate to success in the robotic spatial tasks assessed by our benchmark. We release MV-RoboBench as an open resource to foster progress in spatially grounded VLMs and VLAs, providing a foundation for advancing embodied multi-view intelligence in robotics.
spatial understanding, benchmark, multi-view, vlm, robotics
datasets and benchmarks
MV-RoboBench evaluates whether vision–language models can integrate multi-view images for precise robotic perception and decision-making, revealing major gaps compared to human performance.
[ 8, 6, 6, 6 ]
Accept (Poster)
ZhiYuan Feng, Zhaolu Kang, Qijie Wang, Zhiying Du, Jiongrui Yan, Shi Shubin, Chengbo Yuan, Huizhi Liang, Yu Deng, Qixiu Li, Rushuai Yang, Ruichuan An, Leqi Zheng, Weijie Wang, Shawn Chen, Sicheng Xu, Yaobo Liang, Jiaolong Yang, Baining Guo
~ZhiYuan_Feng1, ~Zhaolu_Kang2, ~Qijie_Wang1, ~Zhiying_Du1, ~Jiongrui_Yan1, ~Shi_Shubin3, ~Chengbo_Yuan2, ~Huizhi_Liang1, ~Yu_Deng2, ~Qixiu_Li1, ~Rushuai_Yang1, ~Ruichuan_An1, ~Leqi_Zheng1, ~Weijie_Wang2, ~Shawn_Chen1, ~Sicheng_Xu1, ~Yaobo_Liang1, ~Jiaolong_Yang3, ~Baining_Guo1
20250913
https://openreview.net/forum?id=jXDZJAfRZB
jXDZJAfRZB
@inproceedings{ feng2026seeing, title={Seeing Across Views: Benchmarking Spatial Reasoning of Vision-Language Models in Robotic Scenes}, author={ZhiYuan Feng and Zhaolu Kang and Qijie Wang and Zhiying Du and Jiongrui Yan and Shi Shubin and Chengbo Yuan and Huizhi Liang and Yu Deng and Qixiu Li and Rushuai Yang and Ruichuan An and Leqi Zheng and Weijie Wang and Shawn Chen and Sicheng Xu and Yaobo Liang and Jiaolong Yang and Baining Guo}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=jXDZJAfRZB} }
OpenReview/ICLR/figures/2026/accept_poster/jXDZJAfRZB/Figure12.png
12
Figure 12: Illustration of the right-handed coordinate system defined relative to each camera.
<paragraph_1>Directional convention. In summary, +z = upward, −z = downward; +y = forward, −y = backward; +x = right, −x = left. Figure 12 provides an illustration of this definition.</paragraph_1>
diagram
0.955413
OpenReview
ICLR
2026
R-Horizon: How Far Can Your Large Reasoning Model Really Go in Breadth and Depth?
Recent trends in test-time scaling for reasoning models (e.g., OpenAI o1, DeepSeek-R1) have led to remarkable improvements through long Chain-of-Thought (CoT). However, existing benchmarks mainly focus on immediate, single-horizon tasks, failing to adequately evaluate models’ ability to understand and respond to complex, long-horizon scenarios. To address this incomplete evaluation of Large Reasoning Models (LRMs), we propose R-HORIZON, a method designed to stimulate long-horizon reasoning behaviors in LRMs through query composition. Based on R-HORIZON, we construct a long-horizon reasoning benchmark, comprising complex multi-step reasoning tasks with interdependent problems that span long reasoning horizons. Through comprehensive evaluation of LRMs using the R-HORIZON benchmark, we find that even the most advanced LRMs suffer significant performance degradation. Our analysis reveals that LRMs exhibit limited effective reasoning length and struggle to allocate thinking budget across multiple problems appropriately. Recognizing these limitations, we use R-HORIZON to construct long-horizon reasoning data for reinforcement learning with verified rewards (RLVR). Compared to training with single-horizon data, RLVR with R-HORIZON not only substantially improves performance on the multi-horizon reasoning tasks, but also promotes accuracy on standard reasoning tasks (+7.5 on AIME2024). These results position R-HORIZON as a scalable, controllable, and low-cost paradigm for enhancing and evaluating the long-horizon reasoning capabilities of LRMs.
Large Reasoning Models, Long Horizon Reasoning
foundation or frontier models, including LLMs
A scalable, controllable, and low-cost paradigm for enhancing and evaluating the long-horizon reasoning capabilities of LRMs
[ 6, 6, 6, 6 ]
Accept (Poster)
Yi Lu, Jianing Wang, Linsen Guo, Wei He, Hongyin Tang, Tao Gui, Xuanjing Huang, Xuezhi Cao, Wei Wang, Xunliang Cai
~Yi_Lu7, ~Jianing_Wang4, ~Linsen_Guo2, ~Wei_He14, ~Hongyin_Tang1, ~Tao_Gui1, ~Xuanjing_Huang1, ~Xuezhi_Cao1, ~Wei_Wang41, ~Xunliang_Cai1
20250916
https://openreview.net/forum?id=rRB1bYErbL
rRB1bYErbL
@inproceedings{ lu2026rhorizon, title={R-Horizon: How Far Can Your Large Reasoning Model Really Go in Breadth and Depth?}, author={Yi Lu and Jianing Wang and Linsen Guo and Wei He and Hongyin Tang and Tao Gui and Xuanjing Huang and Xuezhi Cao and Wei Wang and Xunliang Cai}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=rRB1bYErbL} }
OpenReview/ICLR/figures/2026/accept_poster/rRB1bYErbL/Figure2.png
2
Figure 2: The R-HORIZON data composition pipeline is illustrated in (a)-(c). We leverage R-HORIZON to construct a comprehensive long-horizon reasoning evaluation benchmark spanning 6 tasks and generate multi-horizon training data for long-horizon reinforcement learning.
<paragraph_1>We propose R-HORIZON, a method designed to stimulate long-horizon reasoning behaviors in LRMs via query composition. As illustrated in Figure 2, R-HORIZON supports the concatenation of three types of expanded questions and can be employed in both the training and evaluation stages to enhance and evaluate the long-horizon capabilities of LRMs.</paragraph_1>
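A toy sketch of query composition in the spirit of R-HORIZON: independent problems are chained so that each later problem references the answer of the previous one, forcing multi-step, interdependent reasoning over a long horizon. The composition rule and wording here are assumptions, not the paper's exact construction.

def compose_long_horizon_query(problems):
    """Chain problems into one multi-step query: the answer of step i is
    referenced as a variable in step i+1, so the model must solve them in
    order. `problems` is a list of (question, answer) pairs."""
    parts, prev_var = [], None
    for i, (question, _answer) in enumerate(problems, start=1):
        if prev_var is not None:
            question = f"Let {prev_var} be the answer to the previous problem. " + question
        parts.append(f"Problem {i}: {question}")
        prev_var = f"x{i}"
    parts.append("Solve all problems in order and report every answer.")
    return "\n".join(parts)

bank = [("Compute 3 + 4.", 7), ("Multiply x1 by 5.", 35), ("Subtract 10 from x2.", 25)]
print(compose_long_horizon_query(bank))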
diagram
0.95814
OpenReview
ICLR
2,026
IGC-Net for conditional average potential outcome estimation over time
Estimating potential outcomes for treatments over time based on observational data is important for personalized decision-making in medicine. However, many existing methods for this task fail to properly adjust for time-varying confounding and thus yield biased estimates. There are only a few neural methods with proper adjustments, but these have inherent limitations (e.g., division by propensity scores that are often close to zero), which result in poor performance. As a remedy, we introduce the iterative G-computation network (IGC-Net). Our IGC-Net is a novel, neural end-to-end model which adjusts for time-varying confounding in order to estimate conditional average potential outcomes (CAPOs) over time. Specifically, our IGC-Net is the first neural model to perform fully regression-based iterative G-computation for CAPOs in the time-varying setting. We evaluate the effectiveness of our IGC-Net across various experiments. In sum, this work represents a significant step towards personalized decision-making from electronic health records.
causal inference, potential outcomes, treatment effects, healthcare
causal reasoning
We develop a novel neural method that performs G-computation in an iterative end-to-end training algorithm for conditional average potential outcome estimation over time.
[ 8, 6, 2, 4, 4 ]
Accept (Poster)
Konstantin Hess, Dennis Frauen, Valentyn Melnychuk, Stefan Feuerriegel
~Konstantin_Hess1, ~Dennis_Frauen1, ~Valentyn_Melnychuk1, ~Stefan_Feuerriegel1
20250916
https://openreview.net/forum?id=ZmhpqpKzAT
ZmhpqpKzAT
@inproceedings{ hess2026igcnet, title={{IGC}-Net for conditional average potential outcome estimation over time}, author={Konstantin Hess and Dennis Frauen and Valentyn Melnychuk and Stefan Feuerriegel}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=ZmhpqpKzAT} }
OpenReview/ICLR/figures/2026/accept_poster/ZmhpqpKzAT/Figure1.png
1
Figure 1: Iterative G-computation network. Neural end-to-end architecture and training of our iterative G-computation network.
<paragraph_1>Our IGC-Net consists of two key components (see Figure 1): (i) a neural backbone z_ϕ(·), which can be, for example, an LSTM or a transformer, and (ii) several G-computation heads {g_ϕ^δ(·)}_{δ=0}^{τ−1}, where ϕ denotes the trainable weights. First, the neural backbone encodes the entire observed history. Then, the G-computation heads take the encoded history and perform the iterative regressions according to Equation 5. For all t = 1, . . . , T − τ and δ = 0, . . . , τ − 1, the components are designed as follows:</paragraph_1>
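A hedged PyTorch skeleton of the two components named above: a sequence backbone encoding the observed history and τ G-computation heads applied on top of it. The head inputs, the treatment encoding, and the LSTM choice are illustrative assumptions; the iterative regression targets of Equation 5 are not reproduced here.

import torch
import torch.nn as nn

class IGCNetSketch(nn.Module):
    """Skeleton of the two components: a sequence backbone z_phi that encodes
    the observed history, and tau G-computation heads g_phi^delta that perform
    the iterative regressions on top of the encoded history."""
    def __init__(self, input_dim: int, hidden: int, tau: int):
        super().__init__()
        self.backbone = nn.LSTM(input_dim, hidden, batch_first=True)   # z_phi
        self.heads = nn.ModuleList(
            [nn.Linear(hidden + tau, 1) for _ in range(tau)]            # g_phi^delta
        )
        self.tau = tau

    def forward(self, history: torch.Tensor, future_treatments: torch.Tensor):
        # history: (batch, T, input_dim); future_treatments: (batch, tau)
        _, (h, _) = self.backbone(history)
        enc = h[-1]                                                     # encoded history
        inp = torch.cat([enc, future_treatments], dim=-1)
        # delta = tau-1, ..., 0: each head estimates one step of the iteration
        return [self.heads[d](inp) for d in reversed(range(self.tau))]

model = IGCNetSketch(input_dim=6, hidden=32, tau=3)
preds = model(torch.randn(8, 20, 6), torch.randint(0, 2, (8, 3)).float())
print(len(preds), preds[0].shape)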
diagram
0.992686
OpenReview
ICLR
2,026
**TandemFoilSet**: Datasets for Flow Field Prediction of Tandem-Airfoil Through the Reuse of Single Airfoils
Accurate simulation of flow fields around tandem geometries is critical for engineering design but remains computationally intensive. Existing machine learning approaches typically focus on simpler cases and lack evaluation on multi-body configurations. To support research in this area, we present **TandemFoilSet**: five tandem-airfoil datasets (4152 tandem-airfoil simulations) paired with four single-airfoil counterparts, for a total of 8104 CFD simulations. We provide benchmark results of a curriculum learning framework using a directional integrated distance representation, residual pre-training, training schemes based on freestream conditions and smooth-combined estimated fields, and a domain decomposition strategy. Evaluations demonstrate notable gains in prediction accuracy. We believe these datasets will enable future work on scalable, data-driven flow prediction for tandem-airfoil scenarios.
Physics-informed Graph Neural Network; Tandem-Airfoil; Flow Field Prediction; CFD; Aerodynamics;
datasets and benchmarks
We introduce TandemFoilSet, a paired set of 5 tandem-airfoil + 4 single-airfoil CFD datasets (8,104 simulations total) and baseline benchmarks to enable scalable ML flow-field prediction for tandem-airfoil interactions.
[ 2, 6, 6, 4 ]
Accept (Poster)
Wei Xian Lim, Loh Sher En Jessica, Zenong Li, Thant Zin Oo, Wai Lee Chan, Adams Wai-Kin Kong
~Wei_Xian_Lim2, ~Loh_Sher_En_Jessica1, ~Zenong_Li1, ~Thant_Zin_Oo1, ~Wai_Lee_Chan1, ~Adams_Wai-Kin_Kong1
20250918
https://openreview.net/forum?id=4Z0P4Nbosn
4Z0P4Nbosn
@inproceedings{ lim2026tandemfoilset, title={**TandemFoilSet**: Datasets for Flow Field Prediction of Tandem-Airfoil Through the Reuse of Single Airfoils}, author={Wei Xian Lim and Loh Sher En Jessica and Zenong Li and Thant Zin Oo and Wai Lee Chan and Adams Wai-Kin Kong}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=4Z0P4Nbosn} }
OpenReview/ICLR/figures/2026/accept_poster/4Z0P4Nbosn/Figure16.png
16
Figure 16: Determining obstruction of a boundary point from the reference point in a (a) single-object case and (b) double-object case. Note how a boundary point that is unobstructed in the first case may be obstructed by another object in the second case.
<paragraph_1>As mentioned previously, the DID was estimated numerically following the procedure outlined in Algorithm 1. Although extending the theoretical definition of DID to multiple geometries is conceptually straightforward, the numerical calculations grow significantly more complex with each additional object. These challenges are indicated in red within Alg. 1, and are illustrated in Figs. 16 and 17.</paragraph_1> <paragraph_2>The first challenge is determining whether the point on the object boundary k is obstructed from the point of reference i. As shown in Fig. 16(a), in a single-object scenario, it suffices to ascertain</paragraph_2> <paragraph_3>that either boundary face adjacent to k is on the side of the object that faces i. However, as seen in Fig. 16(b), there is the possibility that k is obstructed from i by the boundary faces of another object. Determining obstruction is a process that increases in complexity with the addition of every object.</paragraph_3>
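A small sketch of the obstruction test described above, implemented as a 2D segment-intersection check between the line of sight (reference point to boundary point) and the boundary faces of other objects. Representing each face as a pair of endpoints is an assumption for illustration.

def _ccw(a, b, c):
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def segments_intersect(p1, p2, q1, q2):
    """True if segment p1-p2 strictly crosses segment q1-q2 (2D)."""
    return (_ccw(p1, p2, q1) * _ccw(p1, p2, q2) < 0 and
            _ccw(q1, q2, p1) * _ccw(q1, q2, p2) < 0)

def is_obstructed(ref_point, boundary_point, other_faces):
    """A boundary point is obstructed from the reference point if the line of
    sight between them crosses any boundary face of another object."""
    return any(segments_intersect(ref_point, boundary_point, a, b)
               for a, b in other_faces)

faces_obj2 = [((1.0, -0.5), (1.0, 0.5))]          # one face of a second airfoil
print(is_obstructed((0.0, 0.0), (2.0, 0.0), faces_obj2))   # True: the face blocks the view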
diagram
0.991554
OpenReview
ICLR
2026
Understanding and Improving Length Generalization in Hierarchical Sparse Attention Models
Effectively processing long contexts is a critical challenge for language models. While standard Transformers are limited by quadratic complexity and poor length extrapolation, alternative architectures like sliding window attention and state space models sacrifice the ability to effectively utilize the full context due to their fixed-size memory. Chunk-based sparse attention has emerged as a promising paradigm for extreme length generalization, yet the key architectural principles underpinning its success are not yet fully understood. In this work, we present a systematic dissection of these models to identify the core components driving their performance. Through a unified framework and comprehensive ablation studies, we demonstrate that a combination of three design principles is critical: (1) an expressive, non-linear Chunk Encoder with a dedicated CLS token to produce representations for retrieval; (2) a Bypassing Residual Path to stably integrate retrieved global information without it being overridden by the local residual stream; and (3) enforced selection sparsity during pre-training to bridge the train-test distribution gap. We provide a theoretical motivation for intra-chunk information processing and landmark generation. By combining these principles, we establish a new state-of-the-art for training-free length extrapolation, successfully generalizing models trained on a 4K context to 32 million tokens on RULER and BABILong. Our findings provide a clear and empirically-grounded set of design principles for developing future, highly-capable long-context language models.
long-context modeling, length generalization, length extrapolation, sparse attention, language modeling
unsupervised, self-supervised, semi-supervised, and supervised representation learning
We demonstrate that extreme length generalization in hierarchical sparse attention is enabled by the interplay of an expressive chunking, a stable bypassing residual path, and enforced retrieval sparsity.
[ 4, 6, 4, 8 ]
Accept (Poster)
Jiaqi Leng, Xiang Hu, Junxiong Wang, Jianguo Li, Wei Wu, Yucheng Lu
~Jiaqi_Leng3, ~Xiang_Hu2, ~Junxiong_Wang1, ~Jianguo_Li2, ~Wei_Wu1, ~Yucheng_Lu1
20250912
https://openreview.net/forum?id=iHqdSQk6qc
iHqdSQk6qc
@inproceedings{ leng2026understanding, title={Understanding and Improving Length Generalization in Hierarchical Sparse Attention Models}, author={Jiaqi Leng and Xiang Hu and Junxiong Wang and Jianguo Li and Wei Wu and Yucheng Lu}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=iHqdSQk6qc} }
OpenReview/ICLR/figures/2026/accept_poster/iHqdSQk6qc/Figure2.png
2
Figure 2: Design of Encoder: (a): Encoder w/o CLS (b): Encoder with a learnable CLS token.
<paragraph_1>The different architectural configurations we investigate, summarized in Table 1, can be expressed as joint definitions of (f, g). In the “w/ CLS” variant, we prepend a learnable token, xCLS, to the input chunk H[i], as shown in Fig. 2. The Encoder processes this combined sequence, and its output corresponding to the xCLS position is used to form the landmark, while the remaining outputs form the KV cache.</paragraph_1>
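A hedged PyTorch sketch of the "w/ CLS" encoder variant: a learnable CLS token is prepended to the chunk, the encoder output at the CLS position forms the landmark used for retrieval, and the remaining outputs form the chunk's KV states. The layer sizes and the two-layer Transformer encoder are placeholders.

import torch
import torch.nn as nn

class ChunkEncoderWithCLS(nn.Module):
    """Sketch of the 'w/ CLS' chunk encoder: the output at the CLS position
    becomes the landmark used for retrieval; the remaining outputs form the
    chunk's KV states."""
    def __init__(self, d_model: int = 64, n_heads: int = 4):
        super().__init__()
        self.cls = nn.Parameter(torch.randn(1, 1, d_model))
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, chunk: torch.Tensor):
        # chunk: (batch, chunk_len, d_model)
        cls = self.cls.expand(chunk.shape[0], -1, -1)
        out = self.encoder(torch.cat([cls, chunk], dim=1))
        landmark, kv = out[:, 0], out[:, 1:]
        return landmark, kv

enc = ChunkEncoderWithCLS()
landmark, kv = enc(torch.randn(2, 16, 64))
print(landmark.shape, kv.shape)   # (2, 64) and (2, 16, 64)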
diagram
0.911093
OpenReview
ICLR
2026
Omni-Weather: Unified Multimodal Foundation Model for Weather Generation and Understanding
Weather modeling requires both accurate prediction and mechanistic interpretation, yet existing methods treat these goals in isolation, separating generation from understanding. To address this gap, we present Omni-Weather, the first multimodal foundation model that unifies weather generation and understanding within a single architecture. Omni-Weather integrates a radar encoder for weather generation tasks, followed by unified processing using a shared self-attention mechanism. Moreover, we construct a Chain-of-Thought dataset for causal reasoning in weather generation, enabling interpretable outputs and improved perceptual quality. Extensive experiments show Omni-Weather achieves state-of-the-art performance in both weather generation and understanding. Our findings further indicate that generative and understanding tasks in the weather domain can mutually enhance each other. Omni-Weather also demonstrates the feasibility and value of unifying weather generation and understanding.
AI for Science, Unified foundation model, Interpretable reasoning
applications to physical sciences (physics, chemistry, biology, etc.)
[ 6, 6, 4, 8 ]
Accept (Poster)
Zhiwang Zhou, Yuandong Pu, Xuming He, Yidi Liu, Yixin Chen, Junchao Gong, Xiang Zhuang, Wanghan Xu, Qinglong Cao, SHIXIANG TANG, Yihao Liu, Wenlong Zhang, LEI BAI
~Zhiwang_Zhou1, ~Yuandong_Pu1, ~Xuming_He4, ~Yidi_Liu3, ~Yixin_Chen26, ~Junchao_Gong1, ~Xiang_Zhuang1, ~Wanghan_Xu1, ~Qinglong_Cao1, ~SHIXIANG_TANG1, ~Yihao_Liu1, ~Wenlong_Zhang3, ~LEI_BAI1
20250910
https://openreview.net/forum?id=3WnXsp72v6
3WnXsp72v6
@inproceedings{ zhou2026omniweather, title={Omni-Weather: Unified Multimodal Foundation Model for Weather Generation and Understanding}, author={Zhiwang Zhou and Yuandong Pu and Xuming He and Yidi Liu and Yixin Chen and Junchao Gong and Xiang Zhuang and Wanghan Xu and Qinglong Cao and SHIXIANG TANG and Yihao Liu and Wenlong Zhang and LEI BAI}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=3WnXsp72v6} }
OpenReview/ICLR/figures/2026/accept_poster/3WnXsp72v6/Figure2.png
2
Figure 2: Comparison between separated architectures for weather understanding / generation (top) and unified framework with shared self-attention (bottom).
<paragraph_1>Despite these advances, unified architectures remain absent in the weather domain. As shown in Figure 2, existing approaches are divided into two disjoint paradigms: generation models such as ClimaX Nguyen et al. (2023) and WeatherGFM Zhao et al. (2024) excel at forecasting and downscaling but lack interpretation, while understanding models such as RadarQA He et al. (2025a) and WeatherQA Ma et al. (2024) provide diagnostic reasoning yet cannot synthesize physical fields. However, atmospheric systems are inherently multiscale, shaped by storm genesis, intensification, and decay, where accurate prediction is often accompanied by the need for mechanistic interpretation. Moreover, extreme events such as rapid intensification of cyclones demand models that can not only predict hazardous outcomes but also explain the underlying drivers for actionable decision-making. Current studies isolate these links—generative nowcasting models do not understand radar observations, and MLLMs do not predict radar variables. Bridging this gap with a foundation model that unifies generation and understanding is therefore an urgent requirement for the weather domain.</paragraph_1> <paragraph_2>To this end, we propose Omni-Weather, a unified multimodal foundation model for both weather generation and understanding. By consolidating these tasks within a shared backbone (Figure 2, bottom), we further propose a Chain-of-Thought dataset tailored for causal reasoning in generation tasks, which enables Omni-Weather to be finetuned with explicit reasoning supervision and to perform thinking inference. Through this integration, Omni-Weather bridges predictive accuracy with interpretability, marking a step toward reasoning-unified foundation models for weather.</paragraph_2>
diagram
0.992263
OpenReview
ICLR
2,026
Weight Space Representation Learning on Diverse NeRF Architectures
Neural Radiance Fields (NeRFs) have emerged as a groundbreaking paradigm for representing 3D objects and scenes by encoding shape and appearance information into the weights of a neural network. Recent studies have demonstrated that these weights can be used as input for frameworks designed to address deep learning tasks; however, such frameworks require NeRFs to adhere to a specific, predefined architecture. In this paper, we introduce the first framework capable of processing NeRFs with diverse architectures and performing inference on architectures unseen at training time. We achieve this by training a Graph Meta-Network within an unsupervised representation learning framework, and show that a contrastive objective is conducive to obtaining an architecture-agnostic latent space. In experiments conducted across 13 NeRF architectures belonging to three families (MLPs, tri-planes, and, for the first time, hash tables), our approach demonstrates robust performance in classification, retrieval, and language tasks involving multiple architectures, even unseen at training time, while also matching or exceeding the results of existing frameworks limited to single architectures.
weight space learning, representation learning, metanetworks, graph metanetworks, neural fields, neural radiance fields, NeRF, implicit neural representations, INR
unsupervised, self-supervised, semi-supervised, and supervised representation learning
We present the first framework that performs tasks on NeRFs by processing their weights and is able to work on diverse architectures
[ 6, 4, 4, 6 ]
Accept (Poster)
Francesco Ballerini, Pierluigi Zama Ramirez, Luigi Di Stefano, Samuele Salti
~Francesco_Ballerini1, ~Pierluigi_Zama_Ramirez1, ~Luigi_Di_Stefano2, ~Samuele_Salti1
20250918
https://openreview.net/forum?id=u90rHXaBve
u90rHXaBve
@inproceedings{ ballerini2026weight, title={Weight Space Representation Learning on Diverse Ne{RF} Architectures}, author={Francesco Ballerini and Pierluigi Zama Ramirez and Luigi Di Stefano and Samuele Salti}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=u90rHXaBve} }
OpenReview/ICLR/figures/2026/accept_poster/u90rHXaBve/Figure5.png
5
Figure 5: Parameter graph conversion. Top left: parameter graph representation of an MLP, proposed by Lim et al. (2024). Right: parameter graph representation of a tri-plane, proposed by Lim et al. (2024). Dotted edges should be connected to the C channel nodes, but are not fully drawn for better visual clarity. Bottom left: our parameter graph representation of a multi-resolution hash table.
<paragraph_1>The parameter graph conversion of an MLP, a tri-plane, and a multi-resolution hash table is depicted in Fig. 5, with additional details compared to Fig. 2 (left).</paragraph_1>
diagram
0.883032
OpenReview
ICLR
2,026
Toward Effective Tool-Integrated Reasoning via Self-Evolved Preference Learning
Tool-Integrated Reasoning (TIR) enables large language models (LLMs) to enhance their internal reasoning ability by integrating external tools. However, models with TIR often exhibit suboptimal behaviors, including insufficient tool calls, excessive tool calls, and overthinking after receiving tool call results. How to empower LLMs to perform TIR efficiently and accurately, while stabilizing the reasoning process, remains an open challenge. In this paper, we first analyze the impact of tool calls on model reasoning from the perspective of information entropy. We find that when tool call results are provided, the information entropy of subsequent reasoning content will show a clear trend of change, and the overall information entropy of the reasoning chain will vary depending on the number of tool calls. Based on these observations, we propose Tool-Light, a framework designed to encourage LLMs to perform TIR efficiently and accurately. Our framework consists of dataset construction and multi-stage fine-tuning. For dataset construction, we use the trained model for continuous self-evolved sampling, integrating two methods: vanilla sampling and entropy-guided sampling. At the same time, during the sampling process, we design strict criteria for selecting positive-negative pairs. For the training process, we introduce a two-stage method, which includes Supervised Fine-Tuning (SFT) and Self-Evolved Direct Preference Optimization (DPO). Test results on 10 datasets reveal the effectiveness of Tool-Light, significantly improving the efficiency and accuracy of the model in completing TIR tasks.
reasoning model, tool-integrated reasoning, self-evolved training, information entropy
foundation or frontier models, including LLMs
[ 4, 6, 8, 6 ]
Accept (Poster)
Yifei Chen, Guanting Dong, Zhicheng Dou
~Yifei_Chen12, ~Guanting_Dong1, ~Zhicheng_Dou1
20250916
https://openreview.net/forum?id=mNeitRAdWV
mNeitRAdWV
@inproceedings{ chen2026toward, title={Toward Effective Tool-Integrated Reasoning via Self-Evolved Preference Learning}, author={Yifei Chen and Guanting Dong and Zhicheng Dou}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=mNeitRAdWV} }
OpenReview/ICLR/figures/2026/accept_poster/mNeitRAdWV/Figure3.png
3
Figure 3: The overall structure of Tool-Light’s training pipeline. Among these stages, the Self-Evolved DPO Alignment stage conducts multiple rounds of training.
<paragraph_1>Overview. We propose Tool-Light, a multi-stage training pipeline aiming to improve the effectiveness of model tool calls. As shown in Figures 2 and 3, Tool-Light consists of two key components: (1) Dataset construction, which includes carefully designed sampling strategies to screen training data. (2) Two-stage TIR training paradigm, which trains the model successively with SFT and self-evolved DPO training. In the self-evolved DPO training stage, we design pre-aligned DPO training and self-evolved DPO alignment stages to gradually improve the model’s capabilities.</paragraph_1> <paragraph_2>Based on existing research (Li et al., 2025g; Dong et al., 2025a; Song et al., 2025), we propose a two-stage self-evolved training pipeline to gradually boost the effectiveness and stability of the model’s TIR process. The specific pipeline is shown in Figure 3.</paragraph_2> <paragraph_3>The SFT stage maximizes the likelihood of the curated trajectories, i.e., maxθ Σ(x,y)∈D log Pθ(y|x). As shown in the first step of Figure 3, this step aims to help the model quickly acquire the ability to complete TIR tasks.</paragraph_3>
diagram
0.939537
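The SFT objective referenced in the Tool-Light figure context above is the standard token-level negative log-likelihood. Below is a minimal sketch of that loss, assuming a causal language model that returns per-position logits; all tensor shapes and the padding convention are illustrative, not the authors' code.

```python
# Minimal sketch of the SFT step: maximize log P_theta(y | x) over (x, y) pairs,
# i.e. minimize token-level negative log-likelihood. Shapes and names are illustrative.
import torch
import torch.nn.functional as F

def sft_loss(logits: torch.Tensor, target_ids: torch.Tensor, pad_id: int = 0) -> torch.Tensor:
    """logits: (batch, seq_len, vocab); target_ids: (batch, seq_len)."""
    # Shift so that position t predicts token t+1, as in standard causal LM training.
    logits = logits[:, :-1, :].contiguous()
    targets = target_ids[:, 1:].contiguous()
    return F.cross_entropy(
        logits.view(-1, logits.size(-1)),
        targets.view(-1),
        ignore_index=pad_id,  # mask padding (and, in practice, prompt tokens)
    )

# Toy usage with random tensors.
logits = torch.randn(2, 8, 100)
targets = torch.randint(1, 100, (2, 8))
print(sft_loss(logits, targets))
```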
OpenReview
ICLR
2,026
Lookup multivariate Kolmogorov-Arnold Networks
High-dimensional linear mappings, or linear layers, dominate both the parameter count and the computational cost of most modern deep-learning models. We introduce lookup multivariate Kolmogorov-Arnold Networks (lmKANs), which deliver a substantially better trade-off between capacity and inference cost. Our construction expresses a general high-dimensional mapping through trainable low-dimensional multivariate functions. These functions can carry dozens or hundreds of trainable parameters each, and yet it takes only a few multiplications to compute them because they are implemented as spline lookup tables. Empirically, lmKANs reduce inference FLOPs by up to 6.0× while matching the flexibility of MLPs in general high-dimensional function approximation. In another feedforward fully connected benchmark, on the tabular-like dataset of randomly displaced methane configurations, lmKANs enable more than 10× higher H100 throughput at equal accuracy. Within the framework of Convolutional Neural Networks, lmKAN-based CNNs cut inference FLOPs at matched accuracy by 1.6–2.1× and by 1.7× on the CIFAR-10 and ImageNet-1k datasets, respectively.
KAN, inference efficiency, CUDA kernels
other topics in machine learning (i.e., none of the above)
We propose a fully connected layer that decouples inference efficiency from the number of trainable parameters and empirically find it to be Pareto optimal across a wide range of macro-architectural backbones.
[ 6, 2, 6, 6 ]
Accept (Poster)
Sergey Pozdnyakov, Philippe Schwaller
~Sergey_Pozdnyakov1, ~Philippe_Schwaller1
20250919
https://openreview.net/forum?id=XRQVIeBnB0
XRQVIeBnB0
@inproceedings{ pozdnyakov2026lookup, title={Lookup multivariate Kolmogorov-Arnold Networks}, author={Sergey Pozdnyakov and Philippe Schwaller}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=XRQVIeBnB0} }
OpenReview/ICLR/figures/2026/accept_poster/XRQVIeBnB0/Figure6.png
6
Figure 6: A methane configuration
<paragraph_1>Having demonstrated that lmKANs are Pareto-optimal when approximating a general function, we proceed to benchmark their efficiency on real data. We chose the tabular-like dataset of randomly displaced methane configurations for the comparison, as it is particularly suitable for this purpose (see Appendix G.4). The dataset consists of multiple off-equilibrium methane configurations, as illustrated in Fig. 6. The target is given by the corresponding quantum-mechanical energy (Turney et al., 2012; Kohn & Sham, 1965). Hydrogen atoms are placed around the carbon atom randomly, varying from instance to instance, which leads to different target energies.</paragraph_1>
diagram
0.866739
OpenReview
ICLR
2,026
Automata Learning and Identification of the Support of Language Models
We study the learnability of languages in the *Next Symbol Prediction* (NSP) setting, where a learner receives only positive examples from a language together with, for every prefix, (i) whether the prefix itself is in the language and (ii) which next symbols can lead to an accepting string. This setting has been used in prior work to empirically analyze neural sequence models, and additionally, we observe that efficient algorithms for the NSP setting can be used to learn the (truncated) support of language models. We first show that the class of DFAs with at most $n$ states is identifiable from positive examples augmented with these NSP labels. Nevertheless, even with this richer supervision, we show that PAC-learning DFAs remains computationally hard, and exact identification using only membership queries cannot be achieved in polynomial time. We then present $\mathrm{L_{nsp}^{\star}}$, an extension of Angluin’s $\mathrm{L}^{\star}$ algorithm, and show that DFAs can be PAC-learned efficiently using a language-model–based teacher that answers membership queries and generates valid strings conditioned on prefix prompts. Finally, we conduct a comprehensive experimental evaluation on 11 regular languages of varying complexity. Using $\mathrm{L}^{\star}_{\text{nsp}}$, we extract DFAs from Transformer-based language models trained on regular languages to evaluate the algorithm’s effectiveness and identify erroneous examples.
automata learning, regular languages, learning theory, DFA extraction, language models
learning theory
[ 8, 6, 6, 8 ]
Accept (Poster)
Satwik Bhattamishra, Michael Hahn, Varun Kanade
~Satwik_Bhattamishra1, ~Michael_Hahn1, ~Varun_Kanade1
20250919
https://openreview.net/forum?id=L8SMNWsxfK
L8SMNWsxfK
@inproceedings{ bhattamishra2026automata, title={Automata Learning and Identification of the Support of Language Models}, author={Satwik Bhattamishra and Michael Hahn and Varun Kanade}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=L8SMNWsxfK} }
OpenReview/ICLR/figures/2026/accept_poster/L8SMNWsxfK/Figure7.png
7
Figure 7: DFA with 28 states extracted by L⋆_nsp from a Transformer trained on Tomita-5. See App. H.2 for more details.
<paragraph_1>Identifying erroneous examples. When the learned DFA Â is not equivalent to the target DFA A⋆, we construct the product DFA B which recognizes the strings in the symmetric difference of the two languages L(B) = L(Â) △ L(A⋆). We use a BFS-like approach to identify several erroneous examples for the language model. Table 2 illustrates some erroneous examples for the Bounded Dyck, Parity, and Tomita-5 languages. Figs. 6 and 7 depict the extracted automata for Parity and Tomita-5; the ones for DYCK-(2, 2) and DYCK-(3, 3) are too large to be visually informative. Note that these models were not intentionally trained to fail, and all the examples generated by the language models were in their respective target languages. The DFAs extracted by L⋆_nsp were based on a few disagreements in the NSP labels of the generated strings. Training the language models for longer avoids such errors for synthetic languages of this scale. Note that the Transformer models used for Tomita-5 and Dyck languages in Figure 2 (well-trained) and Table 2 (imperfect) are different. See App. H.2 for further details.</paragraph_1> <paragraph_2>Results. We observed erroneous strings for languages like Parity, Tomita-5, DYCK-(2, 2), and DYCK-(3, 3). Examples of some erroneous strings identified by the hypothesis DFA are provided in Table 2. Figures 6 and 7 show the DFAs extracted for Parity and Tomita-5, respectively. The DFAs for DYCK-(2, 2) and DYCK-(3, 3) are too large to be visually interpretable. Constructing the product DFA is efficient, and identifying several erroneous examples takes only a few seconds. There is no natural distribution over the symmetric difference language and, further, it can even be finite in some cases, which makes it difficult to systematically compute the accuracy of predicting erroneous examples using the extracted DFA. The closest signal we have is the NSP accuracy for the extracted DFAs, which is near perfect.</paragraph_2>
diagram
0.92614
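The erroneous-example search described in the figure context above reduces to a product-DFA construction followed by breadth-first search. Below is a small sketch of that idea; the two toy DFAs are hypothetical stand-ins (parity of 1s vs. an incorrect hypothesis), not the extracted Tomita-5 automaton.

```python
# Sketch: build the product DFA of a learned DFA and a target DFA, then BFS for the
# shortest string on which they disagree (a member of the symmetric difference).
from collections import deque

# DFA = (start state, accepting states, transitions[state][symbol] -> state)
target = ("q0", {"q0"}, {"q0": {"0": "q0", "1": "q1"}, "q1": {"0": "q1", "1": "q0"}})   # even number of 1s
learned = ("p0", {"p0"}, {"p0": {"0": "p0", "1": "p1"}, "p1": {"0": "p0", "1": "p0"}})  # an incorrect hypothesis

def find_disagreement(a, b, alphabet=("0", "1")):
    (sa, fa, ta), (sb, fb, tb) = a, b
    queue = deque([((sa, sb), "")])
    seen = {(sa, sb)}
    while queue:
        (ua, ub), s = queue.popleft()
        # A product state witnesses a disagreement iff exactly one component accepts.
        if (ua in fa) != (ub in fb):
            return s
        for sym in alphabet:
            nxt = (ta[ua][sym], tb[ub][sym])
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, s + sym))
    return None  # the two languages are equal

print(find_disagreement(learned, target))  # shortest erroneous string, here "10"
```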
OpenReview
ICLR
2,026
Nef-Net v2: Adapting Electrocardio Panorama in the wild
Conventional multi-lead electrocardiogram (ECG) systems capture cardiac signals from a fixed set of anatomical viewpoints defined by lead placement. However, certain cardiac conditions (e.g., Brugada syndrome) require additional, non-standard viewpoints to reveal diagnostically critical patterns that may be absent in standard leads. To systematically overcome this limitation, Nef-Net was recently introduced to reconstruct a continuous electrocardiac field, enabling virtual observation of ECG signals from arbitrary views (termed Electrocardio Panorama). Despite its promise, Nef-Net operates under idealized assumptions and faces in-the-wild challenges, such as long-duration ECG modeling, robustness to device-specific signal artifacts, and suboptimal lead placement calibration. This paper presents NEF-NET V2, an enhanced framework for realistic panoramic ECG synthesis that supports arbitrary-length signal synthesis from any desired view, generalizes across ECG devices, and compensates for operator-induced deviations in electrode placement. These capabilities are enabled by a newly designed model architecture that performs direct view transformation, incorporating a workflow comprising offline pretraining and device calibration tuning steps, as well as an on-the-fly calibration step for patient-specific adaptation. To rigorously evaluate panoramic ECG synthesis, we construct a new Electrocardio Panorama benchmark, called Panobench, comprising 4470 recordings with 48 views per subject, capturing the full spatial variability of cardiac electrical activity. Experimental results show that NEF-NET V2 delivers substantial improvements over Nef-Net, yielding an increase of around 6 dB in PSNR in a real-world setting. Our data and code are publicly available at https://github.com/HKUSTGZ-ML4Health-Lab/NEFNET-v2.
ECG representation, Cardiac Diagnosis
applications to physical sciences (physics, chemistry, biology, etc.)
An enhanced variant of Nef-Net to generate panoramic ECG views, including previously unseen views.
[ 6, 2, 6 ]
Accept (Poster)
Zehui Zhan, Yaojun Hu, Jiajing Zhang, Wanchen Lian, Wanqing Wu, Jintai Chen
~Zehui_Zhan1, ~Yaojun_Hu2, ~Jiajing_Zhang1, ~Wanchen_Lian1, ~Wanqing_Wu1, ~Jintai_Chen1
20250917
https://openreview.net/forum?id=JzZhhhxniR
JzZhhhxniR
@inproceedings{ zhan2026nefnet, title={Nef-Net v2: Adapting Electrocardio Panorama in the wild}, author={Zehui Zhan and Yaojun Hu and Jiajing Zhang and Wanchen Lian and Wanqing Wu and Jintai Chen}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=JzZhhhxniR} }
OpenReview/ICLR/figures/2026/accept_poster/JzZhhhxniR/Figure2.png
2
Figure 2: NEF-NET V2 architecture for Electrocardio Panorama synthesis (illustrated for a 3-input to 2-query view synthesis task as an example). The NEF-NET V2 first employs a View Encoder to extract features from the Recorded ECG that are relevant to the Queried ECG. These extracted features are then fused using a Geometric View Transformer to synthesize the query view.
<paragraph_1>The key idea of NEF-NET V2 is to formulate ECG view synthesis as a direct view-to-view transformation problem. This is a pairwise deterministic mapping: the model converts the observed lead signals into the target lead through a single-step transformation, without modeling any shared geometric prior (e.g., the electrocardio field representation) as Nef-Net (Chen et al., 2021) does. NEF-NET V2 incorporates three core components: Angle Embedding, View Encoder, and Geometric View Transformer (GeoVT), as illustrated in Fig. 2. Formally, let X = {x1, · · · , xl} with each xi ∈ R^{1×t} denote l ECG signals recorded from distinct viewing angles.</paragraph_1>
diagram
0.992609
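The figure context above describes a direct view-to-view mapping conditioned on viewing angles. The sketch below illustrates that idea only in the broadest strokes, assuming a per-lead signal encoder, a sine/cosine angle embedding, and cross-attention from query-angle tokens to recorded-lead features; none of the layer sizes or module choices come from the NEF-NET V2 implementation.

```python
# Assumption-laden sketch of a view-to-view mapping: encode each recorded lead with its
# angle, then let each queried angle attend over the recorded leads to synthesize a signal.
import torch
import torch.nn as nn

class ViewToViewSketch(nn.Module):
    def __init__(self, sig_len=256, dim=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(sig_len, dim), nn.GELU(), nn.Linear(dim, dim))
        self.angle_mlp = nn.Sequential(nn.Linear(2, dim), nn.GELU(), nn.Linear(dim, dim))
        self.cross_attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.decoder = nn.Linear(dim, sig_len)

    def embed_angle(self, theta):  # theta: (batch, n_views), in radians
        return self.angle_mlp(torch.stack([theta.sin(), theta.cos()], dim=-1))

    def forward(self, recorded, rec_angles, query_angles):
        # recorded: (B, n_in, sig_len); rec_angles: (B, n_in); query_angles: (B, n_out)
        keys = self.encoder(recorded) + self.embed_angle(rec_angles)
        queries = self.embed_angle(query_angles)
        fused, _ = self.cross_attn(queries, keys, keys)  # queried views attend to recorded views
        return self.decoder(fused)                       # (B, n_out, sig_len)

model = ViewToViewSketch()
out = model(torch.randn(2, 3, 256), torch.rand(2, 3), torch.rand(2, 2))
print(out.shape)  # torch.Size([2, 2, 256])
```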
OpenReview
ICLR
2,026
Unified Vision–Language Modeling via Concept Space Alignment
We introduce vSONAR, a vision–language embedding space extended from the text-only embedding space SONAR, which supports 200 text languages and 37 speech languages. To construct vSONAR, we propose a post-hoc alignment pipeline that maps the representations of an existing vision encoder into the SONAR space. We thoroughly evaluate vSONAR and show that its embeddings achieve competitive performance on text-to-video retrieval. Equipped with the SONAR text decoder, vSONAR further surpasses state-of-the-art vision–language models on video captioning tasks, including DREAM-1K (BLEU 24.3 vs. 19.6) and VATEX (BLEU 45.0 vs. 41.5). Leveraging vSONAR, we first demonstrate that the Large Concept Model (LCM), operating in SONAR and trained with English text only, can perform both single- and multi-visual concept understanding in a zero-shot manner. Finally, we introduce vLCM, which extends the LCM with vision–language instruction tuning. vLCM encodes vision and language inputs into a unified sequence of latent embeddings via vSONAR and SONAR, and it is trained with the same latent diffusion objective for next-embedding prediction as in LCM's text-only pre-training. Experiments on a large-scale multilingual and -modal instruction–tuning data mixture highlight the potential of vLCM: vLCM matches state-of-the-art vision-language models on tasks covering image/video captioning and question answering, while significantly outperforming them across 61 rich- to low-resource languages out of all 62 tested languages.
multimodal embedding space, multilingual embedding space
applications to computer vision, audio, language, and other modalities
[ 6, 6, 6, 4 ]
Accept (Poster)
Yifu QIU, Paul-Ambroise Duquenne, Holger Schwenk
~Yifu_QIU1, ~Paul-Ambroise_Duquenne1, ~Holger_Schwenk1
20250918
https://openreview.net/forum?id=4LiX5ddGcU
4LiX5ddGcU
@inproceedings{ qiu2026unified, title={Unified Vision{\textendash}Language Modeling via Concept Space Alignment}, author={Yifu QIU and Paul-Ambroise Duquenne and Holger Schwenk}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=4LiX5ddGcU} }
OpenReview/ICLR/figures/2026/accept_poster/4LiX5ddGcU/Figure1.png
1
Figure 1: Left: Illustration of V-SONAR. Right: fine-tuning V-LCM with vision-language instruction tuning.
<paragraph_1>Architecture. The architecture of V-SONAR is illustrated in the left panel of Figure 1. Given the input image or video, PERCEPTION ENCODER (PE) will first encode each frame separately. Then, we stack a lightweight projector on top of PE to adapt the encoder’s representations into the SONAR space. The projector first injects positional embeddings into the embeddings of all frames, thus encoding temporal order information, followed by a single temporal attention layer that enables frame-level interactions. Finally, an attention layer aggregates the frame embeddings into a single video-level representation, which serves as the final embedding for downstream tasks. See Appendix D for implementation details.</paragraph_1>
diagram
0.931501
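The projector described in the figure context above has three small pieces: positional embeddings over frames, one temporal attention layer, and an attention-based pooling into a single video embedding. The sketch below mirrors that structure; the dimensions, head counts, and learned pooling query are assumptions rather than the released vSONAR code.

```python
# Minimal projector sketch: positions + one temporal self-attention layer + attention pooling.
import torch
import torch.nn as nn

class ProjectorSketch(nn.Module):
    def __init__(self, dim=1024, sonar_dim=1024, max_frames=64):
        super().__init__()
        self.pos = nn.Parameter(torch.zeros(max_frames, dim))
        self.temporal_attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)
        self.pool_query = nn.Parameter(torch.zeros(1, 1, dim))
        self.pool_attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)
        self.to_sonar = nn.Linear(dim, sonar_dim)

    def forward(self, frame_feats):                 # (B, n_frames, dim) from the vision encoder
        x = frame_feats + self.pos[: frame_feats.size(1)]
        x, _ = self.temporal_attn(x, x, x)          # frame-level interactions with temporal order
        q = self.pool_query.expand(x.size(0), -1, -1)
        pooled, _ = self.pool_attn(q, x, x)         # aggregate all frames into one embedding
        return self.to_sonar(pooled.squeeze(1))     # (B, sonar_dim), aligned to the SONAR space

emb = ProjectorSketch()(torch.randn(2, 16, 1024))
print(emb.shape)  # torch.Size([2, 1024])
```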
OpenReview
ICLR
2,026
Not All Clients Are Equal: Collaborative Model Personalization on Heterogeneous Multi-Modal Clients
As AI becomes more personal, e.g., Agentic AI, there is an increasing need for personalizing models for various use cases. Personalized federated learning (PFL) enables each client to collaboratively leverage other clients' knowledge for better adaptation to the task of interest, without privacy risks. Despite its potential, existing PFL methods remain confined to rather simplified scenarios where data and models are the same across clients. To move towards realistic scenarios, we propose FedMosaic, a method that jointly addresses data and model heterogeneity with a task-relevance-aware model aggregation strategy to reduce parameter interference, and a dimension-invariant module that enables knowledge sharing across heterogeneous architectures without huge computational cost. To mimic the real-world task diversity, we propose a multi-modal PFL benchmark spanning 40 distinct tasks with distribution shifts over time. The empirical study shows that FedMosaic outperforms the state-of-the-art PFL methods, excelling in both personalization and generalization capabilities under challenging, realistic scenarios.
Collaborative Learning, Federated Learning, Continual Learning, Multi-modal Learning, Personalization, Distributed Learning
applications to computer vision, audio, language, and other modalities
[ 10, 4, 6, 8 ]
Accept (Poster)
Minhyuk Seo, Taeheon Kim, Hankook Lee, Jonghyun Choi, Tinne Tuytelaars
~Minhyuk_Seo1, ~Taeheon_Kim3, ~Hankook_Lee1, ~Jonghyun_Choi1, ~Tinne_Tuytelaars1
20250918
https://openreview.net/forum?id=0g5Dk4Qfh0
0g5Dk4Qfh0
@inproceedings{ seo2026not, title={Not All Clients Are Equal: Collaborative Model Personalization on Heterogeneous Multi-Modal Clients}, author={Minhyuk Seo and Taeheon Kim and Hankook Lee and Jonghyun Choi and Tinne Tuytelaars}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=0g5Dk4Qfh0} }
OpenReview/ICLR/figures/2026/accept_poster/0g5Dk4Qfh0/Figure14.png
14
Figure 14: Illustration of blockwise PQ-LoRA. When a model has N_B PQ-LoRA modules, each block employs PQ-LoRA at its last layer, while the remaining layers adopt conventional LoRA. Each block contains the same number of layers.
<paragraph_1>To identify layer-wise correspondences between depth-heterogeneous models, we analyze representation alignment using CKA (Kornblith et al., 2019). Specifically, we measure similarity across layers within the Llama-3 family (1B, 3B, 8B) and the Qwen-2.5 family (0.5B, 1.5B, 3B), as illustrated in Fig. 12. As shown in the figure, layers with the same relative depth exhibit strong alignment, indicating approximately linear alignment within both the Llama-3 and Qwen-2.5 families. Moreover, we observe near-linear alignment even across families, i.e., between Llama-3 and Qwen-2.5, despite weaker linearity than intra-family alignment. Furthermore, to demonstrate that this layer-wise correlation trend generally holds across different models, not just between Llama and Qwen, we have additionally included the layer-wise correlation analysis between InternLM (Cai et al., 2024) and Llama in Fig. 13, which shows the same trend as our previous findings. This empirical analysis supports our block-wise aggregation of PQ-LoRA. We provide an illustration of the block-wise PQ-LoRA in Fig. 14.</paragraph_1>
diagram
0.962517
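The layer-correspondence analysis mentioned in the figure context above uses CKA between per-layer activations. Below is a small linear-CKA sketch (Kornblith et al., 2019) on random matrices standing in for hidden states; the shapes are illustrative.

```python
# Linear CKA between two activation matrices computed on the same inputs.
import numpy as np

def linear_cka(x: np.ndarray, y: np.ndarray) -> float:
    """x: (n_samples, d1), y: (n_samples, d2); returns a similarity in [0, 1]."""
    x = x - x.mean(axis=0)                      # center features
    y = y - y.mean(axis=0)
    hsic = np.linalg.norm(y.T @ x, "fro") ** 2  # ||Y^T X||_F^2
    norm_x = np.linalg.norm(x.T @ x, "fro")
    norm_y = np.linalg.norm(y.T @ y, "fro")
    return float(hsic / (norm_x * norm_y))

rng = np.random.default_rng(0)
layer_a = rng.normal(size=(512, 256))            # e.g. hidden states of a layer in model A
layer_b = layer_a @ rng.normal(size=(256, 128))  # a linearly related layer should score high
print(linear_cka(layer_a, layer_b), linear_cka(layer_a, rng.normal(size=(512, 128))))
```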
OpenReview
ICLR
2,026
FreeKV: Boosting KV Cache Retrieval for Efficient LLM Inference
Large language models (LLMs) have been widely deployed with rapidly expanding context windows to support increasingly demanding applications. However, long contexts pose significant deployment challenges, primarily due to the KV cache whose size grows proportionally with context length. While KV cache compression methods are proposed to address this issue, KV dropping methods incur considerable accuracy loss, and KV retrieval methods suffer from significant efficiency bottlenecks. We propose FreeKV, an algorithm-system co-optimization framework to enhance KV retrieval efficiency while preserving accuracy. On the algorithm side, FreeKV introduces speculative retrieval to shift the KV selection and recall processes out of the critical path, combined with fine-grained correction to ensure accuracy. On the system side, FreeKV employs hybrid KV layouts across CPU and GPU memory to eliminate fragmented data transfers, and leverages double-buffered streamed recall to further improve efficiency, enabling effective overlap with computation, full latency hiding, and practical speedups from speculative recall. Experiments demonstrate that FreeKV achieves near-lossless accuracy across various scenarios and models, delivering up to 13$\times$ speedup compared to SOTA KV retrieval methods.
LLM inference, KV cache
infrastructure, software libraries, hardware, systems, etc.
We propose FreeKV, an algorithm-system co-optimization framework for LLM inference to enhance KV retrieval efficiency while preserving accuracy.
[ 8, 2, 6, 6 ]
Accept (Poster)
Guangda Liu, Chengwei Li, Zhenyu Ning, Jing Lin, Yiwu Yao, Danning Ke, Minyi Guo, Jieru Zhao
~Guangda_Liu1, ~Chengwei_Li1, ~Zhenyu_Ning1, ~Jing_Lin6, ~Yiwu_Yao1, ~Danning_Ke1, ~Minyi_Guo1, ~Jieru_Zhao1
20250918
https://openreview.net/forum?id=wXAn7orB1H
wXAn7orB1H
@inproceedings{ liu2026freekv, title={Free{KV}: Boosting {KV} Cache Retrieval for Efficient {LLM} Inference}, author={Guangda Liu and Chengwei Li and Zhenyu Ning and Jing Lin and Yiwu Yao and Danning Ke and Minyi Guo and Jieru Zhao}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=wXAn7orB1H} }
OpenReview/ICLR/figures/2026/accept_poster/wXAn7orB1H/Figure5.png
5
Figure 5: System overview of FreeKV.
<paragraph_1>The system overview of FreeKV is illustrated in Fig. 5. In the data plane, FreeKV retains the query vectors from the previous step, page summaries, and the cache for selected KV pages in GPU memory. In CPU memory, FreeKV maintains a complete KV cache pool for offloading KV pages. In the control plane, a controller on the CPU manages the scheduling and synchronization of operations such as correction, attention, selection, and recall launched on different CPU threads and GPU streams, following the timeline described in Sec. 3.</paragraph_1> <paragraph_2>End-to-end latency. As shown in Fig. 7, FreeKV demonstrates significant efficiency gains over SOTA KV retrieval methods, achieving up to 13.7× and 8.4× speedups compared to ArkVale and ShadowKV, respectively. Moreover, FreeKV attains efficiency comparable to dropping methods like RaaS and RazorAttention, which do not involve offloading or recall. The speedups over ArkVale are detailed in Fig. 7. For InfiniGen, FreeKV achieves 3.2× and 5.4× speedups under long-input and long-generation scenarios on Qwen-2.5-7B, and 5.1× and 8.5× on Llama-3.1-8B. The improvements over ShadowKV are comparable to those over InfiniGen, reaching up to 8.4× on Llama-3.1-8B in the long-generation scenario. The improvements become more pronounced for large batch sizes and in long-generation scenarios, where more recall operations are required. In addition, the improvements are amplified for Llama-3.1-8B, which has more KV heads and a larger KV cache compared to Qwen-2.5-7B. Moreover, we present inference latency across different input and output lengths in Appendix C.1, showing that FreeKV consistently achieves substantial speedups under various settings. We also conduct ablation studies on the impact of our efficiency optimizations in Appendix C.2, which demonstrate their effectiveness.</paragraph_2>
diagram
0.981499
OpenReview
ICLR
2,026
Fine-Grained Activation Steering: Steering Less, Achieving More
Activation steering has emerged as a cost-effective paradigm for modifying large language model (LLM) behaviors. Existing methods typically intervene at the block level, steering the bundled activations of selected attention heads, feedforward networks, or residual streams. However, we reveal that block-level activations are inherently heterogeneous, entangling beneficial, irrelevant, and harmful features, thereby rendering block-level steering coarse, inefficient, and intrusive. To investigate the root cause, we decompose block activations into fine-grained atomic unit (AU)–level activations, where each AU-level activation corresponds to a single dimension of the block activation, and each AU denotes a slice of the block weight matrix. Steering an AU-level activation is thus equivalent to steering its associated AU. Our theoretical and empirical analysis show that heterogeneity arises because different AUs or dimensions control distinct token distributions in LLM outputs. Hence, block-level steering inevitably moves helpful and harmful token directions together, which reduces efficiency. Restricting intervention to beneficial AUs yields more precise and effective steering. Building on this insight, we propose AUSteer, a simple and efficient method that operates at a finer granularity of the AU level. AUSteer first identifies discriminative AUs globally by computing activation momenta on contrastive samples. It then assigns adaptive steering strengths tailored to diverse inputs and selected AU activations. Comprehensive experiments on multiple LLMs and tasks show that AUSteer consistently surpasses advanced baselines while steering considerably fewer activations, demonstrating that steering less achieves more.
Activation Steering, Large Language Models, Fine-Grained Intervention
foundation or frontier models, including LLMs
Breaking LLM blocks to fine-grained atomic units for intervention: steering less achieves more
[ 4, 4, 6 ]
Accept (Poster)
Zijian Feng, Tianjiao Li, Zixiao Zhu, Hanzhang Zhou, Junlang Qian, Li Zhang, Chua Jia Jim Deryl, Mak Lee Onn, Gee Wah Ng, Kezhi Mao
~Zijian_Feng2, ~Tianjiao_Li2, ~Zixiao_Zhu2, ~Hanzhang_Zhou1, ~Junlang_Qian1, ~Li_Zhang70, ~Chua_Jia_Jim_Deryl2, ~Mak_Lee_Onn1, ~Gee_Wah_Ng1, ~Kezhi_Mao1
20250918
https://openreview.net/forum?id=guSVafqhrB
guSVafqhrB
@inproceedings{ feng2026finegrained, title={Fine-Grained Activation Steering: Steering Less, Achieving More}, author={Zijian Feng and Tianjiao Li and Zixiao Zhu and Hanzhang Zhou and Junlang Qian and Li Zhang and Chua Jia Jim Deryl and Mak Lee Onn and Gee Wah Ng and Kezhi Mao}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=guSVafqhrB} }
OpenReview/ICLR/figures/2026/accept_poster/guSVafqhrB/Figure1.png
1
Figure 1: Comparison of block-level steering (prior work) and AU-level steering (Ours).
<paragraph_1>However, a common practice in existing methods is block-level steering, where a “block” denotes the multi-head attention (MHA), the feed-forward network (FFN), or the layer’s residual stream. As shown in Figure 1 (a), the intervention is vector-level: every dimension of the selected block’s activation is bundled and steered simultaneously. One of the main limitations of block-level intervention is that it ignores heterogeneity within block activations. These activations often span hundreds or thousands of dimensions, each indicating a different feature. Some features are beneficial for the task, while others are irrelevant or harmful. As a result, block-level steering is (1) too coarse: a block can be decomposed into finer functional units, and treating it as a single entity prevents precise targeting; (2) inefficient: steering the entire block amplifies both useful and harmful signals, which reduces efficiency and risks performance degradation; and (3) overly intrusive: it modifies many dimensions unnecessarily, increasing the intervention footprint.</paragraph_1> <paragraph_2>In greater depth, we empirically and theoretically justify the heterogeneity of block-level activations. We first decompose block-level activations into finer-grained atomic unit (AU) activations, where each AU-level activation corresponds to a single dimension of the block activation, and each AU denotes a slice of the block weight matrix. Steering an AU-level activation is thus equivalent to steering its associated AU. As shown in Figure 1 (b), each AU-level intervention targets a single dimension. Both the intervention value and the affected activation are scalars. Empirically, we find that AU-level steering effects vary widely: some dimensions improve performance, some degrade it, and others are neutral, confirming heterogeneity. In many cases, steering a single dimension or a small subset outperforms steering the entire block.</paragraph_2> <paragraph_3>To further validate this, we first examine the convergence behavior of AU steering: different AUs govern different output token distributions, and as steering strength increases, the LLM’s output tends to converge to the AU’s token distribution. For the selected 7th attention head at the 27th layer, we scale the AU coefficient from 10 to an extremely large value (100,000) and compute the normalized KL divergence between the output at each strength and the output at 100,000. In Figure 3, columns 1 and 2 show these divergences for the 44th AU and the 84th AU. The divergence decreases with strength, indicating convergence. Column 3 shows the pairwise KL divergence between the 44th AU and the 84th AU across strengths. The divergence increases with strength, indicating that the two AUs tend to drive the model toward different output distributions.</paragraph_3>
diagram
0.998495
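The contrast drawn in the figure context above is between shifting an entire block activation and shifting only a few of its dimensions. The toy snippet below makes that difference concrete; the dimension indices (44 and 84) and strengths are arbitrary placeholders echoing the example in the text, not values from the paper's method.

```python
# Toy illustration: block-level steering perturbs every dimension of a block activation,
# AU-level steering perturbs only the selected dimensions.
import torch

hidden = torch.randn(1, 4096)    # one block activation (e.g. an attention-head output)
steer_vec = torch.randn(4096)    # a contrastive steering direction

# Block-level: every dimension is moved at once.
block_steered = hidden + 8.0 * steer_vec

# AU-level: only a few dimensions are moved, each with its own strength.
au_steered = hidden.clone()
for dim, strength in [(44, 10.0), (84, 6.0)]:
    au_steered[:, dim] += strength * steer_vec[dim]

print((block_steered - hidden).abs().gt(0).sum().item())  # ~4096 dimensions perturbed
print((au_steered - hidden).abs().gt(0).sum().item())     # 2 dimensions perturbed
```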
OpenReview
ICLR
2,026
Counterfactual Structural Causal Bandits
Causal reasoning lies at the heart of robust and generalizable decision-making, and the *Pearl Causal Hierarchy* provides a formal language for distinguishing between observational ($\mathcal{L}_1$), interventional ($\mathcal{L}_2$), and counterfactual ($\mathcal{L}_3$) levels of reasoning. Existing bandit algorithms that leverage causal knowledge have primarily operated within the $\mathcal{L}_1$ and $\mathcal{L}_2$ regimes, treating each realizable and physical intervention as a distinct arm. That is, they have largely excluded counterfactual quantities due to their perceived inaccessibility. In this paper, we introduce a *counterfactual structural causal bandit* (ctf-SCB) framework which expands the agent's feasible action space beyond conventional observational and interventional arms to include a class of realizable counterfactual actions. Our framework offers a principled extension of structural causal bandits and paves the way for integrating counterfactual reasoning into sequential decision-making.
causal inference, counterfactual inference, structural causal bandits, causal decision making
causal reasoning
We introduce a counterfactual structural causal bandit (ctf-SCB) framework which expands the agent's feasible action space beyond conventional observational and interventional arms to include a class of realizable counterfactual actions.
[ 4, 4, 6, 8 ]
Accept (Poster)
Min Woo Park, Sanghack Lee
~Min_Woo_Park1, ~Sanghack_Lee1
20250920
https://openreview.net/forum?id=gjvTNxVd2f
gjvTNxVd2f
@inproceedings{ park2026counterfactual, title={Counterfactual Structural Causal Bandits}, author={Min Woo Park and Sanghack Lee}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=gjvTNxVd2f} }
OpenReview/ICLR/figures/2026/accept_poster/gjvTNxVd2f/Figure10.png
10
Figure 10: MUCT and IB are shown in red and blue, respectively; (b, c) non-POMISs; (d, e) POMISs.
<paragraph_1>For example, consider the causal diagram in Fig. 10a. Here, G = G[An(Y)_G] holds. An L1 action do(∅) is not a POMIS. To see this, we construct the MUCT, initializing T = {Y}, as follows: since Y has an unobserved confounder with C, we update T = cc(Y)_G = {C, Y}, and thereafter add all the descendants of C, obtaining T = {C, D, Y}. Since there are no more unobserved confounders between T and An(Y)_G \ T, the MUCT has been found and is given by MUCT(G, Y) = {C, D, Y}, along with IB(G, Y) = {A, B} (Fig. 10b). According to the graphical characterization, we can conclude that do(∅) is not a POMIS with respect to ⟨G, Y⟩. Similarly, {B, C} is also not a POMIS, as IB(G_{B,C}, Y) = {B, D}, as depicted in Fig. 10c. In contrast, the regimes corresponding to Figs. 10d and 10e are POMISs, since they satisfy IB(G_X, Y) = X.</paragraph_1> <paragraph_2>Total trials per task: Task 1 (Fig. 5a): 10k; Task 2 (Fig. 3b): 10k; Task 3 (Fig. 9): 100k.</paragraph_2>
diagram
0.990313
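The walkthrough in the figure context above alternates two closure steps (pull in c-component members via bidirected edges, then add descendants) and finally reads off the interventional border as the parents of the resulting set. Below is a small sketch of that procedure on a toy graph chosen so that the output matches the walkthrough (T = {C, D, Y}, IB = {A, B}); the actual Fig. 10a may contain different edges.

```python
# Sketch of the MUCT/IB construction: grow T from {Y} by c-component and descendant
# closure, then take IB = Pa(T) \ T. The graph below is hypothetical.
directed = {("A", "C"), ("B", "D"), ("C", "D"), ("C", "Y"), ("D", "Y")}
bidirected = {frozenset({"C", "Y"})}  # unobserved confounder between C and Y

def descendants(seed, edges):
    out, stack = set(seed), list(seed)
    while stack:
        u = stack.pop()
        for a, b in edges:
            if a == u and b not in out:
                out.add(b)
                stack.append(b)
    return out

def muct_and_ib(y="Y"):
    t = {y}
    while True:
        cc = set(t)
        for pair in bidirected:        # c-component closure via bidirected edges
            if pair & cc:
                cc |= pair
        new_t = descendants(cc, directed)  # descendant closure
        if new_t == t:
            break
        t = new_t
    ib = {a for a, b in directed if b in t and a not in t}  # parents of T outside T
    return t, ib

print(muct_and_ib())  # ({'C', 'D', 'Y'}, {'A', 'B'})
```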
OpenReview
ICLR
2,026
SpaCE-Eval: A Benchmark for Real-World Multi-Modal Reasoning
Multi-modal Large Language Models (MLLMs) represent a significant advancement in artificial intelligence. Among the growing capabilities exhibited by MLLMs, abilities to understand and reason in real-world environments stand out as particularly vital as a fundamental prerequisite for a wide array of real-world applications. The current methods for evaluating MLLMs often fall short in their ability to comprehensively assess these crucial capabilities. However, being able to reason on complex environment-scale spaces, for example, room spaces, building spaces, and even urban spaces, and to predict the future and plan actions, is essential for humans and various autonomous agents to survive in the real physical world. To address these gaps, we propose a visual-question-answering benchmark, **SpaCE-Eval** (**Spa**tial Reasoning, **C**ommonsense Knowledge and **E**nvironment Interaction) in the real world, designed to evaluate some of MLLM’s most important reasoning abilities in real-world environments. As the name suggests, it challenges the models to reason on complex spatial scenarios, invoke commonsense knowledge of the physical world, and interact with the environment. The dataset consists of all new diagrams purposefully produced by humans, where diagram-question pairs are meticulously refined and selected through a rigorous pipeline. Additionally, with the benchmark, we evaluate a selection of leading MLLMs, both proprietary and open source. The results suggest that a significant enhancement of MLLMs in reasoning in the real physical world is necessary to realise more advanced general artificial intelligence.
Benchmark, Multi-modal Large Language Model, Visual Reasoning, Real World Environments, Evaluation
datasets and benchmarks
[ 6, 4, 6, 6 ]
Accept (Poster)
Xuyou Yang, Yucheng Zhao, Wenxuan Zhang, Immanuel Koh
~Xuyou_Yang1, ~Yucheng_Zhao3, ~Wenxuan_Zhang1, ~Immanuel_Koh1
20250919
https://openreview.net/forum?id=VAEkLS9VBr
VAEkLS9VBr
@inproceedings{ yang2026spaceeval, title={Spa{CE}-Eval: A Benchmark for Real-World Multi-Modal Reasoning}, author={Xuyou Yang and Yucheng Zhao and Wenxuan Zhang and Immanuel Koh}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=VAEkLS9VBr} }
OpenReview/ICLR/figures/2026/accept_poster/VAEkLS9VBr/Figure9.png
9
Figure 9: Example of Spatial Reasoning/Form Transformation.
diagram
0.873445
OpenReview
ICLR
2,026
GaussianFusion: Unified 3D Gaussian Representation for Multi-Modal Fusion Perception
The bird’s-eye view (BEV) representation enables multi-sensor features to be fused within a unified space, serving as the primary approach for achieving comprehensive multi-task perception. However, the discrete grid representation of BEV leads to significant detail loss and limits feature alignment and cross-modal information interaction in multimodal fusion perception. In this work, we break from the conventional BEV paradigm and propose a new universal framework for multi-task multi-modal fusion based on 3D Gaussian representation. This approach naturally unifies multi-modal features within a shared and continuous 3D Gaussian space, effectively preserving edge and fine texture details. To achieve this, we design a novel forward-projection-based multi-modal Gaussian initialization module and a shared cross-modal Gaussian encoder that iteratively updates Gaussian properties based on an attention mechanism. GaussianFusion is inherently a task-agnostic model, with its unified Gaussian representation naturally supporting various 3D perception tasks. Extensive experiments demonstrate the generality and robustness of GaussianFusion. On the nuScenes dataset, it outperforms the 3D object detection baseline BEVFusion by 2.6 NDS. Its variant surpasses GaussFormer on 3D semantic occupancy with 1.55 mIoU improvement while using only 30% of the Gaussians and achieving a 450% speedup.
Gaussian Representation, BEV Representation, Detection, Occupancy
applications to robotics, autonomy, planning
[ 2, 4, 6, 6 ]
Accept (Poster)
Xiao Zhao, Chang Liu, Mingxu Zhu, Zheyuan Zhang, Linna Song, Qingliang Luo, Chufan Guo, Kuifeng Su
~Xiao_Zhao4, ~Chang_Liu67, ~Mingxu_Zhu1, ~Zheyuan_Zhang6, ~Linna_Song1, ~Qingliang_Luo1, ~Chufan_Guo1, ~Kuifeng_Su1
20250916
https://openreview.net/forum?id=7jXxQ9bGoU
7jXxQ9bGoU
@inproceedings{ zhao2026gaussianfusion, title={GaussianFusion: Unified 3D Gaussian Representation for Multi-Modal Fusion Perception}, author={Xiao Zhao and Chang Liu and Mingxu Zhu and Zheyuan Zhang and Linna Song and Qingliang Luo and Chufan Guo and Kuifeng Su}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=7jXxQ9bGoU} }
OpenReview/ICLR/figures/2026/accept_poster/7jXxQ9bGoU/Figure1.png
1
Figure 1: Comparison of the discrete BEV representation fusion paradigm Liu et al. (2023b) and our proposed continuous Gaussian representation fusion paradigm. B, G, C, L, and F denote BEV, Gaussian, Camera, Lidar, and Fusion.
<paragraph_1>BEV directly discretizes and quantizes data, leading to inevitable information loss. During feature extraction, perception data are projected onto a fixed-resolution BEV grid, which compresses spatial information. This issue becomes particularly severe when the BEV resolution is low, as it directly impacts model performance by failing to adequately preserve fine-grained scene structures. Increasing the BEV resolution, meanwhile, brings unacceptable computational overhead, as shown in Table 1. Additionally, BEV fusion strategies often rely on simple feature concatenation or weighted summation, which are insufficient for effective cross-modal feature interaction and alignment, ultimately leading to suboptimal fusion performance, as illustrated in Fig. 1(a).</paragraph_1> <paragraph_2>To address these challenges, we introduce a fusion approach based on 3D Gaussian Splatting (3DGS) Kerbl et al. (2023) to achieve more fine-grained information modeling and more natural multimodal alignment. As shown in Fig. 1(b), 3DGS employs continuous Gaussian distributions to represent the scene, preserving rich geometric and semantic information in the Gaussian stage and preventing the early quantization-induced information loss seen in BEV-based methods. Unlike direct BEV quantization, 3DGS aggregates information before its final projection onto the BEV grid, allowing cross-modal features to interact at a higher-dimensional level and capturing finer spatial structures prior to quantization; Table 1 shows the effectiveness of this strategy. Moreover, the covariance matrices of Gaussians enable adaptive modeling of uncertainty, enhancing the representation of object shapes and boundaries.</paragraph_2>
diagram
0.993349
OpenReview
ICLR
2,026
Beyond Simple Graphs: Neural Multi-Objective Routing on Multigraphs
Learning-based methods for routing have gained significant attention in recent years, both in single-objective and multi-objective contexts. Yet, existing methods are unsuitable for routing on multigraphs, which feature multiple edges with distinct attributes between node pairs, despite their strong relevance in real-world scenarios. In this paper, we propose two graph neural network-based methods to address multi-objective routing on multigraphs. Our first approach operates directly on the multigraph by autoregressively selecting edges until a tour is completed. The second model, which is more scalable, first simplifies the multigraph via a learned pruning strategy and then performs autoregressive routing on the resulting simple graph. We evaluate both models empirically, across a wide range of problems and graph distributions, and demonstrate their competitive performance compared to strong heuristics and neural baselines.
Combinatorial Optimization, Reinforcement Learning, Graph-based Machine Learning, Multigraphs, Traveling Salesman Problem, Multi-Objective Optimization
learning on graphs and other geometries & topologies
We introduce two GNN-based models for routing with multiple objectives on multigraphs and asymmetric graphs
[ 8, 4, 4 ]
Accept (Poster)
Filip Rydin, Attila Lischka, Jiaming Wu, Morteza Haghir Chehreghani, Balazs Kulcsar
~Filip_Rydin1, ~Attila_Lischka1, ~Jiaming_Wu3, ~Morteza_Haghir_Chehreghani2, ~Balazs_Kulcsar1
20250919
https://openreview.net/forum?id=55laGcPNZZ
55laGcPNZZ
@inproceedings{ rydin2026beyond, title={Beyond Simple Graphs: Neural Multi-Objective Routing on Multigraphs}, author={Filip Rydin and Attila Lischka and Jiaming Wu and Morteza Haghir Chehreghani and Balazs Kulcsar}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=55laGcPNZZ} }
OpenReview/ICLR/figures/2026/accept_poster/55laGcPNZZ/Figure1.png
1
Figure 1: Edge-based GMS and its most important components.
<paragraph_1>We visualize GMS-EB in Figure 1. The encoder, consisting of L GREAT-layers, outputs edge embeddings. Using them, the decoder constructs valid tours autoregressively. Given the instance s and incomplete route π1:t−1 in construction step t, the decoder selects edge πt with probability pθ(λ)(πt | π1:t−1, s). Thus the probability of the whole route π is pθ(λ)(π | s) = ∏t pθ(λ)(πt | π1:t−1, s).</paragraph_1>
diagram
0.998319
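The autoregressive factorization stated in the figure context above means a tour's probability is just the product of the per-step edge probabilities. A tiny numerical illustration follows; the probabilities are made up, whereas in GMS-EB they would come from the decoder over edge embeddings.

```python
# The route probability is the product of per-step edge probabilities; the log form
# is what a REINFORCE-style training objective would use.
import math

step_probs = [0.6, 0.5, 0.9, 0.8]                 # p(pi_t | pi_{1:t-1}, s) per construction step
route_prob = math.prod(step_probs)
log_prob = sum(math.log(p) for p in step_probs)
print(route_prob, log_prob)                        # 0.216 and its log
```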
OpenReview
ICLR
2,026
Goedel-Prover-V2: Scaling Formal Theorem Proving with Scaffolded Data Synthesis and Self-Correction
Automated theorem proving (ATP) --- the task of generating a proof that passes automated proof verification given a math question in formal language --- is a critical challenge at the intersection of mathematics and Artificial Intelligence (AI). We introduce Goedel-Prover-V2, a family of two language models that establish a new state-of-the-art (SOTA) in open-source ATP, using the Lean proof assistant. In addition to standard expert iteration and reinforcement learning, our approach incorporates three key innovations: (1) During training when improvement plateaus on human questions, the prover does scaffolded data synthesis to generate synthetic questions of increasing difficulty for its own training; (2) The prover is trained to self-correct using Lean compiler feedback; (3) Improved test-time exploration through checkpoint averaging to balance accuracy and diversity. Our small model, Goedel-Prover-V2-8B, reaches 84.6\% pass@32 on MiniF2F and outperforms DeepSeek-Prover-V2-671B despite being $80\times$ smaller. Our flagship model, Goedel-Prover-V2-32B, achieves 88.1\% on MiniF2F at pass@32 in standard mode and 90.4\% in self-correction mode, outperforming prior SOTA by a large margin. Additionally, our flagship model solves 86 problems on PutnamBench at pass@184, securing first place among open-source models and surpassing DeepSeek-Prover-V2-671B's record of 47 problems by pass@1024 with about $20\times$ smaller model size and significantly lower compute budget. Our models, code, and data are released at \url{https://github.com/Goedel-LM/Goedel-Prover-V2}.
Theorem Proving, Reasoning
foundation or frontier models, including LLMs
[ 6, 6, 4, 6 ]
Accept (Poster)
Yong Lin, Shange Tang, Bohan Lyu, Ziran Yang, Jui-Hui Chung, Haoyu Zhao, Lai Jiang, Yihan Geng, Jiawei Ge, Jingruo Sun, Jiayun Wu, Jiri Gesi, Ximing Lu, David Acuna, Kaiyu Yang, Hongzhou Lin, Yejin Choi, Danqi Chen, Sanjeev Arora, Chi Jin
~Yong_Lin2, ~Shange_Tang1, ~Bohan_Lyu1, ~Ziran_Yang1, ~Jui-Hui_Chung1, ~Haoyu_Zhao1, ~Lai_Jiang4, ~Yihan_Geng1, ~Jiawei_Ge3, ~Jingruo_Sun1, ~Jiayun_Wu1, ~Jiri_Gesi1, ~Ximing_Lu1, ~David_Acuna1, ~Kaiyu_Yang1, ~Hongzhou_Lin1, ~Yejin_Choi1, ~Danqi_Chen1, ~Sanjeev_Arora1, ~Chi_Jin1
20250916
https://openreview.net/forum?id=j4C0nALrgK
j4C0nALrgK
@inproceedings{ lin2026goedelproverv, title={Goedel-Prover-V2: Scaling Formal Theorem Proving with Scaffolded Data Synthesis and Self-Correction}, author={Yong Lin and Shange Tang and Bohan Lyu and Ziran Yang and Jui-Hui Chung and Haoyu Zhao and Lai Jiang and Yihan Geng and Jiawei Ge and Jingruo Sun and Jiayun Wu and Jiri Gesi and Ximing Lu and David Acuna and Kaiyu Yang and Hongzhou Lin and Yejin Choi and Danqi Chen and Sanjeev Arora and Chi Jin}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=j4C0nALrgK} }
OpenReview/ICLR/figures/2026/accept_poster/j4C0nALrgK/Figure3.png
3
Figure 3: The overall pipeline of model training.
<paragraph_1>We observe that while DeepSeek-Prover-V2 models are already heavily trained and have lost self-correction capabilities, other models like Qwen3 lack the ability to generate formal proofs. To address this trade-off, we use data distilled from DeepSeek-Prover-V2 to cold-start Qwen3, followed by large-scale generation of revision and direct proof data with the resulting model. We then train our own model and iteratively refine it, incorporating scaffolded data. During training, we observe a reduction in output diversity (a form of overfitting) after each stage and apply model averaging to mitigate this. The whole training pipeline consists of the following steps, as illustrated in Figure 3:</paragraph_1>
diagram
0.951549
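The model averaging mentioned in the figure context above (checkpoint averaging in the abstract) amounts to averaging parameters of several checkpoints of the same architecture. Below is a minimal sketch of that step; the toy linear module stands in for the prover model.

```python
# Sketch of checkpoint averaging: element-wise mean of matching state-dict tensors.
import torch
import torch.nn as nn

def average_checkpoints(state_dicts):
    avg = {}
    for key in state_dicts[0]:
        avg[key] = torch.stack([sd[key].float() for sd in state_dicts]).mean(dim=0)
    return avg

# Toy usage: three "checkpoints" of the same small model.
ckpts = [nn.Linear(4, 4).state_dict() for _ in range(3)]
model = nn.Linear(4, 4)
model.load_state_dict(average_checkpoints(ckpts))
print(model.weight.shape)
```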
OpenReview
ICLR
2,026
Learning Unified Representation of 3D Gaussian Splatting
A well-designed vectorized representation is crucial for the learning systems natively based on 3D Gaussian Splatting. While 3DGS enables efficient and explicit 3D reconstruction, its parameter-based representation remains hard to learn as features, especially for neural-network-based models. Directly feeding raw Gaussian parameters into learning frameworks fails to address the non-unique and heterogeneous nature of the Gaussian parameterization, yielding highly data-dependent models. This challenge motivates us to explore a more principled approach to represent 3D Gaussian Splatting in neural networks that preserves the underlying color and geometric structure while enforcing unique mapping and channel homogeneity. In this paper, we propose an embedding representation of 3DGS based on continuous submanifold fields that encapsulate the intrinsic information of Gaussian primitives, thereby benefiting the learning of 3DGS.
Representation Learning, 3D Gaussian Splatting
unsupervised, self-supervised, semi-supervised, and supervised representation learning
Proposed a new representation of 3DGS based on submanifold field that is more suitable for learning.
[ 2, 4, 8, 8 ]
Accept (Poster)
Yuelin Xin, Yuheng Liu, Xiaohui Xie, Xinke Li
~Yuelin_Xin1, ~Yuheng_Liu1, ~Xiaohui_Xie2, ~Xinke_Li1
20250904
https://openreview.net/forum?id=NvpVtGG6hk
NvpVtGG6hk
@inproceedings{ xin2026learning, title={Learning Unified Representation of 3D Gaussian Splatting}, author={Yuelin Xin and Yuheng Liu and Xiaohui Xie and Xinke Li}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=NvpVtGG6hk} }
OpenReview/ICLR/figures/2026/accept_poster/NvpVtGG6hk/Figure6.png
6
Figure 6: Setting of a Gaussian Neural Field; we compare the prediction target SF embedding with raw GS parameters.
<paragraph_1>Gaussian Neural Fields. To validate the potential of our representation for advanced downstream tasks, we introduce the Gaussian Neural Field (GNF). Drawing inspiration from the decoding structures in generative diffusion models (e.g., DiffGS by Zhou et al. (2024b)) and neural compression frameworks (Wu & Tuytelaars, 2024), the GNF functions as a coordinate-based neural implicit field as illustrated in Fig. 6. Specifically, it employs a lightweight MLP (architecture detailed in App. D.4) to learn a continuous mapping from spatial coordinates xi to per-primitive descriptors. This setup allows us to evaluate the “learnability” of our representation: while regressing heterogeneous raw parameters θi often leads to optimization difficulties, our unified SF embeddings provide a smooth and well-conditioned target for the neural field. As evidenced in Tab. 3 and visualization in App. D.4, the SF-guided GNF outperforms the parameter-based baseline in visual fidelity with equivalent training effort. This indicates that our representation is more friendly to neural networks, hinting at its utility for potential downstream generative and compression tasks.</paragraph_1>
diagram
0.973273
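The figure context above describes the Gaussian Neural Field as a lightweight coordinate MLP regressing per-primitive descriptors. The sketch below shows that setup with a placeholder descriptor dimension and random targets standing in for SF embeddings; the architecture and loss are assumptions, not the paper's exact configuration.

```python
# Minimal coordinate-MLP sketch: map Gaussian centers x_i to D-dim descriptors and
# fit them with MSE against placeholder target embeddings.
import torch
import torch.nn as nn

field = nn.Sequential(
    nn.Linear(3, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 32),             # D = 32 placeholder descriptor dimension
)

coords = torch.rand(1024, 3)        # Gaussian centers x_i
targets = torch.randn(1024, 32)     # stand-in for the SF embeddings to regress

opt = torch.optim.Adam(field.parameters(), lr=1e-3)
for _ in range(100):                # short fitting loop
    loss = nn.functional.mse_loss(field(coords), targets)
    opt.zero_grad()
    loss.backward()
    opt.step()
print(float(loss))
```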
OpenReview
ICLR
2,026
Disentangled representation learning through unsupervised symmetry group discovery
Symmetry-based disentangled representation learning leverages the group structure of environment transformations to uncover the latent factors of variation. Prior approaches to symmetry-based disentanglement have required strong prior knowledge of the symmetry group's structure, or restrictive assumptions about the subgroup properties. In this work, we remove these constraints by proposing a method whereby an embodied agent autonomously discovers the group structure of its action space through unsupervised interaction with the environment. We prove the identifiability of the true action group decomposition under minimal assumptions, and derive two algorithms: one for discovering the group decomposition from interaction data, and another for learning Linear Symmetry-Based Disentangled (LSBD) representations without assuming specific subgroup properties. Our method is validated on three environments exhibiting different group decompositions, where it outperforms existing LSBD approaches.
Representation learning, Disentanglement, Group Theory
unsupervised, self-supervised, semi-supervised, and supervised representation learning
[ 8, 4, 8, 6 ]
Accept (Poster)
Barthélémy Dang-Nhu, Louis Annabi, Sylvain ARGENTIERI
~Barthélémy_Dang-Nhu1, ~Louis_Annabi1, ~Sylvain_ARGENTIERI1
20250919
https://openreview.net/forum?id=I6xjMoLY3j
I6xjMoLY3j
@inproceedings{ dang-nhu2026disentangled, title={Disentangled representation learning through unsupervised symmetry group discovery}, author={Barth{\'e}l{\'e}my Dang-Nhu and Louis Annabi and Sylvain ARGENTIERI}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=I6xjMoLY3j} }
OpenReview/ICLR/figures/2026/accept_poster/I6xjMoLY3j/Figure4.png
4
Figure 4: Two isomorphic group actions satisfying Assumption 2.
<paragraph_1>We argue that this assumption alone is not sufficient to recover the correct decomposition. To illustrate this point, consider two distinct environments analogous to Flatland, shown in Figure 4: (a) a 2 × 3 cyclic grid, i.e., G_a ≅ Z/2Z × Z/3Z with actions Ḡ_a = {x} ∪ {y}, and (b) a 6 × 1 cyclic grid, i.e., G_b ≅ Z/6Z with actions Ḡ_b = {2x, 3x}. Both environments satisfy Assumption 2 and can share the same representation, as there exists an isomorphism from G_a to G_b that maps each element of Ḡ_a to a corresponding element in Ḡ_b. From the agent’s perspective, these two situations are indistinguishable in the absence of additional assumptions. Ideally, we seek an assumption that both covers a wide range of practical scenarios, i.e., action sets Ḡ, and enables a computationally tractable procedure for recovering the group decomposition. Among the various options considered, we adopt the following assumption, as it offers a favorable trade-off between situation coverage and computational feasibility: Assumption 3. For all g, g′ ∈ Ḡ, if they belong to the same subgroup then there exists u ∈ Ḡ and m ∈ {1, ..., M} such that we have either g = u^m g′, g = g′u^m, g′ = g u^m, or g′ = u^m g.</paragraph_1> <paragraph_2>Combined with Assumption 2, it is straightforward to show that the implication of Assumption 3 is in fact an equivalence. As a result, we obtain a simple and practical criterion for determining whether two actions belong to the same subgroup. In terms of situation coverage, as soon as M ≥ 2, Assumption 3 holds in common cases such as when Ḡ_i contains an action and its inverse, when Ḡ_k = G_k, or when Ḡ_k = G_k \ {e}. In practice, the action sets considered in the experimental sections of state-of-the-art SBDRL algorithms typically fall into one of these categories. In the scenario illustrated in Figure 4, Assumption 3 allows us to assume that situation (b) will never occur; our method will thus assume that the environment corresponds to case (a).</paragraph_2>
diagram
0.908796
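To make the indistinguishability argument above concrete, the short sketch below (our own illustration, not code from the paper) enumerates the two group actions of Figure 4 and verifies that a Chinese-remainder-style map is an isomorphism between them, so transition data alone cannot separate cases (a) and (b); all names such as `phi` are ours.

```python
from itertools import product

# Environment (a): 2 x 3 cyclic grid, group Z/2Z x Z/3Z with action generators x=(1,0), y=(0,1).
Ga = list(product(range(2), range(3)))
add_a = lambda g, h: ((g[0] + h[0]) % 2, (g[1] + h[1]) % 3)

# Environment (b): 6 x 1 cyclic grid, group Z/6Z with actions {2x, 3x} = {2, 3}.
Gb = list(range(6))
add_b = lambda g, h: (g + h) % 6

# CRT-style map phi: Z/2Z x Z/3Z -> Z/6Z, phi(a, b) = 3a + 2b (mod 6).
phi = lambda g: (3 * g[0] + 2 * g[1]) % 6

assert sorted(phi(g) for g in Ga) == Gb                      # phi is a bijection
assert all(phi(add_a(g, h)) == add_b(phi(g), phi(h))         # ... and a homomorphism
           for g in Ga for h in Ga)
assert {phi((1, 0)), phi((0, 1))} == {2, 3}                  # generators of (a) land on the actions of (b)
print("Ga and Gb are isomorphic; the agent cannot tell (a) from (b) without Assumption 3.")
```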
OpenReview
ICLR
2,026
On-the-Fly Adaptation to Quantization: Configuration-Aware LoRA for Efficient Fine-Tuning of Quantized LLMs
As increasingly large pre-trained models are released, deploying them on edge devices for privacy-preserving applications requires effective compression. Recent works combine quantization with the fine-tuning of high-precision LoRA adapters, which can substantially reduce model size while mitigating the accuracy loss from quantization. However, edge devices have inherently heterogeneous capabilities, while performing configuration-wise fine-tuning for every quantization setting is computationally prohibitive. In this paper, we propose CoA-LoRA, a method that dynamically adjusts the LoRA adapter to arbitrary quantization configurations (i.e., the per-layer bit-width choices of a pre-trained model) without requiring repeated fine-tuning. This is accomplished via a configuration-aware model that maps each configuration to its low-rank adjustments. The effectiveness of this model critically depends on the training configuration set, a collection of configurations chosen to cover different total bit-width budgets. However, constructing a high-quality configuration set is non-trivial. We therefore design a Pareto-based configuration search that iteratively optimizes the training configuration set, yielding more precise low-rank adjustments. Our experiments demonstrate that, unlike the state-of-the-art methods that require fine-tuning a separate LoRA adapter for each configuration, CoA-LoRA incurs no additional time cost while achieving comparable or even superior performance to those methods.
Configuration-aware optimization, Pareto-base configuration search, Quantization, Fine-tuning
foundation or frontier models, including LLMs
[ 4, 6, 6, 6 ]
Accept (Poster)
Rongguang Ye, Ming Tang, Edith C. H. Ngai
~Rongguang_Ye1, ~Ming_Tang5, ~Edith_C._H._Ngai1
20250916
https://openreview.net/forum?id=9OUg0nJE72
9OUg0nJE72
@inproceedings{ ye2026onthefly, title={On-the-Fly Adaptation to Quantization: Configuration-Aware Lo{RA} for Efficient Fine-Tuning of Quantized {LLM}s}, author={Rongguang Ye and Ming Tang and Edith C. H. Ngai}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=9OUg0nJE72} }
OpenReview/ICLR/figures/2026/accept_poster/9OUg0nJE72/Figure3.png
3
Figure 3: Illustration of configuration-aware LoRA adapters with parallel adjustment. The configuration-aware model θ generates adjustment matrices I+Uθ(Ci) from the quantization configuration Ci in parallel, where I denotes the identity matrix.
<paragraph_1>Motivated by this observation, we introduce a configuration-aware model θ : R^{|Qi|} → R^{r×r}, which maps a layer-level configuration vector Qi to a lightweight adjustment matrix Uθ(Qi) ∈ R^{r×r}. As shown in Fig. 3, each layer’s low-rank matrix L2,i is reparameterized as (I + Uθ(Qi))L2,i, where I is the identity matrix. Given a dataset D, let W̃C denote the quantized pre-trained model weights under configuration C. We define the adjusted model weights using a configuration-aware adjustment function:</paragraph_1> <paragraph_2>where HVI(f(C), C) = Hr(C ∪ {f(C)}) − Hr(C) measures the potential hypervolume increase contributed by C. For example, in Fig. 4 (left), the yellow area indicates the HVI of C(3).</paragraph_2> <paragraph_3>Fig. C.3 compares the results under different values of U, where U = 0 corresponds to the case without segment Pareto selection. We observe that applying segment Pareto selection (i.e., U = 20</paragraph_3> <paragraph_4>Figure C.3: Comparison of performance with different segment numbers K across four tasks.</paragraph_4>
diagram
0.998697
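The reparameterization (I + Uθ(Qi))L2,i described above can be illustrated with a small numerical sketch. This is our own toy example rather than the authors' implementation: the dimensions, the random initialization, and the hypothetical `adjustment_net` MLP are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
r, d_out, config_dim = 4, 16, 8   # toy sizes (assumed, not from the paper)

# Frozen LoRA factor L2 of one layer (r x d_out) and a layer-level quantization configuration Q_i
# (e.g. the per-layer bit-width choices).
L2 = rng.normal(size=(r, d_out))
Q_i = rng.choice([2, 4, 8], size=config_dim).astype(float)

# Hypothetical configuration-aware model theta: a two-layer MLP mapping Q_i to an r x r matrix.
W1 = rng.normal(scale=0.1, size=(config_dim, 32))
W2 = rng.normal(scale=0.1, size=(32, r * r))

def adjustment_net(q):
    """Map a configuration vector to the adjustment matrix U_theta(q) in R^{r x r}."""
    h = np.tanh(q @ W1)
    return (h @ W2).reshape(r, r)

U = adjustment_net(Q_i)
L2_adjusted = (np.eye(r) + U) @ L2   # reparameterized factor (I + U_theta(Q_i)) L2

print("adjustment norm:", np.linalg.norm(U))
print("adjusted factor shape:", L2_adjusted.shape)
```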
OpenReview
ICLR
2,026
FHE-Coder: Evaluating LLM Agents for secure Fully Homomorphic Encryption Code Generation
Fully Homomorphic Encryption over the Torus (TFHE) is a cornerstone of confidential computing, yet its adoption is severely limited by a steep learning curve requiring specialized cryptographic expertise. To bridge this skills gap, we investigate the potential of Large Language Model (LLM) agents to automate the generation of secure TFHE and CKKS code from natural language. We introduce FHE-CODER, a novel, three-phase agentic framework designed to overcome the critical failure points of this process. Our framework integrates a Prompt Formalizer to structure user intent and configure secure parameters, a specialized RAG retriever for accurate API knowledge, and an automated Security Verifier that provides iterative feedback to correct cryptographic flaws. We comprehensively evaluate our framework by testing four leading LLMs on a benchmark of ten programming tasks of increasing difficulty. Our results demonstrate that while baseline agents consistently produce functionally correct but insecure code, our full agentic framework is uniquely capable of generating solutions that are simultaneously compilable, functionally correct, and verifiably secure. This work establishes the first robust methodology and benchmark for agentic TFHE and CKKS code generation, demonstrating a viable path toward democratizing secure computation.
Large Language Models, Agents, Code generation, Fully Homomorphic Encryption, Retrieval Augmented Generation
alignment, fairness, safety, privacy, and societal considerations
We built a three-phase agentic framework that enables Large Language Models to automatically generate secure and functional TFHE code, bridging the expertise gap that currently limits the adoption of privacy-preserving computation.
[ 6, 4, 6 ]
Accept (Poster)
Mayank Kumar, Jiaqi Xue, Mengxin Zheng, Qian Lou
~Mayank_Kumar8, ~Jiaqi_Xue1, ~Mengxin_Zheng1, ~Qian_Lou1
20250919
https://openreview.net/forum?id=4F1py5vQXm
4F1py5vQXm
@inproceedings{ kumar2026fhecoder, title={{FHE}-Coder: Evaluating {LLM} Agents for secure Fully Homomorphic Encryption Code Generation}, author={Mayank Kumar and Jiaqi Xue and Mengxin Zheng and Qian Lou}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=4F1py5vQXm} }
OpenReview/ICLR/figures/2026/accept_poster/4F1py5vQXm/Figure4.png
4
Figure 4: An offline, human-in-the-loop process creates a dictionary mapping expert-enriched docstrings to code snippets from the TFHE documentation.
<paragraph_1>Therefore, to mitigate each of these issues, we introduce the novel agentic code generation workflow and evaluation framework as shown in Fig. 2. Our workflow is composed of three key components designed to address these specific challenges. First, the FHE Prompt Formalizer (Fig. 3) corrects structural and parameterization errors by translating the user’s request into a formal specification with secure, correctly calculated cryptographic parameters. Second, to remedy the model’s lack of API knowledge, an FHE API RAG Retriever (Fig. 4) provides the agent with relevant documentation and code examples on-demand. Finally, to overcome inadequate evaluation, our FHE Security Verifier (Fig. 5) introduces a multi-faceted check for critical security properties, ensuring the generated code is not only functionally correct but also verifiably secure.</paragraph_1> <paragraph_2>The FHE API RAG Retriever, illustrated in Figure 4, addresses the limitations of standard retrieval methods, which fail almost entirely in this domain because LLMs lack the intrinsic structure to interpret strict cryptographic APIs or respect ciphertext-only computation rules (Appendix B). To bridge the semantic gap between natural-language intent and these rigid library constraints, we construct a knowledge base using expert-enriched metadata. Specifically, we transform TFHE method docstrings into the Doxygen format, utilizing structured tags such as @objective to embed machine-readable semantic instructions. This enrichment enables the agent to retrieve precise, security-compliant code snippets based on cryptographic purpose rather than ambiguous</paragraph_2>
diagram
0.926502
OpenReview
ICLR
2,026
PALC: Preference Alignment via Logit Calibration
Aligning Large Language Models with human preferences typically requires computationally intensive training or complex reward architectures. We introduce PALC (Preference Alignment via Logit Calibration), a parameter-efficient framework that achieves test-time alignment through a novel intervention strategy: direct calibration in vocabulary space. Unlike existing methods that manipulate entangled hidden representations or rely on external reward models, PALC operates at the logit layer where each dimension corresponds to a distinct token, providing interpretable and efficient control. Our approach employs a bottleneck architecture that learns to compress the base model's hidden states and generate position-dependent calibration vectors, requiring only a fraction of the base model's parameters. Through this design, PALC sidesteps the superposition problem inherent in representation engineering while eliminating the computational overhead of guided decoding methods. A single scaling factor enables runtime adjustment of alignment strength without retraining, allowing practitioners to balance between preserving model capabilities and enforcing preferences. Experiments demonstrate that PALC outperforms most test-time alignment methods while maintaining near-baseline inference speed. Our ablations reveal that human preferences concentrate on surprisingly low-dimensional manifolds, validating our architectural choices. By establishing vocabulary-space intervention as an effective alignment paradigm, PALC makes preference alignment accessible for resource-constrained deployments where traditional methods are infeasible, opening new avenues for scalable and adaptive AI alignment.
AI alignment, Representation Editing
alignment, fairness, safety, privacy, and societal considerations
PALC: preference alignment via logit calibration. Learns compact calibrations for frozen LLMs, achieving strong alignment without external rewards or fine-tuning. Outperforms most test-time methods with minimal latency.
[ 6, 6, 6, 4 ]
Accept (Poster)
SANGHYUN LEE, Hoh Peter In
~SANGHYUN_LEE4, ~Hoh_Peter_In1
20250920
https://openreview.net/forum?id=0cmuYj3WeG
0cmuYj3WeG
@inproceedings{ lee2026palc, title={{PALC}: Preference Alignment via Logit Calibration}, author={SANGHYUN LEE and Hoh Peter In}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=0cmuYj3WeG} }
OpenReview/ICLR/figures/2026/accept_poster/0cmuYj3WeG/Figure1.png
1
Figure 1: Overview of the PALC framework. Unlike conventional representation steering methods that intervene in entangled hidden spaces, PALC treats the base model’s hidden states ht strictly as a read-only context. A lightweight Calibration Module (θ) extracts essential preference signals through a bottleneck architecture (Wdown,Wup) to generate calibration vectors mt in the disentangled logit space. This decoupling ensures precise preference alignment with minimal computational overhead and preserves the base model’s general capabilities.
<paragraph_1>We examine how the scaling factor γ affects PALC’s performance. Figure 3 shows results for five values: γ ∈{0.5, 1.0, 3.0, 5.0, 10.0}.</paragraph_1>
diagram
0.942897
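The mechanism summarized in the caption (a bottleneck that reads the frozen hidden state and emits a calibration vector added to the logits, scaled at runtime by γ) can be sketched in a few lines. This is an illustrative reconstruction under our own assumptions about sizes and the tanh bottleneck, not the released PALC code.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, vocab_size, r = 64, 1000, 8     # toy sizes (assumed, not from the paper)

# Frozen base-model pieces: a hidden state h_t and the unembedding matrix W_U.
h_t = rng.normal(size=d_model)
W_U = rng.normal(scale=0.02, size=(d_model, vocab_size))
base_logits = h_t @ W_U

# Hypothetical calibration module: a bottleneck (W_down, W_up) reads h_t (read-only) and
# emits a calibration vector m_t directly in vocabulary (logit) space.
W_down = rng.normal(scale=0.02, size=(d_model, r))
W_up = rng.normal(scale=0.02, size=(r, vocab_size))
m_t = np.tanh(h_t @ W_down) @ W_up

gamma = 1.0                               # runtime scaling factor for alignment strength
calibrated_logits = base_logits + gamma * m_t

probs = np.exp(calibrated_logits - calibrated_logits.max())
probs /= probs.sum()
print(probs.shape, probs.sum())           # (1000,) ~1.0
```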
OpenReview
ICLR
2,026
Neural Predictor-Corrector: Solving Homotopy Problems with Reinforcement Learning
The Homotopy paradigm, a general principle for solving challenging problems, appears across diverse domains such as robust optimization, global optimization, polynomial root-finding, and sampling. Practical solvers for these problems typically follow a predictor-corrector (PC) structure, but rely on hand-crafted heuristics for step sizes and iteration termination, which are often suboptimal and task-specific. To address this, we unify these problems under a single framework, which enables the design of a general neural solver. Building on this unified view, we propose Neural Predictor-Corrector (NPC), which replaces hand-crafted heuristics with automatically learned policies. NPC formulates policy selection as a sequential decision-making problem and leverages reinforcement learning to automatically discover efficient strategies. To further enhance generalization, we introduce an amortized training mechanism, enabling one-time offline training for a class of problems and efficient online inference on new instances. Experiments on four representative homotopy problems demonstrate that our method generalizes effectively to unseen instances. It consistently outperforms classical and specialized baselines in efficiency while demonstrating superior stability across tasks, highlighting the value of unifying homotopy methods into a single neural framework.
Homotopy System, Graduated optimization, Reinforcement Learning, Polynomial Equations System, Gaussian Homotopy, Sampling
applications to computer vision, audio, language, and other modalities
[ 6, 6, 4 ]
Accept (Poster)
Jiayao Mai, Bangyan Liao, Zhenjun Zhao, Yingping Zeng, Haoang Li, Javier Civera, Tailin Wu, Yi Zhou, Peidong Liu
~Jiayao_Mai3, ~Bangyan_Liao1, ~Zhenjun_Zhao1, ~Yingping_Zeng1, ~Haoang_Li1, ~Javier_Civera1, ~Tailin_Wu1, ~Yi_Zhou27, ~Peidong_Liu3
20250905
https://openreview.net/forum?id=x6iodYWNty
x6iodYWNty
@inproceedings{ mai2026neural, title={Neural Predictor-Corrector: Solving Homotopy Problems with Reinforcement Learning}, author={Jiayao Mai and Bangyan Liao and Zhenjun Zhao and Yingping Zeng and Haoang Li and Javier Civera and Tailin Wu and Yi Zhou and Peidong Liu}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=x6iodYWNty} }
OpenReview/ICLR/figures/2026/accept_poster/x6iodYWNty/Figure2.png
2
Figure 2: Illustration of the Predictor-Corrector algorithm. Predictor proposes the next level and provides an initial solution estimate, while Corrector iteratively refines this estimate to project it back onto the solution trajectory. Orange curve denotes the implicit solution trajectory, as in Fig. 1.
<paragraph_1>While the homotopy paradigm specifies the abstract principle, an effective algorithm is needed to trace the implicit solution trajectory in practice. The PC method (Allgower & Georg, 2012) provides such a concrete algorithmic framework. As shown in Fig. 2, PC decomposes trajectory tracking into two complementary steps:</paragraph_1>
diagram
0.881063
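For readers unfamiliar with predictor-corrector continuation, the sketch below traces a scalar homotopy H(x, t) = (1 - t)g(x) + t f(x) with a first-order predictor and a Newton corrector. It is a generic textbook PC loop with a fixed, hand-set step schedule, i.e. exactly the kind of heuristic policy that NPC proposes to learn; it is not the paper's method, and the example functions are our own.

```python
# Minimal predictor-corrector sketch for a scalar homotopy H(x, t) = (1 - t)*g(x) + t*f(x),
# traced from a root of the easy problem g at t = 0 to a root of the target f at t = 1.
f = lambda x: x**3 - 2.0            # target problem, root 2 ** (1/3)
g = lambda x: x - 1.0               # easy start problem, root 1.0
H = lambda x, t: (1 - t) * g(x) + t * f(x)
H_x = lambda x, t: (1 - t) + t * 3 * x**2        # dH/dx
H_t = lambda x, t: f(x) - g(x)                   # dH/dt

x, n_steps = 1.0, 20                # fixed schedule; NPC would learn step sizes / stopping instead
for k in range(1, n_steps + 1):
    t_prev, t = (k - 1) / n_steps, k / n_steps
    # Predictor: Euler step along the implicit trajectory, dx/dt = -H_t / H_x.
    x = x - (t - t_prev) * H_t(x, t_prev) / H_x(x, t_prev)
    # Corrector: a few Newton iterations projecting back onto H(x, t) = 0.
    for _ in range(3):
        x -= H(x, t) / H_x(x, t)

print(x, 2.0 ** (1.0 / 3.0))        # both approximately 1.2599
```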
OpenReview
ICLR
2,026
CLUE: Conflict-guided Localization for LLM Unlearning Framework
The LLM unlearning aims to eliminate the influence of undesirable data without affecting causally unrelated information. This process typically involves using a **forget set** to remove target information, alongside a **retain set** to maintain non-target capabilities. While recent localization-based methods demonstrate promise in identifying important nodes (neurons) to be unlearned, they fail to disentangle nodes responsible for forgetting undesirable knowledge or retaining essential skills, often treating them as a single entangled group. As a result, these methods apply uniform interventions, risking catastrophic over-forgetting or incomplete erasure of the target knowledge. To address this, we turn to circuit discovery, a mechanistic interpretability technique, and propose the **C**onflict-guided **L**ocalization for LLM **U**nlearning fram**E**work (**CLUE**). This framework identifies the forget and retain circuit composed of important nodes, and then the circuits are transformed into conjunctive normal forms (CNF). The assignment of each node in the CNF satisfiability solution reveals whether it should be forgotten or retained. We then provide targeted fine-tuning strategies for different categories of nodes. Extensive experiments demonstrate that, compared to existing localization methods, CLUE achieves superior forget efficacy and retain utility through precise neural localization.
LLM unlearning, circuit discovery, conjunctive normal form, interpretability
foundation or frontier models, including LLMs
We use circuit discovery and CNF solving to design the localization for forget neurons and retain neurons in the LLM unlearning task.
[ 6, 6, 4, 2 ]
Accept (Poster)
Hang Chen, Jiaying Zhu, Xinyu Yang, Wenya Wang
~Hang_Chen3, ~Jiaying_Zhu5, ~Xinyu_Yang2, ~Wenya_Wang1
20250901
https://openreview.net/forum?id=jtRYvazBWv
jtRYvazBWv
@inproceedings{ chen2026clue, title={{CLUE}: Conflict-guided Localization for {LLM} Unlearning Framework}, author={Hang Chen and Jiaying Zhu and Xinyu Yang and Wenya Wang}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=jtRYvazBWv} }
OpenReview/ICLR/figures/2026/accept_poster/jtRYvazBWv/Figure2.png
2
Figure 2: Overview from datasets to localization.
<paragraph_1>In this section, we provide a three-step framework of how circuit discovery ultimately enables precise localization. An overview of our localization procedure is shown in Figure 2. Specifically,</paragraph_1>
diagram
0.850337
OpenReview
ICLR
2,026
Latent Geometry-Driven Network Automata for Complex Network Dismantling
Complex networks model the structure and function of critical technological, biological, and communication systems. Network dismantling, the targeted removal of nodes to fragment a network, is essential for analyzing and improving system robustness. Existing dismantling methods suffer from key limitations: they depend on global structural knowledge, exhibit slow running times on large networks, and overlook the network’s latent geometry, a key feature known to govern the dynamics of complex systems. Motivated by these findings, we introduce Latent Geometry-Driven Network Automata (LGD-NA), a novel framework that leverages local network automata rules to approximate effective link distances between interacting nodes. LGD-NA is able to identify critical nodes and capture latent manifold information of a network for effective and efficient dismantling. We show that this latent geometry-driven approach outperforms all existing dismantling algorithms, including spectral Laplacian-based methods and machine learning ones such as graph neural networks. We also find that a simple common-neighbor-based network automata rule achieves near state-of-the-art performance, highlighting the effectiveness of minimal local information for dismantling. LGD-NA is extensively validated on the largest and most diverse collection of real-world networks to date (1,475 real-world networks across 32 complex systems domains) and scales efficiently to large networks via GPU acceleration. Finally, we leverage the explainability of our common-neighbor approach to engineer network robustness, substantially increasing the resilience of real-world networks. We validate LGD-NA's practical utility on domain-specific functional metrics, spanning neuronal firing rates in the Drosophila Connectome, transport efficiency in flight maps, outbreak sizes in contact networks, and communication pathways in terrorist cells. Our results confirm latent geometry as a fundamental principle for understanding the robustness of real-world systems, adding dismantling to the growing set of processes that network geometry can explain.
network robustness, network dismantling, network geometry, network science, complex systems, network automata, graphs, network topology
learning on graphs and other geometries & topologies
Latent Geometry-Driven Network Automata dismantles networks by estimating effective link distances on the latent manifold via local rules, outperforming all existing methods on 1,475 real-world networks and runs efficiently on large systems via GPU.
[ 4, 2, 6, 6 ]
Accept (Poster)
Thomas Adler, Marco Grassia, Ziheng Liao, Giuseppe Mangioni, Carlo Vittorio Cannistraci
~Thomas_Adler2, ~Marco_Grassia1, ~Ziheng_Liao1, ~Giuseppe_Mangioni1, ~Carlo_Vittorio_Cannistraci1
20250918
https://openreview.net/forum?id=yz29QCGVzC
yz29QCGVzC
@inproceedings{ adler2026latent, title={Latent Geometry-Driven Network Automata for Complex Network Dismantling}, author={Thomas Adler and Marco Grassia and Ziheng Liao and Giuseppe Mangioni and Carlo Vittorio Cannistraci}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=yz29QCGVzC} }
OpenReview/ICLR/figures/2026/accept_poster/yz29QCGVzC/Figure1.png
1
Figure 1: Overview of the LGD Network Automata framework. A: Begin with an unweighted and undirected network. B: Estimate latent geometry by assigning a weight νij to each edge between nodes i and j using local latent geometry estimators. C: Construct a dissimilarity-weighted network based on these weights. D: Compute node strength as the sum of geometric weights to all neighbors in N(i): si = ∑_{j∈N(i)} νij. E–F: Perform dynamic dismantling by iteratively computing node strengths, removing the node with the highest si and its edges, and checking whether the normalized size of the largest connected component (LCC) has dropped below a threshold. G–H (optional): Reinsert dismantled nodes using a selected reinsertion method.
<paragraph_1>We introduce the Latent Geometry-Driven Network Automata (LGD-NA) framework. LGD-NA adopts a parameter-free network automaton rule, such as RA2, to estimate latent geometric linked node pairwise distances and to assign edge weights based on these geometric distances. Then, it computes for each node its network centrality as a sum of the weights of adjacent edges. The higher this sum, the more a node dominates numerous and far-apart regions of the network, becoming a prioritized candidate for a targeted attack in the network dismantling process. This prioritized node is then removed from the network, and the procedure is iteratively repeated until the network is dismantled (see Figure 1 for a full breakdown).</paragraph_1> <paragraph_2>To ensure full reproducibility, we have made our source code publicly available, including detailed instructions on how to replicate all experiments. The codebase includes an implementation of our LGD-NA framework (illustrated in Figure 1), the exact formulas used (detailed in Appendix A), and an example network for demonstration. The code is compatible with both CPU and GPU environments and also provides the necessary tools to engineer network robustness as described in this work. The baseline methods were implemented using the code from the review by Artime et al. (2024). The exact topological measures of all networks used in our study are provided in Appendix 9. Further details regarding the experimental setup, including hardware specifications, are described in Appendix M and N.</paragraph_2>
diagram
0.976884
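A minimal sketch of the dismantling loop described above is given below, using networkx. The common-neighbor edge weight is an assumed stand-in for the paper's RA-style estimator, and the stopping threshold is arbitrary; this is illustrative code, not the released implementation.

```python
import networkx as nx

def lgd_na_dismantle(G, lcc_threshold=0.1):
    """Illustrative dismantling loop in the spirit of Figure 1 (not the authors' code).

    Edge weights nu_ij use a simple common-neighbor rule (an assumption standing in for the
    paper's RA-style estimator), node strength is the sum of adjacent weights, and the
    strongest node is removed until the largest connected component falls below the threshold.
    """
    G = G.copy()
    n0, removed = G.number_of_nodes(), []
    while G.number_of_nodes() > 0:
        lcc = max((len(c) for c in nx.connected_components(G)), default=0)
        if lcc / n0 <= lcc_threshold:
            break
        # Common-neighbor-based dissimilarity weight for each edge (assumed form).
        nu = {(i, j): 1.0 / (1.0 + len(list(nx.common_neighbors(G, i, j)))) for i, j in G.edges()}
        strength = {v: 0.0 for v in G.nodes()}
        for (i, j), w in nu.items():
            strength[i] += w
            strength[j] += w
        target = max(strength, key=strength.get)   # node dominating numerous, far-apart regions
        removed.append(target)
        G.remove_node(target)
    return removed

print(lgd_na_dismantle(nx.karate_club_graph(), lcc_threshold=0.2))
```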
OpenReview
ICLR
2,026
Accelerated co-design of robots through morphological pretraining
The co-design of robot morphology and neural control typically requires using reinforcement learning to approximate a unique control policy gradient for each body plan, demanding massive amounts of training data to measure the performance of each design. Here we show that a universal, morphology-agnostic controller can be rapidly and directly obtained by gradient-based optimization through differentiable simulation. This process of morphological pretraining allows the designer to explore non-differentiable changes to a robot's physical layout (e.g. adding, removing and recombining discrete body parts) and immediately determine which revisions are beneficial and which are deleterious using the pretrained model. We term this process "zero-shot evolution" and compare it with the simultaneous co-optimization of a universal controller alongside an evolving design population. We find the latter results in _diversity collapse_, a previously unknown pathology whereby the population—and thus the controller's training data—converges to similar designs that are easier to steer with a shared universal controller. We show that zero-shot evolution with a pretrained controller quickly yields a diversity of highly performant designs, and by fine-tuning the pretrained controller on the current population throughout evolution, diversity is not only preserved but significantly increased as superior performance is achieved. Videos viewable at this website: https://gilded-macaron-5a75e3.netlify.app
robot co-design, universal control, differentiable simulation, embodied intelligence
applications to robotics, autonomy, planning
[ 2, 6, 6 ]
Accept (Poster)
Luke Strgar, Sam Kriegman
~Luke_Strgar1, ~Sam_Kriegman1
20250919
https://openreview.net/forum?id=WVliGyFwZv
WVliGyFwZv
@inproceedings{ strgar2026accelerated, title={Accelerated co-design of robots through morphological pretraining}, author={Luke Strgar and Sam Kriegman}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=WVliGyFwZv} }
OpenReview/ICLR/figures/2026/accept_poster/WVliGyFwZv/Figure2.png
2
Figure 2: Overview of the proposed method. End-to-end differentiable policy training across tens of millions of morphologically distinct robots—morphological pretraining—produces a universal controller, which was kept frozen throughout zero-shot evolution and finetuned for each generation of few-shot evolution.
<paragraph_1>Inspired by the remarkable success of large-scale pretrained models in computer vision and natural language processing, we here pretrain a universal controller across millions of complex body plans using gradient information from differentiable simulation, averaging gradients across variations in the robot’s body, world and goal (Fig. 1). Armed with a universal controller, evolution can now iteratively improve the robot’s morphology, and the controller can be rapidly finetuned for the current population with simulation gradients (Fig. 2). This also enables the successful recombination of designs (a.k.a. crossover; Fig. 4), a hallmark of biological evolution and of human engineering that has yet to be convincingly demonstrated in robots.</paragraph_1>
diagram
0.924178
OpenReview
ICLR
2,026
Automatic and Structure-Aware Sparsification of Hybrid Neural ODEs with Application to Glucose Prediction
Hybrid neural ordinary differential equations (neural ODEs) integrate mechanistic models with neural ODEs, offering strong inductive bias and flexibility, and are particularly advantageous in data-scarce healthcare settings. However, excessive latent states and interactions from mechanistic models can lead to training inefficiency and over-fitting, limiting the practical effectiveness of hybrid neural ODEs. In response, we propose a new hybrid pipeline for automatic state selection and structure optimization in mechanistic neural ODEs, combining domain-informed graph modifications with data-driven regularization to sparsify the model for improving predictive performance and stability while retaining mechanistic plausibility. Experiments on synthetic and real-world data show improved predictive performance and robustness with desired sparsity, establishing an effective solution for hybrid model reduction in healthcare applications.
Predictive Sparsity, Hybrid Neural ODE, Group LASSO, Glucose Prediction
applications to physical sciences (physics, chemistry, biology, etc.)
[ 4, 6, 4, 8 ]
Accept (Poster)
Bob Junyi Zou, Lu Tian
~Bob_Junyi_Zou1, ~Lu_Tian4
20250918
https://openreview.net/forum?id=QBzFrjEF59
QBzFrjEF59
@inproceedings{ zou2026automatic, title={Automatic and Structure-Aware Sparsification of Hybrid Neural {ODE}s with Application to Glucose Prediction}, author={Bob Junyi Zou and Lu Tian}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=QBzFrjEF59} }
OpenReview/ICLR/figures/2026/accept_poster/QBzFrjEF59/Figure5.png
5
Figure 5: An illustration of the mechanistic vs true graphs used in the synthetic experiments
<paragraph_1>In figure 5, we provide an illustration of the mechanistic graph used in the synthetic experiments.</paragraph_1>
diagram
0.92587
OpenReview
ICLR
2,026
Tractability via Low Dimensionality: The Parameterized Complexity of Training Quantized Neural Networks
The training of neural networks has been extensively studied from both algorithmic and complexity-theoretic perspectives, yet recent results in this direction almost exclusively concern real-valued networks. In contrast, advances in machine learning practice highlight the benefits of quantization, where network parameters and data are restricted to finite integer domains, yielding significant improvements in speed and energy efficiency. Motivated by this gap, we initiate a systematic complexity-theoretic study of ReLU Neural Network Training in the full quantization mode. We establish strong lower bounds by showing that hardness already arises in the binary setting and under highly restrictive structural assumptions on the architecture, thereby excluding parameterized tractability for natural measures such as depth and width. On the positive side, we identify nontrivial fixed-parameter tractable cases when parameterizing by input dimensionality in combination with width and either output dimensionality or error bound, and further strengthen these results by replacing width with the more general treewidth.
treewidth, parameterized complexity, quantized neural networks, ReLU networks
learning theory
We study the classical and parameterized complexity of training quantized neural networks and obtain new upper as well as lower bounds for the problem.
[ 6, 8, 6 ]
Accept (Poster)
Robert Ganian, Frank Sommer, Manuel Sorge
~Robert_Ganian1, ~Frank_Sommer1, ~Manuel_Sorge1
20250918
https://openreview.net/forum?id=BAQNrsr987
BAQNrsr987
@inproceedings{ ganian2026tractability, title={Tractability via Low Dimensionality: The Parameterized Complexity of Training Quantized Neural Networks}, author={Robert Ganian and Frank Sommer and Manuel Sorge}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=BAQNrsr987} }
OpenReview/ICLR/figures/2026/accept_poster/BAQNrsr987/Figure4.png
4
Figure 4: An illustration of the reduction behind Theorem 3 for the universe U = [6] and the set family F with sets S1 = {1, 4, 5}, S2 = {2, 3}, S3 = {1, 6}, S4 = {2, 5}, S5 = {3, 5}, S6 = {6} and k = 3 and with a hitting set S = {2, 5, 6}. In the solution corresponding to S, inputs p1, p2 and p3 are associated with elements 2, 5 and 6, respectively. Moreover, each red arc has weight 0 and each blue arc has weight 1. The orange numbers are the biases of the output neurons.
<paragraph_1>We construct an equivalent instance I of 2-QNNT as follows; see Figure 4 for an illustration. Description of architecture G. We create two input neurons z1 and z2. For each of the two literals</paragraph_1> <paragraph_2>Construction. We construct an instance I of 2-QNNT as follows. For an illustration, see Figure 4. Description of the architecture G. We create k input neurons p1, . . . , pk. Abusing notation, for each set F ∈ F we create one set output neuron F. We add arcs between every input and output neuron. Description of the data set. For each element u ∈ U we add k element u data points d_u^1, . . . , d_u^k. Element u data point d_u^i has value 1 in input pi and value 0 in each other input. Moreover, d_u^i has value 1 in each set output F such that u ∈ F. Thus, d_u^i has value 0 in each set output F′ such that u ∉ F′. Observe that the k element u data points all have the same output but they have pairwise different inputs. Then, we add a verifier data point d∗ which has value 1 in each input and in each output. In the following, we say that two data points d1 and d2 have the same type if the input values of d1 and d2 are pairwise identical. Note that we have exactly k + 1 distinct types of data points.</paragraph_2>
diagram
0.90793
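The data-set construction in the figure context can be followed with a tiny script that instantiates it for the Hitting Set example from the caption (U = [6], the sets S1–S6, k = 3). This is our own sketch of the described reduction, not code from the paper.

```python
# Illustrative construction of the data set from the Hitting Set instance in Figure 4.
U = list(range(1, 7))                                   # universe [6]
F = [{1, 4, 5}, {2, 3}, {1, 6}, {2, 5}, {3, 5}, {6}]    # sets S1..S6
k = 3

data = []
for u in U:
    for i in range(k):                                  # k "element u" data points d_u^1..d_u^k
        x = [1 if j == i else 0 for j in range(k)]      # value 1 only in input p_{i+1}
        y = [1 if u in S else 0 for S in F]             # value 1 in each set output containing u
        data.append((x, y))
data.append(([1] * k, [1] * len(F)))                    # verifier data point d*

print(len(data), "data points,", k + 1, "distinct input types")
```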
OpenReview
ICLR
2,026
Constrained Decoding of Diffusion LLMs with Context-Free Grammars
Large language models (LLMs) have shown promising performance across diverse domains. Many practical applications of LLMs, such as code completion and structured data extraction, require adherence to syntactic constraints specified by a formal language. Yet, due to their probabilistic nature, LLM output is not guaranteed to adhere to such formal languages. To address this, prior work has proposed constrained decoding to restrict LLM generation to particular formal languages. However, existing works are not applicable to the emerging paradigm of diffusion LLMs, as this requires supporting token generation in arbitrary order instead of the traditional left-to-right order. In this paper, we address this challenge and present the first constrained decoding method for diffusion models, one that can handle formal languages captured by context-free grammars. We begin by reducing constrained decoding to the more general additive infilling problem, which asks whether a partial output with holes can be completed to a valid word in the target language. This problem also naturally subsumes the previously unaddressed multi-region infilling constrained decoding. We then reduce this problem to the task of deciding whether the intersection of the target language and a regular language is empty and present an efficient algorithm to solve this task for context-free languages. Empirical results on various applications, such as C++ code infilling and structured data extraction in JSON, demonstrate that our method achieves near-perfect syntactic correctness while consistently preserving or improving functional correctness. Importantly, our efficiency optimizations ensure that the computational overhead remains practical.
diffusion llm, constrained decoding, llm, code generation, json, multi-region infilling, fill in the middle, code synthesis
generative models
We reduce constrained decoding for generalized code generation paradigms to an operation on formal languages, enabling constrained decoding for infilling and diffusion LLMs.
[ 4, 8, 6, 4 ]
Accept (Poster)
Niels Mündler, Jasper Dekoninck, Martin Vechev
~Niels_Mündler1, ~Jasper_Dekoninck1, ~Martin_Vechev1
20250916
https://openreview.net/forum?id=7Sph4KyeYO
7Sph4KyeYO
@inproceedings{ mundler2026constrained, title={Constrained Decoding of Diffusion {LLM}s with Context-Free Grammars}, author={Niels M{\"u}ndler and Jasper Dekoninck and Martin Vechev}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=7Sph4KyeYO} }
OpenReview/ICLR/figures/2026/accept_poster/7Sph4KyeYO/Figure3.png
3
Figure 3: Examples of Figures 1 and 4 processed during our method. (a) The grammar is first normalized into C2F+ε, and (b) the NFA is transformed into a minimal DFA. (c) To determine
<paragraph_1>Constructing the regular language The language Cx of all possible completions of x = x1 . . . xn contains all words that start with x1, end with xn, and contain the strings xi (1 ≤ i ≤ n) in the correct order, with arbitrary symbols in between. We prove that Cx is regular by constructing an NFA that accepts Cx. We first construct automata Di, which accept exactly xi. Then, we concatenate Di with an additional state qi that accepts any string in Σ∗, i.e., δ(qi, σ) = qi for all σ ∈ Σ. For the concatenation, we add an ε-edge from the accepting states of Di to qi and from qi to the start state of Di+1. A visualization for the prior example is shown in Figure 2b. In our algorithm, we construct this NFA for each update. We then transform it into an equivalent DFA and minimize the DFA using standard methods (Hopcroft and Ullman, 1979), as shown in Figure 3b.</paragraph_1> <paragraph_2>Constructing the intersection language We leverage the well-established facts that (a) the intersection L∩ of CFL L and regular language Cx is a CFL, whose grammar can be constructed from L’s grammar G and Cx’s DFA, and (b) that the emptiness of a CFL can be checked in time polynomial to the size of the grammar (Gasarch, 2014; Hopcroft and Ullman, 1979). The symbols in the intersection language have the form p⃗A q for states p, q ∈ Q and A ∈ V, where each symbol intuitively represents deriving a word from A that also traverses the DFA from state p to q. The language is nonempty if we can derive a word from q0⃗S qf for start symbol S and initial and final state q0 and qf. An example of deriving a word in the intersection language is shown in Figure 3c. The intersection grammar G∩ = (V∩, Σ, P∩, S∩) will have a cubic size in nonterminals and productions, with |V∩| ∈ O(|V||Q|²) and |P∩| ∈ O(|P||Q|³ + |P||Q|²|Σ|) (Gasarch, 2014; Bar-Hillel et al., 1961). While we cannot reduce the worst case complexity of this blowup, we carefully construct the intersection language to keep its size at a minimum, and employ several heuristics to reduce the practical cost of determining its emptiness, explained next.</paragraph_2> <paragraph_3>Efficient normalization The standard intersection algorithms require G to be transformed to Chomsky normal form, which only allows rules of the form A → BC or A → a, where A, B, C ∈ V and a ∈ Σ (Hopcroft and Ullman, 1979). The resulting grammar may have a quadratic increase in the number of production rules (Lange and Leiß, 2009). To avoid this increase, we extend the standard construction to support CFGs in C2F+ε, a normal form that additionally allows productions of the form A → ε and A → B. We provide an example of the normalized C++ grammar in Figure 3a. This normal form can be obtained with only a linear increase in production rules (Lange and Leiß, 2009). Our adaptations to the standard intersection algorithm and a proof of its correctness are provided in Appendix B.1. In Appendix B.2, we describe several further heuristics to reduce the size of the normalized CFG of G. After this step, we can intersect the languages and determine the emptiness of the intersection language.</paragraph_3>
diagram
0.965765
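The completion language Cx described in the figure context is easy to sanity-check with a regular-expression stand-in: words that start with the first fragment, end with the last, and contain all fragments in order with arbitrary symbols in between. The paper builds an explicit NFA and minimizes it to a DFA before intersecting with the grammar; the regex below is only an illustrative shortcut, and the example fragments are ours.

```python
import re

def completion_language_regex(fragments):
    """Regex accepting the completion language C_x: words that start with the first fragment,
    end with the last, and contain all fragments in order with arbitrary symbols in between.
    (The paper constructs an NFA/DFA for this language; the regex is a compact stand-in.)"""
    return re.compile("^" + ".*".join(re.escape(x) for x in fragments) + "$", re.DOTALL)

# Partial output with two fixed regions, e.g. a fill-in-the-middle code snippet.
fragments = ["int main() {", "return 0;\n}"]
Cx = completion_language_regex(fragments)

assert Cx.match("int main() {\n  int x = 1;\n  return 0;\n}")
assert not Cx.match("void main() {\n  return 0;\n}")   # does not start with x1
```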
OpenReview
ICLR
2,026
Grounding Generative Planners in Verifiable Logic: A Hybrid Architecture for Trustworthy Embodied AI
While Large Language Models (LLMs) show immense promise as planners for embodied AI, their stochastic nature and lack of formal reasoning capabilities prevent the strict safety guarantees required for physical deployment. Current approaches fall short: they either rely on other unreliable LLMs for safety checks or simply reject unsafe plans without offering a path to success. This work bridges this critical gap by introducing the Verifiable Iterative Refinement Framework (VIRF), a neuro-symbolic architecture that shifts the paradigm from a passive safety gatekeeper to an active safety collaborator. Where prior verifiers simply reject failures, our framework provides causal, pedagogical feedback that teaches the LLM why its plan was unsafe, enabling intelligent repairs rather than mere avoidance. Our core contribution is a novel tutor-apprentice dialogue, where a deterministic Logic Tutor, grounded in a formal safety ontology, provides causal and explanatory feedback to an LLM Apprentice planner. This pedagogical interaction allows the apprentice to perform intelligent, creative plan repairs, resolving safety conflicts rather than merely avoiding them. To ground this dialogue in verifiable truth, we introduce a scalable knowledge acquisition pipeline that synthesizes a comprehensive safety knowledge base from real-world documents, a process that simultaneously reveals and corrects significant blind spots in existing benchmarks. On a new suite of challenging home safety tasks, VIRF achieves a perfect 0\% Hazardous Action Rate (HAR), completely eliminating unsafe actions while attaining a 77.3\% Goal-Condition Rate (GCR)—the highest among all baselines. It does so with remarkable efficiency, requiring only 1.1 correction iterations on average. By acting as a verifiable safety scaffold, VIRF demonstrates a principled and robust pathway toward building embodied agents that are not just capable, but fundamentally trustworthy.
neurosymbolic AI, hybrid AI, formal reasoning, large language models, AI safety, verifiable AI, embodied AI, robotics
neurosymbolic & hybrid AI systems (physics-informed, logic & formal reasoning, etc.)
We propose a hybrid neuro-symbolic architecture where a formal logic verifier tutors an LLM planner, enabling the generation of verifiably safe plans for embodied agents.
[ 4, 2, 6, 4 ]
Accept (Poster)
Feiyu Wu, Xu Zheng, Yue Qu, Zhuocheng Wang, Zicheng Feng, HUI LI
~Feiyu_Wu1, ~Xu_Zheng1, ~Yue_Qu4, ~Zhuocheng_Wang1, ~Zicheng_Feng1, ~HUI_LI17
20250916
https://openreview.net/forum?id=wb05ver1k8
wb05ver1k8
@inproceedings{ wu2026grounding, title={Grounding Generative Planners in Verifiable Logic: A Hybrid Architecture for Trustworthy Embodied {AI}}, author={Feiyu Wu and Xu Zheng and Yue Qu and Zhuocheng Wang and Zicheng Feng and HUI LI}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=wb05ver1k8} }
OpenReview/ICLR/figures/2026/accept_poster/wb05ver1k8/Figure1.png
1
Figure 1: The architecture of the Verifiable Iterative Refinement Framework (VIRF). Instead of direct execution, an LLM planner’s actions are verified in a symbolic sandbox against a formal knowledge base. The framework’s core is the Logic Tutor feedback loop, which provides three distinct responses: approval for safe plans, clarification questions for UNKNOWN states, and a structured diagnostic report for unsafe plans. This report enables a pedagogical dialogue, teaching the LLM Linguistic Apprentice how to refine its plan and avoid hazards.
<paragraph_1>Our work introduces the Verifiable Iterative Refinement Framework (VIRF), a novel neurosymbolic architecture designed to govern a generative Large Language Model (LLM) planner. At its core, VIRF transforms the interaction between the stochastic LLM and a deterministic symbolic verifier from a simple pass/fail gate into a rich, pedagogical dialogue. To provide the necessary logical rigor for this dialogue, we build our verifier upon the Web Ontology Language (OWL 2) and its underlying Description Logics (DL), which enable a level of formal, inferential reasoning unattainable by other symbolic approaches (see Appendix A for a detailed justification). As illustrated in Figure 1, our methodology is built upon three foundational pillars.</paragraph_1>
diagram
0.91071
OpenReview
ICLR
2,026
Characterizing and Optimizing the Spatial Kernel of Multi Resolution Hash Encodings
Multi-Resolution Hash Encoding (MHE), the foundational technique behind Instant Neural Graphics Primitives, provides a powerful parameterization for neural fields. However, its spatial behavior lacks rigorous understanding from a physical systems perspective, leading to reliance on heuristics for hyperparameter selection. This work introduces a novel analytical approach that characterizes MHE by examining its Point Spread Function (PSF), which is analogous to the Green's function of the system. This methodology enables a quantification of the encoding's spatial resolution and fidelity. We derive a closed-form approximation for the collision-free PSF, uncovering inherent grid-induced anisotropy and a logarithmic spatial profile. We establish that the idealized spatial bandwidth, specifically the Full Width at Half Maximum (FWHM), is determined by the average resolution, $N_{\text{avg}}$. This leads to a counterintuitive finding: the effective resolution of the model is governed by the broadened empirical FWHM (and therefore $N_{\text{avg}}$), rather than the finest resolution $N_{\max}$, a broadening effect we demonstrate arises from optimization dynamics. Furthermore, we analyze the impact of finite hash capacity, demonstrating how collisions introduce speckle noise and degrade the Signal-to-Noise Ratio (SNR). Leveraging these theoretical insights, we propose Rotated MHE (R-MHE), an architecture that applies distinct rotations to the input coordinates at each resolution level. R-MHE mitigates anisotropy while maintaining the efficiency and parameter count of the original MHE. This study establishes a methodology based on physical principles that moves beyond heuristics to characterize and optimize MHE.
multi-resolution hash encoding, implicit neural representations, neural fields, point spread function, spatial kernel analysis, anisotropy, resolution limit, FWHM, hash collisions, signal-to-noise ratio, NeRF
applications to computer vision, audio, language, and other modalities
We analyze Multi-Resolution Hash Encoding (MHE) using its Point Spread Function (PSF) to reveal that effective resolution is governed by average, not finest, resolution, and introduce Rotated MHE to mitigate inherent anisotropy and collision noise.
[ 4, 6, 6, 4 ]
Accept (Poster)
Tianxiang Dai, Jonathan Fan
~Tianxiang_Dai1, ~Jonathan_Fan1
20250920
https://openreview.net/forum?id=q05hC1Pzkr
q05hC1Pzkr
@inproceedings{ dai2026characterizing, title={Characterizing and Optimizing the Spatial Kernel of Multi Resolution Hash Encodings}, author={Tianxiang Dai and Jonathan Fan}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=q05hC1Pzkr} }
OpenReview/ICLR/figures/2026/accept_poster/q05hC1Pzkr/Figure1.png
1
Figure 1: Overview of MHE Characterization and Optimization. (a) The MHE architecture utilizes L grid levels with resolutions growing by a factor b. The encoding e(x) is passed to an MLP gθ. We characterize the system by optimizing for a point constraint and measuring the resulting Point Spread Function (PSF). (b) This analysis reveals inherent grid-induced anisotropy (narrower along axes) and optimization-induced broadening, establishing that the effective resolution (FWHM) scales with 1/Navg. (c) To mitigate anisotropy, we propose Rotated MHE (R-MHE), which applies distinct rotations at each resolution level, leading to a more isotropic PSF.
<paragraph_1>In this work, we introduce a novel methodology to characterize and understand the performance of MHE by analyzing its Point Spread Function (PSF). Analogous to measuring the Green’s function of a physical system, the PSF characterizes the model’s response when optimized to represent an idealized point source (Figure 1b). This approach permits the rigorous quantification of effective</paragraph_1> <paragraph_2>We further investigate the impact of finite hash capacity, demonstrating how collisions introduce speckle-like side lobes and degrade the Signal-to-Noise Ratio (SNR). Informed by our comprehensive PSF analysis, we demonstrate how these insights can be leveraged to improve reconstruction quality. We introduce Rotated MHE (R-MHE) (Figure 1c), an architecture that applies distinct rotations to the input coordinates at each resolution level. By utilizing the existing multi-resolution structure, R-MHE improves isotropy without requiring additional hash tables or parameters, maintaining the efficiency of the original MHE.</paragraph_2>
diagram
0.984853
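The FWHM quantity used throughout the analysis can be estimated from any sampled PSF profile as below. The Gaussian profile here is a synthetic stand-in (the paper measures the PSF of an MHE-parameterized field); the grid and σ are arbitrary choices of ours.

```python
import numpy as np

# Illustrative FWHM measurement on a 1-D point-spread-function profile (our sketch, not the paper's code).
x = np.linspace(-1.0, 1.0, 2001)
sigma = 0.05
psf = np.exp(-x**2 / (2 * sigma**2))           # stand-in PSF; peak value 1 at x = 0

half_max = psf.max() / 2.0
above = x[psf >= half_max]                     # samples where the profile exceeds half maximum
fwhm = above.max() - above.min()

# For a Gaussian, FWHM = 2*sqrt(2*ln 2)*sigma; the sampled estimate should be close.
print(fwhm, 2 * np.sqrt(2 * np.log(2)) * sigma)
```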
OpenReview
ICLR
2,026
CaTs and DAGs: Integrating Directed Acyclic Graphs with Transformers for Causally Constrained Predictions
Artificial Neural Networks (ANNs), including fully-connected networks and transformers, are highly flexible and powerful function approximators, widely applied in fields like computer vision and natural language processing. However, their inability to inherently respect causal structures can limit their robustness, making them vulnerable to covariate shift and difficult to interpret/explain. This poses significant challenges for their reliability in real-world applications. In this paper, we introduce Causal Transformers (CaTs), a general model class designed to operate under predefined causal constraints, as specified by a Directed Acyclic Graph (DAG). CaTs retain the powerful function approximation abilities of traditional neural networks while adhering to the underlying structural constraints, improving robustness, reliability, and interpretability at inference time. This approach opens new avenues for deploying neural networks in more demanding, real-world scenarios where robustness and explainability is critical.
transformers, causal inference, causality, inductive bias, DAGs
causal reasoning
Causal Transformers (CaTs) are neural networks constrained by a causal DAG, combining the power of standard ANNs with improved robustness to covariate shift, greater reliability, and interpretability for real-world applications.
[ 4, 6, 4 ]
Accept (Poster)
Matthew James Vowels, Mathieu Rochat, Sina Akbari
~Matthew_James_Vowels1, ~Mathieu_Rochat1, ~Sina_Akbari1
20250910
https://openreview.net/forum?id=ZIQactmQxb
ZIQactmQxb
@inproceedings{ vowels2026cats, title={CaTs and {DAG}s: Integrating Directed Acyclic Graphs with Transformers for Causally Constrained Predictions}, author={Matthew James Vowels and Mathieu Rochat and Sina Akbari}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=ZIQactmQxb} }
OpenReview/ICLR/figures/2026/accept_poster/ZIQactmQxb/Figure8.png
8
Figure 8: The DAG used in the real-world psychology example - reconstructed from the causal discovery and domain expertise results presented in (Vowels et al., 2023a). Treatment is attachment style ’attachment’ (also highlighted in orange) and the two outcomes of interest are the measures of depression (highlighted in green).
<paragraph_1>We follow closely the process in (Vowels et al., 2023a) for estimating the causal effect of shifting from one category of attachment style to another on depression. We also report the results for a subset of their analyses in Table 3, which use a ‘naive’ estimator (comprising the bivariate linear model between the categorical treatment ‘attachment’ and the two outcomes), a targeted learning estimator specialized for causal inference which incorporates semi-parametric techniques (van der Laan & Starmans, 2014; Vowels et al., 2023b), and our results using CaT. Note that there may be some minor differences in their data preprocessing which we were not able to reproduce. In particular, for each node in the DAG, the original authors reduced the dimensionality of the construct to be uni-dimensional by taking the sum of the scores for each of the individual items. In contrast, we padded all input variables so that they were the same dimensionality as the node with the highest dimensionality. For instance, social distancing ‘social dist’ was found to have 16 items, so loneliness, which has only 3 items, was zero-padded to have 16 dimensions. This enables us to use all available information in the input. The dimensionalities / number of items for each construct are shown in Table reftab:realworlddimensions. We also use the DAG presented in (Vowels et al., 2023a) which was the result of a causal discovery process alongside domain expertise; this DAG is reproduced in Figure 8.</paragraph_1>
diagram
0.913085
OpenReview
ICLR
2,026
A.I.R.: Enabling Adaptive, Iterative, and Reasoning-based Frame Selection For Video Question Answering
Effectively applying Vision-Language Models (VLMs) to Video Question Answering (VideoQA) hinges on selecting a concise yet comprehensive set of frames, as processing entire videos is computationally infeasible. However, current frame selection methods face a critical trade-off: approaches relying on lightweight similarity models, such as CLIP, often fail to capture the nuances of complex queries, resulting in inaccurate similarity scores that cannot reflect the authentic query-frame relevance, which further undermines frame selection. Meanwhile, methods that leverage a VLM for deeper analysis achieve higher accuracy but incur prohibitive computational costs. To address these limitations, we propose A.I.R., a training-free approach for Adaptive, Iterative, and Reasoning-based frame selection. We leverage a powerful VLM to perform deep, semantic analysis on complex queries, and this analysis is deployed within a cost-effective iterative loop that processes only a small batch of the most high-potential frames at a time. Extensive experiments on various VideoQA benchmarks demonstrate that our approach outperforms existing frame selection methods, significantly boosts the performance of the foundation VLM, and achieves substantial gains in computational efficiency over other VLM-based techniques.
Video Frame Selection, Vision Language Model, Training-Free, Video understanding
applications to computer vision, audio, language, and other modalities
[ 6, 4, 6, 4 ]
Accept (Poster)
Yuanhao Zou, Shengji Jin, Andong Deng, Youpeng Zhao, Jun Wang, Chen Chen
~Yuanhao_Zou1, ~Shengji_Jin1, ~Andong_Deng2, ~Youpeng_Zhao2, ~Jun_Wang7, ~Chen_Chen18
20250902
https://openreview.net/forum?id=SZVpOKw0YD
SZVpOKw0YD
@inproceedings{ zou2026air, title={A.I.R.: Enabling Adaptive, Iterative, and Reasoning-based Frame Selection For Video Question Answering}, author={Yuanhao Zou and Shengji Jin and Andong Deng and Youpeng Zhao and Jun Wang and Chen Chen}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=SZVpOKw0YD} }
OpenReview/ICLR/figures/2026/accept_poster/SZVpOKw0YD/Figure2.png
2
Figure 2: General pipeline of A.I.R. with three stages: (1) Adaptive Initial Sampling that identifies potential ‘events’ based on query similarity and dynamically samples frames around them using an adaptive budget; (2) Iterative Frame Selection that progressively refines the frame selection via four steps; and (3) QA Stage that feeds the final selected frames into Answering VLM.
<paragraph_1>As illustrated in Fig. 2, our proposed approach, A.I.R., performs frame selection in three stages: Adaptive Initial Sampling, Iterative Frame Selection, and QA Stage. The process begins by sampling n frames from the video (containing N total frames) at a fixed frame rate. As a pre-processing step, these n frames are passed to a CLIP model (Radford et al., 2021) to compute query-frame similarity scores, which are stored as a sparse vector S ∈ R^{N×1}. This similarity signal S is the input to the Adaptive Initial Sampling stage (Sec. 3.2), which identifies an initial set of K high-potential frame</paragraph_1> <paragraph_2>Step 2: Reasoning-Based VLM Analysis. Following the Potential Interval Ranking, the C selected frames Fcand are analyzed by an Analysis VLM for a focused, reasoning-based evaluation. We leverage the zero-shot, instruction-following capabilities of foundation VLMs to assess the relevance of each frame quantitatively. Guided by a detailed prompt (see Fig. 3 (b) and A.2.5), the VLM is instructed to reason step-by-step, providing both a textual justification and a relevance score (e.g., an integer from 1 to 5) for each candidate frame. Based on the relationship to a predefined threshold θ, these scores are classified as ‘Positive’ (> θ), ‘Neutral’ (= θ), or ‘Negative’ (< θ) and collected into a vector R ∈ N^C. We retain the ‘Positive’ frames to form a validated frame set F∗ as:</paragraph_2>
diagram
0.968053
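The thresholding step at the end of the figure context (classifying VLM relevance scores against θ and keeping the 'Positive' frames as F∗) amounts to a few lines; the frame names and scores below are made up for illustration and are not from the paper.

```python
# Illustrative sketch of the score-classification step (our own example, not the authors' code).
theta = 3
candidate_frames = ["f12", "f47", "f48", "f103", "f210"]
relevance_scores = [5, 3, 4, 1, 2]          # hypothetical 1-5 scores returned by the Analysis VLM

labels = ["Positive" if s > theta else "Neutral" if s == theta else "Negative"
          for s in relevance_scores]
validated = [f for f, s in zip(candidate_frames, relevance_scores) if s > theta]

print(list(zip(candidate_frames, labels)))   # [('f12', 'Positive'), ('f47', 'Neutral'), ...]
print("F* =", validated)                     # ['f12', 'f48']
```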
OpenReview
ICLR
2,026
Amortising Inference and Meta-Learning Priors in Neural Networks
One of the core facets of Bayesianism is in the updating of prior beliefs in light of new evidence$\textemdash$so how can we maintain a Bayesian approach if we have no prior beliefs in the first place? This is one of the central challenges in the field of Bayesian deep learning, where it is not clear how to represent beliefs about a prediction task by prior distributions over model parameters. Bridging the fields of Bayesian deep learning and probabilistic meta-learning, we introduce a way to $\textit{learn}$ a weights prior from a collection of datasets by introducing a way to perform per-dataset amortised variational inference. The model we develop can be viewed as a neural process whose latent variable is the set of weights of a BNN and whose decoder is the neural network parameterised by a sample of the latent variable itself. This unique model allows us to study the behaviour of Bayesian neural networks under well-specified priors, use Bayesian neural networks as flexible generative models, and perform desirable but previously elusive feats in neural processes such as within-task minibatching or meta-learning under extreme data-starvation.
neural processes, Bayesian neural networks, meta-learning, priors, variational inference
probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)
[ 4, 6, 4, 6 ]
Accept (Poster)
Tommy Rochussen, Vincent Fortuin
~Tommy_Rochussen1, ~Vincent_Fortuin1
20250919
https://openreview.net/forum?id=KG6SSTz2GJ
KG6SSTz2GJ
@inproceedings{ rochussen2026amortising, title={Amortising Inference and Meta-Learning Priors in Neural Networks}, author={Tommy Rochussen and Vincent Fortuin}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=KG6SSTz2GJ} }
OpenReview/ICLR/figures/2026/accept_poster/KG6SSTz2GJ/Figure9.png
9
Figure 9: Computational diagrams of the amortised attention layer (a), amortised attention block (b), and BNAM (c). Due to the numerous crossing lines in (a), we colour code the context and target input data paths as orange and light blue respectively. Arbitrarily many amortised attention blocks can be stacked sequentially in the BNAM; our diagram shows the simplest possible BNAM architecture.
<paragraph_1>We see in Fig. 9(a) that amortised inference can be performed in an attention layer by using amortised linear layers in place of standard linear layers, where MHA is the usual multi-head dot-product attention mechanism acting on keys K, queries Q, and values V. Similarly, in Fig. 9(b) we follow the standard approach (Vaswani et al., 2017) for constructing stackable attention blocks from attention layers, residual connections, layer norms, and 2-layer MLPs, but replace both the attention layer and the MLP with their amortised counterparts. In Fig. 9(c) we show how amortised inference can be performed in a transformer by composing amortised linear layers and amortised attention blocks. We note that the resulting model can only be used in a somewhat unusual way for transformers: to map from test inputs Xt to predicted test outputs Yt, where attention is performed between the test inputs, and where the posterior over the transformer’s weights is estimated from a context set.</paragraph_1>
diagram
0.988838
OpenReview
ICLR
2,026
DETR-ViP: Detection Transformer with Robust Discriminative Visual Prompts
Visual prompted object detection enables interactive and flexible definition of target categories, thereby facilitating open-vocabulary detection. Since visual prompts are derived directly from image features, they often outperform text prompts in recognizing rare categories. Nevertheless, research on visual prompted detection has been largely overlooked, and it is typically treated as a byproduct of training text prompted detectors, which hinders its development. To fully unlock the potential of visual-prompted detection, we investigate the reasons why its performance is suboptimal and reveal that the underlying issue lies in the absence of global discriminability in visual prompts. Motivated by these observations, we propose DETR-ViP, a robust object detection framework that yields class-distinguishable visual prompts. On top of basic image-text contrastive learning, DETR-ViP incorporates global prompt integration and visual-textual prompt relation distillation to learn more discriminative prompt representations. In addition, DETR-ViP employs a selective fusion strategy that ensures stable and robust detection. Extensive experiments on COCO, LVIS, ODinW, and Roboflow100 demonstrate that DETR-ViP achieves substantially higher performance in visual prompt detection compared to other state-of-the-art counterparts. A series of ablation studies and analyses further validate the effectiveness of the proposed improvements and shed light on the underlying reasons for the enhanced detection capability of visual prompts.
object detection, prompt-based detection, open-set object detection
applications to computer vision, audio, language, and other modalities
This paper presents the DETR-ViP framework, which enhances visual prompt detection by improving the semantic consistency of visual prompts and introducing a selective fusion strategy.
[ 6, 4, 6 ]
Accept (Poster)
Bo Qian, Dahu Shi, Xing Wei
~Bo_Qian1, ~Dahu_Shi2, ~Xing_Wei5
20250903
https://openreview.net/forum?id=2KKDWERRm3
2KKDWERRm3
@inproceedings{ qian2026detrvip, title={{DETR}-ViP: Detection Transformer with Robust Discriminative Visual Prompts}, author={Bo Qian and Dahu Shi and Xing Wei}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=2KKDWERRm3} }
OpenReview/ICLR/figures/2026/accept_poster/2KKDWERRm3/Figure2.png
2
Figure 2: The overview of DETR-ViP. DETR-ViP builds on Grounding DINO by incorporating a visual prompt encoder for visual-prompted detection. It improves prompt semantics via global prompt Integration and visual-textual prompt relation distillation, and refines the fusion module to stabilize image-prompt interactions, thereby enhancing detection robustness.
<paragraph_1>We develop the baseline VIS-GDINO from Grounding DINO by inserting the visual prompt encoder, as defined in Equation (3), between the backbone and the encoder, and removing the fusion modules in the encoder and decoder as represented in Equation (2). On top of this architecture, we introduce the global prompt integration, visual-textual prompt relation distillation loss, and selective fusion strategy to enhance visual prompt detection, thereby upgrading VIS-GDINO to DETR-ViP, as shown in Figure 2.</paragraph_1>
diagram
0.991753
OpenReview
ICLR
2,026
When Large Multimodal Models Confront Evolving Knowledge: Challenges and Explorations
Large Multimodal Models (LMMs) store vast amounts of pretrained knowledge but struggle to remain aligned with real-world updates, making it difficult to avoid capability degradation when acquiring evolving knowledge. Furthermore, most current work focuses on exploring static textual knowledge injection, neglecting dynamic multimodal evolving knowledge injection, leaving the potential of LMMs for multimodal knowledge injection as an open question. To address this, we first propose a pipeline to construct MMEVOKE, a benchmark for evaluating LMMs' ability in multimodal evolving knowledge injection. MMEVOKE contains 9,422 samples spanning 159 subtypes. Then, based on extensive experiments with MMEVOKE, we reveal challenges such as poor injection performance and capability degradation in existing knowledge injection methods through knowledge injection tests and general capability tests. Finally, to tackle these challenges, we introduce knowledge augmentation and knowledge retention methods, finding that knowledge-aware augmentation strengthens knowledge injection performance, and that Data Replay and MoE methods effectively mitigate capability degradation.
Evolving Knowledge Injection; Large multimodal model; Benchmark and Dataset
datasets and benchmarks
This work introduces MMEVOKE benchmark to reveal challenges in knowledge injection and explores potential solutions.
[ 6, 6, 4, 8 ]
Accept (Poster)
Kailin Jiang, Yuntao Du, Yukai Ding, Yuchen Ren, Ning Jiang, Zhi Gao, Zilong Zheng, Lei Liu, Bin Li, Qing Li
~Kailin_Jiang1, ~Yuntao_Du2, ~Yukai_Ding2, ~Yuchen_Ren1, ~Ning_Jiang7, ~Zhi_Gao5, ~Zilong_Zheng1, ~Lei_Liu28, ~Bin_Li8, ~Qing_Li1
20250901
https://openreview.net/forum?id=iaPEM00wEs
iaPEM00wEs
@inproceedings{ jiang2026when, title={When Large Multimodal Models Confront Evolving Knowledge: Challenges and Explorations}, author={Kailin Jiang and Yuntao Du and Yukai Ding and Yuchen Ren and Ning Jiang and Zhi Gao and Zilong Zheng and Lei Liu and Bin Li and Qing Li}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=iaPEM00wEs} }
OpenReview/ICLR/figures/2026/accept_poster/iaPEM00wEs/Figure25.png
25
Figure 25: Fine-grained dimensional results on MathVision and HallusionBench.
<paragraph_1>According to Figures 22, 23, 24, 25, and 26, we conduct result analysis for each benchmark.</paragraph_1>
diagram
0.915522
OpenReview
ICLR
2,026
Robustness in the Face of Partial Identifiability in Reward Learning
In Reward Learning (ReL), we are given feedback on an unknown target reward, and the goal is to use this information to recover it in order to carry out some downstream application, e.g., planning. When the feedback is not informative enough, the target reward is only partially identifiable, i.e., there exists a set of rewards, called the feasible set, that are equally plausible candidates for the target reward. In these cases, the ReL algorithm might recover a reward function different from the target reward, possibly leading to a failure in the application. In this paper, we introduce a general ReL framework that permits us to quantify the drop in "performance" suffered in the considered application because of identifiability issues. Building on this, we propose a robust approach to address the identifiability problem in a principled way, by maximizing the "performance" with respect to the worst-case reward in the feasible set. We then develop Rob-ReL, a ReL algorithm that applies this robust approach to the subset of ReL problems aimed at assessing a preference between two policies, and we provide theoretical guarantees on sample and iteration complexity for Rob-ReL. We conclude with some numerical simulations to illustrate the setting and empirically characterize Rob-ReL.
Inverse Reinforcement Learning, Reward Learning, Preference Based Reinforcement Learning, Theory
reinforcement learning
We propose to tackle the identifiability problem in reward learning with a robust approach.
[ 4, 2, 8, 8, 8 ]
Accept (Poster)
Filippo Lazzati, Alberto Maria Metelli
~Filippo_Lazzati2, ~Alberto_Maria_Metelli2
20250918
https://openreview.net/forum?id=e4xANXjA9W
e4xANXjA9W
@inproceedings{ lazzati2026robustness, title={Robustness in the Face of Partial Identifiability in Reward Learning}, author={Filippo Lazzati and Alberto Maria Metelli}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=e4xANXjA9W} }
OpenReview/ICLR/figures/2026/accept_poster/e4xANXjA9W/Figure2.png
2
Figure 2: Illustration of the quantities of interest. r is any reward.
<paragraph_1>See Figure 2 for a simple graphical intuition of all these quantities.</paragraph_1>
diagram
0.966508
OpenReview
ICLR
2,026
Unveiling the Mechanism of Continuous Representation Full-Waveform Inversion: A Wave Based Neural Tangent Kernel Framework
Full-waveform inversion (FWI) estimates physical parameters in the wave equation from limited measurements and has been widely applied in geophysical exploration, medical imaging, and non-destructive testing. Conventional FWI methods are limited by their notorious sensitivity to the accuracy of the initial models. Recent progress in continuous representation FWI (CR-FWI) demonstrates that representing parameter models with a coordinate-based neural network, such as implicit neural representation (INR), can mitigate the dependence on initial models. However, its underlying mechanism remains unclear, and INR-based FWI shows slower high-frequency convergence. In this work, we investigate the general CR-FWI framework and develop a unified theoretical understanding by extending the neural tangent kernel (NTK) for FWI to establish a wave-based NTK framework. Unlike standard NTK, our analysis reveals that wave-based NTK is not constant, both at initialization and during training, due to the inherent nonlinearity of FWI. We further show that the eigenvalue decay behavior of the wave-based NTK can explain why CR-FWI alleviates the dependency on initial models and shows slower high-frequency convergence. Building on these insights, we propose several CR-FWI methods with tailored eigenvalue decay properties for FWI, including a novel hybrid representation combining INR and multi-resolution grid (termed IG-FWI) that achieves a more balanced trade-off between robustness and high-frequency convergence rate. Applications in geophysical exploration on Marmousi, 2D SEG/EAGE Salt and Overthrust, 2004 BP model, and the more realistic 2014 Chevron models show the superior performance of our proposed methods compared to conventional FWI and existing INR-based FWI methods.
Full-waveform inversion; Continuous representation; Implicit neural representation; Neural tangent kernel
applications to physical sciences (physics, chemistry, biology, etc.)
This paper develops a theoretical framework to explain and optimize continuous representation FWI methods, and based on this, proposes some novel hybrid representations that strike a better balance between robustness and high-frequency convergence.
[ 6, 8, 8, 4 ]
Accept (Poster)
Ruihua Chen, Yisi Luo, Bangyu Wu, Deyu Meng
~Ruihua_Chen1, ~Yisi_Luo1, ~Bangyu_Wu1, ~Deyu_Meng1
20250915
https://openreview.net/forum?id=blqYa21WOv
blqYa21WOv
@inproceedings{ chen2026unveiling, title={Unveiling the Mechanism of Continuous Representation Full-Waveform Inversion: A Wave Based Neural Tangent Kernel Framework}, author={Ruihua Chen and Yisi Luo and Bangyu Wu and Deyu Meng}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=blqYa21WOv} }
OpenReview/ICLR/figures/2026/accept_poster/blqYa21WOv/Figure3.png
3
Figure 3: Pipeline of CR-FWI. CR-FWI employs (a) implicit neural representation, (b) low rank tensor function, or (c) multi-grid parametric encoding to represent the velocity parameter model and integrate the wave equation in a loop.
<paragraph_1>• Theory: We develop a unified wave-based NTK framework for conventional FWI and CR-FWI. The eigenvalue analysis explains why CR-FWI reduces reliance on initial models and exhibits slower high-frequency convergence, with numerical tests confirming these insights (see Fig. 5). • Method: Inspired by the eigenvalue decay analysis, we propose a novel integrated representation combining INR and multi-grid for FWI. This method achieves a tailored eigenvalue decay that is more suitable for FWI, leading to a better robustness-convergence trade-off (see Figs. 3 and 4). • Application: We extensively evaluate our methods under challenging scenarios, including inaccurate initial models, sparse sampling, seismic data with missing low frequencies and noise interference, various benchmark datasets (i.e., Marmousi, SEG/EAGE Salt and Overthrust, and 2004 BP models), and more realistic 2014 Chevron blind data compared to conventional and existing CR-FWI methods. Inversion results show consistently superior performance (see Fig. 6).</paragraph_1> <paragraph_2>We propose novel low-rank tensor function (LR-FWI) and multi-grid parametric encoding (MPE-FWI)-based FWI methods using continuous representations, shown in Fig. 3. LR-FWI and MPE-FWI improve the high-frequency convergence rate by reducing the attenuation rate of eigenvalues (Theorem 5.1). To further achieve a trade-off between robustness and convergence, we propose an integrated INR and multigrid representation (IG-FWI), whose eigenvalue decay rate is between INR-based and MPE-based FWI methods (Theorem 5.2).</paragraph_2> <paragraph_3>INR-based FWI. Traditional INR methods (Sitzmann et al., 2020; Sun et al., 2023a; Yang & Ma, 2025) use a coordinate-based neural network (e.g., MLP) with special activation functions to parameterize the velocity model (Fig. 3 (a)). The INR can be expressed as</paragraph_3> <paragraph_4>LR-FWI. The reparameterized subsurface geophysical parameters typically exhibit inherent structural constraints, such as low-rank and non-local similarity (Li et al., 2024a). To embed these properties, LR methods decompose the velocity model using tensor factorization methods (e.g., Tucker and CP decomposition) and represent the low-dimensional tensor separately using INRs (Luo et al., 2023), as shown in Fig. 3 (b). The general formulation of 2D LR-FWI can be expressed as:</paragraph_4> <paragraph_5>MPE-based FWI. Multi-grid parametric encoding employs trainable auxiliary data structures (e.g., grid-based representations) to construct higher-dimensional embedding spaces. As illustrated in Fig. 3 (c), MPE-based FWI leverages a multigrid hash encoding (Müller et al., 2022) to represent the velocity model. The hash function h(·) : U → R^{n_g×n_f} maps a coordinate point x ∈ U to a feature vector via h(x) ∈ R^{n_g×n_f}, where n_g denotes the number of multigrid levels and n_f is the number of features per grid. Then, these interpolated features are passed through a lightweight INR to produce the physical velocity value. The overall representation can be expressed as:</paragraph_5>
diagram
0.964798
OpenReview
ICLR
2,026
BindWeave: Subject-Consistent Video Generation via Cross-Modal Integration
Diffusion Transformer has shown remarkable abilities in generating high-fidelity videos, delivering visually coherent frames and rich details over extended durations. However, existing video generation models still fall short in subject-consistent video generation due to an inherent difficulty in parsing prompts that specify complex spatial relationships, temporal logic, and interactions among multiple subjects. To address this issue, we propose BindWeave, a unified framework that handles a broad range of subject-to-video scenarios from single-subject cases to complex multi-subject scenes with heterogeneous entities. To bind complex prompt semantics to concrete visual subjects, we introduce an MLLM-DiT framework in which a pretrained multimodal large language model performs deep cross-modal reasoning to ground entities and disentangle roles, attributes, and interactions, yielding subject-aware hidden states that condition the diffusion transformer for high-fidelity subject-consistent video generation. Experiments on the OpenS2V benchmark demonstrate that our method achieves superior performance across subject consistency, naturalness, and text relevance in generated videos, outperforming existing open-source and commercial models.
Video generation, Diffusion models
generative models
[ 4, 6, 4, 6 ]
Accept (Poster)
Zhaoyang Li, Dongjun Qian, Kai Su, qishuai diao, Xiangyang Xia, Chang Liu, Wenfei Yang, Tianzhu Zhang, Zehuan Yuan
~Zhaoyang_Li7, ~Dongjun_Qian1, ~Kai_Su1, ~qishuai_diao1, ~Xiangyang_Xia1, ~Chang_Liu71, ~Wenfei_Yang2, ~Tianzhu_Zhang1, ~Zehuan_Yuan1
20250919
https://openreview.net/forum?id=FP2XNyV9WL
FP2XNyV9WL
@inproceedings{ li2026bindweave, title={BindWeave: Subject-Consistent Video Generation via Cross-Modal Integration}, author={Zhaoyang Li and Dongjun Qian and Kai Su and qishuai diao and Xiangyang Xia and Chang Liu and Wenfei Yang and Tianzhu Zhang and Zehuan Yuan}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=FP2XNyV9WL} }
OpenReview/ICLR/figures/2026/accept_poster/FP2XNyV9WL/Figure2.png
2
Figure 2: Framework of our method. A multimodal large language model performs cross-modal reasoning to ground entities and disentangle roles, attributes, and interactions from the prompt and optional reference images. The resulting subject-aware signals condition a Diffusion Transformer through cross-attention and lightweight adapters, guiding identity-faithful, relation-consistent, and temporally coherent video generation.
<paragraph_1>Our proposed BindWeave is designed to overcome the limitations of shallow fusion paradigms in subject-consistent video generation. The core principle of our approach is to replace shallow, post-hoc fusion with a deep, reasoned understanding of multimodal inputs before the generation process begins. To this end, BindWeave first leverages a Multimodal Large Language Model (MLLM) to act as an intelligent instruction parser. The MLLM thus generates a guiding schema, realized as a sequence of hidden states that encodes complex cross-modal semantics and spatio-temporal logic, which then meticulously guides a Diffusion Transformer (DiT) throughout the synthesis process. Figure 2 provides a schematic overview of the BindWeave architecture.</paragraph_1>
diagram
0.939486
OpenReview
ICLR
2,026
FideDiff: Efficient Diffusion Model for High-Fidelity Image Motion Deblurring
Recent advancements in image motion deblurring, driven by CNNs and transformers, have made significant progress. Large-scale pre-trained diffusion models, which are rich in real-world modeling, have shown great promise for high-quality image restoration tasks such as deblurring, demonstrating stronger generative capabilities than CNN and transformer-based methods. However, challenges such as unbearable inference time and compromised fidelity still limit the full potential of the diffusion models. To address this, we introduce FideDiff, a novel single-step diffusion model designed for high-fidelity deblurring. We reformulate motion deblurring as a diffusion-like process where each timestep represents a progressively blurred image, and we train a consistency model that aligns all timesteps to the same clean image. By reconstructing training data with matched blur trajectories, the model learns temporal consistency, enabling accurate one-step deblurring. We further enhance model performance by integrating Kernel ControlNet for blur kernel estimation and introducing adaptive timestep prediction. Our model achieves superior performance on full-reference metrics, surpassing previous diffusion-based methods and matching the performance of other state-of-the-art models. FideDiff offers a new direction for applying pre-trained diffusion models to high-fidelity image restoration tasks, establishing a robust baseline for further advancing diffusion models in real-world industrial applications. Our dataset and code will be publicly available.
Image Motion-Deblurring, Diffusion Model
applications to computer vision, audio, language, and other modalities
[ 6, 8, 6, 4 ]
Accept (Poster)
Xiaoyang Liu, Zhengyan Zhou, Zihang Xu, Jiezhang Cao, Zheng Chen, Yulun Zhang
~Xiaoyang_Liu4, ~Zhengyan_Zhou2, ~Zihang_Xu11, ~Jiezhang_Cao2, ~Zheng_Chen11, ~Yulun_Zhang1
20250906
https://openreview.net/forum?id=AFJMB9SkHT
AFJMB9SkHT
@inproceedings{ liu2026fidediff, title={FideDiff: Efficient Diffusion Model for High-Fidelity Image Motion Deblurring}, author={Xiaoyang Liu and Zhengyan Zhou and Zihang Xu and Jiezhang Cao and Zheng Chen and Yulun Zhang}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=AFJMB9SkHT} }
OpenReview/ICLR/figures/2026/accept_poster/AFJMB9SkHT/Figure3.png
3
Figure 3: Forward and backward processes.
<paragraph_1>In Fig. 3, we reformulate the forward and backward processes for the image motion blurring and deblurring. We define the clean image as z0 and the initial blur kernel as identity convolution k0, where z0 = z0 ∗ k0. From a pure clean image to the blurry image, we regard the forward blur kernel generation process as a chain, following:</paragraph_1> <paragraph_2>Commonly used kernel generation methods involve generating random blur trajectories, which are convolved with sharp images to produce blurry counterparts. In these methods, the blur kernel typically exhibits non-linear and non-uniform trajectories, with kt depending on previous states kt−1:0 based on different simulation techniques. For simplicity, Figure 3 illustrates a globally uniform blur, while real-world scenarios apply the blur kernel on a pixel-wise basis.</paragraph_2> <paragraph_3>4.2 KERNEL CONTROLNET. End-to-end learning often overlooks crucial motion information, and incorporating kernel priors into DMs remains underexplored. For instance, Liu et al. (2024b) employs two vision encoders to extract semantic information as a plugin, whereas the vanilla ControlNet (Zhang et al., 2023) accepts conditions such as human pose and depth for generation, but has not explored kernel information. Recently, Lin et al. (2024) adopts a two-stage reconstruction approach and proposes IRControlNet as a post-processing modifier, enhancing the quality of the repaired image. The foundation model alone is far from sufficient for high-fidelity deblurring tasks. In Fig. 3, with estimated kt, the network ϵθ(zt, t, c, kt) is expected to be more powerful in predicting z0, given the fact that zt = kt ∗ z0.</paragraph_3>
diagram
0.985859
OpenReview
ICLR
2,026
SPELL: Self-Play Reinforcement Learning for evolving Long-Context Language Models
Progress in long-context reasoning for large language models (LLMs) has lagged behind other recent advances. This gap arises not only from the intrinsic difficulty of processing long texts, but also from the scarcity of reliable human annotations and programmatically verifiable reward signals. In this paper, we propose SPELL, a multi-role self-play reinforcement learning framework that enables scalable, label-free optimization for long-context reasoning. SPELL integrates three cyclical roles—questioner, responder, and verifier—within a single model to enable continual self-improvement. The questioner generates questions from raw documents paired with reference answers; the responder learns to solve these questions based on the documents; and the verifier evaluates semantic equivalence between the responder’s output and the questioner's reference answer, producing reward signals to guide continual training. To stabilize training, we introduce an automated curriculum that gradually increases document length and a reward function that adapts question difficulty to the model’s evolving capabilities. Extensive experiments on six long-context benchmarks show that SPELL consistently improves performance across diverse LLMs and outperforms equally sized models fine-tuned on large-scale annotated data. Notably, SPELL achieves an average 7.6-point gain in pass@8 on the strong reasoning model Qwen3-30B-A3B-Thinking, raising its performance ceiling and showing promise for scaling to even more capable models. Our code is available at https://github.com/Tongyi-Zhiwen/Qwen-Doc.
Self-Play, Reinforcement Learning, Long-Context Reasoning, Large Language Models
reinforcement learning
A label-free RL framework that drives the autonomous evolution of LLMs in long-context reasoning
[ 6, 6, 4, 8 ]
Accept (Poster)
Ziyi Yang, Weizhou Shen, Chenliang Li, Ruijun Chen, Fanqi Wan, Ming Yan, Xiaojun Quan, Fei Huang
~Ziyi_Yang6, ~Weizhou_Shen1, ~Chenliang_Li2, ~Ruijun_Chen4, ~Fanqi_Wan1, ~Ming_Yan2, ~Xiaojun_Quan1, ~Fei_Huang2
20250916
https://openreview.net/forum?id=83F6YF4Hz6
83F6YF4Hz6
@inproceedings{ yang2026spell, title={{SPELL}: Self-Play Reinforcement Learning for evolving Long-Context Language Models}, author={Ziyi Yang and Weizhou Shen and Chenliang Li and Ruijun Chen and Fanqi Wan and Ming Yan and Xiaojun Quan and Fei Huang}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=83F6YF4Hz6} }
OpenReview/ICLR/figures/2026/accept_poster/83F6YF4Hz6/Figure1.png
1
Figure 1: (Left) An overview of the SPELL framework, where a single LLM self-evolves by dynamically adopting the roles of questioner, responder, and verifier. (Right) SPELL consistently boosts performance across various models (top) and exhibits superior test-time scaling over traditional RLVR (bottom).
<paragraph_1>As illustrated in Figure 2 and Algorithm 1, SPELL proceeds iteratively: given a cluster of n documents C = {c_i}_{i=1}^{n} and a task type τ, the policy π_θ first generates new questions, then attempts to solve them, and finally verifies the solutions before performing a unified policy update.</paragraph_1>
diagram
0.988391
OpenReview
ICLR
2,026
Accelerating Benchmarking of Functional Connectivity Modeling via Structure-aware Core-set Selection
Benchmarking the hundreds of functional connectivity (FC) modeling methods on large-scale fMRI datasets is critical for reproducible neuroscience. However, the combinatorial explosion of model–data pairings makes exhaustive evaluation computationally prohibitive, preventing such assessments from becoming a routine pre-analysis step. To break this bottleneck, we reframe the challenge of FC benchmarking by selecting a small, representative *core-set* whose sole purpose is to preserve the relative performance ranking of FC operators. We formalize this as a ranking-preserving subset selection problem and propose **S**tructure-aware **C**ontrastive **L**earning for **C**ore-set **S**election (**SCLCS**), a self-supervised framework to select these core-sets. **SCLCS** first uses an adaptive Transformer to learn each sample's unique FC structure. It then introduces a novel **S**tructural **P**erturbation **S**core (**SPS**) to quantify the stability of these learned structures during training, identifying samples that represent foundational connectivity archetypes. Finally, while **SCLCS** identifies stable samples via a top-$k$ ranking, we further introduce a **density-balanced sampling strategy** as a necessary correction to promote diversity, ensuring the final core-set is both structurally robust and distributionally representative. On the large-scale REST-meta-MDD dataset, **SCLCS** preserves the ground-truth model ranking with just 10% of the data, outperforming state-of-the-art (SOTA) core-set selection methods by up to 23.2% in ranking consistency (nDCG@k). To our knowledge, this is the first work to formalize core-set selection for FC operator benchmarking, thereby making large-scale operators comparisons a feasible and integral part of computational neuroscience. Code is publicly available on: [https://github.com/lzhan94swu/SCLCS](https://github.com/lzhan94swu/SCLCS)
Functional Connectivity Benchmark, Core-set Selection, Network Modeling, Structure-aware Sampling
applications to neuroscience & cognitive science
We frame functional connectivity benchmarking task as a ranking recommendation problem and propose a self-supervised core-set selection framework that achieves up to 23.2% higher ranking stability than baselines at a 10% sampling rate.
[ 6, 6, 6, 6 ]
Accept (Poster)
Ling Zhan, Zhen Li, Junjie Huang, Tao Jia
~Ling_Zhan2, ~Zhen_Li38, ~Junjie_Huang4, ~Tao_Jia3
20250907
https://openreview.net/forum?id=0RYazbfSzW
0RYazbfSzW
@inproceedings{ zhan2026accelerating, title={Accelerating Benchmarking of Functional Connectivity Modeling via Structure-aware Core-set Selection}, author={Ling Zhan and Zhen Li and Junjie Huang and Tao Jia}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=0RYazbfSzW} }
OpenReview/ICLR/figures/2026/accept_poster/0RYazbfSzW/Figure1.png
1
Figure 1: Overview of the SCLCS framework for ranking-preserving core-set selection. Contrasting with selection for single-model classification (top left), our task is to preserve the performance ranking of SPIs (top right). Our method (bottom) achieves this using a Transformer to learn structures, our novel SPS metric to ensure stability, and a density-aware strategy to promote diversity.
<paragraph_1>While core-set selection is well studied, most existing methods target a different goal: constructing a small training proxy for a single predictive model (Feldman, 2020; Lee et al., 2024; Hong et al., 2024b). In our setting (Figure 1), the core-set must preserve the relative performance ranking across hundreds of candidate SPIs (Liu et al., 2025; Cliff et al., 2023). This ranking-preservation objective raises three challenges: (1) Formulating a selection criterion that targets cross-SPI ranking stability rather than single-model training loss. (2) Defining a principled, structure-aware notion of sample importance based on FC patterns (the targets of SPIs). (3) Reducing the brittleness of score-based top-k selection, which can fail to generalize across sampling ratios and distort rankings.</paragraph_1> <paragraph_2>In this work, we cast core-set selection for FC benchmarking as a ranking-preserving subset selection problem. Rather than training a predictive model, we seek a subset that preserves the SPI ordering from the full dataset (Figure 1). We evaluate on the REST-meta-MDD dataset (Yan et al., 2019; Long et al., 2020), a large multi-site resting-state fMRI dataset for MDD, which captures heterogeneity across acquisition sites and a large cohort. We instantiate benchmarking with two tasks, brain fingerprinting (Van De Ville et al., 2021) and MDD diagnosis (Gallo et al., 2023), both widely used in FC research (Lu et al., 2024; Otte et al., 2016). For each task, we score each SPI by how well the resulting FC matrices separate within-class from between-class pairs using Spearman’s rank correlation (Sedgwick, 2014), yielding an SPI ranking. Core-set quality is measured by nDCG@k (Wang et al., 2013) between the SPI rankings induced by the core-set and the full dataset.</paragraph_2> <paragraph_3>We use SPIs as a validation case because benchmarking FC operators has been formalized as a well-defined task in recent work (Liu et al., 2025; Cliff et al., 2023; Honari et al., 2021). Based on this formulation, we propose Structure-aware Contrastive Learning for Core-set Selection (SCLCS). As shown in Figure 1, SCLCS is built around a Transformer-based encoder that encodes sample-specific synchronization structure via an adaptively weighted fusion of attention heads. Under the assumptions of Theorem 2, we show this encoder has universal approximation capacity for continuous SPI mappings. We then define a Structure Perturbation Score (SPS) to quantify the stability of these structures, and prioritize low-SPS samples to form a robust core-set. Because naïve top-k selection can be brittle for certain task structures, SCLCS augments it with a density-aware sampling strategy to improve diversity. SCLCS learns in an identity-supervised contrastive manner, using subject identities to encourage stable “brain fingerprints” (Van De Ville et al., 2021) that SPI-based analyses aim to capture (Liu et al., 2025; Luppi et al., 2024). This yields task-agnostic representations suitable for benchmarking. Finally, SCLCS is a pre-analysis acceleration tool that makes large-scale benchmarking computationally feasible, rather than a method for the final neuroscientific discovery task.</paragraph_3>
diagram
0.941608
OpenReview
ICLR
2,026
Tackling Time-Series Forecasting Generalization via Mitigating Concept Drift
Time-series forecasting finds broad applications in real-world scenarios. Due to the dynamic nature of time series data, it is important for time-series forecasting models to handle potential distribution shifts over time. In this paper, we initially identify two types of distribution shifts in time series: concept drift and temporal shift. We acknowledge that while existing studies primarily focus on addressing temporal shift issues in time series forecasting, designing proper concept drift methods for time series forecasting has received comparatively less attention. Motivated by the need to address potential concept drift, while conventional concept drift methods via invariant learning face certain challenges in time-series forecasting, we propose a soft attention mechanism that finds invariant patterns from both lookback and horizon time series. Additionally, we emphasize the critical importance of mitigating temporal shifts as a preliminary to addressing concept drift. In this context, we introduce ShifTS, a method-agnostic framework designed to tackle temporal shift first and then concept drift within a unified approach. Extensive experiments demonstrate the efficacy of ShifTS in consistently enhancing the forecasting accuracy of agnostic models across multiple datasets, and outperforming existing concept drift, temporal shift, and combined baselines.
Time-Series Forecasting, Distribution Shift, Concept Drift
learning on time series and dynamical systems
[ 6, 6, 6 ]
Accept (Poster)
Zhiyuan Zhao, Haoxin Liu, B. Aditya Prakash
~Zhiyuan_Zhao1, ~Haoxin_Liu3, ~B._Aditya_Prakash2
20250914
https://openreview.net/forum?id=emkvZ7NanK
emkvZ7NanK
@inproceedings{ zhao2026tackling, title={Tackling Time-Series Forecasting Generalization via Mitigating Concept Drift}, author={Zhiyuan Zhao and Haoxin Liu and B. Aditya Prakash}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=emkvZ7NanK} }
OpenReview/ICLR/figures/2026/accept_poster/emkvZ7NanK/Figure1.png
1
Figure 1: Comparison between conventional time-series forecasting and our approach. Our approach identifies invariant patterns in lookback and horizon window as XSUR and then models a stable conditional distribution accordingly to mitigate concept drift.
<paragraph_1>This instability arises because, for a given exogenous feature X, its lookback window XL alone may lack sufficient information to predict YH, while learning a stable conditional distribution requires that the inputs provide sufficient information to predict the output (Sagawa et al., 2019; Arjovsky et al., 2019). There are possible patterns in the horizon window XH, joint with XL, that influence the target. Thus, modeling P(YH|XL, XH) leads to a more stable conditional distribution compared to P(YH|XL), as [XL, XH] captures additional causal relationships across future time steps. We assume that incorporating causal relationships from the horizon window enables more complete causality modeling between that exogenous feature and target, given that the future cannot influence the past (e.g., XH_{t+1} ↛ YH_t). However, these causal effects from the horizon window, while important for learning stable conditional distributions, are often overlooked by conventional time-series forecasting methods, as illustrated in Figure 1(a).</paragraph_1> <paragraph_2>To address the above challenges, instead of directly modeling P(YH|XL, XH), we propose a two-step approach: first, identifying patterns in [XL, XH] that lead to stable conditional distributions (namely invariant patterns), and then modeling these conditional distributions accordingly. To determine stability, a natural intuition is to assess whether a pattern’s correlation with the target remains consistent across all time steps. For instance, if a subsequence of [XL, XH] consistently exhibits stable correlations with the target over all or most time steps (e.g., an increase of the subsequence always results in an increase of the target), then its conditional distribution should be explicitly modeled due to the stability. Conversely, if a subsequence demonstrates correlations with the target only sporadically or locally, these correlations are likely spurious, which are unstable conditional distributions to other time steps. We leverage this intuition to identify all invariant patterns and aggregate them into a surrogate feature XSUR, accounting for the fact that the target can be determined by multiple patterns. For instance, an influenza-like illness (ILI) outbreak in winter can be triggered by either extreme cold weather in winter or extreme heat waves in summer (Nielsen et al., 2011; Jaakkola et al., 2014). By incorporating this information, we model the corresponding conditional distribution P(YH|XSUR), as illustrated in Figure 1(b).</paragraph_2> <paragraph_3>To address concept drift in time-series forecasting, while acknowledging that mitigating temporal shifts is a prerequisite for resolving concept drift, we propose ShifTS, a comprehensive framework designed to tackle both challenges in time-series forecasting. ShifTS is model-agnostic, as the stable conditional distributions distinguished by SAM can be learned by any time-series forecasting model. The workflow of ShifTS is illustrated in Figure 2 and consists of the following steps: (1) Normalize the input time series; (2) Forecast surrogate exogenous features X̂SUR that invariantly</paragraph_3> <paragraph_4>support the target series, as determined by SAM; (3) An aggregation MLP that uses X̂SUR to forecast the target, denoted as Agg(·) in Figure 2 and Algorithm 1; (4) Denormalize the output time series. Conceptually, steps 1 and 4 mitigate the temporal shift, step 2 addresses concept drift, and step 3 performs weighted aggregation of exogenous features to support the target series. The optimization objective of ShifTS is as follows:</paragraph_4>
diagram
0.94529
OpenReview
ICLR
2,026
SRT: Super-Resolution for Time Series via Disentangled Rectified Flow
Fine-grained time series data with high temporal resolution is critical for accurate analytics across a wide range of applications. However, the acquisition of such data is often limited by cost and feasibility. This problem can be tackled by reconstructing high-resolution signals from low-resolution inputs based on specific priors, known as super-resolution. While extensively studied in computer vision, directly transferring image super-resolution techniques to time series is not trivial. To address this challenge at a fundamental level, we propose **S**uper-**R**esolution for **T**ime series (SRT), a novel framework that reconstructs temporal patterns lost in low-resolution inputs via disentangled rectified flow. SRT decomposes the input into trend and seasonal components, aligns them to the target resolution using an implicit neural representation, and leverages a novel cross-resolution attention mechanism to guide the generation of high-resolution details. We further introduce SRT-large, a scaled-up version with extensive pretraining, which enables strong zero-shot super-resolution capability. Extensive experiments on nine public datasets demonstrate that SRT and SRT-large consistently outperform existing methods across multiple scale factors, showing both robust performance and the effectiveness of each component in our architecture.
Time Series Super-Resolution, Rectified Flow, Temporal Disentanglement, Implicit Neural Representations
learning on time series and dynamical systems
We propose SRT, a novel disentangled rectified flow framework for time series super-resolution that generates high-resolution details from low-resolution data, achieving state-of-the-art performance across nine benchmarks.
[ 4, 6, 4, 8 ]
Accept (Poster)
Jufang Duan, Shenglong Xiao, Yuren Zhang
~Jufang_Duan2, ~Shenglong_Xiao1, ~Yuren_Zhang4
20250920
https://openreview.net/forum?id=I94Eg6cu7P
I94Eg6cu7P
@inproceedings{ duan2026srt, title={{SRT}: Super-Resolution for Time Series via Disentangled Rectified Flow}, author={Jufang Duan and Shenglong Xiao and Yuren Zhang}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=I94Eg6cu7P} }
OpenReview/ICLR/figures/2026/accept_poster/I94Eg6cu7P/Figure2.png
2
Figure 2: Architecture of our proposed SRT. The upper left shows the training process, where the true residual sequence is decomposed, and the velocity predictors (Vs and Vτ ) are trained to fit the difference between the true values of s and τ and their respective initial states. The lower left depicts the inference process. The predictions ŝ and τ̂ are obtained using predicted velocity via the Euler method. Summing these predictions yields the estimated residual sequence, which is then added to the linear interpolated low-resolution input to produce the final TSSR result. The right side presents the structure of the proposed velocity predictor, which adopts a decoder-only architecture and incorporates a specially designed cross-resolution attention mechanism for velocity prediction, conditioning on both the low-resolution input and features aligned by the ITF.
<paragraph_1>We summarize the aforementioned workflow in Figure 2.</paragraph_1>
diagram
0.87616
OpenReview
ICLR
2,026
Distilling and Adapting: A Topology-Aware Framework for Zero-Shot Interaction Prediction in Multiplex Biological Networks
Multiplex Biological Networks (MBNs), which represent multiple interaction types between entities, are crucial for understanding complex biological systems. Yet, existing methods often inadequately model multiplexity, struggle to integrate structural and sequence information, and face difficulties in zero-shot prediction for unseen entities with no prior neighbourhood information. To address these limitations, we propose a novel framework for zero-shot interaction prediction in MBNs by leveraging context-aware representation learning and knowledge distillation. Our approach leverages domain-specific foundation models to generate enriched embeddings, introduces a topology-aware graph tokenizer to capture multiplexity and higher-order connectivity, and employs contrastive learning to align embeddings across modalities. A teacher–student distillation strategy further enables robust zero-shot generalization. Experimental results demonstrate that our framework outperforms state-of-the-art methods in interaction prediction for MBNs, providing a powerful tool for exploring various biological interactions and advancing personalized therapeutics.
Graph representation learning, contrastive learning, multiplex networks, knowledge distillation, zero-shot prediction
applications to physical sciences (physics, chemistry, biology, etc.)
[ 6, 4, 6 ]
Accept (Poster)
Alana Deng, Sugitha Janarthanan, Yan Sun, Zihao Jing, Pingzhao Hu
~Alana_Deng1, ~Sugitha_Janarthanan1, ~Yan_Sun11, ~Zihao_Jing1, ~Pingzhao_Hu2
20250918
https://openreview.net/forum?id=GvK1y3xqmh
GvK1y3xqmh
@inproceedings{ deng2026distilling, title={Distilling and Adapting: A Topology-Aware Framework for Zero-Shot Interaction Prediction in Multiplex Biological Networks}, author={Alana Deng and Sugitha Janarthanan and Yan Sun and Zihao Jing and Pingzhao Hu}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=GvK1y3xqmh} }
OpenReview/ICLR/figures/2026/accept_poster/GvK1y3xqmh/Figure3.png
3
Figure 3: Illustration of the CAE module.
<paragraph_1>The CAE module refines multiplex embeddings through node- and layer-level inter-layer attention combined with contrastive learning (Figure 3 and Supplementary Section C). Each layer is encoded with a Graph Transformer, and inter-layer attention enables nodes to adaptively attend to counterparts across interaction types. A contrastive learning framework aligns embeddings by training a discriminator to distinguish real from perturbed edges, while a consensus regularizer learns a shared embedding that maximizes agreement across true layers and minimizes alignment with negative views. This design enhances generalization in complex biological systems by improving representation consistency and contextual relevance across the multiplex structure. A CAE forward pass is outlined in Algorithm 1.</paragraph_1>
diagram
0.974683
OpenReview
ICLR
2,026
CroCoDiLight: Repurposing Cross-View Completion Encoders for Relighting
Cross-view completion (CroCo) has proven effective as pre-training for geometric downstream tasks such as stereo depth, optical flow, and point cloud prediction. In this paper we show that it also learns photometric understanding due to training pairs with differing illumination. We propose a method to disentangle CroCo latent representations into a single latent vector representing illumination and patch-wise latent vectors representing intrinsic properties of the scene. To do so, we use self-supervised cross-lighting and intrinsic consistency losses on a dataset two orders of magnitude smaller than that used to train CroCo. This comprises pixel-wise aligned, paired images under different illumination. We further show that the lighting latent can be used and manipulated for tasks such as interpolation between lighting conditions, shadow removal, and albedo estimation. This clearly demonstrates the feasibility of using cross-view completion as pre-training for photometric downstream tasks where training data is more limited.
cross-view completion, relighting, intrinsic image estimation, albedo estimation, shadow removal
unsupervised, self-supervised, semi-supervised, and supervised representation learning
Disentangle CroCo latents into lighting and scene intrinsics, edit lighting for shadow removal, albedo estimation, relighting and lighting interpolation.
[ 4, 8, 4, 4 ]
Accept (Poster)
Alistair J Foggin, William A P Smith
~Alistair_J_Foggin1, ~William_A_P_Smith1
20250903
https://openreview.net/forum?id=GKvb3HCyNk
GKvb3HCyNk
@inproceedings{ foggin2026crocodilight, title={CroCoDiLight: Repurposing Cross-View Completion Encoders for Relighting}, author={Alistair J Foggin and William A P Smith}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=GKvb3HCyNk} }
OpenReview/ICLR/figures/2026/accept_poster/GKvb3HCyNk/Figure2.png
2
Figure 2: The architecture of the model comprises four main components. First is the frozen CroCo encoder. Last is the decoder D which is separately pre-trained and then frozen to decode from CroCo latent space to RGB. Then there are the delighting and relighting transformers, I and R respectively, which disentangle lighting and intrinsics before recombining them. The training process here shows pairs of images encoded and relit to match the lighting of the other image.
<paragraph_1>Our approach (see Fig. 2) starts with a delighting transformer which disentangles illumination from scene intrinsic properties by translating the patch latents into intrinsic patches and estimating a lighting latent vector that describes the appearance in that particular illumination environment. Second, a relighting transformer which recombines a lighting latent vector with intrinsic patches producing patch embeddings in the original CroCo latent space. Finally, to ensure high quality image synthesis we train a single view decoder to transform from CroCo latent space back to RGB images.</paragraph_1>
diagram
0.998946
OpenReview
ICLR
2,026
Beyond Prompt-Induced Lies: Investigating LLM Deception on Benign Prompts
Large Language Models (LLMs) are widely deployed in reasoning, planning, and decision-making tasks, making their trustworthiness critical. A significant and underexplored risk is intentional deception, where an LLM deliberately fabricates or conceals information to serve a hidden objective. Existing studies typically induce deception by explicitly setting a hidden objective through prompting or fine-tuning, which may not reflect real-world human-LLM interactions. Moving beyond such human-induced deception, we investigate LLMs' self-initiated deception on benign prompts. To address the absence of ground truth, we propose a framework based on Contact Searching Questions~(CSQ). This framework introduces two statistical metrics derived from psychological principles to quantify the likelihood of deception. The first, the *Deceptive Intention Score*, measures the model's bias toward a hidden objective. The second, the *Deceptive Behavior Score*, measures the inconsistency between the LLM's internal belief and its expressed output. Evaluating 16 leading LLMs, we find that both metrics rise in parallel and escalate with task difficulty for most models. Moreover, increasing model capacity does not always reduce deception, posing a significant challenge for future LLM development.
Large Language Model, Deception, Lie, Honest, Trustworthy
alignment, fairness, safety, privacy, and societal considerations
We detected the widespread deception of LLM under benign prompts and found its tendency increases with task difficulty.
[ 6, 6, 8 ]
Accept (Oral)
Zhaomin Wu, Mingzhe Du, See-Kiong Ng, Bingsheng He
~Zhaomin_Wu1, ~Mingzhe_Du1, ~See-Kiong_Ng1, ~Bingsheng_He1
20250917
https://openreview.net/forum?id=PDBBYwd1LY
PDBBYwd1LY
@inproceedings{ wu2026beyond, title={Beyond Prompt-Induced Lies: Investigating {LLM} Deception on Benign Prompts}, author={Zhaomin Wu and Mingzhe Du and See-Kiong Ng and Bingsheng He}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=PDBBYwd1LY} }
OpenReview/ICLR/figures/2026/accept_oral/PDBBYwd1LY/Figure2.png
2
Figure 2: An illustration of Contact Searching Questions (CSQ), featuring a linked-list question (left) and a broken-list question (right). Given the full-length question, Answer 1 represents the model’s expression. For the shorter follow-up question, Answer 2 reflects its underlying belief.
<paragraph_1>LLM deception can arise in two settings: (1) an incentivizing prompt is given, and the model lies to satisfy the objective specified in the prompt (see Figure 1); (2) a benign prompt is given, yet the model lies due to its intrinsic objective. Most existing studies focus on the incentivizing prompt: for example, Ward et al. (2023) explicitly prompt LLMs to generate deceptive content, and Van Der Weij et al. (2024) fine-tune LLMs to intentionally underperform on specified tasks. Unifying these scenarios, DeceptionBench (Ji et al., 2025) provides a benchmark for prompt-induced deception and treats responses to benign prompts as ground truth.</paragraph_1> <paragraph_2>To address these challenges, inspired by existing studies (Bryant & Trabasso, 1971; Sternberg, 1980) in cognitive psychology, we design the Contact Searching Question (CSQ) framework (illustrated in Figure 2), a set of binary-choice questions requiring an LLM to determine if a statement (whether contact exists between two individuals) is true based on a provided set of facts (known contacts among individuals) and rules (transitivity, asymmetry, and closure). This task structure represents a wide range of real-world scenarios, including mathematical proving and logical reasoning.</paragraph_2> <paragraph_3>Our second metric, deceptive behavior, quantifies the act of an LLM “maintaining a belief that itself considers false”. The core challenge is to measure what a model “considers false” without direct access to its internal states. To this end, we leverage a principle from cognitive psychology: simple questions that require low cognitive load are more likely to elicit truthful beliefs than complex questions (Vrij et al., 2006). We therefore identify deceptive behavior by measuring response inconsistency between a simple query, which serves as a probe for the model’s baseline “belief”, and a related, complex query that elicits its final “expression”. An inconsistency between the “belief” (the answer to the simple probe) and the “expression” (the answer to the complex query) is thus classified as deceptive behavior (Figure 2). This approach effectively distinguishes the targeted act of deception from consistent hallucination or bias (Figure 1), where a model would be incorrect on both query types. The metric is formally defined in Definition 3.4.</paragraph_3> <paragraph_4>In this section, we first introduce the CSQ framework, a reachability task on a directed graph (Section 4.1), with examples in Figure 2. We then present our evaluation framework for deceptive behavior and intention (Section 4.2), with additional prompt examples in Appendix E.</paragraph_4> <paragraph_5>These rules establish that a question concerning a source vertex v_s ∈ V and a target vertex v_t ∈ V is a problem of determining the existence of a directed path from v_s to v_t in G. To control the task difficulty, we evaluate on two highly related question categories: Linked-List Question and Broken-Linked-List Question. Furthermore, each broken-linked-list question contains a follow-up question that is designed to test the consistency (deceptive behavior) of the LLM’s response (Figure 2). This follow-up question is only applied to broken-list questions, since the specific fabricated edge is known, allowing for a targeted test of the LLM’s consistency.</paragraph_5>
diagram
0.986552
OpenReview
ICLR
2,026
The Shape of Adversarial Influence: Characterizing LLM Latent Spaces with Persistent Homology
Existing interpretability methods for Large Language Models (LLMs) often fall short by focusing on linear directions or isolated features, overlooking the high-dimensional, nonlinear, and relational geometry within model representations. This study focuses on how adversarial inputs systematically affect the internal representation spaces of LLMs, a topic which remains poorly understood. We propose the application of persistent homology (PH) to measure and understand the geometry and topology of the representation space when the model is under external adversarial influence. Specifically, we use PH to systematically interpret six state-of-the-art models under two distinct adversarial conditions—indirect prompt injection and backdoor fine-tuning—and uncover a consistent topological signature of adversarial influence. Across architectures and model sizes, adversarial inputs induce "topological compression'', where the latent space becomes structurally simpler, collapsing from varied, compact, small-scale features into fewer, dominant, and more dispersed large-scale ones. This topological signature is statistically robust across layers, highly discriminative, and provides interpretable insights into how adversarial effects emerge and propagate. By quantifying the shape of activations and neuron-level information flow, our architecture-agnostic framework reveals fundamental invariants of representational change, offering a complementary perspective to existing interpretability methods.
Persistent Homology, Interpretability, Topological Data Analysis, Representation Geometry, Large Language Models, AI Security, Adversarial Attacks, Sparse Autoencoders
interpretability and explainable AI
We use persistent homology to interpret how adversarial inputs reshape LLM representation spaces, resulting in a robust signature that provides multiscale, geometry-aware insights complementary to standard interpretability methods.
[ 8, 6, 6, 4 ]
Accept (Oral)
Aideen Fay, Inés García-Redondo, Qiquan Wang, Haim Dubossarsky, Anthea Monod
~Aideen_Fay1, ~Inés_García-Redondo1, ~Qiquan_Wang2, ~Haim_Dubossarsky1, ~Anthea_Monod1
20250919
https://openreview.net/forum?id=v2PglvLLKT
v2PglvLLKT
@inproceedings{ fay2026the, title={The Shape of Adversarial Influence: Characterizing {LLM} Latent Spaces with Persistent Homology}, author={Aideen Fay and In{\'e}s Garc{\'\i}a-Redondo and Qiquan Wang and Haim Dubossarsky and Anthea Monod}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=v2PglvLLKT} }
OpenReview/ICLR/figures/2026/accept_oral/v2PglvLLKT/Figure4.png
4
Figure 4: Pipeline for local analysis.
<paragraph_1>computational cost of PH and to enable statistically robust inference. Subsampling approaches in PH are theoretically grounded, as under mild sampling models, persistence diagrams estimated from point clouds converge to the population diagrams with guaranteed rates (Chazal et al., 2015; 2014). For each model layer, we drew K = 64 subsamples of k = 4096 normal representations and K = 64 subsamples of k = 4096 adversarial representations—see Appendix C.2 for ablations. We vectorized the corresponding barcodes into 41-dimensional barcode summaries (cf. Section 2.2), and performed the analysis in Figure 3; see results and further details in Section 4.1.</paragraph_1>
diagram
0.955924
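As a rough illustration of the subsampling-and-vectorization pipeline described in the record above, the sketch below draws repeated subsamples from a point cloud of activations, computes persistence diagrams with the ripser package, and summarizes each diagram with a fixed-length vector. The paper's 41-dimensional barcode summary is not specified in this excerpt, so a plain histogram of H1 persistences stands in for it, and all sizes are scaled down for illustration.

```python
import numpy as np
from ripser import ripser  # one common choice for Vietoris-Rips persistence

def persistence_summary(points, bins=41, max_persistence=2.0):
    """Toy stand-in for a barcode summary: a histogram of H1 persistences."""
    dgms = ripser(points, maxdim=1)['dgms']
    h1 = dgms[1]
    pers = h1[:, 1] - h1[:, 0] if len(h1) else np.zeros(0)
    pers = pers[np.isfinite(pers)]
    hist, _ = np.histogram(pers, bins=bins, range=(0.0, max_persistence))
    return hist.astype(float)

rng = np.random.default_rng(0)
activations = rng.normal(size=(20000, 16))   # stand-in for one layer's representations
K, k = 8, 512                                # the paper uses K = 64 subsamples of k = 4096
summaries = []
for _ in range(K):
    idx = rng.choice(len(activations), size=k, replace=False)
    summaries.append(persistence_summary(activations[idx]))
summaries = np.stack(summaries)              # one summary row per subsample
```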
OpenReview
ICLR
2,026
Distributional Equivalence in Linear Non-Gaussian Latent-Variable Cyclic Causal Models: Characterization and Learning
Causal discovery with latent variables is a fundamental task. Yet most existing methods rely on strong structural assumptions, such as enforcing specific indicator patterns for latents or restricting how they can interact with others. We argue that a core obstacle to a general, structural-assumption-free approach is the lack of an equivalence characterization: without knowing what can be identified, one generally cannot design methods for how to identify it. In this work, we aim to close this gap for linear non-Gaussian models. We establish the graphical criterion for when two graphs with arbitrary latent structure and cycles are distributionally equivalent, that is, they induce the same observed distribution set. Key to our approach is a new tool, edge rank constraints, which fills a missing piece in the toolbox for latent-variable causal discovery in even broader settings. We further provide a procedure to traverse the whole equivalence class and develop an algorithm to recover models from data up to such equivalence. To our knowledge, this is the first equivalence characterization with latent variables in any parametric setting without structural assumptions, and hence the first structural-assumption-free discovery method. Code and an interactive demo are available at https://equiv.cc.
causal discovery, latent variables, equivalence, rank constraints, linear non-Gaussian models, cycles
causal reasoning
[ 8, 8, 8, 8 ]
Accept (Oral)
Haoyue Dai, Immanuel Albrecht, Peter Spirtes, Kun Zhang
~Haoyue_Dai1, ~Immanuel_Albrecht1, ~Peter_Spirtes1, ~Kun_Zhang1
20250915
https://openreview.net/forum?id=b8TlYh6PN6
b8TlYh6PN6
@inproceedings{ dai2026distributional, title={Distributional Equivalence in Linear Non-Gaussian Latent-Variable Cyclic Causal Models: Characterization and Learning}, author={Haoyue Dai and Immanuel Albrecht and Peter Spirtes and Kun Zhang}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=b8TlYh6PN6} }
OpenReview/ICLR/figures/2026/accept_oral/b8TlYh6PN6/Figure8.png
8
Figure 8: Presentation of the equivalence class that glvLiNG estimates from the stock market data. Different colors of nodes indicate different sectors. Solid and dashed edges indicate edges that must appear in all or at least one equivalent graph(s).
<paragraph_1>By applying glvLiNG to this dataset, we recovered an equivalence class of causal graphs containing 2 latent variables. The presentation (see Appendix C.3) of this equivalence class is shown in Figure 8. Here is a summary: the class consists of 19,008 causal graphs with 16 = 14 + 2 vertices, and among them the number of edges ranges from 29 to 34. In the presentation, there are 20 “solid” (must appear) and 14 “dashed” (may appear) edges.</paragraph_1>
diagram
0.985522
OpenReview
ICLR
2,026
Triple-BERT: Do We Really Need MARL for Order Dispatch on Ride-Sharing Platforms?
On-demand ride-sharing platforms, such as Uber and Lyft, face the intricate real-time challenge of bundling and matching passengers—each with distinct origins and destinations—to available vehicles, all while navigating significant system uncertainties. Due to the extensive observation space arising from the large number of drivers and orders, order dispatching, though fundamentally a centralized task, is often addressed using Multi-Agent Reinforcement Learning (MARL). However, independent MARL methods fail to capture global information and exhibit poor cooperation among workers, while Centralized Training Decentralized Execution (CTDE) MARL methods suffer from the curse of dimensionality. To overcome these challenges, we propose Triple-BERT, a centralized Single Agent Reinforcement Learning (SARL) method designed specifically for large-scale order dispatching on ride-sharing platforms. Built on a variant of TD3, our approach addresses the vast action space through an action decomposition strategy that breaks down the joint action probability into individual driver action probabilities. To handle the extensive observation space, we introduce a novel BERT-based network, where parameter reuse mitigates parameter growth as the number of drivers and orders increases, and the attention mechanism effectively captures the complex relationships among the large pool of drivers and orders. We validate our method using a real-world ride-hailing dataset from Manhattan. Triple-BERT achieves approximately an 11.95% improvement over current state-of-the-art methods, with a 4.26% increase in served orders and a 22.25% reduction in pickup times. Our code, trained model parameters, and processed data are publicly available at https://github.com/RS2002/Triple-BERT.
Reinforcement Learning, Order Dispatching, Ride Sharing
reinforcement learning
This paper proposes a novel centralized reinforcement learning framework for large-scale order dispatching tasks in ride-sharing scenarios, achieving better cooperation among workers compared to previous multi-agent methods.
[ 8, 6, 6, 6 ]
Accept (Oral)
Zijian Zhao, Sen Li
~Zijian_Zhao7, ~Sen_Li5
20250918
https://openreview.net/forum?id=symgW6FhA6
symgW6FhA6
@inproceedings{ zhao2026triplebert, title={Triple-{BERT}: Do We Really Need {MARL} for Order Dispatch on Ride-Sharing Platforms?}, author={Zijian Zhao and Sen Li}, booktitle={The Fourteenth International Conference on Learning Representations}, year={2026}, url={https://openreview.net/forum?id=symgW6FhA6} }
OpenReview/ICLR/figures/2026/accept_oral/symgW6FhA6/Figure5.png
5
Figure 5: Network Structure in Stage 1
<paragraph_1>In stage 1, the network structure is shown in Fig. 5; it consists of the encoders and the QK-Attention module of the proposed network in Fig. 2. Although the model takes the entire worker and</paragraph_1>
diagram
0.997812
OpenReview
ICLR
2,017
HyperNetworks
This work explores hypernetworks: an approach of using one network, also known as a hypernetwork, to generate the weights for another network. We apply hypernetworks to generate adaptive weights for recurrent networks. In this case, hypernetworks can be viewed as a relaxed form of weight-sharing across layers. In our implementation, hypernetworks are trained jointly with the main network in an end-to-end fashion. Our main result is that hypernetworks can generate non-shared weights for LSTM and achieve state-of-the-art results on a variety of sequence modelling tasks including character-level language modelling, handwriting generation and neural machine translation, challenging the weight-sharing paradigm for recurrent networks.
Natural language processing, Deep learning, Supervised Learning
We train a small RNN to generate weights for a larger RNN, and train the system end-to-end. We obtain state-of-the-art results on a variety of sequence modelling tasks.
[ 6, 7, 8, 9 ]
Accept (Poster)
David Ha, Andrew M. Dai, Quoc V. Le
hadavid@google.com, adai@google.com, qvl@google.com
20161027
https://openreview.net/forum?id=rkpACe1lx
rkpACe1lx
@inproceedings{ ha2017hypernetworks, title={HyperNetworks}, author={David Ha and Andrew M. Dai and Quoc V. Le}, booktitle={International Conference on Learning Representations}, year={2017}, url={https://openreview.net/forum?id=rkpACe1lx} }
OpenReview/ICLR/figures/2017/accept_poster/rkpACe1lx/Figure1.png
1
Figure 1: An overview of HyperRNNs. Black connections and parameters are associated with basic RNNs. Orange connections and parameters are introduced in this work and associated with HyperRNNs. Dotted arrows are for parameter generation.
<paragraph_1>In HyperRNN, we allow W_h and W_x to float over time by using a smaller hypernetwork to generate these parameters of the main RNN at each step (see Figure 1). More concretely, the parameters W_h, W_x, b of the main RNN are different at different time steps, so that h_t can now be computed as:</paragraph_1>
diagram
0.998998
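The figure context above describes the core HyperRNN idea: a small recurrent hypernetwork emits, at every time step, the weights W_h, W_x and bias b used by the main RNN at that step. Below is a minimal NumPy sketch of that idea; it uses a naive linear map from the hypernetwork state to full weight matrices, whereas the paper uses a more parameter-efficient scheme, and all sizes are chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_main, d_hyper = 3, 8, 4          # toy sizes, not from the paper

# parameters of the small hypernetwork (itself a vanilla RNN)
Wh_hat = rng.normal(scale=0.1, size=(d_hyper, d_hyper))
Wx_hat = rng.normal(scale=0.1, size=(d_hyper, d_in + d_main))
b_hat = np.zeros(d_hyper)

# linear maps from the hypernetwork state to the main RNN's (flattened) weights
P_h = rng.normal(scale=0.1, size=(d_main * d_main, d_hyper))
P_x = rng.normal(scale=0.1, size=(d_main * d_in, d_hyper))
P_b = rng.normal(scale=0.1, size=(d_main, d_hyper))

def hyper_rnn_step(x_t, h_t, h_hat_t):
    """One step: the hypernetwork updates its own state, then generates the
    main RNN's weights for this time step (so W_h, W_x, b 'float' over time)."""
    h_hat_next = np.tanh(Wh_hat @ h_hat_t + Wx_hat @ np.concatenate([x_t, h_t]) + b_hat)
    Wh_t = (P_h @ h_hat_next).reshape(d_main, d_main)
    Wx_t = (P_x @ h_hat_next).reshape(d_main, d_in)
    b_t = P_b @ h_hat_next
    h_next = np.tanh(Wh_t @ h_t + Wx_t @ x_t + b_t)
    return h_next, h_hat_next

h, h_hat = np.zeros(d_main), np.zeros(d_hyper)
for x in rng.normal(size=(5, d_in)):      # unroll over a short toy sequence
    h, h_hat = hyper_rnn_step(x, h, h_hat)
```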
OpenReview
ICLR
2,017
Predicting Medications from Diagnostic Codes with Recurrent Neural Networks
It is a surprising fact that electronic medical records are failing at one of their primary purposes, that of tracking the set of medications that the patient is actively taking. Studies estimate that up to 50% of such lists omit active drugs, and that up to 25% of all active medications do not appear on the appropriate patient list. Manual efforts to maintain these lists involve a great deal of tedious human labor, which could be reduced by computational tools to suggest likely missing or incorrect medications on a patient’s list. We report here an application of recurrent neural networks to predict the likely therapeutic classes of medications that a patient is taking, given a sequence of the last 100 billing codes in their record. Our best model was a GRU that achieved high prediction accuracy (micro-averaged AUC 0.93, Label Ranking Loss 0.076), limited by hardware constraints on model size. Additionally, examining individual cases revealed that many of the predictions marked incorrect were likely to be examples of either omitted medications or omitted billing codes, supporting our assertion of a substantial number of errors and omissions in the data, and the likelihood of models such as these to help correct them.
Deep learning, Supervised Learning, Applications
Applying recurrent neural networks to fix errors and omissions in patient medication records.
[ 8, 6, 7 ]
Accept (Poster)
Jacek M. Bajor, Thomas A. Lasko
jacek.m.bajor@vanderbilt.edu, tom.lasko@vanderbilt.edu
20161103
https://openreview.net/forum?id=rJEgeXFex
rJEgeXFex
@inproceedings{ bajor2017predicting, title={Predicting Medications from Diagnostic Codes with Recurrent Neural Networks}, author={Jacek M. Bajor and Thomas A. Lasko}, booktitle={International Conference on Learning Representations}, year={2017}, url={https://openreview.net/forum?id=rJEgeXFex} }
OpenReview/ICLR/figures/2017/accept_poster/rJEgeXFex/Figure1.png
1
Figure 1: Simplified representation of a recurrent neural network (left) and an unrolled recurrent neural network (right). x_i is a single element in an input sequence x; h_i is an output after a single pass through the recurrent unit. Adapted from Olah (2015).
<paragraph_1>A recurrent neural network is a variation in which the output of one node on input x_t loops around to become an input to another node on input x_{t+1}, allowing information to be preserved as it iterates over an input data sequence (Figure 1). They were introduced in the 1980s (Rumelhart et al., 1986), but achieved explosive popularity only recently, after the development of methods to more reliably capture long-term dependencies, which significantly improved their performance on sequence-to-sequence mapping (Hochreiter & Schmidhuber, 1997; Sutskever et al., 2014).</paragraph_1> <paragraph_2>The basic RNN unit has a simple internal structure (Figure 2a). Output from the previous iteration h_{t-1} and the next input in a sequence x_t are both fed to the network on the next iteration. The Long Short-Term Memory configuration (LSTM) introduces new, more complex internal structure (Figure 2b) consisting of four neural network layers and a cell state (c_t), which is carried from one iteration to another. The additional layers form forget, input and output gates, which allow for the information to be forgotten (reset) or passed on to varying degrees.</paragraph_2> <paragraph_3>Medication predictions for a simpler patient. Note that the high-prediction medications are clinically reasonable given the billing codes in the sequence. Figure representation as in case 1.</paragraph_3> <paragraph_4>Medication predictions for a patient with only one ICD-9 code, repeated many times over five years. The medications listed under true labels are not indicated for paralysis agitans (Parkinson’s disease), but the patient was surely taking them for reasons not documented in the ICD-9 sequence. The model predicted mostly reasonable medications for a patient with Parkinson’s disease, especially Dopaminergic agents, which are the primary treatment for the disease. Figure representation as in case 1, above.</paragraph_4>
diagram
0.982775
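The figure context above walks through the basic RNN unit and the LSTM's gated variant. A compact NumPy sketch of a single LSTM step, matching the description of forget, input and output gates plus a carried cell state; weight shapes and initialization are illustrative only.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def lstm_step(x_t, h_prev, c_prev, params):
    """One LSTM iteration: h_{t-1} and x_t enter together; gates decide what is
    forgotten, what is written to the cell state c_t, and what is exposed as h_t."""
    z = np.concatenate([h_prev, x_t])
    f = sigmoid(params['W_f'] @ z + params['b_f'])   # forget gate
    i = sigmoid(params['W_i'] @ z + params['b_i'])   # input gate
    o = sigmoid(params['W_o'] @ z + params['b_o'])   # output gate
    g = np.tanh(params['W_g'] @ z + params['b_g'])   # candidate cell update
    c_t = f * c_prev + i * g
    h_t = o * np.tanh(c_t)
    return h_t, c_t

rng = np.random.default_rng(0)
d_x, d_h = 6, 4                                       # toy sizes
params = {k: rng.normal(scale=0.1, size=(d_h, d_h + d_x)) for k in ('W_f', 'W_i', 'W_o', 'W_g')}
params.update({k: np.zeros(d_h) for k in ('b_f', 'b_i', 'b_o', 'b_g')})
h, c = np.zeros(d_h), np.zeros(d_h)
for x in rng.normal(size=(10, d_x)):                  # iterate over an input sequence
    h, c = lstm_step(x, h, c, params)
```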
OpenReview
ICLR
2,017
Episodic Exploration for Deep Deterministic Policies for StarCraft Micromanagement
We consider scenarios from the real-time strategy game StarCraft as benchmarks for reinforcement learning algorithms. We focus on micromanagement, that is, the short-term, low-level control of team members during a battle. We propose several scenarios that are challenging for reinforcement learning algorithms because the state-action space is very large, and there is no obvious feature representation for the value functions. We describe our approach to tackle the micromanagement scenarios with deep neural network controllers from raw state features given by the game engine. We also present a heuristic reinforcement learning algorithm which combines direct exploration in the policy space and backpropagation. This algorithm collects traces for learning using deterministic policies, which appears much more efficient than, e.g., ε-greedy exploration. Experiments show that this algorithm allows the successful learning of non-trivial strategies for scenarios with armies of up to 15 agents, where both Q-learning and REINFORCE struggle.
Deep learning, Reinforcement Learning, Games
We propose a new reinforcement learning algorithm based on zero order optimization, that we evaluate on StarCraft micromanagement scenarios.
[ 8, 7, 7 ]
Accept (Poster)
Nicolas Usunier, Gabriel Synnaeve, Zeming Lin, Soumith Chintala
usunier@fb.com, gab@fb.com, zlin@fb.com, soumith@fb.com
20161104
https://openreview.net/forum?id=r1LXit5ee
r1LXit5ee
@inproceedings{ usunier2017episodic, title={Episodic Exploration for Deep Deterministic Policies for StarCraft Micromanagement}, author={Nicolas Usunier and Gabriel Synnaeve and Zeming Lin and Soumith Chintala}, booktitle={International Conference on Learning Representations}, year={2017}, url={https://openreview.net/forum?id=r1LXit5ee} }
OpenReview/ICLR/figures/2017/accept_poster/r1LXit5ee/Figure1.png
1
Figure 1: Representation of the joint (state, command) featurization and scoring process.
<paragraph_1>The full scoring approach is depicted in Figure 1. In our approach, a state is represented as a list of units. The raw features are transformed by a featurizer that 1) takes the 3 unit features (pos, tgt_pos and next_pos) and computes their distances to the position of the acting unit and to its target (pos_c and tgt_c). All 4 categorical variables are passed through a 10-dimensional linear embedding (not shown in figure). In addition to the 4 real-valued unit features, we have a 40-dimensional feature vector per unit as input to our network.</paragraph_1>
diagram
0.9344
OpenReview
ICLR
2,017
Calibrating Energy-based Generative Adversarial Networks
In this paper, we propose to equip Generative Adversarial Networks with the ability to produce direct energy estimates for samples. Specifically, we propose a flexible adversarial training framework, and prove this framework not only ensures the generator converges to the true data distribution, but also enables the discriminator to retain the density information at the global optimum. We derive the analytic form of the induced solution, and analyze its properties. In order to make the proposed framework trainable in practice, we introduce two effective approximation techniques. Empirically, the experiment results closely match our theoretical analysis, verifying that the discriminator is able to recover the energy of the data distribution.
Deep learning
[ 8, 8, 7 ]
Accept (Poster)
Zihang Dai, Amjad Almahairi, Philip Bachman, Eduard Hovy, Aaron Courville
zander.dai@gmail.com, amjadmahayri@gmail.com, phil.bachman@gmail.com, hovy@cmu.edu, aaron.courville@gmail.com
20161104
https://openreview.net/forum?id=SyxeqhP9ll
SyxeqhP9ll
@inproceedings{ dai2017calibrating, title={Calibrating Energy-based Generative Adversarial Networks}, author={Zihang Dai and Amjad Almahairi and Philip Bachman and Eduard Hovy and Aaron Courville}, booktitle={International Conference on Learning Representations}, year={2017}, url={https://openreview.net/forum?id=SyxeqhP9ll} }
OpenReview/ICLR/figures/2017/accept_poster/SyxeqhP9ll/Figure4.png
4
Figure 4: 100 highest-ranked images out of 1000 generated and real (bounding box) samples.
<paragraph_1>digit from the NIST dataset. We compare the ability of EGAN-Ent-NN with both EGAN-Const and GAN on ranking a set of 1,000 images, half of which are generated samples and the rest are real test images. Figures 4 and 5 show the top-100 and bottom-100 ranked images respectively for each model, after training them on digit 1. We also show in Figure 7 the mean of all training samples, so we can get a sense of what is the most common style (highest density) of digit 1 in NIST. We can notice that all of the top-ranked images by EGAN-Ent-NN look similar to the mean sample. In addition, the lowest-ranked images are clearly different from the mean image, with either high (clockwise or counter-clockwise) rotation degrees from the mean, or an extreme thickness level. We do not see such a clear distinction in other models. We provide in Appendix B.4 the ranking of the full set of images.</paragraph_1> <paragraph_2>(c) GAN</paragraph_2> <paragraph_3>• It is inaccurate in magnitude. As we can see, the entropy approximation gradient (Fig. (2,3)) has a much larger norm than the discriminator gradient (Fig. (2,2)). As a result, the total gradient (Fig. (2,4)) is fully dominated by the entropy approximation gradient. Thus, it usually takes much longer for the generator to learn to generate rare samples, and the training also proceeds much slower compared to the nearest neighbor based approximation.</paragraph_3> <paragraph_4>In comparison, the nearest neighbor based gradient approximation is much more accurate as shown in Figure 8b. As a result, it leads to a more accurate energy contour, as well as faster training. What’s more, from Fig. (2,4) of Figure 8b, we can see the entropy gradient does have the cancel-out effect on the discriminator gradient, which again matches our theory.</paragraph_4>
diagram
0.880793
OpenReview
ICLR
2,017
On Detecting Adversarial Perturbations
Machine learning and deep learning in particular has advanced tremendously on perceptual tasks in recent years. However, it remains vulnerable against adversarial perturbations of the input that have been crafted specifically to fool the system while being quasi-imperceptible to a human. In this work, we propose to augment deep neural networks with a small ``detector'' subnetwork which is trained on the binary classification task of distinguishing genuine data from data containing adversarial perturbations. Our method is orthogonal to prior work on addressing adversarial perturbations, which has mostly focused on making the classification network itself more robust. We show empirically that adversarial perturbations can be detected surprisingly well even though they are quasi-imperceptible to humans. Moreover, while the detectors have been trained to detect only a specific adversary, they generalize to similar and weaker adversaries. In addition, we propose an adversarial attack that fools both the classifier and the detector and a novel training procedure for the detector that counteracts this attack.
Computer vision, Deep learning, Supervised Learning
We present and evaluate an approach for detecting adversarial perturbations in images based on attaching a small subnetwork to a deep neural network that is trained specifically to detect adversarial perturbations.
[ 5, 7, 7 ]
Accept (Poster)
Jan Hendrik Metzen, Tim Genewein, Volker Fischer, Bastian Bischoff
JanHendrik.Metzen@de.bosch.com, Tim.Genewein@de.bosch.com, Volker.Fischer@de.bosch.com, Bastian.Bischoff@de.bosch.com
20161104
https://openreview.net/forum?id=SJzCSf9xg
SJzCSf9xg
@inproceedings{ metzen2017on, title={On Detecting Adversarial Perturbations}, author={Jan Hendrik Metzen and Tim Genewein and Volker Fischer and Bastian Bischoff}, booktitle={International Conference on Learning Representations}, year={2017}, url={https://openreview.net/forum?id=SJzCSf9xg} }
OpenReview/ICLR/figures/2017/accept_poster/SJzCSf9xg/Figure1.png
1
Figure 1: (Top) ResNet used for classification. Numbers on top of arrows denote the number of feature maps and numbers below arrows denote spatial resolutions. Conv denotes a convolutional layer, Res∗5 denotes a sequence of 5 residual blocks as introduced by He et al. (2016), GAP denotes a global-average pooling layer and Dens a fully-connected layer. Spatial resolutions are decreased by strided convolution and the number of feature maps on the residual’s shortcut is increased by 1x1 convolutions. All convolutional layers have 3x3 receptive fields and are followed by batch normalization and rectified linear units. (Bottom) Topology of detector network, which is attached to one of the AD(i) positions. MP denotes max-pooling and is optional: for AD(3), the second pooling layer is skipped, and for AD(4), both pooling layers are skipped.
<paragraph_1>We use a 32-layer Residual Network (He et al., 2016, ResNet) as classifier. The structure of the network is shown in Figure 1. The network has been trained for 100 epochs with stochastic gradient descent and momentum on 45000 data points from the train set. The momentum term was set to 0.9 and the initial learning rate was set to 0.1, reduced to 0.01 after 41 epochs, and further reduced to 0.001 after 61 epochs. After each epoch, the network’s performance on the validation data (the remaining 5000 data points from the train set) was determined. The network with maximal performance on the validation data was used in the subsequent experiments (with all tunable weights being fixed). This network’s accuracy on non-adversarial test data is 91.3%. We attach an adversary detection subnetwork (called “detector” below) to the ResNet. The detector is a convolutional neural network using batch normalization (Ioffe & Szegedy, 2015) and rectified linear units. In the experiments, we investigate different positions where the detector can be attached (see also Figure 1).</paragraph_1> <paragraph_2>In this subsection, we investigate a static adversary, i.e., an adversary that only has access to the classification network but not to the detector. The detector was trained for 20 epochs on 45000 data points from the train set and their corresponding adversarial examples using the Adam optimizer (Kingma & Ba, 2015) with a learning rate of 0.0001 and β1 = 0.99, β2 = 0.999. The remaining 5000 data points from the CIFAR10 train set are used as validation data and used for model selection. The detector was attached to position AD(2) (see Figure 1) except for the DeepFool-based adversaries where the detector was attached to AD(4); see below for a discussion. For the “Fast” and “Iterative” adversaries, the parameter ε from Section 3.1 was chosen from [1, 2, 3, 4] for ℓ∞-based methods and from [20, 40, 60, 80] for ℓ2-based methods; larger values of ε generally result in reduced accuracy of the classifier but increased detectability. For the “Iterative” method with ℓ2-norm, we used α = 20, i.e., in each iteration we make a step of ℓ2 distance 20. Please note that these values of ε are based on assuming a range of [0, 255] per color channel of the input.</paragraph_2>
diagram
0.994278
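The training recipe in the figure context above is concrete enough to restate as configuration: SGD with momentum 0.9 on 45,000 training points, a step learning-rate schedule, model selection on the held-out 5,000 points, and an Adam-trained detector attached at an intermediate position. A small Python sketch of that schedule and those settings; the network definitions themselves are omitted, and the dictionary keys are illustrative.

```python
def classifier_lr(epoch):
    """Step schedule described for the ResNet classifier: 0.1, reduced to 0.01
    after 41 epochs, and further reduced to 0.001 after 61 epochs (100 total)."""
    if epoch <= 41:
        return 0.1
    if epoch <= 61:
        return 0.01
    return 0.001

classifier_cfg = {
    'optimizer': 'sgd', 'momentum': 0.9, 'epochs': 100,
    'train_points': 45000, 'val_points': 5000,      # best validation accuracy selects the model
}
detector_cfg = {
    'optimizer': 'adam', 'lr': 1e-4, 'betas': (0.99, 0.999), 'epochs': 20,
    'attach_at': 'AD(2)',                           # AD(4) for the DeepFool-based adversaries
}
```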
OpenReview
ICLR
2,017
Learning to Remember Rare Events
Despite recent advances, memory-augmented deep neural networks are still limited when it comes to life-long and one-shot learning, especially in remembering rare events. We present a large-scale life-long memory module for use in deep learning. The module exploits fast nearest-neighbor algorithms for efficiency and thus scales to large memory sizes. Except for the nearest-neighbor query, the module is fully differentiable and trained end-to-end with no extra supervision. It operates in a life-long manner, i.e., without the need to reset it during training. Our memory module can be easily added to any part of a supervised neural network. To show its versatility we add it to a number of networks, from simple convolutional ones tested on image classification to deep sequence-to-sequence and recurrent-convolutional models. In all cases, the enhanced network gains the ability to remember and do life-long one-shot learning. Our module remembers training examples shown many thousands of steps in the past and it can successfully generalize from them. We set new state-of-the-art for one-shot learning on the Omniglot dataset and demonstrate, for the first time, life-long one-shot learning in recurrent neural networks on a large-scale machine translation task.
Deep learning
We introduce a memory module for life-long learning that adds one-shot learning capability to any supervised neural network.
[ 7, 8, 6 ]
Accept (Poster)
Lukasz Kaiser, Ofir Nachum, Aurko Roy, Samy Bengio
lukaszkaiser@google.com, ofirnachum@google.com, aurko@gatech.edu, bengio@google.com
20161104
https://openreview.net/forum?id=SJTQLdqlg
SJTQLdqlg
@inproceedings{ kaiser2017learning, title={Learning to Remember Rare Events}, author={Lukasz Kaiser and Ofir Nachum and Aurko Roy and Samy Bengio}, booktitle={International Conference on Learning Representations}, year={2017}, url={https://openreview.net/forum?id=SJTQLdqlg} }
OpenReview/ICLR/figures/2017/accept_poster/SJTQLdqlg/Figure3.png
3
Figure 3: Extended Neural GPU with memory module. Memory query is read from the position one below the current output logit, and the embedded memory value is put at the same position of the output tape p. The network learns to use these values to produce the output in the next step.
<paragraph_1>Extended Neural GPU with Memory. To test versatility of our memory module, we also add it to the Extended Neural GPU, a convolutional-recurrent model introduced by Kaiser & Bengio (2016). The Extended Neural GPU is a sequence-to-sequence model too, but its decoder is convolutional and the size of its state changes depending on the size of the input. Again, we leave the encoder part of the model intact, and extend the decoder part by a memory query. This time, we use the position one step ahead to query memory, and we put the embedded result to the output tape, as shown in Figure 3. Note that in this model the result of the memory will be processed by two recurrent-convolutional cells before the corresponding output is produced. The fact that this model still does one-shot learning confirms that the output of our memory module can be used deep inside a network, not just near the output layer.</paragraph_1>
diagram
0.974898
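The memory module referenced in the record above is, at query time, a nearest-neighbor lookup over stored key-value pairs (the abstract notes the nearest-neighbor query is the only non-differentiable step). A minimal sketch of such a lookup with cosine similarity; the memory size, key width, and the way the returned value would be embedded back into the decoder are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
mem_size, key_dim = 1024, 32                       # toy memory size and key width
keys = rng.normal(size=(mem_size, key_dim))
keys /= np.linalg.norm(keys, axis=1, keepdims=True)
values = rng.integers(0, 100, size=mem_size)       # e.g., stored class or token ids

def memory_query(q, k_nearest=4):
    """Return the values and similarities of the k nearest keys to query q."""
    q = q / np.linalg.norm(q)
    sims = keys @ q                                # cosine similarity with unit-norm keys
    top = np.argsort(-sims)[:k_nearest]
    return values[top], sims[top]

vals, sims = memory_query(rng.normal(size=key_dim))
```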
OpenReview
ICLR
2,017
Deep Probabilistic Programming
We propose Edward, a Turing-complete probabilistic programming language. Edward defines two compositional representations—random variables and inference. By treating inference as a first class citizen, on a par with modeling, we show that probabilistic programming can be as flexible and computationally efficient as traditional deep learning. For flexibility, Edward makes it easy to fit the same model using a variety of composable inference methods, ranging from point estimation to variational inference to MCMC. In addition, Edward can reuse the modeling representation as part of inference, facilitating the design of rich variational models and generative adversarial networks. For efficiency, Edward is integrated into TensorFlow, providing significant speedups over existing probabilistic systems. For example, we show on a benchmark logistic regression task that Edward is at least 35x faster than Stan and 6x faster than PyMC3. Further, Edward incurs no runtime overhead: it is as fast as handwritten TensorFlow.
[ 5, 8, 7 ]
Accept (Poster)
Dustin Tran, Matthew D. Hoffman, Rif A. Saurous, Eugene Brevdo, Kevin Murphy, David M. Blei
dustin@cs.columbia.edu, mathoffm@adobe.com, rif@google.com, ebrevdo@google.com, kpmurphy@google.com, david.blei@columbia.edu
20161104
https://openreview.net/forum?id=Hy6b4Pqee
Hy6b4Pqee
@inproceedings{ tran2017deep, title={Deep Probabilistic Programming}, author={Dustin Tran and Matthew D. Hoffman and Rif A. Saurous and Eugene Brevdo and Kevin Murphy and David M. Blei}, booktitle={International Conference on Learning Representations}, year={2017}, url={https://openreview.net/forum?id=Hy6b4Pqee} }
OpenReview/ICLR/figures/2017/accept_poster/Hy6b4Pqee/Figure10.png
10
Figure 10: Bayesian neural network for classification.
<paragraph_1>where NN is a 2-layer neural network whose weights and biases form the latent variables W_0, b_0, W_1, b_1. Define the prior on the weights and biases to be the standard normal. See Figure 10. There are N data points, D features, and H hidden units.</paragraph_1>
diagram
0.996771
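The figure context above specifies the generative model precisely enough to sample from it directly: standard normal priors over the weights and biases of a 2-layer network with D features and H hidden units. A NumPy sketch of one draw from that prior predictive; Edward's actual random-variable API is not reproduced here, and the tanh nonlinearity and all sizes are assumptions for illustration.

```python
import numpy as np

def sample_prior_predictive(X, H, n_classes, rng):
    """Sample W_0, b_0, W_1, b_1 from standard normal priors and push the
    inputs through the 2-layer network to get class probabilities and labels."""
    N, D = X.shape
    W_0 = rng.standard_normal((D, H)); b_0 = rng.standard_normal(H)
    W_1 = rng.standard_normal((H, n_classes)); b_1 = rng.standard_normal(n_classes)
    hidden = np.tanh(X @ W_0 + b_0)            # nonlinearity chosen for illustration
    logits = hidden @ W_1 + b_1
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    labels = np.array([rng.choice(n_classes, p=p) for p in probs])
    return labels, (W_0, b_0, W_1, b_1)

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))              # N = 100 data points, D = 5 features
y, params = sample_prior_predictive(X, H=16, n_classes=3, rng=rng)
```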
OpenReview
ICLR
2,017
Neural Program Lattices
We propose the Neural Program Lattice (NPL), a neural network that learns to perform complex tasks by composing low-level programs to express high-level programs. Our starting point is the recent work on Neural Programmer-Interpreters (NPI), which can only learn from strong supervision that contains the whole hierarchy of low-level and high-level programs. NPLs remove this limitation by providing the ability to learn from weak supervision consisting only of sequences of low-level operations. We demonstrate the capability of NPL to learn to perform long-hand addition and arrange blocks in a grid-world environment. Experiments show that it performs on par with NPI while using weak supervision in place of most of the strong supervision, thus indicating its ability to infer the high-level program structure from examples containing only the low-level operations.
Deep learning, Semi-Supervised Learning
[ 7, 4, 7 ]
Accept (Poster)
Chengtao Li, Daniel Tarlow, Alexander L. Gaunt, Marc Brockschmidt, Nate Kushman
ctli@mit.edu, dtarlow@microsoft.com, algaunt@microsoft.com, mabrocks@microsoft.com, nkushman@microsoft.com
20161104
https://openreview.net/forum?id=HJjiFK5gx
HJjiFK5gx
@inproceedings{ li2017neural, title={Neural Program Lattices}, author={Chengtao Li and Daniel Tarlow and Alexander L. Gaunt and Marc Brockschmidt and Nate Kushman}, booktitle={International Conference on Learning Representations}, year={2017}, url={https://openreview.net/forum?id=HJjiFK5gx} }
OpenReview/ICLR/figures/2017/accept_poster/HJjiFK5gx/Figure1.png
1
Figure 1: Stack-based NPI: Four time steps from the execution of the stack-based NPI model. Each color/hash pattern represents a unique set of unchanging data values which, over time, move up and down (and in and out of) the stack. Operations below the dotted line to calculate the new world state are executed only at test time, since we do not have access to fworld at training time, and the training data contains the correct sequence of world states.
<paragraph_1>The basic structure of the reformulated model can be seen in Figure 1. The model learns a library of programs, G, and arguments, R, to these programs, where each program g ∈ R^n and each argument</paragraph_1> <paragraph_2>An LSTM-based controller, shown in Figure 2, is used to generate the sequence of actions, deciding the action at timestep t based on the currently running program and arguments, g_in^t, the LSTM’s internal state h_in^t and an observation of the current world state, w_t. To support calls to and returns from subprograms, the controller state contains two call stacks, one for the internal RNN state, which we denote as M (green in Figure 1), and one for the program and arguments, which we denote as S (red in Figure 1). M_d^t and S_d^t refer to the elements at depth d of the stacks at timestep t.</paragraph_2> <paragraph_3>In our implementation we group together execution paths at each timestep by call depth, l ∈ L, and number of elementary operations performed so far, i ∈ I, and maintain at each timestep a separate embedded state representation for each group of execution paths. Thus the unrolled linear architecture shown in Figure 1 becomes instead a lattice, as shown in Figure 3, with a grid of approximate program states at each timestep. Each node in this lattice represents the state of all paths that are at depth l and elementary operation i when they reach timestep t. Each node contains a soft-argmax of the stack states in M and S and an RNN cell identical to that in Figure 2. For each node we must also compute y_i^{t,l}, the probability that at timestep t the execution is at depth l and at elementary operation i and has output the elementary operation sequence λ_{1:i}. As before we can compute this recursively as:</paragraph_3>
diagram
0.85818
OpenReview
ICLR
2,017
Transfer of View-manifold Learning to Similarity Perception of Novel Objects
We develop a model of perceptual similarity judgment based on re-training a deep convolution neural network (DCNN) that learns to associate different views of each 3D object to capture the notion of object persistence and continuity in our visual experience. The re-training process effectively performs distance metric learning under the object persistency constraints, to modify the view-manifold of object representations. It reduces the effective distance between the representations of different views of the same object without compromising the distance between those of the views of different objects, resulting in the untangling of the view-manifolds between individual objects within the same category and across categories. This untangling enables the model to discriminate and recognize objects within the same category, independent of viewpoints. We found that this ability is not limited to the trained objects, but transfers to novel objects in both trained and untrained categories, as well as to a variety of completely novel artificial synthetic objects. This transfer in learning suggests the modification of distance metrics in view- manifolds is more general and abstract, likely at the levels of parts, and independent of the specific objects or categories experienced during training. Interestingly, the resulting transformation of feature representation in the deep networks is found to significantly better match human perceptual similarity judgment than AlexNet, suggesting that object persistence could be an important constraint in the development of perceptual similarity judgment in biological neural networks.
Deep learning, Transfer Learning
DCNN trained with multiple views of the same object can develop human-like perpetual similarity judgment that can transfer to novel objects
[ 6, 5, 7 ]
Accept (Poster)
Xingyu Lin, Hao Wang, Zhihao Li, Yimeng Zhang, Alan Yuille, Tai Sing Lee
sean.linxingyu@pku.edu.cn, hao.wang@pku.edu.cn, zhihaol@andrew.cmu.edu, yimengzh@andrew.cmu.edu, alan.yuille@jhu.edu, tai@cnbc.cmu.edu
20161105
https://openreview.net/forum?id=B1gtu5ilg
B1gtu5ilg
@inproceedings{ lin2017transfer, title={Transfer of View-manifold Learning to Similarity Perception of Novel Objects}, author={Xingyu Lin and Hao Wang and Zhihao Li and Yimeng Zhang and Alan Yuille and Tai Sing Lee}, booktitle={International Conference on Learning Representations}, year={2017}, url={https://openreview.net/forum?id=B1gtu5ilg} }
OpenReview/ICLR/figures/2017/accept_poster/B1gtu5ilg/Figure4.png
4
Figure 4: Hierarchical clustering of the alien objects, based on (a) human perceptions, (b) AlexNet features and (c) OPnet features. The dendrograms illustrate how each cluster is composed by drawing a U-shaped link between a cluster and its children. The height of each U-link denotes the distance between its children clusters when they are merged.
<paragraph_1>Using the novel objects from Tenenbaum et al. (2011), we are able to compare our networks with human similarity perception. We collect 41 images from the paper, one image per object. A pairwise similarity matrix is calculated based on the cosine distance of their feature representations. We can then perform hierarchical agglomerative clustering to obtain a tree structure, using the Nearest Point Algorithm. That is, for all points i in cluster u and points j in cluster v, the distance between the two clusters is calculated by dist(u, v) = min(D(u[i], v[j])), where D(·) is the cosine distance function. We successively merge the two clusters with the shortest distance to construct the tree. The tree based on human perception is constructed by giving human subjects all the images and asking them to merge the two most similar clusters at each step, similar to the hierarchical agglomerative clustering algorithm. Results are shown in Figure 4.</paragraph_1>
diagram
0.958847
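The clustering procedure in the figure context above (cosine distances, merging by the Nearest Point Algorithm, i.e. single linkage) maps directly onto standard SciPy calls. A short sketch, with random features standing in for the 41 per-object feature vectors.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, dendrogram

rng = np.random.default_rng(0)
features = rng.random((41, 512))                 # stand-in for one feature vector per object

# pairwise cosine distances, then single-linkage ("Nearest Point Algorithm")
# agglomeration: dist(u, v) = min over cross-cluster pairs of D(u[i], v[j])
Z = linkage(pdist(features, metric='cosine'), method='single')

# Z encodes the merge tree; dendrogram(Z) draws the U-links shown in Figure 4
tree = dendrogram(Z, no_plot=True)
```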
OpenReview
ICLR
2,017
End-to-end Optimized Image Compression
We describe an image compression method, consisting of a nonlinear analysis transformation, a uniform quantizer, and a nonlinear synthesis transformation. The transforms are constructed in three successive stages of convolutional linear filters and nonlinear activation functions. Unlike most convolutional neural networks, the joint nonlinearity is chosen to implement a form of local gain control, inspired by those used to model biological neurons. Using a variant of stochastic gradient descent, we jointly optimize the entire model for rate-distortion performance over a database of training images, introducing a continuous proxy for the discontinuous loss function arising from the quantizer. Under certain conditions, the relaxed loss function may be interpreted as the log likelihood of a generative model, as implemented by a variational autoencoder. Unlike these models, however, the compression model must operate at any given point along the rate-distortion curve, as specified by a trade-off parameter. Across an independent set of test images, we find that the optimized method generally exhibits better rate-distortion performance than the standard JPEG and JPEG 2000 compression methods. More importantly, we observe a dramatic improvement in visual quality for all images at all bit rates, which is supported by objective quality estimates using MS-SSIM.
[ 8, 8, 7, 8, 9 ]
Accept (Oral)
Johannes Ballé, Valero Laparra, Eero P. Simoncelli
johannes.balle@nyu.edu, valero.laparra@uv.es, eero.simoncelli@nyu.edu
20161105
https://openreview.net/forum?id=rJxdQ3jeg
rJxdQ3jeg
@inproceedings{ ball{\'e}2017endtoend, title={End-to-end Optimized Image Compression}, author={Johannes Ball{\'e} and Valero Laparra and Eero P. Simoncelli}, booktitle={International Conference on Learning Representations}, year={2017}, url={https://openreview.net/forum?id=rJxdQ3jeg} }
OpenReview/ICLR/figures/2017/accept_oral/rJxdQ3jeg/Figure1.png
1
Figure 1: General nonlinear transform coding framework (Ballé, Laparra, and Simoncelli, 2016). A vector of image intensities x ∈ R^N is mapped to a latent code space via a parametric analysis transform, y = g_a(x; φ). This representation is quantized, yielding a discrete-valued vector q ∈ Z^M which is then compressed. The rate of this discrete code, R, is lower-bounded by the entropy of the discrete probability distribution of the quantized vector, H[P_q]. To reconstruct the compressed image, the discrete elements of q are reinterpreted as a continuous-valued vector ŷ, which is transformed back to the data space using a parametric synthesis transform x̂ = g_s(ŷ; θ). Distortion is assessed by transforming to a perceptual space using a (fixed) transform, ẑ = g_p(x̂), and evaluating a metric d(z, ẑ). We optimize the parameter vectors φ and θ for a weighted sum of the rate and distortion measures, R + λD, over a set of images.
diagram
0.989504
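The caption above lays out the rate-distortion objective R + λD over an analysis transform, quantizer, and synthesis transform, and the abstract mentions a continuous proxy for the quantizer. The sketch below shows the shape of such a relaxed training loss, with additive uniform noise standing in for rounding; the linear transforms and the unit-Gaussian entropy model are placeholders for the paper's learned convolutional transforms and learned density, and all sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 64, 16                                   # toy signal and code dimensions
phi = rng.normal(scale=0.1, size=(N, M))        # placeholder analysis transform g_a
theta = rng.normal(scale=0.1, size=(M, N))      # placeholder synthesis transform g_s

def relaxed_rd_loss(x, lam=0.01):
    y = x @ phi                                 # analysis transform
    y_tilde = y + rng.uniform(-0.5, 0.5, size=y.shape)   # continuous proxy for rounding
    x_hat = y_tilde @ theta                     # synthesis transform
    distortion = np.mean((x - x_hat) ** 2)      # d(z, z_hat) with an identity perceptual transform
    # stand-in entropy model: unit Gaussian density on each code element
    rate = np.mean(-np.log2(np.exp(-0.5 * y_tilde ** 2) / np.sqrt(2 * np.pi)))
    return rate + lam * distortion              # R + lambda * D

x = rng.normal(size=(32, N))                    # a toy batch of flattened images
loss = relaxed_rd_loss(x)
```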
OpenReview
ICLR
2,017
Neural Architecture Search with Reinforcement Learning
Neural networks are powerful and flexible models that work well for many difficult learning tasks in image, speech and natural language understanding. Despite their success, neural networks are still hard to design. In this paper, we use a recurrent network to generate the model descriptions of neural networks and train this RNN with reinforcement learning to maximize the expected accuracy of the generated architectures on a validation set. On the CIFAR-10 dataset, our method, starting from scratch, can design a novel network architecture that rivals the best human-invented architecture in terms of test set accuracy. Our CIFAR-10 model achieves a test error rate of 3.65, which is 0.09 percent better and 1.05x faster than the previous state-of-the-art model that used a similar architectural scheme. On the Penn Treebank dataset, our model can compose a novel recurrent cell that outperforms the widely-used LSTM cell, and other state-of-the-art baselines. Our cell achieves a test set perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than the previous state-of-the-art model. The cell can also be transferred to the character language modeling task on PTB and achieves a state-of-the-art perplexity of 1.214.
[ 9, 9, 9 ]
Accept (Oral)
Barret Zoph, Quoc Le
barretzoph@google.com, qvl@google.com
20161104
https://openreview.net/forum?id=r1Ue8Hcxg
r1Ue8Hcxg
@inproceedings{ zoph2017neural, title={Neural Architecture Search with Reinforcement Learning}, author={Barret Zoph and Quoc Le}, booktitle={International Conference on Learning Representations}, year={2017}, url={https://openreview.net/forum?id=r1Ue8Hcxg} }
OpenReview/ICLR/figures/2017/accept_oral/r1Ue8Hcxg/Figure5.png
5
Figure 5: An example of a recurrent cell constructed from a tree that has two leaf nodes (base 2) and one internal node. Left: the tree that defines the computation steps to be predicted by controller. Center: an example set of predictions made by the controller for each computation step in the tree. Right: the computation graph of the recurrent cell constructed from example predictions of the controller.
<paragraph_1>To make this process clearer, we show an example in Figure 5, for a tree structure that has two leaf nodes and one internal node. The leaf nodes are indexed by 0 and 1, and the internal node is indexed by 2. The controller RNN needs to first predict 3 blocks, each block specifying a combination method and an activation function for each tree index. After that it needs to predict the last 2 blocks that specify how to connect c_t and c_{t-1} to temporary variables inside the tree. Specifically,</paragraph_1>
diagram
0.96554
OpenReview
ICLR
2,018
On Unifying Deep Generative Models
Deep generative models have achieved impressive success in recent years. Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs), as powerful frameworks for deep generative model learning, have largely been considered as two distinct paradigms and received extensive independent studies respectively. This paper aims to establish formal connections between GANs and VAEs through a new formulation of them. We interpret sample generation in GANs as performing posterior inference, and show that GANs and VAEs involve minimizing KL divergences of respective posterior and inference distributions with opposite directions, extending the two learning phases of classic wake-sleep algorithm, respectively. The unified view provides a powerful tool to analyze a diverse set of existing model variants, and enables to transfer techniques across research lines in a principled way. For example, we apply the importance weighting method in VAE literatures for improved GAN learning, and enhance VAEs with an adversarial mechanism that leverages generated samples. Experiments show generality and effectiveness of the transfered techniques.
deep generative models, generative adversarial networks, variational autoencoders, variational inference
A unified statistical view of the broad class of deep generative models
[ 6, 7, 7 ]
Accept (Poster)
Zhiting Hu, Zichao Yang, Ruslan Salakhutdinov, Eric P. Xing
zhitinghu@gmail.com, yangtze2301@gmail.com, rsalakhu@cs.cmu.edu, epxing@cs.cmu.edu
20171027
https://openreview.net/forum?id=rylSzl-R-
rylSzl-R-
@inproceedings{ hu2018on, title={On Unifying Deep Generative Models}, author={Zhiting Hu and Zichao Yang and Ruslan Salakhutdinov and Eric P. Xing}, booktitle={International Conference on Learning Representations}, year={2018}, url={https://openreview.net/forum?id=rylSzl-R-}, }
OpenReview/ICLR/figures/2018/accept_poster/rylSzl-R-/Figure1.png
1
Figure 1: (a) Conventional view of ADA. To make direct correspondence to GANs, we use z to denote the data and x the feature. Subscripts src and tgt denote source and target domains, respectively. (b) Conventional view of GANs. (c) Schematic graphical model of both ADA and GANs (Eq.3). Arrows with solid lines denote generative process; arrows with dashed lines denote inference; hollow arrows denote deterministic transformation leading to implicit distributions; and blue arrows denote adversarial mechanism that involves respective conditional distribution q and its reverse qr , e.g., q(y|x) and qr(y|x) (denoted as q(r)(y|x) for short). Note that in GANs we have interpreted x as latent variable and (z, y) as visible. (d) InfoGAN (Eq.9), which, compared to GANs, adds conditional generation of code z with distribution qη(z|x, y). (e) VAEs (Eq.12), which is obtained by swapping the generation and inference processes of InfoGAN, i.e., in terms of the schematic graphical model, swapping solid-line arrows (generative process) and dashed-line arrows (inference) of (d).
<paragraph_1>We first review the conventional formulation of ADA. Figure 1(a) illustrates the computation flow. Let z be a data example either in the source or target domain, and y ∈ {0, 1} the domain indicator, with y = 0 indicating the target domain and y = 1 the source domain. The data distributions conditioned on the domain are then denoted as p(z|y). The feature extractor G_θ parameterized with θ maps z to feature x = G_θ(z). To enforce domain invariance of feature x, a discriminator D_φ is learned. Specifically, D_φ(x) outputs the probability that x comes from the source domain, and the discriminator is trained to maximize the binary classification accuracy of recognizing the domains:</paragraph_1> <paragraph_2>GANs (Goodfellow et al., 2014) can be seen as a special case of ADA. Taking image generation for example, intuitively, we want to transfer the properties of real images (source domain) to generated images (target domain), making them indistinguishable to the discriminator. Figure 1(b) shows the conventional view of GANs.</paragraph_2> <paragraph_3>New Interpretation. Let us take a closer look into the form of Eq.(3). It closely resembles the data reconstruction term of a variational lower bound by treating y as the visible variable and x as latent (as in ADA). That is, we are essentially reconstructing the real/fake indicator y (or its reverse 1 − y) with the “generative distribution” q_φ(y|x) and conditioning on x from the “inference distribution” p_θ(x|y). Figure 1(c) shows a schematic graphical model that illustrates such generative and inference processes. (Sec. D in the supplementary materials gives an example of translating a given schematic graphical model into a mathematical formula.) We go a step further to reformulate the objectives and reveal more insights into the problem. In particular, for each optimization step of p_θ(x|y) at point (θ_0, φ_0) in the parameter space, we have:</paragraph_3> <paragraph_4>Again, note that z is encapsulated in the implicit distribution p_θ(x|y). The model is expressed as the schematic graphical model in Figure 1(d). Letting q^r(x|z, y) ∝ q_{η0}(z|x, y) q^r_{φ0}(y|x) p_{θ0}(x) be the augmented “posterior”, the result in the form of Lemma 1 still holds by adding z-related conditionals:</paragraph_4> <paragraph_5>Table 1: Correspondence between different approaches in the proposed formulation. The label “[G]” in bold indicates the respective component is involved in the generative process within our interpretation, while “[I]” indicates the inference process. This is also expressed in the schematic graphical models in Figure 1.</paragraph_5> <paragraph_6>We provide the proof of Lemma 2 in the supplementary materials. Figure 1(e) shows the schematic graphical model of the new interpretation of VAEs, where the only difference from InfoGAN (Figure 1(d)) is swapping the solid-line arrows (generative process) and dashed-line arrows (inference). As in GANs and InfoGAN, for the real example domain with y = 1, both q_η(z|x, y = 1) and p_θ(x|z, y = 1) are constant distributions. Since, given a fake sample x from p_{θ0}(x), the reversed perfect discriminator q^r_*(y|x) always predicts y = 1 with probability 1, the loss on fake samples is therefore degenerated to a constant, which blocks out fake samples from contributing to learning.</paragraph_6>
diagram
0.997632
OpenReview
ICLR
2,018
Communication Algorithms via Deep Learning
Coding theory is a central discipline underpinning wireline and wireless modems that are the workhorses of the information age. Progress in coding theory is largely driven by individual human ingenuity with sporadic breakthroughs over the past century. In this paper we study whether it is possible to automate the discovery of decoding algorithms via deep learning. We study a family of sequential codes parametrized by recurrent neural network (RNN) architectures. We show that creatively designed and trained RNN architectures can decode well-known sequential codes such as the convolutional and turbo codes with close to optimal performance on the additive white Gaussian noise (AWGN) channel, which itself is achieved by breakthrough algorithms of our times (Viterbi and BCJR decoders, representing dynamic programming and forward-backward algorithms). We show strong generalizations, i.e., we train at a specific signal-to-noise ratio and block length but test at a wide range of these quantities, as well as robustness and adaptivity to deviations from the AWGN setting.
coding theory, recurrent neural network, communication
We show that creatively designed and trained RNN architectures can decode well known sequential codes and achieve close to optimal performances.
[ 6, 2, 9 ]
Accept (Poster)
Hyeji Kim, Yihan Jiang, Ranvir B. Rana, Sreeram Kannan, Sewoong Oh, Pramod Viswanath
hyejikim@illinois.edu, yihanrogerjiang@gmail.com, rbrana2@illinois.edu, ksreeram@uw.edu, sewoong79@gmail.com, pramodv@illinois.edu
20171027
https://openreview.net/forum?id=ryazCMbR-
ryazCMbR-
@inproceedings{ kim2018communication, title={Communication Algorithms via Deep Learning}, author={Hyeji Kim and Yihan Jiang and Ranvir B. Rana and Sreeram Kannan and Sewoong Oh and Pramod Viswanath}, booktitle={International Conference on Learning Representations}, year={2018}, url={https://openreview.net/forum?id=ryazCMbR-}, }
OpenReview/ICLR/figures/2018/accept_poster/ryazCMbR-/Figure12.png
12
Figure 12: rate-1/3 turbo encoder (top) and neural turbo decoder N-Turbo (bottom)
<paragraph_1>in Figure 12. Two identical rate-1/2 RSC encoders are used, encoder 1 with the original sequence b as input and encoder 2 with a randomly permuted version of b as input. The interleaver performs the random permutation. The first output sequence c1(1) of encoder 1 is identical to the output sequence c1(2) of encoder 2 and hence redundant, so the sequence c1(2) is thrown away, and the rest of the sequences (c1(1), c2(1), c2(2)) are transmitted; hence, the rate is 1/3.</paragraph_1> <paragraph_2>Training. We propose a neural decoder for turbo codes that we call N-Turbo in Figure 12. Following the deep layered architecture of the turbo decoder, we stack layers of a variation of our N-RSC decoder, which we call N-BCJR. However, end-to-end training (using examples of the input sequence</paragraph_2>
diagram
0.988866
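The figure context above describes the rate-1/3 turbo encoder: two identical rate-1/2 RSC encoders, the second fed an interleaved copy of the bits, with the second systematic stream dropped. A small Python sketch of that encoder; the constituent code's polynomials are not given in the excerpt, so the common (7, 5) octal RSC code is used for illustration.

```python
import numpy as np

def rsc_encode(bits):
    """Rate-1/2 recursive systematic convolutional encoder with the common
    (7, 5) octal polynomials; the paper's exact constituent code may differ."""
    s1 = s2 = 0                           # shift register of past feedback bits
    systematic, parity = [], []
    for u in bits:
        a = (u + s1 + s2) % 2             # feedback taps: 1 + D + D^2
        p = (a + s2) % 2                  # feedforward taps: 1 + D^2
        systematic.append(int(u))
        parity.append(p)
        s1, s2 = a, s1
    return np.array(systematic), np.array(parity)

def turbo_encode(bits, perm):
    """Rate-1/3 turbo encoder: transmit (c1(1), c2(1), c2(2)) and drop the
    second encoder's systematic stream, which duplicates the first."""
    c1_sys, c1_par = rsc_encode(bits)          # encoder 1 on the original sequence b
    _, c2_par = rsc_encode(bits[perm])         # encoder 2 on the interleaved sequence
    return c1_sys, c1_par, c2_par

rng = np.random.default_rng(0)
b = rng.integers(0, 2, size=100)
interleaver = rng.permutation(len(b))
c1_1, c2_1, c2_2 = turbo_encode(b, interleaver)
```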
OpenReview
ICLR
2,018
DORA The Explorer: Directed Outreaching Reinforcement Action-Selection
Exploration is a fundamental aspect of Reinforcement Learning, typically implemented using stochastic action-selection. Exploration, however, can be more efficient if directed toward gaining new world knowledge. Visit-counters have been proven useful both in practice and in theory for directed exploration. However, a major limitation of counters is their locality. While there are a few model-based solutions to this shortcoming, a model-free approach is still missing. We propose $E$-values, a generalization of counters that can be used to evaluate the propagating exploratory value over state-action trajectories. We compare our approach to commonly used RL techniques, and show that using $E$-values improves learning and performance over traditional counters. We also show how our method can be implemented with function approximation to efficiently learn continuous MDPs. We demonstrate this by showing that our approach surpasses state of the art performance in the Freeway Atari 2600 game.
Reinforcement Learning, Exploration, Model-Free
We propose a generalization of visit-counters that evaluate the propagating exploratory value over trajectories, enabling efficient exploration for model-free RL
[ 6, 6, 7 ]
Accept (Poster)
Lior Fox, Leshem Choshen, Yonatan Loewenstein
lior.fox@mail.huji.ac.il, leshem.choshen@mail.huji.ac.il, yonatan.loewenstein@mail.huji.ac.il
20171027
https://openreview.net/forum?id=ry1arUgCW
ry1arUgCW
@inproceedings{ fox2018dora, title={{DORA} The Explorer: Directed Outreaching Reinforcement Action-Selection}, author={Lior Fox and Leshem Choshen and Yonatan Loewenstein}, booktitle={International Conference on Learning Representations}, year={2018}, url={https://openreview.net/forum?id=ry1arUgCW}, }
OpenReview/ICLR/figures/2018/accept_poster/ry1arUgCW/Figure2.png
2
Figure 2: Bridge MDP
<paragraph_1>To demonstrate the advantage of using E-values over standard counters, we tested an ϵ-greedy agent with an exploration bonus of 1/log_{1−α} E added to the observed reward on the bridge MDP (Figure 2). To measure the learning progress and its convergence, we calculated the mean square error</paragraph_1> <paragraph_2>To test this algorithm, the first set of experiments was done on Bridge environments of various lengths k (Figure 2). We considered the following agents: ϵ-greedy, Softmax and their respective LLL determinizations (as described in Section 3.2.1) using both counters and E-values. In addition, we compared a more standard counter-based agent in the form of a UCB-like algorithm (Auer et al.,</paragraph_2>
diagram
0.865206
OpenReview
ICLR
2,018
Stochastic Variational Video Prediction
Predicting the future in real-world settings, particularly from raw sensory observations such as images, is exceptionally challenging. Real-world events can be stochastic and unpredictable, and the high dimensionality and complexity of natural images requires the predictive model to build an intricate understanding of the natural world. Many existing methods tackle this problem by making simplifying assumptions about the environment. One common assumption is that the outcome is deterministic and there is only one plausible future. This can lead to low-quality predictions in real-world settings with stochastic dynamics. In this paper, we develop a stochastic variational video prediction (SV2P) method that predicts a different possible future for each sample of its latent variables. To the best of our knowledge, our model is the first to provide effective stochastic multi-frame prediction for real-world video. We demonstrate the capability of the proposed method in predicting detailed future frames of videos on multiple real-world datasets, both action-free and action-conditioned. We find that our proposed method produces substantially improved video predictions when compared to the same model without stochasticity, and to other stochastic video prediction methods. Our SV2P implementation will be open sourced upon publication.
video prediction, stochastic prediction, variational inference, unsupervised learning
Stochastic variational video prediction in real-world settings.
[ 7, 7, 7 ]
Accept (Poster)
Mohammad Babaeizadeh, Chelsea Finn, Dumitru Erhan, Roy H. Campbell, Sergey Levine
mb2@uiuc.edu, cbfinn@eecs.berkeley.edu, dumitru@google.com, rhc@illinois.edu, svlevine@eecs.berkeley.edu
20171027
https://openreview.net/forum?id=rk49Mg-CW
rk49Mg-CW
@inproceedings{ babaeizadeh2018stochastic, title={Stochastic Variational Video Prediction}, author={Mohammad Babaeizadeh and Chelsea Finn and Dumitru Erhan and Roy H. Campbell and Sergey Levine}, booktitle={International Conference on Learning Representations}, year={2018}, url={https://openreview.net/forum?id=rk49Mg-CW}, }
OpenReview/ICLR/figures/2018/accept_poster/rk49Mg-CW/Figure2.png
2
Figure 2: Probabilistic graphical model of stochastic variational video prediction, assuming time-invariant latent. The generative model predicts the next frame conditioned on the previous frames and latent variables (solid lines), while the variational inference model approximates the posterior given all the frames (dotted lines).
<paragraph_1>In order to construct our stochastic variational video prediction model, we first formulate a probabilistic graphical model that explains the stochasticity in the video. Since our goal is to perform conditional video prediction, the predictions are conditioned on a set of $c$ context frames $x_0, \dots, x_{c-1}$ (e.g., if we are conditioning on one frame, $c = 1$), and our goal is to sample from $p(x_{c:T} \mid x_{0:c-1})$, where $x_i$ denotes the $i$th frame of the video (Figure 2).</paragraph_1>
diagram
0.980932
OpenReview
ICLR
2,018
Deep Sensing: Active Sensing using Multi-directional Recurrent Neural Networks
For every prediction we might wish to make, we must decide what to observe (what source of information) and when to observe it. Because making observations is costly, this decision must trade off the value of information against the cost of observation. Making observations (sensing) should be an active choice. To solve the problem of active sensing we develop a novel deep learning architecture: Deep Sensing. At training time, Deep Sensing learns how to issue predictions at various cost-performance points. To do this, it creates multiple representations at various performance levels associated with different measurement rates (costs). This requires learning how to estimate the value of real measurements vs. inferred measurements, which in turn requires learning how to infer missing (unobserved) measurements. To infer missing measurements, we develop a Multi-directional Recurrent Neural Network (M-RNN). An M-RNN differs from a bi-directional RNN in that it sequentially operates across streams in addition to within streams, and because the timing of inputs into the hidden layers is both lagged and advanced. At runtime, the operator prescribes a performance level or a cost constraint, and Deep Sensing determines what measurements to take and what to infer from those measurements, and then issues predictions. To demonstrate the power of our method, we apply it to two real-world medical datasets with significantly improved performance.
Active Sensing, Timely Prediction, Irregular Sampling, Missing Data
[ 7, 8, 6 ]
Accept (Poster)
Jinsung Yoon, William R. Zame, Mihaela van der Schaar
jsyoon0823@gmail.com, zame@econ.ucla.edu, mihaela.vanderschaar@oxford-man.ox.ac.uk
20171027
https://openreview.net/forum?id=r1SnX5xCb
r1SnX5xCb
@inproceedings{ yoon2018deep, title={Deep Sensing: Active Sensing using Multi-directional Recurrent Neural Networks}, author={Jinsung Yoon and William R. Zame and Mihaela van der Schaar}, booktitle={International Conference on Learning Representations}, year={2018}, url={https://openreview.net/forum?id=r1SnX5xCb}, }
OpenReview/ICLR/figures/2018/accept_poster/r1SnX5xCb/Figure3.png
3
Figure 3: Diagram of the neural networks for M-RNN
<paragraph_1>avoids overfitting and leads to significant performance improvements as compared to a standard Bi-RNN. (See the Interpolation part of Fig. 3.)</paragraph_1> <paragraph_2>Imputation: The objective of the imputation block is to construct an imputation function Ψ that operates across streams. Again, we abuse notation and write $\check{x}^d_t = \Psi(\mathcal{D}_{-x^d_t})$. Keep in mind that now we are using only data at time stamp $t$, not data from other time stamps. We construct the function Ψ to be independent of the time stamp $t$; so we construct it using fully connected layers (FC); see Imputation part of Fig. 3:</paragraph_2> <paragraph_3>We refer to the entire structure as a Multi-directional Recurrent Neural Network (M-RNN); see Fig. 3.</paragraph_3>
diagram
0.994405
OpenReview
ICLR
2,018
Auto-Conditioned Recurrent Networks for Extended Complex Human Motion Synthesis
We present a real-time method for synthesizing highly complex human motions using a novel training regime we call the auto-conditioned Recurrent Neural Network (acRNN). Recently, researchers have attempted to synthesize new motion by using autoregressive techniques, but existing methods tend to freeze or diverge after a couple of seconds due to an accumulation of errors that are fed back into the network. Furthermore, such methods have only been shown to be reliable for relatively simple human motions, such as walking or running. In contrast, our approach can synthesize arbitrary motions with highly complex styles, including dances or martial arts in addition to locomotion. The acRNN is able to accomplish this by explicitly accommodating for autoregressive noise accumulation during training. Our work is the first to our knowledge that demonstrates the ability to generate over 18,000 continuous frames (300 seconds) of new complex human motion w.r.t. different styles.
motion synthesis, motion prediction, human pose, human motion, recurrent networks, lstm
Synthesize complex and extended human motions using an auto-conditioned LSTM network
[ 7, 6, 7 ]
Accept (Poster)
Yi Zhou, Zimo Li, Shuangjiu Xiao, Chong He, Zeng Huang, Hao Li
zhou859@usc.edu, zimoli@usc.edu, xsjiu99@sjtu.edu.cn, sal@sjtu.edu.cn, zenghuang@usc.edu, hao@hao-li.com
20171027
https://openreview.net/forum?id=r11Q2SlRW
r11Q2SlRW
@inproceedings{ zhou2018autoconditioned, title={Auto-Conditioned Recurrent Networks for Extended Complex Human Motion Synthesis}, author={Yi Zhou and Zimo Li and Shuangjiu Xiao and Chong He and Zeng Huang and Hao Li}, booktitle={International Conference on Learning Representations}, year={2018}, url={https://openreview.net/forum?id=r11Q2SlRW}, }
OpenReview/ICLR/figures/2018/accept_poster/r11Q2SlRW/Figure1.png
1
Figure 1: Visual diagram of an unrolled Auto-Conditioned RNN (right) with condition length $v = 4$ and ground-truth length $u = 4$. $I_t$ is the input at time step $t$. $S_t$ is the hidden state. $O_t$ is the output.
<paragraph_1>The acRNN, on the other hand, deals with poor network output explicitly by using it during training. Instead of only feeding in ground-truth instances, we use subsequences of the network’s own outputs at periodic intervals. For instance, sticking with the example above, instead of conditioning the network on $G_{1,k} = [g_1, \dots, g_k]$, we use $\hat{G}_{1,k} = [g_1, \dots, g_u, p_{u+1}, \dots, p_{u+v}, g_{u+v+1}, \dots, g_k]$ to predict $G_{2,k+1} = [g_2, \dots, g_{k+1}]$. The variable $p_{u+1}$ is the network output conditioned on $[g_1, \dots, g_u]$, and $p_{u+2}$ is conditioned on $[g_1, \dots, g_u, p_{u+1}]$. In this example, we refer to $v$ as the "condition length" and $u$ as the "ground-truth length". As the network is conditioned on its own output during training, it is able to deal with such input during synthesis. Figure 1 details an unrolled Auto-Conditioned RNN with condition length $u = v = 4$, and Figure 10 shows a more detailed view of our network. The method of (Bengio et al., 2015) also proposes using network output during training, but does so stochastically, without fixing condition lengths. However, we found that changing the condition/ground-truth length while keeping the proportion of ground-truth input fixed affects both the accuracy and variation of the output. See Figure 9 in the appendix.</paragraph_1>
diagram
0.999305
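The auto-conditioning schedule described above (u ground-truth frames followed by v self-generated frames, repeated) can be sketched as follows. The stand-in net_step (a fixed random linear map) and the pose dimension D are placeholders so the example runs end to end; they are not the paper's LSTM.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8                         # pose dimension (assumed)
W = rng.normal(scale=0.1, size=(D, D))

def net_step(x):
    # Placeholder for one step of the recurrent cell.
    return x @ W

def unroll(ground_truth, u=4, v=4):
    """Feed u ground-truth frames, then v self-generated frames, repeatedly."""
    outputs, prev_pred = [], None
    for t, g in enumerate(ground_truth):
        use_own_output = (t % (u + v)) >= u and prev_pred is not None
        x = prev_pred if use_own_output else g    # auto-conditioning switch
        prev_pred = net_step(x)
        outputs.append(prev_pred)
    return np.stack(outputs)

# Predictions to compare against frames 2..17 of the sequence.
preds = unroll(rng.normal(size=(16, D)))
```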
OpenReview
ICLR
2,018
Quantitatively Evaluating GANs With Divergences Proposed for Training
Generative adversarial networks (GANs) have been extremely effective in approximating complex distributions of high-dimensional input data samples, and substantial progress has been made in understanding and improving GAN performance in terms of both theory and application. However, we currently lack quantitative methods for model assessment. Because of this, while many GAN variants are being proposed, we have relatively little understanding of their relative abilities. In this paper, we evaluate the performance of various types of GANs using divergence and distance functions typically used only for training. We observe consistency across the various proposed metrics and, interestingly, the test-time metrics do not favour networks that use the same training-time criterion. We also compare the proposed metrics to human perceptual scores.
Generative adversarial networks
An empirical evaluation on generative adversarial networks
[ 7, 7, 4 ]
Accept (Poster)
Daniel Jiwoong Im, He Ma, Graham W. Taylor, Kristin Branson
daniel.im@aifounded.com, hma02@uoguelph.ca, gwtaylor@uoguelph.ca, kristinbranson@gmail.com
20171027
https://openreview.net/forum?id=SJQHjzZ0-
SJQHjzZ0-
@inproceedings{ jiwoong2018quantitatively, title={Quantitatively Evaluating {GAN}s With Divergences Proposed for Training}, author={Daniel Jiwoong Im and Alllan He Ma and Graham W. Taylor and Kristin Branson}, booktitle={International Conference on Learning Representations}, year={2018}, url={https://openreview.net/forum?id=SJQHjzZ0-}, }
OpenReview/ICLR/figures/2018/accept_poster/SJQHjzZ0-/Figure9.png
9
Figure 9: GAN Topology for MNIST.
diagram
0.975679
OpenReview
ICLR
2,018
Hierarchical and Interpretable Skill Acquisition in Multi-task Reinforcement Learning
Learning policies for complex tasks that require multiple different skills is a major challenge in reinforcement learning (RL). It is also a requirement for its deployment in real-world scenarios. This paper proposes a novel framework for efficient multi-task reinforcement learning. Our framework trains agents to employ hierarchical policies that decide when to use a previously learned policy and when to learn a new skill. This enables agents to continually acquire new skills during different stages of training. Each learned task corresponds to a human language description. Because agents can only access previously learned skills through these descriptions, the agent can always provide a human-interpretable description of its choices. In order to help the agent learn the complex temporal dependencies necessary for the hierarchical policy, we provide it with a stochastic temporal grammar that modulates when to rely on previously learned skills and when to execute new skills. We validate our approach on Minecraft games designed to explicitly test the ability to reuse previously learned skills while simultaneously learning new skills.
Hierarchical Policy, Interpretable Policy, Deep Reinforcement Learning, Multi-task Reinforcement Learning, Skill Acquisition, Language Grounding
A novel hierarchical policy network which can reuse previously learned skills alongside and as subcomponents of new skills by discovering the underlying relations between skills.
[ 6, 6, 6 ]
Accept (Poster)
Tianmin Shu, Caiming Xiong, Richard Socher
tianmin.shu@ucla.edu, cxiong@salesforce.com, richard@socher.org
20171027
https://openreview.net/forum?id=SJJQVZW0b
SJJQVZW0b
@inproceedings{ shu2018hierarchical, title={Hierarchical and Interpretable Skill Acquisition in Multi-task Reinforcement Learning}, author={Tianmin Shu and Caiming Xiong and Richard Socher}, booktitle={International Conference on Learning Representations}, year={2018}, url={https://openreview.net/forum?id=SJJQVZW0b}, }
OpenReview/ICLR/figures/2018/accept_poster/SJJQVZW0b/Figure7.png
7
Figure 7: Hierarchical plans for “Put x on y” tasks. Top: an example of performing trained tasks; bottom: an example of generalizing the plan composition to unseen tasks.
<paragraph_1>We visualize typical hierarchical plans of several tasks generated by global policies learned by our full model in Appendix C (Figure 6 and Figure 7). It can be seen from the examples that our global policies adjust the composed plans in different scenarios. For instance, in the second plan on the first row, π1 did not deploy base policy π0 as the agent was already in front of the target item at the beginning of the episode, whereas in the plan on the second row, π1 deployed π0 for the "Find x" base task twice consecutively, as it did not finish the base task in the first call.</paragraph_1> <paragraph_2>Figure 6 and Figure 7 show several plans for different tasks composed by executing our hierarchical policies.</paragraph_2>
diagram
0.932981
OpenReview
ICLR
2,018
Compositional Attention Networks for Machine Reasoning
We present Compositional Attention Networks, a novel fully differentiable neural network architecture, designed to facilitate explicit and expressive reasoning. While many types of neural networks are effective at learning and generalizing from massive quantities of data, this model moves away from monolithic black-box architectures towards a design that provides a strong prior for iterative reasoning, enabling it to support explainable and structured learning, as well as generalization from a modest amount of data. The model builds on the great success of existing recurrent cells such as LSTMs: It sequences a single recurrent Memory, Attention, and Control (MAC) cell, and by careful design imposes structural constraints on the operation of each cell and the interactions between them, incorporating explicit control and soft attention mechanisms into their interfaces. We demonstrate the model's strength and robustness on the challenging CLEVR dataset for visual reasoning, achieving a new state-of-the-art 98.9% accuracy, halving the error rate of the previous best model. More importantly, we show that the new model is more computationally efficient, data-efficient, and requires an order of magnitude less time and/or data to achieve good results.
Deep Learning, Reasoning, Memory, Attention, VQA, CLEVR, Recurrent Neural Networks, Module Networks, Compositionality
We present a novel architecture, based on dynamic memory, attention and composition for the task of machine reasoning.
[ 7, 6, 7 ]
Accept (Poster)
Drew A. Hudson, Christopher D. Manning
dorarad@cs.stanford.edu, manning@cs.stanford.edu
20171027
https://openreview.net/forum?id=S1Euwz-Rb
S1Euwz-Rb
@inproceedings{ arad2018compositional, title={Compositional Attention Networks for Machine Reasoning}, author={Drew Arad Hudson and Christopher D. Manning}, booktitle={International Conference on Learning Representations}, year={2018}, url={https://openreview.net/forum?id=S1Euwz-Rb}, }
OpenReview/ICLR/figures/2018/accept_poster/S1Euwz-Rb/Figure4.png
4
Figure 4: The Read Unit (RU) diagram. Blue refers to control flow, purple to knowledge flow and red to memory flow. See section 3.2.2 for description.
<paragraph_1>The Read Unit is provided with access to the knowledge base $KB_V$, along with the previous memory state $m_{i-1}$ and the current control $c_i$. It is responsible for retrieving relevant content from the Knowledge Base $KB_V$ for the reasoning task that the MAC cell should accomplish at this step, which is represented by the current control state $c_i$, as explained above. Figure 4 shows a diagram.</paragraph_1>
diagram
0.999744
OpenReview
ICLR
2,018
DCN+: Mixed Objective And Deep Residual Coattention for Question Answering
Traditional models for question answering optimize using cross entropy loss, which encourages exact answers at the cost of penalizing nearby or overlapping answers that are sometimes equally accurate. We propose a mixed objective that combines cross entropy loss with self-critical policy learning, using rewards derived from word overlap to solve the misalignment between evaluation metric and optimization objective. In addition to the mixed objective, we introduce a deep residual coattention encoder that is inspired by recent work in deep self-attention and residual networks. Our proposals improve model performance across question types and input lengths, especially for long questions that require the ability to capture long-term dependencies. On the Stanford Question Answering Dataset, our model achieves state-of-the-art results with 75.1% exact match accuracy and 83.1% F1, while the ensemble obtains 78.9% exact match accuracy and 86.0% F1.
question answering, deep learning, natural language processing, reinforcement learning
We introduce the DCN+ with deep residual coattention and mixed-objective RL, which achieves state of the art performance on the Stanford Question Answering Dataset.
[ 6, 8, 7 ]
Accept (Poster)
Caiming Xiong, Victor Zhong, Richard Socher
cxiong@salesforce.com, richard@socher.org, victor@victorzhong.com
20171027
https://openreview.net/forum?id=H1meywxRW
H1meywxRW
@inproceedings{ xiong2018dcn, title={{DCN}+: Mixed Objective And Deep Residual Coattention for Question Answering}, author={Caiming Xiong and Victor Zhong and Richard Socher}, booktitle={International Conference on Learning Representations}, year={2018}, url={https://openreview.net/forum?id=H1meywxRW}, }
OpenReview/ICLR/figures/2018/accept_poster/H1meywxRW/Figure1.png
1
Figure 1: Deep residual coattention encoder.
<paragraph_1>Because it only has a single-layer coattention encoder, the DCN is limited in its ability to compose complex input representations. Vaswani et al. (2017) proposed stacked self-attention modules to facilitate signal traversal. They also showed that the network’s ability to model long-range dependencies can be improved by reducing the length of signal paths. We propose two modifications to the coattention encoder to leverage these findings. First, we extend the coattention encoder with self-attention by stacking coattention layers. This allows the network to build richer representations over the input. Second, we merge coattention outputs from each layer with residual connections. This reduces the length of signal paths. Our encoder is shown in Figure 1.</paragraph_1>
diagram
0.867131
OpenReview
ICLR
2,018
Bi-Directional Block Self-Attention for Fast and Memory-Efficient Sequence Modeling
Recurrent neural networks (RNN), convolutional neural networks (CNN) and self-attention networks (SAN) are commonly used to produce context-aware representations. RNN can capture long-range dependency but is hard to parallelize and not time-efficient. CNN focuses on local dependency but does not perform well on some tasks. SAN can model both such dependencies via highly parallelizable computation, but memory requirement grows rapidly in line with sequence length. In this paper, we propose a model, called "bi-directional block self-attention network (Bi-BloSAN)", for RNN/CNN-free sequence encoding. It requires as little memory as RNN but with all the merits of SAN. Bi-BloSAN splits the entire sequence into blocks, and applies an intra-block SAN to each block for modeling local context, then applies an inter-block SAN to the outputs for all blocks to capture long-range dependency. Thus, each SAN only needs to process a short sequence, and only a small amount of memory is required. Additionally, we use feature-level attention to handle the variation of contexts around the same word, and use forward/backward masks to encode temporal order information. On nine benchmark datasets for different NLP tasks, Bi-BloSAN achieves or improves upon state-of-the-art accuracy, and shows better efficiency-memory trade-off than existing RNN/CNN/SAN.
deep learning, attention mechanism, sequence modeling, natural language processing, sentence embedding
A self-attention network for RNN/CNN-free sequence encoding with small memory consumption, highly parallelizable computation and state-of-the-art performance on several NLP tasks
[ 6, 6, 9 ]
Accept (Poster)
Tao Shen, Tianyi Zhou, Guodong Long, Jing Jiang, Chengqi Zhang
tao.shen@student.uts.edu.au, tianyizh@uw.edu, guodong.long@uts.edu.au, jing.jiang@uts.edu.au, chengqi.zhang@uts.edu.au
20171027
https://openreview.net/forum?id=H1cWzoxA-
H1cWzoxA-
@inproceedings{ shen2018bidirectional, title={Bi-Directional Block Self-Attention for Fast and Memory-Efficient Sequence Modeling}, author={Tao Shen and Tianyi Zhou and Guodong Long and Jing Jiang and Chengqi Zhang}, booktitle={International Conference on Learning Representations}, year={2018}, url={https://openreview.net/forum?id=H1cWzoxA-}, }
OpenReview/ICLR/figures/2018/accept_poster/H1cWzoxA-/Figure2.png
2
Figure 2: Masked self-attention mechanism. $f_{ij}$ denotes $f(x_i, x_j)$ in Eq. (9).
<paragraph_1>where $W^{(1)} \in \mathbb{R}^{d_e \times d_e}$, $W^{(2)} \in \mathbb{R}^{d_e \times d_q}$. The procedures to calculate the attention output from $f(x_i, x_j)$ are identical to those in token2token self-attention. We use $s = g^m(x, M)$ to denote the complete process of masked self-attention with $s = [s_1, s_2, \dots, s_n]$ as the output sequence. An illustration of masked self-attention is given in Figure 2.</paragraph_1> <paragraph_2>We visualize the progress of training models on CR dataset in Figure 5. The convergence speed of Bi-BloSAN is ∼6× and ∼2× faster than Bi-LSTM and DiSAN respectively. Although Bi-BloSAN is less time-efficient than CNN and multi-head attention, it has much better prediction quality.</paragraph_2>
diagram
0.991336
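A minimal sketch of masked self-attention in the spirit of the figure caption above. Note that the actual Bi-BloSAN score f(x_i, x_j) is multi-dimensional (feature-level); the plain scaled dot-product score here is a simplification, and the -1e9 mask value stands in for -inf.

```python
import numpy as np

def masked_self_attention(x, M):
    """Simplified masked self-attention over a sequence x of shape (n, d).

    M is an (n, n) additive mask with 0 for allowed pairs and a large negative
    value for blocked ones (e.g., a forward mask allows j <= i, a backward mask
    allows j >= i), so blocked positions get ~zero attention weight.
    """
    scores = x @ x.T / np.sqrt(x.shape[1])     # stand-in for f(x_i, x_j)
    scores = scores + M                        # apply the positional mask
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights = weights / weights.sum(axis=1, keepdims=True)
    return weights @ x                         # s_i = sum_j weight_ij * x_j

n, d = 5, 4
x = np.random.randn(n, d)
forward_mask = np.where(np.tril(np.ones((n, n))) == 1, 0.0, -1e9)
s = masked_self_attention(x, forward_mask)
```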
OpenReview
ICLR
2,018
Active Learning for Convolutional Neural Networks: A Core-Set Approach
Convolutional neural networks (CNNs) have been successfully applied to many recognition and learning tasks using a universal recipe: training a deep model on a very large dataset of supervised examples. However, this approach is rather restrictive in practice since collecting a large set of labeled images is very expensive. One way to ease this problem is coming up with smart ways for choosing images to be labelled from a very large collection (i.e. active learning). Our empirical study suggests that many of the active learning heuristics in the literature are not effective when applied to CNNs in the batch setting. Inspired by these limitations, we define the problem of active learning as core-set selection, i.e. choosing a set of points such that a model learned over the selected subset is competitive for the remaining data points. We further present a theoretical result characterizing the performance of any selected subset using the geometry of the datapoints. As an active learning algorithm, we choose the subset which is expected to yield the best result according to our characterization. Our experiments show that the proposed method significantly outperforms existing approaches in image classification experiments by a large margin.
Active Learning, Convolutional Neural Networks, Core-Set Selection
We approach to the problem of active learning as a core-set selection problem and show that this approach is especially useful in the batch active learning setting which is crucial when training CNNs.
[ 7, 7, 7 ]
Accept (Poster)
Ozan Sener, Silvio Savarese
ozansener@cs.stanford.edu, ssilvio@stanford.edu
20171027
https://openreview.net/forum?id=H1aIuk-RW
H1aIuk-RW
@inproceedings{ sener2018active, title={Active Learning for Convolutional Neural Networks: A Core-Set Approach}, author={Ozan Sener and Silvio Savarese}, booktitle={International Conference on Learning Representations}, year={2018}, url={https://openreview.net/forum?id=H1aIuk-RW}, }
OpenReview/ICLR/figures/2018/accept_poster/H1aIuk-RW/Figure1.png
1
Figure 1: Visualization of Theorem 1. Consider the set of selected points $s$ and the points in the remainder of the dataset $[n] \setminus s$; our result shows that if $s$ is a $\delta_s$ cover of the dataset, $\left| \frac{1}{n}\sum_{i\in[n]} l(x_i, y_i; A_s) - \frac{1}{|s|}\sum_{j\in s} l(x_j, y_j; A_s) \right| \le O(\delta_s) + O\!\left(\sqrt{\tfrac{1}{n}}\right)$
<paragraph_1>$\frac{1}{n}\sum_{i\in[n]} l(x_i, y_i; A_s)$. We state the theorem in this form to be consistent with (3). We visualize this theorem in Figure 1 and defer its proof to the appendix. In this theorem, “a set $s$ is a $\delta$ cover of a set $s^\star$” means a set of balls with radius $\delta$ centered at each member of $s$ can cover the entire $s^\star$. Informally, this theorem suggests that we can bound the core-set loss with the covering radius and a term which goes to zero at a rate that depends solely on $n$. This is an interesting result since this bound does not depend on the number of labelled points. In other words, a provided label does not help the core-set loss unless it decreases the covering radius.</paragraph_1>
diagram
0.917242
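Theorem 1 says the core-set loss is controlled by the covering radius delta_s, so selecting points that shrink that radius is the natural strategy. Below is a standard greedy k-center sketch (a classical 2-approximation to the minimal covering radius); it illustrates the idea but is not necessarily the paper's exact selection procedure, and the feature matrix and budget are made up.

```python
import numpy as np

def k_center_greedy(X, labeled_idx, budget):
    """Greedily pick `budget` points that shrink the covering radius delta_s.

    X: (n, d) feature matrix; labeled_idx: indices already labeled (the set s).
    Each step adds the point farthest from the current centers.
    """
    centers = list(labeled_idx)
    # Distance of every point to its nearest current center.
    dists = np.min(np.linalg.norm(X[:, None, :] - X[centers][None, :, :], axis=2), axis=1)
    chosen = []
    for _ in range(budget):
        idx = int(np.argmax(dists))            # farthest point = current radius
        chosen.append(idx)
        centers.append(idx)
        dists = np.minimum(dists, np.linalg.norm(X - X[idx], axis=1))
    return chosen, float(dists.max())          # new points and resulting delta_s

X = np.random.randn(200, 16)
new_points, radius = k_center_greedy(X, labeled_idx=[0, 1, 2], budget=10)
```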
OpenReview
ICLR
2,018
Interactive Grounded Language Acquisition and Generalization in a 2D World
We build a virtual agent for learning language in a 2D maze-like world. The agent sees images of the surrounding environment, listens to a virtual teacher, and takes actions to receive rewards. It interactively learns the teacher’s language from scratch based on two language use cases: sentence-directed navigation and question answering. It learns simultaneously the visual representations of the world, the language, and the action control. By disentangling language grounding from other computational routines and sharing a concept detection function between language grounding and prediction, the agent reliably interpolates and extrapolates to interpret sentences that contain new word combinations or new words missing from training sentences. The new words are transferred from the answers of language prediction. Such a language ability is trained and evaluated on a population of over 1.6 million distinct sentences consisting of 119 object words, 8 color words, 9 spatial-relation words, and 50 grammatical words. The proposed model significantly outperforms five comparison methods for interpreting zero-shot sentences. In addition, we demonstrate human-interpretable intermediate outputs of the model in the appendix.
grounded language learning and generalization, zero-shot language learning
Training an agent in a 2D virtual world for grounded language acquisition and generalization.
[ 6, 7, 6 ]
Accept (Poster)
Haonan Yu, Haichao Zhang, Wei Xu
haonanyu@baidu.com, zhanghaichao@baidu.com, wei.xu@baidu.com
20171027
https://openreview.net/forum?id=H1UOm4gA-
H1UOm4gA-
@inproceedings{ yu2018interactive, title={Interactive Grounded Language Acquisition and Generalization in a 2D World}, author={Haonan Yu and Haichao Zhang and Wei Xu}, booktitle={International Conference on Learning Representations}, year={2018}, url={https://openreview.net/forum?id=H1UOm4gA-}, }
OpenReview/ICLR/figures/2018/accept_poster/H1UOm4gA-/Figure17.png
17
Figure 17: An overview of the baseline VL. The computations of NAV and QA only differ in the last MLPs.
<paragraph_1>VIS-LSTM [VL] An adaptation of a model devised by Ren et al. (2015) which was originally proposed for VQA. We flatten h and project it to the word embedding space $\mathbb{R}^D$. Then it is appended to the input sentence s as the first word. The augmented sentence goes through an LSTM whose last state is used for both NAV and QA (Figure 17, Appendix D).</paragraph_1> <paragraph_2>[VL] Its CNN has four convolutional layers (3, 2, 64), (3, 2, 64), (3, 2, 128), and (3, 1, 128). This is followed by a fully-connected layer of size 512, which projects the feature cube to the word embedding space. The RNN has 512 units. For either QA or NAV, the RNN’s last state goes through a three-layer MLP of which each layer has 512 units (Figure 17). [CE] It has the same layer-size configuration with VL (Figure 18). [SAN] Its RNN has 256 units. Following the original approach (Yang et al., 2016), we use two attention layers.</paragraph_2>
diagram
0.940663
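The VL baseline described in the figure context can be sketched as below. The 512-unit embedding, hidden, and MLP sizes follow the quoted text; the flattened image-feature dimension, the number of navigation actions, and the answer vocabulary size are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class VisLSTMBaseline(nn.Module):
    """Sketch of the VL baseline: the projected image feature is prepended to the
    sentence as a pseudo first word; a shared LSTM last state feeds NAV and QA heads."""
    def __init__(self, img_dim=128 * 3 * 3, embed_dim=512, hidden=512,
                 n_nav_actions=4, n_answers=128):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, embed_dim)        # flatten h -> R^D
        self.lstm = nn.LSTM(embed_dim, hidden, batch_first=True)
        def head(n_out):
            return nn.Sequential(nn.Linear(hidden, 512), nn.ReLU(),
                                 nn.Linear(512, 512), nn.ReLU(),
                                 nn.Linear(512, n_out))
        self.nav_head, self.qa_head = head(n_nav_actions), head(n_answers)

    def forward(self, img_feat, word_embeds):
        # img_feat: (B, img_dim); word_embeds: (B, L, embed_dim)
        first = self.img_proj(img_feat).unsqueeze(1)         # (B, 1, D)
        seq = torch.cat([first, word_embeds], dim=1)         # image feature as first "word"
        _, (h_n, _) = self.lstm(seq)
        last = h_n[-1]                                       # shared last state
        return self.nav_head(last), self.qa_head(last)

model = VisLSTMBaseline()
nav_logits, qa_logits = model(torch.randn(2, 128 * 3 * 3), torch.randn(2, 7, 512))
```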
OpenReview
ICLR
2,018
Deep Complex Networks
At present, the vast majority of building blocks, techniques, and architectures for deep learning are based on real-valued operations and representations. However, recent work on recurrent neural networks and older fundamental theoretical analysis suggests that complex numbers could have a richer representational capacity and could also facilitate noise-robust memory retrieval mechanisms. Despite their attractive properties and potential for opening up entirely new neural architectures, complex-valued deep neural networks have been marginalized due to the absence of the building blocks required to design such models. In this work, we provide the key atomic components for complex-valued deep neural networks and apply them to convolutional feed-forward networks. More precisely, we rely on complex convolutions and present algorithms for complex batch-normalization, complex weight initialization strategies for complex-valued neural nets and we use them in experiments with end-to-end training schemes. We demonstrate that such complex-valued models are competitive with their real-valued counterparts. We test deep complex models on several computer vision tasks, on music transcription using the MusicNet dataset and on Speech spectrum prediction using TIMIT. We achieve state-of-the-art performance on these audio-related tasks.
deep learning, complex-valued neural networks
[ 8, 4, 7 ]
Accept (Poster)
Chiheb Trabelsi, Olexa Bilaniuk, Ying Zhang, Dmitriy Serdyuk, Sandeep Subramanian, Joao Felipe Santos, Soroush Mehri, Negar Rostamzadeh, Yoshua Bengio, Christopher J Pal
chiheb.trabelsi@polymtl.ca, olexa.bilaniuk@umontreal.ca, ying.zhang@umontreal.ca, serdyuk@iro.umontreal.ca, sandeep.subramanian.1@umontreal.ca, jfsantos@emt.inrs.ca, soroush.mehri@microsoft.com, negar@elementai.com, yoshua.bengio@umontreal.ca, christopher.pal@polymtl.ca
20171027
https://openreview.net/forum?id=H1T2hmZAb
H1T2hmZAb
@inproceedings{ trabelsi2018deep, title={Deep Complex Networks}, author={Chiheb Trabelsi and Olexa Bilaniuk and Ying Zhang and Dmitriy Serdyuk and Sandeep Subramanian and Joao Felipe Santos and Soroush Mehri and Negar Rostamzadeh and Yoshua Bengio and Christopher J Pal}, booktitle={International Conference on Learning Representations}, year={2018}, url={https://openreview.net/forum?id=H1T2hmZAb}, }
OpenReview/ICLR/figures/2018/accept_poster/H1T2hmZAb/Figure1.png
1
Figure 1: Complex convolution and residual network implementation details.
<paragraph_1>As illustrated in Figure 1a, if we use matrix notation to represent the real and imaginary parts of the convolution operation, we have: $\begin{bmatrix} \Re(W \ast h) \\ \Im(W \ast h) \end{bmatrix}$</paragraph_1> <paragraph_2>A deep convolutional residual network of the nature presented in He et al. (2015a; 2016) consists of 3 stages within which feature maps maintain the same shape. At the end of a stage, the feature maps are downsampled by a factor of 2 and the number of convolution filters is doubled. The sizes of the convolution kernels are always set to 3×3. Within a stage, there are several residual blocks which comprise 2 convolution layers each. The contents of one such residual block in the real and complex setting are illustrated in Appendix Figure 1b.</paragraph_2> <paragraph_3>In practice, the complex convolution operation is implemented as illustrated in Fig. 1a, where $M_I$, $M_R$ refer to imaginary and real feature maps and $K_I$ and $K_R$ refer to imaginary and real kernels. $M_I K_I$ refers to the result of a real-valued convolution between the imaginary kernels $K_I$ and the imaginary feature maps $M_I$.</paragraph_3>
diagram
0.971942
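The expansion of the complex convolution into four real-valued convolutions, as described in paragraph 3 above, is easy to state in code. This is a generic sketch of the identity Re(W*h) = M_R*K_R − M_I*K_I and Im(W*h) = M_R*K_I + M_I*K_R, not the authors' implementation; all tensor shapes are assumptions.

```python
import torch
import torch.nn.functional as F

def complex_conv2d(m_r, m_i, k_r, k_i, **kw):
    """Complex convolution W * h expressed through four real convolutions.

    m_r, m_i: real/imaginary feature maps of shape (B, C_in, H, W);
    k_r, k_i: real/imaginary kernels of shape (C_out, C_in, kH, kW).
    """
    real = F.conv2d(m_r, k_r, **kw) - F.conv2d(m_i, k_i, **kw)   # Re(W*h)
    imag = F.conv2d(m_r, k_i, **kw) + F.conv2d(m_i, k_r, **kw)   # Im(W*h)
    return real, imag

m_r, m_i = torch.randn(1, 3, 8, 8), torch.randn(1, 3, 8, 8)
k_r, k_i = torch.randn(4, 3, 3, 3), torch.randn(4, 3, 3, 3)
out_r, out_i = complex_conv2d(m_r, m_i, k_r, k_i, padding=1)
```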
OpenReview
ICLR
2,018
Few-Shot Learning with Graph Neural Networks
We propose to study the problem of few-shot learning with the prism of inference on a partially observed graphical model, constructed from a collection of input images whose label can be either observed or not. By assimilating generic message-passing inference algorithms with their neural-network counterparts, we define a graph neural network architecture that generalizes several of the recently proposed few-shot learning models. Besides providing improved numerical performance, our framework is easily extended to variants of few-shot learning, such as semi-supervised or active learning, demonstrating the ability of graph-based models to operate well on ‘relational’ tasks.
[ 7, 7, 7 ]
Accept (Poster)
Victor Garcia Satorras, Joan Bruna Estrach
vgsatorras@gmail.com, bruna@cims.nyu.edu
20171027
https://openreview.net/forum?id=BJj6qGbRW
BJj6qGbRW
@inproceedings{ garcia2018fewshot, title={Few-Shot Learning with Graph Neural Networks}, author={Victor Garcia Satorras and Joan Bruna Estrach}, booktitle={International Conference on Learning Representations}, year={2018}, url={https://openreview.net/forum?id=BJj6qGbRW}, }
OpenReview/ICLR/figures/2018/accept_poster/BJj6qGbRW/Figure1.png
1
Figure 1: Visual representation of One-Shot Learning setting.
diagram
0.989425
OpenReview
ICLR
2,018
A Simple Neural Attentive Meta-Learner
Deep neural networks excel in regimes with large amounts of data, but tend to struggle when data is scarce or when they need to adapt quickly to changes in the task. In response, recent work in meta-learning proposes training a meta-learner on a distribution of similar tasks, in the hopes of generalization to novel but related tasks by learning a high-level strategy that captures the essence of the problem it is asked to solve. However, many recent meta-learning approaches are extensively hand-designed, either using architectures specialized to a particular application, or hard-coding algorithmic components that constrain how the meta-learner solves the task. We propose a class of simple and generic meta-learner architectures that use a novel combination of temporal convolutions and soft attention; the former to aggregate information from past experience and the latter to pinpoint specific pieces of information. In the most extensive set of meta-learning experiments to date, we evaluate the resulting Simple Neural AttentIve Learner (or SNAIL) on several heavily-benchmarked tasks. On all tasks, in both supervised and reinforcement learning, SNAIL attains state-of-the-art performance by significant margins.
meta-learning, few-shot learning
a simple RNN-based meta-learner that achieves SOTA performance on popular benchmarks
[ 6, 7, 6 ]
Accept (Poster)
Nikhil Mishra, Mostafa Rohaninejad, Xi Chen, Pieter Abbeel
nmishra@berkeley.edu, rohaninejadm@berkeley.edu, adslcx@gmail.com, pabbeel@gmail.com
20171027
https://openreview.net/forum?id=B1DmUzWAW
B1DmUzWAW
@inproceedings{ mishra2018a, title={A Simple Neural Attentive Meta-Learner}, author={Nikhil Mishra and Mostafa Rohaninejad and Xi Chen and Pieter Abbeel}, booktitle={International Conference on Learning Representations}, year={2018}, url={https://openreview.net/forum?id=B1DmUzWAW}, }
OpenReview/ICLR/figures/2018/accept_poster/B1DmUzWAW/Figure1.png
1
Figure 1: Overview of our simple neural attentive learner (SNAIL); in this example, two blocks of TC layers (orange) are interleaved with two causal attention layers (green). The same class of model architectures can be applied to both supervised and reinforcement learning.
<paragraph_1>Despite their individual shortcomings, temporal convolutions and attention complement each other: while the former provide high-bandwidth access at the expense of finite context size, the latter provide pinpoint access over an infinitely large context. Hence, we construct SNAIL by combining the two: we use temporal convolutions to produce the context over which we use a causal attention operation. By interleaving TC layers with causal attention layers, SNAIL can have high-bandwidth access over its past experience without constraints on the amount of experience it can effectively use. By using attention at multiple stages within a model that is trained end-to-end, SNAIL can learn what pieces of information to pick out from the experience it gathers, as well as a feature representation that is amenable to doing so easily. As an additional benefit, SNAIL architectures are easier to train than traditional RNNs such as LSTM or GRUs (where the underlying optimization can be difficult because of the temporally-linear hidden state dependency) and can be efficiently implemented so that an entire sequence can be processed in a single forward pass. Figure 1 provides an illustration of SNAIL, and we discuss architectural components in Section 3.1.</paragraph_1>
diagram
0.87452
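A much-simplified sketch of interleaving temporal-convolution (TC) layers with causal attention, in the spirit of the SNAIL overview above. The real SNAIL dense blocks use gated activations and specific key/value sizes; the single-head attention, the ReLU TC layer, and all dimensions here are simplifications and assumptions, not the paper's exact blocks.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalAttention(nn.Module):
    """Single-head causal attention whose output is concatenated to its input."""
    def __init__(self, in_dim, key_dim=32, val_dim=32):
        super().__init__()
        self.q, self.k, self.v = (nn.Linear(in_dim, key_dim),
                                  nn.Linear(in_dim, key_dim),
                                  nn.Linear(in_dim, val_dim))

    def forward(self, x):                                   # x: (B, T, C)
        scores = self.q(x) @ self.k(x).transpose(1, 2) / self.q.out_features ** 0.5
        T = x.size(1)
        mask = torch.triu(torch.full((T, T), float('-inf')), diagonal=1)
        attn = torch.softmax(scores + mask, dim=-1)         # no peeking at the future
        return torch.cat([x, attn @ self.v(x)], dim=-1)

class DilatedCausalConv(nn.Module):
    """One TC layer: a causally (left-) padded dilated conv, concatenated to the input."""
    def __init__(self, in_dim, filters=32, dilation=1):
        super().__init__()
        self.conv = nn.Conv1d(in_dim, filters, kernel_size=2, dilation=dilation)
        self.dilation = dilation

    def forward(self, x):                                   # x: (B, T, C)
        h = F.pad(x.transpose(1, 2), (self.dilation, 0))    # left pad => causal
        h = torch.relu(self.conv(h)).transpose(1, 2)
        return torch.cat([x, h], dim=-1)

# Interleave TC layers with causal attention, as in the Figure 1 overview.
x = torch.randn(2, 20, 16)
for layer in [DilatedCausalConv(16, dilation=1), DilatedCausalConv(48, dilation=2),
              CausalAttention(80), DilatedCausalConv(112, dilation=4), CausalAttention(144)]:
    x = layer(x)
```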
OpenReview
ICLR
2,018
Boosting Dilated Convolutional Networks with Mixed Tensor Decompositions
The driving force behind deep networks is their ability to compactly represent rich classes of functions. The primary notion for formally reasoning about this phenomenon is expressive efficiency, which refers to a situation where one network must grow unfeasibly large in order to replicate functions of another. To date, expressive efficiency analyses focused on the architectural feature of depth, showing that deep networks are representationally superior to shallow ones. In this paper we study the expressive efficiency brought forth by connectivity, motivated by the observation that modern networks interconnect their layers in elaborate ways. We focus on dilated convolutional networks, a family of deep models delivering state of the art performance in sequence processing tasks. By introducing and analyzing the concept of mixed tensor decompositions, we prove that interconnecting dilated convolutional networks can lead to expressive efficiency. In particular, we show that even a single connection between intermediate layers can already lead to an almost quadratic gap, which in large-scale settings typically makes the difference between a model that is practical and one that is not. Empirical evaluation demonstrates how the expressive efficiency of connectivity, similarly to that of depth, translates into gains in accuracy. This leads us to believe that expressive efficiency may serve a key role in developing new tools for deep network design.
Deep Learning, Expressive Efficiency, Dilated Convolutions, Tensor Decompositions
We introduce the notion of mixed tensor decompositions, and use it to prove that interconnecting dilated convolutional networks boosts their expressive power.
[ 9, 8, 7 ]
Accept (Oral)
Nadav Cohen, Ronen Tamari, Amnon Shashua
cohennadav@ias.edu, ronent@cs.huji.ac.il, shashua@cs.huji.ac.il
20171024
https://openreview.net/forum?id=S1JHhv6TW
S1JHhv6TW
@inproceedings{ cohen2018boosting, title={Boosting Dilated Convolutional Networks with Mixed Tensor Decompositions}, author={Nadav Cohen and Ronen Tamari and Amnon Shashua}, booktitle={International Conference on Learning Representations}, year={2018}, url={https://openreview.net/forum?id=S1JHhv6TW}, }
OpenReview/ICLR/figures/2018/accept_oral/S1JHhv6TW/Figure4.png
4
Figure 4: Best viewed in color. (a) Two mode trees T and T̄ along with a possible choice of mixture nodes (same as in fig. 3(a)). (b) Sample of the resulting hybrid mode trees (def. 2).
diagram
0.96057
OpenReview
ICLR
2,019
Deep Layers as Stochastic Solvers
We provide a novel perspective on the forward pass through a block of layers in a deep network. In particular, we show that a forward pass through a standard dropout layer followed by a linear layer and a non-linear activation is equivalent to optimizing a convex objective with a single iteration of a $\tau$-nice Proximal Stochastic Gradient method. We further show that replacing standard Bernoulli dropout with additive dropout is equivalent to optimizing the same convex objective with a variance-reduced proximal method. By expressing both fully-connected and convolutional layers as special cases of a high-order tensor product, we unify the underlying convex optimization problem in the tensor setting and derive a formula for the Lipschitz constant $L$ used to determine the optimal step size of the above proximal methods. We conduct experiments with standard convolutional networks applied to the CIFAR-10 and CIFAR-100 datasets and show that replacing a block of layers with multiple iterations of the corresponding solver, with step size set via $L$, consistently improves classification accuracy.
deep networks, optimization
A framework that links deep network layers to stochastic optimization algorithms; can be used to improve model accuracy and inform network design.
[ 8, 7, 7 ]
Accept (Poster)
Adel Bibi, Bernard Ghanem, Vladlen Koltun, Rene Ranftl
adel.bibi@kaust.edu.sa, bernard.ghanem@kaust.edu.sa, vkoltun@gmail.com, ranftlr@gmail.com
20180927
https://openreview.net/forum?id=ryxxCiRqYX
ryxxCiRqYX
@inproceedings{ bibi2018deep, title={Deep Layers as Stochastic Solvers}, author={Adel Bibi and Bernard Ghanem and Vladlen Koltun and Rene Ranftl}, booktitle={International Conference on Learning Representations}, year={2019}, url={https://openreview.net/forum?id=ryxxCiRqYX}, }
OpenReview/ICLR/figures/2019/accept_poster/ryxxCiRqYX/Figure1.png
1
Figure 1: An overview of the tight relation between a single iteration of a stochastic solver and the forward pass through the $l$th layer in a network that consists of dropout followed by a linear transformation and a non-linear activation. We study an instance of problem (1) with quadratic $F(x)$, where $x_{l-1}$ are the input activations and $x_l$, the variables being optimized, correspond to the output activations. Varying the type of stochastic solver changes the nature of the dropout layer, while the prior $g(x)$ on the output activations determines the non-linearity $\mathrm{Prox}_{\frac{1}{L} g}(\cdot)$.
<paragraph_1>This section is organized as follows. We introduce our notation and preliminaries in Section 3.1. In Section 3.2, we present a motivational example relating a single iteration of proximal gradient descent (Prox-GD) on (1) to the forward pass through a fully-connected layer followed by a nonlinear activation. We will show that several commonly used non-linear activations can be exactly or approximately represented as proximal operators of g(x). In Section 3.3, we unify fully-connected and convolutional layers as special cases of a high-order tensor product. We propose a generic instance of (1) in a tensor setting, where we provide a formula for the Lipschitz constant L of the finite sum structure of (1). In Section 3.4, we derive an intimate relation between stochastic solvers, namely τ-nice Prox-SG and mS2GD, and two types of dropout layers. Figure 1 shows an overview of the connections that will be developed.</paragraph_1>
diagram
0.997574
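One way to see the motivating equivalence in its simplest deterministic form: ReLU is the proximal operator of the indicator of the nonnegative orthant, so relu(W @ x_prev) coincides with one proximal-gradient step (from zero, step size 1/L = 1) on a quadratic objective. The numeric check below is a sketch of that special case only; the paper's general setting adds dropout as stochastic sampling of a finite-sum F, which this snippet does not model.

```python
import numpy as np

# Check: relu(W @ x_prev) equals one proximal-gradient step on
# F(x) = 0.5 * ||x||^2 - (W @ x_prev)^T x with g = indicator of {x >= 0},
# starting from x = 0 with step size 1/L = 1 (assumed simplest case).
rng = np.random.default_rng(0)
W, x_prev = rng.normal(size=(5, 3)), rng.normal(size=3)
v = W @ x_prev

relu_forward = np.maximum(v, 0.0)          # ordinary layer forward pass

x = np.zeros(5)
grad_F = x - v                             # gradient of F at x = 0
prox_step = np.maximum(x - grad_F, 0.0)    # prox of the nonnegative indicator

assert np.allclose(relu_forward, prox_step)
```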
OpenReview
ICLR
2,019
Learning Factorized Multimodal Representations
Learning multimodal representations is a fundamentally complex research problem due to the presence of multiple heterogeneous sources of information. Although the presence of multiple modalities provides additional valuable information, there are two key challenges to address when learning from multimodal data: 1) models must learn the complex intra-modal and cross-modal interactions for prediction and 2) models must be robust to unexpected missing or noisy modalities during testing. In this paper, we propose to optimize for a joint generative-discriminative objective across multimodal data and labels. We introduce a model that factorizes representations into two sets of independent factors: multimodal discriminative and modality-specific generative factors. Multimodal discriminative factors are shared across all modalities and contain joint multimodal features required for discriminative tasks such as sentiment prediction. Modality-specific generative factors are unique for each modality and contain the information required for generating data. Experimental results show that our model is able to learn meaningful multimodal representations that achieve state-of-the-art or competitive performance on six multimodal datasets. Our model demonstrates flexible generative capabilities by conditioning on independent factors and can reconstruct missing modalities without significantly impacting performance. Lastly, we interpret our factorized representations to understand the interactions that influence multimodal learning.
multimodal learning, representation learning
We propose a model to learn factorized multimodal representations that are discriminative, generative, and interpretable.
[ 6, 7, 7 ]
Accept (Poster)
Yao-Hung Hubert Tsai, Paul Pu Liang, Amir Zadeh, Louis-Philippe Morency, Ruslan Salakhutdinov
yaohungt@cs.cmu.edu, pliang@cs.cmu.edu, abagherz@cs.cmu.edu, morency@cs.cmu.edu, rsalakhu@cs.cmu.edu
20180927
https://openreview.net/forum?id=rygqqsA9KX
rygqqsA9KX
@inproceedings{ tsai2018learning, title={Learning Factorized Multimodal Representations}, author={Yao-Hung Hubert Tsai and Paul Pu Liang and Amir Zadeh and Louis-Philippe Morency and Ruslan Salakhutdinov}, booktitle={International Conference on Learning Representations}, year={2019}, url={https://openreview.net/forum?id=rygqqsA9KX}, }
OpenReview/ICLR/figures/2019/accept_poster/rygqqsA9KX/Figure6.png
6
Figure 6: The surrogate inference graphical model to deal with missing modalities in MFM. Red lines denote original inference in MFM and green lines denote surrogate inference to infer latent codes given present modalities.
<paragraph_1>We illustrate the surrogate inference for addressing the missing modalities issue in Figure 6. The surrogate inference model infers the latent codes given the present modalities. These inferred latent codes can then be used for reconstructing the missing modalities or label prediction in the presence of missing modalities.</paragraph_1>
diagram
0.96599
OpenReview
ICLR
2,019
Conditional Network Embeddings
Network Embeddings (NEs) map the nodes of a given network into $d$-dimensional Euclidean space $\mathbb{R}^d$. Ideally, this mapping is such that 'similar' nodes are mapped onto nearby points, such that the NE can be used for purposes such as link prediction (if 'similar' means being 'more likely to be connected') or classification (if 'similar' means 'being more likely to have the same label'). In recent years various methods for NE have been introduced, all following a similar strategy: defining a notion of similarity between nodes (typically some distance measure within the network), a distance measure in the embedding space, and a loss function that penalizes large distances for similar nodes and small distances for dissimilar nodes. A difficulty faced by existing methods is that certain networks are fundamentally hard to embed due to their structural properties: (approximate) multipartiteness, certain degree distributions, assortativity, etc. To overcome this, we introduce a conceptual innovation to the NE literature and propose to create \emph{Conditional Network Embeddings} (CNEs); embeddings that maximally add information with respect to given structural properties (e.g. node degrees, block densities, etc.). We use a simple Bayesian approach to achieve this, and propose a block stochastic gradient descent algorithm for fitting it efficiently. We demonstrate that CNEs are superior for link prediction and multi-label classification when compared to state-of-the-art methods, and this without adding significant mathematical or computational complexity. Finally, we illustrate the potential of CNE for network visualization.
Network embedding, graph embedding, learning node representations, link prediction, multi-label classification of nodes
We introduce a network embedding method that accounts for prior information about the network, yielding superior empirical performance.
[ 5, 6, 4 ]
Accept (Poster)
Bo Kang, Jefrey Lijffijt, Tijl De Bie
bo.kang@ugent.be, jefrey.lijffijt@ugent.be, tijl.debie@ugent.be
20180927
https://openreview.net/forum?id=ryepUj0qtX
ryepUj0qtX
@inproceedings{ kang2018conditional, title={Conditional Network Embeddings}, author={Bo Kang and Jefrey Lijffijt and Tijl De Bie}, booktitle={International Conference on Learning Representations}, year={2019}, url={https://openreview.net/forum?id=ryepUj0qtX}, }
OpenReview/ICLR/figures/2019/accept_poster/ryepUj0qtX/Figure2.png
2
Figure 2: The entity relationship diagram of the studentdb dataset.
<paragraph_1>• Facebook (Leskovec & Krevl, 2015): In this network, nodes are the users and links represent the friendships between the users. The network has 4,039 nodes and 88,234 links. • arXiv ASTRO-PH (Leskovec & Krevl, 2015): In this network nodes represent authors of papers submitted to arXiv. The links represent the collaborations: two authors are connected if they co-authored at least one paper. The network has 18,722 nodes and 198,110 links. • studentdb (Goethals et al., 2010): This is a snapshot of the student database from the University of Antwerp’s Computer Science department. There are 403 nodes that belong to one of the following node types: course, student, professor, program, track, contract, and room. There are 3,429 links that are the binary relationships between the nodes: student-in-track, student-in-program, student-in-contract, student-take-course, professor-teach-course, course-in-room. The database schema is given in Figure 2. • Gowalla (Cho et al., 2011): This is an undirected location-based friendship network. The network has 196,591 nodes and 950,327 links. • BlogCatalog (Zafarani & Liu, 2009): This social network contains nodes representing bloggers and links representing their relations with other bloggers. The labels are the bloggers’ interests inferred from the meta data. The network has 10,312 nodes, 333,983 links, and 39 labels (used for multi-label classification). • Protein-Protein Interactions (PPI) (Breitkreutz et al., 2007): A subnetwork of the PPI network for Homo Sapiens. The subnetwork has 3,890 nodes, 76,584 links, and 50 labels. • Wikipedia (Mahoney, 2011): This network contains nodes representing words and links representing the co-occurrence of words in Wikipedia pages. The labels represent the inferred Part-of-Speech tags (Toutanova et al., 2003). The network has 4,777 nodes, 184,812 links, and 40 different labels.</paragraph_1>
diagram
0.992351
OpenReview
ICLR
2,019
DPSNet: End-to-end Deep Plane Sweep Stereo
Multiview stereo aims to reconstruct scene depth from images acquired by a camera under arbitrary motion. Recent methods address this problem through deep learning, which can utilize semantic cues to deal with challenges such as textureless and reflective regions. In this paper, we present a convolutional neural network called DPSNet (Deep Plane Sweep Network) whose design is inspired by best practices of traditional geometry-based approaches. Rather than directly estimating depth and/or optical flow correspondence from image pairs as done in many previous deep learning methods, DPSNet takes a plane sweep approach that involves building a cost volume from deep features using the plane sweep algorithm, regularizing the cost volume via a context-aware cost aggregation, and regressing the depth map from the cost volume. The cost volume is constructed using a differentiable warping process that allows for end-to-end training of the network. Through the effective incorporation of conventional multiview stereo concepts within a deep learning framework, DPSNet achieves state-of-the-art reconstruction results on a variety of challenging datasets.
Deep Learning, Stereo, Depth, Geometry
A convolution neural network for multi-view stereo matching whose design is inspired by best practices of traditional geometry-based approaches
[ 6, 6, 7 ]
Accept (Poster)
Sunghoon Im, Hae-Gon Jeon, Stephen Lin, In So Kweon
dlarl8927@kaist.ac.kr, haegonj@andrew.cmu.edu, stevelin@microsoft.com, iskweon77@kaist.ac.kr
20180927
https://openreview.net/forum?id=ryeYHi0ctQ
ryeYHi0ctQ
@inproceedings{ im2018dpsnet, title={{DPSN}et: End-to-end Deep Plane Sweep Stereo}, author={Sunghoon Im and Hae-Gon Jeon and Stephen Lin and In So Kweon}, booktitle={International Conference on Learning Representations}, year={2019}, url={https://openreview.net/forum?id=ryeYHi0ctQ}, }
OpenReview/ICLR/figures/2019/accept_poster/ryeYHi0ctQ/Figure2.png
2
Figure 2: Overview of the DPSNet pipeline.
<paragraph_1>Our Deep Plane Sweep Network (DPSNet) is inspired by traditional multiview stereo practices for dense depth estimation and consists of four parts: feature extraction, cost volume generation, cost aggregation and depth map regression. The overall framework is shown in Figure 2.</paragraph_1>
diagram
0.962483
OpenReview
ICLR
2,019
Graph HyperNetworks for Neural Architecture Search
Neural architecture search (NAS) automatically finds the best task-specific neural network topology, outperforming many manual architecture designs. However, it can be prohibitively expensive as the search requires training thousands of different networks, while each training run can last for hours. In this work, we propose the Graph HyperNetwork (GHN) to amortize the search cost: given an architecture, it directly generates the weights by running inference on a graph neural network. GHNs model the topology of an architecture and therefore can predict network performance more accurately than regular hypernetworks and premature early stopping. To perform NAS, we randomly sample architectures and use the validation accuracy of networks with GHN generated weights as the surrogate search signal. GHNs are fast - they can search nearly 10× faster than other random search methods on CIFAR-10 and ImageNet. GHNs can be further extended to the anytime prediction setting, where they have found networks with better speed-accuracy tradeoff than the state-of-the-art manual designs.
neural, architecture, search, graph, network, hypernetwork, meta, learning, anytime, prediction
[ 7, 6, 7 ]
Accept (Poster)
Chris Zhang, Mengye Ren, Raquel Urtasun
cjzhang@edu.uwaterloo.ca, mren@cs.toronto.edu, urtasun@cs.toronto.edu
20180927
https://openreview.net/forum?id=rkgW0oA9FX
rkgW0oA9FX
@inproceedings{ zhang2018graph, title={Graph HyperNetworks for Neural Architecture Search}, author={Chris Zhang and Mengye Ren and Raquel Urtasun}, booktitle={International Conference on Learning Representations}, year={2019}, url={https://openreview.net/forum?id=rkgW0oA9FX}, }
OpenReview/ICLR/figures/2019/accept_poster/rkgW0oA9FX/Figure7.png
7
Figure 7: Best block found for classification
<paragraph_1>Figure 7 shows the best found block in the CIFAR-10 Experiments.</paragraph_1>
diagram
0.938143
OpenReview
ICLR
2,019
Learning Implicitly Recurrent CNNs Through Parameter Sharing
We introduce a parameter sharing scheme, in which different layers of a convolutional neural network (CNN) are defined by a learned linear combination of parameter tensors from a global bank of templates. Restricting the number of templates yields a flexible hybridization of traditional CNNs and recurrent networks. Compared to traditional CNNs, we demonstrate substantial parameter savings on standard image classification tasks, while maintaining accuracy. Our simple parameter sharing scheme, though defined via soft weights, in practice often yields trained networks with near strict recurrent structure; with negligible side effects, they convert into networks with actual loops. Training these networks thus implicitly involves discovery of suitable recurrent architectures. Though considering only the aspect of recurrent links, our trained networks achieve accuracy competitive with those built using state-of-the-art neural architecture search (NAS) procedures. Our hybridization of recurrent and convolutional networks may also represent a beneficial architectural bias. Specifically, on synthetic tasks which are algorithmic in nature, our hybrid networks both train faster and extrapolate better to test examples outside the span of the training set.
deep learning, architecture search, computer vision
We propose a method that enables CNN folding to create recurrent connections
[ 6, 7, 8 ]
Accept (Poster)
Pedro Savarese, Michael Maire
savarese@ttic.edu, mmaire@uchicago.edu
20180927
https://openreview.net/forum?id=rJgYxn09Fm
rJgYxn09Fm
@inproceedings{ savarese2018learning, title={Learning Implicitly Recurrent {CNN}s Through Parameter Sharing}, author={Pedro Savarese and Michael Maire}, booktitle={International Conference on Learning Representations}, year={2019}, url={https://openreview.net/forum?id=rJgYxn09Fm}, }
OpenReview/ICLR/figures/2019/accept_poster/rJgYxn09Fm/Figure6.png
6
Figure 6: SWRN 40-8-8 (8 parameter templates shared among groups of $\frac{40-4}{3} - 2 = 10$ layers) trained with soft parameter sharing on CIFAR-10. Each stage (originally with 12 layers – the first two do not participate in parameter sharing) can be folded to yield blocks with complex recurrences. For clarity, we use colors to indicate the computational flow: red takes precedence over green, which in turn has precedence over blue. Colored paths are only taken once per stage. Although not trivial to see, recurrences in each stage’s folded form are determined by row/column repetitions in the respective Layer Similarity Matrix. For example, for stage 2 we have $S_{5,3} \approx S_{6,4} \approx 1$, meaning that layers 3, 4, 5 and 6 can be folded into layers 3 and 4 with a loop (captured by the red edge). The same holds for $S_{7,1}$, $S_{8,2}$, $S_{9,3}$ and $S_{10,4}$, hence after the loop with layers 3 and 4, the flow returns to layer 1 and goes all the way to layer 4, which generates the stage’s output. Even though there is an approximation when folding the network (in this example, we are tying layers with similarity close to 0.8), the impact on the test error is less than 0.3%. Also note that the folded model has a total of 24 layers (20 in the stage diagrams, plus 4 which are not shown, corresponding to the first layer of the network and three 1×1 convolutions in skip-connections), instead of the original 40.
<paragraph_1>Figure 6 presents an additional example, where non-trivial recurrences (unlike the one in Figure 4) emerge naturally, resulting in a model that is rich in structure.</paragraph_1>
diagram
0.970068
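The parameter-sharing scheme described in the record above (each layer's filter bank is a learned linear combination of a global bank of templates) can be sketched in a few lines of numpy. This is a minimal illustration under assumed shapes and template count; the variable names, the einsum formulation, and the cosine-similarity reading of the Layer Similarity Matrix are assumptions rather than the authors' code.

import numpy as np

rng = np.random.default_rng(0)
num_templates, num_layers = 8, 10          # assumed bank size and sharing-group size
tmpl_shape = (64, 64, 3, 3)                # assumed conv filter bank shape (out, in, kH, kW)

# Global bank of parameter templates shared by every layer in the group.
templates = rng.normal(size=(num_templates, *tmpl_shape))
# Per-layer mixing coefficients, learned jointly with the templates.
alphas = rng.normal(size=(num_layers, num_templates))

# Each layer's effective filter bank is a linear combination of the templates.
layer_weights = np.einsum("lk,kabcd->labcd", alphas, templates)

# One reading of the "Layer Similarity Matrix": cosine similarity between the
# coefficient vectors; rows/columns that nearly repeat suggest foldable layers.
unit = alphas / np.linalg.norm(alphas, axis=1, keepdims=True)
similarity = unit @ unit.T
print(layer_weights.shape, similarity.shape)   # (10, 64, 64, 3, 3) (10, 10)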
OpenReview
ICLR
2,019
Self-Monitoring Navigation Agent via Auxiliary Progress Estimation
The Vision-and-Language Navigation (VLN) task entails an agent following navigational instruction in photo-realistic unknown environments. This challenging task demands that the agent be aware of which instruction was completed, which instruction is needed next, which way to go, and its navigation progress towards the goal. In this paper, we introduce a self-monitoring agent with two complementary components: (1) visual-textual co-grounding module to locate the instruction completed in the past, the instruction required for the next action, and the next moving direction from surrounding images and (2) progress monitor to ensure the grounded instruction correctly reflects the navigation progress. We test our self-monitoring agent on a standard benchmark and analyze our proposed approach through a series of ablation studies that elucidate the contributions of the primary components. Using our proposed method, we set the new state of the art by a significant margin (8% absolute increase in success rate on the unseen test set). Code is available at https://github.com/chihyaoma/selfmonitoring-agent.
visual grounding, textual grounding, instruction-following, navigation agent
We propose a self-monitoring agent for the Vision-and-Language Navigation task.
[ 7, 6, 8 ]
Accept (Poster)
Chih-Yao Ma, Jiasen Lu, Zuxuan Wu, Ghassan AlRegib, Zsolt Kira, Richard Socher, Caiming Xiong
cyma@gatech.edu, jiasenlu@gatech.edu, zxwu@cs.umd.edu, alregib@gatech.edu, zkira@gatech.edu, rsocher@salesforce.com, cxiong@salesforce.com
20180927
https://openreview.net/forum?id=r1GAsjC5Fm
r1GAsjC5Fm
@misc{ ma2019selfmonitoring, title={Self-Monitoring Navigation Agent via Auxiliary Progress Estimation}, author={Chih-Yao Ma and Jiasen Lu and Zuxuan Wu and Ghassan AlRegib and Zsolt Kira and Richard Socher and Caiming Xiong}, year={2019}, url={https://openreview.net/forum?id=r1GAsjC5Fm}, }
OpenReview/ICLR/figures/2019/accept_poster/r1GAsjC5Fm/Figure2.png
2
Figure 2: Proposed self-monitoring agent consisting of visual-textual co-grounding, progress monitoring, and action selection modules. Textual grounding: identify which part of the instruction has been completed or ongoing and which part is potentially needed for next action. Visual grounding: summarize the observed surrounding images. Progress monitor: regularize and ensure grounded instruction reflects progress towards the goal. Action selection: identify which direction to go.
<paragraph_1>First, we propose a visual and textual co-grounding model for the vision and language navigation task, as illustrated in Fig. 2. We model the agent with a sequence-to-sequence architecture with attention by using a recurrent neural network. More specifically, we use Long Short-Term Memory (LSTM) to carry the flow of information effectively. At each step t, the decoder observes representations of the current attended panoramic image feature v̂_t, the previously selected action a_{t−1}, and the current grounded instruction feature x̂_t as input, and outputs an encoder context h_t:</paragraph_1> <paragraph_2>Textual grounding. When the agent moves from one viewpoint to another, it is required to identify which direction to go by relying on a grounded instruction, i.e. which parts of the instruction should be used. This can either be the instruction matched with the past (ongoing action) or predicted for the future (next action). To capture the relative position between words within an instruction, we incorporate the positional encoding PE(·) (Vaswani et al., 2017) into the instruction features. We then perform soft-attention on the instruction features X, as shown on the left side of Fig. 2. The attention distribution over L words of the instructions is computed as:</paragraph_2> <paragraph_3>The progress monitor aims to estimate the navigation progress by conditioning on three inputs: the history of grounded images and instructions, the current observation of the surrounding images, and the positions of grounded instructions. We therefore represent these inputs by using (1) the previous hidden state h_{t−1} and the current cell state c_t of the LSTM, (2) the grounded surrounding images v̂_t, and (3) the distribution of attention weights from textual grounding, α_t, as shown at the bottom of Fig. 2 represented by dotted lines.</paragraph_3> <paragraph_4>In Fig. 7 (b) step 2, the attention on the instruction only focuses on “go down” and thus fails to associate the “go down steps” with the stairs previously mentioned in “turn right to stairs”. The agent was however able to follow the rest of the instruction correctly by turning right and stopping near a mirror. Note that, different from Fig. 7 (a), the final estimated completeness of instruction-following from the progress monitor is much higher (16%), which indicates that the agent failed to be aware that it was not correctly following the instruction.</paragraph_4>
diagram
0.967222
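The textual-grounding step described in the record above (soft attention over the L instruction words after adding a positional encoding) can be sketched as follows. The bilinear scoring matrix W, the feature sizes, and the function names are illustrative assumptions that follow the description of the mechanism, not the authors' exact parameterization.

import numpy as np

def positional_encoding(L, d):
    # Sinusoidal positional encoding in the spirit of Vaswani et al. (2017).
    pos = np.arange(L)[:, None]
    i = np.arange(d)[None, :]
    angle = pos / np.power(10000.0, (2 * (i // 2)) / d)
    return np.where(i % 2 == 0, np.sin(angle), np.cos(angle))

def textual_grounding(X, h_prev, W):
    # X: (L, d) instruction word features; h_prev: previous decoder state (d_h,).
    Xp = X + positional_encoding(*X.shape)      # inject word positions
    scores = Xp @ W @ h_prev                    # one score per word (bilinear form, assumed)
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                        # attention distribution over the L words
    x_hat = alpha @ Xp                          # grounded instruction feature
    return alpha, x_hat

rng = np.random.default_rng(0)
L, d, d_h = 12, 32, 64
alpha_t, x_hat_t = textual_grounding(rng.normal(size=(L, d)),
                                     rng.normal(size=d_h),
                                     0.1 * rng.normal(size=(d, d_h)))
print(alpha_t.shape, x_hat_t.shape)             # (12,) (32,)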
OpenReview
ICLR
2,019
LEARNING TO PROPAGATE LABELS: TRANSDUCTIVE PROPAGATION NETWORK FOR FEW-SHOT LEARNING
The goal of few-shot learning is to learn a classifier that generalizes well even when trained with a limited number of training instances per class. The recently introduced meta-learning approaches tackle this problem by learning a generic classifier across a large number of multiclass classification tasks and generalizing the model to a new task. Yet, even with such meta-learning, the low-data problem in the novel classification task still remains. In this paper, we propose Transductive Propagation Network (TPN), a novel meta-learning framework for transductive inference that classifies the entire test set at once to alleviate the low-data problem. Specifically, we propose to learn to propagate labels from labeled instances to unlabeled test instances, by learning a graph construction module that exploits the manifold structure in the data. TPN jointly learns both the parameters of feature embedding and the graph construction in an end-to-end manner. We validate TPN on multiple benchmark datasets, on which it largely outperforms existing few-shot learning approaches and achieves the state-of-the-art results.
few-shot learning, meta-learning, label propagation, manifold learning
We propose a novel meta-learning framework for transductive inference that classifies the entire test set at once to alleviate the low-data problem.
[ 7, 6, 5 ]
Accept (Poster)
Yanbin Liu, Juho Lee, Minseop Park, Saehoon Kim, Eunho Yang, Sung Ju Hwang, Yi Yang
csyanbin@gmail.com, juho.lee@stats.ox.ac.uk, mike_seop@aitrics.com, shkim@aitrics.com, eunhoy@kaist.ac.kr, sjhwang82@kaist.ac.kr, yi.yang@uts.edu.au
20180927
https://openreview.net/forum?id=SyVuRiC5K7
SyVuRiC5K7
@inproceedings{ liu2018learning, title={{LEARNING} {TO} {PROPAGATE} {LABELS}: {TRANSDUCTIVE} {PROPAGATION} {NETWORK} {FOR} {FEW}-{SHOT} {LEARNING}}, author={Yanbin Liu and Juho Lee and Minseop Park and Saehoon Kim and Eunho Yang and Sungju Hwang and Yi Yang}, booktitle={International Conference on Learning Representations}, year={2019}, url={https://openreview.net/forum?id=SyVuRiC5K7}, }
OpenReview/ICLR/figures/2019/accept_poster/SyVuRiC5K7/Figure1.png
1
Figure 1: A conceptual illustration of our transductive meta-learning framework, where lines between nodes represent graph connections and their colors represent the potential direction of label propagation. The neighborhood graph is trained episode-wise for transductive inference.
<paragraph_1>Yet, with the meta-learning by episodic training, we can learn the label propagation network as the query examples sampled from the training set can be used to simulate the real test set for transductive inference. Motivated by this finding, we propose Transductive Propagation Network (TPN) to deal with the low-data problem. Instead of applying the inductive inference, we utilize the entire query set for transductive inference (see Figure 1). Specifically, we first map the input to an embedding space using a deep neural network. Then a graph construction module is proposed to exploit the manifold structure of the novel class space using the union of support set and query set. According to the graph structure, iterative label propagation is applied to propagate labels from the support set to the query set and finally leads to a closed-form solution. With the propagated scores and ground truth labels of the query set, we compute the cross-entropy loss with respect to the feature embedding and graph construction parameters. Finally, all parameters can be updated end-to-end using backpropagation.</paragraph_1> <paragraph_2>Graph construction in each episode We follow the episodic paradigm for few-shot meta-learner training. This means that the graph is individually constructed for each task in each episode, as shown in Figure 1. Typically, in 5-way 5-shot training, N = 5, K = 5, T = 75, the dimension of W is only 100 × 100, which is quite efficient.</paragraph_2>
diagram
0.955474
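The iterative label propagation with a closed-form solution mentioned in the record above matches the classic formulation F* = (I − αS)^{-1} Y on a symmetrically normalized graph. The sketch below uses a fixed Gaussian-kernel graph for illustration; in TPN the graph construction itself is learned per episode, which is not reproduced here, and the kernel width and α = 0.99 are assumptions.

import numpy as np

def label_propagation(features, Y, alpha=0.99, sigma=1.0):
    # features: (N, d) embeddings of support + query examples; Y: (N, C) one-hot
    # rows for labeled support examples and zero rows for the queries.
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)                            # no self-loops
    d_inv_sqrt = 1.0 / np.sqrt(W.sum(1))
    S = W * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]   # D^{-1/2} W D^{-1/2}
    N = features.shape[0]
    return np.linalg.solve(np.eye(N) - alpha * S, Y)    # closed-form propagation

rng = np.random.default_rng(0)
support, query = rng.normal(size=(25, 16)), rng.normal(size=(75, 16))  # 5-way 5-shot episode
Y = np.zeros((100, 5))
Y[np.arange(25), np.repeat(np.arange(5), 5)] = 1.0      # labels only for the support set
scores = label_propagation(np.vstack([support, query]), Y)
print(scores[25:].argmax(1).shape)                      # transductive query predictions: (75,)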
OpenReview
ICLR
2,019
Learning Programmatically Structured Representations with Perceptor Gradients
We present the perceptor gradients algorithm -- a novel approach to learning symbolic representations based on the idea of decomposing an agent's policy into i) a perceptor network extracting symbols from raw observation data and ii) a task encoding program which maps the input symbols to output actions. We show that the proposed algorithm is able to learn representations that can be directly fed into a Linear-Quadratic Regulator (LQR) or a general purpose A* planner. Our experimental results confirm that the perceptor gradients algorithm is able to efficiently learn transferable symbolic representations as well as generate new observations according to a semantically meaningful specification.
representation learning, structured representations, symbols, programs
[ 5, 6, 7 ]
Accept (Poster)
Svetlin Penkov, Subramanian Ramamoorthy
sv.penkov@gmail.com, s.ramamoorthy@ed.ac.uk
20180927
https://openreview.net/forum?id=SJggZnRcFQ
SJggZnRcFQ
@inproceedings{ penkov2018learning, title={Learning Programmatically Structured Representations with Perceptor Gradients}, author={Svetlin Penkov and Subramanian Ramamoorthy}, booktitle={International Conference on Learning Representations}, year={2019}, url={https://openreview.net/forum?id=SJggZnRcFQ}, }
OpenReview/ICLR/figures/2019/accept_poster/SJggZnRcFQ/Figure2.png
2
Figure 2: A diagram of the cart-pole experimental setup.
<paragraph_1>We first consider the problem of balancing a cart-pole system by learning symbolic representations from the raw image observations. The cart-pole system is well studied in optimal control theory and it is typically balanced with an LQR (Zhou et al., 1996). We exploit this knowledge and set the program ρ to implement an LQR. The perceptor ψ_θ is a convolutional neural network (see A.1), as shown in the overall experiment diagram in Figure 2. We define the state vector as</paragraph_1> <paragraph_2>where x ∈ R is the linear position of the cart and α ∈ R is the angle of the pendulum with respect to its vertical position as shown in Figure 2.</paragraph_2> <paragraph_3>The input of the cart-pole feedforward perceptor is a stack of 4 consecutive grayscale 32 × 128 images onto which we render the cart-pole system, as shown in Figure 2. This is a setup similar to the one proposed in (Mnih et al., 2015), which preserves temporal information in the input such that it can be processed by a convolutional neural network. The architecture of the perceptor ψ_θ is shown in Figure 12. Note that the perceptor shares its convolutional layers with the baseline network b_φ. The outputs of the perceptor are the mean and the diagonal covariance matrix of a 4-dimensional normal distribution.</paragraph_3> <paragraph_4>For this experiment we designed an autoencoding perceptor, the architecture of which is shown in Figure 13. The input is a single color image containing a 2.5D rendering of the world as shown in</paragraph_4> <paragraph_5>where x ∈ R is the linear position of the cart and α ∈ R is the angle of the pendulum with respect to its vertical position as shown in Figure 2. By following the derivation in (Lam, 2004) of the linearised state space model of the system around the unstable equilibrium [0 0 0 0]^T (we ignore the modelling of the gearbox and the motor), we set the system matrix A and input matrix B to</paragraph_5>
diagram
0.996308
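The program ρ in the record above is an LQR acting on the perceptor's state estimate [x, ẋ, α, α̇]. A minimal sketch follows; the Riccati fixed-point iteration is generic, and the linearized A and B matrices are illustrative cart-pole-like values rather than those derived from (Lam, 2004) in the paper.

import numpy as np

def dlqr(A, B, Q, R, iters=500):
    # Discrete-time LQR gain via fixed-point iteration of the Riccati equation.
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

dt, g = 0.02, 9.8                                # assumed time step and gravity
A = np.eye(4) + dt * np.array([[0, 1, 0, 0],     # state: [x, x_dot, alpha, alpha_dot]
                               [0, 0, -g, 0],
                               [0, 0, 0, 1],
                               [0, 0, 2 * g, 0]])
B = dt * np.array([[0.0], [1.0], [0.0], [-1.0]])
K = dlqr(A, B, Q=np.eye(4), R=np.eye(1))

state = np.array([0.1, 0.0, 0.05, 0.0])          # state estimate produced by the perceptor
u = -K @ state                                   # control action returned by the program
print(K.shape, u)                                # (1, 4) and a length-1 force vector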
OpenReview
ICLR
2,019
Improving Sequence-to-Sequence Learning via Optimal Transport
Sequence-to-sequence models are commonly trained via maximum likelihood estimation (MLE). However, standard MLE training considers a word-level objective, predicting the next word given the previous ground-truth partial sentence. This procedure focuses on modeling local syntactic patterns, and may fail to capture long-range semantic structure. We present a novel solution to alleviate these issues. Our approach imposes global sequence-level guidance via new supervision based on optimal transport, enabling the overall characterization and preservation of semantic features. We further show that this method can be understood as a Wasserstein gradient flow trying to match our model to the ground truth sequence distribution. Extensive experiments are conducted to validate the utility of the proposed approach, showing consistent improvements over a wide variety of NLP tasks, including machine translation, abstractive text summarization, and image captioning.
NLP, optimal transport, sequence to sequence, natural language processing
[ 5, 7, 6 ]
Accept (Poster)
Liqun Chen, Yizhe Zhang, Ruiyi Zhang, Chenyang Tao, Zhe Gan, Haichao Zhang, Bai Li, Dinghan Shen, Changyou Chen, Lawrence Carin
liqun.chen@duke.edu, yizhe.zhang@microsoft.com, rz68@duke.edu, chenyang.tao@duke.edu, zhe.gan@microsoft.com, hczhang1@gmail.com, bai.li@duke.edu, dinghan.shen@duke.edu, cchangyou@gmail.com, lcarin@duke.edu
20180927
https://openreview.net/forum?id=S1xtAjR5tX
S1xtAjR5tX
@inproceedings{ chen2018improving, title={Improving Sequence-to-Sequence Learning via Optimal Transport}, author={Liqun Chen and Yizhe Zhang and Ruiyi Zhang and Chenyang Tao and Zhe Gan and Haichao Zhang and Bai Li and Dinghan Shen and Changyou Chen and Lawrence Carin}, booktitle={International Conference on Learning Representations}, year={2019}, url={https://openreview.net/forum?id=S1xtAjR5tX}, }
OpenReview/ICLR/figures/2019/accept_poster/S1xtAjR5tX/Figure2.png
2
Figure 2: Schematic computation graph of OT loss.
<paragraph_1>2.2 OPTIMAL TRANSPORT DISTANCE AS A SEQUENCE LEVEL LOSS. Figure 2 illustrates how OT is computed to construct the sequence-level loss. Given two sentences, we can construct their word-level or phrase-level embedding matrices S and S′, where S = {z_i} is usually recognized as the reference sequence embedding and S′ = {z′_j} for the model output sequence embedding. The cost matrix C is then computed by C_ij = c(z_i, z′_j) and passed on to the IPOT algorithm to get the OT distance. Our full algorithm is summarized in Algorithm 2, and more detailed model specifications are given below.</paragraph_1>
diagram
0.991626
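The sequence-level OT loss in the record above is the inner product ⟨T, C⟩ between a transport plan T, computed here with the IPOT algorithm (Xie et al., 2018), and a cost matrix C_ij = c(z_i, z′_j). The sketch below assumes a cosine cost and uniform word weights; beta, the iteration counts, and all names are illustrative choices, not the authors' exact settings.

import numpy as np

def ipot(C, beta=0.5, n_iters=50, k_inner=1):
    # Inexact Proximal point method for OT: returns an (n, m) transport plan.
    n, m = C.shape
    sigma = np.ones(m) / m
    T = np.ones((n, m))
    A = np.exp(-C / beta)
    for _ in range(n_iters):
        Q = A * T                            # proximal kernel
        for _ in range(k_inner):             # Sinkhorn-style inner updates
            delta = 1.0 / (n * (Q @ sigma))
            sigma = 1.0 / (m * (Q.T @ delta))
        T = delta[:, None] * Q * sigma[None, :]
    return T

def ot_sequence_loss(S_ref, S_hyp):
    # Rows are word embeddings of the reference and the model output sequence.
    S_ref = S_ref / np.linalg.norm(S_ref, axis=1, keepdims=True)
    S_hyp = S_hyp / np.linalg.norm(S_hyp, axis=1, keepdims=True)
    C = 1.0 - S_ref @ S_hyp.T                # cosine cost C_ij = 1 - cos(z_i, z'_j)
    T = ipot(C)
    return np.sum(T * C)                     # sequence-level OT loss <T, C>

rng = np.random.default_rng(0)
print(ot_sequence_loss(rng.normal(size=(7, 32)), rng.normal(size=(9, 32))))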