huggingface-top-papers / huggingface-top-papers.csv
Add HuggingFace top papers dataset (2025, 663 entries)
rank,name,times_trended,best_rank,avg_rank,median_rank,publish_date,max_upvotes,max_github_stars,arxiv_link
1,LlamaFactory: Unified Efficient Fine-Tuning of 100+ Language Models,432,2,11.28,11,"Mar 20, 2024",173,63300,https://arxiv.org/abs/2403.13372
2,DINOv3,346,1,21.02,15,"Aug 13, 2025",284,8590,https://arxiv.org/abs/2508.10104
3,Agent Lightning: Train ANY AI Agents with Reinforcement Learning,325,1,20.18,20,"Aug 5, 2025",120,9090,https://arxiv.org/abs/2508.03680
4,WebDancer: Towards Autonomous Information Seeking Agency,398,3,25.85,26,"May 28, 2025",33,17300,https://arxiv.org/abs/2505.22648
5,"WebShaper: Agentically Data Synthesizing via Information-Seeking Formalization",402,2,26.34,26,"Jul 20, 2025",60,17300,https://arxiv.org/abs/2507.15061
6,WebWatcher: Breaking New Frontier of Vision-Language Deep Research Agent,393,1,25.86,26,"Aug 7, 2025",139,17300,https://arxiv.org/abs/2508.05748
7,WebSailor: Navigating Super-human Reasoning for Web Agent,394,2,26.03,26,"Jul 3, 2025",122,17300,https://arxiv.org/abs/2507.02592
8,"InternVL3: Exploring Advanced Training and Test-Time Recipes for Open-Source Multimodal Models",349,8,23.07,21,"Apr 14, 2025",304,8850,https://arxiv.org/abs/2504.10479
9,"AgentScope 1.0: A Developer-Centric Framework for Building Agentic Applications",290,1,18.33,14,"Aug 22, 2025",53,14100,https://arxiv.org/abs/2508.16279
10,Qwen-Image Technical Report,302,1,20.35,19,"Aug 4, 2025",261,6150,https://arxiv.org/abs/2508.02324
11,"MinerU2.5: A Decoupled Vision-Language Model for Efficient High-Resolution Document Parsing",179,2,10.82,9,"Sep 26, 2025",134,49600,https://arxiv.org/abs/2509.22186
12,Scaling Agents via Continual Pre-training,196,1,18.28,17,"Sep 16, 2025",115,17300,https://arxiv.org/abs/2509.13310
13,"Easy Dataset: A Unified and Extensible Framework for Synthesizing LLM Fine-Tuning Data from Unstructured Documents",345,13,32.46,32,"Jul 5, 2025",51,12100,https://arxiv.org/abs/2507.04009
14,"WebSailor-V2: Bridging the Chasm to Proprietary Agents via Synthetic Data and Scalable Reinforcement Learning",201,1,19.4,17,"Sep 16, 2025",90,17300,https://arxiv.org/abs/2509.13305
15,"ReSum: Unlocking Long-Horizon Search Intelligence via Context Summarization",198,1,18.98,16,"Sep 16, 2025",78,17300,https://arxiv.org/abs/2509.13313
16,"PC-Agent: A Hierarchical Multi-Agent Collaboration Framework for Complex Task Automation on PC",222,3,22.85,18,"Feb 20, 2025",29,6440,https://arxiv.org/abs/2502.14282
17,"WebWeaver: Structuring Web-Scale Evidence with Dynamic Outlines for Open-Ended Deep Research",194,4,19,16,"Sep 16, 2025",105,17300,https://arxiv.org/abs/2509.13312
18,"Look Before You Leap: A GUI-Critic-R1 Model for Pre-Operative Error Diagnosis in GUI Automation",220,4,22.92,19,"Jun 5, 2025",19,6430,https://arxiv.org/abs/2506.04614
19,"PaddleOCR-VL: Boosting Multilingual Document Parsing via a 0.9B Ultra-Compact Vision-Language Model",124,1,3.35,3,"Oct 16, 2025",98,65400,https://arxiv.org/abs/2510.14528
20,Mobile-Agent-v3: Foundamental Agents for GUI Automation,198,3,21.24,17,"Aug 21, 2025",64,6440,https://arxiv.org/abs/2508.15144
21,AgentFly: Fine-tuning LLM Agents without Fine-tuning LLMs,144,1,12.94,8,"Aug 22, 2025",153,1740,https://arxiv.org/abs/2508.16153
22,FastVLM: Efficient Vision Encoding for Vision Language Models,162,1,17.6,11,"Dec 17, 2024",70,6680,https://arxiv.org/abs/2412.13303
23,MIRIX: Multi-Agent Memory System for LLM-Based Agents,272,5,31.7,35,"Jul 10, 2025",79,3380,https://arxiv.org/abs/2507.07957
24,"GLM-4.5: Agentic, Reasoning, and Coding (ARC) Foundation Models",243,11,30.1,30,"Aug 8, 2025",185,3000,https://arxiv.org/abs/2508.06471
25,Prompt Orchestration Markup Language,153,3,18.99,12,"Aug 19, 2025",49,4560,https://arxiv.org/abs/2508.13948
26,OmniFlatten: An End-to-end GPT Model for Seamless Voice Conversation,112,3,9.8,9,"Oct 23, 2024",5,50300,https://arxiv.org/abs/2410.17799
27,UI-TARS: Pioneering Automated GUI Interaction with Native Agents,208,4,29.12,28,"Jan 21, 2025",65,7830,https://arxiv.org/abs/2501.12326
28,"A Comprehensive Survey of Self-Evolving AI Agents: A New Paradigm Bridging Foundation Models and Lifelong Agentic Systems",191,5,27.24,29,"Aug 10, 2025",97,1140,https://arxiv.org/abs/2508.07407
29,Qwen3 Technical Report,266,2,34.64,34,"May 14, 2025",316,25500,https://arxiv.org/abs/2505.09388
30,The Landscape of Agentic Reinforcement Learning for LLMs: A Survey,125,2,17.14,15,"Sep 2, 2025",214,941,https://arxiv.org/abs/2509.02547
31,DeepAnalyze: Agentic Large Language Models for Autonomous Data Science,115,2,14.33,14,"Oct 19, 2025",102,2630,https://arxiv.org/abs/2510.16872
32,TradingAgents: Multi-Agents LLM Financial Trading Framework,111,6,13.67,13,"Dec 28, 2024",14,25800,https://arxiv.org/abs/2412.20138
33,MinerU: An Open-Source Solution for Precise Document Content Extraction,112,3,14.65,16,"Sep 27, 2024",32,49600,https://arxiv.org/abs/2409.18839
34,"IndexTTS: An Industrial-Level Controllable and Efficient Zero-Shot Text-To-Speech System",112,7,14.97,14,"Feb 8, 2025",5,16000,https://arxiv.org/abs/2502.05512
35,StableAvatar: Infinite-Length Audio-Driven Avatar Video Generation,137,1,21.63,20,"Aug 11, 2025",27,1020,https://arxiv.org/abs/2508.08248
36,"Paper2Agent: Reimagining Research Papers As Interactive and Reliable AI Agents",135,5,22.01,20,"Sep 8, 2025",41,1660,https://arxiv.org/abs/2509.06917
37,PaddleOCR 3.0 Technical Report,100,1,12.08,11,"Jul 8, 2025",17,57600,https://arxiv.org/abs/2507.05595
38,"Seeing, Listening, Remembering, and Reasoning: A Multimodal Agent with Long-Term Memory",121,4,18.92,14,"Aug 13, 2025",54,893,https://arxiv.org/abs/2508.09736
39,rStar2-Agent: Agentic Reasoning Technical Report,112,4,16.78,16,"Aug 28, 2025",106,1230,https://arxiv.org/abs/2508.20722
40,Mem0: Building Production-Ready AI Agents with Scalable Long-Term Memory,112,6,17.88,15,"Apr 28, 2025",33,43700,https://arxiv.org/abs/2504.19413
41,"ScreenCoder: Advancing Visual-to-Code Generation for Front-End Automation via Modular Multimodal Agents",122,1,21.56,24,"Jul 30, 2025",98,2290,https://arxiv.org/abs/2507.22827
42,Step-Audio 2 Technical Report,94,2,13.01,12,"Jul 22, 2025",71,1050,https://arxiv.org/abs/2507.16632
43,"USO: Unified Style and Subject-Driven Generation via Disentangled and Reward Learning",92,2,12.67,6,"Aug 26, 2025",56,1080,https://arxiv.org/abs/2508.18966
44,TempFlow-GRPO: When Timing Matters for GRPO in Flow Models,96,4,14.52,10,"Aug 6, 2025",12,782,https://arxiv.org/abs/2508.04324
45,RAG-Anything: All-in-One RAG Framework,129,2,25.02,28,"Oct 14, 2025",49,10600,https://arxiv.org/abs/2510.12323
46,"ComfyUI-Copilot: An Intelligent Assistant for Automated Workflow Development",172,6,32.85,36,"Jun 5, 2025",79,3660,https://arxiv.org/abs/2506.05010
47,VGGT: Visual Geometry Grounded Transformer,232,12,37.63,39,"Mar 14, 2025",33,11700,https://arxiv.org/abs/2503.11651
48,MixGRPO: Unlocking Flow-based GRPO Efficiency with Mixed ODE-SDE,170,12,32.78,34,"Jul 29, 2025",15,1010,https://arxiv.org/abs/2507.21802
49,4DNeX: Feed-Forward 4D Generative Modeling Made Easy,178,2,33.96,37,"Aug 18, 2025",61,751,https://arxiv.org/abs/2508.13154
50,"FAPO: Flawed-Aware Policy Optimization for Efficient and Reliable Reasoning",86,3,15.94,11,"Oct 26, 2025",10,17000,https://arxiv.org/abs/2510.22543
51,"Agent S2: A Compositional Generalist-Specialist Framework for Computer Use Agents",145,2,30.46,35,"Apr 1, 2025",26,8450,https://arxiv.org/abs/2504.00906
52,The Unreasonable Effectiveness of Scaling Agents for Computer Use,138,2,29.52,34,"Oct 2, 2025",24,8450,https://arxiv.org/abs/2510.02250
53,"VLA-Adapter: An Effective Paradigm for Tiny-Scale Vision-Language-Action Model",131,7,28.43,29,"Sep 11, 2025",235,1590,https://arxiv.org/abs/2509.09372
54,A Survey of Reinforcement Learning for Large Reasoning Models,110,4,24.15,24,"Sep 10, 2025",183,1800,https://arxiv.org/abs/2509.08827
55,Less is More: Recursive Reasoning with Tiny Networks,90,1,20.33,18,"Oct 6, 2025",483,5670,https://arxiv.org/abs/2510.04871
56,LightRAG: Simple and Fast Retrieval-Augmented Generation,92,2,21.25,9,"Oct 8, 2024",20,24900,https://arxiv.org/abs/2410.05779
57,PyTorch FSDP: Experiences on Scaling Fully Sharded Data Parallel,104,10,27.97,30,"Apr 21, 2023",4,95500,https://arxiv.org/abs/2304.11277
58,Matrix-Game: Interactive World Foundation Model,78,1,20.45,14,"Jun 23, 2025",72,1550,https://arxiv.org/abs/2506.18701
59,PyTorch Distributed: Experiences on Accelerating Data Parallel Training,105,9,28.43,31,"Jun 28, 2020",3,95500,https://arxiv.org/abs/2006.15704
60,Thyme: Think Beyond Images,82,1,23.55,23,"Aug 15, 2025",78,466,https://arxiv.org/abs/2508.11630
61,"RepoMaster: Autonomous Exploration and Understanding of GitHub Repositories for Complex Task Solving",83,14,23.9,21,"May 27, 2025",2,348,https://arxiv.org/abs/2505.21577
62,UI-Venus Technical Report: Building High-performance UI Agents with RFT,74,7,21.26,17,"Aug 14, 2025",41,470,https://arxiv.org/abs/2508.10833
63,"ToonComposer: Streamlining Cartoon Production with Generative Post-Keyframing",65,4,17.37,14,"Aug 14, 2025",50,358,https://arxiv.org/abs/2508.10881
64,OpenCUA: Open Foundations for Computer-Use Agents,85,14,25.86,24,"Aug 12, 2025",30,415,https://arxiv.org/abs/2508.09123
65,LongSplat: Robust Unposed 3D Gaussian Splatting for Casual Long Videos,59,3,14.98,10,"Aug 19, 2025",59,600,https://arxiv.org/abs/2508.14041
66,"Stand-In: A Lightweight and Plug-and-Play Identity Control for Video Generation",63,2,17.37,9,"Aug 11, 2025",38,512,https://arxiv.org/abs/2508.07901
67,Zep: A Temporal Knowledge Graph Architecture for Agent Memory,95,11,28.88,31,"Jan 20, 2025",6,20600,https://arxiv.org/abs/2501.13956
68,Paper2Video: Automatic Video Generation from Scientific Papers,55,2,13.04,11,"Oct 6, 2025",115,1790,https://arxiv.org/abs/2510.05096
69,"Youtu-GraphRAG: Vertically Unified Agents for Graph Retrieval-Augmented Complex Reasoning",72,6,22.89,17,"Aug 27, 2025",7,730,https://arxiv.org/abs/2508.19855
70,"VeOmni: Scaling Any Modality Model Training with Model-Centric Distributed Recipe Zoo",66,1,20.94,18,"Aug 4, 2025",17,1180,https://arxiv.org/abs/2508.02317
71,"NextStep-1: Toward Autoregressive Image Generation with Continuous Tokens at Scale",52,1,12.92,5,"Aug 14, 2025",134,496,https://arxiv.org/abs/2508.10711
72,Qwen3-Omni Technical Report,72,1,23.96,26,"Sep 22, 2025",128,2690,https://arxiv.org/abs/2509.17765
73,3D and 4D World Modeling: A Survey,66,4,22.12,20,"Sep 4, 2025",57,568,https://arxiv.org/abs/2509.07996
74,"ScaleCUA: Scaling Open-Source Computer Use Agents with Cross-Platform Data",78,10,27.24,29,"Sep 18, 2025",105,610,https://arxiv.org/abs/2509.15221
75,VibeVoice Technical Report,37,1,1,1,"Aug 26, 2025",118,7980,https://arxiv.org/abs/2508.19205
76,"PokeeResearch: Effective Deep Research via Reinforcement Learning from AI Feedback and Robust Reasoning Scaffold",47,4,12.15,7,"Oct 17, 2025",8,1610,https://arxiv.org/abs/2510.15862
77,"MCP-Bench: Benchmarking Tool-Using LLM Agents with Complex Real-World Tasks via MCP Servers",69,5,25.25,23,"Aug 28, 2025",58,296,https://arxiv.org/abs/2508.20453
78,Depth Anything 3: Recovering the Visual Space from Any Views,40,1,6.58,3,"Nov 13, 2025",89,3010,https://arxiv.org/abs/2511.10647
79,"MCP-Universe: Benchmarking Large Language Models with Real-World Model Context Protocol Servers",57,5,20.07,18,"Aug 20, 2025",41,352,https://arxiv.org/abs/2508.14704
80,HunyuanImage 3.0 Technical Report,50,1,15.78,7,"Sep 28, 2025",21,2210,https://arxiv.org/abs/2509.23951
81,"olmOCR: Unlocking Trillions of Tokens in PDFs with Vision Language Models",56,3,19.68,20,"Feb 25, 2025",9,16000,https://arxiv.org/abs/2502.18443
82,Ovi: Twin Backbone Cross-Modal Fusion for Audio-Video Generation,68,2,25.31,22,"Sep 30, 2025",32,1300,https://arxiv.org/abs/2510.01284
83,Pico-Banana-400K: A Large-Scale Dataset for Text-Guided Image Editing,54,1,18.81,9,"Oct 22, 2025",28,1580,https://arxiv.org/abs/2510.19808
84,A Survey of Context Engineering for Large Language Models,151,1,39.59,40,"Jul 17, 2025",256,2330,https://arxiv.org/abs/2507.13334
85,"HuMo: Human-Centric Video Generation via Collaborative Multi-Modal Conditioning",46,2,14.24,9,"Sep 10, 2025",120,596,https://arxiv.org/abs/2509.08519
86,"EmbodiedOneVision: Interleaved Vision-Text-Action Pretraining for General Robot Control",52,2,20.06,15,"Aug 28, 2025",76,282,https://arxiv.org/abs/2508.21112
87,Waver: Wave Your Way to Lifelike Video Generation,53,5,21.06,22,"Aug 21, 2025",33,499,https://arxiv.org/abs/2508.15761
88,"AudioStory: Generating Long-Form Narrative Audio with Large Language Models",59,16,24.81,21,"Aug 27, 2025",20,264,https://arxiv.org/abs/2508.20088
89,Towards a Unified View of Large Language Model Post-Training,46,5,17.63,19,"Sep 4, 2025",61,100,https://arxiv.org/abs/2509.04419
90,"InternVL3.5: Advancing Open-Source Multimodal Models in Versatility, Reasoning, and Efficiency",109,12,37.03,40,"Aug 25, 2025",191,9240,https://arxiv.org/abs/2508.18265
91,Transition Models: Rethinking the Generative Learning Objective,47,8,19.38,19,"Sep 4, 2025",28,94,https://arxiv.org/abs/2509.04394
92,"AgentGym-RL: Training LLM Agents for Long-Horizon Decision Making through Multi-Turn Reinforcement Learning",43,1,16.98,8,"Sep 10, 2025",56,362,https://arxiv.org/abs/2509.08755
93,"The Dragon Hatchling: The Missing Link between the Transformer and Models of the Brain",38,1,13.08,1,"Sep 30, 2025",489,3120,https://arxiv.org/abs/2509.26507
94,STream3R: Scalable Sequential 3D Reconstruction with Causal Transformer,39,5,14.41,8,"Aug 14, 2025",30,194,https://arxiv.org/abs/2508.10893
95,SimpleVLA-RL: Scaling VLA Training via Reinforcement Learning,56,1,25.86,28,"Sep 11, 2025",73,740,https://arxiv.org/abs/2509.09674
96,"Enterprise Deep Research: Steerable Multi-Agent Deep Research for Enterprise Analytics",46,7,21.17,18,"Oct 20, 2025",9,952,https://arxiv.org/abs/2510.17797
97,Code2Video: A Code-centric Paradigm for Educational Video Generation,48,2,22.77,19,"Oct 1, 2025",33,1100,https://arxiv.org/abs/2510.01174
98,RynnEC: Bringing MLLMs into Embodied World,62,6,29.16,29,"Aug 19, 2025",18,337,https://arxiv.org/abs/2508.14160
99,"Kimi Linear: An Expressive, Efficient Attention Architecture",39,1,16.28,4,"Oct 30, 2025",102,1150,https://arxiv.org/abs/2510.26692
100,UI-S1: Advancing GUI Automation via Semi-online Reinforcement Learning,67,15,30.96,31,"Sep 15, 2025",47,6440,https://arxiv.org/abs/2509.11543
101,"Chain-of-Agents: End-to-End Agent Foundation Models via Multi-Agent Distillation and Agentic RL",43,5,20.6,19,"Aug 6, 2025",115,333,https://arxiv.org/abs/2508.13167
102,"MOSAIC: Multi-Subject Personalized Generation via Correspondence-Aware Alignment and Disentanglement",33,5,11.58,9,"Sep 2, 2025",11,418,https://arxiv.org/abs/2509.01977
103,"A Survey of Scientific Large Language Models: From Data Foundations to Agent Frontiers",44,7,21.52,16,"Aug 28, 2025",133,312,https://arxiv.org/abs/2508.21148
104,MeshCoder: LLM-Powered Structured Mesh Code Generation from Point Clouds,40,4,18.7,16,"Aug 20, 2025",67,374,https://arxiv.org/abs/2508.14879
105,From Editor to Dense Geometry Estimator,55,13,28.02,28,"Sep 4, 2025",87,156,https://arxiv.org/abs/2509.04338
106,"FlashVSR: Towards Real-Time Diffusion-Based Streaming Video Super-Resolution",55,7,28.15,29,"Oct 14, 2025",37,910,https://arxiv.org/abs/2510.12747
107,"FantasyPortrait: Enhancing Multi-Character Portrait Animation with Expression-Augmented Diffusion Transformers",55,11,28.22,26,"Jul 17, 2025",24,435,https://arxiv.org/abs/2507.12956
108,SpatialLM: Training Large Language Models for Structured Indoor Modeling,50,4,26.46,31,"Jun 9, 2025",49,3900,https://arxiv.org/abs/2506.07491
109,Reconstruction Alignment Improves Unified Multimodal Models,34,3,14.94,14,"Sep 8, 2025",38,198,https://arxiv.org/abs/2509.07295
110,Transformer Explainer: Interactive Learning of Text-Generative Models,81,21,35.89,35,"Aug 8, 2024",172,5960,https://arxiv.org/abs/2408.04619
111,VerlTool: Towards Holistic Agentic Reinforcement Learning with Tool Use,52,7,27.48,28,"Sep 1, 2025",64,499,https://arxiv.org/abs/2509.01055
112,"Beyond Ten Turns: Unlocking Long-Horizon Agentic Search with Large-Scale Asynchronous RL",47,3,25.02,19,"Aug 11, 2025",48,373,https://arxiv.org/abs/2508.07976
113,MobiAgent: A Systematic Framework for Customizable Mobile Agents,43,6,22.74,21,"Aug 30, 2025",6,1020,https://arxiv.org/abs/2509.00531
114,"AReaL: A Large-Scale Asynchronous Reinforcement Learning System for Language Reasoning",59,18,30.47,29,"May 30, 2025",28,2660,https://arxiv.org/abs/2505.24298
115,SAM 3D: 3Dfy Anything in Images,25,1,2.56,3,"Nov 20, 2025",100,4200,https://arxiv.org/abs/2511.16624
116,Continuous Autoregressive Language Models,46,5,24.72,25,"Oct 31, 2025",64,582,https://arxiv.org/abs/2510.27688
117,"We-Math 2.0: A Versatile MathBook System for Incentivizing Visual Mathematical Reasoning",31,1,12.16,3,"Aug 14, 2025",139,138,https://arxiv.org/abs/2508.10433
118,Matrix-3D: Omnidirectional Explorable 3D World Generation,45,6,24.27,23,"Aug 11, 2025",67,398,https://arxiv.org/abs/2508.08086
119,DeepAgent: A General Reasoning Agent with Scalable Toolsets,43,5,23.09,25,"Oct 24, 2025",93,739,https://arxiv.org/abs/2510.21618
120,Logics-Parsing Technical Report,48,4,26.27,28,"Sep 24, 2025",7,619,https://arxiv.org/abs/2509.19760
121,"MonkeyOCR: Document Parsing with a Structure-Recognition-Relation Triplet Paradigm",66,16,33.12,31,"Jun 5, 2025",2,6000,https://arxiv.org/abs/2506.05218
122,Emu3.5: Native Multimodal Models are World Learners,35,3,17.94,6,"Oct 30, 2025",103,1210,https://arxiv.org/abs/2510.26583
123,Multi-View 3D Point Tracking,33,4,15.97,16,"Aug 28, 2025",20,314,https://arxiv.org/abs/2508.21060
124,"Matrix-Game 2.0: An Open-Source, Real-Time, and Streaming Interactive World Model",41,4,22.83,26,"Aug 18, 2025",22,1550,https://arxiv.org/abs/2508.13009
125,"GitTaskBench: A Benchmark for Code Agents Solving Real-World Tasks Through Code Repository Leveraging",50,13,27.96,23,"Aug 26, 2025",3,183,https://arxiv.org/abs/2508.18993
126,"Mini-o3: Scaling Up Reasoning Patterns and Interaction Turns for Visual Search",39,2,21.49,21,"Sep 9, 2025",59,290,https://arxiv.org/abs/2509.07969
127,"GLM-4.1V-Thinking: Towards Versatile Multimodal Reasoning with Scalable Reinforcement Learning",53,10,29.38,29,"Jul 1, 2025",229,1500,https://arxiv.org/abs/2507.01006
128,Diffusion Transformers with Representation Autoencoders,29,1,12.24,7,"Oct 13, 2025",157,1350,https://arxiv.org/abs/2510.11690
129,SpatialVID: A Large-Scale Video Dataset with Spatial Annotations,35,8,19.11,16,"Sep 11, 2025",28,331,https://arxiv.org/abs/2509.09676
130,Back to Basics: Let Denoising Generative Models Denoise,28,2,11.43,11,"Nov 17, 2025",59,1450,https://arxiv.org/abs/2511.13720
131,"ComoRAG: A Cognitive-Inspired Memory-Organized RAG for Stateful Long Narrative Reasoning",44,8,25.86,28,"Aug 14, 2025",70,192,https://arxiv.org/abs/2508.10419
132,"Pixie: Fast and Generalizable Supervised Learning of 3D Physics from Pixels",34,9,19.24,19,"Aug 20, 2025",34,179,https://arxiv.org/abs/2508.17437
133,"In-the-Flow Agentic System Optimization for Effective Planning and Tool Use",36,2,21.08,22,"Oct 7, 2025",89,851,https://arxiv.org/abs/2510.05592
134,"Lyra: Generative 3D Scene Reconstruction via Video Diffusion Model Self-Distillation",36,10,21.92,15,"Sep 23, 2025",21,458,https://arxiv.org/abs/2509.19296
135,"HunyuanWorld 1.0: Generating Immersive, Explorable, and Interactive 3D Worlds from Words or Pixels",76,22,37.25,38,"Jul 29, 2025",126,2170,https://arxiv.org/abs/2507.21809
136,"Concerto: Joint 2D-3D Self-Supervised Learning Emerges Spatial Representations",22,1,3.5,2,"Oct 27, 2025",172,2610,https://arxiv.org/abs/2510.23607
137,LongCat-Video Technical Report,40,6,25.38,28,"Oct 25, 2025",24,1110,https://arxiv.org/abs/2510.22200
138,Parallel-R1: Towards Parallel Thinking via Reinforcement Learning,41,5,26.07,27,"Sep 9, 2025",95,151,https://arxiv.org/abs/2509.07980
139,"Fish-Speech: Leveraging Large Language Models for Advanced Multilingual Text-to-Speech Synthesis",28,2,14.79,10,"Nov 2, 2024",11,23900,https://arxiv.org/abs/2411.01156
140,LongLive: Real-time Interactive Long Video Generation,29,1,16.55,18,"Sep 26, 2025",174,647,https://arxiv.org/abs/2509.22622
141,"Tiny Model, Big Logic: Diversity-Driven Optimization Elicits Large-Model Reasoning Ability in VibeThinker-1.5B",29,7,16.55,10,"Nov 9, 2025",117,483,https://arxiv.org/abs/2511.06221
142,OmniWorld: A Multi-Domain and Multi-Modal Dataset for 4D World Modeling,26,1,13.12,13,"Sep 15, 2025",98,342,https://arxiv.org/abs/2509.12201
143,JoyAgent-JDGenie: Technical Report on the GAIA,61,14,35.08,38,"Oct 1, 2025",3,11100,https://arxiv.org/abs/2510.00510
144,Ovis2.5 Technical Report,39,6,26.36,27,"Aug 15, 2025",102,1280,https://arxiv.org/abs/2508.11737
145,Marco-Voice Technical Report,53,25,32.89,33,"Aug 4, 2025",15,332,https://arxiv.org/abs/2508.02038
146,"A.S.E: A Repository-Level Benchmark for Evaluating Security in AI-Generated Code",62,7,35.66,37,"Aug 25, 2025",340,709,https://arxiv.org/abs/2508.18106
147,"Revolutionizing Reinforcement Learning Framework for Diffusion Large Language Models",25,2,12.96,5,"Sep 8, 2025",51,171,https://arxiv.org/abs/2509.06949
148,Intern-S1: A Scientific Multimodal Foundation Model,27,1,15.81,14,"Aug 21, 2025",236,538,https://arxiv.org/abs/2508.15763
149,SAM 3: Segment Anything with Concepts,19,1,1.63,2,"Nov 20, 2025",95,4760,https://arxiv.org/abs/2511.16719
150,Agent0: Unleashing Self-Evolving Agents from Zero Data via Tool-Integrated Reasoning,24,6,12.17,9,"Nov 20, 2025",97,705,https://arxiv.org/abs/2511.16043
151,"Tinker: Diffusion's Gift to 3D--Multi-View Consistent Editing From Sparse Inputs without Per-Scene Optimization",29,7,19.86,21,"Aug 20, 2025",39,129,https://arxiv.org/abs/2508.14811
152,WithAnyone: Towards Controllable and ID Consistent Image Generation,24,1,13.58,9,"Oct 16, 2025",76,415,https://arxiv.org/abs/2510.14975
153,ST-Raptor: LLM-Powered Semi-Structured Table Question Answering,32,3,22.97,24,"Aug 25, 2025",6,243,https://arxiv.org/abs/2508.18190
154,"Pref-GRPO: Pairwise Preference Reward-based GRPO for Stable Text-to-Image Reinforcement Learning",26,2,16.62,17,"Aug 28, 2025",85,123,https://arxiv.org/abs/2508.20751
155,Arch-Router: Aligning LLM Routing with Human Preferences,60,21,36.28,37,"Jun 19, 2025",17,4170,https://arxiv.org/abs/2506.16655
156,GenCompositor: Generative Video Compositing with Diffusion Transformer,44,12,31.52,29,"Sep 2, 2025",24,122,https://arxiv.org/abs/2509.02460
157,"Voost: A Unified and Scalable Diffusion Transformer for Bidirectional Virtual Try-On and Try-Off",37,9,28.11,26,"Aug 6, 2025",56,264,https://arxiv.org/abs/2508.04825
158,"FilmAgent: A Multi-Agent Framework for End-to-End Film Automation in Virtual 3D Spaces",20,1,8.75,3,"Jan 22, 2025",74,1050,https://arxiv.org/abs/2501.12909
159,"Story2Board: A Training-Free Approach for Expressive Storyboard Generation",45,14,32.31,34,"Aug 13, 2025",61,119,https://arxiv.org/abs/2508.09983
160,StreamingVLM: Real-Time Understanding for Infinite Video Streams,26,3,18.77,23,"Oct 10, 2025",46,557,https://arxiv.org/abs/2510.09608
161,LightMem: Lightweight and Efficient Memory-Augmented Generation,24,2,16.12,13,"Oct 21, 2025",105,295,https://arxiv.org/abs/2510.18866
162,"VoxHammer: Training-Free Precise and Coherent 3D Editing in Native 3D Space",27,3,20.3,23,"Aug 26, 2025",38,130,https://arxiv.org/abs/2508.19247
163,RLinf-VLA: A Unified and Efficient Framework for VLA+RL Training,86,25,41.38,42,"Oct 8, 2025",38,1450,https://arxiv.org/abs/2510.06710
164,"CommonForms: A Large, Diverse Dataset for Form Field Detection",29,2,22.59,27,"Sep 20, 2025",18,819,https://arxiv.org/abs/2509.16506
165,"Thinking with Video: Video Generation as a Promising Multimodal Reasoning Paradigm",26,2,19.81,10,"Nov 6, 2025",188,182,https://arxiv.org/abs/2511.04570
166,"Speed Always Wins: A Survey on Efficient Architectures for Large Language Models",36,15,28.69,25,"Aug 13, 2025",51,281,https://arxiv.org/abs/2508.09834
167,"Scaling Instruction-Based Video Editing with a High-Quality Synthetic Dataset",23,3,16.39,15,"Oct 17, 2025",49,385,https://arxiv.org/abs/2510.15742
168,PhysX-Anything: Simulation-Ready Physical 3D Assets from Single Image,25,2,19.24,15,"Nov 17, 2025",51,593,https://arxiv.org/abs/2511.13648
169,"Echo-4o: Harnessing the Power of GPT-4o Synthetic Images for Improved Image Generation",27,4,21.67,20,"Aug 13, 2025",23,73,https://arxiv.org/abs/2508.09987
170,TTT3R: 3D Reconstruction as Test-Time Training,29,3,24.1,28,"Sep 30, 2025",13,396,https://arxiv.org/abs/2509.26645
171,"SINQ: Sinkhorn-Normalized Quantization for Calibration-Free Low-Precision LLM Weights",25,5,19.96,19,"Sep 26, 2025",73,472,https://arxiv.org/abs/2509.22944
172,Detect Anything via Next Point Prediction,22,4,16.45,10,"Oct 14, 2025",42,449,https://arxiv.org/abs/2510.12798
173,General Agentic Memory Via Deep Research,17,3,7.47,8,"Nov 23, 2025",146,485,https://arxiv.org/abs/2511.18423
174,"Rethinking Semantic Segmentation from a Sequence-to-Sequence Perspective with Transformers",21,2,15.81,10,"Dec 31, 2020",3,1100,https://arxiv.org/abs/2012.15840
175,"Killing Two Birds with One Stone: Efficient and Robust Training of Face Recognition CNNs by Partial FC",92,26,43.16,45,"Mar 28, 2022",3,26700,https://arxiv.org/abs/2203.15565
176,Reinforcement Learning in Vision: A Survey,41,25,33.49,33,"Aug 11, 2025",27,184,https://arxiv.org/abs/2508.08189
177,HunyuanOCR Technical Report,15,2,3.27,3,"Nov 24, 2025",17,878,https://arxiv.org/abs/2511.19575
178,"Collaborating Action by Action: A Multi-agent LLM Framework for Embodied Reasoning",26,10,23.54,21,"Apr 24, 2025",5,4250,https://arxiv.org/abs/2504.17950
179,Training Video Foundation Models with NVIDIA NeMo,87,19,42.82,44,"Mar 17, 2025",7,15700,https://arxiv.org/abs/2503.12964
180,"MiroThinker: Pushing the Performance Boundaries of Open-Source Research Agents via Model, Context, and Interactive Scaling",29,4,26.66,30,"Nov 14, 2025",156,1160,https://arxiv.org/abs/2511.11793
181,Agent0-VL: Exploring Self-Evolving Agent for Tool-Integrated Vision-Language Reasoning,15,4,5.33,5,"Nov 25, 2025",46,702,https://arxiv.org/abs/2511.19900
182,OmniTry: Virtual Try-On Anything without Masks,41,13,34.39,34,"Aug 19, 2025",16,166,https://arxiv.org/abs/2508.13632
183,FlashWorld: High-quality 3D Scene Generation within Seconds,18,3,13.17,8,"Oct 15, 2025",66,397,https://arxiv.org/abs/2510.13678
184,"Diffusion LLMs Can Do Faster-Than-AR Inference via Discrete Diffusion Forcing",30,10,28.57,29,"Aug 8, 2025",29,79,https://arxiv.org/abs/2508.09192
185,Real-Time Object Detection Meets DINOv3,37,14,32.84,37,"Sep 25, 2025",10,684,https://arxiv.org/abs/2509.20787
186,"OmniVinci: Enhancing Architecture and Data for Omni-Modal Understanding LLM",28,7,27.36,24,"Oct 17, 2025",86,502,https://arxiv.org/abs/2510.15870
187,"QeRL: Beyond Efficiency -- Quantization-enhanced Reinforcement Learning for LLMs",21,1,19.67,20,"Oct 13, 2025",162,350,https://arxiv.org/abs/2510.11696
188,TimeGPT-1,21,6,19.86,12,"Oct 5, 2023",7,3470,https://arxiv.org/abs/2310.03589
189,"LucidFlux: Caption-Free Universal Image Restoration via a Large-Scale Diffusion Transformer",37,24,33.35,32,"Sep 26, 2025",21,476,https://arxiv.org/abs/2509.22414
190,Ark: An Open-source Python-based Framework for Robot Learning,25,10,25.04,18,"Jun 24, 2025",16,245,https://arxiv.org/abs/2506.21628
191,Kwai Keye-VL Technical Report,40,20,34.85,33,"Jul 2, 2025",130,623,https://arxiv.org/abs/2507.01949
192,MHR: Momentum Human Rig,24,4,25,28,"Nov 19, 2025",13,419,https://arxiv.org/abs/2511.15586
193,P3-SAM: Native 3D Part Segmentation,20,1,19.85,13,"Sep 8, 2025",21,249,https://arxiv.org/abs/2509.06784
194,"Spark-TTS: An Efficient LLM-Based Text-to-Speech Model with Single-Stream Decoupled Speech Tokens",23,14,24.22,22,"Mar 3, 2025",6,10400,https://arxiv.org/abs/2503.01710
195,Diffusion Language Models are Super Data Learners,25,16,26.36,19,"Nov 5, 2025",112,259,https://arxiv.org/abs/2511.03276
196,"Hulu-Med: A Transparent Generalist Model towards Holistic Medical Vision-Language Understanding",30,13,30.57,35,"Oct 9, 2025",4,478,https://arxiv.org/abs/2510.08668
197,"HoloCine: Holistic Generation of Cinematic Multi-Shot Long Video Narratives",20,10,20.55,17,"Oct 23, 2025",37,333,https://arxiv.org/abs/2510.20822
198,Training-Free Group Relative Policy Optimization,34,25,33.18,32,"Oct 9, 2025",42,3590,https://arxiv.org/abs/2510.08191
199,Visual Spatial Tuning,21,3,22.29,18,"Nov 7, 2025",46,124,https://arxiv.org/abs/2511.05491
200,Step-Audio-R1 Technical Report,25,10,26.92,32,"Nov 19, 2025",51,316,https://arxiv.org/abs/2511.15848
201,WizardCoder: Empowering Code Large Language Models with Evol-Instruct,12,1,1,1,"Jun 14, 2023",30,9470,https://arxiv.org/abs/2306.08568
202,Dolphin: Document Image Parsing via Heterogeneous Anchor Prompting,18,8,17.78,15,"May 20, 2025",3,7670,https://arxiv.org/abs/2505.14059
203,"FLUX-Reason-6M & PRISM-Bench: A Million-Scale Text-to-Image Reasoning Dataset and Comprehensive Benchmark",19,3,19.74,15,"Sep 11, 2025",35,74,https://arxiv.org/abs/2509.09680
204,MiMo-Embodied: X-Embodied Foundation Model Technical Report,23,9,25.26,28,"Nov 20, 2025",23,260,https://arxiv.org/abs/2511.16518
205,Robot Learning: A Tutorial,20,6,21.5,15,"Oct 14, 2025",80,281,https://arxiv.org/abs/2510.12403
206,GigaWorld-0: World Models as Data Engine to Empower Embodied AI,15,8,11.93,13,"Nov 25, 2025",28,214,https://arxiv.org/abs/2511.19861
207,"Large Language Model Agent: A Survey on Methodology, Applications and Challenges",82,35,43.88,45,"Mar 27, 2025",82,1710,https://arxiv.org/abs/2503.21460
208,Chronos-2: From Univariate to Universal Forecasting,32,23,32.75,31,"Oct 17, 2025",17,4190,https://arxiv.org/abs/2510.15821
209,ViSTA-SLAM: Visual SLAM with Symmetric Two-view Association,38,16,35.74,37,"Sep 1, 2025",6,134,https://arxiv.org/abs/2509.01584
210,ReCode: Unify Plan and Action for Universal Granularity Control,29,16,31.76,30,"Oct 27, 2025",117,359,https://arxiv.org/abs/2510.23564
211,"Skyfall-GS: Synthesizing Immersive 3D Urban Scenes from Satellite Imagery",37,9,36.08,41,"Oct 17, 2025",45,576,https://arxiv.org/abs/2510.15869
212,"OpenVision 2: A Family of Generative Pretrained Visual Encoders for Multimodal Learning",27,11,30.74,30,"Sep 1, 2025",27,356,https://arxiv.org/abs/2509.01644
213,4KAgent: Agentic Any Image to 4K Super-Resolution,21,19,25.19,22,"Jul 9, 2025",104,547,https://arxiv.org/abs/2507.07105
214,"ChronoEdit: Towards Temporal Reasoning for Image Editing and World Simulation",45,23,38.96,42,"Oct 5, 2025",14,578,https://arxiv.org/abs/2510.04290
215,X-Part: high fidelity and structure coherent shape decomposition,17,12,19.53,14,"Sep 10, 2025",25,234,https://arxiv.org/abs/2509.08643
216,"Huxley-Gödel Machine: Human-Level Coding Agent Development by an Approximation of the Optimal Self-Improving Machine",21,10,25.71,21,"Oct 24, 2025",18,251,https://arxiv.org/abs/2510.21614
217,"Build Your Personalized Research Group: A Multiagent Framework for Continual and Interactive Science Automation",17,2,19.82,24,"Oct 17, 2025",14,267,https://arxiv.org/abs/2510.15624
218,"OpenTSLM: Time-Series Language Models for Reasoning over Multivariate Medical Text- and Time-Series Data",21,8,26.43,27,"Oct 2, 2025",15,950,https://arxiv.org/abs/2510.02410
219,Trace Anything: Representing Any Video in 4D via Trajectory Fields,17,11,21.18,22,"Oct 15, 2025",30,271,https://arxiv.org/abs/2510.13802
220,Democratizing AI scientists using ToolUniverse,29,17,33.59,38,"Sep 27, 2025",38,455,https://arxiv.org/abs/2509.23426
221,"S^2-Guidance: Stochastic Self Guidance for Training-Free Enhancement of Diffusion Models",25,15,30.96,23,"Aug 18, 2025",45,134,https://arxiv.org/abs/2508.12880
222,UniVerse-1: Unified Audio-Video Generation via Stitching of Experts,15,9,18.07,10,"Sep 7, 2025",14,53,https://arxiv.org/abs/2509.06155
223,R-Zero: Self-Evolving Reasoning LLM from Zero Data,39,28,38.41,38,"Aug 7, 2025",123,559,https://arxiv.org/abs/2508.05004
224,"On the Generalization of SFT: A Reinforcement Learning Perspective with
Reward Rectification",29,26,34.34,35,"Aug 7, 2025",148,322,https://arxiv.org/abs/2508.05629
225,"InternScenes: A Large-scale Simulatable Indoor Scene Dataset with
Realistic Layouts",17,7,22.94,20,"Sep 13, 2025",29,161,https://arxiv.org/abs/2509.10813
226,"LightReasoner: Can Small Language Models Teach Large Language Models
Reasoning?",15,9,19.33,10,"Oct 9, 2025",7,362,https://arxiv.org/abs/2510.07962
227,The Denario project: Deep knowledge AI agents for scientific discovery,22,15,29.73,26,"Oct 30, 2025",6,372,https://arxiv.org/abs/2510.26887
228,"R-4B: Incentivizing General-Purpose Auto-Thinking Capability in MLLMs
via Bi-Mode Annealing and Reinforce Learning",19,21,26.63,23,"Aug 28, 2025",102,84,https://arxiv.org/abs/2508.21113
229,Rolling Forcing: Autoregressive Long Video Diffusion in Real Time,15,3,20.13,18,"Sep 29, 2025",20,109,https://arxiv.org/abs/2509.25161
230,Robot Learning from a Physical World Model,23,23,31.09,30,"Nov 10, 2025",26,146,https://arxiv.org/abs/2511.07416
231,"Genie Envisioner: A Unified World Foundation Platform for Robotic
Manipulation",24,19,31.96,29,"Aug 7, 2025",72,218,https://arxiv.org/abs/2508.05635
232,"RLVE: Scaling Up Reinforcement Learning for Language Models with
Adaptive Verifiable Environments",20,18,28.15,23,"Nov 10, 2025",10,119,https://arxiv.org/abs/2511.07317
233,"Packing Input Frame Context in Next-Frame Prediction Models for Video
Generation",47,15,41.34,44,"Apr 17, 2025",52,15800,https://arxiv.org/abs/2504.12626
234,"M^3FinMeeting: A Multilingual, Multi-Sector, and Multi-Task Financial
Meeting Understanding Evaluation Dataset",22,12,30.41,28,"Jun 3, 2025",3,287,https://arxiv.org/abs/2506.02510
235,"Beyond Outlining: Heterogeneous Recursive Planning for Adaptive
Long-form Writing with Language Models",13,9,16.15,12,"Mar 11, 2025",3,736,https://arxiv.org/abs/2503.08275
236,"Skywork UniPic: Unified Autoregressive Modeling for Visual Understanding
and Generation",25,18,33,31,"Aug 5, 2025",59,727,https://arxiv.org/abs/2508.03320
237,Tree Search for LLM Agent Reinforcement Learning,15,13,21.2,19,"Sep 25, 2025",79,95,https://arxiv.org/abs/2509.21240
238,VoXtream: Full-Stream Text-to-Speech with Extremely Low Latency,24,24,32.42,31,"Sep 19, 2025",2,129,https://arxiv.org/abs/2509.15969
239,A Survey of Data Agents: Emerging Paradigm or Overstated Hype?,17,11,24.82,19,"Oct 27, 2025",63,202,https://arxiv.org/abs/2510.23587
240,SteadyDancer: Harmonized and Coherent Human Image Animation with First-Frame Preservation,15,17,22.07,21,"Nov 24, 2025",38,189,https://arxiv.org/abs/2511.19320
241,Latent Collaboration in Multi-Agent Systems,12,12,14.92,14,"Nov 25, 2025",92,240,https://arxiv.org/abs/2511.20639
242,Puppeteer: Rig and Animate Your 3D Models,31,27,37.1,37,"Aug 14, 2025",30,178,https://arxiv.org/abs/2508.10898
243,GEM: A Gym for Agentic LLMs,21,24,31.29,30,"Oct 1, 2025",79,284,https://arxiv.org/abs/2510.01051
244,MedRAX: Medical Reasoning Agent for Chest X-ray,18,9,28,29,"Feb 4, 2025",2,993,https://arxiv.org/abs/2502.02673
245,"Omni-Effects: Unified and Spatially-Controllable Visual Effects
Generation",17,14,26.71,28,"Aug 11, 2025",56,122,https://arxiv.org/abs/2508.07981
246,FlowRL: Matching Reward Distributions for LLM Reasoning,14,13,21.5,17,"Sep 18, 2025",90,58,https://arxiv.org/abs/2509.15207
247,"Uniworld-V2: Reinforce Image Editing with Diffusion Negative-aware
Finetuning and MLLM Implicit Feedback",14,3,21.5,21,"Oct 19, 2025",18,94,https://arxiv.org/abs/2510.16888
248,"UMO: Scaling Multi-Identity Consistency for Image Customization via
Matching Reward",19,18,29.37,26,"Sep 8, 2025",29,131,https://arxiv.org/abs/2509.06818
249,"From Pixels to Words -- Towards Native Vision-Language Primitives at
Scale",12,8,16.75,13,"Oct 16, 2025",60,161,https://arxiv.org/abs/2510.14979
250,LIMI: Less is More for Agency,26,28,35.31,35,"Sep 22, 2025",91,101,https://arxiv.org/abs/2509.17567
251,"TaDiCodec: Text-aware Diffusion Speech Tokenizer for Speech Language
Modeling",18,12,28.44,31,"Aug 22, 2025",7,149,https://arxiv.org/abs/2508.16790
252,SceneGen: Single-Image 3D Scene Generation in One Feedforward Pass,23,26,33.39,31,"Aug 21, 2025",18,66,https://arxiv.org/abs/2508.15769
253,The Markovian Thinker,19,15,29.95,29,"Oct 8, 2025",27,251,https://arxiv.org/abs/2510.06557
254,Time-to-Move: Training-Free Motion Controlled Video Generation via Dual-Clock Denoising,21,22,32.05,30,"Nov 9, 2025",50,191,https://arxiv.org/abs/2511.08633
255,V-Thinker: Interactive Thinking with Images,16,5,26.31,30,"Nov 6, 2025",92,104,https://arxiv.org/abs/2511.04460
256,"UniGenBench++: A Unified Semantic Evaluation Benchmark for Text-to-Image
Generation",12,5,18.17,14,"Oct 21, 2025",64,100,https://arxiv.org/abs/2510.18701
257,"MiniCPM-V 4.5: Cooking Efficient MLLMs via Architecture, Data, and
Training Recipe",21,17,32.43,31,"Sep 16, 2025",45,22000,https://arxiv.org/abs/2509.18154
258,WorldGrow: Generating Infinite 3D World,17,12,28.12,25,"Oct 24, 2025",40,313,https://arxiv.org/abs/2510.21682
259,Video-As-Prompt: Unified Semantic Control for Video Generation,16,11,26.88,22,"Oct 23, 2025",41,217,https://arxiv.org/abs/2510.20888
260,"SongBloom: Coherent Song Generation via Interleaved Autoregressive
Sketching and Diffusion Refinement",21,23,32.86,31,"Jun 9, 2025",6,604,https://arxiv.org/abs/2506.07634
261,"FireRedTTS-2: Towards Long Conversational Speech Generation for Podcast
and Chatbot",24,21,35.21,35,"Sep 2, 2025",1,1180,https://arxiv.org/abs/2509.02020
262,LTX-Video: Realtime Video Latent Diffusion,18,19,30.17,30,"Dec 30, 2024",47,8590,https://arxiv.org/abs/2501.00103
263,OpenMMReasoner: Pushing the Frontiers for Multimodal Reasoning with an Open and General Recipe,14,6,24.29,23,"Nov 20, 2025",86,106,https://arxiv.org/abs/2511.16334
264,SciReasoner: Laying the Scientific Reasoning Ground Across Disciplines,13,12,23.23,18,"Sep 25, 2025",86,43,https://arxiv.org/abs/2509.21320
265,"Uni-MoE-2.0-Omni: Scaling Language-Centric Omnimodal Large Model with Advanced MoE, Training and Data",14,15,25.36,27,"Nov 16, 2025",99,1010,https://arxiv.org/abs/2511.12609
266,"Advancing End-to-End Pixel Space Generative Modeling via Self-supervised
Pre-training",14,4,25.43,28,"Oct 14, 2025",104,98,https://arxiv.org/abs/2510.12586
267,"IGGT: Instance-Grounded Geometry Transformer for Semantic 3D
Reconstruction",15,16,27.2,23,"Oct 26, 2025",38,156,https://arxiv.org/abs/2510.22706
268,"CODA: Coordinating the Cerebrum and Cerebellum for a Dual-Brain Computer
Use Agent with Decoupled Reinforcement Learning",15,5,27.47,30,"Aug 27, 2025",31,25,https://arxiv.org/abs/2508.20096
269,"Verbalized Sampling: How to Mitigate Mode Collapse and Unlock LLM
Diversity",20,26,33.45,34,"Oct 1, 2025",15,219,https://arxiv.org/abs/2510.01171
270,MMaDA-Parallel: Multimodal Large Diffusion Language Models for Thinking-Aware Editing and Generation,11,7,19.09,21,"Nov 12, 2025",64,248,https://arxiv.org/abs/2511.09611
271,Kandinsky 5.0: A Family of Foundation Models for Image and Video Generation,14,13,25.93,22,"Nov 19, 2025",207,471,https://arxiv.org/abs/2511.14993
272,MemOS: A Memory OS for AI System,36,13,41.33,45,"Jul 4, 2025",155,3110,https://arxiv.org/abs/2507.03724
273,DA^2: Depth Anything in Any Direction,17,8,30.53,37,"Sep 30, 2025",20,120,https://arxiv.org/abs/2509.26618
274,"OneReward: Unified Mask-Guided Image Generation via Multi-Task Human
Preference Learning",15,11,28,30,"Aug 28, 2025",13,208,https://arxiv.org/abs/2508.21066
275,"Paper2Code: Automating Code Generation from Scientific Papers in Machine
Learning",32,27,40.34,40,"Apr 24, 2025",120,3730,https://arxiv.org/abs/2504.17192
276,"ThinkMorph: Emergent Properties in Multimodal Interleaved
Chain-of-Thought Reasoning",17,9,31,28,"Oct 30, 2025",78,87,https://arxiv.org/abs/2510.27492
277,"The Well: a Large-Scale Collection of Diverse Physics Simulations for
Machine Learning",12,13,23,14,"Nov 30, 2024",20,1110,https://arxiv.org/abs/2412.00568
278,Versatile Framework for Song Generation with Prompt-based Control,44,34,43.5,45,"Apr 27, 2025",6,201,https://arxiv.org/abs/2504.19062
279,Durian: Dual Reference-guided Portrait Animation with Attribute Transfer,15,16,29.13,22,"Sep 4, 2025",4,14,https://arxiv.org/abs/2509.04434
280,Self-Forcing++: Towards Minute-Scale High-Quality Video Generation,19,22,34,30,"Oct 2, 2025",86,145,https://arxiv.org/abs/2510.02283
281,OmniSVG: A Unified Scalable Vector Graphics Generation Model,16,16,30.88,28,"Apr 8, 2025",180,2100,https://arxiv.org/abs/2504.06263
282,"SimpleTIR: End-to-End Reinforcement Learning for Multi-Turn
Tool-Integrated Reasoning",22,19,36.55,43,"Sep 2, 2025",76,250,https://arxiv.org/abs/2509.02479
283,VGGT-X: When VGGT Meets Dense Novel View Synthesis,13,15,26.69,25,"Sep 29, 2025",15,69,https://arxiv.org/abs/2509.25191
284,"VCode: a Multimodal Coding Benchmark with SVG as Symbolic Visual
Representation",11,5,22.45,18,"Nov 4, 2025",95,88,https://arxiv.org/abs/2511.02778
285,"Thinking with Camera: A Unified Multimodal Model for Camera-Centric
Understanding and Generation",13,4,26.92,29,"Oct 9, 2025",115,200,https://arxiv.org/abs/2510.08673
286,Latent Diffusion Model without Variational Autoencoder,14,13,28.71,24,"Oct 17, 2025",39,156,https://arxiv.org/abs/2510.15301
287,"Evolution Strategies at Scale: LLM Fine-Tuning Beyond Reinforcement
Learning",10,8,20.5,19,"Sep 29, 2025",4,164,https://arxiv.org/abs/2509.24372
288,"VideoFrom3D: 3D Scene Video Generation via Complementary Image and Video
Diffusion Models",18,18,34.39,36,"Sep 22, 2025",25,83,https://arxiv.org/abs/2509.17985
289,"VolSplat: Rethinking Feed-Forward 3D Gaussian Splatting with
Voxel-Aligned Prediction",14,16,29.86,24,"Sep 23, 2025",22,65,https://arxiv.org/abs/2509.19297
290,MolmoAct: Action Reasoning Models that can Reason in Space,22,28,37.68,37,"Aug 11, 2025",42,163,https://arxiv.org/abs/2508.07917
291,"Ming-UniAudio: Speech LLM for Joint Understanding, Generation and
Editing with Unified Representation",14,9,30.21,28,"Oct 26, 2025",9,370,https://arxiv.org/abs/2511.05516
292,MAPO: Mixed Advantage Policy Optimization,12,12,27.08,23,"Sep 23, 2025",25,34,https://arxiv.org/abs/2509.18849
293,DepthLM: Metric Depth From Vision Language Models,18,26,35.06,32,"Sep 29, 2025",5,130,https://arxiv.org/abs/2509.25413
294,"REINFORCE++: A Simple and Efficient Approach for Aligning Large Language
Models",41,26,44.05,45,"Jan 4, 2025",102,8130,https://arxiv.org/abs/2501.03262
295,DeepPHY: Benchmarking Agentic VLMs on Physical Reasoning,26,25,40.15,42,"Aug 7, 2025",62,143,https://arxiv.org/abs/2508.05405
296,"Lumina-DiMOO: An Omni Diffusion Large Language Model for Multi-Modal
Generation and Understanding",13,18,29.38,30,"Oct 7, 2025",48,834,https://arxiv.org/abs/2510.06308
297,DR Tulu: Reinforcement Learning with Evolving Rubrics for Deep Research,16,24,33.62,29,"Nov 24, 2025",48,404,https://arxiv.org/abs/2511.19399
298,"IMAGGarment-1: Fine-Grained Garment Generation for Controllable Fashion
Design",14,21,31.64,31,"Apr 17, 2025",1,263,https://arxiv.org/abs/2504.13176
299,Interleaving Reasoning for Better Text-to-Image Generation,11,16,26.64,17,"Sep 8, 2025",12,24,https://arxiv.org/abs/2509.06945
300,ARE: Scaling Up Agent Environments and Evaluations,17,24,35.24,34,"Sep 21, 2025",29,257,https://arxiv.org/abs/2509.17158
301,GeoVista: Web-Augmented Agentic Visual Reasoning for Geolocalization,11,9,27.09,28,"Nov 19, 2025",88,170,https://arxiv.org/abs/2511.15705
302,"Scrub It Out! Erasing Sensitive Memorization in Code Language Models via
Machine Unlearning",10,16,24.8,19,"Sep 17, 2025",18,24,https://arxiv.org/abs/2509.13755
303,NaTex: Seamless Texture Generation as Latent Color Diffusion,9,10,21.89,21,"Nov 20, 2025",15,85,https://arxiv.org/abs/2511.16317
304,"StealthAttack: Robust 3D Gaussian Splatting Poisoning via Density-Guided
Illusions",9,2,22.11,23,"Oct 2, 2025",55,51,https://arxiv.org/abs/2510.02314
305,Human-Agent Collaborative Paper-to-Page Crafting for Under $0.1,9,13,22.56,19,"Oct 22, 2025",64,98,https://arxiv.org/abs/2510.19600
306,"F1: A Vision-Language-Action Model Bridging Understanding and Generation
to Actions",10,15,25.8,23,"Sep 8, 2025",26,64,https://arxiv.org/abs/2509.06951
307,ROSE: Remove Objects with Side Effects in Videos,12,14,30.67,33,"Aug 26, 2025",4,32,https://arxiv.org/abs/2508.18633
308,"VideoCanvas: Unified Video Completion from Arbitrary Spatiotemporal
Patches via In-Context Conditioning",7,2,16.43,16,"Oct 9, 2025",59,50,https://arxiv.org/abs/2510.08555
309,"EditScore: Unlocking Online RL for Image Editing via High-Fidelity
Reward Modeling",9,16,24.33,21,"Sep 28, 2025",26,60,https://arxiv.org/abs/2509.23909
310,Reinforced Visual Perception with Tools,9,18,24.44,22,"Sep 1, 2025",27,28,https://arxiv.org/abs/2509.01656
311,Is Diversity All You Need for Scalable Robotic Manipulation?,18,30,37.78,37,"Jul 8, 2025",20,2460,https://arxiv.org/abs/2507.06219
312,RLFR: Extending Reinforcement Learning for LLMs with Flow Environment,7,5,17,14,"Oct 11, 2025",32,34,https://arxiv.org/abs/2510.10201
313,"BAPO: Stabilizing Off-Policy Reinforcement Learning for LLMs via
Balanced Policy Optimization with Adaptive Clipping",9,15,24.56,23,"Oct 21, 2025",77,62,https://arxiv.org/abs/2510.18927
314,"GenoMAS: A Multi-Agent Framework for Scientific Discovery via
Code-Driven Gene Expression Analysis",11,25,29.45,26,"Jul 28, 2025",3,93,https://arxiv.org/abs/2507.21035
315,"Latent Zoning Network: A Unified Principle for Generative Modeling,
Representation Learning, and Classification",9,12,24.89,19,"Sep 19, 2025",43,39,https://arxiv.org/abs/2509.15591
316,"Grasp Any Region: Towards Precise, Contextual Pixel Understanding for
Multimodal LLMs",10,13,27.6,27,"Oct 21, 2025",33,57,https://arxiv.org/abs/2510.18876
317,"FantasyTalking2: Timestep-Layer Adaptive Preference Optimization for
Audio-Driven Portrait Animation",13,22,33.15,31,"Aug 15, 2025",8,21,https://arxiv.org/abs/2508.11255
318,"OpenVision: A Fully-Open, Cost-Effective Family of Advanced Vision
Encoders for Multimodal Learning",17,26,37.47,39,"May 7, 2025",28,351,https://arxiv.org/abs/2505.04601
319,"Video-LMM Post-Training: A Deep Dive into Video Reasoning with Large
Multimodal Models",10,10,28.2,27,"Oct 6, 2025",42,89,https://arxiv.org/abs/2510.05034
320,"AWorld: Dynamic Multi-Agent System with Stable Maneuvering for Robust
GAIA Problem Solving",31,34,43.71,45,"Aug 13, 2025",32,694,https://arxiv.org/abs/2508.09889
321,"Video-Thinker: Sparking ""Thinking with Videos"" via Reinforcement
Learning",10,13,28.5,22,"Oct 27, 2025",81,95,https://arxiv.org/abs/2510.23473
322,Cook and Clean Together: Teaching Embodied Agents for Parallel Task Execution,11,19,30.64,28,"Nov 24, 2025",7,270,https://arxiv.org/abs/2511.19430
323,"BEAVR: Bimanual, multi-Embodiment, Accessible, Virtual Reality
Teleoperation System for Robots",11,19,30.82,31,"Aug 13, 2025",0,50,https://arxiv.org/abs/2508.09606
324,"NaViL: Rethinking Scaling Properties of Native Multimodal Large Language
Models under Data Constraints",8,19,23.38,23,"Oct 9, 2025",17,71,https://arxiv.org/abs/2510.08565
325,Human3R: Everyone Everywhere All at Once,9,22,26.44,26,"Oct 7, 2025",8,304,https://arxiv.org/abs/2510.06219
326,"BRIDGE - Building Reinforcement-Learning Depth-to-Image Data Generation
Engine for Monocular Depth Estimation",17,33,38.06,37,"Sep 29, 2025",13,106,https://arxiv.org/abs/2509.25077
327,"THOR: Tool-Integrated Hierarchical Optimization via RL for Mathematical
Reasoning",10,16,29.1,28,"Sep 17, 2025",11,17,https://arxiv.org/abs/2509.13761
328,"T2I-ReasonBench: Benchmarking Reasoning-Informed Text-to-Image
Generation",8,11,23.75,17,"Aug 24, 2025",25,24,https://arxiv.org/abs/2508.17472
329,SPATIALGEN: Layout-guided 3D Indoor Scene Generation,10,15,29.5,30,"Sep 18, 2025",22,255,https://arxiv.org/abs/2509.14981
330,G^2VLM: Geometry Grounded Vision Language Model with Unified 3D Reconstruction and Spatial Reasoning,10,22,29.5,27,"Nov 26, 2025",8,151,https://arxiv.org/abs/2511.21688
331,"CogVLA: Cognition-Aligned Vision-Language-Action Model via
Instruction-Driven Routing & Sparsification",18,31,39.22,39,"Aug 28, 2025",8,56,https://arxiv.org/abs/2508.21046
332,DyPE: Dynamic Position Extrapolation for Ultra High Resolution Diffusion,14,26,35.86,31,"Oct 23, 2025",33,248,https://arxiv.org/abs/2510.20766
333,Hunyuan-MT Technical Report,14,26,36,35,"Sep 5, 2025",13,517,https://arxiv.org/abs/2509.05209
334,CrossOver: 3D Scene Cross-Modal Alignment,8,4,24.75,22,"Feb 20, 2025",2,204,https://arxiv.org/abs/2502.15011
335,FullPart: Generating each 3D Part at Full Resolution,8,15,25.12,22,"Oct 30, 2025",4,57,https://arxiv.org/abs/2510.26140
336,"Beyond Pass@1: Self-Play with Variational Problem Synthesis Sustains
RLVR",8,14,25.25,21,"Aug 19, 2025",109,26,https://arxiv.org/abs/2508.14029
337,RegionE: Adaptive Region-Aware Generation for Efficient Image Editing,9,14,28.22,24,"Oct 29, 2025",24,46,https://arxiv.org/abs/2510.25590
338,pi-Flow: Policy-Based Few-Step Generation via Imitation Distillation,13,26,35.38,33,"Oct 16, 2025",7,97,https://arxiv.org/abs/2510.14974
339,Reinforcement Learning Foundations for Deep Research Systems: A Survey,8,15,25.88,24,"Sep 8, 2025",25,10,https://arxiv.org/abs/2509.06733
340,Efficient Part-level 3D Object Generation via Dual Volume Packing,13,24,35.62,36,"Jun 11, 2025",8,701,https://arxiv.org/abs/2506.09980
341,3D Gaussian Splatting for Real-Time Radiance Field Rendering,41,41,46.12,46,"Aug 8, 2023",192,19600,https://arxiv.org/abs/2308.04079
342,A Style is Worth One Code: Unlocking Code-to-Style Image Generation with Discrete Style Space,9,19,29.67,33,"Nov 13, 2025",53,122,https://arxiv.org/abs/2511.10555
343,"A Vision-Language-Action-Critic Model for Robotic Real-World
Reinforcement Learning",11,21,33.64,33,"Sep 19, 2025",16,145,https://arxiv.org/abs/2509.15937
344,"LivePortrait: Efficient Portrait Animation with Stitching and
Retargeting Control",29,39,44.48,44,"Jul 3, 2024",3,16900,https://arxiv.org/abs/2407.03168
345,"MMR1: Enhancing Multimodal Reasoning with Variance-Aware Sampling and
Open Resources",11,24,33.91,31,"Sep 25, 2025",90,190,https://arxiv.org/abs/2509.21268
346,"Time Is a Feature: Exploiting Temporal Dynamics in Diffusion Language
Models",13,25,36.62,34,"Aug 12, 2025",31,38,https://arxiv.org/abs/2508.09138
347,Yume: An Interactive World Generation Model,4,3,5.25,4,"Jul 23, 2025",69,185,https://arxiv.org/abs/2507.17744
348,Fast-dLLM v2: Efficient Block-Diffusion LLM,7,7,25,22,"Sep 30, 2025",40,537,https://arxiv.org/abs/2509.26328
349,"DeepScientist: Advancing Frontier-Pushing Scientific Findings
Progressively",11,26,34.64,32,"Sep 30, 2025",16,119,https://arxiv.org/abs/2509.26603
350,GigaBrain-0: A World Model-Powered Vision-Language-Action Model,13,30,37.15,36,"Oct 22, 2025",46,218,https://arxiv.org/abs/2510.19430
351,Visual Jigsaw Post-Training Improves MLLMs,8,20,28.62,26,"Sep 29, 2025",34,29,https://arxiv.org/abs/2509.25190
352,GenExam: A Multidisciplinary Text-to-Image Exam,8,17,28.75,28,"Sep 17, 2025",16,17,https://arxiv.org/abs/2509.14232
353,"CapRL: Stimulating Dense Image Caption Capabilities via Reinforcement
Learning",8,11,29.12,28,"Sep 26, 2025",30,62,https://arxiv.org/abs/2509.22647
354,"InternSVG: Towards Unified SVG Tasks with Multimodal Large Language
Models",6,10,21.83,25,"Oct 13, 2025",31,54,https://arxiv.org/abs/2510.11341
355,"UniMoE-Audio: Unified Speech and Music Generation with Dynamic-Capacity
MoE",12,32,36.5,35,"Oct 15, 2025",62,1010,https://arxiv.org/abs/2510.13344
356,"VITA-E: Natural Embodied Interaction with Concurrent Seeing, Hearing,
Speaking, and Acting",8,20,29.5,25,"Oct 21, 2025",41,120,https://arxiv.org/abs/2510.21817
357,EdgeTAM: On-Device Track Anything Model,16,31,40.38,40,"Jan 13, 2025",1,757,https://arxiv.org/abs/2501.07256
358,"Spatial-SSRL: Enhancing Spatial Understanding via Self-Supervised
Reinforcement Learning",11,25,36.09,34,"Oct 31, 2025",25,54,https://arxiv.org/abs/2510.27606
359,Self-Rewarding Vision-Language Model via Reasoning Decomposition,16,33,40.88,42,"Aug 27, 2025",77,79,https://arxiv.org/abs/2508.19652
360,Uni-MoE: Scaling Unified Multimodal LLMs with Mixture of Experts,11,32,36.27,35,"May 18, 2024",19,1010,https://arxiv.org/abs/2405.11273
361,"U-Bench: A Comprehensive Understanding of U-Net through 100-Variant
Benchmarking",8,21,31,28,"Oct 8, 2025",3,65,https://arxiv.org/abs/2510.07041
362,VChain: Chain-of-Visual-Thought for Reasoning in Video Generation,9,25,33.33,29,"Oct 6, 2025",34,60,https://arxiv.org/abs/2510.05094
363,"EasySteer: A Unified Framework for High-Performance and Extensible LLM
Steering",9,23,33.56,30,"Sep 29, 2025",25,49,https://arxiv.org/abs/2509.25175
364,LongCodeZip: Compress Long Context for Code Language Models,8,23,31.38,30,"Oct 1, 2025",88,63,https://arxiv.org/abs/2510.00446
365,"ARTDECO: Towards Efficient and High-Fidelity On-the-Fly 3D
Reconstruction with Structured Scene Representation",9,25,33.56,32,"Oct 9, 2025",30,73,https://arxiv.org/abs/2510.08551
366,DeCo: Frequency-Decoupled Pixel Diffusion for End-to-End Image Generation,10,25,35.4,33,"Nov 24, 2025",58,75,https://arxiv.org/abs/2511.19365
367,π^3: Scalable Permutation-Equivariant Visual Geometry Learning,6,10,25.17,17,"Jul 17, 2025",64,1100,https://arxiv.org/abs/2507.13347
368,LoopTool: Closing the Data-Training Loop for Robust LLM Tool Calls,9,21,33.78,39,"Nov 12, 2025",15,20,https://arxiv.org/abs/2511.09148
369,Explain Before You Answer: A Survey on Compositional Visual Reasoning,19,35,42.89,44,"Aug 24, 2025",4,279,https://arxiv.org/abs/2508.17298
370,IMG: Calibrating Diffusion Models via Implicit Multimodal Guidance,7,16,29.14,21,"Sep 30, 2025",16,25,https://arxiv.org/abs/2509.26231
371,ObjectClear: Complete Object Removal via Object-Effect Attention,4,8,13,11,"May 28, 2025",1,328,https://arxiv.org/abs/2505.22636
372,"LLaVA-OneVision-1.5: Fully Open Framework for Democratized Multimodal
Training",16,33,41.5,41,"Sep 28, 2025",40,508,https://arxiv.org/abs/2509.23661
373,"Stable Video Infinity: Infinite-Length Video Generation with Error
Recycling",19,34,43.05,45,"Oct 10, 2025",14,388,https://arxiv.org/abs/2510.09212
374,SAC: Neural Speech Codec with Semantic-Acoustic Dual-Stream Quantization,7,16,29.57,31,"Oct 19, 2025",0,60,https://arxiv.org/abs/2510.16841
375,ReasonRank: Empowering Passage Ranking with Strong Reasoning Ability,13,28,39.54,40,"Aug 9, 2025",109,101,https://arxiv.org/abs/2508.07050
376,DiT360: High-Fidelity Panoramic Image Generation via Hybrid Training,9,21,34.56,38,"Oct 13, 2025",29,105,https://arxiv.org/abs/2510.11712
377,AnyUp: Universal Feature Upsampling,11,31,37.73,37,"Oct 14, 2025",10,268,https://arxiv.org/abs/2510.12764
378,"Mixture-of-Recursions: Learning Dynamic Recursive Depths for Adaptive
Token-Level Computation",4,11,15,13,"Jul 14, 2025",60,311,https://arxiv.org/abs/2507.10524
379,Performance Prediction for Large Systems via Text-to-Text Regression,15,29,41.47,45,"Jun 26, 2025",6,255,https://arxiv.org/abs/2506.21718
380,Sequential Diffusion Language Models,7,22,30.71,29,"Sep 28, 2025",29,29,https://arxiv.org/abs/2509.24007
381,TiViBench: Benchmarking Think-in-Video Reasoning for Video Generative Models,5,17,22.6,23,"Nov 17, 2025",40,50,https://arxiv.org/abs/2511.13704
382,Gradio: Hassle-Free Sharing and Testing of ML Models in the Wild,10,32,36.9,35,"Jun 6, 2019",1,40700,https://arxiv.org/abs/1906.02569
383,"Elevating 3D Models: High-Quality Texture and Geometry Refinement from a
Low-Quality Model",4,9,17.25,15,"Jul 15, 2025",11,106,https://arxiv.org/abs/2507.11465
384,"Patch-as-Decodable-Token: Towards Unified Multi-Modal Vision Tasks in
MLLMs",7,23,31.86,29,"Oct 2, 2025",12,183,https://arxiv.org/abs/2510.01954
385,"More Thought, Less Accuracy? On the Dual Nature of Reasoning in
Vision-Language Models",9,26,36.33,35,"Sep 30, 2025",56,44,https://arxiv.org/abs/2509.25848
386,"Open-o3 Video: Grounded Video Reasoning with Explicit Spatio-Temporal
Evidence",9,22,36.44,34,"Oct 23, 2025",46,63,https://arxiv.org/abs/2510.20579
387,AutoPR: Let's Automate Your Academic Promotion!,5,8,25.2,26,"Oct 10, 2025",43,36,https://arxiv.org/abs/2510.09558
388,"VMem: Consistent Interactive Video Scene Generation with Surfel-Indexed
View Memory",4,18,19,18,"Jun 23, 2025",22,255,https://arxiv.org/abs/2506.18903
389,RLP: Reinforcement as a Pretraining Objective,9,25,36.78,34,"Sep 26, 2025",32,149,https://arxiv.org/abs/2510.01265
390,"Gated Attention for Large Language Models: Non-linearity, Sparsity, and
Attention-Sink-Free",10,31,38.2,36,"May 10, 2025",6,240,https://arxiv.org/abs/2505.06708
391,"CMPhysBench: A Benchmark for Evaluating Large Language Models in
Condensed Matter Physics",5,13,25.6,19,"Aug 25, 2025",45,16,https://arxiv.org/abs/2508.18124
392,Spotlight on Token Perception for Multimodal Reinforcement Learning,6,10,29.83,27,"Oct 10, 2025",31,26,https://arxiv.org/abs/2510.09285
393,Video-as-Answer: Predict and Generate Next Video Event with Joint-GRPO,6,23,29.83,29,"Nov 20, 2025",29,44,https://arxiv.org/abs/2511.16669
394,"SyGra: A Unified Graph-Based Framework for Scalable Generation, Quality
Tagging, and Management of Synthetic Data",6,16,30,28,"Aug 21, 2025",6,16,https://arxiv.org/abs/2508.15432
395,"Uni-cot: Towards Unified Chain-of-Thought Reasoning Across Text and
Vision",7,24,33.29,32,"Aug 7, 2025",0,136,https://arxiv.org/abs/2508.05606
396,SRUM: Fine-Grained Self-Rewarding for Unified Multimodal Models,9,15,37.22,44,"Oct 14, 2025",17,51,https://arxiv.org/abs/2510.12784
397,"JanusCoder: Towards a Foundational Visual-Programmatic Interface for
Code Intelligence",7,19,33.29,38,"Oct 27, 2025",90,54,https://arxiv.org/abs/2510.23538
398,"SearchInstruct: Enhancing Domain Adaptation via Retrieval-Based
Instruction Dataset Creation",5,18,26.4,26,"Sep 12, 2025",9,8,https://arxiv.org/abs/2509.10708
399,RynnVLA-002: A Unified Vision-Language-Action and World Model,11,29,39.82,41,"Nov 21, 2025",24,669,https://arxiv.org/abs/2511.17502
400,BitNet Distillation,13,35,41.62,41,"Oct 15, 2025",44,24300,https://arxiv.org/abs/2510.13998
401,UniREditBench: A Unified Reasoning-based Image Editing Benchmark,8,28,35.88,32,"Nov 3, 2025",35,37,https://arxiv.org/abs/2511.01295
402,Artificial Hippocampus Networks for Efficient Long-Context Modeling,6,24,31,28,"Oct 8, 2025",22,67,https://arxiv.org/abs/2510.07318
403,Agentic Entropy-Balanced Policy Optimization,8,30,36.12,33,"Oct 16, 2025",90,694,https://arxiv.org/abs/2510.14545
404,"Pass@k Training for Adaptively Balancing Exploration and Exploitation of
Large Reasoning Models",21,40,45.43,45,"Aug 14, 2025",21,47,https://arxiv.org/abs/2508.10751
405,"URSA: Understanding and Verifying Chain-of-thought Reasoning in
Multimodal Mathematics",6,26,31.83,32,"Jan 8, 2025",53,117,https://arxiv.org/abs/2501.04686
406,"MCPMark: A Benchmark for Stress-Testing Realistic and Comprehensive MCP
Use",8,32,36.62,35,"Sep 28, 2025",145,201,https://arxiv.org/abs/2509.24002
407,"AMFT: Aligning LLM Reasoners by Meta-Learning the Optimal
Imitation-Exploration Balance",16,31,43.88,45,"Aug 9, 2025",2,163,https://arxiv.org/abs/2508.06944
408,"Diffuman4D: 4D Consistent Human View Synthesis from Sparse-View Videos
with Spatio-Temporal Diffusion Models",4,14,22.75,25,"Jul 17, 2025",50,246,https://arxiv.org/abs/2507.13344
409,"Cache-to-Cache: Direct Semantic Communication Between Large Language
Models",10,35,39.8,40,"Oct 3, 2025",95,239,https://arxiv.org/abs/2510.03215
410,"SageAttention3: Microscaling FP4 Attention for Inference and An
Exploration of 8-Bit Training",12,26,41.75,43,"May 16, 2025",76,2450,https://arxiv.org/abs/2505.11594
411,NANO3D: A Training-Free Approach for Efficient 3D Editing Without Masks,5,22,28.8,25,"Oct 16, 2025",52,47,https://arxiv.org/abs/2510.15019
412,"MeshSplat: Generalizable Sparse-View Surface Reconstruction via Gaussian
Splatting",6,22,32.67,28,"Aug 25, 2025",4,18,https://arxiv.org/abs/2508.17811
413,"Drawing2CAD: Sequence-to-Sequence Learning for CAD Generation from
Vector Drawings",8,22,37.38,41,"Aug 26, 2025",2,50,https://arxiv.org/abs/2508.18733
414,OceanGym: A Benchmark Environment for Underwater Embodied Agents,5,22,29.2,28,"Sep 30, 2025",30,28,https://arxiv.org/abs/2509.26536
415,GR00T N1: An Open Foundation Model for Generalist Humanoid Robots,13,27,42.77,47,"Mar 18, 2025",4,4800,https://arxiv.org/abs/2503.14734
416,TexVerse: A Universe of 3D Objects with High-Resolution Textures,12,37,42.08,40,"Aug 14, 2025",13,180,https://arxiv.org/abs/2508.10868
417,"GSM8K-V: Can Vision Language Models Solve Grade School Math Word
Problems in Visual Contexts",6,24,33.17,31,"Sep 29, 2025",25,21,https://arxiv.org/abs/2509.25160
418,"TOUCAN: Synthesizing 1.5M Tool-Agentic Data from Real-World MCP
Environments",9,30,39.11,41,"Oct 1, 2025",16,60,https://arxiv.org/abs/2510.01179
419,"CoDiEmb: A Collaborative yet Distinct Framework for Unified
Representation Learning in Information Retrieval and Semantic Textual
Similarity",10,36,40.3,39,"Aug 15, 2025",3,125,https://arxiv.org/abs/2508.11442
420,3D-R1: Enhancing Reasoning in 3D VLMs for Unified Scene Understanding,10,27,40.4,42,"Jul 31, 2025",15,307,https://arxiv.org/abs/2507.23478
421,"VLA-RFT: Vision-Language-Action Reinforcement Fine-tuning with Verified
Rewards in World Simulators",5,20,30,25,"Oct 1, 2025",55,28,https://arxiv.org/abs/2510.00406
422,"SeC: Advancing Complex Video Object Segmentation via Progressive Concept
Construction",3,12,16.67,19,"Jul 21, 2025",34,98,https://arxiv.org/abs/2507.15852
423,Aligning Multimodal LLM with Human Preference: A Survey,20,31,45.95,48,"Mar 18, 2025",26,16300,https://arxiv.org/abs/2503.14504
424,Skywork R1V: Pioneering Multimodal Reasoning with Chain-of-Thought,8,23,38.5,42,"Apr 8, 2025",86,3080,https://arxiv.org/abs/2504.05599
425,"HierSearch: A Hierarchical Enterprise Deep Search Framework Integrating
Local and Web Searches",8,33,38.5,37,"Aug 11, 2025",26,30,https://arxiv.org/abs/2508.08088
426,SIM-CoT: Supervised Implicit Chain-of-Thought,8,33,38.5,34,"Sep 24, 2025",33,46,https://arxiv.org/abs/2509.20317
427,"RewardMap: Tackling Sparse Rewards in Fine-grained Visual Reasoning via
Multi-Stage Reinforcement Learning",5,18,31.2,29,"Oct 2, 2025",15,22,https://arxiv.org/abs/2510.02240
428,Generating Physically Stable and Buildable LEGO Designs from Text,7,27,37,41,"May 8, 2025",28,1430,https://arxiv.org/abs/2505.05469
429,"Sparse VideoGen2: Accelerate Video Generation with Sparse Attention via
Semantic-Aware Permutation",8,35,39.12,38,"May 24, 2025",42,472,https://arxiv.org/abs/2505.18875
430,"Sparse VideoGen: Accelerating Video Diffusion Transformers with
Spatial-Temporal Sparsity",8,34,39.12,38,"Feb 3, 2025",3,472,https://arxiv.org/abs/2502.01776
431,"Vision-Zero: Scalable VLM Self-Improvement via Strategic Gamified
Self-Play",9,29,40.44,44,"Sep 29, 2025",120,40,https://arxiv.org/abs/2509.25541
432,"OmniVideoBench: Towards Audio-Visual Understanding Evaluation for Omni
MLLMs",6,23,35.17,33,"Oct 12, 2025",40,25,https://arxiv.org/abs/2510.10689
433,"E^2Rank: Your Text Embedding can Also be an Effective
and Efficient Listwise Reranker",5,29,32,33,"Oct 26, 2025",29,22,https://arxiv.org/abs/2510.22733
434,"LIBERO-Plus: In-depth Robustness Analysis of Vision-Language-Action
Models",9,30,40.56,41,"Oct 15, 2025",40,46,https://arxiv.org/abs/2510.13626
435,Drax: Speech Recognition with Discrete Flow Matching,4,21,27.75,21,"Oct 5, 2025",23,18,https://arxiv.org/abs/2510.04162
436,"OS-Sentinel: Towards Safety-Enhanced Mobile GUI Agents via Hybrid
Validation in Realistic Workflows",9,29,40.67,45,"Oct 28, 2025",69,31,https://arxiv.org/abs/2510.24411
437,Symbolic Graphics Programming with Large Language Models,10,33,42,44,"Sep 5, 2025",37,18,https://arxiv.org/abs/2509.05208
438,SparseD: Sparse Attention for Diffusion Language Models,6,30,36,34,"Sep 28, 2025",26,33,https://arxiv.org/abs/2509.24014
439,"Language Model Council: Benchmarking Foundation Models on Highly
Subjective Tasks by Consensus",14,35,44.57,46,"Jun 12, 2024",6,180,https://arxiv.org/abs/2406.08598
440,"MM-HELIX: Boosting Multimodal Long-Chain Reflective Reasoning with
Holistic Platform and Adaptive Hybrid Policy Optimization",7,33,38.43,37,"Oct 9, 2025",98,55,https://arxiv.org/abs/2510.08540
441,PICABench: How Far Are We from Physically Realistic Image Editing?,3,12,21.67,15,"Oct 20, 2025",57,13,https://arxiv.org/abs/2510.17681
442,"The Tool Decathlon: Benchmarking Language Agents for Diverse, Realistic,
and Long-Horizon Task Execution",10,38,42.4,42,"Oct 29, 2025",42,99,https://arxiv.org/abs/2510.25726
443,Next Visual Granularity Generation,7,33,38.86,35,"Aug 18, 2025",37,7,https://arxiv.org/abs/2508.12811
444,Droplet3D: Commonsense Priors from Videos Facilitate 3D Generation,8,35,40.5,39,"Aug 28, 2025",62,18,https://arxiv.org/abs/2508.20470
445,OmniGen2: Exploration to Advanced Multimodal Generation,10,32,42.7,42,"Jun 23, 2025",75,3860,https://arxiv.org/abs/2506.18871
446,"JavisDiT: Joint Audio-Video Diffusion Transformer with Hierarchical
Spatio-Temporal Prior Synchronization",16,35,45.88,47,"Mar 30, 2025",57,265,https://arxiv.org/abs/2503.23377
447,"UP2You: Fast Reconstruction of Yourself from Unconstrained Photo
Collections",7,22,39.29,45,"Sep 29, 2025",8,161,https://arxiv.org/abs/2509.24817
448,"LLaSO: A Foundational Framework for Reproducible Research in Large
Language and Speech Model",7,28,39.43,36,"Aug 21, 2025",4,79,https://arxiv.org/abs/2508.15418
449,"MUG-V 10B: High-efficiency Training Pipeline for Large Video Generation
Models",3,18,24,24,"Oct 20, 2025",9,59,https://arxiv.org/abs/2510.17519
450,"PartCrafter: Structured 3D Mesh Generation via Compositional Latent
Diffusion Transformers",7,23,39.71,40,"Jun 5, 2025",79,2180,https://arxiv.org/abs/2506.05573
451,Discrete Diffusion in Large Language and Multimodal Models: A Survey,13,36,44.92,46,"Jun 16, 2025",43,267,https://arxiv.org/abs/2506.13759
452,"PosterGen: Aesthetic-Aware Paper-to-Poster Generation via Multi-Agent
LLMs",11,35,43.91,43,"Aug 24, 2025",15,105,https://arxiv.org/abs/2508.17188
453,"Implicit Actor Critic Coupling via a Supervised Learning Framework for
RLVR",4,28,31.75,32,"Sep 2, 2025",19,18,https://arxiv.org/abs/2509.02522
454,UniME-V2: MLLM-as-a-Judge for Universal Multimodal Embedding Learning,6,27,38.17,39,"Oct 15, 2025",11,26,https://arxiv.org/abs/2510.13515
455,Part-X-MLLM: Part-aware 3D Multimodal Large Language Model,3,25,25.33,25,"Nov 17, 2025",65,49,https://arxiv.org/abs/2511.13647
456,Scaling RL to Long Videos,3,21,25.67,28,"Jul 10, 2025",141,538,https://arxiv.org/abs/2507.07966
457,"X-VLA: Soft-Prompted Transformer as Scalable Cross-Embodiment
Vision-Language-Action Model",7,32,40.14,42,"Oct 11, 2025",12,56,https://arxiv.org/abs/2510.10274
458,"Promptomatix: An Automatic Prompt Optimization Framework for Large
Language Models",3,19,26.33,30,"Jul 17, 2025",13,61,https://arxiv.org/abs/2507.14241
459,Optimized Minimal 4D Gaussian Splatting,5,30,36.4,35,"Oct 4, 2025",3,29,https://arxiv.org/abs/2510.03857
460,Defeating the Training-Inference Mismatch via FP16,9,33,42.89,47,"Oct 30, 2025",23,112,https://arxiv.org/abs/2510.26788
461,"Teaching Pretrained Language Models to Think Deeper with Retrofitted
Recurrence",7,36,40.57,36,"Nov 10, 2025",11,23,https://arxiv.org/abs/2511.07384
462,RynnVLA-001: Using Human Demonstrations to Improve Robot Manipulation,7,32,40.71,41,"Sep 18, 2025",20,208,https://arxiv.org/abs/2509.15212
463,MediaPipe: A Framework for Building Perception Pipelines,11,39,44.45,45,"Jun 14, 2019",1,32100,https://arxiv.org/abs/1906.08172
464,PPTAgent: Generating and Evaluating Presentations Beyond Text-to-Slides,2,15,15,15,"Jan 7, 2025",22,2430,https://arxiv.org/abs/2501.03936
465,WideSearch: Benchmarking Agentic Broad Info-Seeking,7,37,40.86,41,"Aug 11, 2025",96,62,https://arxiv.org/abs/2508.07999
466,"Efficient Multi-modal Large Language Models via Progressive Consistency
Distillation",4,23,33.25,33,"Oct 1, 2025",37,13,https://arxiv.org/abs/2510.00515
467,UnSAMv2: Self-Supervised Learning Enables Segment Anything at Any Granularity,3,27,28,28,"Nov 17, 2025",4,34,https://arxiv.org/abs/2511.13714
468,"SonicMaster: Towards Controllable All-in-One Music Restoration and
Mastering",7,32,41.29,39,"Aug 5, 2025",1,74,https://arxiv.org/abs/2508.03448
469,DINO-Foresight: Looking into the Future with DINO,7,39,41.29,41,"Dec 16, 2024",1,107,https://arxiv.org/abs/2412.11673
470,"Euclid's Gift: Enhancing Spatial Perception and Reasoning in
Vision-Language Models via Geometric Surrogate Tasks",4,29,34.25,31,"Sep 29, 2025",15,13,https://arxiv.org/abs/2509.24473
471,CoDA: Coding LM via Diffusion Adaptation,5,34,37.6,37,"Sep 27, 2025",25,21,https://arxiv.org/abs/2510.03270
472,"MiniMax-M1: Scaling Test-Time Compute Efficiently with Lightning
Attention",8,34,42.75,44,"Jun 16, 2025",265,2890,https://arxiv.org/abs/2506.13585
473,"Hyperspherical Latents Improve Continuous-Token Autoregressive
Generation",6,30,40,42,"Sep 29, 2025",6,43,https://arxiv.org/abs/2509.24335
474,VLA-0: Building State-of-the-Art VLAs with Zero Modification,6,33,40,39,"Oct 15, 2025",8,103,https://arxiv.org/abs/2510.13054
475,"Self Forcing: Bridging the Train-Test Gap in Autoregressive Video
Diffusion",17,41,47.18,49,"Jun 9, 2025",27,2420,https://arxiv.org/abs/2506.08009
476,"RealUnify: Do Unified Models Truly Benefit from Unification? A
Comprehensive Benchmark",5,29,38,38,"Sep 29, 2025",41,14,https://arxiv.org/abs/2509.24897
477,Uniform Discrete Diffusion with Metric Path for Video Generation,5,24,38,41,"Oct 28, 2025",39,49,https://arxiv.org/abs/2510.24717
478,"InternVLA-M1: A Spatially Guided Vision-Language-Action Framework for
Generalist Robot Policy",7,35,42,44,"Oct 15, 2025",13,167,https://arxiv.org/abs/2510.13778
479,Vidi: Large Multimodal Models for Video Understanding and Editing,4,27,35.5,37,"Apr 22, 2025",14,247,https://arxiv.org/abs/2504.15681
480,UniVid: Unifying Vision Tasks with Pre-trained Video Generation Models,5,23,38.8,48,"Sep 26, 2025",11,25,https://arxiv.org/abs/2509.21760
481,Sparser Block-Sparse Attention via Token Permutation,3,22,31,26,"Oct 24, 2025",22,19,https://arxiv.org/abs/2510.21270
482,DeepEyesV2: Toward Agentic Multimodal Model,8,39,43.5,45,"Nov 7, 2025",35,963,https://arxiv.org/abs/2511.05271
483,Rethinking Reward Models for Multi-Domain Test-Time Scaling,3,28,31.33,33,"Oct 1, 2025",23,14,https://arxiv.org/abs/2510.00492
484,Depth Anything with Any Prior,6,37,41.17,40,"May 15, 2025",12,389,https://arxiv.org/abs/2505.10565
485,AU-Harness: An Open-Source Toolkit for Holistic Evaluation of Audio LLMs,4,30,36.5,33,"Sep 9, 2025",18,35,https://arxiv.org/abs/2509.08031
486,NORA-1.5: A Vision-Language-Action Model Trained using World Model- and Action-based Preference Rewards,3,31,31.67,32,"Nov 18, 2025",7,26,https://arxiv.org/abs/2511.14659
487,"Learning to Optimize Multi-Objective Alignment Through Dynamic Reward
Weighting",4,35,37.5,35,"Sep 14, 2025",8,4,https://arxiv.org/abs/2509.11452
488,UQ: Assessing Language Models on Unsolved Questions,5,27,40.4,45,"Aug 25, 2025",11,15,https://arxiv.org/abs/2508.17580
489,"MAS-Bench: A Unified Benchmark for Shortcut-Augmented Hybrid Mobile GUI
Agents",4,27,37.75,40,"Sep 8, 2025",2,8,https://arxiv.org/abs/2509.06477
490,"ObjFiller-3D: Consistent Multi-view 3D Inpainting via Video Diffusion
Models",4,25,38,42,"Aug 25, 2025",5,25,https://arxiv.org/abs/2508.18271
491,"RenderFormer: Transformer-based Neural Rendering of Triangle Meshes with
Global Illumination",5,34,40.6,39,"May 28, 2025",37,765,https://arxiv.org/abs/2505.21925
492,Upsample Anything: A Simple and Hard to Beat Baseline for Feature Upsampling,9,38,45.22,46,"Nov 20, 2025",6,105,https://arxiv.org/abs/2511.16301
493,Foundations of Large Language Models,5,31,40.8,40,"Jan 16, 2025",11,264,https://arxiv.org/abs/2501.09223
494,DanceGRPO: Unleashing GRPO on Visual Generation,11,37,46.45,47,"May 12, 2025",32,942,https://arxiv.org/abs/2505.07818
495,SPARK: Synergistic Policy And Reward Co-Evolving Framework,3,24,34.33,30,"Sep 26, 2025",15,17,https://arxiv.org/abs/2509.22624
496,WEAVE: Unleashing and Benchmarking the In-context Interleaved Comprehension and Generation,4,30,38.5,38,"Nov 14, 2025",42,25,https://arxiv.org/abs/2511.11434
497,"Eliciting Fine-Tuned Transformer Capabilities via Inference-Time
Techniques",7,34,44,48,"Jun 9, 2025",8,2910,https://arxiv.org/abs/2506.08060
498,SpatialTrackerV2: 3D Point Tracking Made Easy,3,32,35.33,32,"Jul 16, 2025",14,658,https://arxiv.org/abs/2507.12462
499,Reasoning with Sampling: Your Base Model is Smarter Than You Think,3,27,35.33,33,"Oct 16, 2025",35,229,https://arxiv.org/abs/2510.14901
500,"DocETL: Agentic Query Rewriting and Evaluation for Complex Document
Processing",5,37,41.6,43,"Oct 16, 2024",1,3180,https://arxiv.org/abs/2410.12189
501,Nav-R1: Reasoning and Navigation in Embodied Scenes,6,39,43.33,43,"Sep 13, 2025",4,21,https://arxiv.org/abs/2509.10884
502,"EPO: Entropy-regularized Policy Optimization for LLM Agents
Reinforcement Learning",4,23,39.5,44,"Sep 26, 2025",113,20,https://arxiv.org/abs/2509.22576
503,"PUSA V1.0: Surpassing Wan-I2V with $500 Training Cost by Vectorized
Timestep Adaptation",4,28,39.75,42,"Jul 22, 2025",9,545,https://arxiv.org/abs/2507.16116
504,"Efficient Multi-turn RL for GUI Agents via Decoupled Training and
Adaptive Data Curation",3,32,36,35,"Sep 28, 2025",7,9,https://arxiv.org/abs/2509.23866
505,OBS-Diff: Accurate Pruning For Diffusion Models in One-Shot,2,26,28.5,28,"Oct 8, 2025",14,24,https://arxiv.org/abs/2510.06751
506,GUI-G^2: Gaussian Reward Modeling for GUI Grounding,2,24,29,29,"Jul 21, 2025",118,138,https://arxiv.org/abs/2507.15846
507,GIR-Bench: Versatile Benchmark for Generating Images with Reasoning,3,29,37,39,"Oct 13, 2025",16,23,https://arxiv.org/abs/2510.11026
508,Beyond English: Toward Inclusive and Scalable Multilingual Machine Translation with LLMs,7,42,45,46,"Nov 10, 2025",31,92,https://arxiv.org/abs/2511.07003
509,"Embodied-R1: Reinforced Embodied Reasoning for General Robotic
Manipulation",4,31,40.75,42,"Aug 19, 2025",12,24,https://arxiv.org/abs/2508.13998
510,"SwiftEdit: Lightning Fast Text-Guided Image Editing via One-Step
Diffusion",3,36,37.33,36,"Dec 5, 2024",41,7,https://arxiv.org/abs/2412.04301
511,CLEAR: Error Analysis via LLM-as-a-Judge Made Easy,1,11,11,11,"Jul 24, 2025",8,10,https://arxiv.org/abs/2507.18392
512,MindSearch: Mimicking Human Minds Elicits Deep AI Searcher,7,43,45.29,46,"Jul 29, 2024",44,6580,https://arxiv.org/abs/2407.20183
513,dParallel: Learnable Parallel Decoding for dLLMs,5,40,43,42,"Sep 30, 2025",17,16,https://arxiv.org/abs/2509.26488
514,"ReCogDrive: A Reinforced Cognitive Framework for End-to-End Autonomous
Driving",8,39,46.12,47,"Jun 9, 2025",1,183,https://arxiv.org/abs/2506.08052
515,AWorld: Orchestrating the Training Recipe for Agentic AI,7,36,45.43,46,"Aug 28, 2025",37,692,https://arxiv.org/abs/2508.20404
516,Reverse-Engineered Reasoning for Open-Ended Generation,4,38,41.25,40,"Sep 7, 2025",127,34,https://arxiv.org/abs/2509.06160
517,MiniCPM4: Ultra-Efficient LLMs on End Devices,7,41,45.57,47,"Jun 9, 2025",90,8300,https://arxiv.org/abs/2506.07900
518,SpaceVista: All-Scale Visual Spatial Reasoning from mm to km,2,30,32,32,"Oct 10, 2025",16,22,https://arxiv.org/abs/2510.09606
519,Vlaser: Vision-Language-Action Model with Synergistic Embodied Reasoning,3,30,38.67,41,"Oct 13, 2025",14,15,https://arxiv.org/abs/2510.11027
520,"Generating an Image From 1,000 Words: Enhancing Text-to-Image With
Structured Captions",5,38,43.6,41,"Nov 10, 2025",19,248,https://arxiv.org/abs/2511.06876
521,"VideoGen-of-Thought: A Collaborative Framework for Multi-Shot Video
Generation",4,38,42,41,"Dec 3, 2024",60,42,https://arxiv.org/abs/2412.02259
522,"Spatial Forcing: Implicit Spatial Representation Alignment for
Vision-language-action Model",4,39,42.25,41,"Oct 14, 2025",134,43,https://arxiv.org/abs/2510.12276
523,MGM-Omni: Scaling Omni LLMs to Personalized Long-Horizon Speech,5,40,44.2,42,"Sep 29, 2025",11,133,https://arxiv.org/abs/2509.25131
524,"Building a Foundational Guardrail for General Agentic Systems via
Synthetic Data",3,33,39.67,42,"Oct 10, 2025",21,26,https://arxiv.org/abs/2510.09781
525,Mantis: A Versatile Vision-Language-Action Model with Disentangled Visual Foresight,2,32,34,34,"Nov 20, 2025",10,18,https://arxiv.org/abs/2511.16175
526,"Franca: Nested Matryoshka Clustering for Scalable Visual Representation
Learning",4,28,42.75,47,"Jul 18, 2025",27,189,https://arxiv.org/abs/2507.14137
527,"How Far are VLMs from Visual Spatial Intelligence? A Benchmark-Driven
Perspective",3,37,40,37,"Sep 23, 2025",22,10,https://arxiv.org/abs/2509.18905
528,Regression Language Models for Code,6,40,45.5,47,"Sep 30, 2025",13,257,https://arxiv.org/abs/2509.26476
529,"CoIRL-AD: Collaborative-Competitive Imitation-Reinforcement Learning in
Latent World Models for Autonomous Driving",3,36,40,42,"Oct 14, 2025",4,10,https://arxiv.org/abs/2510.12560
530,"Scaling Text-Rich Image Understanding via Code-Guided Synthetic
Multimodal Data Generation",2,35,35,35,"Feb 20, 2025",13,103,https://arxiv.org/abs/2502.14846
531,"StreamDiffusion: A Pipeline-level Solution for Real-time Interactive
Generation",2,34,35,35,"Dec 19, 2023",73,10400,https://arxiv.org/abs/2312.12491
532,"LLMs Can Get ""Brain Rot""!",4,38,43,42,"Oct 15, 2025",19,72,https://arxiv.org/abs/2510.13928
533,"Reinforcement Learning Optimization for Large-Scale Learning: An
Efficient and User-Friendly Scaling Library",5,40,44.6,45,"Jun 6, 2025",7,2270,https://arxiv.org/abs/2506.06122
534,ConsistEdit: Highly Consistent and Precise Training-free Visual Editing,2,35,35.5,35,"Oct 20, 2025",11,25,https://arxiv.org/abs/2510.17803
535,"TalkVid: A Large-Scale Diversified Dataset for Audio-Driven Talking Head
Synthesis",4,34,43.5,45,"Aug 19, 2025",16,72,https://arxiv.org/abs/2508.13618
536,Energy-Based Transformers are Scalable Learners and Thinkers,8,44,47.25,47,"Jul 2, 2025",65,463,https://arxiv.org/abs/2507.02092
537,"PromptCoT 2.0: Scaling Prompt Synthesis for Large Language Model
Reasoning",3,28,41,47,"Sep 24, 2025",29,93,https://arxiv.org/abs/2509.19894
538,Multi-Agent Tool-Integrated Policy Optimization,2,35,36,36,"Oct 6, 2025",19,20,https://arxiv.org/abs/2510.04678
539,"VisionThink: Smart and Efficient Vision Language Model via Reinforcement
Learning",3,33,41.33,45,"Jul 17, 2025",69,330,https://arxiv.org/abs/2507.13348
540,Rectified Point Flow: Generic Point Cloud Pose Estimation,3,34,41.33,44,"Jun 5, 2025",3,123,https://arxiv.org/abs/2506.05282
541,"Equilibrium Matching: Generative Modeling with Implicit Energy-Based
Models",4,41,43.75,44,"Oct 2, 2025",5,77,https://arxiv.org/abs/2510.02300
542,A decoder-only foundation model for time-series forecasting,6,44,46.17,46,"Oct 14, 2023",6,7060,https://arxiv.org/abs/2310.10688
543,MoDA: Multi-modal Diffusion Architecture for Talking Head Generation,9,46,47.89,48,"Jul 4, 2025",2,140,https://arxiv.org/abs/2507.03256
544,"Gradient-Attention Guided Dual-Masking Synergetic Framework for Robust
Text-based Person Retrieval",3,30,41.67,45,"Sep 11, 2025",6,10,https://arxiv.org/abs/2509.09118
545,Streaming 4D Visual Geometry Transformer,1,25,25,25,"Jul 15, 2025",10,454,https://arxiv.org/abs/2507.11539
546,s3: You Don't Need That Much Data to Train a Search Agent via RL,5,41,45.8,47,"May 20, 2025",18,564,https://arxiv.org/abs/2505.14146
547,"MesaTask: Towards Task-Driven Tabletop Scene Generation via 3D Spatial
Reasoning",2,36,38,38,"Sep 26, 2025",27,24,https://arxiv.org/abs/2509.22281
548,"Ming-UniVision: Joint Image Understanding and Generation with a Unified
Continuous Tokenizer",3,38,42.33,42,"Oct 8, 2025",63,72,https://arxiv.org/abs/2510.06590
549,"iMontage: Unified, Versatile, Highly Dynamic Many-to-many Image Generation",2,34,38,38,"Nov 25, 2025",30,120,https://arxiv.org/abs/2511.20635
550,"ThinkSound: Chain-of-Thought Reasoning in Multimodal Large Language
Models for Audio Generation and Editing",2,38,38.5,38,"Jun 26, 2025",7,876,https://arxiv.org/abs/2506.21448
551,"Q-Sched: Pushing the Boundaries of Few-Step Diffusion Models with
Quantization-Aware Scheduling",2,30,38.5,38,"Sep 1, 2025",5,6,https://arxiv.org/abs/2509.01624
552,"GAS: Improving Discretization of Diffusion ODEs via Generalized
Adversarial Solver",2,30,38.5,38,"Oct 20, 2025",2,10,https://arxiv.org/abs/2510.17699
553,MUR: Momentum Uncertainty guided Reasoning for Large Language Models,3,40,43,44,"Jul 20, 2025",36,32,https://arxiv.org/abs/2507.14958
554,"Scaling Code-Assisted Chain-of-Thoughts and Instructions for Model
Reasoning",3,39,43,42,"Oct 5, 2025",18,6,https://arxiv.org/abs/2510.04081
555,"AniMaker: Automated Multi-Agent Animated Storytelling with MCTS-Driven
Clip Generation",8,46,48.12,48,"Jun 12, 2025",37,173,https://arxiv.org/abs/2506.10540
556,"D2E: Scaling Vision-Action Pretraining on Desktop Data for Transfer to
Embodied AI",2,39,39.5,39,"Oct 7, 2025",101,22,https://arxiv.org/abs/2510.05684
557,V-ReasonBench: Toward Unified Reasoning Benchmark Suite for Video Generation Models,3,38,43.33,43,"Nov 20, 2025",48,15,https://arxiv.org/abs/2511.16668
558,"DetailFlow: 1D Coarse-to-Fine Autoregressive Image Generation via
Next-Detail Prediction",1,29,29,29,"May 27, 2025",16,144,https://arxiv.org/abs/2505.21473
559,MM-BrowseComp: A Comprehensive Benchmark for Multimodal Browsing Agents,4,40,45.5,46,"Aug 14, 2025",13,10,https://arxiv.org/abs/2508.13186
560,"Mem4Nav: Boosting Vision-and-Language Navigation in Urban Environments
with a Hierarchical Spatial-Cognition Long-Short Memory System",4,41,45.5,46,"Jun 24, 2025",3,89,https://arxiv.org/abs/2506.19433
561,"Qwen3 Embedding: Advancing Text Embedding and Reranking Through
Foundation Models",5,44,46.8,46,"Jun 5, 2025",72,1390,https://arxiv.org/abs/2506.05176
562,"Part II: ROLL Flash -- Accelerating RLVR and Agentic Training with
Asynchrony",4,44,45.75,45,"Oct 13, 2025",15,2270,https://arxiv.org/abs/2510.11345
563,Agent-R1: Training Powerful LLM Agents with End-to-End Reinforcement Learning,4,44,45.75,46,"Nov 18, 2025",15,951,https://arxiv.org/abs/2511.14460
564,Deep Researcher with Test-Time Diffusion,3,36,44.33,47,"Jul 21, 2025",63,2910,https://arxiv.org/abs/2507.16075
565,RoboOmni: Proactive Robot Manipulation in Omni-modal Context,2,40,41,41,"Oct 27, 2025",52,27,https://arxiv.org/abs/2510.23763
566,LiteAttention: A Temporal Sparse Attention for Diffusion Transformers,3,42,44.33,43,"Nov 14, 2025",24,23,https://arxiv.org/abs/2511.11062
567,"SEAgent: Self-Evolving Computer Use Agent with Autonomous Learning from
Experience",4,45,46.25,46,"Aug 6, 2025",46,141,https://arxiv.org/abs/2508.04700
568,Quantile Advantage Estimation for Entropy-Safe Reasoning,2,35,41.5,41,"Sep 26, 2025",100,8,https://arxiv.org/abs/2509.22611
569,"TensorBLEU: Vectorized GPU-based BLEU Score Implementation for
Per-Sentence In-Training Evaluation",4,40,46.25,48,"Oct 7, 2025",7,13,https://arxiv.org/abs/2510.05485
570,MATRIX: Mask Track Alignment for Interaction-aware Video Generation,2,37,41.5,41,"Oct 8, 2025",29,22,https://arxiv.org/abs/2510.07310
571,"HSCodeComp: A Realistic and Expert-level Benchmark for Deep Search
Agents in Hierarchical Rule Application",4,43,46.25,46,"Oct 22, 2025",26,85,https://arxiv.org/abs/2510.19631
572,VACE: All-in-One Video Creation and Editing,3,43,45,44,"Mar 10, 2025",54,3000,https://arxiv.org/abs/2503.07598
573,Open Deep Search: Democratizing Search with Open-source Reasoning Agents,4,45,46.5,46,"Mar 26, 2025",48,3620,https://arxiv.org/abs/2503.20201
574,TUN3D: Towards Real-World Scene Understanding from Unposed Images,1,33,33,33,"Sep 23, 2025",12,11,https://arxiv.org/abs/2509.21388
575,Efficient Guided Generation for Large Language Models,3,41,45,45,"Jul 19, 2023",8,12900,https://arxiv.org/abs/2307.09702
576,UltraFlux: Data-Model Co-Design for High-quality Native 4K Text-to-Image Generation across Diverse Aspect Ratios,2,36,42,42,"Nov 22, 2025",34,42,https://arxiv.org/abs/2511.18050
577,STARFlow-V: End-to-End Video Generative Modeling with Normalizing Flow,6,45,48,48,"Nov 25, 2025",16,40,https://arxiv.org/abs/2511.20462
578,Zebra-CoT: A Dataset for Interleaved Vision Language Reasoning,1,34,34,34,"Jul 22, 2025",28,42,https://arxiv.org/abs/2507.16746
579,"From reactive to cognitive: brain-inspired spatial intelligence for
embodied agents",2,42,42.5,42,"Aug 24, 2025",3,18,https://arxiv.org/abs/2508.17198
580,"Search-R1: Training LLMs to Reason and Leverage Search Engines with
Reinforcement Learning",4,46,46.75,47,"Mar 12, 2025",35,3130,https://arxiv.org/abs/2503.09516
581,Retrieval-Augmented Generation with Hierarchical Knowledge,1,34,34,34,"Mar 13, 2025",2,359,https://arxiv.org/abs/2503.10150
582,"REASONING GYM: Reasoning Environments for Reinforcement Learning with
Verifiable Rewards",1,34,34,34,"May 30, 2025",72,1130,https://arxiv.org/abs/2505.24760
583,Chem-R: Learning to Reason as a Chemist,2,35,42.5,42,"Oct 19, 2025",45,9,https://arxiv.org/abs/2510.16880
584,Hierarchical Budget Policy Optimization for Adaptive Reasoning,1,35,35,35,"Jul 21, 2025",12,14,https://arxiv.org/abs/2507.15844
585,Qwen2.5-Omni Technical Report,4,45,47,47,"Mar 26, 2025",165,3510,https://arxiv.org/abs/2503.20215
586,Variational Reasoning for Language Models,2,36,43,43,"Sep 26, 2025",57,34,https://arxiv.org/abs/2509.22637
587,OpenVoice: Versatile Instant Voice Cloning,6,46,48.33,49,"Dec 3, 2023",3,35300,https://arxiv.org/abs/2312.01479
588,"BrowseComp-Plus: A More Fair and Transparent Evaluation Benchmark of
Deep-Research Agent",3,43,46,45,"Aug 8, 2025",35,49,https://arxiv.org/abs/2508.06600
589,"CodePlot-CoT: Mathematical Visual Reasoning by Thinking with Code-Driven
Images",3,38,46,50,"Oct 13, 2025",12,22,https://arxiv.org/abs/2510.11718
590,Continuous Thought Machines,2,42,43.5,43,"May 8, 2025",12,1490,https://arxiv.org/abs/2505.05522
591,MMaDA: Multimodal Large Diffusion Language Models,1,37,37,37,"May 21, 2025",94,1230,https://arxiv.org/abs/2505.15809
592,"Cognitive Kernel-Pro: A Framework for Deep Research Agents and Agent
Foundation Models Training",4,46,47.5,47,"Aug 1, 2025",91,388,https://arxiv.org/abs/2508.00414
593,"DrugReasoner: Interpretable Drug Approval Prediction with a
Reasoning-augmented Language Model",2,44,44,44,"Aug 26, 2025",10,2,https://arxiv.org/abs/2508.18579
594,TikZero: Zero-Shot Text-Guided Graphics Program Synthesis,2,44,44,44,"Mar 14, 2025",3,1530,https://arxiv.org/abs/2503.11509
595,VideoNSA: Native Sparse Attention Scales Video Understanding,2,42,44,44,"Oct 2, 2025",8,29,https://arxiv.org/abs/2510.02295
596,OmniNWM: Omniscient Driving Navigation World Models,2,43,44,44,"Oct 21, 2025",6,48,https://arxiv.org/abs/2510.18313
597,EVTAR: End-to-End Try on with Additional Unpaired Visual Reference,2,43,44,44,"Nov 2, 2025",4,17,https://arxiv.org/abs/2511.00956
598,P1: Mastering Physics Olympiads with Reinforcement Learning,2,43,44,44,"Nov 17, 2025",106,45,https://arxiv.org/abs/2511.13612
599,∇NABLA: Neighborhood Adaptive Block-Level Attention,1,38,38,38,"Jul 17, 2025",85,10,https://arxiv.org/abs/2507.13546
600,"Evolving Language Models without Labels: Majority Drives Selection,
Novelty Promotes Variation",3,41,46.67,49,"Sep 18, 2025",29,19,https://arxiv.org/abs/2509.15194
601,YuE: Scaling Open Foundation Models for Long-Form Music Generation,2,40,44.5,44,"Mar 11, 2025",69,5520,https://arxiv.org/abs/2503.08638
602,LightsOut: Diffusion-based Outpainting for Enhanced Lens Flare Removal,3,41,46.67,49,"Oct 17, 2025",20,10,https://arxiv.org/abs/2510.15868
603,"ToolScope: An Agentic Framework for Vision-Guided and Long-Horizon Tool
Use",5,47,48.4,49,"Oct 31, 2025",20,15,https://arxiv.org/abs/2510.27363
604,Drag-and-Drop LLMs: Zero-Shot Prompt-to-Weights,1,39,39,39,"Jun 19, 2025",122,58,https://arxiv.org/abs/2506.16406
605,"DCReg: Decoupled Characterization for Efficient Degenerate LiDAR
Registration",3,41,47,50,"Sep 8, 2025",1,58,https://arxiv.org/abs/2509.06285
606,"VoiceAssistant-Eval: Benchmarking AI Assistants across Listening,
Speaking, and Viewing",1,39,39,39,"Sep 26, 2025",19,4,https://arxiv.org/abs/2509.22651
607,"Beyond the Exploration-Exploitation Trade-off: A Hidden State Approach
for LLM Reasoning in RLVR",3,46,47,47,"Sep 28, 2025",38,7,https://arxiv.org/abs/2509.23808
608,"RAPO++: Cross-Stage Prompt Optimization for Text-to-Video Generation via
Data Alignment and Test-Time Scaling",2,40,45,45,"Oct 23, 2025",11,104,https://arxiv.org/abs/2510.20206
609,Scaling Language-Centric Omnimodal Representation Learning,1,40,40,40,"Oct 13, 2025",52,11,https://arxiv.org/abs/2510.11693
610,"ImagerySearch: Adaptive Test-Time Search for Video Generation Beyond
Semantic Dependency Constraints",2,43,45.5,45,"Oct 16, 2025",47,47,https://arxiv.org/abs/2510.14847
611,"Automatic Synthetic Data and Fine-grained Adaptive Feature Alignment for
Composed Person Retrieval",2,46,46,46,"Nov 25, 2023",1,49,https://arxiv.org/abs/2311.16515
612,Character Mixing for Video Generation,2,42,46,46,"Oct 6, 2025",5,52,https://arxiv.org/abs/2510.05093
613,"TIR-Bench: A Comprehensive Benchmark for Agentic Thinking-with-Images
Reasoning",2,46,46,46,"Nov 3, 2025",12,14,https://arxiv.org/abs/2511.01833
614,"Trinity-RFT: A General-Purpose and Unified Framework for Reinforcement
Fine-Tuning of Large Language Models",3,48,48,48,"May 23, 2025",9,269,https://arxiv.org/abs/2505.17826
615,Trainable Dynamic Mask Sparse Attention,3,46,48,49,"Aug 4, 2025",17,366,https://arxiv.org/abs/2508.02124
616,"FlashAdventure: A Benchmark for GUI Agents Solving Full Story Arcs in
Diverse Adventure Games",2,44,46.5,46,"Sep 1, 2025",8,7,https://arxiv.org/abs/2509.01052
617,HoloScene: Simulation-Ready Interactive 3D Worlds from a Single Video,1,42,42,42,"Oct 7, 2025",5,5,https://arxiv.org/abs/2510.05560
618,"SIU3R: Simultaneous Scene Understanding and 3D Reconstruction Beyond
Feature Alignment",2,46,46.5,46,"Jul 3, 2025",2,80,https://arxiv.org/abs/2507.02705
619,"Learning on the Job: Test-Time Curricula for Targeted Reinforcement
Learning",1,43,43,43,"Oct 6, 2025",1,3,https://arxiv.org/abs/2510.04786
620,"Discrete Diffusion Models with MLLMs for Unified Medical Multimodal
Generation",1,43,43,43,"Oct 7, 2025",4,4,https://arxiv.org/abs/2510.06131
621,Reasoning in Space via Grounding in the World,3,45,48.33,50,"Oct 15, 2025",13,16,https://arxiv.org/abs/2510.13800
622,First Frame Is the Place to Go for Video Content Customization,2,45,47,47,"Nov 19, 2025",45,36,https://arxiv.org/abs/2511.15700
623,Thinking-while-Generating: Interleaving Textual Reasoning throughout Visual Generation,2,44,47,47,"Nov 20, 2025",12,43,https://arxiv.org/abs/2511.16671
624,Computer-Use Agents as Judges for Generative User Interface,2,45,47,47,"Nov 19, 2025",48,24,https://arxiv.org/abs/2511.15567
625,"Graph2Eval: Automatic Multimodal Task Generation for Agents via
Knowledge Graphs",1,44,44,44,"Oct 1, 2025",1,5,https://arxiv.org/abs/2510.00507
626,"MathCanvas: Intrinsic Visual Chain-of-Thought for Multimodal
Mathematical Reasoning",2,47,47.5,47,"Oct 16, 2025",11,5,https://arxiv.org/abs/2510.14958
627,Grounding Computer Use Agents on Human Demonstrations,4,47,49.25,50,"Nov 10, 2025",18,15,https://arxiv.org/abs/2511.07332
628,"Feedback-Driven Tool-Use Improvements in Large Language Models via
Automated Build Environments",2,46,48,48,"Aug 12, 2025",12,17,https://arxiv.org/abs/2508.08791
629,Agentic Reinforced Policy Optimization,4,48,49.5,50,"Jul 26, 2025",154,691,https://arxiv.org/abs/2507.19849
630,"SAM2Act: Integrating Visual Foundation Model with A Memory Architecture
for Robotic Manipulation",3,48,49,49,"Jan 30, 2025",2,131,https://arxiv.org/abs/2501.18564
631,Generative AI for Autonomous Driving: Frontiers and Opportunities,2,48,48,48,"May 13, 2025",1,176,https://arxiv.org/abs/2505.08854
632,CHARM: Control-point-based 3D Anime Hairstyle Auto-Regressive Modeling,2,46,48,48,"Sep 25, 2025",11,9,https://arxiv.org/abs/2509.21114
633,DeepPrune: Parallel Scaling without Inter-trace Redundancy,1,45,45,45,"Oct 9, 2025",21,9,https://arxiv.org/abs/2510.08483
634,UI2Code^N: A Visual Language Model for Test-Time Scalable Interactive UI-to-Code Generation,2,47,48,48,"Nov 11, 2025",28,18,https://arxiv.org/abs/2511.08195
635,WorldVLA: Towards Autoregressive Action World Model,1,45,45,45,"Jun 26, 2025",40,669,https://arxiv.org/abs/2506.21539
636,"Gaze into the Heart: A Multi-View Video Dataset for rPPG and Health
Biomarkers Estimation",1,46,46,46,"Aug 25, 2025",11,3,https://arxiv.org/abs/2508.17924
637,Uncertainty-Aware Testing-Time Optimization for 3D Human Pose Estimation,1,46,46,46,"Feb 4, 2024",1,5,https://arxiv.org/abs/2402.02339
638,"AssetOpsBench: Benchmarking AI Agents for Task Automation in Industrial
Asset Operations and Maintenance",1,46,46,46,"Jun 4, 2025",14,252,https://arxiv.org/abs/2506.03828
639,"Entropy Regularizing Activation: Boosting Continuous Control, Large
Language Models, and Image Classification with Activation as Entropy
Constraints",1,46,46,46,"Oct 9, 2025",5,7,https://arxiv.org/abs/2510.08549
640,"UniPixel: Unified Object Referring and Segmentation for Pixel-Level
Visual Reasoning",1,47,47,47,"Sep 22, 2025",3,40,https://arxiv.org/abs/2509.18094
641,"GeoSVR: Taming Sparse Voxels for Geometrically Accurate Surface
Reconstruction",1,47,47,47,"Sep 22, 2025",2,65,https://arxiv.org/abs/2509.18090
642,PIPer: On-Device Environment Setup via Online Reinforcement Learning,1,47,47,47,"Sep 29, 2025",23,4,https://arxiv.org/abs/2509.25455
643,"Factuality Matters: When Image Generation and Editing Meet Structured
Visuals",1,47,47,47,"Oct 6, 2025",12,4,https://arxiv.org/abs/2510.05091
644,"BIRD-INTERACT: Re-imagining Text-to-SQL Evaluation for Large Language
Models via Lens of Dynamic Interactions",1,47,47,47,"Oct 6, 2025",4,260,https://arxiv.org/abs/2510.05318
645,Instant4D: 4D Gaussian Splatting in Minutes,1,47,47,47,"Oct 1, 2025",5,58,https://arxiv.org/abs/2510.01119
646,"ASTRA: Autonomous Spatial-Temporal Red-teaming for AI Software
Assistants",2,49,49.5,49,"Aug 5, 2025",8,37,https://arxiv.org/abs/2508.03936
647,"InfiGUI-G1: Advancing GUI Grounding with Adaptive Exploration Policy
Optimization",2,49,49.5,49,"Aug 7, 2025",25,56,https://arxiv.org/abs/2508.05731
648,"NER Retriever: Zero-Shot Named Entity Retrieval with Type-Aware
Embeddings",3,50,50,50,"Sep 4, 2025",19,19,https://arxiv.org/abs/2509.04011
649,Where LLM Agents Fail and How They can Learn From Failures,1,48,48,48,"Sep 29, 2025",11,8,https://arxiv.org/abs/2509.25370
650,"MarS: a Financial Market Simulation Engine Powered by Generative
Foundation Model",1,48,48,48,"Sep 4, 2024",1,1560,https://arxiv.org/abs/2409.07486
651,"Audio-visual Controlled Video Diffusion with Masked Selective State
Spaces Modeling for Natural Talking Head Generation",1,49,49,49,"Apr 3, 2025",49,344,https://arxiv.org/abs/2504.02542
652,"SynParaSpeech: Automated Synthesis of Paralinguistic Datasets for Speech
Generation and Understanding",1,49,49,49,"Sep 18, 2025",1,37,https://arxiv.org/abs/2509.14946
653,"Beyond Log Likelihood: Probability-Based Objectives for Supervised
Fine-Tuning across the Model Capability Continuum",1,49,49,49,"Oct 1, 2025",7,3,https://arxiv.org/abs/2510.00526
654,"SLA: Beyond Sparsity in Diffusion Transformers via Fine-Tunable
Sparse-Linear Attention",2,50,50,50,"Sep 28, 2025",107,38,https://arxiv.org/abs/2509.24006
655,Scaling Large-Language-Model-based Multi-Agent Collaboration,2,50,50,50,"Jun 11, 2024",3,27600,https://arxiv.org/abs/2406.07155
656,A Survey of Vibe Coding with Large Language Models,1,49,49,49,"Oct 14, 2025",21,7,https://arxiv.org/abs/2510.12399
657,Gaussian Splatting with Discretized SDF for Relightable Assets,1,50,50,50,"Jul 21, 2025",19,61,https://arxiv.org/abs/2507.15629
658,Flow-GRPO: Training Flow Matching Models via Online RL,1,50,50,50,"May 8, 2025",83,1120,https://arxiv.org/abs/2505.05470
659,One-Minute Video Generation with Test-Time Training,1,50,50,50,"Apr 7, 2025",109,2090,https://arxiv.org/abs/2504.05298
660,LongCat-Flash-Thinking Technical Report,1,50,50,50,"Sep 23, 2025",2,195,https://arxiv.org/abs/2509.18883
661,"Efficient Audio-Visual Speech Separation with Discrete Lip Semantics and
Multi-Scale Global-Local Attention",1,50,50,50,"Sep 28, 2025",13,22,https://arxiv.org/abs/2509.23610
662,EVODiff: Entropy-aware Variance Optimized Diffusion Inference,1,50,50,50,"Sep 30, 2025",1,9,https://arxiv.org/abs/2509.26096
663,"GLiNER2: An Efficient Multi-Task Information Extraction System with
Schema-Driven Interface",1,50,50,50,"Jul 24, 2025",28,186,https://arxiv.org/abs/2507.18546