| rank (int64) | name (string) | times_trended (int64) | best_rank (int64) | avg_rank (float64) | median_rank (int64) | publish_date (string) | max_upvotes (int64) | max_github_stars (int64) | arxiv_link (string) |
|---|---|---|---|---|---|---|---|---|---|
| 501 | Nav-R1: Reasoning and Navigation in Embodied Scenes | 6 | 39 | 43.33 | 43 | Sep 13, 2025 | 4 | 21 | https://arxiv.org/abs/2509.10884 |
| 502 | EPO: Entropy-regularized Policy Optimization for LLM Agents Reinforcement Learning | 4 | 23 | 39.5 | 44 | Sep 26, 2025 | 113 | 20 | https://arxiv.org/abs/2509.22576 |
| 503 | PUSA V1.0: Surpassing Wan-I2V with $500 Training Cost by Vectorized Timestep Adaptation | 4 | 28 | 39.75 | 42 | Jul 22, 2025 | 9 | 545 | https://arxiv.org/abs/2507.16116 |
| 504 | Efficient Multi-turn RL for GUI Agents via Decoupled Training and Adaptive Data Curation | 3 | 32 | 36 | 35 | Sep 28, 2025 | 7 | 9 | https://arxiv.org/abs/2509.23866 |
| 505 | OBS-Diff: Accurate Pruning For Diffusion Models in One-Shot | 2 | 26 | 28.5 | 28 | Oct 8, 2025 | 14 | 24 | https://arxiv.org/abs/2510.06751 |
| 506 | GUI-G^2: Gaussian Reward Modeling for GUI Grounding | 2 | 24 | 29 | 29 | Jul 21, 2025 | 118 | 138 | https://arxiv.org/abs/2507.15846 |
| 507 | GIR-Bench: Versatile Benchmark for Generating Images with Reasoning | 3 | 29 | 37 | 39 | Oct 13, 2025 | 16 | 23 | https://arxiv.org/abs/2510.11026 |
| 508 | Beyond English: Toward Inclusive and Scalable Multilingual Machine Translation with LLMs | 7 | 42 | 45 | 46 | Nov 10, 2025 | 31 | 92 | https://arxiv.org/abs/2511.07003 |
| 509 | Embodied-R1: Reinforced Embodied Reasoning for General Robotic Manipulation | 4 | 31 | 40.75 | 42 | Aug 19, 2025 | 12 | 24 | https://arxiv.org/abs/2508.13998 |
| 510 | SwiftEdit: Lightning Fast Text-Guided Image Editing via One-Step Diffusion | 3 | 36 | 37.33 | 36 | Dec 5, 2024 | 41 | 7 | https://arxiv.org/abs/2412.04301 |
| 511 | CLEAR: Error Analysis via LLM-as-a-Judge Made Easy | 1 | 11 | 11 | 11 | Jul 24, 2025 | 8 | 10 | https://arxiv.org/abs/2507.18392 |
| 512 | MindSearch: Mimicking Human Minds Elicits Deep AI Searcher | 7 | 43 | 45.29 | 46 | Jul 29, 2024 | 44 | 6,580 | https://arxiv.org/abs/2407.20183 |
| 513 | dParallel: Learnable Parallel Decoding for dLLMs | 5 | 40 | 43 | 42 | Sep 30, 2025 | 17 | 16 | https://arxiv.org/abs/2509.26488 |
| 514 | ReCogDrive: A Reinforced Cognitive Framework for End-to-End Autonomous Driving | 8 | 39 | 46.12 | 47 | Jun 9, 2025 | 1 | 183 | https://arxiv.org/abs/2506.08052 |
| 515 | AWorld: Orchestrating the Training Recipe for Agentic AI | 7 | 36 | 45.43 | 46 | Aug 28, 2025 | 37 | 692 | https://arxiv.org/abs/2508.20404 |
| 516 | Reverse-Engineered Reasoning for Open-Ended Generation | 4 | 38 | 41.25 | 40 | Sep 7, 2025 | 127 | 34 | https://arxiv.org/abs/2509.06160 |
| 517 | MiniCPM4: Ultra-Efficient LLMs on End Devices | 7 | 41 | 45.57 | 47 | Jun 9, 2025 | 90 | 8,300 | https://arxiv.org/abs/2506.07900 |
| 518 | SpaceVista: All-Scale Visual Spatial Reasoning from mm to km | 2 | 30 | 32 | 32 | Oct 10, 2025 | 16 | 22 | https://arxiv.org/abs/2510.09606 |
| 519 | Vlaser: Vision-Language-Action Model with Synergistic Embodied Reasoning | 3 | 30 | 38.67 | 41 | Oct 13, 2025 | 14 | 15 | https://arxiv.org/abs/2510.11027 |
| 520 | Generating an Image From 1,000 Words: Enhancing Text-to-Image With Structured Captions | 5 | 38 | 43.6 | 41 | Nov 10, 2025 | 19 | 248 | https://arxiv.org/abs/2511.06876 |
| 521 | VideoGen-of-Thought: A Collaborative Framework for Multi-Shot Video Generation | 4 | 38 | 42 | 41 | Dec 3, 2024 | 60 | 42 | https://arxiv.org/abs/2412.02259 |
| 522 | Spatial Forcing: Implicit Spatial Representation Alignment for Vision-language-action Model | 4 | 39 | 42.25 | 41 | Oct 14, 2025 | 134 | 43 | https://arxiv.org/abs/2510.12276 |
| 523 | MGM-Omni: Scaling Omni LLMs to Personalized Long-Horizon Speech | 5 | 40 | 44.2 | 42 | Sep 29, 2025 | 11 | 133 | https://arxiv.org/abs/2509.25131 |
| 524 | Building a Foundational Guardrail for General Agentic Systems via Synthetic Data | 3 | 33 | 39.67 | 42 | Oct 10, 2025 | 21 | 26 | https://arxiv.org/abs/2510.09781 |
| 525 | Mantis: A Versatile Vision-Language-Action Model with Disentangled Visual Foresight | 2 | 32 | 34 | 34 | Nov 20, 2025 | 10 | 18 | https://arxiv.org/abs/2511.16175 |
| 526 | Franca: Nested Matryoshka Clustering for Scalable Visual Representation Learning | 4 | 28 | 42.75 | 47 | Jul 18, 2025 | 27 | 189 | https://arxiv.org/abs/2507.14137 |
| 527 | How Far are VLMs from Visual Spatial Intelligence? A Benchmark-Driven Perspective | 3 | 37 | 40 | 37 | Sep 23, 2025 | 22 | 10 | https://arxiv.org/abs/2509.18905 |
| 528 | Regression Language Models for Code | 6 | 40 | 45.5 | 47 | Sep 30, 2025 | 13 | 257 | https://arxiv.org/abs/2509.26476 |
| 529 | CoIRL-AD: Collaborative-Competitive Imitation-Reinforcement Learning in Latent World Models for Autonomous Driving | 3 | 36 | 40 | 42 | Oct 14, 2025 | 4 | 10 | https://arxiv.org/abs/2510.12560 |
| 530 | Scaling Text-Rich Image Understanding via Code-Guided Synthetic Multimodal Data Generation | 2 | 35 | 35 | 35 | Feb 20, 2025 | 13 | 103 | https://arxiv.org/abs/2502.14846 |
| 531 | StreamDiffusion: A Pipeline-level Solution for Real-time Interactive Generation | 2 | 34 | 35 | 35 | Dec 19, 2023 | 73 | 10,400 | https://arxiv.org/abs/2312.12491 |
| 532 | LLMs Can Get "Brain Rot"! | 4 | 38 | 43 | 42 | Oct 15, 2025 | 19 | 72 | https://arxiv.org/abs/2510.13928 |
| 533 | Reinforcement Learning Optimization for Large-Scale Learning: An Efficient and User-Friendly Scaling Library | 5 | 40 | 44.6 | 45 | Jun 6, 2025 | 7 | 2,270 | https://arxiv.org/abs/2506.06122 |
| 534 | ConsistEdit: Highly Consistent and Precise Training-free Visual Editing | 2 | 35 | 35.5 | 35 | Oct 20, 2025 | 11 | 25 | https://arxiv.org/abs/2510.17803 |
| 535 | TalkVid: A Large-Scale Diversified Dataset for Audio-Driven Talking Head Synthesis | 4 | 34 | 43.5 | 45 | Aug 19, 2025 | 16 | 72 | https://arxiv.org/abs/2508.13618 |
| 536 | Energy-Based Transformers are Scalable Learners and Thinkers | 8 | 44 | 47.25 | 47 | Jul 2, 2025 | 65 | 463 | https://arxiv.org/abs/2507.02092 |
| 537 | PromptCoT 2.0: Scaling Prompt Synthesis for Large Language Model Reasoning | 3 | 28 | 41 | 47 | Sep 24, 2025 | 29 | 93 | https://arxiv.org/abs/2509.19894 |
| 538 | Multi-Agent Tool-Integrated Policy Optimization | 2 | 35 | 36 | 36 | Oct 6, 2025 | 19 | 20 | https://arxiv.org/abs/2510.04678 |
| 539 | VisionThink: Smart and Efficient Vision Language Model via Reinforcement Learning | 3 | 33 | 41.33 | 45 | Jul 17, 2025 | 69 | 330 | https://arxiv.org/abs/2507.13348 |
| 540 | Rectified Point Flow: Generic Point Cloud Pose Estimation | 3 | 34 | 41.33 | 44 | Jun 5, 2025 | 3 | 123 | https://arxiv.org/abs/2506.05282 |
| 541 | Equilibrium Matching: Generative Modeling with Implicit Energy-Based Models | 4 | 41 | 43.75 | 44 | Oct 2, 2025 | 5 | 77 | https://arxiv.org/abs/2510.02300 |
| 542 | A decoder-only foundation model for time-series forecasting | 6 | 44 | 46.17 | 46 | Oct 14, 2023 | 6 | 7,060 | https://arxiv.org/abs/2310.10688 |
| 543 | MoDA: Multi-modal Diffusion Architecture for Talking Head Generation | 9 | 46 | 47.89 | 48 | Jul 4, 2025 | 2 | 140 | https://arxiv.org/abs/2507.03256 |
| 544 | Gradient-Attention Guided Dual-Masking Synergetic Framework for Robust Text-based Person Retrieval | 3 | 30 | 41.67 | 45 | Sep 11, 2025 | 6 | 10 | https://arxiv.org/abs/2509.09118 |
| 545 | Streaming 4D Visual Geometry Transformer | 1 | 25 | 25 | 25 | Jul 15, 2025 | 10 | 454 | https://arxiv.org/abs/2507.11539 |
| 546 | s3: You Don't Need That Much Data to Train a Search Agent via RL | 5 | 41 | 45.8 | 47 | May 20, 2025 | 18 | 564 | https://arxiv.org/abs/2505.14146 |
| 547 | MesaTask: Towards Task-Driven Tabletop Scene Generation via 3D Spatial Reasoning | 2 | 36 | 38 | 38 | Sep 26, 2025 | 27 | 24 | https://arxiv.org/abs/2509.22281 |
| 548 | Ming-UniVision: Joint Image Understanding and Generation with a Unified Continuous Tokenizer | 3 | 38 | 42.33 | 42 | Oct 8, 2025 | 63 | 72 | https://arxiv.org/abs/2510.06590 |
| 549 | iMontage: Unified, Versatile, Highly Dynamic Many-to-many Image Generation | 2 | 34 | 38 | 38 | Nov 25, 2025 | 30 | 120 | https://arxiv.org/abs/2511.20635 |
| 550 | ThinkSound: Chain-of-Thought Reasoning in Multimodal Large Language Models for Audio Generation and Editing | 2 | 38 | 38.5 | 38 | Jun 26, 2025 | 7 | 876 | https://arxiv.org/abs/2506.21448 |
| 551 | Q-Sched: Pushing the Boundaries of Few-Step Diffusion Models with Quantization-Aware Scheduling | 2 | 30 | 38.5 | 38 | Sep 1, 2025 | 5 | 6 | https://arxiv.org/abs/2509.01624 |
| 552 | GAS: Improving Discretization of Diffusion ODEs via Generalized Adversarial Solver | 2 | 30 | 38.5 | 38 | Oct 20, 2025 | 2 | 10 | https://arxiv.org/abs/2510.17699 |
| 553 | MUR: Momentum Uncertainty guided Reasoning for Large Language Models | 3 | 40 | 43 | 44 | Jul 20, 2025 | 36 | 32 | https://arxiv.org/abs/2507.14958 |
| 554 | Scaling Code-Assisted Chain-of-Thoughts and Instructions for Model Reasoning | 3 | 39 | 43 | 42 | Oct 5, 2025 | 18 | 6 | https://arxiv.org/abs/2510.04081 |
| 555 | AniMaker: Automated Multi-Agent Animated Storytelling with MCTS-Driven Clip Generation | 8 | 46 | 48.12 | 48 | Jun 12, 2025 | 37 | 173 | https://arxiv.org/abs/2506.10540 |
| 556 | D2E: Scaling Vision-Action Pretraining on Desktop Data for Transfer to Embodied AI | 2 | 39 | 39.5 | 39 | Oct 7, 2025 | 101 | 22 | https://arxiv.org/abs/2510.05684 |
| 557 | V-ReasonBench: Toward Unified Reasoning Benchmark Suite for Video Generation Models | 3 | 38 | 43.33 | 43 | Nov 20, 2025 | 48 | 15 | https://arxiv.org/abs/2511.16668 |
| 558 | DetailFlow: 1D Coarse-to-Fine Autoregressive Image Generation via Next-Detail Prediction | 1 | 29 | 29 | 29 | May 27, 2025 | 16 | 144 | https://arxiv.org/abs/2505.21473 |
| 559 | MM-BrowseComp: A Comprehensive Benchmark for Multimodal Browsing Agents | 4 | 40 | 45.5 | 46 | Aug 14, 2025 | 13 | 10 | https://arxiv.org/abs/2508.13186 |
| 560 | Mem4Nav: Boosting Vision-and-Language Navigation in Urban Environments with a Hierarchical Spatial-Cognition Long-Short Memory System | 4 | 41 | 45.5 | 46 | Jun 24, 2025 | 3 | 89 | https://arxiv.org/abs/2506.19433 |
| 561 | Qwen3 Embedding: Advancing Text Embedding and Reranking Through Foundation Models | 5 | 44 | 46.8 | 46 | Jun 5, 2025 | 72 | 1,390 | https://arxiv.org/abs/2506.05176 |
| 562 | Part II: ROLL Flash -- Accelerating RLVR and Agentic Training with Asynchrony | 4 | 44 | 45.75 | 45 | Oct 13, 2025 | 15 | 2,270 | https://arxiv.org/abs/2510.11345 |
| 563 | Agent-R1: Training Powerful LLM Agents with End-to-End Reinforcement Learning | 4 | 44 | 45.75 | 46 | Nov 18, 2025 | 15 | 951 | https://arxiv.org/abs/2511.14460 |
| 564 | Deep Researcher with Test-Time Diffusion | 3 | 36 | 44.33 | 47 | Jul 21, 2025 | 63 | 2,910 | https://arxiv.org/abs/2507.16075 |
| 565 | RoboOmni: Proactive Robot Manipulation in Omni-modal Context | 2 | 40 | 41 | 41 | Oct 27, 2025 | 52 | 27 | https://arxiv.org/abs/2510.23763 |
| 566 | LiteAttention: A Temporal Sparse Attention for Diffusion Transformers | 3 | 42 | 44.33 | 43 | Nov 14, 2025 | 24 | 23 | https://arxiv.org/abs/2511.11062 |
| 567 | SEAgent: Self-Evolving Computer Use Agent with Autonomous Learning from Experience | 4 | 45 | 46.25 | 46 | Aug 6, 2025 | 46 | 141 | https://arxiv.org/abs/2508.04700 |
| 568 | Quantile Advantage Estimation for Entropy-Safe Reasoning | 2 | 35 | 41.5 | 41 | Sep 26, 2025 | 100 | 8 | https://arxiv.org/abs/2509.22611 |
| 569 | TensorBLEU: Vectorized GPU-based BLEU Score Implementation for Per-Sentence In-Training Evaluation | 4 | 40 | 46.25 | 48 | Oct 7, 2025 | 7 | 13 | https://arxiv.org/abs/2510.05485 |
| 570 | MATRIX: Mask Track Alignment for Interaction-aware Video Generation | 2 | 37 | 41.5 | 41 | Oct 8, 2025 | 29 | 22 | https://arxiv.org/abs/2510.07310 |
| 571 | HSCodeComp: A Realistic and Expert-level Benchmark for Deep Search Agents in Hierarchical Rule Application | 4 | 43 | 46.25 | 46 | Oct 22, 2025 | 26 | 85 | https://arxiv.org/abs/2510.19631 |
| 572 | VACE: All-in-One Video Creation and Editing | 3 | 43 | 45 | 44 | Mar 10, 2025 | 54 | 3,000 | https://arxiv.org/abs/2503.07598 |
| 573 | Open Deep Search: Democratizing Search with Open-source Reasoning Agents | 4 | 45 | 46.5 | 46 | Mar 26, 2025 | 48 | 3,620 | https://arxiv.org/abs/2503.20201 |
| 574 | TUN3D: Towards Real-World Scene Understanding from Unposed Images | 1 | 33 | 33 | 33 | Sep 23, 2025 | 12 | 11 | https://arxiv.org/abs/2509.21388 |
| 575 | Efficient Guided Generation for Large Language Models | 3 | 41 | 45 | 45 | Jul 19, 2023 | 8 | 12,900 | https://arxiv.org/abs/2307.09702 |
| 576 | UltraFlux: Data-Model Co-Design for High-quality Native 4K Text-to-Image Generation across Diverse Aspect Ratios | 2 | 36 | 42 | 42 | Nov 22, 2025 | 34 | 42 | https://arxiv.org/abs/2511.18050 |
| 577 | STARFlow-V: End-to-End Video Generative Modeling with Normalizing Flow | 6 | 45 | 48 | 48 | Nov 25, 2025 | 16 | 40 | https://arxiv.org/abs/2511.20462 |
| 578 | Zebra-CoT: A Dataset for Interleaved Vision Language Reasoning | 1 | 34 | 34 | 34 | Jul 22, 2025 | 28 | 42 | https://arxiv.org/abs/2507.16746 |
| 579 | From reactive to cognitive: brain-inspired spatial intelligence for embodied agents | 2 | 42 | 42.5 | 42 | Aug 24, 2025 | 3 | 18 | https://arxiv.org/abs/2508.17198 |
| 580 | Search-R1: Training LLMs to Reason and Leverage Search Engines with Reinforcement Learning | 4 | 46 | 46.75 | 47 | Mar 12, 2025 | 35 | 3,130 | https://arxiv.org/abs/2503.09516 |
| 581 | Retrieval-Augmented Generation with Hierarchical Knowledge | 1 | 34 | 34 | 34 | Mar 13, 2025 | 2 | 359 | https://arxiv.org/abs/2503.10150 |
| 582 | REASONING GYM: Reasoning Environments for Reinforcement Learning with Verifiable Rewards | 1 | 34 | 34 | 34 | May 30, 2025 | 72 | 1,130 | https://arxiv.org/abs/2505.24760 |
| 583 | Chem-R: Learning to Reason as a Chemist | 2 | 35 | 42.5 | 42 | Oct 19, 2025 | 45 | 9 | https://arxiv.org/abs/2510.16880 |
| 584 | Hierarchical Budget Policy Optimization for Adaptive Reasoning | 1 | 35 | 35 | 35 | Jul 21, 2025 | 12 | 14 | https://arxiv.org/abs/2507.15844 |
| 585 | Qwen2.5-Omni Technical Report | 4 | 45 | 47 | 47 | Mar 26, 2025 | 165 | 3,510 | https://arxiv.org/abs/2503.20215 |
| 586 | Variational Reasoning for Language Models | 2 | 36 | 43 | 43 | Sep 26, 2025 | 57 | 34 | https://arxiv.org/abs/2509.22637 |
| 587 | OpenVoice: Versatile Instant Voice Cloning | 6 | 46 | 48.33 | 49 | Dec 3, 2023 | 3 | 35,300 | https://arxiv.org/abs/2312.01479 |
| 588 | BrowseComp-Plus: A More Fair and Transparent Evaluation Benchmark of Deep-Research Agent | 3 | 43 | 46 | 45 | Aug 8, 2025 | 35 | 49 | https://arxiv.org/abs/2508.06600 |
| 589 | CodePlot-CoT: Mathematical Visual Reasoning by Thinking with Code-Driven Images | 3 | 38 | 46 | 50 | Oct 13, 2025 | 12 | 22 | https://arxiv.org/abs/2510.11718 |
| 590 | Continuous Thought Machines | 2 | 42 | 43.5 | 43 | May 8, 2025 | 12 | 1,490 | https://arxiv.org/abs/2505.05522 |
| 591 | MMaDA: Multimodal Large Diffusion Language Models | 1 | 37 | 37 | 37 | May 21, 2025 | 94 | 1,230 | https://arxiv.org/abs/2505.15809 |
| 592 | Cognitive Kernel-Pro: A Framework for Deep Research Agents and Agent Foundation Models Training | 4 | 46 | 47.5 | 47 | Aug 1, 2025 | 91 | 388 | https://arxiv.org/abs/2508.00414 |
| 593 | DrugReasoner: Interpretable Drug Approval Prediction with a Reasoning-augmented Language Model | 2 | 44 | 44 | 44 | Aug 26, 2025 | 10 | 2 | https://arxiv.org/abs/2508.18579 |
| 594 | TikZero: Zero-Shot Text-Guided Graphics Program Synthesis | 2 | 44 | 44 | 44 | Mar 14, 2025 | 3 | 1,530 | https://arxiv.org/abs/2503.11509 |
| 595 | VideoNSA: Native Sparse Attention Scales Video Understanding | 2 | 42 | 44 | 44 | Oct 2, 2025 | 8 | 29 | https://arxiv.org/abs/2510.02295 |
| 596 | OmniNWM: Omniscient Driving Navigation World Models | 2 | 43 | 44 | 44 | Oct 21, 2025 | 6 | 48 | https://arxiv.org/abs/2510.18313 |
| 597 | EVTAR: End-to-End Try on with Additional Unpaired Visual Reference | 2 | 43 | 44 | 44 | Nov 2, 2025 | 4 | 17 | https://arxiv.org/abs/2511.00956 |
| 598 | P1: Mastering Physics Olympiads with Reinforcement Learning | 2 | 43 | 44 | 44 | Nov 17, 2025 | 106 | 45 | https://arxiv.org/abs/2511.13612 |
| 599 | ∇NABLA: Neighborhood Adaptive Block-Level Attention | 1 | 38 | 38 | 38 | Jul 17, 2025 | 85 | 10 | https://arxiv.org/abs/2507.13546 |
| 600 | Evolving Language Models without Labels: Majority Drives Selection, Novelty Promotes Variation | 3 | 41 | 46.67 | 49 | Sep 18, 2025 | 29 | 19 | https://arxiv.org/abs/2509.15194 |