rank | name | times_trended | best_rank | avg_rank | median_rank | publish_date | max_upvotes | max_github_stars | arxiv_link |
|---|---|---|---|---|---|---|---|---|---|
1 | LlamaFactory: Unified Efficient Fine-Tuning of 100+ Language Models | 432 | 2 | 11.28 | 11 | Mar 20, 2024 | 173 | 63,300 | https://arxiv.org/abs/2403.13372 |
2 | DINOv3 | 346 | 1 | 21.02 | 15 | Aug 13, 2025 | 284 | 8,590 | https://arxiv.org/abs/2508.10104 |
3 | Agent Lightning: Train ANY AI Agents with Reinforcement Learning | 325 | 1 | 20.18 | 20 | Aug 5, 2025 | 120 | 9,090 | https://arxiv.org/abs/2508.03680 |
4 | WebDancer: Towards Autonomous Information Seeking Agency | 398 | 3 | 25.85 | 26 | May 28, 2025 | 33 | 17,300 | https://arxiv.org/abs/2505.22648 |
5 | WebShaper: Agentically Data Synthesizing via Information-Seeking Formalization | 402 | 2 | 26.34 | 26 | Jul 20, 2025 | 60 | 17,300 | https://arxiv.org/abs/2507.15061 |
6 | WebWatcher: Breaking New Frontier of Vision-Language Deep Research Agent | 393 | 1 | 25.86 | 26 | Aug 7, 2025 | 139 | 17,300 | https://arxiv.org/abs/2508.05748 |
7 | WebSailor: Navigating Super-human Reasoning for Web Agent | 394 | 2 | 26.03 | 26 | Jul 3, 2025 | 122 | 17,300 | https://arxiv.org/abs/2507.02592 |
8 | InternVL3: Exploring Advanced Training and Test-Time Recipes for Open-Source Multimodal Models | 349 | 8 | 23.07 | 21 | Apr 14, 2025 | 304 | 8,850 | https://arxiv.org/abs/2504.10479 |
9 | AgentScope 1.0: A Developer-Centric Framework for Building Agentic Applications | 290 | 1 | 18.33 | 14 | Aug 22, 2025 | 53 | 14,100 | https://arxiv.org/abs/2508.16279 |
10 | Qwen-Image Technical Report | 302 | 1 | 20.35 | 19 | Aug 4, 2025 | 261 | 6,150 | https://arxiv.org/abs/2508.02324 |
11 | MinerU2.5: A Decoupled Vision-Language Model for Efficient High-Resolution Document Parsing | 179 | 2 | 10.82 | 9 | Sep 26, 2025 | 134 | 49,600 | https://arxiv.org/abs/2509.22186 |
12 | Scaling Agents via Continual Pre-training | 196 | 1 | 18.28 | 17 | Sep 16, 2025 | 115 | 17,300 | https://arxiv.org/abs/2509.13310 |
13 | Easy Dataset: A Unified and Extensible Framework for Synthesizing LLM Fine-Tuning Data from Unstructured Documents | 345 | 13 | 32.46 | 32 | Jul 5, 2025 | 51 | 12,100 | https://arxiv.org/abs/2507.04009 |
14 | WebSailor-V2: Bridging the Chasm to Proprietary Agents via Synthetic Data and Scalable Reinforcement Learning | 201 | 1 | 19.4 | 17 | Sep 16, 2025 | 90 | 17,300 | https://arxiv.org/abs/2509.13305 |
15 | ReSum: Unlocking Long-Horizon Search Intelligence via Context Summarization | 198 | 1 | 18.98 | 16 | Sep 16, 2025 | 78 | 17,300 | https://arxiv.org/abs/2509.13313 |
16 | PC-Agent: A Hierarchical Multi-Agent Collaboration Framework for Complex Task Automation on PC | 222 | 3 | 22.85 | 18 | Feb 20, 2025 | 29 | 6,440 | https://arxiv.org/abs/2502.14282 |
17 | WebWeaver: Structuring Web-Scale Evidence with Dynamic Outlines for Open-Ended Deep Research | 194 | 4 | 19 | 16 | Sep 16, 2025 | 105 | 17,300 | https://arxiv.org/abs/2509.13312 |
18 | Look Before You Leap: A GUI-Critic-R1 Model for Pre-Operative Error Diagnosis in GUI Automation | 220 | 4 | 22.92 | 19 | Jun 5, 2025 | 19 | 6,430 | https://arxiv.org/abs/2506.04614 |
19 | PaddleOCR-VL: Boosting Multilingual Document Parsing via a 0.9B Ultra-Compact Vision-Language Model | 124 | 1 | 3.35 | 3 | Oct 16, 2025 | 98 | 65,400 | https://arxiv.org/abs/2510.14528 |
20 | Mobile-Agent-v3: Foundamental Agents for GUI Automation | 198 | 3 | 21.24 | 17 | Aug 21, 2025 | 64 | 6,440 | https://arxiv.org/abs/2508.15144 |
21 | AgentFly: Fine-tuning LLM Agents without Fine-tuning LLMs | 144 | 1 | 12.94 | 8 | Aug 22, 2025 | 153 | 1,740 | https://arxiv.org/abs/2508.16153 |
22 | FastVLM: Efficient Vision Encoding for Vision Language Models | 162 | 1 | 17.6 | 11 | Dec 17, 2024 | 70 | 6,680 | https://arxiv.org/abs/2412.13303 |
23 | MIRIX: Multi-Agent Memory System for LLM-Based Agents | 272 | 5 | 31.7 | 35 | Jul 10, 2025 | 79 | 3,380 | https://arxiv.org/abs/2507.07957 |
24 | GLM-4.5: Agentic, Reasoning, and Coding (ARC) Foundation Models | 243 | 11 | 30.1 | 30 | Aug 8, 2025 | 185 | 3,000 | https://arxiv.org/abs/2508.06471 |
25 | Prompt Orchestration Markup Language | 153 | 3 | 18.99 | 12 | Aug 19, 2025 | 49 | 4,560 | https://arxiv.org/abs/2508.13948 |
26 | OmniFlatten: An End-to-end GPT Model for Seamless Voice Conversation | 112 | 3 | 9.8 | 9 | Oct 23, 2024 | 5 | 50,300 | https://arxiv.org/abs/2410.17799 |
27 | UI-TARS: Pioneering Automated GUI Interaction with Native Agents | 208 | 4 | 29.12 | 28 | Jan 21, 2025 | 65 | 7,830 | https://arxiv.org/abs/2501.12326 |
28 | A Comprehensive Survey of Self-Evolving AI Agents: A New Paradigm Bridging Foundation Models and Lifelong Agentic Systems | 191 | 5 | 27.24 | 29 | Aug 10, 2025 | 97 | 1,140 | https://arxiv.org/abs/2508.07407 |
29 | Qwen3 Technical Report | 266 | 2 | 34.64 | 34 | May 14, 2025 | 316 | 25,500 | https://arxiv.org/abs/2505.09388 |
30 | The Landscape of Agentic Reinforcement Learning for LLMs: A Survey | 125 | 2 | 17.14 | 15 | Sep 2, 2025 | 214 | 941 | https://arxiv.org/abs/2509.02547 |
31 | DeepAnalyze: Agentic Large Language Models for Autonomous Data Science | 115 | 2 | 14.33 | 14 | Oct 19, 2025 | 102 | 2,630 | https://arxiv.org/abs/2510.16872 |
32 | TradingAgents: Multi-Agents LLM Financial Trading Framework | 111 | 6 | 13.67 | 13 | Dec 28, 2024 | 14 | 25,800 | https://arxiv.org/abs/2412.20138 |
33 | MinerU: An Open-Source Solution for Precise Document Content Extraction | 112 | 3 | 14.65 | 16 | Sep 27, 2024 | 32 | 49,600 | https://arxiv.org/abs/2409.18839 |
34 | IndexTTS: An Industrial-Level Controllable and Efficient Zero-Shot Text-To-Speech System | 112 | 7 | 14.97 | 14 | Feb 8, 2025 | 5 | 16,000 | https://arxiv.org/abs/2502.05512 |
35 | StableAvatar: Infinite-Length Audio-Driven Avatar Video Generation | 137 | 1 | 21.63 | 20 | Aug 11, 2025 | 27 | 1,020 | https://arxiv.org/abs/2508.08248 |
36 | Paper2Agent: Reimagining Research Papers As Interactive and Reliable AI Agents | 135 | 5 | 22.01 | 20 | Sep 8, 2025 | 41 | 1,660 | https://arxiv.org/abs/2509.06917 |
37 | PaddleOCR 3.0 Technical Report | 100 | 1 | 12.08 | 11 | Jul 8, 2025 | 17 | 57,600 | https://arxiv.org/abs/2507.05595 |
38 | Seeing, Listening, Remembering, and Reasoning: A Multimodal Agent with Long-Term Memory | 121 | 4 | 18.92 | 14 | Aug 13, 2025 | 54 | 893 | https://arxiv.org/abs/2508.09736 |
39 | rStar2-Agent: Agentic Reasoning Technical Report | 112 | 4 | 16.78 | 16 | Aug 28, 2025 | 106 | 1,230 | https://arxiv.org/abs/2508.20722 |
40 | Mem0: Building Production-Ready AI Agents with Scalable Long-Term Memory | 112 | 6 | 17.88 | 15 | Apr 28, 2025 | 33 | 43,700 | https://arxiv.org/abs/2504.19413 |
41 | ScreenCoder: Advancing Visual-to-Code Generation for Front-End Automation via Modular Multimodal Agents | 122 | 1 | 21.56 | 24 | Jul 30, 2025 | 98 | 2,290 | https://arxiv.org/abs/2507.22827 |
42 | Step-Audio 2 Technical Report | 94 | 2 | 13.01 | 12 | Jul 22, 2025 | 71 | 1,050 | https://arxiv.org/abs/2507.16632 |
43 | USO: Unified Style and Subject-Driven Generation via Disentangled and Reward Learning | 92 | 2 | 12.67 | 6 | Aug 26, 2025 | 56 | 1,080 | https://arxiv.org/abs/2508.18966 |
44 | TempFlow-GRPO: When Timing Matters for GRPO in Flow Models | 96 | 4 | 14.52 | 10 | Aug 6, 2025 | 12 | 782 | https://arxiv.org/abs/2508.04324 |
45 | RAG-Anything: All-in-One RAG Framework | 129 | 2 | 25.02 | 28 | Oct 14, 2025 | 49 | 10,600 | https://arxiv.org/abs/2510.12323 |
46 | ComfyUI-Copilot: An Intelligent Assistant for Automated Workflow Development | 172 | 6 | 32.85 | 36 | Jun 5, 2025 | 79 | 3,660 | https://arxiv.org/abs/2506.05010 |
47 | VGGT: Visual Geometry Grounded Transformer | 232 | 12 | 37.63 | 39 | Mar 14, 2025 | 33 | 11,700 | https://arxiv.org/abs/2503.11651 |
48 | MixGRPO: Unlocking Flow-based GRPO Efficiency with Mixed ODE-SDE | 170 | 12 | 32.78 | 34 | Jul 29, 2025 | 15 | 1,010 | https://arxiv.org/abs/2507.21802 |
49 | 4DNeX: Feed-Forward 4D Generative Modeling Made Easy | 178 | 2 | 33.96 | 37 | Aug 18, 2025 | 61 | 751 | https://arxiv.org/abs/2508.13154 |
50 | FAPO: Flawed-Aware Policy Optimization for Efficient and Reliable Reasoning | 86 | 3 | 15.94 | 11 | Oct 26, 2025 | 10 | 17,000 | https://arxiv.org/abs/2510.22543 |
51 | Agent S2: A Compositional Generalist-Specialist Framework for Computer Use Agents | 145 | 2 | 30.46 | 35 | Apr 1, 2025 | 26 | 8,450 | https://arxiv.org/abs/2504.00906 |
52 | The Unreasonable Effectiveness of Scaling Agents for Computer Use | 138 | 2 | 29.52 | 34 | Oct 2, 2025 | 24 | 8,450 | https://arxiv.org/abs/2510.02250 |
53 | VLA-Adapter: An Effective Paradigm for Tiny-Scale Vision-Language-Action Model | 131 | 7 | 28.43 | 29 | Sep 11, 2025 | 235 | 1,590 | https://arxiv.org/abs/2509.09372 |
54 | A Survey of Reinforcement Learning for Large Reasoning Models | 110 | 4 | 24.15 | 24 | Sep 10, 2025 | 183 | 1,800 | https://arxiv.org/abs/2509.08827 |
55 | Less is More: Recursive Reasoning with Tiny Networks | 90 | 1 | 20.33 | 18 | Oct 6, 2025 | 483 | 5,670 | https://arxiv.org/abs/2510.04871 |
56 | LightRAG: Simple and Fast Retrieval-Augmented Generation | 92 | 2 | 21.25 | 9 | Oct 8, 2024 | 20 | 24,900 | https://arxiv.org/abs/2410.05779 |
57 | PyTorch FSDP: Experiences on Scaling Fully Sharded Data Parallel | 104 | 10 | 27.97 | 30 | Apr 21, 2023 | 4 | 95,500 | https://arxiv.org/abs/2304.11277 |
58 | Matrix-Game: Interactive World Foundation Model | 78 | 1 | 20.45 | 14 | Jun 23, 2025 | 72 | 1,550 | https://arxiv.org/abs/2506.18701 |
59 | PyTorch Distributed: Experiences on Accelerating Data Parallel Training | 105 | 9 | 28.43 | 31 | Jun 28, 2020 | 3 | 95,500 | https://arxiv.org/abs/2006.15704 |
60 | Thyme: Think Beyond Images | 82 | 1 | 23.55 | 23 | Aug 15, 2025 | 78 | 466 | https://arxiv.org/abs/2508.11630 |
61 | RepoMaster: Autonomous Exploration and Understanding of GitHub Repositories for Complex Task Solving | 83 | 14 | 23.9 | 21 | May 27, 2025 | 2 | 348 | https://arxiv.org/abs/2505.21577 |
62 | UI-Venus Technical Report: Building High-performance UI Agents with RFT | 74 | 7 | 21.26 | 17 | Aug 14, 2025 | 41 | 470 | https://arxiv.org/abs/2508.10833 |
63 | ToonComposer: Streamlining Cartoon Production with Generative Post-Keyframing | 65 | 4 | 17.37 | 14 | Aug 14, 2025 | 50 | 358 | https://arxiv.org/abs/2508.10881 |
64 | OpenCUA: Open Foundations for Computer-Use Agents | 85 | 14 | 25.86 | 24 | Aug 12, 2025 | 30 | 415 | https://arxiv.org/abs/2508.09123 |
65 | LongSplat: Robust Unposed 3D Gaussian Splatting for Casual Long Videos | 59 | 3 | 14.98 | 10 | Aug 19, 2025 | 59 | 600 | https://arxiv.org/abs/2508.14041 |
66 | Stand-In: A Lightweight and Plug-and-Play Identity Control for Video Generation | 63 | 2 | 17.37 | 9 | Aug 11, 2025 | 38 | 512 | https://arxiv.org/abs/2508.07901 |
67 | Zep: A Temporal Knowledge Graph Architecture for Agent Memory | 95 | 11 | 28.88 | 31 | Jan 20, 2025 | 6 | 20,600 | https://arxiv.org/abs/2501.13956 |
68 | Paper2Video: Automatic Video Generation from Scientific Papers | 55 | 2 | 13.04 | 11 | Oct 6, 2025 | 115 | 1,790 | https://arxiv.org/abs/2510.05096 |
69 | Youtu-GraphRAG: Vertically Unified Agents for Graph Retrieval-Augmented Complex Reasoning | 72 | 6 | 22.89 | 17 | Aug 27, 2025 | 7 | 730 | https://arxiv.org/abs/2508.19855 |
70 | VeOmni: Scaling Any Modality Model Training with Model-Centric Distributed Recipe Zoo | 66 | 1 | 20.94 | 18 | Aug 4, 2025 | 17 | 1,180 | https://arxiv.org/abs/2508.02317 |
71 | NextStep-1: Toward Autoregressive Image Generation with Continuous Tokens at Scale | 52 | 1 | 12.92 | 5 | Aug 14, 2025 | 134 | 496 | https://arxiv.org/abs/2508.10711 |
72 | Qwen3-Omni Technical Report | 72 | 1 | 23.96 | 26 | Sep 22, 2025 | 128 | 2,690 | https://arxiv.org/abs/2509.17765 |
73 | 3D and 4D World Modeling: A Survey | 66 | 4 | 22.12 | 20 | Sep 4, 2025 | 57 | 568 | https://arxiv.org/abs/2509.07996 |
74 | ScaleCUA: Scaling Open-Source Computer Use Agents with Cross-Platform Data | 78 | 10 | 27.24 | 29 | Sep 18, 2025 | 105 | 610 | https://arxiv.org/abs/2509.15221 |
75 | VibeVoice Technical Report | 37 | 1 | 1 | 1 | Aug 26, 2025 | 118 | 7,980 | https://arxiv.org/abs/2508.19205 |
76 | PokeeResearch: Effective Deep Research via Reinforcement Learning from AI Feedback and Robust Reasoning Scaffold | 47 | 4 | 12.15 | 7 | Oct 17, 2025 | 8 | 1,610 | https://arxiv.org/abs/2510.15862 |
77 | MCP-Bench: Benchmarking Tool-Using LLM Agents with Complex Real-World Tasks via MCP Servers | 69 | 5 | 25.25 | 23 | Aug 28, 2025 | 58 | 296 | https://arxiv.org/abs/2508.20453 |
78 | Depth Anything 3: Recovering the Visual Space from Any Views | 40 | 1 | 6.58 | 3 | Nov 13, 2025 | 89 | 3,010 | https://arxiv.org/abs/2511.10647 |
79 | MCP-Universe: Benchmarking Large Language Models with Real-World Model Context Protocol Servers | 57 | 5 | 20.07 | 18 | Aug 20, 2025 | 41 | 352 | https://arxiv.org/abs/2508.14704 |
80 | HunyuanImage 3.0 Technical Report | 50 | 1 | 15.78 | 7 | Sep 28, 2025 | 21 | 2,210 | https://arxiv.org/abs/2509.23951 |
81 | olmOCR: Unlocking Trillions of Tokens in PDFs with Vision Language Models | 56 | 3 | 19.68 | 20 | Feb 25, 2025 | 9 | 16,000 | https://arxiv.org/abs/2502.18443 |
82 | Ovi: Twin Backbone Cross-Modal Fusion for Audio-Video Generation | 68 | 2 | 25.31 | 22 | Sep 30, 2025 | 32 | 1,300 | https://arxiv.org/abs/2510.01284 |
83 | Pico-Banana-400K: A Large-Scale Dataset for Text-Guided Image Editing | 54 | 1 | 18.81 | 9 | Oct 22, 2025 | 28 | 1,580 | https://arxiv.org/abs/2510.19808 |
84 | A Survey of Context Engineering for Large Language Models | 151 | 1 | 39.59 | 40 | Jul 17, 2025 | 256 | 2,330 | https://arxiv.org/abs/2507.13334 |
85 | HuMo: Human-Centric Video Generation via Collaborative Multi-Modal Conditioning | 46 | 2 | 14.24 | 9 | Sep 10, 2025 | 120 | 596 | https://arxiv.org/abs/2509.08519 |
86 | EmbodiedOneVision: Interleaved Vision-Text-Action Pretraining for General Robot Control | 52 | 2 | 20.06 | 15 | Aug 28, 2025 | 76 | 282 | https://arxiv.org/abs/2508.21112 |
87 | Waver: Wave Your Way to Lifelike Video Generation | 53 | 5 | 21.06 | 22 | Aug 21, 2025 | 33 | 499 | https://arxiv.org/abs/2508.15761 |
88 | AudioStory: Generating Long-Form Narrative Audio with Large Language Models | 59 | 16 | 24.81 | 21 | Aug 27, 2025 | 20 | 264 | https://arxiv.org/abs/2508.20088 |
89 | Towards a Unified View of Large Language Model Post-Training | 46 | 5 | 17.63 | 19 | Sep 4, 2025 | 61 | 100 | https://arxiv.org/abs/2509.04419 |
90 | InternVL3.5: Advancing Open-Source Multimodal Models in Versatility, Reasoning, and Efficiency | 109 | 12 | 37.03 | 40 | Aug 25, 2025 | 191 | 9,240 | https://arxiv.org/abs/2508.18265 |
91 | Transition Models: Rethinking the Generative Learning Objective | 47 | 8 | 19.38 | 19 | Sep 4, 2025 | 28 | 94 | https://arxiv.org/abs/2509.04394 |
92 | AgentGym-RL: Training LLM Agents for Long-Horizon Decision Making through Multi-Turn Reinforcement Learning | 43 | 1 | 16.98 | 8 | Sep 10, 2025 | 56 | 362 | https://arxiv.org/abs/2509.08755 |
93 | The Dragon Hatchling: The Missing Link between the Transformer and Models of the Brain | 38 | 1 | 13.08 | 1 | Sep 30, 2025 | 489 | 3,120 | https://arxiv.org/abs/2509.26507 |
94 | STream3R: Scalable Sequential 3D Reconstruction with Causal Transformer | 39 | 5 | 14.41 | 8 | Aug 14, 2025 | 30 | 194 | https://arxiv.org/abs/2508.10893 |
95 | SimpleVLA-RL: Scaling VLA Training via Reinforcement Learning | 56 | 1 | 25.86 | 28 | Sep 11, 2025 | 73 | 740 | https://arxiv.org/abs/2509.09674 |
96 | Enterprise Deep Research: Steerable Multi-Agent Deep Research for Enterprise Analytics | 46 | 7 | 21.17 | 18 | Oct 20, 2025 | 9 | 952 | https://arxiv.org/abs/2510.17797 |
97 | Code2Video: A Code-centric Paradigm for Educational Video Generation | 48 | 2 | 22.77 | 19 | Oct 1, 2025 | 33 | 1,100 | https://arxiv.org/abs/2510.01174 |
98 | RynnEC: Bringing MLLMs into Embodied World | 62 | 6 | 29.16 | 29 | Aug 19, 2025 | 18 | 337 | https://arxiv.org/abs/2508.14160 |
99 | Kimi Linear: An Expressive, Efficient Attention Architecture | 39 | 1 | 16.28 | 4 | Oct 30, 2025 | 102 | 1,150 | https://arxiv.org/abs/2510.26692 |
100 | UI-S1: Advancing GUI Automation via Semi-online Reinforcement Learning | 67 | 15 | 30.96 | 31 | Sep 15, 2025 | 47 | 6,440 | https://arxiv.org/abs/2509.11543 |
HuggingFace Top Trending Papers (2025)
A ranked dataset of the most trending papers on HuggingFace Daily Papers in 2025, based on weighted scoring of their trending appearances. This dataset captures which AI/ML research papers gained the most community attention and sustained visibility.
Dataset Overview
- Total Entries: 663 ranked papers
- Time Period: 2025 (Jan - Nov)
- Source: Wayback Machine snapshots of HuggingFace Daily Papers
- Unique Papers: 663
- Data Order: Sorted by rank (1 = highest score)
- Scoring Method: Weighted by appearance frequency and rank position
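Each row in the dataset follows the ten-column schema shown in the preview table above. A minimal sketch of how one entry looks once loaded into plain Python (the record layout here is my own; the values are copied from the first preview row):

```python
# Column names taken from the dataset schema above.
COLUMNS = ["rank", "name", "times_trended", "best_rank", "avg_rank",
           "median_rank", "publish_date", "max_upvotes",
           "max_github_stars", "arxiv_link"]

# First preview row, values copied verbatim from the table.
top_row = [1, "LlamaFactory: Unified Efficient Fine-Tuning of 100+ Language Models",
           432, 2, 11.28, 11, "Mar 20, 2024", 173, 63300,
           "https://arxiv.org/abs/2403.13372"]

record = dict(zip(COLUMNS, top_row))
print(record["name"], record["times_trended"])
```

Any loader that preserves these column names (pandas, `datasets`, plain CSV parsing) yields records of this shape.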
Scoring Methodology
Each paper's score is calculated using:
Score = Σ (51 - rank), summed over every trending appearance
Where:
- Rank 1 = 50 points
- Rank 2 = 49 points
- ...
- Rank 50 = 1 point
Why this works:
- Rewards frequent appearances (more days trending = more points)
- Rewards high rankings (rank 1 is worth more than rank 25)
- Balances viral papers with consistently popular papers
Example Calculation
Paper appears 3 times:
- Day 1: Rank 1 → 50 points
- Day 2: Rank 5 → 46 points
- Day 3: Rank 10 → 41 points
- Total Score: 137
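The scoring rule above is a one-liner in code; a minimal sketch (the function name `trending_score` is my own, not part of the dataset):

```python
def trending_score(daily_ranks):
    """Weighted trending score: each day a paper appears at rank r
    on the top-50 list, it earns (51 - r) points."""
    return sum(51 - r for r in daily_ranks)

# The worked example above: ranks 1, 5, and 10 across three days.
print(trending_score([1, 5, 10]))  # 137
```

Because a rank-50 appearance still earns 1 point, every day on the list contributes, which is what lets high-frequency papers like LlamaFactory outscore papers with better single-day ranks.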
Key Insights
1. Top 10 Most Trending Papers (2025)
| Rank | Paper | Score | Appearances | Avg Rank | Key Topic |
|---|---|---|---|---|---|
| 1 | LlamaFactory | 17,160 | 432 | 11.28 | LLM Fine-tuning |
| 2 | DINOv3 | 10,372 | 346 | 21.02 | Computer Vision |
| 3 | Agent Lightning | 10,015 | 325 | 20.18 | RL for Agents |
| 4 | WebDancer | 10,010 | 398 | 25.85 | Information Seeking |
| 5 | WebShaper | 9,912 | 402 | 26.34 | Data Synthesis |
| 6 | WebWatcher | 9,881 | 393 | 25.86 | Vision-Language |
| 7 | WebSailor | 9,839 | 394 | 26.03 | Web Reasoning |
| 8 | InternVL3 | 9,746 | 349 | 23.07 | Vision-Language |
| 9 | AgentScope 1.0 | 9,474 | 290 | 18.33 | Agent Framework |
| 10 | Qwen-Image | 9,255 | 302 | 20.35 | Vision Language Model |
Notable: LlamaFactory logged 432 trending appearances, the most of any paper in the dataset!
2. Research Themes Dominating 2025
Web Agents & Information Systems (Major Trend)
- 4 "Web" papers in top 10 (WebDancer, WebShaper, WebWatcher, WebSailor)
- Roughly 400 appearances each, showing sustained interest
- Focus on autonomous web navigation and reasoning
Agent Systems
- Agent Lightning, AgentScope in top 10
- Strong focus on reinforcement learning for agents
- Training frameworks and methodologies driving adoption
Vision-Language Models
- DINOv3, InternVL3, Qwen-Image all in top 10
- Nearly 1,000 combined appearances
- Integration of vision and language continues to trend
LLM Fine-tuning Tools
- LlamaFactory dominates with 432 appearances (#1)
- 63,300 GitHub stars reflect massive adoption
- Democratizing access to fine-tuning
3. Appearance Patterns
Most Consistent Papers (High Frequency):
- LlamaFactory: 432 appearances (the most of any paper)
- WebShaper: 402 appearances
- WebDancer: 398 appearances
- WebSailor: 394 appearances
- WebWatcher: 393 appearances
Highest Quality Rankings (Best Avg Rank):
- Papers consistently ranking in top 10-15 positions
- LlamaFactory: 11.28 avg rank (best combination of frequency + quality)
- AgentScope: 18.33 avg rank
- Agent Lightning: 20.18 avg rank
- Qwen-Image: 20.35 avg rank
Viral Papers (Few Appearances, High Impact):
- Papers with fewer than 50 appearances that still land in the overall top 100 mark viral breakthrough moments
- Often coincide with major model releases or novel capabilities
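Once the dataset is loaded, these viral papers fall out of a simple filter; a sketch over three rows hand-copied from the preview, using the column names from the schema and the fewer-than-50-appearances / top-100 thresholds described above:

```python
# Three rows copied from the preview table (subset of columns).
papers = [
    {"name": "VibeVoice Technical Report", "rank": 75, "times_trended": 37},
    {"name": "Depth Anything 3: Recovering the Visual Space from Any Views",
     "rank": 78, "times_trended": 40},
    {"name": "Qwen3 Technical Report", "rank": 29, "times_trended": 266},
]

# "Viral": overall top-100 rank despite fewer than 50 trending appearances.
viral = [p["name"] for p in papers
         if p["rank"] <= 100 and p["times_trended"] < 50]
print(viral)
```

On this sample, VibeVoice and Depth Anything 3 qualify as viral, while Qwen3 (266 appearances) counts as consistently popular instead.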
4. Community Engagement Metrics
Most Upvoted:
- Qwen3: 316 upvotes
- DINOv3: 284 upvotes
- Qwen-Image: 261 upvotes
- A Survey of Context Engineering: 256 upvotes
Most GitHub Stars:
- PaddleOCR-VL: 65,400 stars
- LlamaFactory: 63,300 stars
- PaddleOCR 3.0: 57,600 stars
- OmniFlatten: 50,300 stars
- MinerU2.5: 49,600 stars
Downloads last month: 30