Collections

Discover the best community collections!

Collections including paper arxiv:2410.23743

- Video Creation by Demonstration
  Paper • 2412.09551 • Published • 9
- DiffSensei: Bridging Multi-Modal LLMs and Diffusion Models for Customized Manga Generation
  Paper • 2412.07589 • Published • 48
- Unraveling the Complexity of Memory in RL Agents: an Approach for Classification and Evaluation
  Paper • 2412.06531 • Published • 72
- APOLLO: SGD-like Memory, AdamW-level Performance
  Paper • 2412.05270 • Published • 38

- Rethinking Data Selection at Scale: Random Selection is Almost All You Need
  Paper • 2410.09335 • Published • 17
- From Generalist to Specialist: Adapting Vision Language Models via Task-Specific Visual Instruction Tuning
  Paper • 2410.06456 • Published • 37
- Emergent properties with repeated examples
  Paper • 2410.07041 • Published • 8
- Personalized Visual Instruction Tuning
  Paper • 2410.07113 • Published • 70

- Writing in the Margins: Better Inference Pattern for Long Context Retrieval
  Paper • 2408.14906 • Published • 144
- Training Language Models to Self-Correct via Reinforcement Learning
  Paper • 2409.12917 • Published • 140
- Towards a Unified View of Preference Learning for Large Language Models: A Survey
  Paper • 2409.02795 • Published • 72
- Attention Heads of Large Language Models: A Survey
  Paper • 2409.03752 • Published • 92

- A Survey of Small Language Models
  Paper • 2410.20011 • Published • 46
- TokenFormer: Rethinking Transformer Scaling with Tokenized Model Parameters
  Paper • 2410.23168 • Published • 24
- What Happened in LLMs Layers when Trained for Fast vs. Slow Thinking: A Gradient Perspective
  Paper • 2410.23743 • Published • 63
- GPT or BERT: why not both?
  Paper • 2410.24159 • Published • 13

- What Happened in LLMs Layers when Trained for Fast vs. Slow Thinking: A Gradient Perspective
  Paper • 2410.23743 • Published • 63
- Large Language Models Orchestrating Structured Reasoning Achieve Kaggle Grandmaster Level
  Paper • 2411.03562 • Published • 68
- Polynomial Composition Activations: Unleashing the Dynamics of Large Language Models
  Paper • 2411.03884 • Published • 28
- MM-IQ: Benchmarking Human-Like Abstraction and Reasoning in Multimodal Models
  Paper • 2502.00698 • Published • 24

- CLEAR: Character Unlearning in Textual and Visual Modalities
  Paper • 2410.18057 • Published • 209
- CORAL: Benchmarking Multi-turn Conversational Retrieval-Augmented Generation
  Paper • 2410.23090 • Published • 55
- What Happened in LLMs Layers when Trained for Fast vs. Slow Thinking: A Gradient Perspective
  Paper • 2410.23743 • Published • 63
- "Give Me BF16 or Give Me Death"? Accuracy-Performance Trade-Offs in LLM Quantization
  Paper • 2411.02355 • Published • 51

- DataComp-LM: In search of the next generation of training sets for language models
  Paper • 2406.11794 • Published • 54
- Training Language Models on Synthetic Edit Sequences Improves Code Synthesis
  Paper • 2410.02749 • Published • 13
- Fewer Truncations Improve Language Modeling
  Paper • 2404.10830 • Published • 3
- How to Train Long-Context Language Models (Effectively)
  Paper • 2410.02660 • Published • 2