| title | content_TLDR | abstract | authors | openreview_url | id | forum | authorids | venue | venueid | pdf_url | invitation | group | venue_name | year | conference | content_keywords | content_code_of_ethics | content_author_guide | content_flagged_for_ethics_review | content_ethics_comments | content__bibtex | content_paperhash | content_supplementary_material | content_award_nomination | content_reciprocal_reviewing_status | content_reciprocal_reviewing_author | content_reciprocal_reviewing_exemption_reason |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 
	Jigsaw Puzzles: Splitting Harmful Questions to Jailbreak Large Language Models in Multi-turn Interactions | 
	In this work, we propose Jigsaw Puzzles (JSP), a straightforward yet effective multi-turn jailbreak strategy, exposing LLM vulnerabilities to inform future safety improvements. | 
	Large language models (LLMs) have exhibited outstanding performance in engaging with humans and addressing complex questions by leveraging their vast implicit knowledge and robust reasoning capabilities. However, such models are vulnerable to jailbreak attacks, leading to the generation of harmful responses. Despite recent research on single-turn jailbreak strategies to facilitate the development of defence mechanisms, the challenge of revealing vulnerabilities in the multi-turn setting remains relatively under-explored. In this work, we propose Jigsaw Puzzles (JSP), a straightforward yet effective multi-turn jailbreak strategy against advanced LLMs. JSP splits questions into harmless fractions, provides them as inputs across turns, and requests LLMs to reconstruct and respond to the questions over the multi-turn interaction. Our results demonstrate that the proposed JSP jailbreak bypasses original safeguards against explicitly harmful content, achieving an average attack success rate of 93.76% on 189 harmful queries across 5 advanced LLMs (Gemini-1.5-Pro, Llama-3.1-70B, GPT-4, GPT-4o, GPT-4o-mini), and exhibits consistent performance on jailbreaking benchmarks. Moreover, JSP exhibits strong resistance to input-side and output-side defence tactics. Warning: this paper contains offensive examples. | 
	[
  "Hao Yang",
  "Lizhen Qu",
  "Ehsan Shareghi",
  "Gholamreza Haffari"
] | 
	https://openreview.net/forum?id=zuNM3eoPVi | 
	zuNM3eoPVi | 
	zuNM3eoPVi | 
	[
  "~Hao_Yang26",
  "~Lizhen_Qu2",
  "~Ehsan_Shareghi1",
  "~Gholamreza_Haffari2"
] | 
	{
  "value": "COLM 2025"
} | 
	{
  "value": "colmweb.org/COLM/2025/Conference"
} | 
	{
  "value": "/pdf/37b5d26cf61599e9f7a4d742ff910b1026aec236.pdf"
} | 
	conference | 
	colmweb.org/COLM/2025/Conference | 2025 | 
	COLM | 
	[
  "Jailbreak",
  "Red teaming"
] | 
	I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html | 
	I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html | true | 
	Does not discuss potentially harmful ramifications and dual use
 | 
	@inproceedings{
yang2025jigsaw,
title={Jigsaw Puzzles: Splitting Harmful Questions to Jailbreak Large Language Models in Multi-turn Interactions},
author={Hao Yang and Lizhen Qu and Ehsan Shareghi and Gholamreza Haffari},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=zuNM3eoPVi}
} | 
	yang|jigsaw_puzzles_splitting_harmful_questions_to_jailbreak_large_language_models_in_multiturn_interactions | null | null | null | null | null | |
| 
	Agent S2: A Compositional Generalist-Specialist Framework for Computer Use Agents | 
	State-of-the-art results on Computer Use using a framework of Generalist and Specialist modules. | 
	Computer use agents automate digital tasks by directly interacting with graphical user interfaces (GUIs) on computers and mobile devices, offering significant potential to enhance human productivity by completing an open-ended space of user queries. However, current agents face significant challenges: imprecise grounding of GUI elements, difficulties with long-horizon task planning, and performance bottlenecks from relying on single generalist models for diverse cognitive tasks. To this end, we introduce Agent S2, a novel compositional framework that delegates cognitive responsibilities across various generalist and specialist models. We propose a novel Mixture-of-Grounding technique to achieve precise GUI localization and introduce Proactive Hierarchical Planning, dynamically refining action plans at multiple temporal scales in response to evolving observations. Evaluations demonstrate that Agent S2 establishes new state-of-the-art (SOTA) performance on three prominent computer use benchmarks. Specifically, Agent S2 achieves 18.9% and 32.7% relative improvements over leading baseline agents such as Claude Computer Use and UI-TARS on the OSWorld 15-step and 50-step evaluations. Moreover, Agent S2 generalizes effectively to other operating systems and applications, surpassing the previous best methods by 52.8% on WindowsAgentArena and by 16.52% on AndroidWorld in relative terms. Code available at https://github.com/simular-ai/Agent-S. | 
	[
  "Saaket Agashe",
  "Kyle Wong",
  "Vincent Tu",
  "Jiachen Yang",
  "Ang Li",
  "Xin Eric Wang"
] | 
	https://openreview.net/forum?id=zg5is4GJ3R | 
	zg5is4GJ3R | 
	zg5is4GJ3R | 
	[
  "~Saaket_Agashe1",
  "~Kyle_Wong1",
  "~Vincent_Tu1",
  "~Jiachen_Yang1",
  "~Ang_Li1",
  "~Xin_Eric_Wang2"
] | 
	{
  "value": "COLM 2025"
} | 
	{
  "value": "colmweb.org/COLM/2025/Conference"
} | 
	{
  "value": "/pdf/51a372ef953dfccf2d22d4657f707fe1bf9383b2.pdf"
} | 
	conference | 
	colmweb.org/COLM/2025/Conference | 2025 | 
	COLM | 
	[
  "Computer Use",
  "GUI Agents",
  "Multimodal Large Language Models",
  "Planning",
  "Grounding",
  "Vision"
] | 
	I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html | 
	I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html | null | null | 
	@inproceedings{
agashe2025agent,
title={Agent S2: A Compositional Generalist-Specialist Framework for Computer Use Agents},
author={Saaket Agashe and Kyle Wong and Vincent Tu and Jiachen Yang and Ang Li and Xin Eric Wang},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=zg5is4GJ3R}
} | 
	agashe|agent_s2_a_compositional_generalistspecialist_framework_for_computer_use_agents | 
	/attachment/ecc357b402b416c7ea4804242770e4c521a46cfd.zip | null | null | null | null | |
| 
	GenerationPrograms:  Fine-grained Attribution with Executable Programs | 
	GenerationPrograms: Fine-grained Attribution via Neural Modular Trees | 
	Recent large language models (LLMs) achieve impressive performance in text generation but often fail to accurately attribute their outputs, undermining trust and verifiability. Moreover, existing attribution methods do not explain how and why models leverage the provided source documents to arrive at their final responses, limiting interpretability. To overcome these challenges, we introduce a modular generation framework, GenerationPrograms, inspired by recent advancements in executable ``code agent'' architectures. Unlike conventional generation methods that simultaneously generate outputs and attributions or rely on post-hoc attribution, GenerationPrograms decomposes the process into two distinct stages: first, creating an executable program plan composed of modular text operations (such as paraphrasing, compression, and fusion) explicitly tailored to the query, and second, executing these operations following the program's specified instructions to produce the final response. Empirical evaluations demonstrate that GenerationPrograms significantly improves attribution quality at both document-level and sentence-level granularity across two long-form question-answering tasks. We further demonstrate that GenerationPrograms can effectively function as a post-hoc attribution method, outperforming traditional techniques in recovering accurate attributions. In addition, the interpretable programs generated by GenerationPrograms enable localized refinement through modular-level improvements that further enhance overall attribution quality. | 
	[
  "David Wan",
  "Eran Hirsch",
  "Elias Stengel-Eskin",
  "Ido Dagan",
  "Mohit Bansal"
] | 
	https://openreview.net/forum?id=zTKYKiWzIm | 
	zTKYKiWzIm | 
	zTKYKiWzIm | 
	[
  "~David_Wan1",
  "~Eran_Hirsch1",
  "~Elias_Stengel-Eskin1",
  "~Ido_Dagan1",
  "~Mohit_Bansal2"
] | 
	{
  "value": "COLM 2025"
} | 
	{
  "value": "colmweb.org/COLM/2025/Conference"
} | 
	{
  "value": "/pdf/879c600a2233a7fa325b3ccd7a81cd7277332584.pdf"
} | 
	conference | 
	colmweb.org/COLM/2025/Conference | 2025 | 
	COLM | 
	[
  "long-form qa",
  "rag",
  "summarization",
  "attributed text generation"
] | 
	I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html | 
	I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html | null | null | 
	@inproceedings{
wan2025generationprograms,
title={GenerationPrograms:  Fine-grained Attribution with Executable Programs},
author={David Wan and Eran Hirsch and Elias Stengel-Eskin and Ido Dagan and Mohit Bansal},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=zTKYKiWzIm}
} | 
	wan|generationprograms_finegrained_attribution_with_executable_programs | 
	/attachment/22e800fcc6dc11a936f21b1f04a507d404c9515d.zip | null | null | null | null | |
| 
	Can A Society of Generative Agents Simulate Human Behavior and Inform Public Health Policy? A Case Study on Vaccine Hesitancy | 
	Investigate if a multi LLM agent system can simulate human health behaviors and inform policymaking. | 
	Can we simulate a sandbox society with generative agents to model human behavior, thereby reducing the over-reliance on real human trials for assessing public policies? In this work, we investigate the feasibility of simulating health-related decision-making, using vaccine hesitancy, defined as the delay in acceptance or refusal of vaccines despite the availability of vaccination services (MacDonald, 2015), as a case study. To this end, we introduce the VacSim framework with 100 generative agents powered by Large Language Models (LLMs). VacSim simulates vaccine policy outcomes with the following steps: 1) instantiate a population of agents with demographics based on census data; 2) connect the agents via a social network and model vaccine attitudes as a function of social dynamics and disease-related information; 3) design and evaluate various public health interventions aimed at mitigating vaccine hesitancy. To align with real-world results, we also introduce simulation warmup and attitude modulation to adjust agents' attitudes. We propose a series of evaluations to assess the reliability of various LLM simulations. Experiments indicate that models like Llama and Qwen can simulate aspects of human behavior but also highlight real-world alignment challenges, such as inconsistent responses with demographic profiles. This early exploration of LLM-driven simulations is not meant to serve as definitive policy guidance; instead, it serves as a call for action to examine social simulation for policy development. | 
	[
  "Abe Bohan Hou",
  "Hongru Du",
  "Yichen Wang",
  "Jingyu Zhang",
  "Zixiao Wang",
  "Paul Pu Liang",
  "Daniel Khashabi",
  "Lauren M Gardner",
  "Tianxing He"
] | 
	https://openreview.net/forum?id=zSbecER9il | 
	zSbecER9il | 
	zSbecER9il | 
	[
  "~Abe_Bohan_Hou1",
  "~Hongru_Du1",
  "~Yichen_Wang4",
  "~Jingyu_Zhang2",
  "~Zixiao_Wang6",
  "~Paul_Pu_Liang1",
  "~Daniel_Khashabi2",
  "~Lauren_M_Gardner1",
  "~Tianxing_He1"
] | 
	{
  "value": "COLM 2025"
} | 
	{
  "value": "colmweb.org/COLM/2025/Conference"
} | 
	{
  "value": "/pdf/efdbcf1c87e1f5c9d8ded3b5bd028477a557f88e.pdf"
} | 
	conference | 
	colmweb.org/COLM/2025/Conference | 2025 | 
	COLM | 
	[
  "LLM agent",
  "multi-agent system",
  "social simulation",
  "public health",
  "AI for health"
] | 
	I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html | 
	I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html | null | null | 
	@inproceedings{
hou2025can,
title={Can A Society of Generative Agents Simulate Human Behavior and Inform Public Health Policy? A Case Study on Vaccine Hesitancy},
author={Abe Bohan Hou and Hongru Du and Yichen Wang and Jingyu Zhang and Zixiao Wang and Paul Pu Liang and Daniel Khashabi and Lauren M Gardner and Tianxing He},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=zSbecER9il}
} | 
	hou|can_a_society_of_generative_agents_simulate_human_behavior_and_inform_public_health_policy_a_case_study_on_vaccine_hesitancy | null | null | null | null | null | |
| 
	REFA: Reference Free Alignment with Fine-Grained Length Control | 
	Reference-free alignment methods that optimize over multiple user preferences with fine-grained control of length | 
	To mitigate reward hacking from response verbosity, modern preference optimization methods are increasingly adopting length normalization (e.g., SimPO, ORPO, LN-DPO). While effective against this bias, we demonstrate that length normalization itself introduces a failure mode: the **URSLA shortcut**. Here models learn to satisfy the alignment objective by prematurely truncating low-quality responses rather than learning from their semantic content. To address this, we introduce **REFA**, a new alignment framework that introduces probabilistic control over the structural token that governs termination. Our core innovation is a new class of regularizers that operate directly on the probability of the End-of-Sequence (EOS) token, a previously unexploited control lever. This token-level intervention provides a principled solution to the URSLA shortcut, ensuring genuine quality improvements. Furthermore, it unlocks a versatile mechanism for managing the alignment-efficiency tradeoff, enabling practitioners to fine-tune models that adhere to specific token budgets. Empirically, REFA achieves a **60.29\%** win rate and a **52.17\%** length-controlled win rate on AlpacaEval2 with Llama-3-8B-Instruct, demonstrating the power of our token-level control paradigm. | 
	[
  "Taneesh Gupta",
  "Rahul Madhavan",
  "Xuchao Zhang",
  "Chetan Bansal",
  "Saravan Rajmohan"
] | 
	https://openreview.net/forum?id=zP6DJaBBcR | 
	zP6DJaBBcR | 
	zP6DJaBBcR | 
	[
  "~Taneesh_Gupta1",
  "~Rahul_Madhavan1",
  "~Xuchao_Zhang1",
  "~Chetan_Bansal1",
  "~Saravan_Rajmohan2"
] | 
	{
  "value": "COLM 2025"
} | 
	{
  "value": "colmweb.org/COLM/2025/Conference"
} | 
	{
  "value": "/pdf/dc1c639c05cb38e22dd2b44774293041d8886ec9.pdf"
} | 
	conference | 
	colmweb.org/COLM/2025/Conference | 2025 | 
	COLM | 
	[
  "Model Alignment",
  "RLHF",
  "Preference Optimization"
] | 
	I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html | 
	I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html | null | null | 
	@inproceedings{
gupta2025refa,
title={{REFA}: Reference Free Alignment with Fine-Grained Length Control},
author={Taneesh Gupta and Rahul Madhavan and Xuchao Zhang and Chetan Bansal and Saravan Rajmohan},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=zP6DJaBBcR}
} | 
	gupta|refa_reference_free_alignment_with_finegrained_length_control | null | null | null | null | null | |
| 
	Investigating Intersectional Bias in Large Language Models using Confidence Disparities in Coreference Resolution | 
	We propose a fairness benchmark that evaluates intersectional biases in LLMs based on disparities in model confidence while performing coreference resolution on different intersectional identities | 
	Large language models (LLMs) have achieved impressive performance, leading to their widespread adoption as decision-support tools in resource-constrained contexts like hiring and admissions. There is, however, scientific consensus that AI systems can reflect and exacerbate societal biases, raising concerns about identity-based harm when used in critical social contexts. Prior work has laid a solid foundation for assessing bias in LLMs by evaluating demographic disparities in different language reasoning tasks. In this work, we extend single-axis fairness evaluations to examine intersectional bias, recognizing that when multiple axes of discrimination intersect, they create distinct patterns of disadvantage. We create a new benchmark called WinoIdentity by augmenting the WinoBias dataset with 25 demographic markers across 10 attributes, including age, nationality, and race, intersected with binary gender, yielding 245,700 prompts to evaluate 50 distinct bias patterns. Focusing on harms of omission due to underrepresentation, we investigate bias through the lens of uncertainty and propose a group (un)fairness metric called \emph{Coreference Confidence Disparity} which measures whether models are more or less confident for some intersectional identities than others. We evaluate five recently published LLMs and find confidence disparities as high as 40\% along various demographic attributes including body type, sexual orientation and socio-economic status, with models being most uncertain about doubly-disadvantaged identities in anti-stereotypical settings, such as when assigning transgender women to historically male-dominated occupations. Surprisingly, coreference confidence decreases even for hegemonic or privileged markers (e.g., 'White' or 'cisgender'), indicating that the recent impressive performance of LLMs is more likely due to memorization than logical reasoning. Notably, these are two independent failures in value alignment and validity that can compound to cause social harm. | 
	[
  "Falaah Arif Khan",
  "Nivedha Sivakumar",
  "Yinong Oliver Wang",
  "Katherine Metcalf",
  "Cezanne Camacho",
  "Barry-John Theobald",
  "Luca Zappella",
  "Nicholas Apostoloff"
] | 
	https://openreview.net/forum?id=zOw2it5Ni6 | 
	zOw2it5Ni6 | 
	zOw2it5Ni6 | 
	[
  "~Falaah_Arif_Khan1",
  "~Nivedha_Sivakumar1",
  "~Yinong_Oliver_Wang1",
  "~Katherine_Metcalf1",
  "~Cezanne_Camacho1",
  "~Barry-John_Theobald1",
  "~Luca_Zappella1",
  "~Nicholas_Apostoloff1"
] | 
	{
  "value": "COLM 2025"
} | 
	{
  "value": "colmweb.org/COLM/2025/Conference"
} | 
	{
  "value": "/pdf/036165617f0c285d7a8f21490ec7432f50b68556.pdf"
} | 
	conference | 
	colmweb.org/COLM/2025/Conference | 2025 | 
	COLM | 
	[
  "fairness",
  "uncertainty",
  "intersectionality"
] | 
	I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html | 
	I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html | null | null | 
	@inproceedings{
khan2025investigating,
title={Investigating Intersectional Bias in Large Language Models using Confidence Disparities in Coreference Resolution},
author={Falaah Arif Khan and Nivedha Sivakumar and Yinong Oliver Wang and Katherine Metcalf and Cezanne Camacho and Barry-John Theobald and Luca Zappella and Nicholas Apostoloff},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=zOw2it5Ni6}
} | 
	khan|investigating_intersectional_bias_in_large_language_models_using_confidence_disparities_in_coreference_resolution | null | null | null | null | null | |
| 
	MeMAD: Structured Memory of Debates for Enhanced Multi-Agent Reasoning | 
	We propose Memory-Augmented Multi-Agent Debate (MeMAD), which systematically organizes and reuses past debate transcripts to  improve performance on complex reasoning tasks without requiring parameter updates. | 
	Large Language Models (LLMs) demonstrate remarkable in-context learning capabilities but often struggle with complex, multi-step reasoning. Multi-Agent Debate (MAD) frameworks partially address these limitations by enabling iterative agent interactions. However, they neglect valuable historical insights by treating each new debate independently. In this paper, we propose Memory-Augmented MAD (MeMAD), a parameter-free MAD framework that systematically organizes and reuses past debate transcripts. MeMAD stores structured representations of successful and unsuccessful reasoning attempts enriched with self-reflections and peer feedback. It retrieves them via semantic similarity at inference time to inform new reasoning tasks. Our experiments on challenging mathematical reasoning, scientific question answering, and language understanding benchmarks show that MeMAD achieves significant accuracy gains (up to 3.3\% over conventional MAD baselines) without parameter updates. Our findings underscore structured memory as a pivotal mechanism for achieving deeper and more reliable multi-agent reasoning in LLMs. Code is available at ~\url{https://github.com/LSHCoding/MeMAD}. | 
	[
  "Shuai Ling",
  "Lizi Liao",
  "Dongmei Jiang",
  "Weili Guan"
] | 
	https://openreview.net/forum?id=zLbmsdyTiN | 
	zLbmsdyTiN | 
	zLbmsdyTiN | 
	[
  "~Shuai_Ling1",
  "~Lizi_Liao1",
  "~Dongmei_Jiang2",
  "~Weili_Guan4"
] | 
	{
  "value": "COLM 2025"
} | 
	{
  "value": "colmweb.org/COLM/2025/Conference"
} | 
	{
  "value": "/pdf/a0b1930d621647e4dd73a698d9879287d9ebcb8d.pdf"
} | 
	conference | 
	colmweb.org/COLM/2025/Conference | 2025 | 
	COLM | 
	[
  "Multi-Agent Debate",
  "Memory Augmentation"
] | 
	I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html | 
	I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html | null | null | 
	@inproceedings{
ling2025memad,
title={Me{MAD}: Structured Memory of Debates for Enhanced Multi-Agent Reasoning},
author={Shuai Ling and Lizi Liao and Dongmei Jiang and Weili Guan},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=zLbmsdyTiN}
} | 
	ling|memad_structured_memory_of_debates_for_enhanced_multiagent_reasoning | null | null | null | null | null | |
| 
	Values in the Wild: Discovering and Mapping Values in Real-World Language Model Interactions | 
	Our privacy-preserving analysis of values in real-world language model interactions reveals a novel taxonomy of AI values that differs from human frameworks, is highly context-dependent, and becomes most explicit/legible during moments of resistance. | 
	AI assistants interact with millions of real users every day, imparting normative judgments that can have significant personal and societal impact—but little is known about what values guide these interactions in practice. To address this, we develop a method to empirically analyze values expressed in hundreds of thousands of real-world conversations with Claude models. We empirically discover and taxonomize 3,308 AI values, and study how model values and responses depend on context. We find that Claude expresses many professional and intellectual values, and typically supports prosocial human values while resisting values like "moral nihilism." While some values appear consistently (e.g. "professionalism"), most are highly context-dependent—"harm prevention" emerges when the model resists users, "historical accuracy" when discussing controversial events, "healthy boundaries" in relationship advice, and "human agency" in technology ethics discussions. By providing the first large-scale empirical mapping of AI values in deployment, this work creates a foundation for more grounded evaluation and design of values in increasingly influential AI systems. | 
	[
  "Saffron Huang",
  "Esin DURMUS",
  "Kunal Handa",
  "Miles McCain",
  "Alex Tamkin",
  "Michael Stern",
  "Jerry Hong",
  "Deep Ganguli"
] | 
	https://openreview.net/forum?id=zJHZJClG1Z | 
	zJHZJClG1Z | 
	zJHZJClG1Z | 
	[
  "~Saffron_Huang1",
  "~Esin_DURMUS2",
  "~Kunal_Handa1",
  "~Miles_McCain1",
  "~Alex_Tamkin1",
  "~Michael_Stern1",
  "~Jerry_Hong1",
  "~Deep_Ganguli2"
] | 
	{
  "value": "COLM 2025"
} | 
	{
  "value": "colmweb.org/COLM/2025/Conference"
} | 
	{
  "value": "/pdf/d9cb13b573fe0a00779ee4fba3d32dee93dce2d3.pdf"
} | 
	conference | 
	colmweb.org/COLM/2025/Conference | 2025 | 
	COLM | 
	[
  "language models",
  "values",
  "AI ethics",
  "AI values",
  "empirical analysis",
  "human-AI interaction",
  "value alignment",
  "privacy-preserving analysis",
  "value pluralism",
  "AI and society"
] | 
	I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html | 
	I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html | null | null | 
	@inproceedings{
huang2025values,
title={Values in the Wild: Discovering and Mapping Values in Real-World Language Model Interactions},
author={Saffron Huang and Esin DURMUS and Kunal Handa and Miles McCain and Alex Tamkin and Michael Stern and Jerry Hong and Deep Ganguli},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=zJHZJClG1Z}
} | 
	huang|values_in_the_wild_discovering_and_mapping_values_in_realworld_language_model_interactions | null | null | null | null | null | |
| 
	Deep Binding of Language Model Virtual Personas: a Study on Approximating Political Partisan Misperceptions | 
	We propose a method to build virtual personas for deeper user binding and demonstrate its superiority in approximating metaperception in political science. | 
	Large language models (LLMs) are increasingly capable of simulating human behavior, offering cost-effective ways to estimate user responses during the early phases of survey design. While previous studies have examined whether models can reflect individual opinions or attitudes, we argue that a higher-order binding of virtual personas requires successfully approximating not only the opinions of a user as an identified member of a group, but also the nuanced ways in which that user perceives and evaluates those outside the group. In particular, faithfully simulating how humans perceive different social groups is critical for applying LLMs to various political science studies, including timely topics on polarization dynamics, inter-group conflict, and democratic backsliding. To this end, we propose a novel methodology for constructing virtual personas with synthetic user "backstories" generated as extended, multi-turn interview transcripts. Compared to previous methods, our generated backstories are longer, richer in detail, and more consistent in authentically describing a single individual. We show that virtual personas conditioned on our backstories closely replicate human response distributions (up to an 87% improvement as measured by Wasserstein Distance) and produce effect sizes that closely match those observed in the original studies.
Altogether, our work extends the applicability of LLMs beyond estimating individual self-opinions, enabling their use in a broader range of human studies. | 
	[
  "Minwoo Kang",
  "Suhong Moon",
  "Seung Hyeong Lee",
  "Ayush Raj",
  "Joseph Suh",
  "David Chan"
] | 
	https://openreview.net/forum?id=zHdSCtNmM4 | 
	zHdSCtNmM4 | 
	zHdSCtNmM4 | 
	[
  "~Minwoo_Kang1",
  "~Suhong_Moon1",
  "~Seung_Hyeong_Lee1",
  "~Ayush_Raj3",
  "~Joseph_Suh1",
  "~David_Chan3"
] | 
	{
  "value": "COLM 2025"
} | 
	{
  "value": "colmweb.org/COLM/2025/Conference"
} | 
	{
  "value": "/pdf/08c4c62957f19503b9cb14a781f9366ed5d2ff58.pdf"
} | 
	conference | 
	colmweb.org/COLM/2025/Conference | 2025 | 
	COLM | 
	[
  "user approximation",
  "metaperception",
  "social psycholog",
  "democratic backsliding",
  "outgroup hostility"
] | 
	I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html | 
	I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html | null | null | 
	@inproceedings{
kang2025deep,
title={Deep Binding of Language Model Virtual Personas: a Study on Approximating Political Partisan Misperceptions},
author={Minwoo Kang and Suhong Moon and Seung Hyeong Lee and Ayush Raj and Joseph Suh and David Chan},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=zHdSCtNmM4}
} | 
	kang|deep_binding_of_language_model_virtual_personas_a_study_on_approximating_political_partisan_misperceptions | 
	/attachment/887605a60f6440213636e72d74e01e35df30335b.zip | null | null | null | null | |
| 
	QUDsim: Quantifying Discourse Similarities in LLM-Generated Text | 
	We introduce an abstraction based on linguistics theories in Questions Under Discussion (QUD) and question semantics to quantify repetitive discourse structures found in texts generated by large language models. | 
	As large language models become increasingly capable at various tasks including writing, the need to generate unique and creative content arises. Although LLMs have the ability to generate text covering diverse topics, there is an overall sense of repetitiveness across texts that we aim to formalize. Such familiarity between documents is induced through the persistence of underlying discourse structures. However, existing similarity metrics dependent on lexical overlap and syntactic patterns are overly sensitive to volatility in content overlap, thus making them unsuitable for detecting $\textit{structural}$ similarities. We introduce an abstraction based on linguistics theories in Questions Under Discussion (QUD) and question semantics to help quantify differences in discourse progression. We then use this framework to build $\textbf{QUDsim}$, a similarity metric that can detect discursive parallels between documents. Using QUDsim, we find that LLMs often reuse discourse structures (more so than humans) to create seemingly new documents by simply swapping content. Furthermore, LLMs are not only repetitive and structurally uniform, but are also divergent from human authors in the types of structures they use. | 
	[
  "Ramya Namuduri",
  "Yating Wu",
  "Anshun Asher Zheng",
  "Manya Wadhwa",
  "Greg Durrett",
  "Junyi Jessy Li"
] | 
	https://openreview.net/forum?id=zFz1BJu211 | 
	zFz1BJu211 | 
	zFz1BJu211 | 
	[
  "~Ramya_Namuduri1",
  "~Yating_Wu1",
  "~Anshun_Asher_Zheng1",
  "~Manya_Wadhwa1",
  "~Greg_Durrett1",
  "~Junyi_Jessy_Li2"
] | 
	{
  "value": "COLM 2025"
} | 
	{
  "value": "colmweb.org/COLM/2025/Conference"
} | 
	{
  "value": "/pdf/5974197cbc65ce889f5cb6d2e8a8bf2a2394d65c.pdf"
} | 
	conference | 
	colmweb.org/COLM/2025/Conference | 2025 | 
	COLM | 
	[
  "discourse diversity",
  "discourse structure",
  "large language models",
  "Questions Under Discussion"
] | 
	I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html | 
	I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html | null | null | 
	@inproceedings{
namuduri2025qudsim,
title={{QUD}sim: Quantifying Discourse Similarities in {LLM}-Generated Text},
author={Ramya Namuduri and Yating Wu and Anshun Asher Zheng and Manya Wadhwa and Greg Durrett and Junyi Jessy Li},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=zFz1BJu211}
} | 
	namuduri|qudsim_quantifying_discourse_similarities_in_llmgenerated_text | null | null | null | null | null | |
| 
	Probing then Editing Response Personality of Large Language Models | 
	This paper introduces a layer-wise probing framework revealing how LLMs encode personality traits within parameters and further proposes a progressive perturbation method that edits personality during inference using the probing classifier. | 
	Large Language Models (LLMs) have demonstrated promising capabilities to generate responses that simulate consistent personality traits. 
Despite major attempts to analyze personality expression through output-based evaluations, little is known about how such traits are internally encoded within LLM parameters. In this paper, we introduce a layer-wise probing framework to systematically investigate the layer-wise capability of LLMs to simulate personality when responding. We conduct probing experiments on 11 open-source LLMs over the PersonalityEdit benchmark and find that LLMs predominantly simulate personality in their middle and upper layers when responding, with instruction-tuned models demonstrating a slightly clearer separation of personality traits. Furthermore, by interpreting the trained probing hyperplane as a layer-wise boundary for each personality category, we propose a layer-wise perturbation method to edit the personality expressed by LLMs during inference. Our results show that even when the prompt explicitly specifies a particular personality, our method can still successfully alter the response personality of LLMs. Interestingly, the difficulty of converting between certain personality traits varies substantially, which aligns with the representational distances in our probing experiments. Finally, we conduct a comprehensive MMLU benchmark evaluation and time overhead analysis, demonstrating that our proposed personality editing method incurs only minimal degradation in general capabilities while maintaining low training costs and acceptable inference latency. Our code is publicly available at https://github.com/universe-sky/probing-then-editing-personality. | 
	[
  "Tianjie Ju",
  "Zhenyu Shao",
  "Bowen Wang",
  "Yujia Chen",
  "Zhuosheng Zhang",
  "Hao Fei",
  "Mong-Li Lee",
  "Wynne Hsu",
  "Sufeng Duan",
  "Gongshen Liu"
] | 
	https://openreview.net/forum?id=z9SbcYYP0M | 
	z9SbcYYP0M | 
	z9SbcYYP0M | 
	[
  "~Tianjie_Ju1",
  "~Zhenyu_Shao2",
  "~Bowen_Wang10",
  "~Yujia_Chen5",
  "~Zhuosheng_Zhang1",
  "~Hao_Fei1",
  "~Mong-Li_Lee1",
  "~Wynne_Hsu1",
  "~Sufeng_Duan1",
  "~Gongshen_Liu2"
] | 
	{
  "value": "COLM 2025"
} | 
	{
  "value": "colmweb.org/COLM/2025/Conference"
} | 
	{
  "value": "/pdf/498cc4dc5e0ee0f10ac6252c267617bc18a5cc09.pdf"
} | 
	conference | 
	colmweb.org/COLM/2025/Conference | 2025 | 
	COLM | 
	[
  "large language model",
  "personality",
  "interpretability",
  "knowledge editing"
] | 
	I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html | 
	I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html | null | null | 
	@inproceedings{
ju2025probing,
title={Probing then Editing Response Personality of Large Language Models},
author={Tianjie Ju and Zhenyu Shao and Bowen Wang and Yujia Chen and Zhuosheng Zhang and Hao Fei and Mong-Li Lee and Wynne Hsu and Sufeng Duan and Gongshen Liu},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=z9SbcYYP0M}
} | 
	ju|probing_then_editing_response_personality_of_large_language_models | null | null | null | null | null | |
| 
	CodeXEmbed: A Generalist Embedding Model Family for Multilingual and Multi-task Code Retrieval | 
	We introduce CodeXEmbed, a large-scale code embedding model achieving SOTA on CoIR and strong BeIR performance, enhancing code retrieval and RAG. | 
	Despite the success of text retrieval in many NLP tasks, code retrieval remains a largely underexplored area. Most text retrieval systems are tailored for natural language queries, often neglecting the specific challenges of retrieving code. This gap leaves existing models unable to effectively capture the diversity of programming languages and tasks across different domains, highlighting the need for more focused research in code retrieval. To address this, we introduce CodeXEmbed, a family of large-scale code embedding models ranging from 400M to 7B parameters. Our novel training pipeline unifies multiple programming languages and transforms various code-related tasks into a common retrieval framework, enhancing model generalizability and retrieval performance. Our 7B model achieves a new state-of-the-art (SOTA) in code retrieval, topping the CoIR Leaderboard. In addition to excelling in code retrieval, our models demonstrate competitive performance on the widely adopted BeIR text retrieval benchmark, offering versatility across domains. Experimental results demonstrate that improving retrieval performance significantly enhances end-to-end Retrieval-Augmented Generation (RAG) performance for code-related tasks. | 
	[
  "Ye Liu",
  "Rui Meng",
  "Shafiq Joty",
  "silvio savarese",
  "Caiming Xiong",
  "Yingbo Zhou",
  "Semih Yavuz"
] | 
	https://openreview.net/forum?id=z3lG70Azbg | 
	z3lG70Azbg | 
	z3lG70Azbg | 
	[
  "~Ye_Liu4",
  "~Rui_Meng1",
  "~Shafiq_Joty1",
  "~silvio_savarese2",
  "~Caiming_Xiong1",
  "~Yingbo_Zhou1",
  "~Semih_Yavuz1"
] | 
	{
  "value": "COLM 2025"
} | 
	{
  "value": "colmweb.org/COLM/2025/Conference"
} | 
	{
  "value": "/pdf/556d386c8c3457c2f182c786da442e3b3cbd673a.pdf"
} | 
	conference | 
	colmweb.org/COLM/2025/Conference | 2025 | 
	COLM | 
	[
  "Code and Text Retrieval",
  "Code Embedding Model",
  "Text Embedding Model",
  "Retrieval-Augmented Code Generation"
] | 
	I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html | 
	I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html | null | null | 
	@inproceedings{
liu2025codexembed,
title={Code{XE}mbed: A Generalist Embedding Model Family for Multilingual and Multi-task Code Retrieval},
author={Ye Liu and Rui Meng and Shafiq Joty and silvio savarese and Caiming Xiong and Yingbo Zhou and Semih Yavuz},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=z3lG70Azbg}
} | 
	liu|codexembed_a_generalist_embedding_model_family_for_multilingual_and_multitask_code_retrieval | null | null | null | null | null | |
| 
	Retrieval-Augmented Generation with Conflicting Evidence | 
	We propose a benchmark and multi-agent framework for RAG systems to handle ambiguity, conflicting evidence, and misinformation in real-world retrieval scenarios. | 
	Large language model (LLM) agents are increasingly employing retrieval-augmented generation (RAG) to improve the factuality of their responses. However, in practice, these systems often need to handle ambiguous user queries and potentially conflicting information from multiple sources while also suppressing inaccurate information from noisy or irrelevant documents. Prior work has generally studied and addressed these challenges in isolation, considering only one aspect at a time, such as handling ambiguity or robustness to noise and misinformation. We instead consider multiple factors simultaneously, proposing (i) RAMDocs (Retrieval with Ambiguity and Misinformation in Documents), a new dataset that simulates complex and realistic scenarios for conflicting evidence for a user query, including ambiguity, misinformation, and noise; and (ii) MADAM-RAG, a multi-agent approach in which LLM agents debate over the merits of an answer over multiple rounds, allowing an aggregator to collate responses corresponding to disambiguated entities while discarding misinformation and noise, thereby handling diverse sources of conflict jointly. We demonstrate the effectiveness of MADAM-RAG using both closed and open-source models on AmbigDocs – which requires presenting all valid answers for ambiguous queries – improving over strong RAG baselines by up to 11.40%, and on FaithEval – which requires suppressing misinformation – where we improve by up to 15.80% (absolute) with Llama3.3-70B-Instruct. Furthermore, we find that our proposed RAMDocs dataset poses a challenge for existing RAG baselines (the most performant Llama3.3-70B-Instruct only yields up to a 32.60 exact match score), as it requires handling conflicting information due to ambiguity, noise, and misinformation simultaneously. While MADAM-RAG begins to address these conflicting factors, our analysis indicates that a substantial gap remains, especially when increasing the level of imbalance in supporting evidence and misinformation. | 
	[
  "Han Wang",
  "Archiki Prasad",
  "Elias Stengel-Eskin",
  "Mohit Bansal"
] | 
	https://openreview.net/forum?id=z1MHB2m3V9 | 
	z1MHB2m3V9 | 
	z1MHB2m3V9 | 
	[
  "~Han_Wang9",
  "~Archiki_Prasad1",
  "~Elias_Stengel-Eskin1",
  "~Mohit_Bansal2"
] | 
	{
  "value": "COLM 2025"
} | 
	{
  "value": "colmweb.org/COLM/2025/Conference"
} | 
	{
  "value": "/pdf/1452223fd8e091c35c8990c14f2d5e4979e749bc.pdf"
} | 
	conference | 
	colmweb.org/COLM/2025/Conference | 2025 | 
	COLM | 
	[
  "Retrieval-augmented Generation",
  "Knowledge Conflict",
  "Multi-agent"
] | 
	I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html | 
	I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html | null | null | 
	@inproceedings{
wang2025retrievalaugmented,
title={Retrieval-Augmented Generation with Conflicting Evidence},
author={Han Wang and Archiki Prasad and Elias Stengel-Eskin and Mohit Bansal},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=z1MHB2m3V9}
} | 
	wang|retrievalaugmented_generation_with_conflicting_evidence | null | null | null | null | null | |
| 
	Déjà Vu: Multilingual LLM Evaluation through the Lens of Machine Translation Evaluation | 
	What can multilingual LLM evaluation learn from MT evaluation? | 
	Generation capabilities and language coverage of multilingual large language models (mLLMs) are advancing rapidly. However, evaluation practices for the generative abilities of mLLMs still lack comprehensiveness, scientific rigor, and consistent adoption across research labs, which undermines their potential to meaningfully guide mLLM development. We draw parallels with machine translation (MT) evaluation, a field that faced similar challenges and has, over decades, developed transparent reporting standards and reliable evaluations for multilingual generative models.
Through targeted experiments across key stages of the generative evaluation pipeline, we demonstrate how best practices from MT evaluation can deepen the understanding of quality differences between models. Additionally, we identify essential components for robust meta-evaluation of mLLMs, ensuring the evaluation methods themselves are rigorously assessed. We distill these insights into a checklist of actionable recommendations for mLLM research and development. | 
	[
  "Julia Kreutzer",
  "Eleftheria Briakou",
  "Sweta Agrawal",
  "Marzieh Fadaee",
  "Tom Kocmi"
] | 
	https://openreview.net/forum?id=yxzVanFoij | 
	yxzVanFoij | 
	yxzVanFoij | 
	[
  "~Julia_Kreutzer1",
  "~Eleftheria_Briakou1",
  "~Sweta_Agrawal1",
  "~Marzieh_Fadaee2",
  "~Tom_Kocmi1"
] | 
	{
  "value": "COLM 2025"
} | 
	{
  "value": "colmweb.org/COLM/2025/Conference"
} | 
	{
  "value": "/pdf/d390b099c4e0bc139be3a1a975837803ed0bf6db.pdf"
} | 
	conference | 
	colmweb.org/COLM/2025/Conference | 2025 | 
	COLM | 
	[
  "multilingual",
  "evaluation",
  "meta-evaluation",
  "machine translation evaluation"
] | 
	I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html | 
	I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html | null | null | 
	@inproceedings{
kreutzer2025dj,
title={D\'ej\`a Vu: Multilingual {LLM} Evaluation through the Lens of Machine Translation Evaluation},
author={Julia Kreutzer and Eleftheria Briakou and Sweta Agrawal and Marzieh Fadaee and Tom Kocmi},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=yxzVanFoij}
} | 
	kreutzer|déjà_vu_multilingual_llm_evaluation_through_the_lens_of_machine_translation_evaluation | null | null | null | null | null | |
| 
	CONCAP: Seeing Beyond English with Concepts Retrieval-Augmented Captioning | 
	Image captioning with concept and captions retrieval augmented generation. | 
	Multilingual vision-language models have made significant strides in image captioning, yet they still lag behind their English counterparts due to limited multilingual training data and costly large-scale model parameterization. Retrieval-augmented generation (RAG) offers a promising alternative by conditioning caption generation on retrieved examples in the target language, reducing the need for extensive multilingual training. 
However, multilingual RAG captioning models often depend on retrieved captions translated from English, which can introduce mismatches and linguistic biases relative to the source language. We introduce CONCAP, a multilingual image captioning model that integrates retrieved captions with image-specific concepts, enhancing the contextualization of the input image and grounding the captioning process across different languages. 
Experiments on the XM3600 dataset indicate that CONCAP enables strong performance on low- and mid-resource languages, with highly reduced data requirements. Our findings highlight the effectiveness of concept-aware retrieval augmentation in bridging multilingual performance gaps. | 
	[
  "George Ibrahim",
  "Rita Ramos",
  "Yova Kementchedjhieva"
] | 
	https://openreview.net/forum?id=yfnaK1pZxu | 
	yfnaK1pZxu | 
	yfnaK1pZxu | 
	[
  "~George_Ibrahim1",
  "~Rita_Ramos1",
  "~Yova_Kementchedjhieva1"
] | 
	{
  "value": "COLM 2025"
} | 
	{
  "value": "colmweb.org/COLM/2025/Conference"
} | 
	{
  "value": "/pdf/b261ae04f4cd4225e18480084dc5543014d43819.pdf"
} | 
	conference | 
	colmweb.org/COLM/2025/Conference | 2025 | 
	COLM | 
	[
  "Image Captioning",
  "Concepts",
  "Retrieval",
  "RAG",
  "Multilingual"
] | 
	I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html | 
	I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html | null | null | 
	@inproceedings{
ibrahim2025concap,
title={{CONCAP}: Seeing Beyond English with Concepts Retrieval-Augmented Captioning},
author={George Ibrahim and Rita Ramos and Yova Kementchedjhieva},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=yfnaK1pZxu}
} | 
	ibrahim|concap_seeing_beyond_english_with_concepts_retrievalaugmented_captioning | null | null | null | null | null | |
| 
	Prompt-Reverse Inconsistency: LLM Self-Inconsistency Beyond Generative Randomness and Prompt Paraphrasing | 
	This paper introduces Prompt-Reverse Inconsistency (PRIN), where Large Language Models give conflicting answers when identifying correct versus incorrect responses, raising concerns about their logical reliability. | 
	While the inconsistency of LLMs is not a novel topic, prior research has predominantly addressed two types of generative inconsistencies: i) Randomness Inconsistency: running the same LLM over multiple trials yields varying responses; ii) Paraphrase Inconsistency: paraphrased prompts result in different responses from the same LLM. Randomness Inconsistency arises from the inherent randomness due to stochastic sampling in generative models, while Paraphrase Inconsistency is a consequence of the language modeling objectives, where paraphrased prompts alter the distribution of vocabulary logits. This research discovers Prompt-Reverse Inconsistency (PRIN), a new form of LLM self-inconsistency: given a question and a couple of LLM-generated answer candidates, the LLM often gives conflicting responses when prompted "Which are correct answers?" and "Which are incorrect answers?". PRIN poses a serious concern, as it undermines the credibility of LLM-as-a-judge and suggests that LLMs struggle to adhere to basic logical rules. We conduct a series of experiments to investigate PRIN, examining the extent of PRIN across different LLMs, methods to mitigate it, potential applications, and its relationship with Randomness Inconsistency and Paraphrase Inconsistency. As the first study to explore PRIN, our findings offer valuable insights into the inner workings of LLMs and contribute to advancing trustworthy AI. | 
	[
  "Jihyun Janice Ahn",
  "Wenpeng Yin"
] | 
	https://openreview.net/forum?id=yfRkNRFLzl | 
	yfRkNRFLzl | 
	yfRkNRFLzl | 
	[
  "~Jihyun_Janice_Ahn1",
  "~Wenpeng_Yin1"
] | 
	{
  "value": "COLM 2025"
} | 
	{
  "value": "colmweb.org/COLM/2025/Conference"
} | 
	{
  "value": "/pdf/44b6a89c82f5042a567ea17d3070285af52ead04.pdf"
} | 
	conference | 
	colmweb.org/COLM/2025/Conference | 2,025 | 
	COLM | 
	[
  "Large Language Model",
  "Natural Language Process",
  "Inconsistency of LLMs",
  "Prompt-Reverse Inconsistency",
  "Randomness Inconsistency",
  "Paraphrase Inconsistency"
] | 
	I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html | 
	I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html | null | null | 
	@inproceedings{
ahn2025promptreverse,
title={Prompt-Reverse Inconsistency: {LLM} Self-Inconsistency Beyond Generative Randomness and Prompt Paraphrasing},
author={Jihyun Janice Ahn and Wenpeng Yin},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=yfRkNRFLzl}
} | 
	ahn|promptreverse_inconsistency_llm_selfinconsistency_beyond_generative_randomness_and_prompt_paraphrasing | null | null | null | null | null | |
| 
	Learning to Generate Unit Tests for Automated Debugging | 
	LLM training pipeline for generating unit tests for code debugging and assessing code correctness | 
	Unit tests (UTs) play an instrumental role in assessing code correctness as well as providing feedback to large language models (LLMs), motivating automated test generation. However, we uncover a trade-off between generating unit test inputs that reveal errors when given faulty code and correctly predicting the unit test output without access to the gold solution. To address this trade-off, we propose UTGen, which teaches LLMs to generate unit test inputs that reveal errors along with their correct expected outputs based on task descriptions. Since model-generated tests can provide noisy signals (e.g., from incorrectly predicted outputs), we propose UTDebug that (i) scales UTGen via test-time compute to improve UT output prediction, and (ii) validates and backtracks edits based on multiple generated UTs to avoid overfitting, and helps LLMs debug effectively. We show that UTGen outperforms other LLM-based baselines by 7.59% based on a metric measuring the presence of both error-revealing UT inputs and correct UT outputs. When used with UTDebug, we find that feedback from UTGen's unit tests improves pass@1 accuracy of Qwen2.5 32B on HumanEvalFix and our own harder debugging split of MBPP+ by over 3.17% and 12.35%, respectively, over other LLM-based UT generation baselines. Moreover, we observe that feedback from the Qwen2.5 32B-based UTGen model can enhance debugging with frontier LLMs like GPT-4o by 13.8%. Lastly, we demonstrate that UTGen is a better judge for code correctness, outperforming a state-of-the-art trained 8B reward model by 4.43% on HumanEval+ with best-of-10 sampling using Qwen2.5 7B. | 
	[
  "Archiki Prasad",
  "Elias Stengel-Eskin",
  "Justin Chen",
  "Zaid Khan",
  "Mohit Bansal"
] | 
	https://openreview.net/forum?id=yeVBHPLXxi | 
	yeVBHPLXxi | 
	yeVBHPLXxi | 
	[
  "~Archiki_Prasad1",
  "~Elias_Stengel-Eskin1",
  "~Justin_Chen1",
  "~Zaid_Khan1",
  "~Mohit_Bansal2"
] | 
	{
  "value": "COLM 2025"
} | 
	{
  "value": "colmweb.org/COLM/2025/Conference"
} | 
	{
  "value": "/pdf/d0e3dc4e72f75d7f7f18fe8c0ab78512b18f88a0.pdf"
} | 
	conference | 
	colmweb.org/COLM/2025/Conference | 2025 | 
	COLM | 
	[
  "Unit Tests Generation",
  "LLMs for code generation",
  "LLMs for code debugging"
] | 
	I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html | 
	I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html | null | null | 
	@inproceedings{
prasad2025learning,
title={Learning to Generate Unit Tests for Automated Debugging},
author={Archiki Prasad and Elias Stengel-Eskin and Justin Chen and Zaid Khan and Mohit Bansal},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=yeVBHPLXxi}
} | 
	prasad|learning_to_generate_unit_tests_for_automated_debugging | 
	/attachment/f3e3abd874e6c1ad0a461bbcbb6c9cf7fe922262.zip | null | null | null | null | |
| 
	VideoSAVi: Self-Aligned Video Language Models without Human Supervision | 
	VideoSAVi introduces a self-aligning approach that enables video-language models to generate high-quality preference pairs from their own outputs, achieving state-of-the-art performance without external supervision. | 
	Recent advances in video-large language models (Video-LLMs) have led to significant progress in video understanding. Current preference optimization methods often rely on proprietary APIs or ground-truth captions to generate preference data (i.e., pairs of model outputs ranked based on their quality or alignment with human judgment), which is then used to train models for video-language alignment. This approach is both costly and labor-intensive. To address this limitation, we introduce $\textbf{VideoSAVi}$ ($\underline{\textbf{S}}$elf-$\underline{\textbf{A}}$ligned $\underline{\textbf{Vi}}$deo Language Model), a self-training pipeline that enables Video-LLMs to reason over video content without external supervision. Our approach includes a self-critiquing mechanism that identifies reasoning errors in the model's initial responses and generates improved alternatives, creating preference pairs directly from video content. VideoSAVi then applies Direct Preference Optimization (DPO) to iteratively train the model using the preference data, thus enhancing its temporal and spatial reasoning for video understanding. Experiments show that VideoSAVi delivers significant improvements across multiple benchmarks, including a +4.2 percentage point gain on MVBench, +3.9 on PerceptionTest, and +6.8 on the challenging EgoSchema dataset compared to baseline models. Our model-agnostic approach is computationally efficient, requiring only 32 frames, offering a promising direction for self-aligned video understanding without reliance on external models or annotations. | 
	[
  "Yogesh Kulkarni",
  "Pooyan Fazli"
] | 
	https://openreview.net/forum?id=ybcZEWaM7U | 
	ybcZEWaM7U | 
	ybcZEWaM7U | 
	[
  "~Yogesh_Kulkarni1",
  "~Pooyan_Fazli1"
] | 
	{
  "value": "COLM 2025"
} | 
	{
  "value": "colmweb.org/COLM/2025/Conference"
} | 
	{
  "value": "/pdf/5846d0a6f494f556df2f25ba346da586faf3d28b.pdf"
} | 
	conference | 
	colmweb.org/COLM/2025/Conference | 2025 | 
	COLM | 
	[
  "Video understanding",
  "Self-alignment",
  "Video-language models",
  "Direct preference optimization",
  "Self-critiquing"
] | 
	I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html | 
	I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html | null | null | 
	@inproceedings{
kulkarni2025videosavi,
title={Video{SAV}i: Self-Aligned Video Language Models without Human Supervision},
author={Yogesh Kulkarni and Pooyan Fazli},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=ybcZEWaM7U}
} | 
	kulkarni|videosavi_selfaligned_video_language_models_without_human_supervision | null | null | null | null | null | |
| 
	Streaming DiLoCo with overlapping communication | 
	Distributed training where only a subset of the outer gradients is communicated | 
	Training of large language models (LLMs) is typically distributed across a large number of accelerators to reduce training time. Since internal states and parameter gradients need to be exchanged at each and every single gradient step, all devices need to be co-located using low-latency high-bandwidth communication links to support the required high volume of data exchange. Recently, algorithms like DiLoCo have relaxed the constraint that all devices need co-location: accelerators can be grouped into ``workers'', where synchronizations between workers need only occur infrequently. This in turn means that workers can afford to be connected by lower bandwidth communication links without affecting learning quality. However, in these methods, communication across workers still requires the same peak bandwidth as before, as the synchronizations require all parameters to be exchanged across all workers. In this paper, we improve DiLoCo in three ways. First, we synchronize only subsets of parameters in sequence, rather than all at once, which greatly reduces peak bandwidth. Second, we allow workers to continue training while synchronizing, which decreases wall clock time. Third, we quantize the data exchanged by workers, which further reduces bandwidth across workers. We show experimentally that by properly combining these modifications we can distribute training of billion-scale parameters and attain models of similar quality as before, while reducing required bandwidth by a factor of up to two orders of magnitude. | 
	[
  "Arthur Douillard",
  "Yani Donchev",
  "J Keith Rush",
  "Satyen Kale",
  "Zachary Charles",
  "Gabriel Teston",
  "Zachary Garrett",
  "Jiajun Shen",
  "Ross McIlroy",
  "David Lacey",
  "Alexandre Rame",
  "Arthur Szlam",
  "MarcAurelio Ranzato",
  "Paul R Barham"
] | 
	https://openreview.net/forum?id=yYk3zK0X6Q | 
	yYk3zK0X6Q | 
	yYk3zK0X6Q | 
	[
  "~Arthur_Douillard1",
  "~Yani_Donchev1",
  "~J_Keith_Rush1",
  "~Satyen_Kale2",
  "~Zachary_Charles1",
  "~Gabriel_Teston1",
  "~Zachary_Garrett1",
  "~Jiajun_Shen1",
  "~Ross_McIlroy1",
  "~David_Lacey1",
  "~Alexandre_Rame1",
  "~Arthur_Szlam3",
  "~MarcAurelio_Ranzato1",
  "~Paul_R_Barham2"
] | 
	{
  "value": "COLM 2025"
} | 
	{
  "value": "colmweb.org/COLM/2025/Conference"
} | 
	{
  "value": "/pdf/cfdba2f22f8184c00ce2bee496c5685b22b6922e.pdf"
} | 
	conference | 
	colmweb.org/COLM/2025/Conference | 2,025 | 
	COLM | 
	[
  "distributed training",
  "large-scale",
  "llm"
] | 
	I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html | 
	I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html | null | null | 
	@inproceedings{
douillard2025streaming,
title={Streaming DiLoCo with overlapping communication},
author={Arthur Douillard and Yani Donchev and J Keith Rush and Satyen Kale and Zachary Charles and Gabriel Teston and Zachary Garrett and Jiajun Shen and Ross McIlroy and David Lacey and Alexandre Rame and Arthur Szlam and MarcAurelio Ranzato and Paul R Barham},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=yYk3zK0X6Q}
} | 
	douillard|streaming_diloco_with_overlapping_communication | null | null | null | null | null | |
| 
	Multilingual and  Multi-Accent Jailbreaking of Audio LLMs | 
	We propose Multi-AudioJail --- a novel audio jailbreak attack that exploits multilingual and multi-accent audio inputs enhanced with audio adversarial perturbations. | 
	Large Audio Language Models (LALMs) have significantly advanced audio understanding but introduce critical security risks, particularly through audio jailbreaks. While prior work has focused on English-centric attacks, we expose a far more severe vulnerability: adversarial multilingual and multi-accent audio jailbreaks, where linguistic and acoustic variations dramatically amplify attack success. In this paper, we introduce Multi-AudioJail, the first systematic framework to exploit these vulnerabilities through (1) a novel dataset of adversarially perturbed multilingual/multi-accent audio jailbreaking prompts, and (2) a hierarchical evaluation pipeline revealing how acoustic perturbations (e.g., reverberation, echo, and whisper effects) interact with cross-lingual phonetics to cause jailbreak success rates (JSRs) to surge by up to +57.25 percentage points (e.g., reverberated Kenyan-accented attack on MERaLiON). Crucially, our work further reveals that multimodal LLMs are inherently more vulnerable than unimodal systems: attackers need only exploit the weakest link (e.g., non-English audio inputs) to compromise the entire model, which we empirically show by multilingual audio-only attacks achieving 3.1x higher success rates than text-only attacks. We plan to release our dataset to spur research into cross-modal defenses, urging the community to address this expanding attack surface in multimodality as LALMs evolve. | 
	[
  "Jaechul Roh",
  "Virat Shejwalkar",
  "Amir Houmansadr"
] | 
	https://openreview.net/forum?id=yGa8CYT8kS | 
	yGa8CYT8kS | 
	yGa8CYT8kS | 
	[
  "~Jaechul_Roh1",
  "~Virat_Shejwalkar1",
  "~Amir_Houmansadr1"
] | 
	{
  "value": "COLM 2025"
} | 
	{
  "value": "colmweb.org/COLM/2025/Conference"
} | 
	{
  "value": "/pdf/6147a6961d7ae656a86a7d1238824560951dada6.pdf"
} | 
	conference | 
	colmweb.org/COLM/2025/Conference | 2,025 | 
	COLM | 
	[
  "Audio",
  "LLM",
  "Jailbreak",
  "Multilingual",
  "Security"
] | 
	I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html | 
	I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html | true | 
	The audio jailbreaking might be offensive to some audiences. | 
	@inproceedings{
roh2025multilingual,
title={Multilingual and  Multi-Accent Jailbreaking of Audio {LLM}s},
author={Jaechul Roh and Virat Shejwalkar and Amir Houmansadr},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=yGa8CYT8kS}
} | 
	roh|multilingual_and_multiaccent_jailbreaking_of_audio_llms | null | null | null | null | null | |
| 
	Hypothesis-Driven Theory-of-Mind Reasoning for Large Language Models | 
	We introduce a novel inference-time algorithm, ThoughtTracing, which uses LLMs to probabilistically trace and weight hypotheses about agents’ evolving mental states without relying on questions and ground-truth answers in benchmarks. | 
	Existing LLM reasoning methods have shown impressive capabilities across various tasks, such as solving math and coding problems. However, applying these methods to scenarios without ground-truth answers or rule-based verification methods - such as tracking the mental states of an agent - remains challenging. Inspired by the sequential Monte Carlo algorithm, we introduce ThoughtTracing, an inference-time reasoning algorithm designed to trace the mental states of specific agents by generating hypotheses and weighting them based on observations without relying on ground-truth solutions to questions in datasets. Our algorithm is modeled after the Bayesian theory-of-mind framework, using LLMs to approximate probabilistic inference over agents' evolving mental states based on their perceptions and actions. We evaluate ThoughtTracing on diverse theory-of-mind benchmarks, demonstrating significant performance improvements compared to baseline LLMs. Our experiments also reveal interesting behaviors of the recent reasoning models - e.g., o3 and R1 - on theory-of-mind, highlighting how social reasoning differs from other domains. | 
	[
  "Hyunwoo Kim",
  "Melanie Sclar",
  "Tan Zhi-Xuan",
  "Lance Ying",
  "Sydney Levine",
  "Yang Liu",
  "Joshua B. Tenenbaum",
  "Yejin Choi"
] | 
	https://openreview.net/forum?id=yGQqTuSJPK | 
	yGQqTuSJPK | 
	yGQqTuSJPK | 
	[
  "~Hyunwoo_Kim3",
  "~Melanie_Sclar1",
  "~Tan_Zhi-Xuan1",
  "~Lance_Ying1",
  "~Sydney_Levine1",
  "~Yang_Liu60",
  "~Joshua_B._Tenenbaum1",
  "~Yejin_Choi1"
] | 
	{
  "value": "COLM 2025"
} | 
	{
  "value": "colmweb.org/COLM/2025/Conference"
} | 
	{
  "value": "/pdf/5d8ab1d8f4c22c924ea0d6ee9ae0a18ef00a958b.pdf"
} | 
	conference | 
	colmweb.org/COLM/2025/Conference | 2,025 | 
	COLM | 
	[
  "theory of mind",
  "reasoning",
  "large language model",
  "inference-time algorithm"
] | 
	I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html | 
	I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html | null | null | 
	@inproceedings{
kim2025hypothesisdriven,
title={Hypothesis-Driven Theory-of-Mind Reasoning for Large Language Models},
author={Hyunwoo Kim and Melanie Sclar and Tan Zhi-Xuan and Lance Ying and Sydney Levine and Yang Liu and Joshua B. Tenenbaum and Yejin Choi},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=yGQqTuSJPK}
} | 
	kim|hypothesisdriven_theoryofmind_reasoning_for_large_language_models | null | null | null | null | null | |
| 
	IterKey: Iterative Keyword Generation with LLMs for Enhanced Retrieval Augmented Generation | 
	We introduce IterKey, an LLM-based iterative keyword generation method that optimizes the Retrieval-Augmented Generation process, improving accuracy by refining keywords and self-evaluating responses. | 
	Retrieval Augmented Generation (RAG) has emerged as a way to complement the in-context knowledge of Large Language Models (LLMs) by integrating external documents.
However, real-world applications demand not only accuracy but also interpretability. 
Dense retrieval methods provide high accuracy but lack interpretability, while sparse retrieval is transparent but often misses query intent due to keyword matching.
Thus, balancing accuracy and interpretability remains a challenge.
To address these issues, we introduce IterKey, an LLM-driven iterative keyword generation framework that enhances RAG via sparse retrieval. 
IterKey consists of three LLM-driven stages: generating keywords for retrieval, generating answers based on retrieved documents, and validating the answers. If validation fails, the process iteratively repeats with refined keywords.
Across four QA tasks, experimental results show that IterKey achieves 5% to 20% accuracy improvements over BM25-based RAG and simple baselines. Its performance is comparable to dense retrieval-based RAG and prior iterative query refinement methods using dense models.
In summary, IterKey is a novel BM25-based iterative RAG framework that leverages LLMs to balance accuracy and interpretability. | 
	[
  "Kazuki Hayashi",
  "Hidetaka Kamigaito",
  "Shinya Kouda",
  "Taro Watanabe"
] | 
	https://openreview.net/forum?id=y56BuSo8Uj | 
	y56BuSo8Uj | 
	y56BuSo8Uj | 
	[
  "~Kazuki_Hayashi1",
  "~Hidetaka_Kamigaito2",
  "~Shinya_Kouda1",
  "~Taro_Watanabe1"
] | 
	{
  "value": "COLM 2025"
} | 
	{
  "value": "colmweb.org/COLM/2025/Conference"
} | 
	{
  "value": "/pdf/89b81fc6b363dd0e2529ebd9dbc0474cdee121a7.pdf"
} | 
	conference | 
	colmweb.org/COLM/2025/Conference | 2,025 | 
	COLM | 
	[
  "retrieval-augmented generation",
  "RAG",
  "sparse retrieval",
  "LLM",
  "Iterative"
] | 
	I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html | 
	I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html | null | null | 
	@inproceedings{
hayashi2025iterkey,
title={IterKey: Iterative Keyword Generation with {LLM}s for Enhanced Retrieval Augmented Generation},
author={Kazuki Hayashi and Hidetaka Kamigaito and Shinya Kouda and Taro Watanabe},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=y56BuSo8Uj}
} | 
	hayashi|iterkey_iterative_keyword_generation_with_llms_for_enhanced_retrieval_augmented_generation | null | null | null | null | null | |
| 
	Can LLM "Self-report"?: Evaluating the Validity of Self-report Scales in Measuring Personality Design in LLM-based Chatbots | 
	Evaluating the Validity of Self-report Scales in Measuring Personality Design in LLM-based Chatbots | 
	A chatbot’s personality design is key to interaction quality. As chatbots evolved from rule-based systems to those powered by large language models (LLMs), evaluating the effectiveness of their personality design has become increasingly complex, particularly due to the open-ended nature of interactions. A recent and widely adopted method for assessing the personality design of LLM-based chatbots is the use of self-report questionnaires. These questionnaires, often borrowed from established human personality inventories, ask the chatbot to rate itself on various personality traits. Can LLM-based chatbots meaningfully "self-report" their personality? We created 500 chatbots with distinct personality designs and evaluated the validity of their self-report personality scores by examining human perceptions formed during interactions with these chatbots. Our findings indicate that the chatbot's answers on human personality scales exhibit weak correlations with both human-perceived personality traits and the overall interaction quality. These findings raise concerns about both the criterion validity and the predictive validity of self-report methods in this context. Further analysis revealed the role of task context and interaction in the chatbot's personality design assessment. We further discuss design implications for creating more contextualized and interactive evaluation. | 
	[
  "Huiqi Zou",
  "Pengda Wang",
  "Zihan Yan",
  "Tianjun Sun",
  "Ziang Xiao"
] | 
	https://openreview.net/forum?id=xqIwK9mNkj | 
	xqIwK9mNkj | 
	xqIwK9mNkj | 
	[
  "~Huiqi_Zou1",
  "~Pengda_Wang1",
  "~Zihan_Yan1",
  "~Tianjun_Sun1",
  "~Ziang_Xiao1"
] | 
	{
  "value": "COLM 2025"
} | 
	{
  "value": "colmweb.org/COLM/2025/Conference"
} | 
	{
  "value": "/pdf/388bb244be1dd675a49c4295cc3cd296fe6906a1.pdf"
} | 
	conference | 
	colmweb.org/COLM/2025/Conference | 2,025 | 
	COLM | 
	[
  "human factors in NLP; evaluation methodologies"
] | 
	I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html | 
	I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html | null | null | 
	@inproceedings{
zou2025can,
title={Can {LLM} ''Self-report''?: Evaluating the Validity of Self-report Scales in Measuring Personality Design in {LLM}-based Chatbots},
author={Huiqi Zou and Pengda Wang and Zihan Yan and Tianjun Sun and Ziang Xiao},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=xqIwK9mNkj}
} | 
	zou|can_llm_selfreport_evaluating_the_validity_of_selfreport_scales_in_measuring_personality_design_in_llmbased_chatbots | null | null | null | null | null | |
| 
	Always Tell Me The Odds: Fine-grained Conditional Probability Estimation | 
	We present a state-of-the-art model for fine-grained probability estimation of textual outcomes conditioned on context. | 
	We present a state-of-the-art model for fine-grained probability estimation of propositions conditioned on context. Recent advances in large language models (LLMs) have significantly enhanced their reasoning capabilities, particularly on well-defined tasks with complete information. However, LLMs continue to struggle with making accurate and well-calibrated \emph{probabilistic} predictions under uncertainty or partial information. While incorporating uncertainty into model predictions often boosts performance, obtaining reliable estimates of that uncertainty remains understudied. In particular, LLM probability estimates tend to be coarse and biased towards more frequent numbers. Through a combination of human and synthetic data creation and assessment, scaling to larger models, and better supervision, we propose a set of strong and precise probability estimation models. We conduct systematic evaluations across tasks that rely on conditional probability estimation and show that our approach consistently outperforms existing fine-tuned and prompting-based methods by a large margin. | 
	[
  "Liaoyaqi Wang",
  "Zhengping Jiang",
  "Anqi Liu",
  "Benjamin Van Durme"
] | 
	https://openreview.net/forum?id=xhDcG8qtw9 | 
	xhDcG8qtw9 | 
	xhDcG8qtw9 | 
	[
  "~Liaoyaqi_Wang1",
  "~Zhengping_Jiang1",
  "~Anqi_Liu2",
  "~Benjamin_Van_Durme2"
] | 
	{
  "value": "COLM 2025"
} | 
	{
  "value": "colmweb.org/COLM/2025/Conference"
} | 
	{
  "value": "/pdf/c53d10b668e237deedcef8b115cc9a506df677e2.pdf"
} | 
	conference | 
	colmweb.org/COLM/2025/Conference | 2,025 | 
	COLM | 
	[
  "Large Language Model",
  "Probabilistic Reasoning",
  "Semantics",
  "Calibration"
] | 
	I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html | 
	I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html | null | null | 
	@inproceedings{
wang2025always,
title={Always Tell Me The Odds: Fine-grained Conditional Probability Estimation},
author={Liaoyaqi Wang and Zhengping Jiang and Anqi Liu and Benjamin Van Durme},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=xhDcG8qtw9}
} | 
	wang|always_tell_me_the_odds_finegrained_conditional_probability_estimation | null | null | null | null | null | |
| 
	CALLME: Call Graph Augmentation with Large Language Models for Javascript | 
	Handling edge cases in call graph construction for Javascript that cannot be handled with static analysis using large language models. | 
	Building precise call graphs for Javascript programs is a fundamental building block for many important software engineering and security applications such as bug detection, program repair, and refactoring. However, resolving dynamic calls using static analysis is challenging because it requires enumerating all possible values of both the object and the field. As a result, static call graph construction algorithms for Javascript ignore such dynamic calls, resulting in missed edges and a high false negative rate. We present a new approach, CALLME, that combines Language Models (LMs) with a custom static analyzer to address this challenge. Our key insight is in using LMs to incorporate additional modalities such as variable names, natural language documentation, and calling contexts, which are often sufficient to resolve dynamic property calls, but are difficult to incorporate in traditional static analysis. We implement our approach in CALLME and evaluate it on a dataset of call edges that are dependent on dynamic property accesses. CALLME achieves 80% accuracy and .79 F1, outperforming the state-of-the-art static analyzer by 30% and .60, respectively. To study the effectiveness of CALLME on downstream analysis tasks, we evaluate it on our manually curated dataset with 25 known Javascript vulnerabilities. CALLME can detect 24 vulnerabilities with only 3 false positives, whereas static analysis tools based on current call graph construction algorithms miss all of them. | 
	[
  "Michael Wang",
  "Kexin Pei",
  "Armando Solar-Lezama"
] | 
	https://openreview.net/forum?id=xZi2rMUcAO | 
	xZi2rMUcAO | 
	xZi2rMUcAO | 
	[
  "~Michael_Wang1",
  "~Kexin_Pei1",
  "~Armando_Solar-Lezama1"
] | 
	{
  "value": "COLM 2025"
} | 
	{
  "value": "colmweb.org/COLM/2025/Conference"
} | 
	{
  "value": "/pdf/41f82f781ab3f27465ee5b45ef2e3218c13a0f63.pdf"
} | 
	conference | 
	colmweb.org/COLM/2025/Conference | 2,025 | 
	COLM | 
	[
  "Javascript",
  "program analysis"
] | 
	I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html | 
	I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html | null | null | 
	@inproceedings{
wang2025callme,
title={{CALLME}: Call Graph Augmentation with Large Language Models for Javascript},
author={Michael Wang and Kexin Pei and Armando Solar-Lezama},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=xZi2rMUcAO}
} | 
	wang|callme_call_graph_augmentation_with_large_language_models_for_javascript | null | null | null | null | null | |
| 
	Adaptive Computation Pruning for the Forgetting Transformer | 
	We propose a method that adaptively prunes computations in the Forgetting Transformer based on forget gate values. | 
	The recently proposed Forgetting Transformer (FoX) incorporates a forget gate into softmax attention and has shown consistently better or on-par performance compared to the standard RoPE-based Transformer. Notably, many attention heads in FoX tend to forget quickly, causing their output at each timestep to rely primarily on local context. Based on this observation, we propose Adaptive Computation Pruning (ACP) for FoX, a method that dynamically prunes computations involving input-output dependencies that are strongly decayed by the forget gate. In particular, our method performs *provably safe* pruning via a dynamically set pruning threshold that guarantees the pruned attention weights are negligible. 
We apply ACP to language model pretraining with FoX and show it consistently reduces the number of FLOPs and memory accesses in softmax attention by around 70\% across different model sizes and context lengths, resulting in a roughly 50\% to 70\% reduction in attention runtime (or a 2--3$\times$ speedup) and a roughly 10\% to 40\% increase in end-to-end training throughput. Furthermore, longer context lengths yield greater computational savings. All these speed improvements are achieved without any performance degradation. Our code is available at https://github.com/zhixuan-lin/forgetting-transformer. | 
	[
  "Zhixuan Lin",
  "Johan Obando-Ceron",
  "Xu Owen He",
  "Aaron Courville"
] | 
	https://openreview.net/forum?id=xNj14CY5S1 | 
	xNj14CY5S1 | 
	xNj14CY5S1 | 
	[
  "~Zhixuan_Lin1",
  "~Johan_Obando-Ceron1",
  "~Xu_Owen_He1",
  "~Aaron_Courville3"
] | 
	{
  "value": "COLM 2025"
} | 
	{
  "value": "colmweb.org/COLM/2025/Conference"
} | 
	{
  "value": "/pdf/dabdb7eb3af1dbd15e1d91d6bb30f2a59bd77334.pdf"
} | 
	conference | 
	colmweb.org/COLM/2025/Conference | 2,025 | 
	COLM | 
	[
  "transformer",
  "forgetting transformer",
  "efficient transformer",
  "sequence modeling",
  "adaptive computation pruning",
  "forget gate",
  "sparse attention",
  "FlashAttention",
  "hardware-aware optimization"
] | 
	I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html | 
	I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html | null | null | 
	@inproceedings{
lin2025adaptive,
title={Adaptive Computation Pruning for the Forgetting Transformer},
author={Zhixuan Lin and Johan Obando-Ceron and Xu Owen He and Aaron Courville},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=xNj14CY5S1}
} | 
	lin|adaptive_computation_pruning_for_the_forgetting_transformer | null | null | null | null | null | |
| 
	Energy-Based Reward Models for Robust Language Model Alignment | 
	We introduce Energy-Based Reward Model (EBRM), a post-hoc method to refine reward models using EBMs. | 
	Reward models (RMs) are essential for aligning Large Language Models (LLMs) with human preferences. However, they often struggle with capturing complex human preferences and generalizing to unseen data. To address these challenges, we introduce \emph{Energy-Based Reward Model} (EBRM), a lightweight post-hoc refinement framework that enhances RM robustness and generalization.
EBRM models the reward distribution explicitly, capturing uncertainty in human preferences and mitigating the impact of noisy or misaligned annotations. It achieves this through conflict-aware data filtering, label-noise-aware contrastive training, and hybrid initialization. Notably, EBRM enhances RMs without retraining, making it computationally efficient and adaptable across different models and tasks.
Empirical evaluations on RM benchmarks demonstrate significant improvements in both robustness and generalization, achieving up to a 5.97\% improvement in safety-critical alignment tasks compared to standard RMs. Furthermore, reinforcement learning experiments confirm that our refined rewards enhance alignment quality, effectively delaying reward hacking. These results demonstrate our approach as a scalable and effective enhancement for existing RMs and alignment pipelines. | 
	[
  "Anamika Lochab",
  "Ruqi Zhang"
] | 
	https://openreview.net/forum?id=x6evCULIOQ | 
	x6evCULIOQ | 
	x6evCULIOQ | 
	[
  "~Anamika_Lochab1",
  "~Ruqi_Zhang1"
] | 
	{
  "value": "COLM 2025"
} | 
	{
  "value": "colmweb.org/COLM/2025/Conference"
} | 
	{
  "value": "/pdf/0cbcf905a030463b01e16ef7de935a5d0ef38517.pdf"
} | 
	conference | 
	colmweb.org/COLM/2025/Conference | 2,025 | 
	COLM | 
	[
  "Reward Models",
  "Alignment",
  "Energy Based Models"
] | 
	I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html | 
	I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html | null | null | 
	@inproceedings{
lochab2025energybased,
title={Energy-Based Reward Models for Robust Language Model Alignment},
author={Anamika Lochab and Ruqi Zhang},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=x6evCULIOQ}
} | 
	lochab|energybased_reward_models_for_robust_language_model_alignment | 
	/attachment/d94995fd2c118eb604ec15b3b03867d739a2ce0c.zip | null | null | null | null | |
| 
	Guided Reasoning in LLM-Driven Penetration Testing Using Structured Attack Trees | 
	We propose a reasoning pipeline for penetration testing LLM agents using a structured task tree based on proven cybersecurity kill chains. Our method achieves 74.4% attack subtask completion (vs. 35.2% by the SOTA) and requires 55.9% fewer queries. | 
	Recent advances in large language models (LLMs) have driven interest in automating cybersecurity penetration testing workflows, offering the promise of faster and more consistent vulnerability assessment for enterprise systems. Existing LLM agents for penetration testing primarily rely on self‐guided reasoning, which can produce inaccurate or hallucinated procedural steps. As a result, the LLM agent may undertake unproductive actions, such as exploiting unused software libraries or generating cyclical responses that repeat prior tactics. In this work, we propose a reasoning pipeline for penetration testing LLM agents that incorporates a deterministic task tree built from the MITRE ATT\&CK Matrix, a proven penetration testing kill chain, to constrain the LLM's reasoning process to explicitly defined tactics, techniques, and procedures. This anchors reasoning in proven penetration testing methodologies and filters out ineffective actions by guiding the agent towards more productive attack procedures. To evaluate our approach, we built an automated penetration testing LLM agent using three LLMs (Llama-3-8B, Gemini-1.5, and GPT-4) and applied it to navigate 10 HackTheBox cybersecurity exercises with 103 discrete subtasks representing real-world cyberattack scenarios. Our proposed reasoning pipeline guided the LLM agent through 71.8\%, 72.8\%, and 78.6\% of subtasks using Llama-3-8B, Gemini-1.5, and GPT-4, respectively. Comparatively, the state-of-the-art LLM penetration testing tool using self-guided reasoning completed only 13.5\%, 16.5\%, and 75.7\% of subtasks and required 86.2\%, 118.7\%, and 205.9\% more model queries. This suggests that incorporating a deterministic task tree into LLM reasoning pipelines can enhance the accuracy and efficiency of automated cybersecurity assessments. | 
	[
  "Katsuaki Nakano",
  "Reza Fayyazi",
  "Shanchieh Yang",
  "Michael Zuzak"
] | 
	https://openreview.net/forum?id=x4sdXZ7Jdu | 
	x4sdXZ7Jdu | 
	x4sdXZ7Jdu | 
	[
  "~Katsuaki_Nakano1",
  "~Reza_Fayyazi1",
  "~Shanchieh_Yang1",
  "~Michael_Zuzak1"
] | 
	{
  "value": "COLM 2025"
} | 
	{
  "value": "colmweb.org/COLM/2025/Conference"
} | 
	{
  "value": "/pdf/972da2fcf7e33064c89c2aed8f19ab736d656ab1.pdf"
} | 
	conference | 
	colmweb.org/COLM/2025/Conference | 2,025 | 
	COLM | 
	[
  "Penetration Testing",
  "Large Language Models",
  "Autonomous Penetration Testing Agents"
] | 
	I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html | 
	I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html | null | null | 
	@inproceedings{
nakano2025guided,
title={Guided Reasoning in {LLM}-Driven Penetration Testing Using Structured Attack Trees},
author={Katsuaki Nakano and Reza Fayyazi and Shanchieh Yang and Michael Zuzak},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=x4sdXZ7Jdu}
} | 
	nakano|guided_reasoning_in_llmdriven_penetration_testing_using_structured_attack_trees | null | null | null | null | null | |
| 
	Goedel-Prover: A Frontier Model for Open-Source Automated Theorem Proving | 
	Introduce Goedel-Prover, an open-source language model that achieves SOTA in automated theorem proving in Lean | 
	We introduce Goedel-Prover, an open-source language model that achieves state-of-the-art performance in automated formal proof generation for mathematical problems.  A key challenge in this field is the scarcity of formalized mathematical statements and proofs, which we address through the following approaches. First, we train statement formalizers to translate natural language math problems from Numina into the formal language Lean 4, and use an LLM to verify that the formal statements accurately preserve the content of the original problems. This results in a dataset of 1.64 million formal statements. We then iteratively build a large dataset of formal proofs by training a series of provers: each prover is able to prove many statements that the previous ones could not, and these new proofs are added to the training set for the next prover. Despite using only supervised fine-tuning, our final prover (fine-tuned on DeepSeek-Prover-V1.5-base) significantly outperforms the previous best open-source model, DeepSeek-Prover-V1.5, which uses reinforcement learning. 
On the MiniF2F benchmark, our model achieves a success rate of 57.6\% (Pass@32), surpassing DeepSeek-Prover-V1.5 by 7.6\%. 
On PutnamBench, Goedel-Prover successfully solves 7 problems (Pass@512), ranking first on the leaderboard. 
Furthermore, it generates 29.7K formal proofs for Lean Workbook problems, nearly doubling the 15.7K produced by prior work. 
We provide extensive discussion of our training methodology, highlighting the key design choices that contribute to Goedel-Prover’s strong performance. Finally, we explore reinforcement learning on top of Goedel-Prover-SFT, offering insights into its potential benefits and limitations. | 
	[
  "Yong Lin",
  "Shange Tang",
  "Bohan Lyu",
  "Jiayun Wu",
  "Hongzhou Lin",
  "Kaiyu Yang",
  "Jia LI",
  "Mengzhou Xia",
  "Danqi Chen",
  "Sanjeev Arora",
  "Chi Jin"
] | 
	https://openreview.net/forum?id=x2y9i2HDjD | 
	x2y9i2HDjD | 
	x2y9i2HDjD | 
	[
  "~Yong_Lin2",
  "~Shange_Tang1",
  "~Bohan_Lyu1",
  "~Jiayun_Wu1",
  "~Hongzhou_Lin1",
  "~Kaiyu_Yang1",
  "~Jia_LI18",
  "~Mengzhou_Xia1",
  "~Danqi_Chen1",
  "~Sanjeev_Arora1",
  "~Chi_Jin1"
] | 
	{
  "value": "COLM 2025"
} | 
	{
  "value": "colmweb.org/COLM/2025/Conference"
} | 
	{
  "value": "/pdf/047be6bba4d4ce1c4aae8dfe2c987deef335ef45.pdf"
} | 
	conference | 
	colmweb.org/COLM/2025/Conference | 2,025 | 
	COLM | 
	[
  "Formal reasoning",
  "verification",
  "Lean",
  "self-improvement"
] | 
	I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html | 
	I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html | null | null | 
	@inproceedings{
lin2025goedelprover,
title={Goedel-Prover: A Frontier Model for Open-Source Automated Theorem Proving},
author={Yong Lin and Shange Tang and Bohan Lyu and Jiayun Wu and Hongzhou Lin and Kaiyu Yang and Jia LI and Mengzhou Xia and Danqi Chen and Sanjeev Arora and Chi Jin},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=x2y9i2HDjD}
} | 
	lin|goedelprover_a_frontier_model_for_opensource_automated_theorem_proving | null | null | null | null | null | |
| 
	EnrichIndex: Using LLMs to Enrich Retrieval Indices Offline | 
	EnrichIndex enriches documents offline using LLMs, improving retrieval performance on complex retrieval tasks with significantly lower latency and online cost. | 
	Existing information retrieval systems excel in cases where the language of target documents closely matches that of the user query. However, real-world retrieval systems are often required to *implicitly reason* whether a document is relevant. For example, when retrieving technical texts or tables, their relevance to the user query may be implied through a particular jargon or structure, rather than explicitly expressed in their content. Large language models (LLMs) hold great potential in identifying such implied relevance by leveraging their reasoning skills. Nevertheless, current LLM-augmented retrieval is hindered by high latency and computation cost, as the LLM typically computes the query-document relevance *online*, for every query anew. To tackle this issue, we introduce EnrichIndex, a retrieval approach which instead uses the LLM *offline* to build semantically-enriched retrieval indices, by performing a single pass over all documents in the retrieval corpus during ingestion time.
Furthermore, the semantically-enriched indices can complement existing online retrieval approaches, boosting the performance of LLM re-rankers.
We evaluated EnrichIndex on five retrieval tasks, involving passages and tables, and found that it outperforms strong online LLM-based retrieval systems, with an average improvement of 11.7 points in recall @ 10 and 10.6 points in NDCG @ 10 compared to strong baselines. In terms of online calls to the LLM, it processes 293.3 times fewer tokens which greatly reduces the online latency and cost.
Overall, EnrichIndex is an effective way to build better retrieval indices offline by leveraging the strong reasoning skills of LLMs. | 
	[
  "Peter Baile Chen",
  "Tomer Wolfson",
  "Mike Cafarella",
  "Dan Roth"
] | 
	https://openreview.net/forum?id=wyYL5Jov6e | 
	wyYL5Jov6e | 
	wyYL5Jov6e | 
	[
  "~Peter_Baile_Chen1",
  "~Tomer_Wolfson1",
  "~Mike_Cafarella1",
  "~Dan_Roth3"
] | 
	{
  "value": "COLM 2025"
} | 
	{
  "value": "colmweb.org/COLM/2025/Conference"
} | 
	{
  "value": "/pdf/7e22d77624c4e6b84c79cf58406e46c07740df64.pdf"
} | 
	conference | 
	colmweb.org/COLM/2025/Conference | 2,025 | 
	COLM | 
	[
  "retrieval",
  "offline enrichment",
  "implicit reasoning"
] | 
	I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html | 
	I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html | null | null | 
	@inproceedings{
chen2025enrichindex,
title={EnrichIndex: Using {LLM}s to Enrich Retrieval Indices Offline},
author={Peter Baile Chen and Tomer Wolfson and Mike Cafarella and Dan Roth},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=wyYL5Jov6e}
} | 
	chen|enrichindex_using_llms_to_enrich_retrieval_indices_offline | null | null | null | null | null | |
| 
	Elucidating the Design Space of Decay in Linear Attention | 
	Elucidating the Design Space of Decay in Linear Attention | 
	This paper presents a comprehensive investigation into the decay mechanisms inherent in linear complexity sequence models. We systematically delineate the design space of decay mechanisms across four pivotal dimensions: parameterization strategy, which refers to the computational methodology for decay; parameter sharing, which involves the utilization of supplementary parameters for decay computation; decay granularity, comparing scalar versus vector-based decay; and compatibility with relative positional encoding methods, such as Rotary Position Embedding (RoPE).
Through an extensive series of experiments conducted on diverse language modeling tasks, we uncovered several critical insights. Firstly, the design of the parameterization strategy for decay requires meticulous consideration. Our findings indicate that effective configurations are typically confined to a specific range of parameters. Secondly, parameter sharing cannot be used arbitrarily, as it may cause decay values to be too large or too small, thereby significantly impacting performance. Thirdly, under identical parameterization strategies, scalar decay generally underperforms compared to its vector-based counterpart. However, in certain scenarios with alternative parameterization strategies, scalar decay may unexpectedly surpass vector decay in efficacy. Lastly, our analysis reveals that RoPE, a commonly employed relative positional encoding method, typically fails to provide tangible benefits to the majority of linear attention mechanisms. | 
	[
  "Zhen Qin",
  "Xuyang Shen",
  "Yiran Zhong"
] | 
	https://openreview.net/forum?id=whXh2YxMbt | 
	whXh2YxMbt | 
	whXh2YxMbt | 
	[
  "~Zhen_Qin6",
  "~Xuyang_Shen1",
  "~Yiran_Zhong1"
] | 
	{
  "value": "COLM 2025"
} | 
	{
  "value": "colmweb.org/COLM/2025/Conference"
} | 
	{
  "value": "/pdf/815a0f2594e2d32ff855a8a9d677e3855205a1c4.pdf"
} | 
	conference | 
	colmweb.org/COLM/2025/Conference | 2,025 | 
	COLM | 
	[
  "Linear Attention"
] | 
	I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html | 
	I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html | null | null | 
	@inproceedings{
qin2025elucidating,
title={Elucidating the Design Space of Decay in Linear Attention},
author={Zhen Qin and Xuyang Shen and Yiran Zhong},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=whXh2YxMbt}
} | 
	qin|elucidating_the_design_space_of_decay_in_linear_attention | null | null | null | null | null | |
| 
	PolyGuard: A Multilingual Safety Moderation Tool for 17 Languages | 
	We introduce PolyGuard, a new state-of-the-art multilingual safety model for safeguarding LLM generations along with PolyGuardMix for safety detection training and PolyGuardPrompts for safety guardrail evaluation. | 
	Truly multilingual safety moderation efforts for Large Language Models (LLMs) have been hindered by a narrow focus on a small set of languages (e.g., English, Chinese) as well as a limited scope of safety definition, resulting in significant gaps in moderation capabilities. To bridge these gaps, we release POLYGUARD, a new state-of-the-art multilingual safety model for safeguarding LLM generations, and the corresponding training and evaluation datasets. POLYGUARD is trained on POLYGUARDMIX, the largest multilingual safety training corpus to date containing 1.91M samples across 17 languages (e.g., Chinese, Czech, English, Hindi). We also introduce POLYGUARDPROMPTS, a high quality multilingual benchmark with 29K samples for the evaluation of safety guardrails. Created by combining naturally occurring multilingual human-LLM interactions and human-verified machine translations of an English-only safety dataset (WildGuardMix; Han et al., 2024), our datasets contain prompt-output pairs with labels of prompt harmfulness, response harmfulness, and response refusal. Through extensive evaluations across multiple safety and toxicity benchmarks, we demonstrate that POLYGUARD outperforms existing state-of-the-art open-weight and commercial safety classifiers by 5.5%. Our contributions advance efforts toward safer multilingual LLMs for all global users. | 
	[
  "Priyanshu Kumar",
  "Devansh Jain",
  "Akhila Yerukola",
  "Liwei Jiang",
  "Himanshu Beniwal",
  "Thomas Hartvigsen",
  "Maarten Sap"
] | 
	https://openreview.net/forum?id=wbAWKXNeQ4 | 
	wbAWKXNeQ4 | 
	wbAWKXNeQ4 | 
	[
  "~Priyanshu_Kumar1",
  "~Devansh_Jain1",
  "~Akhila_Yerukola1",
  "~Liwei_Jiang2",
  "~Himanshu_Beniwal1",
  "~Thomas_Hartvigsen1",
  "~Maarten_Sap1"
] | 
	{
  "value": "COLM 2025"
} | 
	{
  "value": "colmweb.org/COLM/2025/Conference"
} | 
	{
  "value": "/pdf/083b5e10f049e428ff125b8665aea78cf0284e75.pdf"
} | 
	conference | 
	colmweb.org/COLM/2025/Conference | 2,025 | 
	COLM | 
	[
  "ai safety",
  "hate-speech detection"
] | 
	I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html | 
	I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html | null | null | 
	@inproceedings{
kumar2025polyguard,
title={PolyGuard: A Multilingual Safety Moderation Tool for 17 Languages},
author={Priyanshu Kumar and Devansh Jain and Akhila Yerukola and Liwei Jiang and Himanshu Beniwal and Thomas Hartvigsen and Maarten Sap},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=wbAWKXNeQ4}
} | 
	kumar|polyguard_a_multilingual_safety_moderation_tool_for_17_languages | null | null | null | null | null | |
| 
	More is Less: The Pitfalls of Multi-Model Synthetic Preference Data in DPO Safety Alignment | 
	LLMs learn about safety better from their own outputs than from others. | 
	Aligning large language models (LLMs) with human values is an increasingly critical step in post-training. Direct Preference Optimization (DPO) has emerged as a simple, yet effective alternative to reinforcement learning from human feedback (RLHF). Synthetic preference data, with its low cost and high quality, enables effective alignment through single- or multi-model generated preference data. Our study reveals a striking, safety-specific phenomenon associated with DPO alignment: Although multi-model generated data enhances performance on general tasks (ARC, Hellaswag, MMLU, TruthfulQA, Winogrande) by providing diverse responses, it also tends to facilitate reward hacking during training. This can lead to a high attack success rate (ASR) when models encounter jailbreaking prompts. The issue is particularly pronounced when employing stronger models like GPT-4o or larger models in the same family to generate chosen responses paired with target model self-generated rejected responses, resulting in dramatically poorer safety outcomes. Furthermore, with respect to safety, using solely self-generated responses (single-model generation) for both chosen and rejected pairs significantly outperforms configurations that incorporate responses from stronger models, whether used directly as chosen data or as part of a multi-model response pool. We demonstrate that multi-model preference data exhibits high linear separability between chosen and rejected responses, which allows models to exploit superficial cues rather than internalizing robust safety constraints. Our experiments, conducted on models from the Llama, Mistral, and Qwen families, consistently validate these findings. The code is available at \href{https://github.com/cacayaya/More-is-Less}{github.com/cacayaya/More-is-Less}. | 
	[
  "Yifan Wang",
  "Runjin Chen",
  "Bolian Li",
  "David Cho",
  "Yihe Deng",
  "Ruqi Zhang",
  "Tianlong Chen",
  "Zhangyang Wang",
  "Ananth Grama",
  "Junyuan Hong"
] | 
	https://openreview.net/forum?id=wXOUYzNv5k | 
	wXOUYzNv5k | 
	wXOUYzNv5k | 
	[
  "~Yifan_Wang14",
  "~Runjin_Chen1",
  "~Bolian_Li1",
  "~David_Cho2",
  "~Yihe_Deng1",
  "~Ruqi_Zhang1",
  "~Tianlong_Chen1",
  "~Zhangyang_Wang1",
  "~Ananth_Grama1",
  "~Junyuan_Hong1"
] | 
	{
  "value": "COLM 2025"
} | 
	{
  "value": "colmweb.org/COLM/2025/Conference"
} | 
	{
  "value": "/pdf/094c4e24c1d72adc8f0ceeded062c9fdf6235d7b.pdf"
} | 
	conference | 
	colmweb.org/COLM/2025/Conference | 2,025 | 
	COLM | 
	[
  "Alignment",
  "Synthetic Data",
  "Safety",
  "Large Language Models"
] | 
	I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html | 
	I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html | null | null | 
	@inproceedings{
wang2025more,
title={More is Less: The Pitfalls of Multi-Model Synthetic Preference Data in {DPO} Safety Alignment},
author={Yifan Wang and Runjin Chen and Bolian Li and David Cho and Yihe Deng and Ruqi Zhang and Tianlong Chen and Zhangyang Wang and Ananth Grama and Junyuan Hong},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=wXOUYzNv5k}
} | 
	wang|more_is_less_the_pitfalls_of_multimodel_synthetic_preference_data_in_dpo_safety_alignment | null | null | null | null | null | |
| 
	Sherkala-Chat: Building a State-of-the-Art LLM for Kazakh in a Moderately Resourced Setting | 
	Sherkala-Chat (8B) is a state-of-the-art, instruction-tuned open LLM for Kazakh, excelling in Kazakh language tasks while remaining competitive in English. | 
	Llama-3.1-Sherkala-8B-Chat, or Sherkala-Chat (8B) for short, is a state-of-the-art instruction-tuned open generative large language model (LLM) designed for Kazakh. Sherkala-Chat (8B) aims to enhance the inclusivity of LLM advancements for Kazakh speakers. Adapted from the LLaMA-3.1-8B model, Sherkala-Chat (8B) is trained on 45.3B tokens across Kazakh, English, Russian, and Turkish. With 8 billion parameters, it demonstrates strong knowledge and reasoning abilities in Kazakh, significantly outperforming existing open Kazakh and multilingual models of similar scale while achieving competitive performance in English.  To ensure effective and responsible alignment, we leverage translated instruction datasets, a Kazakhstan-specific instruction dataset that is automatically constructed and manually verified, and Kazakh-specific safety data. We release Sherkala-Chat (8B) as an open-weight model, along with a detailed description of its training, alignment, and evaluation, to support research and real-world applications for Kazakh speakers. | 
	[
  "Fajri Koto",
  "Rituraj Joshi",
  "Nurdaulet Mukhituly",
  "Yuxia Wang",
  "Zhuohan Xie",
  "Rahul Pal",
  "Daniil Orel",
  "Parvez Mullah",
  "Diana Turmakhan",
  "Maiya Goloburda",
  "Mohammed Kamran",
  "Samujjwal Ghosh",
  "Bokang Jia",
  "Jonibek Mansurov",
  "Mukhammed Togmanov",
  "Debopriyo Banerjee",
  "Nurkhan Laiyk",
  "Akhmed Sakip",
  "Xudong Han",
  "Ekaterina Kochmar",
  "Alham Fikri Aji",
  "Aaryamonvikram Singh",
  "Alok Anil Jadhav",
  "Satheesh Katipomu",
  "Samta Kamboj",
  "Monojit Choudhury",
  "Gurpreet Gosal",
  "Gokulakrishnan Ramakrishnan",
  "Biswajit Mishra",
  "Sarath Chandran",
  "Avraham Sheinin",
  "Natalia Vassilieva",
  "Neha Sengupta",
  "Preslav Nakov"
] | 
	https://openreview.net/forum?id=wRcTCcb0H5 | 
	wRcTCcb0H5 | 
	wRcTCcb0H5 | 
	[
  "~Fajri_Koto1",
  "~Rituraj_Joshi1",
  "~Nurdaulet_Mukhituly1",
  "~Yuxia_Wang1",
  "~Zhuohan_Xie1",
  "~Rahul_Pal1",
  "~Daniil_Orel1",
  "~Parvez_Mullah1",
  "~Diana_Turmakhan1",
  "~Maiya_Goloburda1",
  "~Mohammed_Kamran1",
  "~Samujjwal_Ghosh1",
  "~Bokang_Jia1",
  "~Jonibek_Mansurov1",
  "~Mukhammed_Togmanov1",
  "~Debopriyo_Banerjee1",
  "~Nurkhan_Laiyk1",
  "~Akhmed_Sakip2",
  "~Xudong_Han2",
  "~Ekaterina_Kochmar2",
  "~Alham_Fikri_Aji1",
  "~Aaryamonvikram_Singh1",
  "~Alok_Anil_Jadhav1",
  "~Satheesh_Katipomu1",
  "~Samta_Kamboj1",
  "~Monojit_Choudhury1",
  "~Gurpreet_Gosal2",
  "~Gokulakrishnan_Ramakrishnan1",
  "~Biswajit_Mishra1",
  "~Sarath_Chandran1",
  "~Avraham_Sheinin1",
  "~Natalia_Vassilieva1",
  "~Neha_Sengupta1",
  "~Preslav_Nakov2"
] | 
	{
  "value": "COLM 2025"
} | 
	{
  "value": "colmweb.org/COLM/2025/Conference"
} | 
	{
  "value": "/pdf/81303233cce33f5509d80aa0ef16ce80cdba0fb8.pdf"
} | 
	conference | 
	colmweb.org/COLM/2025/Conference | 2,025 | 
	COLM | 
	[
  "LLM",
  "LLaMA-3.1",
  "Kazakh",
  "low-resource language modeling",
  "fine-tuning",
  "safety alignment",
  "model evaluation",
  "generative AI"
] | 
	I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html | 
	I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html | null | null | 
	@inproceedings{
koto2025sherkalachat,
title={Sherkala-Chat: Building a State-of-the-Art {LLM} for Kazakh in a Moderately Resourced Setting},
author={Fajri Koto and Rituraj Joshi and Nurdaulet Mukhituly and Yuxia Wang and Zhuohan Xie and Rahul Pal and Daniil Orel and Parvez Mullah and Diana Turmakhan and Maiya Goloburda and Mohammed Kamran and Samujjwal Ghosh and Bokang Jia and Jonibek Mansurov and Mukhammed Togmanov and Debopriyo Banerjee and Nurkhan Laiyk and Akhmed Sakip and Xudong Han and Ekaterina Kochmar and Alham Fikri Aji and Aaryamonvikram Singh and Alok Anil Jadhav and Satheesh Katipomu and Samta Kamboj and Monojit Choudhury and Gurpreet Gosal and Gokulakrishnan Ramakrishnan and Biswajit Mishra and Sarath Chandran and Avraham Sheinin and Natalia Vassilieva and Neha Sengupta and Preslav Nakov},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=wRcTCcb0H5}
} | 
	koto|sherkalachat_building_a_stateoftheart_llm_for_kazakh_in_a_moderately_resourced_setting | null | null | null | null | null | |
| 
	Quantifying Fairness in LLMs Beyond Tokens: A Semantic and Statistical Perspective | 
	We introduce a framework for assessing and analyzing bias in long text outputs at the group level. | 
	Large Language Models (LLMs) often generate responses with inherent biases, undermining their reliability in real-world applications. Existing evaluation methods often overlook biases in long-form responses and the intrinsic variability of LLM outputs. To address these challenges, we propose FiSCo (Fine-grained Semantic Comparison), a novel statistical framework to evaluate group-level fairness in LLMs by detecting subtle semantic differences in long-form responses across demographic groups. Unlike prior work focusing on sentiment or token-level comparisons, FiSCo goes beyond surface-level analysis by operating at the claim level, leveraging entailment checks to assess the consistency of meaning across responses. We decompose model outputs into semantically distinct claims and apply statistical hypothesis testing to compare inter- and intra-group similarities, enabling robust detection of subtle biases. We formalize a new group counterfactual fairness definition and validate FiSCo on both synthetic and human-annotated datasets spanning gender, race, and age. Experiments show that FiSCo more reliably identifies nuanced biases while reducing the impact of stochastic LLM variability, outperforming various evaluation metrics. | 
	[
  "Weijie Xu",
  "Yiwen Wang",
  "Chi Xue",
  "Xiangkun Hu",
  "Xi Fang",
  "Guimin Dong",
  "Chandan K. Reddy"
] | 
	https://openreview.net/forum?id=wKVtjs0w4a | 
	wKVtjs0w4a | 
	wKVtjs0w4a | 
	[
  "~Weijie_Xu1",
  "~Yiwen_Wang4",
  "~Chi_Xue1",
  "~Xiangkun_Hu1",
  "~Xi_Fang3",
  "~Guimin_Dong1",
  "~Chandan_K._Reddy1"
] | 
	{
  "value": "COLM 2025"
} | 
	{
  "value": "colmweb.org/COLM/2025/Conference"
} | 
	{
  "value": "/pdf/371c9407949422ef1df4a9f9b27d0ad0aa911a2b.pdf"
} | 
	conference | 
	colmweb.org/COLM/2025/Conference | 2,025 | 
	COLM | 
	[
  "Fairness",
  "Bias",
  "Evaluation"
] | 
	I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html | 
	I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html | null | null | 
	@inproceedings{
xu2025quantifying,
title={Quantifying Fairness in {LLM}s Beyond Tokens: A Semantic and Statistical Perspective},
author={Weijie Xu and Yiwen Wang and Chi Xue and Xiangkun Hu and Xi Fang and Guimin Dong and Chandan K. Reddy},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=wKVtjs0w4a}
} | 
	xu|quantifying_fairness_in_llms_beyond_tokens_a_semantic_and_statistical_perspective | null | null | null | null | null | |
| 
	How Post-Training Reshapes LLMs: A Mechanistic View on Knowledge, Truthfulness, Refusal, and Confidence | 
	We compare base (pretrained) and instruct (post-trained) LLMs mechanistically from four perspectives and provide insight into what is preserved and altered. | 
	Post-training is essential for the success of large language models (LLMs), transforming pre-trained base models into more useful and aligned post-trained models. While plenty of works have studied post-training algorithms and evaluated post-training models by their outputs, it remains understudied how post-training reshapes LLMs internally. In this paper, we compare base and post-trained LLMs mechanistically from four perspectives to better understand post-training effects. Our findings across model families and datasets reveal that: (1) Post-training does not change the factual knowledge storage locations, and it adapts knowledge representations from the base model while developing new knowledge representations; (2) Both truthfulness and refusal can be represented by vectors in the hidden representation space. The truthfulness direction is highly similar between the base and post-trained model, and it is effectively transferable for interventions; (3) The refusal direction is different between the base and post-trained models, and it shows limited forward transferability; (4) Differences in confidence between the base and post-trained models cannot be attributed to entropy neurons. Our study provides insights into the fundamental mechanisms preserved and altered during post-training, facilitates downstream tasks like model steering, and could potentially benefit future research in interpretability and LLM post-training. Our code is publicly available at https://github.com/HZD01/post-training-mechanistic-analysis. | 
	[
  "Hongzhe Du",
  "Weikai Li",
  "Min Cai",
  "Karim Saraipour",
  "Zimin Zhang",
  "Himabindu Lakkaraju",
  "Yizhou Sun",
  "Shichang Zhang"
] | 
	https://openreview.net/forum?id=w5DSwn9wTC | 
	w5DSwn9wTC | 
	w5DSwn9wTC | 
	[
  "~Hongzhe_Du1",
  "~Weikai_Li2",
  "~Min_Cai2",
  "~Karim_Saraipour1",
  "~Zimin_Zhang1",
  "~Himabindu_Lakkaraju1",
  "~Yizhou_Sun1",
  "~Shichang_Zhang2"
] | 
	{
  "value": "COLM 2025"
} | 
	{
  "value": "colmweb.org/COLM/2025/Conference"
} | 
	{
  "value": "/pdf/ed7d9a75939697cf5c03dbd93501fd32e855ede6.pdf"
} | 
	conference | 
	colmweb.org/COLM/2025/Conference | 2,025 | 
	COLM | 
	[
  "Mechanistic Interpretability",
  "Instruction-tuning",
  "Post-training",
  "Alignment"
] | 
	I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html | 
	I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html | null | null | 
	@inproceedings{
du2025how,
title={How Post-Training Reshapes {LLM}s: A Mechanistic View on Knowledge, Truthfulness, Refusal, and Confidence},
author={Hongzhe Du and Weikai Li and Min Cai and Karim Saraipour and Zimin Zhang and Himabindu Lakkaraju and Yizhou Sun and Shichang Zhang},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=w5DSwn9wTC}
} | 
	du|how_posttraining_reshapes_llms_a_mechanistic_view_on_knowledge_truthfulness_refusal_and_confidence | null | null | null | null | null | |
| 
	The Zero Body Problem: Probing LLM Use of Sensory Language | 
	Popular large language models fail to replicate human use of sensory language, an important feature of storytelling. | 
	Sensory language expresses embodied experiences ranging from taste and sound to excitement and stomachache. It is of interest to scholars from a wide range of domains including robotics, narratology, linguistics, and cognitive science. In this work, we explore whether language models, which are not embodied, can approximate human use of embodied language. To do this, we extend an existing corpus of parallel human and model responses to short story prompts with an additional 18,000 stories generated by 18 popular language models. We find that all models generate stories that differ significantly from human usage of sensory language. However, the direction of these differences varies considerably between model families; Gemini models use significantly more sensory language than humans along most axes whereas most models from the remaining five families use significantly less. Linear probes run on five models suggest that they are capable of \textit{identifying} sensory language, meaning an inability to recognize sensory content is unlikely to be the cause of the observed differences. Instead, we find preliminary evidence indicating that instruction tuning may discourage usage of sensory language in some models. To support further work, we release \href{https://github.com/srhm-ca/sensorylanguage}{our expanded story dataset.} | 
	[
  "Rebecca M. M. Hicke",
  "Sil Hamilton",
  "David Mimno"
] | 
	https://openreview.net/forum?id=vv1ZyQF8LD | 
	vv1ZyQF8LD | 
	vv1ZyQF8LD | 
	[
  "~Rebecca_M._M._Hicke1",
  "~Sil_Hamilton1",
  "~David_Mimno1"
] | 
	{
  "value": "COLM 2025"
} | 
	{
  "value": "colmweb.org/COLM/2025/Conference"
} | 
	{
  "value": "/pdf/3c070894fc335f646c5bbdc34c6ec387206d6861.pdf"
} | 
	conference | 
	colmweb.org/COLM/2025/Conference | 2,025 | 
	COLM | 
	[
  "model evaluation",
  "model interpretability",
  "sensory language",
  "model creativity"
] | 
	I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html | 
	I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html | null | null | 
	@inproceedings{
hicke2025the,
title={The Zero Body Problem: Probing {LLM} Use of Sensory Language},
author={Rebecca M. M. Hicke and Sil Hamilton and David Mimno},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=vv1ZyQF8LD}
} | 
	hicke|the_zero_body_problem_probing_llm_use_of_sensory_language | null | null | null | null | null | |
| 
	Base Models Beat Aligned Models at Randomness and Creativity | 
	Alignment seems to hurt performance on a set of tasks that require randomness or creativity | 
	Alignment has quickly become a default ingredient in LLM development, with techniques such as reinforcement learning from human feedback making models act safely, follow instructions, and perform ever-better on complex tasks. While these techniques are certainly useful, we propose that they should not be universally applied and demonstrate a range of tasks on which base language models consistently outperform their popular aligned forms. Particularly, we study tasks that require unpredictable outputs, such as random number generation, mixed strategy games (rock-paper-scissors and hide-and-seek), and creative writing. In each case, aligned models tend towards narrow behaviors that result in distinct disadvantages, for instance, preferring to generate ``7'' over other uniformly random numbers, becoming almost fully predictable in some game states, or prioritizing pleasant writing over originality. Across models tested, better performance on common benchmarks tends to correlate with worse performance on our tasks, suggesting an effective trade-off in the required capabilities. | 
	[
  "Peter West",
  "Christopher Potts"
] | 
	https://openreview.net/forum?id=vqN8uom4A1 | 
	vqN8uom4A1 | 
	vqN8uom4A1 | 
	[
  "~Peter_West1",
  "~Christopher_Potts1"
] | 
	{
  "value": "COLM 2025"
} | 
	{
  "value": "colmweb.org/COLM/2025/Conference"
} | 
	{
  "value": "/pdf/fac784fe4c8e095a2a44374be0e7e9378dc7c012.pdf"
} | 
	conference | 
	colmweb.org/COLM/2025/Conference | 2,025 | 
	COLM | 
	[
  "alignment",
  "pretrained",
  "limitations",
  "limits",
  "capabilities",
  "randomness",
  "creativity"
] | 
	I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html | 
	I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html | null | null | 
	@inproceedings{
west2025base,
title={Base Models Beat Aligned Models at Randomness and Creativity},
author={Peter West and Christopher Potts},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=vqN8uom4A1}
} | 
	west|base_models_beat_aligned_models_at_randomness_and_creativity | null | null | null | null | null | |
| 
	Improving Table Understanding with LLMs and Entity-Oriented Search | 
	We introduce an entity-oriented search method to enhance table understanding in LLMs, reducing preprocessing and achieving state-of-the-art results. | 
	Our work addresses the challenges of understanding tables. Existing methods often struggle with the unpredictable nature of table content, leading to a reliance on preprocessing and keyword matching. They also face limitations due to the lack of contextual information, which complicates the reasoning processes of large language models (LLMs). To overcome these challenges, we introduce an entity-oriented search method to improve table understanding with LLMs. This approach effectively leverages the semantic similarities between questions and table data, as well as the implicit relationships between table cells, minimizing the need for data preprocessing and keyword matching. Additionally, it focuses on table entities, ensuring that table cells are semantically tightly bound, thereby enhancing contextual clarity. Furthermore, we pioneer the use of a graph query language for table understanding, establishing a new research direction. Experiments show that our approach achieves new state-of-the-art performances on standard benchmarks WikiTableQuestions and TabFact. | 
	[
  "Thi-Nhung Nguyen",
  "Hoang Ngo",
  "Dinh Phung",
  "Thuy-Trang Vu",
  "Dat Quoc Nguyen"
] | 
	https://openreview.net/forum?id=vlyl9xZVAL | 
	vlyl9xZVAL | 
	vlyl9xZVAL | 
	[
  "~Thi-Nhung_Nguyen1",
  "~Hoang_Ngo1",
  "~Dinh_Phung2",
  "~Thuy-Trang_Vu1",
  "~Dat_Quoc_Nguyen1"
] | 
	{
  "value": "COLM 2025"
} | 
	{
  "value": "colmweb.org/COLM/2025/Conference"
} | 
	{
  "value": "/pdf/68932df0b9be8315ff4155a7db786fcffd032e2b.pdf"
} | 
	conference | 
	colmweb.org/COLM/2025/Conference | 2,025 | 
	COLM | 
	[
  "table understanding",
  "llm"
] | 
	I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html | 
	I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html | null | null | 
	@inproceedings{
nguyen2025improving,
title={Improving Table Understanding with {LLM}s and Entity-Oriented Search},
author={Thi-Nhung Nguyen and Hoang Ngo and Dinh Phung and Thuy-Trang Vu and Dat Quoc Nguyen},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=vlyl9xZVAL}
} | 
	nguyen|improving_table_understanding_with_llms_and_entityoriented_search | null | null | null | null | null | |
| 
	Positional Biases Shift as Inputs Approach Context Window Limits | 
	This paper examines how input length, relative to a model’s context window, affects positional biases in LLMs. | 
	Large Language Models (LLMs) often struggle to use information across long inputs effectively. 
Prior work has identified positional biases, such as the Lost in the Middle (LiM) effect, where models perform better when information appears at the beginning (primacy bias) or end (recency bias) of the input, rather than in the middle. 
However, long-context studies have not consistently replicated these effects, raising questions about their intensity and the conditions under which they manifest.
To address this, we conducted a comprehensive analysis using relative rather than absolute input lengths, defined with respect to each model’s context window. 
Our findings reveal that the LiM effect is strongest when inputs occupy up to 50\% of a model’s context window.
Beyond that, the primacy bias weakens, while recency bias remains relatively stable.
This effectively eliminates the LiM effect; instead, we observe a distance-based bias, where model performance is better when relevant information is closer to the end of the input.
Furthermore, our results suggest that successful retrieval is a prerequisite for reasoning in LLMs, and that the observed positional biases in reasoning are largely inherited from retrieval. These insights have implications for long-context tasks, the design of future LLM benchmarks, and evaluation methodologies for LLMs handling extended inputs. | 
	[
  "Blerta Veseli",
  "Julian Chibane",
  "Mariya Toneva",
  "Alexander Koller"
] | 
	https://openreview.net/forum?id=vlUk8z8LaM | 
	vlUk8z8LaM | 
	vlUk8z8LaM | 
	[
  "~Blerta_Veseli1",
  "~Julian_Chibane1",
  "~Mariya_Toneva1",
  "~Alexander_Koller2"
] | 
	{
  "value": "COLM 2025"
} | 
	{
  "value": "colmweb.org/COLM/2025/Conference"
} | 
	{
  "value": "/pdf/1fa87a52bb87f4535aa4c24f858f215c6d329083.pdf"
} | 
	conference | 
	colmweb.org/COLM/2025/Conference | 2,025 | 
	COLM | 
	[
  "Long-context understanding",
  "positional biases"
] | 
	I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html | 
	I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html | null | null | 
	@inproceedings{
veseli2025positional,
title={Positional Biases Shift as Inputs Approach Context Window Limits},
author={Blerta Veseli and Julian Chibane and Mariya Toneva and Alexander Koller},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=vlUk8z8LaM}
} | 
	veseli|positional_biases_shift_as_inputs_approach_context_window_limits | null | null | null | null | null | |
| 
	Agree to Disagree? A Meta-Evaluation of LLM Misgendering | 
	We conduct a systematic meta-evaluation of different methods for measuring LLM misgendering across three datasets and find that they can disagree. | 
	Numerous methods have been proposed to measure LLM misgendering, including probability-based evaluations (e.g., automatically with templatic sentences) and generation-based evaluations (e.g., with automatic heuristics or human validation).
However, it has gone unexamined whether these evaluation methods have convergent validity, that is, whether their results align.
Therefore, we conduct a systematic meta-evaluation of these methods across three existing datasets for LLM misgendering.
We propose a method to transform each dataset to enable parallel probability- and generation-based evaluation.
Then, by automatically evaluating a suite of 6 models from 3 families, we find that these methods can disagree with each other at the instance, dataset, and model levels, conflicting on 20.2% of evaluation instances.
Finally, with a human evaluation of 2400 LLM generations, we show that misgendering behaviour is complex and goes far beyond pronouns, which automatic evaluations are not currently designed to capture, suggesting essential disagreement with human evaluations.
Based on our findings, we provide recommendations for future evaluations of LLM misgendering.
Our results are also more widely relevant, as they call into question broader methodological conventions in LLM evaluation, which often assume that different evaluation methods agree. | 
	[
  "Arjun Subramonian",
  "Vagrant Gautam",
  "Preethi Seshadri",
  "Dietrich Klakow",
  "Kai-Wei Chang",
  "Yizhou Sun"
] | 
	https://openreview.net/forum?id=vgmiRvpCLA | 
	vgmiRvpCLA | 
	vgmiRvpCLA | 
	[
  "~Arjun_Subramonian1",
  "~Vagrant_Gautam1",
  "~Preethi_Seshadri2",
  "~Dietrich_Klakow1",
  "~Kai-Wei_Chang1",
  "~Yizhou_Sun1"
] | 
	{
  "value": "COLM 2025"
} | 
	{
  "value": "colmweb.org/COLM/2025/Conference"
} | 
	{
  "value": "/pdf/26bde4c0897c899d2a27271ba979a953201f4f50.pdf"
} | 
	conference | 
	colmweb.org/COLM/2025/Conference | 2,025 | 
	COLM | 
	[
  "fairness",
  "meta-evaluation",
  "misgendering"
] | 
	I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html | 
	I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html | null | null | 
	@inproceedings{
subramonian2025agree,
title={Agree to Disagree? A Meta-Evaluation of {LLM} Misgendering},
author={Arjun Subramonian and Vagrant Gautam and Preethi Seshadri and Dietrich Klakow and Kai-Wei Chang and Yizhou Sun},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=vgmiRvpCLA}
} | 
	subramonian|agree_to_disagree_a_metaevaluation_of_llm_misgendering | null | null | null | null | null | |
| 
	Critique Fine-Tuning: Learning to Critique is More Effective than Learning to Imitate | 
	We introduce Critique Fine-Tuning, a training method that teaches LMs to critique responses, achieving better performance than SFT with fewer training samples and comparable results to RL methods. | 
	Supervised Fine-Tuning (SFT) is commonly used to train language models to imitate annotated responses for given instructions. In this paper, we propose Critique Fine-Tuning (CFT), a method more effective than SFT for reasoning tasks. Instead of simply imitating correct responses, CFT trains models to critique noisy responses, inspired by human learning processes that emphasize critical thinking, deeper analysis, and nuanced understanding--traits often overlooked by standard SFT. To validate the effectiveness of CFT, we construct multiple critique datasets (e.g., WebInstruct, MetaMath, NuminaMath), where GPT-4o serves as the teacher to generate critiques in the form of ([query; noisy response], critique). Experiments on these datasets demonstrate that CFT consistently outperforms SFT by 4--10% across six mathematical reasoning benchmarks, and is effective across different base models including Qwen2.5, Qwen2.5-Math, and DeepSeek-Math. Notably, our model Qwen2.5-Math-CFT only requires 1 hour of training on 8xH100 over the 50K examples, yet matches or outperforms strong competitors like Qwen2.5-Math-Instruct on most benchmarks, which use over 2M samples. Moreover, it matches the performance of SimpleRL, which is a DeepSeek-r1 replication trained with 140x more compute. Experiments on IF_Eval and MT-Bench further demonstrate that CFT can significantly enhance the model's general generation and instruction-following capabilities, outperforming the Qwen2.5-Math-Instruct by a large margin. Ablation studies show that CFT is robust to noisy response sources and teacher critique models. These findings highlight that CFT offers a more effective alternative to advance the reasoning of language models. | 
	[
  "Yubo Wang",
  "Xiang Yue",
  "Wenhu Chen"
] | 
	https://openreview.net/forum?id=vTAz44GgOA | 
	vTAz44GgOA | 
	vTAz44GgOA | 
	[
  "~Yubo_Wang9",
  "~Xiang_Yue1",
  "~Wenhu_Chen3"
] | 
	{
  "value": "COLM 2025"
} | 
	{
  "value": "colmweb.org/COLM/2025/Conference"
} | 
	{
  "value": "/pdf/dc57be85188955311eeeb6b9cad46b5469c6b35e.pdf"
} | 
	conference | 
	colmweb.org/COLM/2025/Conference | 2,025 | 
	COLM | 
	[
  "Reasoning",
  "Large Language Model",
  "Fine-Tuning"
] | 
	I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html | 
	I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html | null | null | 
	@inproceedings{
wang2025critique,
title={Critique Fine-Tuning: Learning to Critique is More Effective than Learning to Imitate},
author={Yubo Wang and Xiang Yue and Wenhu Chen},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=vTAz44GgOA}
} | 
	wang|critique_finetuning_learning_to_critique_is_more_effective_than_learning_to_imitate | null | null | null | null | null | |
| 
	SimpleRL-Zoo: Investigating and Taming Zero Reinforcement Learning for Open Base Models in the Wild | 
	The paper explores zero RL training with rule-based rewards for emergent chain-of-thought reasoning in smaller models, producing significant improvements in both reasoning accuracy and CoT length across all settings. | 
	DeepSeek-R1 has shown that long chain-of-thought (CoT) reasoning can naturally emerge through a simple reinforcement learning (RL) framework with rule-based rewards, where the training may directly start from the base models—a paradigm referred to as zero RL training. Most recent efforts to reproduce zero RL training have primarily focused on the Qwen2.5 model series, which may not be representative as we find the base models already exhibit strong instruction-following and self-reflection abilities.
In this work, we investigate zero RL training across 10 diverse base models, spanning different families and sizes including LLama3-8B, Mistral-7B/24B, DeepSeek-Math-7B, Qwen2.5-math-7B, and all Qwen2.5 models from 0.5B to 32B. 
Leveraging several key design strategies—such as adjusting format reward and controlling query difficulty—we achieve substantial improvements in both reasoning accuracy and response length across most settings.
However, by carefully monitoring the training dynamics, we observe that different base models exhibit distinct patterns during training. For instance, the increased response length does not always correlate with the emergence of certain cognitive behaviors such as verification (i.e., the "aha moment"). Notably, we observe the ``aha moment'' for the first time in small models not from the Qwen family.
We share the key designs that enable successful zero RL training, along with our findings and practices. 
To facilitate further research, we open-source the  code, models, and analysis tools. | 
	[
  "Weihao Zeng",
  "Yuzhen Huang",
  "Qian Liu",
  "Wei Liu",
  "Keqing He",
  "Zejun MA",
  "Junxian He"
] | 
	https://openreview.net/forum?id=vSMCBUgrQj | 
	vSMCBUgrQj | 
	vSMCBUgrQj | 
	[
  "~Weihao_Zeng2",
  "~Yuzhen_Huang2",
  "~Qian_Liu2",
  "~Wei_Liu25",
  "~Keqing_He1",
  "~Zejun_MA1",
  "~Junxian_He1"
] | 
	{
  "value": "COLM 2025"
} | 
	{
  "value": "colmweb.org/COLM/2025/Conference"
} | 
	{
  "value": "/pdf/bbe20558d2168bcb2992d581750387291d026a34.pdf"
} | 
	conference | 
	colmweb.org/COLM/2025/Conference | 2,025 | 
	COLM | 
	[
  "Reasoning",
  "Large Language Model"
] | 
	I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html | 
	I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html | null | null | 
	@inproceedings{
zeng2025simplerlzoo,
title={Simple{RL}-Zoo: Investigating and Taming Zero Reinforcement Learning for Open Base Models in the Wild},
author={Weihao Zeng and Yuzhen Huang and Qian Liu and Wei Liu and Keqing He and Zejun MA and Junxian He},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=vSMCBUgrQj}
} | 
	zeng|simplerlzoo_investigating_and_taming_zero_reinforcement_learning_for_open_base_models_in_the_wild | null | null | null | null | null | |
| 
	Do Large Language Models Have a Planning Theory of Mind? Evidence from MindGames: a Multi-Step Persuasion Task | 
	Humans significantly outperform LLMs at our complex theory of mind task | 
	Recent evidence suggests Large Language Models (LLMs) display Theory of Mind (ToM) abilities. Most ToM experiments place participants in a spectatorial role, wherein they predict and interpret other agents' behavior. However, human ToM also contributes to dynamically planning action and strategically intervening on others' mental states. We present MindGames: a novel `planning theory of mind' (PToM) task which requires agents to infer an interlocutor's beliefs and desires to persuade them to alter their behavior. Unlike previous evaluations, we explicitly evaluate use cases of ToM. We find that humans significantly outperform o1-preview (an LLM) at our PToM task (11% higher; $p=0.006$). We hypothesize this is because humans have an implicit causal model of other agents (e.g., they know, as our task requires, to ask about people's preferences). In contrast, o1-preview outperforms humans in a baseline condition which requires a similar amount of planning but minimal mental state inferences (e.g., o1-preview is better than humans at planning when already given someone's preferences). These results suggest a significant gap between human-like social reasoning and LLM abilities. | 
	[
  "Jared Moore",
  "Ned Cooper",
  "Rasmus Overmark",
  "Beba Cibralic",
  "Cameron Robert Jones",
  "Nick Haber"
] | 
	https://openreview.net/forum?id=vNJbDhgrM4 | 
	vNJbDhgrM4 | 
	vNJbDhgrM4 | 
	[
  "~Jared_Moore1",
  "~Ned_Cooper1",
  "~Rasmus_Overmark1",
  "~Beba_Cibralic1",
  "~Cameron_Robert_Jones1",
  "~Nick_Haber1"
] | 
	{
  "value": "COLM 2025"
} | 
	{
  "value": "colmweb.org/COLM/2025/Conference"
} | 
	{
  "value": "/pdf/5e63f274f962ce2282ab6b0841a50e9c17d911ec.pdf"
} | 
	conference | 
	colmweb.org/COLM/2025/Conference | 2,025 | 
	COLM | 
	[
  "theory of mind",
  "planning",
  "causal model",
  "persuasion"
] | 
	I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html | 
	I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html | null | null | 
	@inproceedings{
moore2025do,
title={Do Large Language Models Have a Planning Theory of Mind? Evidence from MindGames: a Multi-Step Persuasion Task},
author={Jared Moore and Ned Cooper and Rasmus Overmark and Beba Cibralic and Cameron Robert Jones and Nick Haber},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=vNJbDhgrM4}
} | 
	moore|do_large_language_models_have_a_planning_theory_of_mind_evidence_from_mindgames_a_multistep_persuasion_task | null | null | null | null | null | |
| 
	Do Biased Models Have Biased Thoughts? | 
	This paper explores whether biased language models have biased reasoning, finding that their thought processes are not strongly linked to biased outputs. | 
	The impressive performance of language models is undeniable. However, the presence of biases based on gender, race, socio-economic status, physical appearance, and sexual orientation makes the deployment of language models challenging. This paper studies the effect of chain-of-thought prompting, a recent approach that studies the steps followed by the model before it responds, on fairness. More specifically, we ask the following question: *Do biased models have biased thoughts*? To answer our question, we conduct experiments on $5$ popular large language models using fairness metrics to quantify $11$ different biases in the model's thoughts and output. Our results show that the bias in the thinking steps is not highly correlated with the output bias (less than $0.6$ correlation with a $p$-value smaller than $0.001$ in most cases). In other words, unlike human beings, the tested models with biased decisions do not always possess biased thoughts. | 
	[
  "Swati Rajwal",
  "Shivank Garg",
  "Reem Abdel-Salam",
  "Abdelrahman Zayed"
] | 
	https://openreview.net/forum?id=vDr0RV3590 | 
	vDr0RV3590 | 
	vDr0RV3590 | 
	[
  "~Swati_Rajwal2",
  "~Shivank_Garg1",
  "~Reem_Abdel-Salam1",
  "~Abdelrahman_Zayed1"
] | 
	{
  "value": "COLM 2025"
} | 
	{
  "value": "colmweb.org/COLM/2025/Conference"
} | 
	{
  "value": "/pdf/ded6a3254f5a138cc7200d82253bd49853ededb2.pdf"
} | 
	conference | 
	colmweb.org/COLM/2025/Conference | 2,025 | 
	COLM | 
	[
  "Bias in language models",
  "Large Language Models",
  "biased thoughts",
  "Chain-of-Thought prompting"
] | 
	I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html | 
	I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html | null | null | 
	@inproceedings{
rajwal2025do,
title={Do Biased Models Have Biased Thoughts?},
author={Swati Rajwal and Shivank Garg and Reem Abdel-Salam and Abdelrahman Zayed},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=vDr0RV3590}
} | 
	rajwal|do_biased_models_have_biased_thoughts | null | null | null | null | null | |
| 
	How do language models learn facts? Dynamics, curricula and hallucinations | 
	We analyze the learning dynamics of language models on a synthetic memory task and show that they learn sequentially, that some data distribution properties lead to faster learning, and that hallucinations appear simultaneously with knowledge acquisition. | 
	Large language models accumulate vast amounts of knowledge during their pre-training, yet the dynamics governing this acquisition remain poorly understood. This work investigates the learning dynamics of language models on a synthetic factual recall task, uncovering three key findings: First, language models learn in three phases, with performance plateauing before they acquire precise factual knowledge. Mechanistically, this plateau coincides with the formation of attention-based circuits that support recall.
Second, the training data distribution significantly impacts learning dynamics, with imbalanced distributions shortening the plateau.
Finally, hallucinations appear simultaneously with knowledge, and integrating new knowledge into the model through fine-tuning is challenging, as it quickly corrupts its existing parametric associative memories. Our results emphasize the importance of data distribution in knowledge acquisition and suggest novel data scheduling strategies to accelerate neural network training. | 
	[
  "Nicolas Zucchet",
  "Jorg Bornschein",
  "Stephanie C.Y. Chan",
  "Andrew Kyle Lampinen",
  "Razvan Pascanu",
  "Soham De"
] | 
	https://openreview.net/forum?id=vBcGnragkr | 
	vBcGnragkr | 
	vBcGnragkr | 
	[
  "~Nicolas_Zucchet1",
  "~Jorg_Bornschein1",
  "~Stephanie_C.Y._Chan1",
  "~Andrew_Kyle_Lampinen1",
  "~Razvan_Pascanu1",
  "~Soham_De2"
] | 
	{
  "value": "COLM 2025"
} | 
	{
  "value": "colmweb.org/COLM/2025/Conference"
} | 
	{
  "value": "/pdf/462209e2d72b398c2da64f53207c0eeea6740b0d.pdf"
} | 
	conference | 
	colmweb.org/COLM/2025/Conference | 2,025 | 
	COLM | 
	[
  "learning dynamics",
  "factual recall",
  "curricula",
  "data distribution",
  "hallucinations"
] | 
	I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html | 
	I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html | null | null | 
	@inproceedings{
zucchet2025how,
title={How do language models learn facts? Dynamics, curricula and hallucinations},
author={Nicolas Zucchet and Jorg Bornschein and Stephanie C.Y. Chan and Andrew Kyle Lampinen and Razvan Pascanu and Soham De},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=vBcGnragkr}
} | 
	zucchet|how_do_language_models_learn_facts_dynamics_curricula_and_hallucinations | null | null | null | null | null | |
| 
	News is More than a Collection of Facts: Moral Frame Preserving News Summarization | 
	The first investigation into how LLMs can summarize news articles while preserving moral framing. | 
	News articles are more than collections of facts; they reflect journalists' framing, shaping how events are presented to the audience. One key aspect of framing is the choice to write in (or quote verbatim) morally charged language as opposed to using neutral terms. This moral framing carries implicit judgments that automated news summarizers should recognize and preserve to maintain the original intent of the writer. In this work, we perform the first study on the preservation of moral framing in AI-generated news summaries. We propose an approach that leverages the intuition that journalists intentionally use or report specific moral-laden words, which should be retained in summaries. Through automated, crowd-sourced, and expert evaluations, we demonstrate that our approach enhances the preservation of moral framing while maintaining overall summary quality. | 
	[
  "Enrico Liscio",
  "Michela Lorandi",
  "Pradeep K. Murukannaiah"
] | 
	https://openreview.net/forum?id=uzauWUW9u3 | 
	uzauWUW9u3 | 
	uzauWUW9u3 | 
	[
  "~Enrico_Liscio1",
  "~Michela_Lorandi1",
  "~Pradeep_K._Murukannaiah1"
] | 
	{
  "value": "COLM 2025"
} | 
	{
  "value": "colmweb.org/COLM/2025/Conference"
} | 
	{
  "value": "/pdf/6940325f5f0ed453a99c0b091dcc120b38f1a6f4.pdf"
} | 
	conference | 
	colmweb.org/COLM/2025/Conference | 2,025 | 
	COLM | 
	[
  "LLMs",
  "news",
  "summarization",
  "morality",
  "framing"
] | 
	I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html | 
	I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html | null | null | 
	@inproceedings{
liscio2025news,
title={News is More than a Collection of Facts: Moral Frame Preserving News Summarization},
author={Enrico Liscio and Michela Lorandi and Pradeep K. Murukannaiah},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=uzauWUW9u3}
} | 
	liscio|news_is_more_than_a_collection_of_facts_moral_frame_preserving_news_summarization | null | null | null | null | null | |
| 
	Pairwise or Pointwise? Evaluating Feedback Protocols for Bias in LLM-Based Evaluation | 
	This work examines how feedback protocols (absolute scores vs. pairwise preferences) impact biases in LLM evaluations, revealing that absolute scoring is more robust to distractor features. | 
	Large Language Models (LLMs) are widely used as proxies for human labelers in both training (Reinforcement Learning from AI Feedback) and large-scale response evaluation (LLM-as-a-judge). Alignment and evaluation are critical components in the development of reliable LLMs, and the choice of feedback protocol plays a central role in both but remains understudied. In this work, we show that the choice of feedback protocol for evaluation (absolute scores versus relative preferences) can significantly affect evaluation reliability and induce systematic biases. In the context of LLM-as-a-judge evaluation, we show that pairwise protocols are more vulnerable to **distracted evaluation**. Generator models can exploit spurious attributes (or distractor features) favored by the LLM judge, resulting in inflated scores for lower-quality outputs. We find that absolute scoring is more robust to such manipulation, producing judgments that better reflect response quality and are less influenced by distractor features. Our results demonstrate that generator models can flip preferences by embedding distractor features, skewing LLM-as-a-judge comparisons and leading to inaccurate conclusions about model quality in benchmark evaluations. **Pairwise preferences flip in about 35\% of the cases, compared to only 9\% for absolute scores**. We offer recommendations for choosing feedback protocols based on dataset characteristics and evaluation objectives. | 
	[
  "Tuhina Tripathi",
  "Manya Wadhwa",
  "Greg Durrett",
  "Scott Niekum"
] | 
	https://openreview.net/forum?id=uyX5Vnow3U | 
	uyX5Vnow3U | 
	uyX5Vnow3U | 
	[
  "~Tuhina_Tripathi1",
  "~Manya_Wadhwa1",
  "~Greg_Durrett1",
  "~Scott_Niekum1"
] | 
	{
  "value": "COLM 2025"
} | 
	{
  "value": "colmweb.org/COLM/2025/Conference"
} | 
	{
  "value": "/pdf/ac5ede1c4f3e38efcb8223c7e17ba3fb8949b2ee.pdf"
} | 
	conference | 
	colmweb.org/COLM/2025/Conference | 2,025 | 
	COLM | 
	[
  "evaluation",
  "data",
  "alignment"
] | 
	I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html | 
	I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html | null | null | 
	@inproceedings{
tripathi2025pairwise,
title={Pairwise or Pointwise? Evaluating Feedback Protocols for Bias in {LLM}-Based Evaluation},
author={Tuhina Tripathi and Manya Wadhwa and Greg Durrett and Scott Niekum},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=uyX5Vnow3U}
} | 
	tripathi|pairwise_or_pointwise_evaluating_feedback_protocols_for_bias_in_llmbased_evaluation | null | null | null | null | null | |
| 
	Estimating Optimal Context Length for Hybrid Retrieval-augmented Multi-document Summarization | 
	We present a novel method to estimate optimal context length for retrieval-augmented generation. Our estimate is a function of the retriever, summarizer and the downstream task. | 
	Recent advances in long-context reasoning abilities of language models led to interesting applications in large-scale multi-document summarization. However, prior work has shown that these long-context models are not effective at their claimed context windows. To this end, retrieval-augmented systems provide an efficient and effective alternative. However, their performance can be highly sensitive to the choice of retrieval context length. In this work, we present a hybrid method that combines retrieval-augmented systems with long-context windows supported by recent language models. Our method first estimates the optimal retrieval length as a function of the retriever, summarizer, and dataset. On a randomly sampled subset of the dataset, we use a panel of LMs to generate a pool of silver references. We use these silver references to estimate the optimal context length for a given RAG system configuration. Our results on the multi-document summarization task showcase the effectiveness of our method across model classes and sizes. We compare against length estimates from strong long-context benchmarks such as RULER and HELMET. Our analysis also highlights the effectiveness of our estimation method for very long-context LMs and its generalization to new classes of LMs. | 
	[
  "Adithya Pratapa",
  "Teruko Mitamura"
] | 
	https://openreview.net/forum?id=uh0Sf8yN7n | 
	uh0Sf8yN7n | 
	uh0Sf8yN7n | 
	[
  "~Adithya_Pratapa1",
  "~Teruko_Mitamura1"
] | 
	{
  "value": "COLM 2025"
} | 
	{
  "value": "colmweb.org/COLM/2025/Conference"
} | 
	{
  "value": "/pdf/4e0303ac5eb0858029dfdcaa571f784512a0ccdf.pdf"
} | 
	conference | 
	colmweb.org/COLM/2025/Conference | 2,025 | 
	COLM | 
	[
  "retrieval-augmented generation",
  "long-context",
  "multi-document summarization"
] | 
	I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html | 
	I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html | null | null | 
	@inproceedings{
pratapa2025estimating,
title={Estimating Optimal Context Length for Hybrid Retrieval-augmented Multi-document Summarization},
author={Adithya Pratapa and Teruko Mitamura},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=uh0Sf8yN7n}
} | 
	pratapa|estimating_optimal_context_length_for_hybrid_retrievalaugmented_multidocument_summarization | null | null | null | null | null | |
| 
	Missing Premise exacerbates Overthinking: Are Reasoning Models losing Critical Thinking Skill? | 
	Reasoning models can’t think critically when the premise is missing. | 
	We find that the response length of reasoning LLMs, whether trained by reinforcement learning or supervised learning, drastically increases for ill-posed questions with missing premises (MiP), ending up with redundant and ineffective thinking. 
    Such failures are against the ``test-time scaling law'' but have been widely observed on multiple datasets we curated with MiP, indicating the harm of cheap overthinking and a lack of critical thinking. 
    Surprisingly, LLMs not specifically trained for reasoning exhibit much better critical thinking ability, producing much shorter responses that quickly identify ill-posed queries and ask for the MiP. This implies a critical flaw of the current training recipe for reasoning LLMs, which does not encourage efficient thinking adequately, leading to the abuse of thinking patterns. 
    To further investigate the reasons behind such failures, we conduct fine-grained analyses of the reasoning length, overthinking patterns, and location of critical thinking on different types of LLMs. 
    Moreover, our extended ablation study reveals that the overthinking is contagious through the distillation of reasoning models' responses.
    These results improve the understanding of overthinking and shed novel insights into mitigating the problem. 
    Our code and data can be found in: https://github.com/tianyi-lab/MiP-Overthinking. | 
	[
  "Chenrui Fan",
  "Ming Li",
  "Lichao Sun",
  "Tianyi Zhou"
] | 
	https://openreview.net/forum?id=ufozo2Wc9e | 
	ufozo2Wc9e | 
	ufozo2Wc9e | 
	[
  "~Chenrui_Fan1",
  "~Ming_Li18",
  "~Lichao_Sun1",
  "~Tianyi_Zhou2"
] | 
	{
  "value": "COLM 2025"
} | 
	{
  "value": "colmweb.org/COLM/2025/Conference"
} | 
	{
  "value": "/pdf/018bea3ab55d1c1e4c93295a221d9e746fe84514.pdf"
} | 
	conference | 
	colmweb.org/COLM/2025/Conference | 2,025 | 
	COLM | 
	[
  "LLM",
  "Reasoning Model",
  "Overthinking",
  "Abstain"
] | 
	I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html | 
	I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html | null | null | 
	@inproceedings{
fan2025missing,
title={Missing Premise exacerbates Overthinking: Are Reasoning Models losing Critical Thinking Skill?},
author={Chenrui Fan and Ming Li and Lichao Sun and Tianyi Zhou},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=ufozo2Wc9e}
} | 
	fan|missing_premise_exacerbates_overthinking_are_reasoning_models_losing_critical_thinking_skill | 
	/attachment/825254c1747951fd04bf0c9067828ba4da07f36f.zip | null | null | null | null | |
| 
	Brains vs. Bytes: Evaluating LLM Proficiency in Olympiad Mathematics | 
	We evaluate large language models on Olympiad-level mathematics, revealing their inability to produce rigorous and logically sound proofs despite occasional correct final answers. | 
	Recent advancements in large language models (LLMs) have shown impressive progress in mathematical reasoning tasks. However, current evaluation benchmarks predominantly focus on the accuracy of final answers, often overlooking the logical rigor crucial for mathematical problem-solving. The claim that state-of-the-art LLMs can solve Math Olympiad-level problems requires closer examination. To explore this, we conducted both qualitative and quantitative human evaluations of proofs generated by LLMs, and developed a schema for automatically assessing their reasoning capabilities. Our study reveals that current LLMs fall significantly short of solving challenging Olympiad-level problems and frequently fail to distinguish correct mathematical reasoning from clearly flawed solutions. We also found that occasional correct final answers provided by LLMs often result from pattern recognition or heuristic shortcuts rather than genuine mathematical reasoning. These findings underscore the substantial gap between LLM performance and human expertise in advanced mathematical reasoning and highlight the importance of developing benchmarks that prioritize the rigor and coherence of mathematical arguments rather than merely the correctness of final answers. | 
	[
  "Hamed Mahdavi",
  "Alireza Hashemi",
  "Majid Daliri",
  "Pegah Mohammadipour",
  "Alireza Farhadi",
  "Samira Malek",
  "Yekta Yazdanifard",
  "Amir Khasahmadi",
  "Vasant G Honavar"
] | 
	https://openreview.net/forum?id=uXR2KsA4L9 | 
	uXR2KsA4L9 | 
	uXR2KsA4L9 | 
	[
  "~Hamed_Mahdavi1",
  "~Alireza_Hashemi2",
  "~Majid_Daliri1",
  "~Pegah_Mohammadipour1",
  "~Alireza_Farhadi2",
  "~Samira_Malek1",
  "~Yekta_Yazdanifard1",
  "~Amir_Khasahmadi1",
  "~Vasant_G_Honavar1"
] | 
	{
  "value": "COLM 2025"
} | 
	{
  "value": "colmweb.org/COLM/2025/Conference"
} | 
	{
  "value": "/pdf/d53d2675aef66e061647aa4edbe85fe1f4104521.pdf"
} | 
	conference | 
	colmweb.org/COLM/2025/Conference | 2,025 | 
	COLM | 
	[
  "Mathematical Reasoning",
  "Human Evaluation",
  "Reasoning Evaluation",
  "Math Problem-Solving"
] | 
	I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html | 
	I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html | null | null | 
	@inproceedings{
mahdavi2025brains,
title={Brains vs. Bytes: Evaluating {LLM} Proficiency in Olympiad Mathematics},
author={Hamed Mahdavi and Alireza Hashemi and Majid Daliri and Pegah Mohammadipour and Alireza Farhadi and Samira Malek and Yekta Yazdanifard and Amir Khasahmadi and Vasant G Honavar},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=uXR2KsA4L9}
} | 
	mahdavi|brains_vs_bytes_evaluating_llm_proficiency_in_olympiad_mathematics | null | null | null | null | null | |
| 
	BlockFFN: Towards End-Side Acceleration-Friendly Mixture-of-Experts with Chunk-Level Activation Sparsity | 
	We propose BlockFFN, an effective MoE architecture more friendly for end-side acceleration, as well as its sparsity-aware training objectives and efficient acceleration kernels. | 
	To alleviate the computational burden of large language models (LLMs), architectures with activation sparsity, represented by mixture-of-experts (MoE), have attracted increasing attention. However, the non-differentiable and inflexible routing of vanilla MoE hurts model performance. Moreover, while each token activates only a few parameters, these sparsely-activated architectures exhibit low chunk-level sparsity, indicating that the union of multiple consecutive tokens activates a large ratio of parameters. Such a sparsity pattern is unfriendly for acceleration under low-resource conditions (e.g., end-side devices) and incompatible with mainstream acceleration techniques (e.g., speculative decoding). To address these challenges, we introduce a novel MoE architecture, BlockFFN, as well as its efficient training and deployment techniques. Specifically, we use a router integrating ReLU activation and RMSNorm for differentiable and flexible routing. Next, to promote both token-level sparsity (TLS) and chunk-level sparsity (CLS), CLS-aware training objectives are designed, making BlockFFN more acceleration-friendly. Finally, we implement efficient acceleration kernels, combining activation sparsity and speculative decoding for the first time. The experimental results demonstrate the superior performance of BlockFFN over other MoE baselines, achieving over 80\% TLS and 70\% 8-token CLS. Our kernels achieve up to 3.67$\times$ speedup on real end-side devices than dense models. All codes and checkpoints are available publicly at https://github.com/thunlp/BlockFFN. | 
	[
  "Chenyang Song",
  "Weilin Zhao",
  "Xu Han",
  "Chaojun Xiao",
  "Yingfa Chen",
  "Yuxuan Li",
  "Zhiyuan Liu",
  "Maosong Sun"
] | 
	https://openreview.net/forum?id=uLl7tSUOir | 
	uLl7tSUOir | 
	uLl7tSUOir | 
	[
  "~Chenyang_Song1",
  "~Weilin_Zhao1",
  "~Xu_Han2",
  "~Chaojun_Xiao1",
  "~Yingfa_Chen1",
  "~Yuxuan_Li19",
  "~Zhiyuan_Liu1",
  "~Maosong_Sun1"
] | 
	{
  "value": "COLM 2025"
} | 
	{
  "value": "colmweb.org/COLM/2025/Conference"
} | 
	{
  "value": "/pdf/894e65b966a485306b765993e6715a25060f8737.pdf"
} | 
	conference | 
	colmweb.org/COLM/2025/Conference | 2,025 | 
	COLM | 
	[
  "mixture-of-experts",
  "activation sparsity",
  "inference acceleration"
] | 
	I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html | 
	I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html | null | null | 
	@inproceedings{
song2025blockffn,
title={Block{FFN}: Towards End-Side Acceleration-Friendly Mixture-of-Experts with Chunk-Level Activation Sparsity},
author={Chenyang Song and Weilin Zhao and Xu Han and Chaojun Xiao and Yingfa Chen and Yuxuan Li and Zhiyuan Liu and Maosong Sun},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=uLl7tSUOir}
} | 
	song|blockffn_towards_endside_accelerationfriendly_mixtureofexperts_with_chunklevel_activation_sparsity | 
	/attachment/64657ad2b7008be33d41cfe2edd34b44a1e24875.zip | null | null | null | null | |
| 
	ProsodyLM: Uncovering the Emerging Prosody Processing Capabilities in Speech Language Models | 
	We propose ProsodyLM, a speech language model that demonstrates impressive emerging prosody generation and understanding capabilities simply through pre-training on 30k audiobooks. | 
	Speech language models refer to language models with speech processing and understanding capabilities. One key desirable capability for speech language models is the ability to capture the intricate interdependency between content and prosody. The existing mainstream paradigm of training speech language models, which converts speech into discrete tokens before feeding them into LLMs, is sub-optimal in learning prosody information --- we find that the resulting LLMs do not exhibit obvious emerging prosody processing capabilities via pre-training alone. To overcome this, we propose ProsodyLM, which introduces a simple tokenization scheme amenable to learning prosody. Each speech utterance is first transcribed into text, followed by a sequence of word-level prosody tokens. Compared with conventional speech tokenization schemes, the proposed tokenization scheme retains more complete prosody information, and is more understandable to text-based LLMs. We find that ProsodyLM can learn surprisingly diverse emerging prosody processing capabilities through pre-training alone, ranging from harnessing the prosody nuances in generated speech, such as contrastive focus, understanding emotion and stress in an utterance, to maintaining prosody consistency in long contexts. | 
	[
  "Kaizhi Qian",
  "Xulin Fan",
  "Junrui Ni",
  "Slava Shechtman",
  "Mark A. Hasegawa-Johnson",
  "Chuang Gan",
  "Yang Zhang"
] | 
	https://openreview.net/forum?id=uBg8PClMUu | 
	uBg8PClMUu | 
	uBg8PClMUu | 
	[
  "~Kaizhi_Qian1",
  "~Xulin_Fan1",
  "~Junrui_Ni1",
  "~Slava_Shechtman1",
  "~Mark_A._Hasegawa-Johnson1",
  "~Chuang_Gan1",
  "~Yang_Zhang3"
] | 
	{
  "value": "COLM 2025"
} | 
	{
  "value": "colmweb.org/COLM/2025/Conference"
} | 
	{
  "value": "/pdf/6a359ebba5a3de8baa31631883a6b21c9cbc97a3.pdf"
} | 
	conference | 
	colmweb.org/COLM/2025/Conference | 2,025 | 
	COLM | 
	[
  "Speech LM: Multi-modal LLM"
] | 
	I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html | 
	I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html | null | null | 
	@inproceedings{
qian2025prosodylm,
title={Prosody{LM}: Uncovering the Emerging Prosody Processing Capabilities in Speech Language Models},
author={Kaizhi Qian and Xulin Fan and Junrui Ni and Slava Shechtman and Mark A. Hasegawa-Johnson and Chuang Gan and Yang Zhang},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=uBg8PClMUu}
} | 
	qian|prosodylm_uncovering_the_emerging_prosody_processing_capabilities_in_speech_language_models | null | null | null | null | null | |
| 
	VaPR - Vision-language Preference alignment for Reasoning | 
	VaPR, a hard-negative preference dataset that mitigates stylistic and length biases in AI feedback, enabling improved reasoning and robustness in preference finetuned (DPO) vision-language models across ten benchmarks. | 
	Preference finetuning methods like Direct Preference Optimization (DPO) with AI-generated feedback have shown promise in aligning Large Vision-Language Models (LVLMs) with human preferences. However, existing techniques overlook the prevalence of noise in synthetic preference annotations in the form of stylistic and length biases. To this end, we introduce a hard-negative response generation framework based on LLM-guided response editing, that produces rejected responses with targeted errors, maintaining stylistic and length similarity to the accepted ones. Using this framework, we develop the VaPR dataset, comprising 30K high-quality samples, to finetune three LVLM families: LLaVA-V1.5, Qwen2VL \& Qwen2.5VL (2B-13B sizes). Our VaPR models deliver significant performance improvements across ten benchmarks, achieving average gains of 6.5% (LLaVA), 4.0% (Qwen2VL), and 1.5% (Qwen2.5VL), with notable improvements on reasoning tasks. A scaling analysis shows that performance consistently improves with data size, with LLaVA models benefiting even at smaller scales. Moreover, VaPR reduces the tendency to answer "Yes" in binary questions - addressing a common failure mode in LVLMs like LLaVA. Lastly, we show that the framework generalizes to open-source LLMs as editors, with models trained on VaPR-OS achieving ~99% of the performance of models trained on VaPR, which is synthesized using GPT-4o. Our data, models, and code can be found on the project page https://vap-r.github.io/vap-r/ | 
	[
  "Rohan Wadhawan",
  "Fabrice Y Harel-Canada",
  "Zi-Yi Dou",
  "Suhaila Shakiah",
  "Robinson Piramuthu",
  "Nanyun Peng"
] | 
	https://openreview.net/forum?id=uBAubFwymy | 
	uBAubFwymy | 
	uBAubFwymy | 
	[
  "~Rohan_Wadhawan1",
  "~Fabrice_Y_Harel-Canada1",
  "~Zi-Yi_Dou1",
  "~Suhaila_Shakiah1",
  "~Robinson_Piramuthu1",
  "~Nanyun_Peng1"
] | 
	{
  "value": "COLM 2025"
} | 
	{
  "value": "colmweb.org/COLM/2025/Conference"
} | 
	{
  "value": "/pdf/70b7995114502f1826a2a3c5a3f55008f8261306.pdf"
} | 
	conference | 
	colmweb.org/COLM/2025/Conference | 2,025 | 
	COLM | 
	[
  "Vision Language Models",
  "Preference Optimization",
  "DPO",
  "Data Generation",
  "Reasoning"
] | 
	I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html | 
	I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html | null | null | 
	@inproceedings{
wadhawan2025vapr,
title={Va{PR} - Vision-language Preference alignment for Reasoning},
author={Rohan Wadhawan and Fabrice Y Harel-Canada and Zi-Yi Dou and Suhaila Shakiah and Robinson Piramuthu and Nanyun Peng},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=uBAubFwymy}
} | 
	wadhawan|vapr_visionlanguage_preference_alignment_for_reasoning | null | null | null | null | null | |
| 
	DeepRetrieval: Hacking Real Search Engines and Retrievers with Large Language Models via Reinforcement Learning | 
	DeepRetrieval trains query generation models through reinforcement learning instead of supervised data, achieving state-of-the-art performance across diverse retrieval tasks while being more efficient than existing approaches. | 
	Information retrieval systems are crucial for enabling effective access to large document collections. Recent approaches have leveraged Large Language Models (LLMs) to enhance retrieval performance through query augmentation, but often rely on expensive supervised learning or distillation techniques that require significant computational resources and hand-labeled data. We introduce DeepRetrieval, a reinforcement learning approach that trains LLMs for query generation through trial and error without supervised data for reference query. Using retrieval metrics as rewards, our system generates queries that maximize retrieval performance. DeepRetrieval outperforms state-of-the-art methods on literature search with 65.07\% (vs.\ previous SOTA 24.68\%) recall for publication search and 63.18\% (vs.\ previous SOTA 32.11\%) recall for trial search using real-world search engines. DeepRetrieval also dominates in evidence-seeking retrieval, classic information retrieval and SQL database search. With only 3B parameters, it outperforms industry-leading models like GPT-4o and Claude-3.5-Sonnet on those tasks. These results demonstrate that our reinforcement learning approach offers a more efficient and effective paradigm for information retrieval. | 
	[
  "Pengcheng Jiang",
  "Jiacheng Lin",
  "Lang Cao",
  "Runchu Tian",
  "SeongKu Kang",
  "Zifeng Wang",
  "Jimeng Sun",
  "Jiawei Han"
] | 
	https://openreview.net/forum?id=u9JXu4L17I | 
	u9JXu4L17I | 
	u9JXu4L17I | 
	[
  "~Pengcheng_Jiang2",
  "~Jiacheng_Lin3",
  "~Lang_Cao2",
  "~Runchu_Tian1",
  "~SeongKu_Kang1",
  "~Zifeng_Wang3",
  "~Jimeng_Sun3",
  "~Jiawei_Han1"
] | 
	{
  "value": "COLM 2025"
} | 
	{
  "value": "colmweb.org/COLM/2025/Conference"
} | 
	{
  "value": "/pdf/d6e93b6b5f6f01c6793b460b37410d7d7ec3a1cd.pdf"
} | 
	conference | 
	colmweb.org/COLM/2025/Conference | 2,025 | 
	COLM | 
	[
  "Large Language Models",
  "Information Retrieval",
  "Reinforcement Learning"
] | 
	I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html | 
	I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html | null | null | 
	@inproceedings{
jiang2025deepretrieval,
title={DeepRetrieval: Hacking Real Search Engines and Retrievers with Large Language Models via Reinforcement Learning},
author={Pengcheng Jiang and Jiacheng Lin and Lang Cao and Runchu Tian and SeongKu Kang and Zifeng Wang and Jimeng Sun and Jiawei Han},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=u9JXu4L17I}
} | 
	jiang|deepretrieval_hacking_real_search_engines_and_retrievers_with_large_language_models_via_reinforcement_learning | 
	/attachment/e1c13f35faae5175020c5266f60439cdda5f1f8f.zip | null | null | null | null | |
| 
	SecurityLingua: Efficient Defense of LLM Jailbreak Attacks via Security-Aware Prompt Compression | 
	SecurityLingua defends LLMs from jailbreak attacks using security-aware prompt compression to extract the true intention of a prompt. It helps the model activate its safety guardrails without altering the original prompt, with minimal compute and latency overhead. | 
	Large language models (LLMs) have achieved widespread adoption across numerous applications. However, many LLMs are vulnerable to malicious attacks even after safety alignment. These attacks typically bypass LLMs’ safety guardrails by wrapping the original malicious instructions inside adversarial jailbreak prompts. Previous research has proposed methods such as adversarial training and prompt rephrasing to mitigate these safety vulnerabilities, but these methods often reduce the utility of LLMs or lead to significant computational overhead and online latency. In this paper, we propose SecurityLingua, an effective and efficient approach to defend LLMs against jailbreak attacks via security-oriented prompt compression. Specifically, we train a prompt compressor designed to discern the “true intention” of the input prompt, with a particular focus on detecting the malicious intentions of adversarial prompts. Then, in addition to the original prompt, the intention is passed via the system prompt to the target LLM to help it identify the true intention of the request. SecurityLingua ensures a consistent user experience by leaving the original input prompt intact while revealing the user’s potentially malicious intention and stimulating the built-in safety guardrails of the LLM. Moreover, thanks to prompt compression, SecurityLingua incurs only a negligible overhead and extra token cost compared to all existing defense methods, making it an especially practical solution for LLM defense. Experimental results demonstrate that SecurityLingua can effectively defend against malicious attacks and maintain the utility of the LLM with negligible compute and latency overhead. Our code is available at https://aka.ms/SecurityLingua. | 
	[
  "Yucheng Li",
  "Surin Ahn",
  "Huiqiang Jiang",
  "Amir H. Abdi",
  "Yuqing Yang",
  "Lili Qiu"
] | 
	https://openreview.net/forum?id=tybbSo6wba | 
	tybbSo6wba | 
	tybbSo6wba | 
	[
  "~Yucheng_Li5",
  "~Surin_Ahn1",
  "~Huiqiang_Jiang2",
  "~Amir_H._Abdi1",
  "~Yuqing_Yang1",
  "~Lili_Qiu3"
] | 
	{
  "value": "COLM 2025"
} | 
	{
  "value": "colmweb.org/COLM/2025/Conference"
} | 
	{
  "value": "/pdf/d284f5f7d160ec273e8ba2c87857c7a6d07c1e91.pdf"
} | 
	conference | 
	colmweb.org/COLM/2025/Conference | 2,025 | 
	COLM | 
	[
  "Jailbreak Attacks Defense",
  "Prompt Compression"
] | 
	I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html | 
	I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html | null | null | 
	@inproceedings{
li2025securitylingua,
title={SecurityLingua: Efficient Defense of {LLM} Jailbreak Attacks via Security-Aware Prompt Compression},
author={Yucheng Li and Surin Ahn and Huiqiang Jiang and Amir H. Abdi and Yuqing Yang and Lili Qiu},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=tybbSo6wba}
} | 
	li|securitylingua_efficient_defense_of_llm_jailbreak_attacks_via_securityaware_prompt_compression | null | null | null | null | null | |
| 
	Why do LLMs attend to the first token? | 
	We study why it is useful for attention heads in LLMs to learn to "dump" most attention on the first token from an "over-mixing" perspective. | 
	Large Language Models (LLMs) tend to attend heavily to the first token in the sequence -- creating a so-called attention sink. Many works have studied this phenomenon in detail, proposing various ways to either leverage or alleviate it. Attention sinks have been connected to quantisation difficulties, security issues, and streaming attention. Yet, while many works have provided conditions in which they occur or not, a critical question remains shallowly answered: Why do LLMs learn such patterns and how are they being used? In this work, we argue theoretically and empirically that this mechanism provides a method for LLMs to avoid over-mixing, connecting this to existing lines of work that study mathematically how information propagates in Transformers. We run experiments to validate our theoretical intuitions and show how choices such as context length, depth, and data packing influence the sink behaviour. We hope that this study provides a new practical perspective on why attention sinks are useful in LLMs, leading to a better understanding of the attention patterns that form during training. | 
	[
  "Federico Barbero",
  "Alvaro Arroyo",
  "Xiangming Gu",
  "Christos Perivolaropoulos",
  "Petar Veličković",
  "Razvan Pascanu",
  "Michael M. Bronstein"
] | 
	https://openreview.net/forum?id=tu4dFUsW5z | 
	tu4dFUsW5z | 
	tu4dFUsW5z | 
	[
  "~Federico_Barbero1",
  "~Alvaro_Arroyo1",
  "~Xiangming_Gu1",
  "~Christos_Perivolaropoulos1",
  "~Petar_Veličković1",
  "~Razvan_Pascanu1",
  "~Michael_M._Bronstein1"
] | 
	{
  "value": "COLM 2025"
} | 
	{
  "value": "colmweb.org/COLM/2025/Conference"
} | 
	{
  "value": "/pdf/d2773cfb28982b55b4053eed019c7c33cd2cd57e.pdf"
} | 
	conference | 
	colmweb.org/COLM/2025/Conference | 2,025 | 
	COLM | 
	[
  "Large Language Models",
  "Attention Sinks",
  "Information Propagation",
  "Pre-training"
] | 
	I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html | 
	I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html | null | null | 
	@inproceedings{
barbero2025why,
title={Why do {LLM}s attend to the first token?},
author={Federico Barbero and Alvaro Arroyo and Xiangming Gu and Christos Perivolaropoulos and Petar Veli{\v{c}}kovi{\'c} and Razvan Pascanu and Michael M. Bronstein},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=tu4dFUsW5z}
} | 
	barbero|why_do_llms_attend_to_the_first_token | null | null | null | null | null | |
| 
	Towards User-level Private Reinforcement Learning with Human Feedback | 
	We propose AUP-RLHF, a user-level label DP framework that improves privacy-utility trade-offs in RLHF for better model alignment. | 
Reinforcement Learning with Human Feedback (RLHF) has emerged as an influential technique, enabling the alignment of large language models (LLMs) with human preferences. However, protecting user preference privacy has become a crucial issue, as LLMs tend to remember users' preferences. Most previous work has focused on using differential privacy (DP) to protect the privacy of individual data. However, these works have concentrated primarily on item-level privacy protection and have unsatisfactory performance for user-level privacy, which is more common in RLHF. This study proposes a novel framework, AUP-RLHF, which integrates user-level label DP into RLHF. We first show that the classical random response algorithm, which achieves acceptable performance under item-level privacy, leads to suboptimal utility in the user-level setting. We then establish a lower bound for user-level label DP-RLHF and develop the AUP-RLHF algorithm, which guarantees $(\varepsilon, \delta)$ user-level privacy and achieves an improved estimation error. Experimental results show that AUP-RLHF outperforms existing baseline methods in sentiment generation and summarization tasks, achieving a better privacy-utility trade-off. | 
	[
  "Jiaming Zhang",
  "Mingxi Lei",
  "Meng Ding",
  "Mengdi Li",
  "Zihang Xiang",
  "Difei Xu",
  "Jinhui Xu",
  "Di Wang"
] | 
	https://openreview.net/forum?id=tfriX0r2Sg | 
	tfriX0r2Sg | 
	tfriX0r2Sg | 
	[
  "~Jiaming_Zhang15",
  "~Mingxi_Lei1",
  "~Meng_Ding3",
  "~Mengdi_Li1",
  "~Zihang_Xiang1",
  "~Difei_Xu1",
  "~Jinhui_Xu1",
  "~Di_Wang1"
] | 
	{
  "value": "COLM 2025"
} | 
	{
  "value": "colmweb.org/COLM/2025/Conference"
} | 
	{
  "value": "/pdf/0e585b4aba2e0dc85e2231ed96be5c03e85c9e0a.pdf"
} | 
	conference | 
	colmweb.org/COLM/2025/Conference | 2,025 | 
	COLM | 
	[
  "Differential Privacy",
  "RLHF",
  "LLM alignment"
] | 
	I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html | 
	I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html | null | null | 
	@inproceedings{
zhang2025towards,
title={Towards User-level Private Reinforcement Learning with Human Feedback},
author={Jiaming Zhang and Mingxi Lei and Meng Ding and Mengdi Li and Zihang Xiang and Difei Xu and Jinhui Xu and Di Wang},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=tfriX0r2Sg}
} | 
	zhang|towards_userlevel_private_reinforcement_learning_with_human_feedback | null | null | null | null | null | |
| 
	A Taxonomy of Transcendence | 
	We propose a controlled setting in which to study how properties of the pretraining data influence the model's ability to transcend the performance of the sources that generated the data. | 
	Although language models are trained to mimic humans, the resulting systems display capabilities beyond the scope of any one person. To understand this phenomenon, we use a controlled setting to identify properties of the training data that lead a model to transcend the performance of its data sources. We build on previous work to outline three modes of transcendence, which we call \textit{skill denoising}, \textit{skill selection}, and \textit{skill generalization}. We then introduce a knowledge graph-based setting in which simulated experts generate data based on their individual expertise. We highlight several aspects of data diversity that help to enable the model's transcendent capabilities. Additionally, our data generation setting offers a controlled testbed that we hope is valuable for future research in the area. | 
	[
  "Natalie Abreu",
  "Edwin Zhang",
  "Eran Malach",
  "Naomi Saphra"
] | 
	https://openreview.net/forum?id=tfTn8616Gf | 
	tfTn8616Gf | 
	tfTn8616Gf | 
	[
  "~Natalie_Abreu1",
  "~Edwin_Zhang2",
  "~Eran_Malach3",
  "~Naomi_Saphra1"
] | 
	{
  "value": "COLM 2025"
} | 
	{
  "value": "colmweb.org/COLM/2025/Conference"
} | 
	{
  "value": "/pdf/fff295ffa555a1dfea26a3b7a268e1aee1446901.pdf"
} | 
	conference | 
	colmweb.org/COLM/2025/Conference | 2,025 | 
	COLM | 
	[
  "language models",
  "data diversity",
  "composition",
  "knowledge graph"
] | 
	I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html | 
	I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html | null | null | 
	@inproceedings{
abreu2025a,
title={A Taxonomy of Transcendence},
author={Natalie Abreu and Edwin Zhang and Eran Malach and Naomi Saphra},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=tfTn8616Gf}
} | 
	abreu|a_taxonomy_of_transcendence | null | true | null | null | null | |
| 
	One-shot Optimized Steering Vectors Mediate Safety-relevant Behaviors in LLMs | 
	Optimizing steering vectors on a single training example can yield vectors that modulate safety-relevant behavior in LLMs across wider datasets. | 
	Steering vectors (SVs) have emerged as a promising approach for interpreting and controlling LLMs, but current methods typically require large contrastive datasets that are often impractical to construct and may capture spurious correlations.
We propose directly optimizing SVs through gradient descent on a single training example, and systematically investigate how these SVs generalize.
We consider several SV optimization techniques and find that the resulting SVs effectively mediate safety-relevant behaviors in multiple models.
Indeed, in experiments on an alignment-faking model, we are able to optimize one-shot SVs that induce harmful behavior on benign examples and whose negations suppress harmful behavior on malign examples.
And in experiments on refusal suppression, we demonstrate that one-shot optimized SVs can transfer across inputs, yielding a HarmBench attack success rate of 96.9%.
Furthermore, we extend work on "emergent misalignment" and show that SVs optimized to induce a model to write vulnerable code cause the model to respond harmfully on unrelated open-ended prompts. 
Finally, we use one-shot SV optimization to investigate how an instruction-tuned LLM recovers from outputting false information, and find that this ability is independent of the model's explicit verbalization that the information was false.
Overall, our findings suggest that optimizing SVs on a single example can mediate a wide array of misaligned behaviors in LLMs.
Code can be found at https://github.com/jacobdunefsky/one-shot-steering-repro and https://github.com/jacobdunefsky/one-shot-steering-misalignment. | 
	[
  "Jacob Dunefsky",
  "Arman Cohan"
] | 
	https://openreview.net/forum?id=teW4nIZ1gy | 
	teW4nIZ1gy | 
	teW4nIZ1gy | 
	[
  "~Jacob_Dunefsky1",
  "~Arman_Cohan1"
] | 
	{
  "value": "COLM 2025"
} | 
	{
  "value": "colmweb.org/COLM/2025/Conference"
} | 
	{
  "value": "/pdf/2883e1c44b20ed2b5c2c316e1e3e525427dc9ea4.pdf"
} | 
	conference | 
	colmweb.org/COLM/2025/Conference | 2,025 | 
	COLM | 
	[
  "steering vectors",
  "interpretability",
  "alignment"
] | 
	I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html | 
	I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html | null | null | 
	@inproceedings{
dunefsky2025oneshot,
title={One-shot Optimized Steering Vectors Mediate Safety-relevant Behaviors in {LLM}s},
author={Jacob Dunefsky and Arman Cohan},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=teW4nIZ1gy}
} | 
	dunefsky|oneshot_optimized_steering_vectors_mediate_safetyrelevant_behaviors_in_llms | null | null | null | null | null | |
| 
	Exploring Sparse Adapters for Scalable Merging of Parameter Efficient Experts | 
	This paper explores sparse adapters as a simpler and more effective building block for modular, parameter-efficient architectures, demonstrating superior model merging performance at scale. | 
	Merging parameter-efficient task experts has recently gained growing attention as a way to build modular architectures that can be rapidly adapted on the fly for specific downstream tasks, without requiring additional fine-tuning. Typically, LoRA serves as the foundational building block of such parameter-efficient modular architectures, leveraging low-rank weight structures to reduce the number of trainable parameters. In this paper, we study the properties of sparse adapters, which train only a subset of weights in the base neural network, as potential building blocks of modular architectures. First, we propose a simple method for training highly effective sparse adapters, which is conceptually simpler than existing methods in the literature and surprisingly outperforms both LoRA and full fine-tuning in our setting. Next, we investigate the merging properties of these sparse adapters by merging adapters for up to 20 natural language processing tasks, thus scaling beyond what is usually studied in the literature. Our findings demonstrate that sparse adapters yield superior in-distribution performance post-merging compared to LoRA or full model merging. Achieving strong held-out performance remains a challenge for all methods considered. | 
	[
  "Samin Yeasar Arnob",
  "Zhan Su",
  "Minseon Kim",
  "Oleksiy Ostapenko",
  "Riyasat Ohib",
  "Esra'a Saleh",
  "Doina Precup",
  "Lucas Caccia",
  "Alessandro Sordoni"
] | 
	https://openreview.net/forum?id=te7UC87Zbw | 
	te7UC87Zbw | 
	te7UC87Zbw | 
	[
  "~Samin_Yeasar_Arnob1",
  "~Zhan_Su1",
  "~Minseon_Kim1",
  "~Oleksiy_Ostapenko1",
  "~Riyasat_Ohib1",
  "~Esra'a_Saleh1",
  "~Doina_Precup1",
  "~Lucas_Caccia1",
  "~Alessandro_Sordoni2"
] | 
	{
  "value": "COLM 2025"
} | 
	{
  "value": "colmweb.org/COLM/2025/Conference"
} | 
	{
  "value": "/pdf/324e2b7704b42932f84dc53ff4f42c928cf4026b.pdf"
} | 
	conference | 
	colmweb.org/COLM/2025/Conference | 2,025 | 
	COLM | 
	[
  "Sparse adapter",
  "Parameter-efficient finetuning",
  "Model merging",
  "LLM"
] | 
	I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html | 
	I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html | null | null | 
	@inproceedings{
arnob2025exploring,
title={Exploring Sparse Adapters for Scalable Merging of Parameter Efficient Experts},
author={Samin Yeasar Arnob and Zhan Su and Minseon Kim and Oleksiy Ostapenko and Riyasat Ohib and Esra'a Saleh and Doina Precup and Lucas Caccia and Alessandro Sordoni},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=te7UC87Zbw}
} | 
	arnob|exploring_sparse_adapters_for_scalable_merging_of_parameter_efficient_experts | null | null | null | null | null | |
| 
	SpectR: Dynamically Composing LM Experts with Spectral Routing | 
	SpectR is an approach for routing and merging existing LoRA models per-token and per-layer, without any additional training or data. | 
	Training large, general-purpose language models poses significant challenges. The growing availability of specialized *expert* models, fine-tuned from pretrained models for specific tasks or domains, offers a promising alternative. Leveraging the potential of these existing expert models in real-world applications requires effective methods to select or merge the models best suited for a given task. This paper introduces SpectR, an approach for dynamically composing expert models at each time step during inference. Notably, our method requires no additional training and enables flexible, token- and layer-wise model combinations. Our experimental results demonstrate that SpectR improves routing accuracy over alternative training-free methods, increasing task performance across expert domains. | 
	[
  "William Fleshman",
  "Benjamin Van Durme"
] | 
	https://openreview.net/forum?id=tK8GHR62EX | 
	tK8GHR62EX | 
	tK8GHR62EX | 
	[
  "~William_Fleshman1",
  "~Benjamin_Van_Durme2"
] | 
	{
  "value": "COLM 2025"
} | 
	{
  "value": "colmweb.org/COLM/2025/Conference"
} | 
	{
  "value": "/pdf/42fc2228d1fe25af7d5337cee4d3d171e0dd3f67.pdf"
} | 
	conference | 
	colmweb.org/COLM/2025/Conference | 2,025 | 
	COLM | 
	[
  "MoE",
  "routing",
  "merging",
  "LoRA",
  "adapters",
  "experts"
] | 
	I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html | 
	I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html | null | null | 
	@inproceedings{
fleshman2025spectr,
title={SpectR: Dynamically Composing {LM} Experts with Spectral Routing},
author={William Fleshman and Benjamin Van Durme},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=tK8GHR62EX}
} | 
	fleshman|spectr_dynamically_composing_lm_experts_with_spectral_routing | null | null | null | null | null | |
| 
	D3: A Dataset for Training Code LMs to Act Diff-by-Diff | 
D3 is a dataset of 8 billion tokens of file-diff-sequence examples sampled from 850k human-written source files, improving LM performance on code synthesis, completion, & editing. | 
	We introduce D3 ("Diverse Data for Diff-by-Diff Coding"), a large dataset for training LMs to iteratively synthesize general-purpose Python source code by generating file diffs. D3 frames code synthesis as a goal-conditioned sequential decision-making problem, where goals, states, and actions are represented by token sequences corresponding to the description of a functionality to add, the current contents of a file, and a file diff, respectively.  The dataset contains 8 billion tokens of instruction + file-state + file-diff-sequence examples sampled from  850,000 human-written Python source files.  To construct D3, we filter, augment, and annotate source code from The Stack by sampling synthetic file-diff sequences with a code analysis tool and labeling each sample with an LLM-generated rationale. In our experiments, we show that mid-training LMs like Llama 3.2 1b and 3b on D3 prior to supervised fine-tuning (SFT) on task-curated data improves performance on synthesis & editing tasks. On benchmarks like HumanEvalSynth and HumanEvalFix, we observe improvements in pass@1 of 3 to 6 points compared to direct SFT. D3-trained models are particularly strong at completing partial human-written solutions to programming problems. | 
	[
  "Ulyana Piterbarg",
  "Kanishk Gandhi",
  "Lerrel Pinto",
  "Noah Goodman",
  "Rob Fergus"
] | 
	https://openreview.net/forum?id=sy71y74U80 | 
	sy71y74U80 | 
	sy71y74U80 | 
	[
  "~Ulyana_Piterbarg1",
  "~Kanishk_Gandhi1",
  "~Lerrel_Pinto1",
  "~Noah_Goodman1",
  "~Rob_Fergus1"
] | 
	{
  "value": "COLM 2025"
} | 
	{
  "value": "colmweb.org/COLM/2025/Conference"
} | 
	{
  "value": "/pdf/1b948345ea00973ec21fbc561eeaeadf9a67a160.pdf"
} | 
	conference | 
	colmweb.org/COLM/2025/Conference | 2,025 | 
	COLM | 
	[
  "data filtering",
  "synthetic data",
  "code synthesis",
  "code editing",
  "file diffs",
  "midtraining",
  "SFT",
  "LM agents"
] | 
	I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html | 
	I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html | null | null | 
	@inproceedings{
piterbarg2025d,
title={D3: A Dataset for Training Code {LM}s to Act Diff-by-Diff},
author={Ulyana Piterbarg and Kanishk Gandhi and Lerrel Pinto and Noah Goodman and Rob Fergus},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=sy71y74U80}
} | 
	piterbarg|d3_a_dataset_for_training_code_lms_to_act_diffbydiff | 
	/attachment/db1dce296e8bcb2b3440badc8ce5c55718a09742.zip | null | null | null | null | |
| 
	Supposedly Equivalent Facts That Aren’t? Entity Frequency in Pre-training Induces Asymmetry in LLMs | 
	This work demonstrates that the asymmetry in how large language models recognise equivalent facts stems from inherent biases in their pre-training data, particularly through differences in entity frequency. | 
	Understanding and mitigating hallucinations in Large Language Models (LLMs) is crucial for ensuring reliable content generation. While previous research has primarily focused on "when" LLMs hallucinate, our work explains "why" and directly links model behaviour to the pre-training data that forms their prior knowledge. Specifically, we demonstrate that an asymmetry exists in the recognition of logically equivalent facts, which can be attributed to frequency discrepancies of entities appearing as subjects versus objects. Given that most pre-training datasets are inaccessible, we leverage the fully open-source $\texttt{OLMo}$ series by indexing its $\texttt{Dolma}$ dataset to estimate entity frequencies. Using relational facts (represented as triples) from $\texttt{Wikidata5M}$, we construct probing datasets to isolate this effect. Our experiments reveal that facts with a high-frequency subject and a low-frequency object are better recognised than their inverse, despite their logical equivalence. The pattern reverses in low-to-high frequency settings, and no statistically significant asymmetry emerges when both entities are high-frequency. These findings highlight the influential role of pre-training data in shaping model predictions and provide insights for inferring the characteristics of pre-training data in closed or partially closed LLMs. | 
	[
  "Yuan He",
  "Bailan He",
  "Zifeng Ding",
  "Alisia Maria Lupidi",
  "Yuqicheng Zhu",
  "Shuo Chen",
  "Caiqi Zhang",
  "Jiaoyan Chen",
  "Yunpu Ma",
  "Volker Tresp",
  "Ian Horrocks"
] | 
	https://openreview.net/forum?id=sX4OoLKSW2 | 
	sX4OoLKSW2 | 
	sX4OoLKSW2 | 
	[
  "~Yuan_He5",
  "~Bailan_He1",
  "~Zifeng_Ding1",
  "~Alisia_Maria_Lupidi1",
  "~Yuqicheng_Zhu1",
  "~Shuo_Chen12",
  "~Caiqi_Zhang2",
  "~Jiaoyan_Chen1",
  "~Yunpu_Ma1",
  "~Volker_Tresp1",
  "~Ian_Horrocks1"
] | 
	{
  "value": "COLM 2025"
} | 
	{
  "value": "colmweb.org/COLM/2025/Conference"
} | 
	{
  "value": "/pdf/469c6230650e22a6dcdd88b9c5a78eb97a921679.pdf"
} | 
	conference | 
	colmweb.org/COLM/2025/Conference | 2,025 | 
	COLM | 
	[
  "Large Language Models",
  "Asymmetry",
  "Equivalent Facts",
  "Entity Frequency",
  "Pre-training Bias",
  "Knowledge Probing",
  "Hallucinations",
  "Knowledge Graphs"
] | 
	I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html | 
	I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html | null | null | 
	@inproceedings{
he2025supposedly,
title={Supposedly Equivalent Facts That Aren{\textquoteright}t? Entity Frequency in Pre-training Induces Asymmetry in {LLM}s},
author={Yuan He and Bailan He and Zifeng Ding and Alisia Maria Lupidi and Yuqicheng Zhu and Shuo Chen and Caiqi Zhang and Jiaoyan Chen and Yunpu Ma and Volker Tresp and Ian Horrocks},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=sX4OoLKSW2}
} | 
	he|supposedly_equivalent_facts_that_arent_entity_frequency_in_pretraining_induces_asymmetry_in_llms | 
	/attachment/e5908026cff40ccda236868d7ddebec90ec6823c.zip | null | null | null | null | |
| 
	Scalable Zeroth-Order Fine-Tuning for Extremely Large Language Models with Limited GPU Memory | 
	ZO2 is a memory-efficient framework that enables zeroth-order fine-tuning of large language models like OPT-175B on a single 18GB GPU. | 
	Fine-tuning large pre-trained LLMs generally demands extensive GPU memory. Traditional first-order optimizers like SGD encounter substantial difficulties due to increased memory requirements from storing activations and gradients during both the forward and backward phases as the model size expands. Alternatively, zeroth-order (ZO) techniques can compute gradients using just forward operations, eliminating the need to store activations. Furthermore, by leveraging CPU capabilities, it's feasible to enhance both the memory and processing power available to a single GPU.
We propose a novel framework, ZO2 (Zeroth-Order Offloading), for efficient zeroth-order fine-tuning of LLMs with only limited GPU memory. Our framework dynamically shifts model parameters between the CPU and GPU as required, optimizing computation flow and maximizing GPU usage by minimizing downtime. This integration of parameter adjustments with ZO's double forward operations reduces unnecessary data movement, enhancing the fine-tuning efficacy. Additionally, our framework supports an innovative low-bit precision approach in AMP (Automatic Mixed Precision) mode to streamline data exchanges between the CPU and GPU.
Employing this approach allows us to fine-tune extraordinarily large models, such as the OPT-175B with 175 billion parameters, on a mere 18GB GPU. Moreover, our framework achieves these results with almost no additional time overhead and absolutely no accuracy loss compared to standard zeroth-order methods. ZO2's code has been open-sourced in https://github.com/liangyuwang/zo2. | 
	[
  "Liangyu Wang",
  "Jie Ren",
  "Hang Xu",
  "Junxiao Wang",
  "Huanyi Xie",
  "David E. Keyes",
  "Di Wang"
] | 
	https://openreview.net/forum?id=s0p9xpORgP | 
	s0p9xpORgP | 
	s0p9xpORgP | 
	[
  "~Liangyu_Wang1",
  "~Jie_Ren4",
  "~Hang_Xu3",
  "~Junxiao_Wang1",
  "~Huanyi_Xie1",
  "~David_E._Keyes1",
  "~Di_Wang1"
] | 
	{
  "value": "COLM 2025"
} | 
	{
  "value": "colmweb.org/COLM/2025/Conference"
} | 
	{
  "value": "/pdf/12cdf783e09bc288a3a95641696d10ece8519c56.pdf"
} | 
	conference | 
	colmweb.org/COLM/2025/Conference | 2,025 | 
	COLM | 
	[
  "Zeroth-Order Optimization",
  "LLMs",
  "Fine-Tuning"
] | 
	I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html | 
	I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html | null | null | 
	@inproceedings{
wang2025scalable,
title={Scalable Zeroth-Order Fine-Tuning for Extremely Large Language Models with Limited {GPU} Memory},
author={Liangyu Wang and Jie Ren and Hang Xu and Junxiao Wang and Huanyi Xie and David E. Keyes and Di Wang},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=s0p9xpORgP}
} | 
	wang|scalable_zerothorder_finetuning_for_extremely_large_language_models_with_limited_gpu_memory | 
	/attachment/d1a8cbfa2abae3229e3889fc60a25827e31d5d45.zip | null | null | null | null | |
| 
	MLGym: A New Framework and Benchmark for Advancing AI Research Agents | 
	MLGym introduces a framework and benchmark suite for evaluating and developing large language model agents on diverse AI research tasks. | 
We introduce MLGym and MLGym-Bench, a new framework and benchmark for evaluating and developing LLM agents on AI research tasks. This is the first Gym environment for machine learning (ML) tasks, enabling research on reinforcement learning (RL) algorithms for training such agents. MLGym-Bench consists of 13 diverse and open-ended AI research tasks spanning domains such as computer vision, natural language processing, reinforcement learning, and game theory. Solving these tasks requires real-world AI research skills such as generating new ideas and hypotheses, creating and processing data, implementing ML methods, training models, running experiments, analyzing the results, and iterating through this process to improve on a given task. We evaluate a number of frontier large language models (LLMs), such as Claude-3.5-Sonnet, Llama-3.1 405B, GPT-4o, o1-preview, and Gemini-1.5 Pro, on our benchmark. Our MLGym framework makes it easy to add new tasks, integrate and evaluate models or agents, generate synthetic data at scale, as well as develop new learning algorithms for training agents on AI research tasks. We find that current frontier models can improve on the given baselines, usually by finding better hyperparameters, but do not generate novel hypotheses, algorithms, architectures, or substantial improvements. We open-source our framework and benchmark to facilitate future research in advancing the AI research capabilities of LLM agents. | 
	[
  "Deepak Nathani",
  "Lovish Madaan",
  "Nicholas Roberts",
  "Nikolay Bashlykov",
  "Ajay Menon",
  "Vincent Moens",
  "Mikhail Plekhanov",
  "Amar Budhiraja",
  "Despoina Magka",
  "Vladislav Vorotilov",
  "Gaurav Chaurasia",
  "Dieuwke Hupkes",
  "Ricardo Silveira Cabral",
  "Tatiana Shavrina",
  "Jakob Nicolaus Foerster",
  "Yoram Bachrach",
  "William Yang Wang",
  "Roberta Raileanu"
] | 
	https://openreview.net/forum?id=ryTr83DxRq | 
	ryTr83DxRq | 
	ryTr83DxRq | 
	[
  "~Deepak_Nathani2",
  "~Lovish_Madaan1",
  "~Nicholas_Roberts2",
  "~Nikolay_Bashlykov1",
  "~Ajay_Menon1",
  "~Vincent_Moens3",
  "~Mikhail_Plekhanov1",
  "~Amar_Budhiraja1",
  "~Despoina_Magka2",
  "~Vladislav_Vorotilov1",
  "~Gaurav_Chaurasia1",
  "~Dieuwke_Hupkes1",
  "~Ricardo_Silveira_Cabral1",
  "~Tatiana_Shavrina1",
  "~Jakob_Nicolaus_Foerster1",
  "~Yoram_Bachrach2",
  "~William_Yang_Wang2",
  "~Roberta_Raileanu2"
] | 
	{
  "value": "COLM 2025"
} | 
	{
  "value": "colmweb.org/COLM/2025/Conference"
} | 
	{
  "value": "/pdf/75f6e6aa5276a0b93fd3859ec7b41c92ee79cea8.pdf"
} | 
	conference | 
	colmweb.org/COLM/2025/Conference | 2,025 | 
	COLM | 
	[
  "LLM Agents",
  "Tool Use",
  "Benchmark",
  "AI Research Agents"
] | 
	I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html | 
	I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html | null | null | 
	@inproceedings{
nathani2025mlgym,
title={{MLG}ym: A New Framework and Benchmark for Advancing {AI} Research Agents},
author={Deepak Nathani and Lovish Madaan and Nicholas Roberts and Nikolay Bashlykov and Ajay Menon and Vincent Moens and Mikhail Plekhanov and Amar Budhiraja and Despoina Magka and Vladislav Vorotilov and Gaurav Chaurasia and Dieuwke Hupkes and Ricardo Silveira Cabral and Tatiana Shavrina and Jakob Nicolaus Foerster and Yoram Bachrach and William Yang Wang and Roberta Raileanu},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=ryTr83DxRq}
} | 
	nathani|mlgym_a_new_framework_and_benchmark_for_advancing_ai_research_agents | null | null | null | null | null | |
| 
	AutoScale: Scale-Aware Data Mixing for Pre-Training LLMs | 
	We propose AutoScale, which automatically predicts compute-optimal data compositions for training LLMs at the target training data scale. | 
	Domain reweighting is an emerging research area aimed at adjusting the relative weights of different data sources to improve the effectiveness and efficiency of LLM pre-training. We show that data mixtures that perform well at smaller scales may not retain their advantage at larger scales, challenging the existing practice of determining competitive mixtures in small-scale experiments and *directly* applying them at much larger scales. To address this, we propose AutoScale, a two-stage, scale-aware data composition framework. First, AutoScale fits a parametric model that predicts the model’s loss under different data compositions, then uses it to find an approximate best allocation at smaller, more manageable budgets. Next, leveraging a novel theoretical analysis of how optimal compositions evolve with scale, AutoScale extrapolates that composition to larger budgets without further retraining. Empirically, AutoScale accelerates convergence and improves downstream performance.
For instance, when pre-training GPT-2 Large, it achieves a 28\% faster perplexity reduction than baselines and up to a 38\% speed-up over unweighted training, while yielding the best average results on various downstream tasks. Overall, our findings illustrate how domain importance shifts with training scale, underscoring the need for scale-dependent data curation in LLM training. 
Our code is open-sourced. | 
	[
  "Feiyang Kang",
  "Yifan Sun",
  "Bingbing Wen",
  "Si Chen",
  "Dawn Song",
  "Rafid Mahmood",
  "Ruoxi Jia"
] | 
	https://openreview.net/forum?id=rujwIvjooA | 
	rujwIvjooA | 
	rujwIvjooA | 
	[
  "~Feiyang_Kang1",
  "~Yifan_Sun8",
  "~Bingbing_Wen1",
  "~Si_Chen5",
  "~Dawn_Song1",
  "~Rafid_Mahmood1",
  "~Ruoxi_Jia1"
] | 
	{
  "value": "COLM 2025"
} | 
	{
  "value": "colmweb.org/COLM/2025/Conference"
} | 
	{
  "value": "/pdf/bcff10571bac9282299d471ff5a373c50aadd772.pdf"
} | 
	conference | 
	colmweb.org/COLM/2025/Conference | 2,025 | 
	COLM | 
	[
  "Large Language Models (LLM)",
  "Data Curation",
  "Domain Reweighting",
  "Scaling Laws",
  "Data-centric AI"
] | 
	I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html | 
	I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html | null | null | 
	@inproceedings{
kang2025autoscale,
title={AutoScale: Scale-Aware Data Mixing for Pre-Training {LLM}s},
author={Feiyang Kang and Yifan Sun and Bingbing Wen and Si Chen and Dawn Song and Rafid Mahmood and Ruoxi Jia},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=rujwIvjooA}
} | 
	kang|autoscale_scaleaware_data_mixing_for_pretraining_llms | null | null | null | null | null | |
| 
	LongProc: Benchmarking Long-Context Language Models on Long Procedural Generation | 
	We introduce LongProc (Long Procedural Generation), a new benchmark that requires both the integration of highly dispersed information and long-form generation. | 
	Existing benchmarks for evaluating long-context language models (LCLMs) primarily focus on long-context recall, requiring models to produce short responses based on a few critical snippets while processing thousands of irrelevant tokens.
We introduce LongProc (Long Procedural Generation), a new benchmark that requires both the integration of highly dispersed information and long-form generation. LongProc consists of six diverse procedural generation tasks, such as extracting structured information from HTML pages into a TSV format and executing complex search procedures to create travel plans. 
These tasks challenge LCLMs by testing their ability to follow detailed procedural instructions, synthesize and reason over dispersed information, and generate structured, long-form outputs (up to 8K tokens). 
Furthermore, as these tasks adhere to deterministic procedures and yield structured outputs, they enable reliable rule-based evaluation. 
We evaluated 23 LCLMs, including instruction-tuned models and recent reasoning models, on LongProc at three difficulty levels, with the maximum number of output tokens set at 500, 2K, and 8K. 
Notably, while all tested models claim a context window size above 32K tokens, open-weight models typically falter on 2K-token tasks, and closed-source models like GPT-4o show significant degradation on 8K-token tasks.
Reasoning models achieve stronger overall performance in long-form generation, benefiting from long CoT training.
Further analysis reveals that LCLMs struggle to maintain long-range coherence in long-form generations.
These findings highlight critical limitations in current LCLMs and suggest substantial room for improvement. | 
	[
  "Xi Ye",
  "Fangcong Yin",
  "Yinghui He",
  "Joie Zhang",
  "Howard Yen",
  "Tianyu Gao",
  "Greg Durrett",
  "Danqi Chen"
] | 
	https://openreview.net/forum?id=ruWC5LIMSo | 
	ruWC5LIMSo | 
	ruWC5LIMSo | 
	[
  "~Xi_Ye2",
  "~Fangcong_Yin1",
  "~Yinghui_He1",
  "~Joie_Zhang1",
  "~Howard_Yen1",
  "~Tianyu_Gao1",
  "~Greg_Durrett1",
  "~Danqi_Chen1"
] | 
	{
  "value": "COLM 2025"
} | 
	{
  "value": "colmweb.org/COLM/2025/Conference"
} | 
	{
  "value": "/pdf/7b139d89c8b6d0a334b5ce42efe3b6c75de299f9.pdf"
} | 
	conference | 
	colmweb.org/COLM/2025/Conference | 2,025 | 
	COLM | 
	[
  "Keywords: Large language models",
  "long-context",
  "natural language processing"
] | 
	I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html | 
	I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html | null | null | 
	@inproceedings{
ye2025longproc,
title={LongProc: Benchmarking Long-Context Language Models on Long Procedural Generation},
author={Xi Ye and Fangcong Yin and Yinghui He and Joie Zhang and Howard Yen and Tianyu Gao and Greg Durrett and Danqi Chen},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=ruWC5LIMSo}
} | 
	ye|longproc_benchmarking_longcontext_language_models_on_long_procedural_generation | 
	/attachment/38208434b51c7f317e26844b704854a0274f2798.zip | null | null | null | null | |
| 
	Unifying Autoregressive and Diffusion-Based Sequence Generation | 
We build upon diffusion language models to (1) make them autoregressive and (2) use a hybrid of the "uniform" and "absorb" token noising processes. | 
	We present significant extensions to diffusion-based sequence generation models, blurring the line with autoregressive language models.
First, we introduce *hyperschedules*, which assign distinct noise schedules to individual token positions, generalizing both autoregressive models (*e.g.*, GPT) and conventional diffusion models (*e.g.*, SEDD, MDLM) as special cases. 
Second, we propose two *hybrid token-wise noising processes* that interpolate between absorbing and uniform processes, enabling the model to fix past mistakes, and we introduce a *novel inference algorithm* that leverages this new feature in a simplified context inspired by MDLM.
To support efficient training and inference, we design attention masks compatible with KV-caching.
Our methods achieve state-of-the-art perplexity and generate diverse, high-quality sequences across standard benchmarks, suggesting a promising path for autoregressive diffusion-based sequence generation.
See code and resources at https://hdlm-colm.github.io/ . | 
	[
  "Nima Fathi",
  "Torsten Scholak",
  "Pierre-Andre Noel"
] | 
	https://openreview.net/forum?id=rgq9BFXSFl | 
	rgq9BFXSFl | 
	rgq9BFXSFl | 
	[
  "~Nima_Fathi1",
  "~Torsten_Scholak1",
  "~Pierre-Andre_Noel1"
] | 
	{
  "value": "COLM 2025"
} | 
	{
  "value": "colmweb.org/COLM/2025/Conference"
} | 
	{
  "value": "/pdf/b3ac37815e8b729f1792c2e3a44ab3d9748d4e20.pdf"
} | 
	conference | 
	colmweb.org/COLM/2025/Conference | 2,025 | 
	COLM | 
	[
  "discrete diffusion",
  "generative diffusion models",
  "language models",
  "autoregressive language models"
] | 
	I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html | 
	I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html | null | null | 
	@inproceedings{
fathi2025unifying,
title={Unifying Autoregressive and Diffusion-Based Sequence Generation},
author={Nima Fathi and Torsten Scholak and Pierre-Andre Noel},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=rgq9BFXSFl}
} | 
	fathi|unifying_autoregressive_and_diffusionbased_sequence_generation | null | null | null | null | null | |
| 
	RankAlign: A Ranking View of the Generator-Validator Gap in Large Language Models | 
We reduce the generator-validator gap (the discrepancy between LLMs' generated answers and their self-verification) with a ranking-based loss function, improving model consistency | 
Although large language models (LLMs) have become more capable and accurate across many tasks, some fundamental sources of unreliability remain in their behavior. One key limitation is their inconsistency in reporting the same information when prompts are changed. In this paper, we consider the discrepancy between a model’s generated answer and its own verification of that answer, the generator-validator gap. We define this gap in a more stringent way than prior work: we expect correlation of scores from a generator and a validator over the entire set of candidate answers, i.e., candidate completions that could possibly arise during ordinary language use without breaking Gricean norms. We show that according to this measure, a large gap exists in various settings, including question answering, lexical semantics tasks, and next-word prediction. We then propose RankAlign, a ranking-based training method, and show that it significantly closes the gap, surpassing all baseline methods. Moreover, this approach generalizes well to out-of-domain tasks and lexical items. | 
	[
  "Juan Diego Rodriguez",
  "Wenxuan Ding",
  "Katrin Erk",
  "Greg Durrett"
] | 
	https://openreview.net/forum?id=rJOkPauru9 | 
	rJOkPauru9 | 
	rJOkPauru9 | 
	[
  "~Juan_Diego_Rodriguez1",
  "~Wenxuan_Ding1",
  "~Katrin_Erk1",
  "~Greg_Durrett1"
] | 
	{
  "value": "COLM 2025"
} | 
	{
  "value": "colmweb.org/COLM/2025/Conference"
} | 
	{
  "value": "/pdf/445e32b946f6a1089e85265a3388edb5f9cac631.pdf"
} | 
	conference | 
	colmweb.org/COLM/2025/Conference | 2,025 | 
	COLM | 
	[
  "consistency",
  "robustness",
  "ranking loss",
  "generalizability",
  "generator-validator gap"
] | 
	I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html | 
	I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html | null | null | 
	@inproceedings{
rodriguez2025rankalign,
title={RankAlign: A Ranking View of the Generator-Validator Gap in Large Language Models},
author={Juan Diego Rodriguez and Wenxuan Ding and Katrin Erk and Greg Durrett},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=rJOkPauru9}
} | 
	rodriguez|rankalign_a_ranking_view_of_the_generatorvalidator_gap_in_large_language_models | null | null | null | null | null | |
| 
	Contextualize-then-Aggregate: Circuits for In-Context Learning in Gemma-2 2B | 
	We use causal interventions to uncover a two-step strategy used to assemble task information from the fewshot examples in a prompt. | 
	In-Context Learning (ICL) is an intriguing ability of large language models (LLMs). Despite a substantial amount of work on its behavioral aspects and how it emerges in miniature setups, it remains unclear which mechanism assembles task information from the individual examples in a fewshot prompt. We use causal interventions to identify information flow in Gemma-2 2B for five naturalistic ICL tasks. We find that the model infers task information using a two-step strategy we call contextualize-then-aggregate: In the lower layers, the model builds up representations of individual fewshot examples, which are contextualized by preceding examples through connections between fewshot input and output tokens across the sequence. In the higher layers, these representations are aggregated to identify the task and prepare prediction of the next output. The importance of the contextualization step differs between tasks, and it may become more important in the presence of ambiguous examples. Overall, by providing rigorous causal analysis, our results shed light on the mechanisms through which ICL happens in language models. | 
	[
  "Aleksandra Bakalova",
  "Yana Veitsman",
  "Xinting Huang",
  "Michael Hahn"
] | 
	https://openreview.net/forum?id=rGNAyHReSg | 
	rGNAyHReSg | 
	rGNAyHReSg | 
	[
  "~Aleksandra_Bakalova2",
  "~Yana_Veitsman1",
  "~Xinting_Huang2",
  "~Michael_Hahn1"
] | 
	{
  "value": "COLM 2025"
} | 
	{
  "value": "colmweb.org/COLM/2025/Conference"
} | 
	{
  "value": "/pdf/0e90ec68695e0ca24ed0d342de1795113dc8b7fa.pdf"
} | 
	conference | 
	colmweb.org/COLM/2025/Conference | 2,025 | 
	COLM | 
	[
  "In-Context Learning",
  "Mechanistic Interpretability"
] | 
	I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html | 
	I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html | null | null | 
	@inproceedings{
bakalova2025contextualizethenaggregate,
title={Contextualize-then-Aggregate: Circuits for In-Context Learning in Gemma-2 2B},
author={Aleksandra Bakalova and Yana Veitsman and Xinting Huang and Michael Hahn},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=rGNAyHReSg}
} | 
	bakalova|contextualizethenaggregate_circuits_for_incontext_learning_in_gemma2_2b | 
	/attachment/264eaf273f446b8b60871c95b423bf71648a3819.zip | null | null | null | null | |
| 
	When Splitting Makes Stronger: A Theoretical and Empirical Analysis of Divide-and-Conquer Prompting in LLMs | 
	This paper examines when divide-and-conquer (DaC) prompting is beneficial for LLMs. Through theoretical and empirical analysis, we identify specific task types where breaking inputs into sub-parts improves performance. | 
	Foundation models, particularly Large Language Models (LLMs), have garnered significant interest due to their wide range of applications.  Yet these models demonstrate notable weaknesses when confronted with tasks involving iterative sub-problems or deliberately misleading content—exemplified by complex arithmetic operations and comprehensive fake news evaluation. 
Conventional instructional prompting frequently produces flawed outputs in these scenarios. While research has established that advanced techniques such as Chain-of-Thoughts and Least-to-Most methodologies can dramatically enhance LLM performance, emerging investigation indicates that a more streamlined divide-and-conquer (DaC) approach—which systematically partitions input sequences into discrete components—can yield remarkable improvements for particular problem classes like misinformation assessment. Our investigation rigorously examines the efficacy of DaC prompting strategies and precisely delineates the task characteristics that benefit most from this methodology. Through comprehensive theoretical analysis, we establish formal guarantees for performance enhancement in specifically identified task categories. We validate our theoretical framework through focused empirical studies on large integer multiplication and factual verification tasks, where experimental outcomes robustly confirm our analytical predictions, demonstrating DaC's practical superiority in these challenging domains. | 
	[
  "Yizhou Zhang",
  "Defu Cao",
  "Lun Du",
  "Qiang Fu",
  "Yan Liu"
] | 
	https://openreview.net/forum?id=rAR7iPI8Kh | 
	rAR7iPI8Kh | 
	rAR7iPI8Kh | 
	[
  "~Yizhou_Zhang3",
  "~Defu_Cao1",
  "~Lun_Du1",
  "~Qiang_Fu7",
  "~Yan_Liu1"
] | 
	{
  "value": "COLM 2025"
} | 
	{
  "value": "colmweb.org/COLM/2025/Conference"
} | 
	{
  "value": "/pdf/e08ca1a770c51d4f6e6353307b900deabf548fcb.pdf"
} | 
	conference | 
	colmweb.org/COLM/2025/Conference | 2,025 | 
	COLM | 
	[
  "Program-guided Prompt",
  "Divide-and-Conquer",
  "Foundation Model",
  "Large Language Models",
  "Deceptive content"
] | 
	I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html | 
	I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html | null | null | 
	@inproceedings{
zhang2025when,
title={When Splitting Makes Stronger: A Theoretical and Empirical Analysis of Divide-and-Conquer Prompting in {LLM}s},
author={Yizhou Zhang and Defu Cao and Lun Du and Qiang Fu and Yan Liu},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=rAR7iPI8Kh}
} | 
	zhang|when_splitting_makes_stronger_a_theoretical_and_empirical_analysis_of_divideandconquer_prompting_in_llms | null | null | null | null | null | |
| 
	ScholarCopilot: Training Large Language Models for Academic Writing with Accurate Citations | 
	ScholarCopilot is a language model that combines text generation and citation retrieval for academic writing | 
	Academic writing requires both coherent text generation and precise citation of relevant literature. Although recent Retrieval-Augmented Generation (RAG) systems have significantly improved factual accuracy in general-purpose text generation, their ability to support professional academic writing remains limited. In this work, we introduce ScholarCopilot, a unified framework designed to enhance existing large language models for generating professional academic articles with accurate and contextually relevant citations.
ScholarCopilot dynamically determines when to retrieve scholarly references by generating a retrieval token [RET], which is then used to query a citation database. The retrieved references are fed into the model to augment the generation process.
We jointly optimize both the generation and citation tasks within a single framework to improve efficiency. Our model is built upon Qwen-2.5-7B and trained on 500K papers from arXiv. It achieves a top-1 retrieval accuracy of 40.1% on our evaluation dataset, outperforming baselines such as E5-Mistral-7B-Instruct (15.0%) and BM25 (9.8%).
On a dataset of 1,000 academic writing samples, ScholarCopilot scores 16.2/25 in generation quality--measured across relevance, coherence, academic rigor, completeness, and innovation--significantly surpassing all existing models, including much larger ones like the Retrieval-Augmented Qwen2.5-72B-Instruct. Human studies further demonstrate that ScholarCopilot, despite being a 7B model, significantly outperforms ChatGPT, achieving 100% preference in citation quality and over 70% in overall usefulness. | 
	[
  "Yubo Wang",
  "Xueguang Ma",
  "Ping Nie",
  "Huaye Zeng",
  "Zhiheng Lyu",
  "Yuxuan Zhang",
  "Benjamin Schneider",
  "Yi Lu",
  "Xiang Yue",
  "Wenhu Chen"
] | 
	https://openreview.net/forum?id=r8nloXtluk | 
	r8nloXtluk | 
	r8nloXtluk | 
	[
  "~Yubo_Wang9",
  "~Xueguang_Ma1",
  "~Ping_Nie1",
  "~Huaye_Zeng2",
  "~Zhiheng_Lyu2",
  "~Yuxuan_Zhang12",
  "~Benjamin_Schneider1",
  "~Yi_Lu9",
  "~Xiang_Yue1",
  "~Wenhu_Chen3"
] | 
	{
  "value": "COLM 2025"
} | 
	{
  "value": "colmweb.org/COLM/2025/Conference"
} | 
	{
  "value": "/pdf/7ea1b2c5ff292ac8983dfe38db0674217a2fa774.pdf"
} | 
	conference | 
	colmweb.org/COLM/2025/Conference | 2,025 | 
	COLM | 
	[
  "RAG"
] | 
	I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html | 
	I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html | null | null | 
	@inproceedings{
wang2025scholarcopilot,
title={ScholarCopilot: Training Large Language Models for Academic Writing with Accurate Citations},
author={Yubo Wang and Xueguang Ma and Ping Nie and Huaye Zeng and Zhiheng Lyu and Yuxuan Zhang and Benjamin Schneider and Yi Lu and Xiang Yue and Wenhu Chen},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=r8nloXtluk}
} | 
	wang|scholarcopilot_training_large_language_models_for_academic_writing_with_accurate_citations | null | null | null | null | null | |
| 
	TRELLIS: Learning to Compress Key-Value Memory in Attention Models | 
	This paper introduces a novel approach to efficiently compress the K-V cache into a fixed number of slots | 
Transformers, while powerful, suffer from quadratic computational complexity and the ever-growing Key-Value (KV) cache of the attention mechanism. This paper introduces Trellis, a novel Transformer architecture with bounded memory that learns how to compress its key-value memory dynamically at test time. Trellis replaces the standard KV cache with a fixed-size memory and trains a two-pass recurrent compression mechanism to store new keys and values into memory. To achieve this, it leverages an online gradient descent procedure with a forget gate, enabling the compressed memory to be updated recursively while learning to retain important contextual information from incoming tokens at test time. Extensive experiments on language modeling, common-sense reasoning, recall-intensive tasks, and time series show that the proposed architecture outperforms strong baselines. Notably, its performance gains increase as the sequence length increases, highlighting its potential for long-context applications. | 
	[
  "Mahdi Karami",
  "Ali Behrouz",
  "Praneeth Kacham",
  "Vahab Mirrokni"
] | 
	https://openreview.net/forum?id=r61s1FNYlj | 
	r61s1FNYlj | 
	r61s1FNYlj | 
	[
  "~Mahdi_Karami2",
  "~Ali_Behrouz1",
  "~Praneeth_Kacham1",
  "~Vahab_Mirrokni2"
] | 
	{
  "value": "COLM 2025"
} | 
	{
  "value": "colmweb.org/COLM/2025/Conference"
} | 
	{
  "value": "/pdf/05db279bb86af77d3beccbd36e9cbc4df6822121.pdf"
} | 
	conference | 
	colmweb.org/COLM/2025/Conference | 2,025 | 
	COLM | 
	[
  "Sequence Models",
  "Language models",
  "Recurrent Neural Nets",
  "Test Time Training"
] | 
	I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html | 
	I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html | null | null | 
	@inproceedings{
karami2025trellis,
title={{TRELLIS}: Learning to Compress Key-Value Memory in Attention Models},
author={Mahdi Karami and Ali Behrouz and Praneeth Kacham and Vahab Mirrokni},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=r61s1FNYlj}
} | 
	karami|trellis_learning_to_compress_keyvalue_memory_in_attention_models | null | null | null | null | null | |
| 
	LV-Eval: A Balanced Long-Context Benchmark with 5 Length Levels Up to 256K | 
LV-Eval is a long-context benchmark with 5 length levels up to 256K. It is designed to be challenging, to support controllable comparison, and to mitigate the knowledge leakage issue in evaluation. | 
	State-of-the-art large language models (LLMs) are now claiming remarkable supported context lengths of 256k or even more. In contrast, the average context lengths of mainstream benchmarks are insufficient (5k-21k), and they suffer from potential knowledge leakage and inaccurate metrics, resulting in biased evaluation. This paper introduces LV-Eval, a challenging long-context benchmark with five length levels (16k, 32k, 64k, 128k, and 256k) reaching up to 256k words. LV-Eval features two main tasks, single-hop QA and multi-hop QA, comprising 11 bilingual datasets. The design of LV-Eval has incorporated three key techniques, namely confusing facts insertion, keyword and phrase replacement, and keyword-recall-based metric design. The advantages of LV-Eval include controllable evaluation across different context lengths, challenging test instances with confusing facts, mitigated knowledge leakage, and more objective evaluations. We evaluate 15 LLMs on LV-Eval and conduct ablation studies on the benchmarking techniques. The results reveal that:
(i) Moonshot-v1 and recent large-scale open-source models, such as Qwen-2.5-72B and Llama-3.1-70B, achieve the highest performance on LV-Eval, particularly at lengths below $64k$. (ii) Models exhibit distinct score trends. For example, GLM-4-9B-128k, Yi-6B-200k, and Llama3-8B-1M exhibit a relatively gentle degradation of performance, but their absolute performances may not necessarily be higher than those of LLMs with shorter context lengths. (iii) LLMs' performances can significantly degrade in the presence of confusing information, especially in the pressure test of "needle in a haystack". (iv) Issues related to knowledge leakage and inaccurate metrics introduce bias in evaluation, and these concerns are alleviated in LV-Eval. | 
	[
  "Tao Yuan",
  "Xuefei Ning",
  "Dong Zhou",
  "Zhijie Yang",
  "Shiyao Li",
  "Minghui Zhuang",
  "Zheyue Tan",
  "Zhuyu Yao",
  "Dahua Lin",
  "Boxun Li",
  "Guohao Dai",
  "Shengen Yan",
  "Yu Wang"
] | 
	https://openreview.net/forum?id=r0AXK5Cnhr | 
	r0AXK5Cnhr | 
	r0AXK5Cnhr | 
	[
  "~Tao_Yuan7",
  "~Xuefei_Ning1",
  "~Dong_Zhou8",
  "~Zhijie_Yang3",
  "~Shiyao_Li2",
  "~Minghui_Zhuang1",
  "~Zheyue_Tan1",
  "~Zhuyu_Yao1",
  "~Dahua_Lin1",
  "~Boxun_Li2",
  "~Guohao_Dai4",
  "~Shengen_Yan1",
  "~Yu_Wang3"
] | 
	{
  "value": "COLM 2025"
} | 
	{
  "value": "colmweb.org/COLM/2025/Conference"
} | 
	{
  "value": "/pdf/ebb97f30ec2136677483b21d1a5838ec8cc0a23a.pdf"
} | 
	conference | 
	colmweb.org/COLM/2025/Conference | 2,025 | 
	COLM | 
	[
  "large language model",
  "long-context benchmark",
  "knowledge leakage mitigation"
] | 
	I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html | 
	I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html | null | null | 
	@inproceedings{
yuan2025lveval,
title={{LV}-Eval: A Balanced Long-Context Benchmark with 5 Length Levels Up to 256K},
author={Tao Yuan and Xuefei Ning and Dong Zhou and Zhijie Yang and Shiyao Li and Minghui Zhuang and Zheyue Tan and Zhuyu Yao and Dahua Lin and Boxun Li and Guohao Dai and Shengen Yan and Yu Wang},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=r0AXK5Cnhr}
} | 
	yuan|lveval_a_balanced_longcontext_benchmark_with_5_length_levels_up_to_256k | null | null | null | null | null | |
| 
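As a rough illustration of the keyword-recall-based metric named among LV-Eval's three design techniques above, the sketch below gates a word-overlap F1 score on whether enough gold keywords are recalled. The gating threshold and the F1 second stage are assumptions, not the benchmark's exact formula.

```python
# Hedged sketch of a keyword-recall-based metric; threshold and F1 stage are assumptions.
def keyword_recall(prediction: str, keywords: list[str]) -> float:
    """Fraction of gold answer keywords that appear in the model prediction."""
    pred = prediction.lower()
    hits = sum(1 for kw in keywords if kw.lower() in pred)
    return hits / len(keywords) if keywords else 0.0

def gated_score(prediction: str, gold_answer: str, keywords: list[str], threshold: float = 0.5) -> float:
    """Only reward surface overlap with the gold answer if enough answer keywords are recalled."""
    if keyword_recall(prediction, keywords) < threshold:
        return 0.0  # prediction misses the key facts, so overlap alone is not rewarded
    pred_tokens, gold_tokens = prediction.lower().split(), gold_answer.lower().split()
    common = sum(min(pred_tokens.count(t), gold_tokens.count(t)) for t in set(gold_tokens))
    if not common:
        return 0.0
    precision, recall = common / len(pred_tokens), common / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

print(gated_score("The treaty was signed in Paris in 1898.", "Paris, 1898", ["Paris", "1898"]))
```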
	Texture or Semantics? Vision-Language Models Get Lost in Font Recognition | 
	We introduce a special two-level benchmark to assess VLMs’ font recognition abilities. The results indicate that VLMs perform poorly on font recognition tasks and are easily influenced by visual cues rather than semantic understanding. | 
	Modern Vision-Language Models (VLMs) exhibit remarkable visual and linguistic capabilities, achieving impressive performance in various tasks such as image recognition and object localization. However, their effectiveness in fine-grained tasks remains an open question. In everyday scenarios, individuals encountering design materials, such as magazines, typography tutorials, research papers, or branding content, may wish to identify aesthetically pleasing fonts used in the text. Given their multimodal capabilities and free accessibility, many VLMs are often considered potential tools for font recognition. This raises a fundamental question: Do VLMs truly possess the capability to recognize fonts? To investigate this, we introduce the Font Recognition Benchmark (FRB), a compact and well-structured dataset comprising 15 commonly used fonts. FRB includes two versions: (i) an easy version, where 10 sentences are rendered in different fonts, and (ii) a hard version, where each text sample consists of the names of the 15 fonts themselves, introducing a Stroop effect that challenges model perception. Through extensive evaluation of various VLMs on font recognition tasks, we arrive at the following key findings: (i) Current VLMs exhibit limited font recognition capabilities, with many state-of-the-art models failing to achieve satisfactory performance and being easily affected by the Stroop effect introduced by textual information. (ii) Few-shot learning and Chain-of-Thought (CoT) prompting provide minimal benefits in improving font recognition accuracy across different VLMs. (iii) Attention analysis sheds light on the inherent limitations of VLMs in capturing semantic features. | 
	[
  "Zhecheng Li",
  "Guoxian Song",
  "Yujun Cai",
  "Zhen Xiong",
  "Junsong Yuan",
  "Yiwei Wang"
] | 
	https://openreview.net/forum?id=qiLJVU4I8P | 
	qiLJVU4I8P | 
	qiLJVU4I8P | 
	[
  "~Zhecheng_Li1",
  "~Guoxian_Song1",
  "~Yujun_Cai1",
  "~Zhen_Xiong2",
  "~Junsong_Yuan2",
  "~Yiwei_Wang2"
] | 
	{
  "value": "COLM 2025"
} | 
	{
  "value": "colmweb.org/COLM/2025/Conference"
} | 
	{
  "value": "/pdf/a6aed302b07a25b72eadc5f77edb021f5876fa54.pdf"
} | 
	conference | 
	colmweb.org/COLM/2025/Conference | 2,025 | 
	COLM | 
	[
  "vision language models",
  "font recognition",
  "texture or semantics",
  "stroop effect"
] | 
	I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html | 
	I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html | null | null | 
	@inproceedings{
li2025texture,
title={Texture or Semantics? Vision-Language Models Get Lost in Font Recognition},
author={Zhecheng Li and Guoxian Song and Yujun Cai and Zhen Xiong and Junsong Yuan and Yiwei Wang},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=qiLJVU4I8P}
} | 
	li|texture_or_semantics_visionlanguage_models_get_lost_in_font_recognition | null | null | null | null | null | |
| 
	REM: Evaluating LLM Embodied Spatial Reasoning through Multi-Frame Trajectories | 
	We introduce REM, a benchmark revealing that current multimodal language models lack fundamental abilities in spatial reasoning, object permanence, and tracking objects over changing viewpoints. | 
	Humans build viewpoint-independent cognitive maps through navigation, enabling intuitive reasoning about object permanence and spatial relations. We argue that multimodal large language models (MLLMs), despite extensive video training, lack this fundamental spatial reasoning capability, a critical limitation for embodied applications. To demonstrate these limitations and drive research, we introduce REM: Reasoning over Embodied Multi-Frame Trajectories, a benchmark using controllable 3D environments for long-horizon embodied spatial reasoning. REM systematically evaluates key aspects like object permanence/distinction, spatial relationships, and numerical tracking across dynamic embodied viewpoints. Our evaluation shows that the best-performing current models exhibit promising overall performance, but become increasingly unreliable at even moderate complexity levels easily handled by humans. These findings highlight challenges MLLMs face in developing robust spatial representations from sequential visual input. Consequently, REM provides targeted metrics and diagnostics to foster improved spatial understanding in future models. | 
	[
  "Jacob Thompson",
  "Emiliano Garcia-Lopez",
  "Yonatan Bisk"
] | 
	https://openreview.net/forum?id=qbWpEufkqk | 
	qbWpEufkqk | 
	qbWpEufkqk | 
	[
  "~Jacob_Thompson1",
  "~Emiliano_Garcia-Lopez1",
  "~Yonatan_Bisk1"
] | 
	{
  "value": "COLM 2025"
} | 
	{
  "value": "colmweb.org/COLM/2025/Conference"
} | 
	{
  "value": "/pdf/f0632dcb2d7ef41f07f0cf67492a41d5a8622c95.pdf"
} | 
	conference | 
	colmweb.org/COLM/2025/Conference | 2,025 | 
	COLM | 
	[
  "Multimodal Reasoning",
  "Embodied AI",
  "VLMs",
  "Spatial Reasoning",
  "Object Permanence",
  "Visuospatial Representation",
  "Large Language Models (LLMs)",
  "Egocentric Vision",
  "Video Understanding",
  "Evaluation Benchmarks",
  "Synthetic Environments",
  "Long-horizon Reasoning",
  "Numerical Tracking",
  "Temporal Ordering",
  "Scene Understanding",
  "LLM Limitations"
] | 
	I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html | 
	I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html | null | null | 
	@inproceedings{
thompson2025rem,
title={{REM}: Evaluating {LLM} Embodied Spatial Reasoning through Multi-Frame Trajectories},
author={Jacob Thompson and Emiliano Garcia-Lopez and Yonatan Bisk},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=qbWpEufkqk}
} | 
	thompson|rem_evaluating_llm_embodied_spatial_reasoning_through_multiframe_trajectories | null | null | null | null | null | |
| 
	ALOPE: Adaptive Layer Optimization for Translation Quality Estimation using Large Language Models | 
	This paper presents ALOPE, an adaptive layer-optimization framework for LLM-based translation quality estimation, enhancing cross-lingual transfer learning through layer-wise adaptation, dynamic weighting, and multi-head regression. | 
	Large Language Models (LLMs) have shown remarkable performance across a wide range of natural language processing tasks. Quality Estimation (QE) for Machine Translation (MT), which assesses the quality of a source-target pair without relying on reference translations, remains a challenging cross-lingual task for LLMs. The challenges stem from the inherent limitations of existing LLM-based QE systems, which are pre-trained for causal language modelling rather than regression-specific tasks, and are further exacerbated for low-resource languages given the pre-training data distribution. This paper introduces ALOPE, an adaptive layer-optimization framework designed to enhance LLM-based QE by restructuring Transformer representations through layer-wise adaptation for improved regression-based prediction. Our framework integrates low-rank adapters (LoRA) with regression task heads, leveraging selected pre-trained Transformer layers for improved cross-lingual alignment. In addition to the layer-specific adaptation, ALOPE introduces two strategies—dynamic weighting, which adaptively combines representations from multiple layers, and multi-head regression, which aggregates regression losses from multiple heads for QE. Our framework shows improvements over various existing LLM-based QE approaches. Empirical evidence suggests that intermediate Transformer layers in LLMs provide contextual representations that are more aligned with the cross-lingual nature of the QE task. We make the resulting models and framework code publicly available for further research, allowing existing LLM-based MT frameworks to be extended with QE capabilities. | 
	[
  "Archchana Sindhujan",
  "Shenbin Qian",
  "Chan Chi Chun Matthew",
  "Constantin Orasan",
  "Diptesh Kanojia"
] | 
	https://openreview.net/forum?id=qSFr5wJPGc | 
	qSFr5wJPGc | 
	qSFr5wJPGc | 
	[
  "~Archchana_Sindhujan1",
  "~Shenbin_Qian1",
  "~Chan_Chi_Chun_Matthew1",
  "~Constantin_Orasan1",
  "~Diptesh_Kanojia1"
] | 
	{
  "value": "COLM 2025"
} | 
	{
  "value": "colmweb.org/COLM/2025/Conference"
} | 
	{
  "value": "/pdf/506fd31916a1a4ae8559855c5d93d0291fd570c2.pdf"
} | 
	conference | 
	colmweb.org/COLM/2025/Conference | 2,025 | 
	COLM | 
	[
  "Quality Estimation",
  "Machine Translation",
  "Translation Quality"
] | 
	I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html | 
	I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html | null | null | 
	@inproceedings{
sindhujan2025alope,
title={{ALOPE}: Adaptive Layer Optimization for Translation Quality Estimation using Large Language Models},
author={Archchana Sindhujan and Shenbin Qian and Chan Chi Chun Matthew and Constantin Orasan and Diptesh Kanojia},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=qSFr5wJPGc}
} | 
	sindhujan|alope_adaptive_layer_optimization_for_translation_quality_estimation_using_large_language_models | null | null | null | null | null | |
| 
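The dynamic-weighting strategy described in the ALOPE entry above can be pictured with a small sketch: hidden states from several selected Transformer layers are mixed with learned softmax weights and passed to a regression head that outputs a quality score. The layer selection, pooling, and head shape here are illustrative assumptions rather than the paper's configuration.

```python
# Minimal sketch of dynamic layer weighting for regression-based QE; names and shapes are assumptions.
import torch
import torch.nn as nn

class DynamicLayerRegressor(nn.Module):
    def __init__(self, n_layers: int, d_model: int):
        super().__init__()
        self.layer_logits = nn.Parameter(torch.zeros(n_layers))  # learned per-layer importance
        self.head = nn.Linear(d_model, 1)                         # regression head for the quality score

    def forward(self, layer_states: torch.Tensor) -> torch.Tensor:
        # layer_states: (n_layers, batch, seq, d_model), e.g. selected intermediate layers of the LLM
        weights = torch.softmax(self.layer_logits, dim=0).view(-1, 1, 1, 1)
        mixed = (weights * layer_states).sum(dim=0)   # adaptive combination across layers
        pooled = mixed.mean(dim=1)                    # mean-pool over the sequence
        return self.head(pooled).squeeze(-1)          # one predicted quality score per segment

# Toy usage with random hidden states standing in for LoRA-adapted LLM layer outputs.
model = DynamicLayerRegressor(n_layers=4, d_model=64)
scores = model(torch.randn(4, 2, 10, 64))   # -> tensor of shape (2,)
```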
	Hidden in plain sight: VLMs overlook their visual representations | 
	VLMs perform worse on vision-centric tasks than their underlying vision models, relying on their language priors instead. Improving their integration of visual data—not just adding stronger vision backbones—is key to unlocking their full potential. | 
	Language provides a natural interface to specify and evaluate performance on visual tasks. 
To realize this possibility, vision language models (VLMs) must successfully integrate visual and linguistic information.
Our work compares VLMs to a direct readout of their visual encoders to understand their ability to integrate across these modalities. Across a series of vision-centric benchmarks (e.g., depth estimation, correspondence), we find that VLMs perform substantially worse than their visual encoders, dropping to near-chance performance. We investigate these results through a series of analyses across the entire VLM: namely 1) the degradation of vision representations, 2) brittleness to task prompt, and 3) the language model's role in solving the task. We find that the bottleneck in performing these vision-centric tasks lies in this third category; VLMs are not effectively using visual information easily accessible throughout the \textit{entire} model, and they inherit their language biases. 
   Our work helps diagnose the failure modes of open-source VLMs, and presents a series of evaluations useful for future investigations into visual understanding within VLMs. | 
	[
  "Stephanie Fu",
  "tyler bonnen",
  "Devin Guillory",
  "Trevor Darrell"
] | 
	https://openreview.net/forum?id=qQb1JLrwol | 
	qQb1JLrwol | 
	qQb1JLrwol | 
	[
  "~Stephanie_Fu1",
  "~tyler_bonnen1",
  "~Devin_Guillory1",
  "~Trevor_Darrell2"
] | 
	{
  "value": "COLM 2025"
} | 
	{
  "value": "colmweb.org/COLM/2025/Conference"
} | 
	{
  "value": "/pdf/74c909720993b9aa6a3d8e2e6f98ea5bea6cec00.pdf"
} | 
	conference | 
	colmweb.org/COLM/2025/Conference | 2,025 | 
	COLM | 
	[
  "vision",
  "language",
  "representation",
  "benchmark",
  "encoder",
  "vlm"
] | 
	I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html | 
	I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html | null | null | 
	@inproceedings{
fu2025hidden,
title={Hidden in plain sight: {VLM}s overlook their visual representations},
author={Stephanie Fu and tyler bonnen and Devin Guillory and Trevor Darrell},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=qQb1JLrwol}
} | 
	fu|hidden_in_plain_sight_vlms_overlook_their_visual_representations | null | true | null | null | null | |
| 
	Interpreting the linear structure of vision-language model embedding spaces | 
	We train and release sparse autoencoders on the joint vision-language spaces of four models, and examine what the linear concepts reveal about the joint organization of meaning and modality | 
	Vision-language models encode images and text in a joint space, minimizing the distance between corresponding image and text pairs. How are language and images organized in this joint space, and how do the models encode meaning and modality? To investigate this, we train and release sparse autoencoders (SAEs) on the embedding spaces of four vision-language models (CLIP, SigLIP, SigLIP2, and AIMv2). SAEs approximate model embeddings as sparse linear combinations of learned directions, or ``concepts''. We find that, compared to other methods of linear feature learning, SAEs are better at reconstructing the real embeddings, while also retaining the most sparsity. Retraining SAEs with different seeds or different data diets leads to two findings: the rare, specific concepts captured by the SAEs are liable to change drastically, but we also show that commonly-activating concepts are remarkably stable across runs. Interestingly, while most concepts activate primarily for one modality, we find they are not merely encoding modality per se. Many are almost orthogonal to the subspace that defines modality, and the concept directions do not function as good modality classifiers, suggesting that they encode cross-modal semantics. To quantify this bridging behavior, we introduce the Bridge Score, a metric that identifies concept pairs which are both co-activated across aligned image-text inputs and geometrically aligned in the shared space. This reveals that even single-modality concepts can collaborate to support cross-modal integration. We release interactive demos of the SAEs for all models, allowing researchers to explore the organization of the concept spaces. Overall, our findings uncover a sparse linear structure within VLM embedding spaces that is shaped by modality, yet stitched together through latent bridges—offering new insight into how multimodal meaning is constructed. | 
	[
  "Isabel Papadimitriou",
  "Huangyuan Su",
  "Thomas Fel",
  "Sham M. Kakade",
  "Stephanie Gil"
] | 
	https://openreview.net/forum?id=qPsmGjpq1j | 
	qPsmGjpq1j | 
	qPsmGjpq1j | 
	[
  "~Isabel_Papadimitriou1",
  "~Huangyuan_Su1",
  "~Thomas_Fel2",
  "~Sham_M._Kakade1",
  "~Stephanie_Gil2"
] | 
	{
  "value": "COLM 2025"
} | 
	{
  "value": "colmweb.org/COLM/2025/Conference"
} | 
	{
  "value": "/pdf/dd502e2e07e3a09b56182f616adadaa9c98e348c.pdf"
} | 
	conference | 
	colmweb.org/COLM/2025/Conference | 2,025 | 
	COLM | 
	[
  "vision-language models",
  "interpretability",
  "cross-modality meaning"
] | 
	I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html | 
	I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html | null | null | 
	@inproceedings{
papadimitriou2025interpreting,
title={Interpreting the linear structure of vision-language model embedding spaces},
author={Isabel Papadimitriou and Huangyuan Su and Thomas Fel and Sham M. Kakade and Stephanie Gil},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=qPsmGjpq1j}
} | 
	papadimitriou|interpreting_the_linear_structure_of_visionlanguage_model_embedding_spaces | null | null | null | null | null | |
| 
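For readers unfamiliar with the setup in the entry above, the following is a minimal sketch of a sparse autoencoder that reconstructs embeddings as sparse linear combinations of learned concept directions, assuming a standard ReLU encoder with an L1 penalty. The architecture and hyperparameters are generic illustrations, not the authors' released SAEs.

```python
# Generic ReLU sparse autoencoder sketch over (image or text) embeddings; all settings are assumptions.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_embed: int, n_concepts: int):
        super().__init__()
        self.encoder = nn.Linear(d_embed, n_concepts)
        self.decoder = nn.Linear(n_concepts, d_embed)

    def forward(self, x: torch.Tensor):
        codes = torch.relu(self.encoder(x))   # sparse, non-negative concept activations
        recon = self.decoder(codes)           # embedding approximated as a linear combination of concepts
        return recon, codes

def sae_loss(x, recon, codes, l1_coeff: float = 1e-3):
    # reconstruction fidelity plus a sparsity pressure on the concept activations
    return torch.mean((recon - x) ** 2) + l1_coeff * codes.abs().mean()

# Toy usage on random vectors standing in for VLM embeddings.
sae = SparseAutoencoder(d_embed=512, n_concepts=4096)
x = torch.randn(32, 512)
recon, codes = sae(x)
loss = sae_loss(x, recon, codes)
loss.backward()
```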
	SmolVLM: Redefining small and efficient multimodal models | 
	We explore extremely efficient VLMs starting at 256M parameters, which run with less than 1GB | 
	Large Vision-Language Models (VLMs) deliver exceptional performance but require significant computational resources, limiting their deployment on mobile and edge devices. Smaller VLMs typically mirror design choices of larger models, such as extensive image tokenization, leading to inefficient GPU memory usage and constrained practicality for on-device applications.
We introduce SmolVLM, a series of compact multimodal models specifically engineered for resource-efficient inference. We systematically explore architectural configurations, tokenization strategies, and data curation optimized for low computational overhead. Through this, we identify key design choices that yield substantial performance gains on both image and video tasks while staying within minimal memory footprints.
Our smallest model, SmolVLM-256M, uses less than 1GB GPU memory during inference and outperforms the 300-times larger Idefics-80B model, despite an 18-month development gap. Our largest model, at 2.2B parameters, rivals state-of-the-art VLMs consuming twice the GPU memory. SmolVLM models extend beyond static images, demonstrating robust video comprehension capabilities.
Our results emphasize that strategic architectural optimizations, aggressive yet efficient tokenization, and carefully curated training data significantly enhance multimodal performance, facilitating practical, energy-efficient deployments at significantly smaller scales. | 
	[
  "Andrés Marafioti",
  "Orr Zohar",
  "Miquel Farré",
  "Merve noyan",
  "Elie Bakouch",
  "Pedro Manuel Cuenca Jiménez",
  "Cyril Zakka",
  "Loubna Ben allal",
  "Anton Lozhkov",
  "Nouamane Tazi",
  "Vaibhav Srivastav",
  "Joshua Lochner",
  "Hugo Larcher",
  "Mathieu Morlon",
  "Lewis Tunstall",
  "Leandro Von Werra",
  "Thomas Wolf"
] | 
	https://openreview.net/forum?id=qMUbhGUFUb | 
	qMUbhGUFUb | 
	qMUbhGUFUb | 
	[
  "~Andrés_Marafioti1",
  "~Orr_Zohar1",
  "~Miquel_Farré1",
  "~Merve_noyan1",
  "~Elie_Bakouch1",
  "~Pedro_Manuel_Cuenca_Jiménez1",
  "~Cyril_Zakka1",
  "~Loubna_Ben_allal1",
  "~Anton_Lozhkov1",
  "~Nouamane_Tazi1",
  "~Vaibhav_Srivastav2",
  "~Joshua_Lochner1",
  "~Hugo_Larcher1",
  "~Mathieu_Morlon1",
  "~Lewis_Tunstall1",
  "~Leandro_Von_Werra1",
  "~Thomas_Wolf1"
] | 
	{
  "value": "COLM 2025"
} | 
	{
  "value": "colmweb.org/COLM/2025/Conference"
} | 
	{
  "value": "/pdf/ff0790564f57d670a9033629dfbdaa6328752eca.pdf"
} | 
	conference | 
	colmweb.org/COLM/2025/Conference | 2,025 | 
	COLM | 
	[
  "Vision Language Models",
  "Large Multimodal Models",
  "Vision Understanding",
  "Video Understanding"
] | 
	I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html | 
	I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html | null | null | 
	@inproceedings{
marafioti2025smolvlm,
title={Smol{VLM}: Redefining small and efficient multimodal models},
author={Andr{\'e}s Marafioti and Orr Zohar and Miquel Farr{\'e} and Merve noyan and Elie Bakouch and Pedro Manuel Cuenca Jim{\'e}nez and Cyril Zakka and Loubna Ben allal and Anton Lozhkov and Nouamane Tazi and Vaibhav Srivastav and Joshua Lochner and Hugo Larcher and Mathieu Morlon and Lewis Tunstall and Leandro Von Werra and Thomas Wolf},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=qMUbhGUFUb}
} | 
	marafioti|smolvlm_redefining_small_and_efficient_multimodal_models | 
	/attachment/341f5c3957a28cc47659fed5751c69c213885514.zip | null | null | null | null | |
| 
	Benchmarking Retrieval-Augmented Generation for Chemistry | 
	We construct a comprehensive Retrieval-Augmented Generation benchmark for chemistry. | 
	Retrieval-augmented generation (RAG) has emerged as a powerful framework for enhancing large language models (LLMs) with external knowledge, particularly in scientific domains that demand specialized and dynamic information. 
Despite its promise, the application of RAG in the chemistry domain remains underexplored, primarily due to the lack of high-quality, domain-specific corpora and well-curated evaluation benchmarks. 
In this work, we introduce ChemRAG-Bench, a comprehensive benchmark designed to systematically assess the effectiveness of RAG across a diverse set of chemistry-related tasks. 
The accompanying chemistry corpus integrates heterogeneous knowledge sources, including scientific literature, the PubChem database, PubMed abstracts, textbooks, and Wikipedia entries. 
In addition, we present ChemRAG-Toolkit, a modular and extensible RAG toolkit that supports five retrieval algorithms and eight LLMs. 
Using ChemRAG-Toolkit, we demonstrate that RAG yields a substantial performance gain—achieving an average relative improvement of 17.4\% over direct inference methods. 
We further conduct in-depth analyses on retriever architectures, corpus selection, and the number of retrieved passages, culminating in practical recommendations to guide future research and deployment of RAG systems in the chemistry domain. | 
	[
  "Xianrui Zhong",
  "Bowen Jin",
  "Siru Ouyang",
  "Yanzhen Shen",
  "Qiao Jin",
  "Yin Fang",
  "Zhiyong Lu",
  "Jiawei Han"
] | 
	https://openreview.net/forum?id=qG4dL0bart | 
	qG4dL0bart | 
	qG4dL0bart | 
	[
  "~Xianrui_Zhong1",
  "~Bowen_Jin1",
  "~Siru_Ouyang1",
  "~Yanzhen_Shen1",
  "~Qiao_Jin1",
  "~Yin_Fang1",
  "~Zhiyong_Lu1",
  "~Jiawei_Han1"
] | 
	{
  "value": "COLM 2025"
} | 
	{
  "value": "colmweb.org/COLM/2025/Conference"
} | 
	{
  "value": "/pdf/aecea758b388085e9dbf18bb16b4d7637b2a4709.pdf"
} | 
	conference | 
	colmweb.org/COLM/2025/Conference | 2,025 | 
	COLM | 
	[
  "Retrieval-Augmented Generation",
  "RAG",
  "Benchmark",
  "AI for Science",
  "LLM",
  "Large Language Model",
  "Chemistry"
] | 
	I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html | 
	I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html | null | null | 
	@inproceedings{
zhong2025benchmarking,
title={Benchmarking Retrieval-Augmented Generation for Chemistry},
author={Xianrui Zhong and Bowen Jin and Siru Ouyang and Yanzhen Shen and Qiao Jin and Yin Fang and Zhiyong Lu and Jiawei Han},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=qG4dL0bart}
} | 
	zhong|benchmarking_retrievalaugmented_generation_for_chemistry | null | null | null | null | null | |
| 
	Finding Flawed Fictions: Evaluating Complex Reasoning in Language Models via Plot Hole Detection | 
	We propose an automatic method to generate stories with logical inconsistencies, which we then use to curate a benchmark to evaluate capabilities of LLMs towards reasoning for plot holes in stories. | 
	Stories are a fundamental aspect of human experience. Engaging deeply with stories and spotting plot holes—inconsistencies in a storyline that break the internal logic or rules of a story’s world—requires nuanced reasoning skills, including tracking entities and events and their interplay, abstract thinking, pragmatic narrative understanding, commonsense and social reasoning, and theory of mind. As Large Language Models (LLMs) increasingly generate, interpret, and modify text, rigorously assessing their narrative consistency and deeper language understanding becomes critical. However, existing benchmarks focus mainly on surface-level comprehension. In this work, we propose plot hole detection in stories as a proxy to evaluate language understanding and reasoning in LLMs. We introduce FlawedFictionsMaker, a novel algorithm to controllably and carefully synthesize plot holes in human-written stories. Using this algorithm, we construct FlawedFictions, a contamination-robust benchmark for evaluating LLMs’ plot hole detection abilities in stories, with human filtering ensuring high quality. We find that state-of-the-art LLMs struggle to accurately solve FlawedFictions regardless of the reasoning effort allowed, with performance significantly degrading as story length increases. Finally, we show that LLM-based story summarization and story generation are prone to introducing plot holes, with 50%+ and 100%+ increases in plot hole detection rates with respect to human-written originals. | 
	[
  "Kabir Ahuja",
  "Melanie Sclar",
  "Yulia Tsvetkov"
] | 
	https://openreview.net/forum?id=ptmgWRCWmu | 
	ptmgWRCWmu | 
	ptmgWRCWmu | 
	[
  "~Kabir_Ahuja1",
  "~Melanie_Sclar1",
  "~Yulia_Tsvetkov1"
] | 
	{
  "value": "COLM 2025"
} | 
	{
  "value": "colmweb.org/COLM/2025/Conference"
} | 
	{
  "value": "/pdf/8e9100661069abe51439f185fa13b0680bdf9b03.pdf"
} | 
	conference | 
	colmweb.org/COLM/2025/Conference | 2,025 | 
	COLM | 
	[
  "narrative understanding",
  "reasoning",
  "synthetic data generation",
  "test time scaling",
  "evaluation",
  "natural language generation"
] | 
	I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html | 
	I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html | null | null | 
	@inproceedings{
ahuja2025finding,
title={Finding Flawed Fictions: Evaluating Complex Reasoning in Language Models via Plot Hole Detection},
author={Kabir Ahuja and Melanie Sclar and Yulia Tsvetkov},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=ptmgWRCWmu}
} | 
	ahuja|finding_flawed_fictions_evaluating_complex_reasoning_in_language_models_via_plot_hole_detection | null | null | null | null | null | |
| 
	CoLa: Learning to Interactively Collaborate with Large Language Models | 
	CoLa is an interactive learning framework to effectively collaborate with LLMs. | 
	LLMs’ remarkable ability to tackle a wide range of language tasks has opened new opportunities for collaborative human-AI problem solving. LLMs can amplify human capabilities by applying their intuitions and reasoning strategies at scale. We explore whether human guides can be simulated by generalizing from human demonstrations of guiding an AI system to solve complex language problems. We introduce CoLa, a novel self-guided learning paradigm for training automated $\textit{guides}$ and evaluate it on two QA datasets, a puzzle-solving task, and a constrained text generation task. Our empirical results show that CoLa consistently outperforms competitive approaches across all domains. Moreover, a small-sized trained guide outperforms a strong model like GPT-4 when acting as a guide. We compare the strategies employed by humans and automated guides by conducting a human study on a QA dataset. We show that automated guides outperform humans by adapting their strategies to reasoners’ capabilities and conduct qualitative analyses highlighting distinct differences in guiding strategies. | 
	[
  "Abhishek Sharma",
  "Dan Goldwasser"
] | 
	https://openreview.net/forum?id=pm9ykfhknK | 
	pm9ykfhknK | 
	pm9ykfhknK | 
	[
  "~Abhishek_Sharma7",
  "~Dan_Goldwasser1"
] | 
	{
  "value": "COLM 2025"
} | 
	{
  "value": "colmweb.org/COLM/2025/Conference"
} | 
	{
  "value": "/pdf/885ff46bdcec539c57c36453e0c7e0f62ccff856.pdf"
} | 
	conference | 
	colmweb.org/COLM/2025/Conference | 2,025 | 
	COLM | 
	[
  "Human Simulation",
  "Interactive Learning",
  "Reinforcement Learning"
] | 
	I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html | 
	I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html | null | null | 
	@inproceedings{
sharma2025cola,
title={CoLa: Learning to Interactively Collaborate with Large Language Models},
author={Abhishek Sharma and Dan Goldwasser},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=pm9ykfhknK}
} | 
	sharma|cola_learning_to_interactively_collaborate_with_large_language_models | null | null | null | null | null | |
| 
	Traceable and Explainable Multimodal Large Language Models: An Information-Theoretic View | 
	We introduce an information-theoretic framework that uses mutual information, a Concept Bottleneck, and an InfoNCE mechanism to explain how multimodal models align and integrate visual and textual inputs. | 
	Existing multimodal large language models (MLLMs) often lack traceable and explainable mechanisms for visual-textual alignment, making it challenging to understand how textual instructions shape multimodal representations. To address this shortcoming, we propose an information-theoretic framework that clarifies how MLLMs handle and transform both text and visual inputs. In particular, we measure the visual information gain that arises from textual instructions and multimodal encodings, thereby illuminating how different modalities interact and contribute to the model’s overall processing.
Our framework decomposes the multimodal encoding process into layer-wise mutual information measures for better explainability, quantifying the visual contribution as the difference between unconditional and text-conditional mutual information. Specifically, inspired by the Information Bottleneck framework, we introduce a Concept Bottleneck that maps high-dimensional multimodal representations into an interpretable space, enabling tractable variational upper bounds on the mutual information between visual inputs and the model’s internal states. Furthermore, we quantify the contextual contribution introduced by textual cues via an InfoNCE mechanism that contrasts multimodal representations computed with and without text guidance. This dual perspective, facilitated by tractable variational upper bounds, provides insight into how visual information is encoded and filtered by textual instructions, while also highlighting the contextual information induced and enhanced by MLLMs. 
Empirical findings demonstrate underexplored dynamics of visual-textual interaction within MLLMs, 
underscoring how textual instructions distinctly shape visual representations and demonstrating how visual prompts, 
when effectively paired with instructions, enhance multimodal understanding. | 
	[
  "Zihan Huang",
  "Junda Wu",
  "Rohan Surana",
  "Raghav Jain",
  "Tong Yu",
  "Raghavendra Addanki",
  "David Arbour",
  "Sungchul Kim",
  "Julian McAuley"
] | 
	https://openreview.net/forum?id=pQm66IPmeE | 
	pQm66IPmeE | 
	pQm66IPmeE | 
	[
  "~Zihan_Huang1",
  "~Junda_Wu1",
  "~Rohan_Surana1",
  "~Raghav_Jain1",
  "~Tong_Yu3",
  "~Raghavendra_Addanki1",
  "~David_Arbour1",
  "~Sungchul_Kim1",
  "~Julian_McAuley1"
] | 
	{
  "value": "COLM 2025"
} | 
	{
  "value": "colmweb.org/COLM/2025/Conference"
} | 
	{
  "value": "/pdf/8898dfdc9f6fbb8ae39d5dae4cef8545d2219ee3.pdf"
} | 
	conference | 
	colmweb.org/COLM/2025/Conference | 2,025 | 
	COLM | 
	[
  "multimodal LLM",
  "information theory"
] | 
	I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html | 
	I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html | null | null | 
	@inproceedings{
huang2025traceable,
title={Traceable and Explainable Multimodal Large Language Models: An Information-Theoretic View},
author={Zihan Huang and Junda Wu and Rohan Surana and Raghav Jain and Tong Yu and Raghavendra Addanki and David Arbour and Sungchul Kim and Julian McAuley},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=pQm66IPmeE}
} | 
	huang|traceable_and_explainable_multimodal_large_language_models_an_informationtheoretic_view | null | null | null | null | null | |
| 
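The InfoNCE mechanism mentioned in the entry above, contrasting multimodal representations computed with and without text guidance, can be written down in a few lines. This is a standard InfoNCE objective with an assumed in-batch pairing scheme and temperature, not the paper's exact formulation.

```python
# Standard InfoNCE sketch contrasting text-conditioned vs. unconditioned representations; pairing and
# temperature are illustrative assumptions.
import torch
import torch.nn.functional as F

def infonce(with_text: torch.Tensor, without_text: torch.Tensor, temperature: float = 0.07) -> torch.Tensor:
    """Each text-conditioned representation is pulled toward its own unconditioned
    counterpart (positive) and pushed away from other samples in the batch (negatives)."""
    a = F.normalize(with_text, dim=-1)
    b = F.normalize(without_text, dim=-1)
    logits = a @ b.T / temperature                      # (batch, batch) similarity matrix
    targets = torch.arange(a.size(0), device=a.device)  # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

# Toy usage with random features standing in for the model's internal states.
loss = infonce(torch.randn(8, 256), torch.randn(8, 256))
```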
	Understanding the Uncertainty of LLM Explanations: A Perspective Based on Reasoning Topology | 
	A framework quantifies uncertainty in LLM explanations through a formal reasoning topology perspective. | 
	Understanding the uncertainty in large language model (LLM) explanations is important for evaluating their faithfulness and reasoning consistency, thus providing insights into the reliability of LLM's output. In this work, we propose a novel framework that quantifies uncertainty in LLM explanations through a formal reasoning topology perspective. By designing a structural elicitation strategy, we can decompose the explanation into the knowledge and reasoning dimensions, which allows us to not only quantify reasoning uncertainty but also assess knowledge redundancy and provide interpretable insights into the model’s reasoning structure. Our method offers a systematic way to interpret the LLM reasoning process, analyze limitations, and provide guidance for enhancing robustness and faithfulness. This work pioneers the use of graph-structured uncertainty measurement in LLM explanations, offering a new perspective on evaluating and improving reasoning capabilities. | 
	[
  "Longchao Da",
  "Xiaoou Liu",
  "Jiaxin Dai",
  "Lu Cheng",
  "Yaqing Wang",
  "Hua Wei"
] | 
	https://openreview.net/forum?id=p4wZfBFgyI | 
	p4wZfBFgyI | 
	p4wZfBFgyI | 
	[
  "~Longchao_Da1",
  "~Xiaoou_Liu1",
  "~Jiaxin_Dai2",
  "~Lu_Cheng2",
  "~Yaqing_Wang1",
  "~Hua_Wei1"
] | 
	{
  "value": "COLM 2025"
} | 
	{
  "value": "colmweb.org/COLM/2025/Conference"
} | 
	{
  "value": "/pdf/f25b28f902394e53663073bcf82bb73491d92d95.pdf"
} | 
	conference | 
	colmweb.org/COLM/2025/Conference | 2,025 | 
	COLM | 
	[
  "Uncertainty Quantification",
  "LLM Explanations",
  "Graph Mining"
] | 
	I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html | 
	I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html | null | null | 
	@inproceedings{
da2025understanding,
title={Understanding the Uncertainty of {LLM} Explanations: A Perspective Based on Reasoning Topology},
author={Longchao Da and Xiaoou Liu and Jiaxin Dai and Lu Cheng and Yaqing Wang and Hua Wei},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=p4wZfBFgyI}
} | 
	da|understanding_the_uncertainty_of_llm_explanations_a_perspective_based_on_reasoning_topology | null | null | null | null | null | |
| 
	PrefPalette: Personalized Preference Modeling with Latent Attributes | 
	PrefPalette models human preferences through interpretable latent attributes and community-specific weightings, outperforming GPT-4o by 46.6% on 45 Reddit communities. | 
	Personalizing AI systems requires understanding not just what users prefer, but the reasons that underlie those preferences—yet current preference models typically treat human judgment as a black box. We introduce PrefPalette, a framework that decomposes preferences into attribute dimensions and tailors its preference prediction to distinct social community values in a human-interpretable way. PrefPalette operationalizes a cognitive science principle known as multi-attribute decision making in two ways: (1) a scalable counterfactual attribute synthesis step that involves generating synthetic training data to isolate individual attribute effects (e.g., formality, humor, cultural values), and (2) attention-based preference modeling that learns how different social communities dynamically weight these attributes. This approach moves beyond aggregate preference modeling to capture the diverse evaluation frameworks that drive human judgment. When evaluated on 45 social communities from the online platform Reddit, PrefPalette outperforms GPT-4o by 46.6% in average prediction accuracy. Beyond raw predictive improvements, PrefPalette also sheds light on intuitive, community-specific profiles: scholarly communities prioritize verbosity and stimulation, conflict-oriented communities value sarcasm and directness, and support-based communities emphasize empathy. By modeling the attribute-mediated structure of human judgment, PrefPalette delivers both superior preference modeling
and transparent, interpretable insights, and serves as a first step toward
more trustworthy, value-aware personalized applications. | 
	[
  "Shuyue Stella Li",
  "Melanie Sclar",
  "Hunter Lang",
  "Ansong Ni",
  "Jacqueline He",
  "Puxin Xu",
  "Andrew Cohen",
  "Chan Young Park",
  "Yulia Tsvetkov",
  "Asli Celikyilmaz"
] | 
	https://openreview.net/forum?id=p4ujQsKmPV | 
	p4ujQsKmPV | 
	p4ujQsKmPV | 
	[
  "~Shuyue_Stella_Li1",
  "~Melanie_Sclar1",
  "~Hunter_Lang1",
  "~Ansong_Ni1",
  "~Jacqueline_He1",
  "~Puxin_Xu1",
  "~Andrew_Cohen4",
  "~Chan_Young_Park1",
  "~Yulia_Tsvetkov1",
  "~Asli_Celikyilmaz1"
] | 
	{
  "value": "COLM 2025"
} | 
	{
  "value": "colmweb.org/COLM/2025/Conference"
} | 
	{
  "value": "/pdf/8de71e24767ae7de52d4c6da79f3c07ff1368f62.pdf"
} | 
	conference | 
	colmweb.org/COLM/2025/Conference | 2,025 | 
	COLM | 
	[
  "Social Reasoning",
  "Preference Modeling",
  "Explainability"
] | 
	I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html | 
	I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html | null | null | 
	@inproceedings{
li2025prefpalette,
title={PrefPalette: Personalized Preference Modeling with Latent Attributes},
author={Shuyue Stella Li and Melanie Sclar and Hunter Lang and Ansong Ni and Jacqueline He and Puxin Xu and Andrew Cohen and Chan Young Park and Yulia Tsvetkov and Asli Celikyilmaz},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=p4ujQsKmPV}
} | 
	li|prefpalette_personalized_preference_modeling_with_latent_attributes | null | null | null | null | null | |
| 
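A small sketch of attention-based preference modeling in the spirit of the PrefPalette entry above: each response is scored along interpretable attributes, and a community-specific attention vector weights those attributes before two responses are compared. The module names, shapes, and the sigmoid comparison are assumptions for illustration.

```python
# Illustrative community-conditioned attribute weighting; not the authors' implementation.
import torch
import torch.nn as nn

class CommunityPreferenceModel(nn.Module):
    def __init__(self, n_attributes: int, n_communities: int):
        super().__init__()
        # one learned vector of attribute weights per community
        self.attn_logits = nn.Embedding(n_communities, n_attributes)

    def forward(self, attr_a: torch.Tensor, attr_b: torch.Tensor, community: torch.Tensor) -> torch.Tensor:
        weights = torch.softmax(self.attn_logits(community), dim=-1)  # (batch, n_attributes)
        score_a = (weights * attr_a).sum(-1)
        score_b = (weights * attr_b).sum(-1)
        return torch.sigmoid(score_a - score_b)  # P(response A preferred over B) in this community

# Toy usage: attribute scores (formality, humor, ...) for two candidate replies in community 3.
model = CommunityPreferenceModel(n_attributes=8, n_communities=45)
p = model(torch.rand(1, 8), torch.rand(1, 8), torch.tensor([3]))
```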
	LLMs as Research Tools: A Large Scale Survey of Researchers’ Usage and Perceptions | 
	A large-scale survey of 816 researchers to study usage of LLMs in scientific research and the perception of such usage. | 
	The rise of large language models (LLMs) has led many researchers to consider their usage for scientific work. Some have found benefits using LLMs to augment or automate aspects of their research pipeline, while others have urged caution due to risks and ethical concerns. Yet little work has sought to quantify and characterize how researchers actually use LLMs and why or why not. We present the first large-scale survey of 816 verified research article authors to understand how the research community leverages and perceives LLMs as research tools. We examine participants' self-reported LLM usage, finding that 81% of researchers have already incorporated LLMs into aspects of their research workflow. We also find that some traditionally disadvantaged groups in academia (non-white, junior, and non-native English speaking researchers) report higher LLM usage and perceived benefits, suggesting potential for improved research equity. However, women, non-binary, and senior researchers have greater ethical concerns. Our study provides much-needed evidence, rather than speculation, about how LLMs are currently being used as research tools. | 
	[
  "Zhehui Liao",
  "Maria Antoniak",
  "Inyoung Cheong",
  "Evie Yu-Yen Cheng",
  "Ai-Heng Lee",
  "Kyle Lo",
  "Joseph Chee Chang",
  "Amy X Zhang"
] | 
	https://openreview.net/forum?id=p0BwJk3R1p | 
	p0BwJk3R1p | 
	p0BwJk3R1p | 
	[
  "~Zhehui_Liao1",
  "~Maria_Antoniak1",
  "~Inyoung_Cheong1",
  "~Evie_Yu-Yen_Cheng1",
  "~Ai-Heng_Lee1",
  "~Kyle_Lo1",
  "~Joseph_Chee_Chang1",
  "~Amy_X_Zhang1"
] | 
	{
  "value": "COLM 2025"
} | 
	{
  "value": "colmweb.org/COLM/2025/Conference"
} | 
	{
  "value": "/pdf/8fa13ec1b57aa5223febb7bcf9bfb12d49e14139.pdf"
} | 
	conference | 
	colmweb.org/COLM/2025/Conference | 2,025 | 
	COLM | 
	[
  "survey",
  "large language model",
  "research",
  "societal impact"
] | 
	I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html | 
	I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html | null | null | 
	@inproceedings{
liao2025llms,
title={{LLM}s as Research Tools: A Large Scale Survey of Researchers{\textquoteright} Usage and Perceptions},
author={Zhehui Liao and Maria Antoniak and Inyoung Cheong and Evie Yu-Yen Cheng and Ai-Heng Lee and Kyle Lo and Joseph Chee Chang and Amy X Zhang},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=p0BwJk3R1p}
} | 
	liao|llms_as_research_tools_a_large_scale_survey_of_researchers_usage_and_perceptions | 
	/attachment/27da1dfac11a3cf6ed57dad4725449633edef625.zip | null | null | null | null | |
| 
	Vision-Language Models Are Not Pragmatically Competent in Referring Expression Generation | 
	We show significant pragmatic deficiencies in current VLMs when faced with referring expression generation compared to humans, as they violate Gricean maxims. | 
	Referring Expression Generation (REG) is a core task for evaluating the pragmatic competence of vision-language systems, requiring not only accurate semantic grounding but also adherence to principles of cooperative communication. However, current evaluations of vision-language models (VLMs) often overlook the pragmatic dimension, reducing REG to a region-based captioning task and neglecting Gricean maxims. In this work, we revisit REG from a pragmatic perspective, introducing a new dataset (RefOI) of 1.5k images annotated with both written and spoken referring expressions. Through a systematic evaluation of state-of-the-art VLMs, we identify three key failures of pragmatic competence: (1) failure to uniquely identify the referent, (2) inclusion of excessive or irrelevant information, and (3) misalignment with human pragmatic preference, such as the underuse of minimal spatial cues. We also show that standard automatic evaluations fail to capture these pragmatic violations, reinforcing superficial cues rather than genuine referential success. Our findings call for a renewed focus on pragmatically informed models and evaluation frameworks that align with real human communication. | 
	[
  "Ziqiao Ma",
  "Jing Ding",
  "Xuejun Zhang",
  "Dezhi Luo",
  "Jiahe Ding",
  "Sihan Xu",
  "Yuchen Huang",
  "Run Peng",
  "Joyce Chai"
] | 
	https://openreview.net/forum?id=oj3ETSitjb | 
	oj3ETSitjb | 
	oj3ETSitjb | 
	[
  "~Ziqiao_Ma1",
  "~Jing_Ding3",
  "~Xuejun_Zhang4",
  "~Dezhi_Luo1",
  "~Jiahe_Ding1",
  "~Sihan_Xu2",
  "~Yuchen_Huang5",
  "~Run_Peng1",
  "~Joyce_Chai2"
] | 
	{
  "value": "COLM 2025"
} | 
	{
  "value": "colmweb.org/COLM/2025/Conference"
} | 
	{
  "value": "/pdf/572f9a40475cf207d0a6d41ebdd24de410b70bde.pdf"
} | 
	conference | 
	colmweb.org/COLM/2025/Conference | 2,025 | 
	COLM | 
	[
  "Vision-Language Models",
  "Pragmatics",
  "Referring Expression Generation"
] | 
	I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html | 
	I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html | null | null | 
	@inproceedings{
ma2025visionlanguage,
title={Vision-Language Models Are Not Pragmatically Competent in Referring Expression Generation},
author={Ziqiao Ma and Jing Ding and Xuejun Zhang and Dezhi Luo and Jiahe Ding and Sihan Xu and Yuchen Huang and Run Peng and Joyce Chai},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=oj3ETSitjb}
} | 
	ma|visionlanguage_models_are_not_pragmatically_competent_in_referring_expression_generation | null | null | null | null | null | |
| 
	SlimMoE: Structured Compression of Large MoE Models via Expert Slimming and Distillation | 
	This paper introduces a novel multi-stage prune-and-distill framework that efficiently compresses Phi-3.5-MoE into compact 7.6B and 3.8B parameter models that significantly outperform similarly-sized alternatives. | 
	The Mixture of Experts (MoE) architecture has emerged as a powerful paradigm for scaling large language models (LLMs) while maintaining inference efficiency. However, their substantial memory requirements make them prohibitively expensive to fine-tune or deploy in resource-constrained environments. To address this challenge, we propose \textit{SlimMoE}, a multi-stage compression framework that transforms large MoE models into significantly smaller and more efficient variants without the cost of training from scratch. Our method systematically reduces parameter counts by slimming experts and transferring knowledge through intermediate stages, effectively mitigating the performance degradation typical of one-shot pruning.
Using SlimMoE, we compress Phi-3.5-MoE (41.9B total / 6.6B activated parameters) into two smaller models: Phi-mini-MoE (7.6B total / 2.4B activated) and Phi-tiny-MoE (3.8B total / 1.1B activated), using only 400B tokens -- less than 10\% of the original training data. These models can be fine-tuned on a single GPU (A100 for Phi-mini-MoE, A6000 for Phi-tiny-MoE), making them well suited for academic and resource-limited settings. 
Our experiments show that the compressed models outperform others of similar size and remain competitive with larger models. For example, Phi-mini-MoE matches or exceeds the performance of Phi-3-mini while using only two-thirds of the activated parameters and achieves comparable MMLU scores to LLaMA 3.1 8B with significantly lower latency. These results highlight that structured pruning combined with multi-stage distillation is an effective strategy for building high-quality, compact MoE models, enabling broader adoption of MoE architectures across diverse computational environments. We release our models at \url{https://huggingface.co/microsoft/Phi-mini-MoE-instruct} and \url{https://huggingface.co/microsoft/Phi-tiny-MoE-instruct}. | 
	[
  "Zichong Li",
  "Chen Liang",
  "Zixuan Zhang",
  "Ilgee Hong",
  "Young Jin Kim",
  "Weizhu Chen",
  "Tuo Zhao"
] | 
	https://openreview.net/forum?id=oaCUsn391F | 
	oaCUsn391F | 
	oaCUsn391F | 
	[
  "~Zichong_Li2",
  "~Chen_Liang3",
  "~Zixuan_Zhang5",
  "~Ilgee_Hong1",
  "~Young_Jin_Kim1",
  "~Weizhu_Chen1",
  "~Tuo_Zhao2"
] | 
	{
  "value": "COLM 2025"
} | 
	{
  "value": "colmweb.org/COLM/2025/Conference"
} | 
	{
  "value": "/pdf/55e9e80f612fc74e9c3e47b1eafd56d35879cd4e.pdf"
} | 
	conference | 
	colmweb.org/COLM/2025/Conference | 2,025 | 
	COLM | 
	[
  "Large Language Model",
  "Mixture of Experts",
  "Structured Pruning",
  "Knowledge Distillation"
] | 
	I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html | 
	I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html | null | null | 
	@inproceedings{
li2025slimmoe,
title={SlimMoE: Structured Compression of Large MoE Models via Expert Slimming and Distillation},
author={Zichong Li and Chen Liang and Zixuan Zhang and Ilgee Hong and Young Jin Kim and Weizhu Chen and Tuo Zhao},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=oaCUsn391F}
} | 
	li|slimmoe_structured_compression_of_large_moe_models_via_expert_slimming_and_distillation | null | null | null | null | null | |
| 
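To illustrate the expert-slimming step described in the SlimMoE entry above, the sketch below shrinks one expert's FFN hidden dimension by keeping the hidden units with the largest weight norms. This scoring rule is a common structured-pruning heuristic and is not necessarily the criterion SlimMoE uses; the distillation stages that follow in the full recipe are only noted in a comment.

```python
# Generic expert-slimming sketch via magnitude-based structured pruning; the scoring rule is an assumption.
import torch
import torch.nn as nn

def slim_expert(expert: nn.Sequential, keep: int) -> nn.Sequential:
    """expert = Sequential(Linear(d, h), activation, Linear(h, d)); returns a copy with h -> keep."""
    w_in, w_out = expert[0], expert[2]
    # score each hidden unit by the norms of its incoming and outgoing weights
    scores = w_in.weight.norm(dim=1) * w_out.weight.norm(dim=0)
    idx = torch.topk(scores, keep).indices
    new_in = nn.Linear(w_in.in_features, keep)
    new_out = nn.Linear(keep, w_out.out_features)
    with torch.no_grad():
        new_in.weight.copy_(w_in.weight[idx])
        new_in.bias.copy_(w_in.bias[idx])
        new_out.weight.copy_(w_out.weight[:, idx])
        new_out.bias.copy_(w_out.bias)
    return nn.Sequential(new_in, expert[1], new_out)

# Toy usage: slim one expert from hidden size 1024 to 256; in the full recipe, quality is then
# recovered by distilling from the original model over intermediate stages.
expert = nn.Sequential(nn.Linear(512, 1024), nn.GELU(), nn.Linear(1024, 512))
small = slim_expert(expert, keep=256)
```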
	The Devil is in the EOS: Sequence Training for Detailed Image Captioning | 
	Encouraging detailed image captioning through end-of-sequence token debiasing | 
	Despite significant advances in vision-language models (VLMs), image captioning often suffers from a lack of detail, with base models producing short, generic captions. This limitation persists even though VLMs are equipped with strong vision and language backbones. While supervised data and complex reward functions have been proposed to improve detailed image captioning, we identify a simpler underlying issue: a bias towards the end-of-sequence (EOS) token, which is introduced during cross-entropy training. We propose an unsupervised method to debias the model's tendency to predict the EOS token prematurely. By reducing this bias, we encourage the generation of longer, more detailed captions without the need for intricate reward functions or supervision. Our approach is straightforward, effective, and easily applicable to any pretrained model. We demonstrate its effectiveness through experiments with three VLMs and on three detailed captioning benchmarks. Our results show a substantial increase in caption length and relevant details, albeit with an expected increase in the rate of hallucinations. | 
	[
  "Abdelrahman Mohamed",
  "Yova Kementchedjhieva"
] | 
	https://openreview.net/forum?id=oSub7DiyjL | 
	oSub7DiyjL | 
	oSub7DiyjL | 
	[
  "~Abdelrahman_Mohamed3",
  "~Yova_Kementchedjhieva1"
] | 
	{
  "value": "COLM 2025"
} | 
	{
  "value": "colmweb.org/COLM/2025/Conference"
} | 
	{
  "value": "/pdf/b764b38e67af5bc5f0ebf05ae0f332177a5f3d53.pdf"
} | 
	conference | 
	colmweb.org/COLM/2025/Conference | 2,025 | 
	COLM | 
	[
  "Detailed image captioning; sequence training; reinforcement learning; vision langauge models"
] | 
	I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html | 
	I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html | null | null | 
	@inproceedings{
mohamed2025the,
title={The Devil is in the {EOS}: Sequence Training for Detailed Image Captioning},
author={Abdelrahman Mohamed and Yova Kementchedjhieva},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=oSub7DiyjL}
} | 
	mohamed|the_devil_is_in_the_eos_sequence_training_for_detailed_image_captioning | null | null | null | null | null | |
| 
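The EOS bias identified in the entry above can be visualized with a deliberately simplified decode-time illustration: down-weighting the end-of-sequence logit lengthens captions. Note that the paper's contribution is an unsupervised training-time debiasing method, which this decode-time penalty does not reproduce; the function and its parameters are hypothetical.

```python
# Hypothetical decode-time illustration of the EOS bias; NOT the paper's training-time method.
import torch

def penalize_eos(logits: torch.Tensor, eos_token_id: int, step: int,
                 min_len: int = 20, penalty: float = 2.0) -> torch.Tensor:
    """Subtract a constant from the EOS logit until a minimum caption length is reached."""
    adjusted = logits.clone()
    if step < min_len:
        adjusted[..., eos_token_id] -= penalty  # make premature termination less likely
    return adjusted

# Toy usage inside a greedy decoding loop (model and tokenizer omitted).
logits = torch.randn(1, 32000)
next_token = penalize_eos(logits, eos_token_id=2, step=5).argmax(dim=-1)
```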
	Boundless Byte Pair Encoding: Breaking the Pre-tokenization Barrier | 
	We propose an extension to BPE that allows pretokens to merge into superwords, leading to a more uniform token distribution and better compression. | 
	Pre-tokenization, the initial step in many modern tokenization pipelines, segments text into smaller units called pretokens, typically splitting on whitespace and punctuation. While this process encourages having full, individual words as tokens, it introduces a fundamental limitation in most tokenization algorithms such as Byte Pair Encoding (BPE). Specifically, pre-tokenization causes the distribution of tokens in a corpus to heavily skew towards common, full-length words. This skewed distribution limits the benefits of expanding to larger vocabularies, since the additional tokens appear with progressively lower counts. To overcome this barrier, we propose BoundlessBPE, a modified BPE algorithm that relaxes the pretoken boundary constraint. Our approach selectively merges two complete pretokens into a larger unit we term a superword. Superwords are not necessarily semantically cohesive. For example, the pretokens " of" and " the" might be combined to form the superword " of the". This merging strategy results in a substantially more uniform distribution of tokens across a corpus than standard BPE, and compresses text more effectively, with an approximate 20% increase in bytes per token. | 
	[
  "Craig W Schmidt",
  "Varshini Reddy",
  "Chris Tanner",
  "Yuval Pinter"
] | 
	https://openreview.net/forum?id=oPAjXGV8qQ | 
	oPAjXGV8qQ | 
	oPAjXGV8qQ | 
	[
  "~Craig_W_Schmidt1",
  "~Varshini_Reddy2",
  "~Chris_Tanner1",
  "~Yuval_Pinter1"
] | 
	{
  "value": "COLM 2025"
} | 
	{
  "value": "colmweb.org/COLM/2025/Conference"
} | 
	{
  "value": "/pdf/e812cb86f37b3a79329043b3b6b9ab1bc1033cfd.pdf"
} | 
	conference | 
	colmweb.org/COLM/2025/Conference | 2,025 | 
	COLM | 
	[
  "tokenization",
  "Byte Pair Encoding",
  "BPE",
  "subword tokenization"
] | 
	I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html | 
	I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html | null | null | 
	@inproceedings{
schmidt2025boundless,
title={Boundless Byte Pair Encoding: Breaking the Pre-tokenization Barrier},
author={Craig W Schmidt and Varshini Reddy and Chris Tanner and Yuval Pinter},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=oPAjXGV8qQ}
} | 
	schmidt|boundless_byte_pair_encoding_breaking_the_pretokenization_barrier | null | null | null | null | null | |
| 
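A toy sketch of the superword idea from the BoundlessBPE entry above: once pretokens are whole tokens, the most frequent adjacent pretoken pair (for example " of" and " the") is merged across the boundary into a single superword. The full BPE machinery is omitted, so this is an illustration of the boundary-crossing merge only, not the authors' algorithm.

```python
# Toy boundary-crossing merge over whole-word pretokens; the full BPE pipeline is omitted.
from collections import Counter

def best_superword_merge(corpus: list[list[str]]) -> tuple[str, str]:
    """Most frequent adjacent pretoken pair across the corpus."""
    pairs = Counter()
    for sent in corpus:
        pairs.update(zip(sent, sent[1:]))
    return max(pairs, key=pairs.get)

def apply_merge(sent: list[str], pair: tuple[str, str]) -> list[str]:
    out, i = [], 0
    while i < len(sent):
        if i + 1 < len(sent) and (sent[i], sent[i + 1]) == pair:
            out.append(sent[i] + sent[i + 1])  # the two pretokens become one superword
            i += 2
        else:
            out.append(sent[i])
            i += 1
    return out

corpus = [[" one", " of", " the", " best"], [" most", " of", " the", " time"]]
pair = best_superword_merge(corpus)          # (' of', ' the')
corpus = [apply_merge(s, pair) for s in corpus]
print(pair, corpus[0])                       # (' of', ' the') [' one', ' of the', ' best']
```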
	Layerwise Importance Analysis of Feed-Forward Networks in Transformer-based Language Models | 
	FFNs in 70% of the consecutive middle layers of Transformer-based LMs contribute more to model performance than those in other layers. | 
	This study investigates the layerwise importance of feed-forward networks (FFNs) in transformer-based language models during pretraining.
We introduce an experimental approach that, while maintaining the total parameter count, increases the FFN dimensions in some layers and completely removes the FFNs from other layers.
Furthermore, since our focus is on the importance of FFNs during pretraining, we train models from scratch to examine whether the importance of FFNs varies depending on their layer positions, rather than using publicly available pretrained models as is frequently done.
Through comprehensive evaluations of models with varying sizes (285M, 570M, and 1.2B parameters) and layer counts (12, 24, and 40 layers), we demonstrate that concentrating FFNs in 70\% of the consecutive middle layers consistently outperforms standard configurations for multiple downstream tasks. | 
	[
  "Wataru Ikeda",
  "Kazuki Yano",
  "Ryosuke Takahashi",
  "Jaesung Lee",
  "KeigoShibata",
  "Jun Suzuki"
] | 
	https://openreview.net/forum?id=oP3b5YBFoP | 
	oP3b5YBFoP | 
	oP3b5YBFoP | 
	[
  "~Wataru_Ikeda2",
  "~Kazuki_Yano1",
  "~Ryosuke_Takahashi2",
  "~Jaesung_Lee3",
  "~KeigoShibata1",
  "~Jun_Suzuki1"
] | 
	{
  "value": "COLM 2025"
} | 
	{
  "value": "colmweb.org/COLM/2025/Conference"
} | 
	{
  "value": "/pdf/061c656981c2fe4b6df90b654e9d2f6685a699ca.pdf"
} | 
	conference | 
	colmweb.org/COLM/2025/Conference | 2,025 | 
	COLM | 
	[
  "Feed-Forward Networks",
  "Model Architecture",
  "Knowledge Representation",
  "Pre-training"
] | 
	I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html | 
	I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html | null | null | 
	@inproceedings{
ikeda2025layerwise,
title={Layerwise Importance Analysis of Feed-Forward Networks in Transformer-based Language Models},
author={Wataru Ikeda and Kazuki Yano and Ryosuke Takahashi and Jaesung Lee and Keigo Shibata and Jun Suzuki},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=oP3b5YBFoP}
} | 
	ikeda|layerwise_importance_analysis_of_feedforward_networks_in_transformerbased_language_models | null | null | null | null | null | |
| 
	Synthetic Data Generation and Multi-Step Reinforcement Learning for Reasoning and Tool Use | 
	We propose a synthetic data generation and RL methodology for multi-step reasoning and tool use. | 
	Reinforcement learning has been shown to improve the performance of large language models. However, traditional approaches like RLHF or RLAIF treat the problem as single-step. As focus is shifting towards solving more complex reasoning and agentic tasks, language models must take multiple steps of text generation, reasoning and environment interaction before generating a solution. We propose a synthetic data generation and RL methodology targeting multi-step optimization scenarios. This approach, called Step-Wise Reinforcement Learning (SWiRL), iteratively generates multi-step reasoning and tool use data, and then learns from that data. It employs a simple step-wise decomposition that breaks each multi-step trajectory into multiple sub-trajectories corresponding to each action by the original model. It then applies synthetic data filtering and RL optimization on these sub-trajectories. We evaluated SWiRL on a number of multi-step tool use, question answering, and mathematical reasoning tasks. Our experiments show that SWiRL outperforms baseline approaches by 21.5\%, 12.3\%, 14.8\%, 11.1\%, and 15.3\% in relative accuracy on GSM8k, HotPotQA, CofCA, MuSiQue, and BeerQA, respectively. Excitingly, the approach exhibits generalization across tasks: for example, training only on HotPotQA (text question-answering) improves zero-shot performance on GSM8k (a math dataset) by 16.9\%. | 
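A minimal sketch of the step-wise decomposition described above, assuming trajectories are stored as a prompt plus a list of action/observation steps (the field names are illustrative, not the paper's data format): each sub-trajectory pairs the context seen so far with the action taken at that step, and these units are what synthetic-data filtering and RL then operate on.

```python
def decompose(trajectory):
    """Split a multi-step trajectory into sub-trajectories, one per model
    action: each sub-trajectory pairs the context observed so far with the
    action taken at that step (the unit SWiRL-style training operates on).
    """
    sub_trajectories = []
    context = [trajectory["prompt"]]
    for step in trajectory["steps"]:
        sub_trajectories.append({
            "context": list(context),      # everything observed so far
            "action": step["action"],      # the model output at this step
        })
        context.append(step["action"])
        if "observation" in step:          # e.g. a tool or search result
            context.append(step["observation"])
    return sub_trajectories

traj = {
    "prompt": "Who directed the film that won Best Picture in 1998?",
    "steps": [
        {"action": "search('Best Picture 1998')", "observation": "Titanic"},
        {"action": "search('Titanic director')", "observation": "James Cameron"},
        {"action": "Final answer: James Cameron"},
    ],
}
for sub in decompose(traj):
    print(len(sub["context"]), "->", sub["action"])
```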
	[
  "Anna Goldie",
  "Azalia Mirhoseini",
  "Hao Zhou",
  "Irene Cai",
  "Christopher D Manning"
] | 
	https://openreview.net/forum?id=oN9STRYQVa | 
	oN9STRYQVa | 
	oN9STRYQVa | 
	[
  "~Anna_Goldie2",
  "~Azalia_Mirhoseini3",
  "~Hao_Zhou46",
  "~Irene_Cai1",
  "~Christopher_D_Manning1"
] | 
	{
  "value": "COLM 2025"
} | 
	{
  "value": "colmweb.org/COLM/2025/Conference"
} | 
	{
  "value": "/pdf/474b99204f9524eb1382562f7c89a5910f85b284.pdf"
} | 
	conference | 
	colmweb.org/COLM/2025/Conference | 2,025 | 
	COLM | 
	[
  "Large Language Models",
  "Reinforcement Learning",
  "Multi-Step Reasoning",
  "Tool Use",
  "Synthetic Data"
] | 
	I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html | 
	I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html | null | null | 
	@inproceedings{
goldie2025synthetic,
title={Synthetic Data Generation and Multi-Step Reinforcement Learning for Reasoning and Tool Use},
author={Anna Goldie and Azalia Mirhoseini and Hao Zhou and Irene Cai and Christopher D Manning},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=oN9STRYQVa}
} | 
	goldie|synthetic_data_generation_and_multistep_reinforcement_learning_for_reasoning_and_tool_use | null | null | null | null | null | |
| 
	Rhapsody: A Dataset for Highlight Detection in Podcasts | 
	We present a dataset of 13K podcast episodes paired with segment-level highlight scores. | 
	Podcasts have become daily companions for half a billion users. Given the enormous amount of podcast content available, highlights provide a valuable signal that helps listeners get the gist of an episode and decide if they want to invest in listening to it in its entirety. However, identifying highlights automatically is challenging due to the unstructured and long-form nature of the content. We introduce Rhapsody, a dataset of 13K podcast episodes paired with segment-level highlight scores derived from YouTube's 'most replayed' feature. We frame podcast highlight detection as a segment-level binary classification task. We explore various baseline approaches, including zero-shot prompting of language models and lightweight fine-tuned language models using segment-level classification heads. Our experimental results indicate that even state-of-the-art language models like GPT-4o and Gemini struggle with this task, while models fine-tuned on in-domain data significantly outperform their zero-shot counterparts. The fine-tuned model benefits from leveraging both speech signal features and transcripts. These findings highlight the challenges of fine-grained information access in long-form spoken media. | 
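A minimal sketch of how segment-level binary highlight labels could be derived from a per-segment replay curve, assuming one replay score per fixed-length segment of the episode; the thresholding rule used to construct the actual dataset may differ.

```python
import numpy as np

def highlight_labels(replay_scores, top_frac=0.1):
    """Label the top `top_frac` of segments (by replay score) as highlights.

    `replay_scores` holds one value per fixed-length segment, e.g. read off
    YouTube's 'most replayed' heatmap. The labeling rule used to build the
    released dataset may differ from this sketch.
    """
    scores = np.asarray(replay_scores, dtype=float)
    k = max(1, int(len(scores) * top_frac))
    threshold = np.sort(scores)[-k]
    return (scores >= threshold).astype(int)

scores = [0.05, 0.10, 0.80, 0.95, 0.30, 0.20, 0.15, 0.12, 0.60, 0.08]
print(highlight_labels(scores, top_frac=0.2))  # 1s at the most-replayed segments
```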
	[
  "Younghan Park",
  "Anuj Diwan",
  "David Harwath",
  "Eunsol Choi"
] | 
	https://openreview.net/forum?id=oKdVFxngy1 | 
	oKdVFxngy1 | 
	oKdVFxngy1 | 
	[
  "~Younghan_Park1",
  "~Anuj_Diwan1",
  "~David_Harwath1",
  "~Eunsol_Choi1"
] | 
	{
  "value": "COLM 2025"
} | 
	{
  "value": "colmweb.org/COLM/2025/Conference"
} | 
	{
  "value": "/pdf/4a008730e477e28f83e5e20da6a289def694aacc.pdf"
} | 
	conference | 
	colmweb.org/COLM/2025/Conference | 2,025 | 
	COLM | 
	[
  "podcast highlight detection",
  "long-context reasoning",
  "spoken language processing"
] | 
	I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html | 
	I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html | null | null | 
	@inproceedings{
park2025rhapsody,
title={Rhapsody: A Dataset for Highlight Detection in Podcasts},
author={Younghan Park and Anuj Diwan and David Harwath and Eunsol Choi},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=oKdVFxngy1}
} | 
	park|rhapsody_a_dataset_for_highlight_detection_in_podcasts | null | null | null | null | null | |
| 
	ThoughtTerminator: Benchmarking, Calibrating, and Mitigating Overthinking in Reasoning Models | 
	We evaluate the relationship between problem difficulty and token cost in reasoning models, benchmark how efficiently different models allocate tokens, and introduce a simple training-free decoding method to reduce overthinking, Thought Terminator. | 
	Reasoning models have demonstrated impressive performance on difficult tasks that traditional language models struggle with. However, many are plagued by the problem of overthinking---generating large amounts of unnecessary tokens that do not improve accuracy on a question. We introduce approximate measures of problem-level difficulty, demonstrate that a clear relationship exists between problem difficulty and optimal token spend, and evaluate how well calibrated a variety of reasoning models are at allocating this optimal token count. We find that, in general, reasoning models are poorly calibrated, particularly on easy problems. To evaluate calibration on easy questions, we introduce DUMB500, a dataset of extremely easy math, reasoning, code, and task problems, and jointly evaluate reasoning models on these simple examples and on extremely difficult examples from existing frontier benchmarks in the same task domain. Finally, we introduce ThoughtTerminator, a training-free black-box decoding technique that significantly improves reasoning model calibration. | 
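The abstract does not specify the mechanism, but the general idea of a training-free, black-box cap on token spend can be sketched as follows, where `generate_step` is a hypothetical stand-in for a model call and the difficulty-to-budget mapping is purely illustrative; the actual ThoughtTerminator method may differ.

```python
def budgeted_decode(generate_step, prompt, difficulty, base_budget=256):
    """Black-box decoding with a difficulty-calibrated reasoning budget.

    `generate_step(text)` is a hypothetical stand-in for one model call that
    returns the next chunk of reasoning text. Once the budget (scaled by the
    estimated difficulty in [0, 1]) is exhausted, an interrupt message is
    appended to force the model to emit its final answer. This only sketches
    the idea of capping token spend without any additional training.
    """
    budget = int(base_budget * (0.25 + 0.75 * difficulty))
    text, spent = prompt, 0
    while spent < budget:
        chunk = generate_step(text)
        text += chunk
        spent += len(chunk.split())        # crude token proxy
        if "Final answer:" in chunk:
            return text
    return text + "\n[Time is up.] Final answer:"

# Toy stand-in model: keeps "reasoning" until interrupted.
def toy_model(text):
    return " ... let me reconsider the problem once more."

print(budgeted_decode(toy_model, "Q: What is 2 + 2?", difficulty=0.1))
```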
	[
  "Xiao Pu",
  "Michael Saxon",
  "Wenyue Hua",
  "William Yang Wang"
] | 
	https://openreview.net/forum?id=oHR862dpMC | 
	oHR862dpMC | 
	oHR862dpMC | 
	[
  "~Xiao_Pu2",
  "~Michael_Saxon1",
  "~Wenyue_Hua1",
  "~William_Yang_Wang2"
] | 
	{
  "value": "COLM 2025"
} | 
	{
  "value": "colmweb.org/COLM/2025/Conference"
} | 
	{
  "value": "/pdf/be760f06990b1cc580adbc28bf7e09c23cabe7a2.pdf"
} | 
	conference | 
	colmweb.org/COLM/2025/Conference | 2,025 | 
	COLM | 
	[
  "reasoning model",
  "overthinking",
  "decoding",
  "tool use",
  "evaluation",
  "benchmark"
] | 
	I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html | 
	I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html | null | null | 
	@inproceedings{
pu2025thoughtterminator,
title={ThoughtTerminator: Benchmarking, Calibrating, and Mitigating Overthinking in Reasoning Models},
author={Xiao Pu and Michael Saxon and Wenyue Hua and William Yang Wang},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=oHR862dpMC}
} | 
	pu|thoughtterminator_benchmarking_calibrating_and_mitigating_overthinking_in_reasoning_models | null | null | null | null | null | |
| 
	Plato: Plan to Efficient Decode for Large Language Model Inference | 
	Plan to exploit parallelism structure to break the autoregressive nature of LLM inference. | 
	Large language models (LLMs) have achieved remarkable success in natural language tasks, but their inference incurs substantial computational and memory overhead.
To improve efficiency, parallel decoding methods like Skeleton-of-Thought (SoT) decompose prompts into sub-problems for concurrent processing. However, these methods significantly compromise answer quality by treating semantically linked sub-problems as independent.
We propose Plato, a novel approach that co-designs algorithms and systems for semantic-aware parallel decoding. Plato leverages LLMs to organize sub-problems into a dependency graph based on logical and causal relationships, enabling concurrent decoding of non-dependent nodes while preserving answer coherence and quality.
To further enhance efficiency, Plato pipelines planning and node decoding stages, implements a global context cache, and carefully structures node inference prompts to maximize key-value cache reuse and minimize overhead. Our evaluations show that Plato improves throughput by up to 68% over autoregressive decoding while achieving a 40% net win rate in answer quality. Compared to SoT, Plato demonstrates a remarkable 90% quality net-win rate. Ablation studies reveal that our pipeline design improves speedup by 29%, while our KV cache reuse optimization reduces overhead by 75%. | 
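A minimal sketch of the dependency-aware scheduling idea, assuming the planning stage has already produced a mapping from each sub-problem to the sub-problems it depends on: nodes whose dependencies are complete are decoded together as one concurrent wave. The pipelining of planning with decoding and the KV-cache management described above are omitted here, and the callable names are illustrative.

```python
from graphlib import TopologicalSorter

def decode_in_waves(dependencies, decode_node):
    """Decode sub-problems in dependency order, grouping all currently
    ready nodes into one 'wave' (a real system would issue these as
    parallel requests; here they are simply batched together).
    """
    sorter = TopologicalSorter(dependencies)
    sorter.prepare()
    answers = {}
    while sorter.is_active():
        wave = list(sorter.get_ready())          # nodes whose deps are finished
        for node in wave:                        # conceptually decoded in parallel
            context = {dep: answers[dep] for dep in dependencies.get(node, ())}
            answers[node] = decode_node(node, context)
        sorter.done(*wave)
    return answers

# Toy plan: node -> set of nodes it depends on.
plan = {"intro": set(), "method": set(), "results": {"method"}, "summary": {"intro", "results"}}
print(decode_in_waves(plan, lambda node, ctx: f"<answer to {node} given {sorted(ctx)}>"))
```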
	[
  "Shuowei Jin",
  "Xueshen Liu",
  "Yongji Wu",
  "Haizhong Zheng",
  "Qingzhao Zhang",
  "Atul Prakash",
  "Matthew Lentz",
  "Danyang Zhuo",
  "Feng Qian",
  "Zhuoqing Mao"
] | 
	https://openreview.net/forum?id=oGO0fNVWrN | 
	oGO0fNVWrN | 
	oGO0fNVWrN | 
	[
  "~Shuowei_Jin1",
  "~Xueshen_Liu1",
  "~Yongji_Wu1",
  "~Haizhong_Zheng1",
  "~Qingzhao_Zhang1",
  "~Atul_Prakash1",
  "~Matthew_Lentz1",
  "~Danyang_Zhuo1",
  "~Feng_Qian4",
  "~Zhuoqing_Mao1"
] | 
	{
  "value": "COLM 2025"
} | 
	{
  "value": "colmweb.org/COLM/2025/Conference"
} | 
	{
  "value": "/pdf/14b0fc4f8d7204c3034c61a920be7e0720064fda.pdf"
} | 
	conference | 
	colmweb.org/COLM/2025/Conference | 2,025 | 
	COLM | 
	[
  "Efficient LLM Inference"
] | 
	I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html | 
	I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html | null | null | 
	@inproceedings{
jin2025plato,
title={Plato: Plan to Efficient Decode for Large Language Model Inference},
author={Shuowei Jin and Xueshen Liu and Yongji Wu and Haizhong Zheng and Qingzhao Zhang and Atul Prakash and Matthew Lentz and Danyang Zhuo and Feng Qian and Zhuoqing Mao},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=oGO0fNVWrN}
} | 
	jin|plato_plan_to_efficient_decode_for_large_language_model_inference | null | null | null | null | null | |
| 
	Probing Syntax in Large Language Models: Successes and Remaining Challenges | 
	This work evaluates syntactic representations in LLMs using structural probes. We assess these probes across three benchmarks, revealing that their accuracy is compromised by linear distance and syntactic depth, yet remains invariant to surprisal. | 
	The syntactic structures of sentences can be readily read out from the activations of large language models (LLMs). However, the ``structural probes'' that have been developed to reveal this phenomenon are typically evaluated on an indiscriminate set of sentences. Consequently, it remains unclear whether structural and/or statistical factors systematically affect these syntactic representations. To address this issue, we conduct an in-depth analysis of structural probes on three controlled benchmarks. Our results are fourfold. First, structural probes are biased by a superficial property: the closer two words are in a sentence, the more likely structural probes are to consider them syntactically linked. Second, structural probes are challenged by linguistic properties: they poorly represent deep syntactic structures, and are disrupted by interacting nouns or ungrammatical verb forms. Third, structural probes do not appear to be affected by the predictability of individual words under the LLM. Fourth, despite these challenges, structural probes still reveal syntactic links far more accurately than the linear baseline or the LLMs' raw activation spaces. Taken together, this work sheds light on both the challenges and the successes of current structural probes and provides a benchmark of controlled stimuli to better evaluate their performance. | 
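For readers unfamiliar with structural probes, the quantity being evaluated can be sketched in the style of Hewitt and Manning's distance probe: a learned linear map B under which squared L2 distances between contextual word vectors approximate syntactic tree distances. The probe rank, dimensions, and random inputs below are illustrative, not taken from the paper.

```python
import numpy as np

def probe_distances(hidden_states, B):
    """Pairwise squared distances ||B(h_i - h_j)||^2 between word vectors,
    the quantity a structural probe is trained to match to tree distance.
    """
    projected = hidden_states @ B.T            # (num_words, probe_rank)
    diffs = projected[:, None, :] - projected[None, :, :]
    return (diffs ** 2).sum(-1)                # (num_words, num_words)

rng = np.random.default_rng(0)
hidden = rng.normal(size=(6, 768))             # 6 words, toy LLM activations
B = rng.normal(size=(64, 768)) * 0.01          # rank-64 probe (would be learned)
dist = probe_distances(hidden, B)
# A minimum spanning tree over `dist` gives the predicted parse; the probe is
# evaluated by how well those edges match the gold dependency structure.
print(dist.shape, float(dist[0, 1]))
```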
	[
  "Pablo J. Diego Simon",
  "Emmanuel Chemla",
  "Jean-Remi King",
  "Yair Lakretz"
] | 
	https://openreview.net/forum?id=nrZysNmJ0n | 
	nrZysNmJ0n | 
	nrZysNmJ0n | 
	[
  "~Pablo_J._Diego_Simon1",
  "~Emmanuel_Chemla1",
  "~Jean-Remi_King1",
  "~Yair_Lakretz2"
] | 
	{
  "value": "COLM 2025"
} | 
	{
  "value": "colmweb.org/COLM/2025/Conference"
} | 
	{
  "value": "/pdf/c6a94b429ad0d683d6dff35df27378dfcab59ab1.pdf"
} | 
	conference | 
	colmweb.org/COLM/2025/Conference | 2,025 | 
	COLM | 
	[
  "Syntax",
  "LLMs",
  "Probing",
  "Evaluation"
] | 
	I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html | 
	I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html | null | null | 
	@inproceedings{
simon2025probing,
title={Probing Syntax in Large Language Models: Successes and Remaining Challenges},
author={Pablo J. Diego Simon and Emmanuel Chemla and Jean-Remi King and Yair Lakretz},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=nrZysNmJ0n}
} | 
	simon|probing_syntax_in_large_language_models_successes_and_remaining_challenges | null | null | null | null | null | |
| 
	CITER: Collaborative Inference for Efficient Large Language Model Decoding with Token-Level Routing | 
	A novel Collaborative Inference with Token-lEvel Routing (CITER) framework that introduces a token-level routing mechanism, enabling efficient collaboration between small and large language models (SLMs & LLMs). | 
	Large language models have achieved remarkable success in various tasks but suffer from high computational costs during inference, limiting their deployment in resource-constrained applications. To address this issue, we propose a novel Collaborative Inference with Token-lEvel Routing (CITER) framework that enables efficient collaboration between small and large language models (SLMs & LLMs) through a token-level routing strategy. Specifically, CITER routes non-critical tokens to an SLM for efficiency and routes critical tokens to an LLM for generalization quality. We formulate router training as a policy optimization problem, in which the router receives rewards based on both the quality of predictions and the inference costs of generation. This allows the router to learn to predict token-level routing scores and make routing decisions based on both the current token and the future impact of its decisions. To further accelerate the reward evaluation process, we introduce a shortcut that significantly reduces the cost of reward estimation and improves the practicality of our approach. Extensive experiments on five benchmark datasets demonstrate that CITER reduces inference costs while preserving high-quality generation, offering a promising solution for real-time and resource-constrained applications. | 
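A minimal sketch of the token-level routing loop described above; the model and router callables are hypothetical stand-ins, and the policy-optimization training of the router is omitted.

```python
def routed_decode(prompt, slm_next, llm_next, router_score,
                  threshold=0.5, max_tokens=32):
    """Token-level collaborative decoding: at each position the router
    decides whether the small or the large model emits the next token.
    `slm_next`, `llm_next`, and `router_score` are stand-ins for the
    fine-tuned models and the learned router.
    """
    tokens, llm_calls = list(prompt.split()), 0
    for _ in range(max_tokens):
        score = router_score(tokens)           # predicted criticality in [0, 1]
        if score > threshold:
            next_token = llm_next(tokens)      # critical token -> large model
            llm_calls += 1
        else:
            next_token = slm_next(tokens)      # easy token -> small model
        tokens.append(next_token)
        if next_token == "<eos>":
            break
    return " ".join(tokens), llm_calls

# Toy stand-ins: the router flags every fourth position as critical.
out, llm_calls = routed_decode(
    "2 + 2 =",
    slm_next=lambda t: "<eos>" if len(t) > 8 else "step",
    llm_next=lambda t: "4",
    router_score=lambda t: 1.0 if len(t) % 4 == 0 else 0.0,
)
print(out, "| LLM calls:", llm_calls)
```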
	[
  "Wenhao Zheng",
  "Yixiao Chen",
  "Weitong Zhang",
  "Souvik Kundu",
  "Yun Li",
  "Zhengzhong Liu",
  "Eric P. Xing",
  "Hongyi Wang",
  "Huaxiu Yao"
] | 
	https://openreview.net/forum?id=nqX9UYW9Af | 
	nqX9UYW9Af | 
	nqX9UYW9Af | 
	[
  "~Wenhao_Zheng4",
  "~Yixiao_Chen2",
  "~Weitong_Zhang2",
  "~Souvik_Kundu2",
  "~Yun_Li7",
  "~Zhengzhong_Liu1",
  "~Eric_Xing1",
  "~Hongyi_Wang1",
  "~Huaxiu_Yao1"
] | 
	{
  "value": "COLM 2025"
} | 
	{
  "value": "colmweb.org/COLM/2025/Conference"
} | 
	{
  "value": "/pdf/44adc116c02c1f30777bdd97bc65accd40cb593a.pdf"
} | 
	conference | 
	colmweb.org/COLM/2025/Conference | 2,025 | 
	COLM | 
	[
  "collaborative inference",
  "efficient inference",
  "token-level routing",
  "large language model"
] | 
	I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html | 
	I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html | null | null | 
	@inproceedings{
zheng2025citer,
title={{CITER}: Collaborative Inference for Efficient Large Language Model Decoding with Token-Level Routing},
author={Wenhao Zheng and Yixiao Chen and Weitong Zhang and Souvik Kundu and Yun Li and Zhengzhong Liu and Eric P. Xing and Hongyi Wang and Huaxiu Yao},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=nqX9UYW9Af}
} | 
	zheng|citer_collaborative_inference_for_efficient_large_language_model_decoding_with_tokenlevel_routing | null | null | null | null | null | |
| 
	Text Speaks Louder than Vision: ASCII Art Reveals Textual Biases in Vision-Language Models | 
	Multimodal models struggle with adversarial ASCII art images, revealing limitations in information alignment. | 
	Vision-language models (VLMs) have advanced rapidly in processing multimodal information, but their ability to reconcile conflicting signals across modalities remains underexplored. This study investigates how VLMs process ASCII art, a unique medium where textual elements collectively form visual patterns, potentially creating semantic-visual conflicts. We introduce a novel evaluation framework that systematically challenges five state-of-the-art models (including GPT-4o, Claude, and Gemini) using adversarial ASCII art, where character-level semantics deliberately contradict global visual patterns. Our experiments reveal a strong text-priority bias: VLMs consistently prioritize textual information over visual patterns, with visual recognition ability declining dramatically as semantic complexity increases. Various mitigation attempts through visual parameter tuning and prompt engineering yielded only modest improvements, suggesting that this limitation requires architectural-level solutions. These findings uncover fundamental flaws in how current VLMs integrate multimodal information, providing important guidance for future model development while highlighting significant implications for content moderation systems vulnerable to adversarial examples. | 
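A minimal sketch of the kind of adversarial stimulus described above: the visual pattern spells one word while the characters used as "ink" carry conflicting semantics. The glyph bitmaps and filler text are illustrative, not the paper's generation pipeline.

```python
# 5-row glyph bitmaps: 'X' marks positions where a filler character is drawn.
GLYPHS = {
    "H": ["X.X", "X.X", "XXX", "X.X", "X.X"],
    "I": ["XXX", ".X.", ".X.", ".X.", "XXX"],
}

def adversarial_ascii(visual_word, filler_text):
    """Render `visual_word` as ASCII art whose 'ink' consists of the
    characters of `filler_text`, creating a semantic-visual conflict
    (e.g. the picture spells HI while the characters spell 'terrible').
    """
    filler = filler_text.replace(" ", "") or "#"
    rows, idx = [], 0
    for r in range(5):
        line = []
        for letter in visual_word.upper():
            for cell in GLYPHS[letter][r]:
                if cell == "X":
                    line.append(filler[idx % len(filler)])
                    idx += 1
                else:
                    line.append(" ")
            line.append("  ")                  # gap between letters
        rows.append("".join(line))
    return "\n".join(rows)

print(adversarial_ascii("HI", "terrible"))     # picture: HI; characters: 'terrible...'
```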
	[
  "Zhaochen Wang",
  "Bryan Hooi",
  "Yiwei Wang",
  "Ming-Hsuan Yang",
  "Zi Huang",
  "Yujun Cai"
] | 
	https://openreview.net/forum?id=naEyNVTLsh | 
	naEyNVTLsh | 
	naEyNVTLsh | 
	[
  "~Zhaochen_Wang1",
  "~Bryan_Hooi1",
  "~Yiwei_Wang2",
  "~Ming-Hsuan_Yang1",
  "~Zi_Huang1",
  "~Yujun_Cai1"
] | 
	{
  "value": "COLM 2025"
} | 
	{
  "value": "colmweb.org/COLM/2025/Conference"
} | 
	{
  "value": "/pdf/c8c8412cde77ce8cdb46ae02a22e6dfe87928ea8.pdf"
} | 
	conference | 
	colmweb.org/COLM/2025/Conference | 2,025 | 
	COLM | 
	[
  "vision language model",
  "ASCII art",
  "sentiment analysis"
] | 
	I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html | 
	I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html | null | null | 
	@inproceedings{
wang2025text,
title={Text Speaks Louder than Vision: {ASCII} Art Reveals Textual Biases in Vision-Language Models},
author={Zhaochen Wang and Bryan Hooi and Yiwei Wang and Ming-Hsuan Yang and Zi Huang and Yujun Cai},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=naEyNVTLsh}
} | 
	wang|text_speaks_louder_than_vision_ascii_art_reveals_textual_biases_in_visionlanguage_models | null | null | null | null | null | 