| title (string, length 21-128) | content_TLDR (string, length 40-250) | abstract (string, length 613-2.09k) | authors (list, length 1-42) | openreview_url (string, length 42) | id (string, length 10) | forum (string, length 10) | authorids (list, length 1-42) | venue (dict) | venueid (dict) | pdf_url (dict) | invitation (string, 1 class) | group (string, 1 class) | venue_name (string, 1 class) | year (int64, 2,025) | conference (string, 1 class) | content_keywords (list, length 1-16) | content_code_of_ethics (string, 1 class) | content_author_guide (string, 1 class) | content_flagged_for_ethics_review (bool, 1 class) | content_ethics_comments (string, 11 classes) | content__bibtex (string, length 246-1.01k) | content_paperhash (string, length 29-134) | content_supplementary_material (string, 73 classes) | content_award_nomination (bool, 1 class) | content_reciprocal_reviewing_status (string, 1 class) | content_reciprocal_reviewing_author (string, 4 classes) | content_reciprocal_reviewing_exemption_reason (dict) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Don’t lie to your friends: Learning what you know from collaborative self-play
|
Task-level reward in a multi-agent collaborative game can teach the agents about calibration and selective tool use
|
To be helpful assistants, AI agents must be aware of their own capabilities and limitations. This includes knowing when to answer from parametric knowledge versus using tools, when to trust tool outputs, and when to abstain or hedge. Such capabilities are hard to teach through supervised fine-tuning because they require constructing examples that reflect the agent's specific capabilities. We therefore propose a radically new approach to teaching agents what they know: \emph{collaborative self-play}. We construct multi-agent collaborations in which the group is rewarded for collectively arriving at correct answers. The desired meta-knowledge emerges from the incentives built into the structure of the interaction. We focus on small societies of agents that have access to heterogeneous tools (corpus-specific retrieval), and therefore must collaborate to maximize their success with minimal effort. Experiments show that group-level rewards for multi-agent communities can induce policies that \emph{transfer} to improve tool use and selective prediction in single-agent scenarios.
|
[
"Jacob Eisenstein",
"Reza Aghajani",
"Adam Fisch",
"Dheeru Dua",
"Fantine Huot",
"Mirella Lapata",
"Vicky Zayats",
"Jonathan Berant"
] |
https://openreview.net/forum?id=2vDJiGUfhV
|
2vDJiGUfhV
|
2vDJiGUfhV
|
[
"~Jacob_Eisenstein1",
"~Reza_Aghajani2",
"~Adam_Fisch2",
"~Dheeru_Dua2",
"~Fantine_Huot1",
"~Mirella_Lapata1",
"~Vicky_Zayats1",
"~Jonathan_Berant1"
] |
{
"value": "COLM 2025"
}
|
{
"value": "colmweb.org/COLM/2025/Conference"
}
|
{
"value": "/pdf/33b60b49c794c267262ad17bde96770ae321292d.pdf"
}
|
conference
|
colmweb.org/COLM/2025/Conference
| 2,025
|
COLM
|
[
"self-play",
"calibration",
"tool use",
"multiagent"
] |
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
|
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
| null | null |
@inproceedings{
eisenstein2025dont,
title={Don{\textquoteright}t lie to your friends: Learning what you know from collaborative self-play},
author={Jacob Eisenstein and Reza Aghajani and Adam Fisch and Dheeru Dua and Fantine Huot and Mirella Lapata and Vicky Zayats and Jonathan Berant},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=2vDJiGUfhV}
}
|
eisenstein|dont_lie_to_your_friends_learning_what_you_know_from_collaborative_selfplay
| null | true
| null | null | null |
|
RepoST: Scalable Repository-Level Coding Environment Construction with Sandbox Testing
|
Automatically constructing repo-level execution-based environments for training and evaluation.
|
We present RepoST, a scalable method to construct environments that provide execution feedback for repository-level code generation for both training and evaluation. Unlike existing works that aim to build entire repositories for execution, which is challenging for both humans and LLMs, we provide execution feedback with sandbox testing, which isolates a given target function and its dependencies to a separate script for testing. Sandbox testing reduces the complexity of external dependencies and enables constructing environments at a large scale. We use our method to construct RepoST-Train, a large-scale train set with 7,415 functions from 832 repositories. Training with the execution feedback provided by RepoST-Train leads to a performance gain of 5.5% Pass@1 on HumanEval and 3.5% Pass@1 on RepoEval. We also build an evaluation dataset, RepoST-Eval, and benchmark 12 code generation models. Code and datasets available at https://repost-code-gen.github.io/
|
[
"Yiqing Xie",
"Alex Xie",
"Divyanshu Sheth",
"Pengfei Liu",
"Daniel Fried",
"Carolyn Rose"
] |
https://openreview.net/forum?id=2txrMBpw3q
|
2txrMBpw3q
|
2txrMBpw3q
|
[
"~Yiqing_Xie1",
"~Alex_Xie1",
"~Divyanshu_Sheth1",
"~Pengfei_Liu1",
"~Daniel_Fried1",
"~Carolyn_Rose1"
] |
{
"value": "COLM 2025"
}
|
{
"value": "colmweb.org/COLM/2025/Conference"
}
|
{
"value": "/pdf/0f0642854eb629ef6d848738749e7fc2ffc1b800.pdf"
}
|
conference
|
colmweb.org/COLM/2025/Conference
| 2,025
|
COLM
|
[
"code generation training",
"repo-level code generation"
] |
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
|
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
| null | null |
@inproceedings{
xie2025repost,
title={Repo{ST}: Scalable Repository-Level Coding Environment Construction with Sandbox Testing},
author={Yiqing Xie and Alex Xie and Divyanshu Sheth and Pengfei Liu and Daniel Fried and Carolyn Rose},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=2txrMBpw3q}
}
|
xie|repost_scalable_repositorylevel_coding_environment_construction_with_sandbox_testing
|
/attachment/bc410e83bf051f2b0b987d2ac6fc926e03c3548d.zip
| null | null | null | null |
|
2 OLMo 2 Furious (COLM’s Version)
|
We release OLMo 2, a family of fully open 7B, 13B, and 32B models achieving competitive performance at lower computational cost while providing transparency through released training data, code, checkpoints, and more.
|
We present OLMo 2, the next generation of our fully open language models. OLMo 2 includes a family of dense autoregressive language models at 7B, 13B, and 32B scales with fully released artifacts—model weights, full training data, training code and recipes, training logs, and thousands of intermediate checkpoints. In this work, we describe our modified model architecture and training recipe, focusing on techniques for achieving better training stability and improved per-token efficiency. Our updated pretraining data mixture introduces a new, specialized data mix called Dolmino Mix 1124, which significantly improves model capabilities across many downstream task benchmarks when introduced via late-stage curriculum training (i.e., specialized data during the annealing phase of pretraining). Finally, we incorporate best practices from Tülu 3 to develop OLMo 2-Instruct, focusing on permissive data and extending our final-stage reinforcement learning with verifiable rewards (RLVR). Our OLMo 2 base models sit at the Pareto frontier of performance-to-training compute, often matching or outperforming open-weight-only models like Llama 3.1, Qwen 2.5, and Gemma 2 while using fewer FLOPs and with fully transparent training data, code, and recipe. Our fully open OLMo 2-Instruct models are competitive with open-weight-only models of comparable size and even some proprietary models like GPT-3.5 Turbo and GPT-4o Mini.
|
[
"Evan Pete Walsh",
"Luca Soldaini",
"Dirk Groeneveld",
"Kyle Lo",
"Shane Arora",
"Akshita Bhagia",
"Yuling Gu",
"Shengyi Huang",
"Matt Jordan",
"Nathan Lambert",
"Dustin Schwenk",
"Oyvind Tafjord",
"Taira Anderson",
"David Atkinson",
"Faeze Brahman",
"Christopher Clark",
"Pradeep Dasigi",
"Nouha Dziri",
"Allyson Ettinger",
"Michal Guerquin",
"David Heineman",
"Hamish Ivison",
"Pang Wei Koh",
"Jiacheng Liu",
"Saumya Malik",
"William Merrill",
"Lester James Validad Miranda",
"Jacob Morrison",
"Tyler Murray",
"Crystal Nam",
"Jake Poznanski",
"Valentina Pyatkin",
"Aman Rangapur",
"Michael Schmitz",
"Sam Skjonsberg",
"David Wadden",
"Christopher Wilhelm",
"Michael Wilson",
"Luke Zettlemoyer",
"Ali Farhadi",
"Noah A. Smith",
"Hannaneh Hajishirzi"
] |
https://openreview.net/forum?id=2ezugTT9kU
|
2ezugTT9kU
|
2ezugTT9kU
|
[
"~Evan_Pete_Walsh1",
"~Luca_Soldaini1",
"~Dirk_Groeneveld1",
"~Kyle_Lo1",
"~Shane_Arora1",
"~Akshita_Bhagia1",
"~Yuling_Gu1",
"~Shengyi_Huang1",
"~Matt_Jordan1",
"~Nathan_Lambert1",
"~Dustin_Schwenk1",
"~Oyvind_Tafjord2",
"~Taira_Anderson1",
"~David_Atkinson3",
"~Faeze_Brahman1",
"~Christopher_Clark1",
"~Pradeep_Dasigi1",
"~Nouha_Dziri2",
"~Allyson_Ettinger1",
"~Michal_Guerquin1",
"~David_Heineman1",
"~Hamish_Ivison1",
"~Pang_Wei_Koh1",
"~Jiacheng_Liu2",
"~Saumya_Malik1",
"~William_Merrill1",
"~Lester_James_Validad_Miranda1",
"~Jacob_Morrison2",
"~Tyler_Murray2",
"~Crystal_Nam1",
"~Jake_Poznanski1",
"~Valentina_Pyatkin1",
"~Aman_Rangapur1",
"~Michael_Schmitz1",
"~Sam_Skjonsberg1",
"~David_Wadden1",
"~Christopher_Wilhelm1",
"~Michael_Wilson4",
"~Luke_Zettlemoyer1",
"~Ali_Farhadi3",
"~Noah_A._Smith2",
"~Hannaneh_Hajishirzi1"
] |
{
"value": "COLM 2025"
}
|
{
"value": "colmweb.org/COLM/2025/Conference"
}
|
{
"value": "/pdf/ee2c137da42a7d7cd97b58127c3b38b1bd47107d.pdf"
}
|
conference
|
colmweb.org/COLM/2025/Conference
| 2,025
|
COLM
|
[
"language model",
"pretraining",
"training stability",
"training data",
"instruction tuning"
] |
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
|
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
| null | null |
@inproceedings{
walsh2025,
title={2 {OLM}o 2 Furious ({COLM}{\textquoteright}s Version)},
author={Evan Pete Walsh and Luca Soldaini and Dirk Groeneveld and Kyle Lo and Shane Arora and Akshita Bhagia and Yuling Gu and Shengyi Huang and Matt Jordan and Nathan Lambert and Dustin Schwenk and Oyvind Tafjord and Taira Anderson and David Atkinson and Faeze Brahman and Christopher Clark and Pradeep Dasigi and Nouha Dziri and Allyson Ettinger and Michal Guerquin and David Heineman and Hamish Ivison and Pang Wei Koh and Jiacheng Liu and Saumya Malik and William Merrill and Lester James Validad Miranda and Jacob Morrison and Tyler Murray and Crystal Nam and Jake Poznanski and Valentina Pyatkin and Aman Rangapur and Michael Schmitz and Sam Skjonsberg and David Wadden and Christopher Wilhelm and Michael Wilson and Luke Zettlemoyer and Ali Farhadi and Noah A. Smith and Hannaneh Hajishirzi},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=2ezugTT9kU}
}
|
walsh|2_olmo_2_furious_colms_version
| null | null |
This submission is NOT exempt from the Reciprocal Reviewing requirement. (We expect most submissions to fall in this category.)
|
~Kyle_Lo1
|
{
"readers": [
"colmweb.org/COLM/2025/Conference",
"colmweb.org/COLM/2025/Conference/Submission1136/Authors"
]
}
|
|
SUV: Scalable Large Language Model Copyright Compliance with Regularized Selective Unlearning
|
TL;DR: We propose SUV, a selective unlearning framework that excises memorized copyrighted content from LLMs without compromising their overall performance.
|
Large Language Models (LLMs) have transformed natural language processing by learning from massive datasets, yet this rapid progress has also drawn legal scrutiny, as the ability to unintentionally generate copyrighted content has already prompted several prominent lawsuits. In this work, we introduce SUV (Selective Unlearning for Verbatim data), a selective unlearning framework designed to prevent LLM from memorizing copyrighted content while preserving its overall utility. In detail, the proposed method constructs a dataset that captures instances of copyrighted infringement cases by the targeted LLM. With the dataset, we unlearn the content from the LLM by means of Direct Preference Optimization (DPO), which replaces the verbatim copyrighted content with plausible and coherent alternatives. Since DPO may hinder the LLM’s performance in other unrelated tasks, we integrate gradient projection and Fisher information regularization to mitigate the degradation. We validate our approach using a large-scale dataset of 500 famous books (predominantly copyrighted works) and demonstrate that SUV significantly reduces verbatim memorization with negligible impact on the performance on unrelated tasks. Extensive experiments on both our dataset and public benchmarks confirm the scalability and efficacy of our approach, offering a promising solution for mitigating copyright risks in real-world LLM applications.
|
[
"Tianyang Xu",
"Xiaoze Liu",
"Feijie Wu",
"Xiaoqian Wang",
"Jing Gao"
] |
https://openreview.net/forum?id=2YdSsi0bxK
|
2YdSsi0bxK
|
2YdSsi0bxK
|
[
"~Tianyang_Xu3",
"~Xiaoze_Liu1",
"~Feijie_Wu1",
"~Xiaoqian_Wang1",
"~Jing_Gao1"
] |
{
"value": "COLM 2025"
}
|
{
"value": "colmweb.org/COLM/2025/Conference"
}
|
{
"value": "/pdf/23716f5cef278592db52547b21cc4ce2972d69ed.pdf"
}
|
conference
|
colmweb.org/COLM/2025/Conference
| 2,025
|
COLM
|
[
"LLMs; Calibration; Copyright; Unlearning"
] |
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
|
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
| null | null |
@inproceedings{
xu2025suv,
title={{SUV}: Scalable Large Language Model Copyright Compliance with Regularized Selective Unlearning},
author={Tianyang Xu and Xiaoze Liu and Feijie Wu and Xiaoqian Wang and Jing Gao},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=2YdSsi0bxK}
}
|
xu|suv_scalable_large_language_model_copyright_compliance_with_regularized_selective_unlearning
| null | null | null | null | null |
|
PredGen: Accelerated Inference of Large Language Models through Input-Time Speculation for Real-Time Speech Interaction
|
We leverage speculative decoding at user input time to reduce the latency of speech interactions.
|
Large Language Models (LLMs) are widely used in real-time voice chat applications, typically in combination with text-to-speech (TTS) systems to generate audio responses. However, their large size often leads to noticeable latency between the end of user input and the start of audio output, resulting in suboptimal user experiences. This latency is particularly evident when LLMs are deployed as single-user voice assistants on consumer-grade hardware with limited computing capacity. We discovered that this latency is primarily dominated by the time it takes for the LLMs to generate the first sentence, which is required as input by the TTS systems that synthesize audio responses on a sentence-by-sentence basis. To address this bottleneck, we propose Predictive Generation (PredGen), a novel framework that mitigates—or even eliminates—this delay through speculative decoding at input time. PredGen generates candidate responses while the user is still speaking, enabling the system to begin TTS processing with minimal delay. Simulated experiments on the Lmsys and MT-Bench datasets show that the proposed method can effectively reduce the latency by around 2× across a wide range of use cases, while incurring only minimal additional computation cost at input time—computation that would otherwise go unused.
|
[
"Shufan Li",
"Aditya Grover"
] |
https://openreview.net/forum?id=2Kl8Ztw6wk
|
2Kl8Ztw6wk
|
2Kl8Ztw6wk
|
[
"~Shufan_Li1",
"~Aditya_Grover1"
] |
{
"value": "COLM 2025"
}
|
{
"value": "colmweb.org/COLM/2025/Conference"
}
|
{
"value": "/pdf/9999eac5fd953cddbee91e9c9d9168d017c657f5.pdf"
}
|
conference
|
colmweb.org/COLM/2025/Conference
| 2,025
|
COLM
|
[
"Large Language Models",
"Inference",
"Speculative Decoding"
] |
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
|
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
| null | null |
@inproceedings{
li2025predgen,
title={PredGen: Accelerated Inference of Large Language Models through Input-Time Speculation for Real-Time Speech Interaction},
author={Shufan Li and Aditya Grover},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=2Kl8Ztw6wk}
}
|
li|predgen_accelerated_inference_of_large_language_models_through_inputtime_speculation_for_realtime_speech_interaction
| null | null | null | null | null |
|
Language models align with brain regions that represent concepts across modalities
|
We find that LMs predict brain signal better in areas that respond more consistently to the same concept encoded in different modalities.
|
Cognitive science and neuroscience have long faced the challenge of disentangling representations of language from representations of conceptual meaning. As the same problem arises in today's language models (LMs), we investigate the relationship between LM--brain alignment and two neural metrics: (1) the level of brain activation during processing of sentences, targeting linguistic processing, and (2) a novel measure of meaning consistency across input modalities, which quantifies how consistently a brain region responds to the same concept across paradigms (sentence, word cloud, image) using an fMRI dataset (Pereira et al., 2018). Our experiments show that both language-only and language-vision models predict the signal better in more meaning-consistent areas of the brain, even when these areas are not strongly sensitive to language processing, suggesting that LMs might internally represent cross-modal conceptual meaning.
|
[
"Maria Ryskina",
"Greta Tuckute",
"Alexander Fung",
"Ashley Malkin",
"Evelina Fedorenko"
] |
https://openreview.net/forum?id=2JohTFaGbW
|
2JohTFaGbW
|
2JohTFaGbW
|
[
"~Maria_Ryskina1",
"~Greta_Tuckute1",
"~Alexander_Fung1",
"~Ashley_Malkin1",
"~Evelina_Fedorenko1"
] |
{
"value": "COLM 2025"
}
|
{
"value": "colmweb.org/COLM/2025/Conference"
}
|
{
"value": "/pdf/429ed5a7923c63dc595d8547a5686bc7ac1fc9bd.pdf"
}
|
conference
|
colmweb.org/COLM/2025/Conference
| 2,025
|
COLM
|
[
"LM–brain alignment",
"fMRI",
"conceptual meaning",
"cognitive neuroscience"
] |
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
|
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
| null | null |
@inproceedings{
ryskina2025language,
title={Language models align with brain regions that represent concepts across modalities},
author={Maria Ryskina and Greta Tuckute and Alexander Fung and Ashley Malkin and Evelina Fedorenko},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=2JohTFaGbW}
}
|
ryskina|language_models_align_with_brain_regions_that_represent_concepts_across_modalities
| null | null | null | null | null |
|
Truth-value judgment in language models: ‘truth directions’ are context sensitive
|
Investigation of the in-context behaviour of LLM 'truth directions'.
|
Recent work has demonstrated that the latent spaces of large language models (LLMs) contain directions predictive of the truth of sentences. Multiple methods recover such directions and build probes that are described as uncovering a model’s “knowledge” or “beliefs”. We investigate this phenomenon, looking closely at the impact of context on the probes. Our experiments establish where in the LLM the probe’s predictions are (most) sensitive to the presence of related sentences, and how to best characterize this kind of sensitivity. We do so by measuring different types of consistency errors that occur after probing an LLM whose inputs consist of hypotheses preceded by (negated) supporting and contradicting sentences. We also perform a causal intervention experiment, investigating whether moving the representation of a premise along these truth-value directions influences the position of an entailed or contradicted sentence along that same direction. We find that the probes we test are generally context sensitive, but that contexts which should not affect the truth often still impact the probe outputs. Our experiments show that the type of errors depend on the layer, the model, and the kind of data. Finally, our results suggest that truth-value directions are causal mediators in the inference process that incorporates in-context information.
|
[
"Stefan F. Schouten",
"Peter Bloem",
"Ilia Markov",
"Piek Vossen"
] |
https://openreview.net/forum?id=2H85485yAb
|
2H85485yAb
|
2H85485yAb
|
[
"~Stefan_F._Schouten1",
"~Peter_Bloem1",
"~Ilia_Markov2",
"~Piek_Vossen2"
] |
{
"value": "COLM 2025"
}
|
{
"value": "colmweb.org/COLM/2025/Conference"
}
|
{
"value": "/pdf/e6dc394daf825c536dba3787225b727ce1977be1.pdf"
}
|
conference
|
colmweb.org/COLM/2025/Conference
| 2,025
|
COLM
|
[
"mechinterp",
"mechanistic interpretability",
"interpretability",
"truth directions",
"LLM beliefs",
"large language model",
"llm"
] |
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
|
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
| null | null |
@inproceedings{
schouten2025truthvalue,
title={Truth-value judgment in language models: {\textquoteleft}truth directions{\textquoteright} are context sensitive},
author={Stefan F. Schouten and Peter Bloem and Ilia Markov and Piek Vossen},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=2H85485yAb}
}
|
schouten|truthvalue_judgment_in_language_models_truth_directions_are_context_sensitive
| null | null | null | null | null |
|
Context-Adaptive Multi-Prompt Embedding with Large Language Models for Vision-Language Alignment
|
We propose a method that uses multiple context-adaptive prompts with a pretrained LLM to generate diverse text embeddings for contrastive vision-language learning.
|
We propose Context-Adaptive Multi-Prompt Embedding, a novel approach to enrich semantic representations in vision-language contrastive learning. Unlike standard CLIP-style models that rely on a single text embedding, our method introduces multiple structured prompts, each containing a distinct adaptive token that captures diverse semantic aspects of the input text. We leverage a pretrained LLM as the text encoder within the CLIP framework, processing all prompts jointly in a single forward pass. The resulting prompt embeddings are combined into a unified text representation, enabling semantically richer alignment with visual features. To further promote semantic diversity and representation quality, we incorporate a diversity regularization loss and a negation-aware loss, encouraging specialization across prompts and improving contrastive discrimination. Our method achieves consistent improvements on both image-text and video-text retrieval benchmarks.
|
[
"Dahun Kim",
"Anelia Angelova"
] |
https://openreview.net/forum?id=29jP6OsrIQ
|
29jP6OsrIQ
|
29jP6OsrIQ
|
[
"~Dahun_Kim1",
"~Anelia_Angelova1"
] |
{
"value": "COLM 2025"
}
|
{
"value": "colmweb.org/COLM/2025/Conference"
}
|
{
"value": "/pdf/f81ca9025bba2c317f120f349e32aa1c6814c7b8.pdf"
}
|
conference
|
colmweb.org/COLM/2025/Conference
| 2,025
|
COLM
|
[
"CLIP",
"contrastive learning",
"LLM embedding"
] |
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
|
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
| null | null |
@inproceedings{
kim2025contextadaptive,
title={Context-Adaptive Multi-Prompt Embedding with Large Language Models for Vision-Language Alignment},
author={Dahun Kim and Anelia Angelova},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=29jP6OsrIQ}
}
|
kim|contextadaptive_multiprompt_embedding_with_large_language_models_for_visionlanguage_alignment
|
/attachment/5bc95b106630bdf9b944809b0efbe6b8eb7c0a1f.zip
| null | null | null | null |
|
FalseReject: A Resource for Improving Contextual Safety and Mitigating Over-Refusals in LLMs via Structured Reasoning
|
We introduce FalseReject, a dataset and method to reduce unnecessary refusals in language models, significantly improving their balance between safety and helpfulness.
|
Safety alignment approaches in large language models (LLMs) often lead to the over-refusal of benign queries, significantly diminishing their utility in sensitive scenarios. To address this challenge, we introduce FalseReject, a comprehensive resource containing 16k seemingly toxic queries accompanied by structured responses across 44 safety-related categories. We propose a graph-informed adversarial multi-agent interaction framework to generate diverse and complex prompts, while structuring responses with explicit reasoning to aid models in accurately distinguishing safe from unsafe contexts. FalseReject includes training datasets tailored for both standard instruction-tuned models and reasoning-oriented models, as well as a human-annotated benchmark test set. Our extensive benchmarking on 29 state-of-the-art (SOTA) LLMs reveals persistent over-refusal challenges. Empirical results demonstrate that supervised finetuning with FalseReject substantially reduces unnecessary refusals without compromising overall safety or general language capabilities.
|
[
"Zhehao Zhang",
"Weijie Xu",
"Fanyou Wu",
"Chandan K. Reddy"
] |
https://openreview.net/forum?id=1w9Hay7tvm
|
1w9Hay7tvm
|
1w9Hay7tvm
|
[
"~Zhehao_Zhang1",
"~Weijie_Xu1",
"~Fanyou_Wu1",
"~Chandan_K._Reddy1"
] |
{
"value": "COLM 2025"
}
|
{
"value": "colmweb.org/COLM/2025/Conference"
}
|
{
"value": "/pdf/ec166f8ec049c9b30eba1ec7ea72c691f741fb37.pdf"
}
|
conference
|
colmweb.org/COLM/2025/Conference
| 2,025
|
COLM
|
[
"Over-refusal",
"Safety",
"Instruction Tunning"
] |
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
|
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
| null | null |
@inproceedings{
zhang2025falsereject,
title={FalseReject: A Resource for Improving Contextual Safety and Mitigating Over-Refusals in {LLM}s via Structured Reasoning},
author={Zhehao Zhang and Weijie Xu and Fanyou Wu and Chandan K. Reddy},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=1w9Hay7tvm}
}
|
zhang|falsereject_a_resource_for_improving_contextual_safety_and_mitigating_overrefusals_in_llms_via_structured_reasoning
| null | null | null | null | null |
|
Modifying Large Language Model Post-Training for Diverse Creative Writing
|
In creative writing generation, we facilitate diversity in LLM outputs by taking into account how each training instance differs from other instances with the same prompt.
|
As creative writing tasks do not have singular correct answers, large language models (LLMs) trained to perform these tasks should be able to generate diverse valid outputs. However, LLM post-training often focuses on improving generation quality but neglects to facilitate output diversity. Hence, in creative writing generation, we investigate post-training approaches to promote both output diversity and quality. Our core idea is to include deviation---the degree of difference between a training sample and all other samples with the same prompt---in the training objective to facilitate learning from rare high-quality instances. By adopting our approach to direct preference optimization (DPO) and odds ratio preference optimization (ORPO), we demonstrate that we can promote the output diversity of trained models while minimally decreasing quality. Our best model with 8B parameters could achieve on-par diversity as a human-created dataset while having output quality similar to the best instruction-tuned models we examined, GPT-4o and DeepSeek-R1. We further validate our approaches with a human evaluation, an ablation, and a comparison to an existing diversification approach, DivPO.
|
[
"John Joon Young Chung",
"Vishakh Padmakumar",
"Melissa Roemmele",
"Yuqian Sun",
"Max Kreminski"
] |
https://openreview.net/forum?id=1Pmuw08LoM
|
1Pmuw08LoM
|
1Pmuw08LoM
|
[
"~John_Joon_Young_Chung1",
"~Vishakh_Padmakumar1",
"~Melissa_Roemmele1",
"~Yuqian_Sun1",
"~Max_Kreminski1"
] |
{
"value": "COLM 2025"
}
|
{
"value": "colmweb.org/COLM/2025/Conference"
}
|
{
"value": "/pdf/46f97e4db02cec3f04f7d04c3fc6e7a599a585af.pdf"
}
|
conference
|
colmweb.org/COLM/2025/Conference
| 2,025
|
COLM
|
[
"creative writing generation",
"diversity",
"post-training"
] |
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
|
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
| null | null |
@inproceedings{
chung2025modifying,
title={Modifying Large Language Model Post-Training for Diverse Creative Writing},
author={John Joon Young Chung and Vishakh Padmakumar and Melissa Roemmele and Yuqian Sun and Max Kreminski},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=1Pmuw08LoM}
}
|
chung|modifying_large_language_model_posttraining_for_diverse_creative_writing
|
/attachment/4ba9fd43d8499d750dec728005bede64a9ef32bf.zip
| null | null | null | null |
|
BigCharts-R1: Enhanced Chart Reasoning with Visual Reinforcement Finetuning
|
We propose BigCharts, a diverse chart dataset, and a training framework that improves VLMs’ chart comprehension.
|
Charts are essential to data analysis, transforming raw data into clear visual representations that support human decision-making.
Although current vision-language models (VLMs) have made significant progress, they continue to struggle with chart comprehension due to training on datasets that lack diversity and real-world authenticity, or on automatically extracted underlying data tables of charts, which can contain numerous estimation errors. Furthermore, existing models only rely on supervised fine-tuning using these low-quality datasets, severely limiting their effectiveness. To address these issues, we first propose BigCharts, a dataset creation pipeline that generates visually diverse chart images by conditioning the rendering process on real-world charts sourced from multiple online platforms.
Unlike purely synthetic datasets, BigCharts incorporates real-world data, ensuring authenticity and visual diversity, while still retaining accurate underlying data due to our proposed replotting process. Additionally, we introduce a comprehensive training framework that integrates supervised fine-tuning with Group Relative Policy Optimization (GRPO)-based reinforcement learning. By introducing novel reward signals specifically designed for chart reasoning, our approach enhances model robustness and generalization across diverse chart styles and domains, resulting in a state-of-the-art chart reasoning model, BigCharts-R1. Extensive experiments demonstrate that our models surpass existing methods on multiple chart question-answering benchmarks compared to even larger open-source and closed-source models.
|
[
"Ahmed Masry",
"Abhay Puri",
"Masoud Hashemi",
"Juan A. Rodriguez",
"Megh Thakkar",
"Khyati Mahajan",
"Vikas Yadav",
"Sathwik Tejaswi Madhusudhan",
"Alexandre Piché",
"Dzmitry Bahdanau",
"Christopher Pal",
"David Vazquez",
"Enamul Hoque",
"Perouz Taslakian",
"Sai Rajeswar",
"Spandana Gella"
] |
https://openreview.net/forum?id=19fydz1QnW
|
19fydz1QnW
|
19fydz1QnW
|
[
"~Ahmed_Masry1",
"~Abhay_Puri1",
"~Masoud_Hashemi1",
"~Juan_A._Rodriguez1",
"~Megh_Thakkar1",
"~Khyati_Mahajan1",
"~Vikas_Yadav2",
"~Sathwik_Tejaswi_Madhusudhan2",
"~Alexandre_Piché1",
"~Dzmitry_Bahdanau1",
"~Christopher_Pal1",
"~David_Vazquez1",
"~Enamul_Hoque2",
"~Perouz_Taslakian1",
"~Sai_Rajeswar2",
"~Spandana_Gella2"
] |
{
"value": "COLM 2025"
}
|
{
"value": "colmweb.org/COLM/2025/Conference"
}
|
{
"value": "/pdf/1a434bfeb2d15ce12abaf1e3a291e089decb700a.pdf"
}
|
conference
|
colmweb.org/COLM/2025/Conference
| 2,025
|
COLM
|
[
"charts",
"chartqa",
"vision language models",
"multimodal"
] |
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
|
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
| null | null |
@inproceedings{
masry2025bigchartsr,
title={BigCharts-R1: Enhanced Chart Reasoning with Visual Reinforcement Finetuning},
author={Ahmed Masry and Abhay Puri and Masoud Hashemi and Juan A. Rodriguez and Megh Thakkar and Khyati Mahajan and Vikas Yadav and Sathwik Tejaswi Madhusudhan and Alexandre Pich{\'e} and Dzmitry Bahdanau and Christopher Pal and David Vazquez and Enamul Hoque and Perouz Taslakian and Sai Rajeswar and Spandana Gella},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=19fydz1QnW}
}
|
masry|bigchartsr1_enhanced_chart_reasoning_with_visual_reinforcement_finetuning
| null | null | null | null | null |
|
Noiser: Bounded Input Perturbations for Attributing Large Language Models
|
We introduce Noiser, a bounded perturbation-based method to explain LLM predictions and tackle distribution shift. We also propose answerability, a new metric that evaluates the relevance of rationales without gold data or human evaluation.
|
Feature attribution (FA) methods are common post-hoc approaches that explain how Large Language Models (LLMs) make predictions. Accordingly, generating faithful attributions that reflect the actual inner behavior of the model is crucial. In this paper, we introduce Noiser, a perturbation-based FA method that imposes bounded noise on each input embedding and measures the robustness of the model against partially noised input to obtain the input attributions. Additionally, we propose an answerability metric that employs an instructed judge model to assess the extent to which highly scored tokens suffice to recover the predicted output. Through a comprehensive evaluation across six LLMs and three tasks, we demonstrate that Noiser consistently outperforms existing gradient-based, attention-based, and perturbation-based FA methods in terms of both faithfulness and answerability, making it a robust and effective approach for explaining language model predictions.
|
[
"Mohammad Reza Ghasemi Madani",
"Aryo Pradipta Gema",
"Yu Zhao",
"Gabriele Sarti",
"Pasquale Minervini",
"Andrea Passerini"
] |
https://openreview.net/forum?id=17yFbHmblo
|
17yFbHmblo
|
17yFbHmblo
|
[
"~Mohammad_Reza_Ghasemi_Madani1",
"~Aryo_Pradipta_Gema1",
"~Yu_Zhao13",
"~Gabriele_Sarti1",
"~Pasquale_Minervini4",
"~Andrea_Passerini2"
] |
{
"value": "COLM 2025"
}
|
{
"value": "colmweb.org/COLM/2025/Conference"
}
|
{
"value": "/pdf/b3e0eeffcd43001aba85a5e79caa97470f363e5b.pdf"
}
|
conference
|
colmweb.org/COLM/2025/Conference
| 2,025
|
COLM
|
[
"Feature Attribution",
"Post-hoc Explanations",
"Large Language Model",
"LLMs"
] |
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
|
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
| null | null |
@inproceedings{
madani2025noiser,
title={Noiser: Bounded Input Perturbations for Attributing Large Language Models},
author={Mohammad Reza Ghasemi Madani and Aryo Pradipta Gema and Yu Zhao and Gabriele Sarti and Pasquale Minervini and Andrea Passerini},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=17yFbHmblo}
}
|
madani|noiser_bounded_input_perturbations_for_attributing_large_language_models
| null | null | null | null | null |
|
ALFA: Aligning LLMs to Ask Good Questions A Case Study in Clinical Reasoning
|
A novel alignment recipe that teaches LLMs to achieve complex goals by (1) decomposing the goal into more tangible attributes, (2) creating synthetic data, and (3) integrating the attributes.
|
Large language models (LLMs) often fail to ask effective questions under uncertainty, making them unreliable in domains where proactive information-gathering is essential for decision-making. We present ALignment via Fine-grained Attributes (ALFA), a framework that improves LLM question-asking by (i) decomposing the notion of a “good” question into a set of theory-grounded attributes (e.g., clarity, relevance), (ii) controllably synthesizing attribute-specific question variations, and (iii) aligning models via preference-based optimization to explicitly learn to ask better questions along these fine-grained attributes. Focusing on clinical reasoning as a case study, we introduce the MediQ-AskDocs dataset, composed of 17k real-world clinical interactions augmented with 80k attribute-specific preference pairs of follow-up questions, as well as a novel expert-annotated interactive healthcare QA task to evaluate question-asking abilities. Models aligned with ALFA reduce diagnostic errors by 56.6% on MediQ-AskDocs compared to SoTA instruction-tuned LLMs, with a question-level win-rate of 64.4% and strong generalizability. Our findings suggest that explicitly guiding question-asking with structured, fine-grained attributes offers a scalable path to improve LLMs, especially in expert application domains.
|
[
"Shuyue Stella Li",
"Jimin Mun",
"Faeze Brahman",
"Pedram Hosseini",
"Bryceton G. Thomas",
"Jessica M. Sin",
"Bing Ren",
"Jonathan S. Ilgen",
"Yulia Tsvetkov",
"Maarten Sap"
] |
https://openreview.net/forum?id=12u7diwku0
|
12u7diwku0
|
12u7diwku0
|
[
"~Shuyue_Stella_Li1",
"~Jimin_Mun1",
"~Faeze_Brahman1",
"~Pedram_Hosseini1",
"~Bryceton_G._Thomas1",
"~Jessica_M._Sin1",
"~Bing_Ren1",
"~Jonathan_S._Ilgen1",
"~Yulia_Tsvetkov1",
"~Maarten_Sap1"
] |
{
"value": "COLM 2025"
}
|
{
"value": "colmweb.org/COLM/2025/Conference"
}
|
{
"value": "/pdf/8083d74723508dd138fb4da403569d14619daf33.pdf"
}
|
conference
|
colmweb.org/COLM/2025/Conference
| 2,025
|
COLM
|
[
"Information Seeking",
"Question Asking",
"Reliable LLM",
"Clinical Reasoning",
"Structured Rewards"
] |
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
|
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
| null | null |
@inproceedings{
li2025alfa,
title={{ALFA}: Aligning {LLM}s to Ask Good Questions A Case Study in Clinical Reasoning},
author={Shuyue Stella Li and Jimin Mun and Faeze Brahman and Pedram Hosseini and Bryceton G. Thomas and Jessica M. Sin and Bing Ren and Jonathan S. Ilgen and Yulia Tsvetkov and Maarten Sap},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=12u7diwku0}
}
|
li|alfa_aligning_llms_to_ask_good_questions_a_case_study_in_clinical_reasoning
| null | null | null | null | null |
|
Off-Policy Corrected Reward Modeling for Reinforcement Learning from Human Feedback
|
We propose to use importance weighting to iteratively retrain an off-policy corrected reward model, resulting in a significantly better final policy.
|
Reinforcement Learning from Human Feedback (RLHF) allows us to train models, such as language models (LMs), to follow complex human preferences. In RLHF for LMs, we first train an LM using supervised fine-tuning, sample pairs of responses, obtain human feedback, and use the resulting data to train a reward model (RM). RL methods are then used to train the LM to maximize the reward given by the RM. As training progresses, the responses generated by the LM no longer resemble the responses seen by the RM during training, leading to the RM becoming inaccurate. The score given by the RM keeps increasing, but the learned behavior no longer matches the human preferences. This issue is known as overoptimization. We investigate overoptimization from the point of view of distribution shift and show that the shift results in an inconsistent estimate of the RM parameters, leading to an inconsistent estimate of the policy gradient. We propose Off-Policy Corrected Reward Modeling (OCRM), which iteratively off-policy corrects the RM using importance weighting, without requiring new labels or samples. This results in a more accurate RM, which empirically leads to an improved final policy. We validate our approach in experiments with summarization and chatbot datasets and show that it performs significantly better than standard RLHF methods and baselines.
|
[
"Johannes Ackermann",
"Takashi Ishida",
"Masashi Sugiyama"
] |
https://openreview.net/forum?id=0zxugBcgF5
|
0zxugBcgF5
|
0zxugBcgF5
|
[
"~Johannes_Ackermann1",
"~Takashi_Ishida1",
"~Masashi_Sugiyama1"
] |
{
"value": "COLM 2025"
}
|
{
"value": "colmweb.org/COLM/2025/Conference"
}
|
{
"value": "/pdf/f388799f7d6563f4c4688c9d4afd2a33cba72370.pdf"
}
|
conference
|
colmweb.org/COLM/2025/Conference
| 2,025
|
COLM
|
[
"alignment",
"reinforcement learning from human feedback",
"reinforcement learning",
"reward modeling"
] |
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
|
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
| null | null |
@inproceedings{
ackermann2025offpolicy,
title={Off-Policy Corrected Reward Modeling for Reinforcement Learning from Human Feedback},
author={Johannes Ackermann and Takashi Ishida and Masashi Sugiyama},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=0zxugBcgF5}
}
|
ackermann|offpolicy_corrected_reward_modeling_for_reinforcement_learning_from_human_feedback
| null | null | null | null | null |
|
MAC: A Live Benchmark for Multimodal Large Language Models in Scientific Understanding
|
A Live Benchmark for Multimodal Large Language Models in Scientific Understanding
|
As multimodal large language models (MLLMs) grow increasingly capable, fixed benchmarks are gradually losing their effectiveness in evaluating high-level scientific understanding. In this paper, we introduce the Multimodal Academic Cover benchmark (MAC), a live benchmark that could continuously evolve with scientific advancement and model progress. MAC leverages over 25,000 image-text pairs sourced from issues of top-tier scientific journals such as Nature, Science, and Cell, challenging MLLMs to reason across abstract visual and textual scientific content. Experiments on our most recent yearly snapshot, MAC-2025, reveal that while MLLMs demonstrate strong perceptual abilities, their cross-modal scientific reasoning remains limited. To bridge this gap, we propose DAD, a lightweight inference-time approach that enhances MLLMs by extending MLLM visual features with language space reasoning, achieving performance improvements of up to 11%. Finally, we highlight the live nature of MAC through experiments on updating journal covers and models for curation, illustrating its potential to remain aligned with the frontier of human knowledge. We release our benchmark at https://github.com/mhjiang0408/MAC_Bench.
|
[
"Mohan Jiang",
"Jin Gao",
"Jiahao Zhan",
"Dequan Wang"
] |
https://openreview.net/forum?id=0aHOVhkuOB
|
0aHOVhkuOB
|
0aHOVhkuOB
|
[
"~Mohan_Jiang1",
"~Jin_Gao3",
"~Jiahao_Zhan1",
"~Dequan_Wang1"
] |
{
"value": "COLM 2025"
}
|
{
"value": "colmweb.org/COLM/2025/Conference"
}
|
{
"value": "/pdf/a1789c20b738bb8892ffc7f3489a13f8d78352e4.pdf"
}
|
conference
|
colmweb.org/COLM/2025/Conference
| 2,025
|
COLM
|
[
"MLLM",
"Benchmark",
"Scientific Understanding Capabilities"
] |
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
|
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
| null | null |
@inproceedings{
jiang2025mac,
title={{MAC}: A Live Benchmark for Multimodal Large Language Models in Scientific Understanding},
author={Mohan Jiang and Jin Gao and Jiahao Zhan and Dequan Wang},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=0aHOVhkuOB}
}
|
jiang|mac_a_live_benchmark_for_multimodal_large_language_models_in_scientific_understanding
|
/attachment/9e5107c434a576e2bea79b55e38158745b152410.zip
| null | null | null | null |
|
Impact-driven Context Filtering For Cross-file Code Completion
|
We introduce an adaptive retrieval-augmented framework for repository-level code completion, which automatically filters irrelevant context to enhance completion accuracy and efficiency.
|
Retrieval-augmented generation (RAG) has recently demonstrated considerable potential for repository-level code completion, as it integrates cross-file knowledge with in-file preceding code to provide comprehensive contexts for generation. To better understand the contribution of the retrieved cross-file contexts, we introduce a likelihood-based metric to evaluate the impact of each retrieved code chunk on the completion. Our analysis reveals that, despite retrieving numerous chunks, only a small subset positively contributes to the completion, while some chunks even degrade performance. To address this issue, we leverage this metric to construct a repository-level dataset where each retrieved chunk is labeled as positive, neutral, or negative based on its relevance to the target completion. We then propose an adaptive retrieval context filtering framework, CODEFILTER, trained on this dataset to mitigate the harmful effects of negative retrieved contexts in code completion. Extensive evaluation on the RepoEval and CrossCodeLongEval benchmarks demonstrates that CODEFILTER consistently improves completion accuracy compared to approaches without filtering operations across various tasks. Additionally, CODEFILTER significantly reduces the length of the input prompt, enhancing computational efficiency while exhibiting strong generalizability across different models. These results underscore the potential of CODEFILTER to enhance the accuracy, efficiency, and attributability of repository-level code completion.
|
[
"Yanzhou Li",
"Shangqing Liu",
"Kangjie Chen",
"Tianwei Zhang",
"Yang Liu"
] |
https://openreview.net/forum?id=0Y2zXLFBji
|
0Y2zXLFBji
|
0Y2zXLFBji
|
[
"~Yanzhou_Li1",
"~Shangqing_Liu1",
"~Kangjie_Chen1",
"~Tianwei_Zhang1",
"~Yang_Liu36"
] |
{
"value": "COLM 2025"
}
|
{
"value": "colmweb.org/COLM/2025/Conference"
}
|
{
"value": "/pdf/9d676b553f9f39c1dd7d5e1a823ecbbc1e0d6f51.pdf"
}
|
conference
|
colmweb.org/COLM/2025/Conference
| 2,025
|
COLM
|
[
"Code Completion; Adaptive Retreivla-augmented Generation; Large Language Model"
] |
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
|
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
| null | null |
@inproceedings{
li2025impactdriven,
title={Impact-driven Context Filtering For Cross-file Code Completion},
author={Yanzhou Li and Shangqing Liu and Kangjie Chen and Tianwei Zhang and Yang Liu},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=0Y2zXLFBji}
}
|
li|impactdriven_context_filtering_for_crossfile_code_completion
| null | null | null | null | null |
|
Learning Effective Language Representations for Sequential Recommendation via Joint Embedding Predictive Architecture
|
JEPA4Rec improves sequential recommendation by combining joint embedding predictive architecture with language modeling, achieving better item representations and outperforming existing methods, especially in low-resource and cross-domain scenarios.
|
Language representation learning has emerged as a promising approach for sequential recommendation, thanks to its ability to learn generalizable representations. However, despite its advantages, this approach still struggles with data sparsity and a limited understanding of common-sense user preferences. To address these limitations, we propose $\textbf{JEPA4Rec}$, a framework that combines $\textbf{J}$oint $\textbf{E}$mbedding $\textbf{P}$redictive $\textbf{A}$rchitecture with language modeling of item textual descriptions. JEPA4Rec captures semantically rich and transferable representations, improving recommendation performance and reducing reliance on large-scale pre-training data. Specifically, JEPA4Rec represents items as text sentences by flattening descriptive information such as $\textit{title, category}$, and other attributes. To encode these sentences, we employ a bidirectional Transformer encoder with modified embedding layers tailored for capturing item information in recommendation datasets. We apply masking to text sentences and use them to predict the representations of the unmasked sentences, helping the model learn generalizable item embeddings. To further improve recommendation performance and language understanding, we employ a two-stage training strategy incorporating self-supervised learning losses. Experiments on six real-world datasets demonstrate that JEPA4Rec consistently outperforms state-of-the-art methods, particularly in cross-domain, cross-platform, and low-resource scenarios.
|
[
"Nguyen Anh Minh",
"Dung D. Le"
] |
https://openreview.net/forum?id=0Qbwjd0fxB
|
0Qbwjd0fxB
|
0Qbwjd0fxB
|
[
"~Nguyen_Anh_Minh1",
"~Dung_D._Le2"
] |
{
"value": "COLM 2025"
}
|
{
"value": "colmweb.org/COLM/2025/Conference"
}
|
{
"value": "/pdf/15165329da5a76904efdb6e90d529cb9b849bc5f.pdf"
}
|
conference
|
colmweb.org/COLM/2025/Conference
| 2,025
|
COLM
|
[
"language models",
"joint embedding predictive architecture",
"sequential recommendation"
] |
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
|
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
| null | null |
@inproceedings{
minh2025learning,
title={Learning Effective Language Representations for Sequential Recommendation via Joint Embedding Predictive Architecture},
author={Nguyen Anh Minh and Dung D. Le},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=0Qbwjd0fxB}
}
|
minh|learning_effective_language_representations_for_sequential_recommendation_via_joint_embedding_predictive_architecture
|
/attachment/39c271ff9b08d923e1db47f3b6618f8d65046970.zip
| null | null | null | null |
|
BEARCUBS: A benchmark for computer-using web agents
|
We introduce BEARCUBS, a benchmark of 111 information-seeking questions designed to evaluate a web agent’s ability to search, browse, and identify factual information from the web.
|
Modern web agents possess computer use abilities that allow them to interact with webpages by sending commands to a virtual keyboard and mouse. While such agents have considerable potential to assist human users with complex tasks, evaluating their capabilities in real-world settings poses a major challenge. To this end, we introduce BEARCUBS, a “small but mighty” benchmark of 111 information-seeking questions designed to evaluate a web agent’s ability to search, browse, and identify factual information from the web. Unlike prior web agent benchmarks, solving BEARCUBS requires (1) accessing live web content rather than synthetic or simulated pages, which captures the unpredictability of real-world web interactions; and (2) performing a broad range of multimodal interactions (e.g., video understanding, 3D navigation) that cannot be bypassed via text-based workarounds. Each question in BEARCUBS has a corresponding short, unambiguous answer and a human-validated browsing trajectory, allowing for transparent evaluation of agent performance and strategies. A human study confirms that BEARCUBS questions are solvable but non-trivial (84.7% human accuracy), revealing domain knowledge gaps and overlooked details as common failure points. We find that ChatGPT Agent significantly outperforms other computer-using agents with an overall accuracy of 65.8% (compared to e.g., Operator’s 23.4%), showcasing substantial progress in tasks involving real computer use, such as playing web games and navigating 3D environments. Nevertheless, closing the gap to human performance requires improvements in areas like fine control, complex data filtering, and execution speed. To facilitate future research, BEARCUBS will be updated periodically to replace invalid or contaminated questions, keeping the benchmark fresh for future generations of web agents.
|
[
"Yixiao Song",
"Katherine Thai",
"Chau Minh Pham",
"Yapei Chang",
"Mazin Nadaf",
"Mohit Iyyer"
] |
https://openreview.net/forum?id=0JzWiigkUy
|
0JzWiigkUy
|
0JzWiigkUy
|
[
"~Yixiao_Song1",
"~Katherine_Thai1",
"~Chau_Minh_Pham1",
"~Yapei_Chang1",
"~Mazin_Nadaf1",
"~Mohit_Iyyer1"
] |
{
"value": "COLM 2025"
}
|
{
"value": "colmweb.org/COLM/2025/Conference"
}
|
{
"value": "/pdf/0c8e8f33881d200624dd89fc08cb7bc3ef3e7f55.pdf"
}
|
conference
|
colmweb.org/COLM/2025/Conference
| 2,025
|
COLM
|
[
"computer-use agent",
"benchmark",
"multimodal"
] |
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
|
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
| null | null |
@inproceedings{
song2025bearcubs,
title={{BEARCUBS}: A benchmark for computer-using web agents},
author={Yixiao Song and Katherine Thai and Chau Minh Pham and Yapei Chang and Mazin Nadaf and Mohit Iyyer},
booktitle={Second Conference on Language Modeling},
year={2025},
url={https://openreview.net/forum?id=0JzWiigkUy}
}
|
song|bearcubs_a_benchmark_for_computerusing_web_agents
|
/attachment/a29bd3e2638735810acd3292f411a4a586ca58c3.zip
| null | null | null | null |
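The rows above all follow the column schema listed at the top of the table. As a rough illustration of how a dataset with this schema might be consumed programmatically, the sketch below uses the Hugging Face `datasets` library to load it and filter papers by keyword. The repository id `example-org/colm-2025-papers` is a placeholder assumption, not the actual dataset name, and the column handling assumes the schema shown above.

```python
# Minimal sketch, assuming the dataset is hosted on the Hugging Face Hub under a
# hypothetical id "example-org/colm-2025-papers"; substitute the real repo id.
from datasets import load_dataset

ds = load_dataset("example-org/colm-2025-papers", split="train")

# Each row is a dict keyed by the columns in the schema above,
# e.g. "title", "content_TLDR", "abstract", "authors", "content_keywords".
def mentions_keyword(row, keyword="benchmark"):
    keywords = row.get("content_keywords") or []
    return any(keyword.lower() in k.lower() for k in keywords)

benchmark_papers = [row for row in ds if mentions_keyword(row)]
for row in benchmark_papers[:5]:
    print(row["title"], "->", row["openreview_url"])
```

Nested dict columns such as `venue` appear to store their payload under a `value` key (e.g., `row["venue"]["value"]` yielding "COLM 2025"), while unset fields like `content_ethics_comments` show up as null.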