When Models Lie, We Learn: Multilingual Span-Level Hallucination Detection with PsiloQA Paper • 2510.04849 • Published 26 days ago • 109
RAGTruth: A Hallucination Corpus for Developing Trustworthy Retrieval-Augmented Language Models Paper • 2401.00396 • Published Dec 31, 2023 • 4
Turk-LettuceDetect: A Hallucination Detection Models for Turkish RAG Applications Paper • 2509.17671 • Published Sep 22 • 9
LettuceDetect: A Hallucination Detection Framework for RAG Applications Paper • 2502.17125 • Published Feb 24 • 12
TinyLettuce Collection This collection contains our small Ettin-encoder-based (https://arxiv.org/abs/2507.11412) models trained on synthetic and RAGTruth data. • 6 items • Updated Aug 31 • 3
Article LettuceDetect: A Hallucination Detection Framework for RAG Applications By adaamko • Feb 28 • 10
<think> So let's replace this phrase with insult... </think> Lessons learned from generation of toxic texts with LLMs Paper • 2509.08358 • Published Sep 10 • 13
When Punctuation Matters: A Large-Scale Comparison of Prompt Robustness Methods for LLMs Paper • 2508.11383 • Published Aug 15 • 40
T-LoRA: Single Image Diffusion Model Customization Without Overfitting Paper • 2507.05964 • Published Jul 8 • 118
Geopolitical biases in LLMs: what are the "good" and the "bad" countries according to contemporary language models Paper • 2506.06751 • Published Jun 7 • 71
Will It Still Be True Tomorrow? Multilingual Evergreen Question Classification to Improve Trustworthy QA Paper • 2505.21115 • Published May 27 • 139