Data for the "Leaky Thoughts: Large Reasoning Models Are Not Private Thinkers" paper
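The data can be pulled directly from the Hub with the datasets library. Below is a minimal sketch, assuming the data is published as a dataset repository under the parameterlab namespace; the repository id is hypothetical, so check the organization page for the exact name and configurations.

```python
# Minimal sketch for loading the "Leaky Thoughts" data from the Hugging Face Hub.
# The repository id below is hypothetical; see https://huggingface.co/parameterlab
# for the exact dataset name and available configurations.
from datasets import load_dataset

leaky_thoughts = load_dataset("parameterlab/leaky_thoughts")  # hypothetical repo id
print(leaky_thoughts)
```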
AI & ML interests
LLM, trustworthy AI, AI security, privacy, calibration, hallucination
Papers
- Is Multilingual LLM Watermarking Truly Multilingual? A Simple Back-Translation Solution
- Dr.LLM: Dynamic Layer Routing in LLMs
- Scaling Up Membership Inference: When and How Attacks Succeed on Large Language Models (NAACL 2025 Findings, https://arxiv.org/abs/2411.00154)
List of research articles by Parameter Lab
- Dr.LLM: Dynamic Layer Routing in LLMs (Paper • 2510.12773 • Published • 31)
- Is Multilingual LLM Watermarking Truly Multilingual? A Simple Back-Translation Solution (Paper • 2510.18019 • Published • 16)
- Leaky Thoughts: Large Reasoning Models Are Not Private Thinkers (Paper • 2506.15674 • Published • 2)
- C-SEO Bench: Does Conversational SEO Work? (Paper • 2506.11097 • Published • 2)
Fine-tuned models for black-box LLM calibration, trained for "Apricot: Calibrating Large Language Models Using Their Generations Only" (ACL 2024); a loading sketch follows the list.
- parameterlab/apricot_binary_trivia_qa_deberta-v3-base_for_vicuna-7b-v1.5 (Text Classification • 0.2B • Updated • 1 • 1)
- parameterlab/apricot_clustering_trivia_qa_deberta-v3-base_for_vicuna-7b-v1.5 (Text Classification • 0.2B • Updated • 2)
- parameterlab/apricot_binary_coqa_deberta-v3-base_for_vicuna-7b-v1.5 (Text Classification • 0.2B • Updated • 3)
- parameterlab/apricot_clustering_coqa_deberta-v3-base_for_vicuna-7b-v1.5 (Text Classification • 0.2B • Updated • 2)
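Each checkpoint is a standard DeBERTa-v3-base sequence classifier, so it can be loaded with the transformers text-classification pipeline. The sketch below uses the binary TriviaQA variant; the example input string is an assumption, since the exact preprocessing (how the question and the Vicuna-7B answer are combined) is defined by the Apricot paper and the model cards.

```python
# Minimal sketch: score a question/answer pair with one of the Apricot
# calibration models listed above. The checkpoint is a DeBERTa-v3-base
# sequence classifier, so the standard text-classification pipeline applies.
from transformers import pipeline

confidence_model = pipeline(
    "text-classification",
    model="parameterlab/apricot_binary_trivia_qa_deberta-v3-base_for_vicuna-7b-v1.5",
)

# Hypothetical input format: question followed by the target LLM's answer.
# Consult the model card for the exact template used during training.
example = "Who wrote the novel Nineteen Eighty-Four? Answer: George Orwell"
print(confidence_model(example))
```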