---
language:
- en
license: apache-2.0
pipeline_tag: text-generation
tags:
- reasoning
- looped transformer
arxiv: 2511.08577
library_name: transformers
datasets:
- open-r1/Mixture-of-Thoughts
base_model:
- Qwen/Qwen3-1.7B-Base
---

This is the general version of TaH-plus-1.7B, trained on a mixture of math, code, and science data, presented in the paper [Think-at-Hard: Selective Latent Iterations to Improve Reasoning Language Models](https://huggingface.co/papers/2511.08577).

Think-at-Hard (TaH) uses a neural decider to dynamically trigger latent iterations only where they are needed. Compared with baselines that iterate twice for all output tokens, TaH delivers 8.1-11.3% accuracy gains while exempting 94% of tokens from the second iteration. Against strong single-iteration Qwen3 models finetuned on the same data, it also delivers 4.0-5.0% accuracy gains. When allowing fewer than 3% additional parameters for LoRA and the iteration decider, the gains increase to 8.5-12.6% and 5.3-5.4%, respectively.
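Conceptually, decoding proceeds one token at a time: the decider inspects the first-pass latent state and, only for tokens it flags as hard, the state is refined by a second pass through the backbone before the token is emitted. The sketch below illustrates this loop. It is a minimal conceptual sketch, not the released implementation; names such as `backbone`, `decider`, `lm_head`, and `HARD_THRESHOLD` are illustrative assumptions.

```python
import torch

# Conceptual sketch of TaH's selective latent iteration at decode time.
# `model.backbone`, `model.lm_head`, `decider`, and HARD_THRESHOLD are
# illustrative assumptions, not the released implementation.
HARD_THRESHOLD = 0.5  # assumed decision boundary for "hard" tokens

@torch.no_grad()
def decode_step(model, decider, hidden):
    """One decoding step with an optional second latent iteration."""
    # Iteration 1: standard forward pass for the current token.
    h = model.backbone(hidden)

    # The neural decider flags "hard" tokens from the iteration-1 state;
    # per the paper, ~94% of tokens skip the second iteration.
    if torch.sigmoid(decider(h)) > HARD_THRESHOLD:
        # Iteration 2: re-feed the latent state through the backbone.
        # In the paper, this pass applies LoRA deltas to the shared weights.
        h = model.backbone(h)

    return model.lm_head(h)
```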

Please visit our [GitHub repo](https://github.com/thu-nics/TaH) for more information.


### Sample Usage

Please see [Github Example](https://github.com/thu-nics/TaH?tab=readme-ov-file#run-an-example-for-tah) for sample usage.
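Since the model card lists `library_name: transformers`, a loading sketch along the following lines should work. The repo ID `thu-nics/TaH-plus-1.7B` and the need for `trust_remote_code=True` are assumptions (the looped-transformer architecture with an iteration decider likely ships custom modeling code); defer to the GitHub example above for authoritative usage.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repo ID is an assumption based on the model name; check the model page.
model_id = "thu-nics/TaH-plus-1.7B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# trust_remote_code=True is assumed, since the architecture likely
# requires custom modeling code for the latent-iteration decider.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",
    device_map="auto",
    trust_remote_code=True,
)

prompt = "Solve: what is 17 * 23?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```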