---
language:
- en
license: apache-2.0
pipeline_tag: text-generation
tags:
- reasoning
- looped transformer
arxiv: 2511.08577
library_name: transformers
datasets:
- open-r1/Mixture-of-Thoughts
base_model:
- Qwen/Qwen3-1.7B-Base
---
This is the general version of TaH-plus-1.7B, trained on a mixture of math, code, and science data, as presented in the paper [Think-at-Hard: Selective Latent Iterations to Improve Reasoning Language Models](https://arxiv.org/abs/2511.08577).
Think-at-Hard (TaH) uses a neural decider to dynamically trigger latent iterations only where they are needed. Compared with baselines that iterate twice for all output tokens, TaH delivers 8.1-11.3% accuracy gains while exempting 94% of tokens from the second iteration. Against strong single-iteration Qwen3 models fine-tuned on the same data, it also delivers 4.0-5.0% accuracy gains. When allowing less than 3% additional parameters from LoRA and the iteration decider, the gains increase to 8.5-12.6% and 5.3-5.4%, respectively. A rough conceptual sketch of the mechanism follows.
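The sketch below is only an illustration of the selective-iteration idea, not the exact TaH architecture; the `IterationDecider`, the `latent_step` callable, and the 0.5 threshold are hypothetical placeholders (see the paper and repo for the real design):

```python
import torch
import torch.nn as nn

class IterationDecider(nn.Module):
    """Hypothetical per-token decider: scores whether a token is 'hard'
    and should receive a second latent iteration."""
    def __init__(self, hidden_size: int):
        super().__init__()
        self.scorer = nn.Linear(hidden_size, 1)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # hidden_states: (batch, seq_len, hidden_size) -> (batch, seq_len)
        return torch.sigmoid(self.scorer(hidden_states)).squeeze(-1)

def selective_second_pass(hidden_states, decider, latent_step, threshold=0.5):
    """Run a second latent iteration only on tokens the decider flags as hard.
    `latent_step` stands in for re-running the shared transformer layers."""
    hard_mask = decider(hidden_states) > threshold  # (batch, seq_len)
    refined = latent_step(hidden_states)            # second latent iteration
    # Keep the first-pass states for easy tokens (the ~94% that are exempted).
    return torch.where(hard_mask.unsqueeze(-1), refined, hidden_states)
```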
Please visit our GitHub repo for more information.
## Sample Usage
Please see the GitHub example for sample usage; a minimal sketch with 🤗 Transformers follows.
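This is a minimal sketch only: the repository id `your-org/TaH-plus-1.7B` is a placeholder for the actual Hub path, and `trust_remote_code=True` is assumed because TaH adds custom iteration logic on top of the Qwen3 base; defer to the GitHub example for the exact loading path.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id; replace with the actual Hub path of this model.
model_id = "your-org/TaH-plus-1.7B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# trust_remote_code=True is an assumption: the looped-iteration decider
# likely requires custom modeling code shipped with the checkpoint.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",
    device_map="auto",
    trust_remote_code=True,
)

prompt = "Solve: If 3x + 5 = 20, what is x?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)

# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:],
                       skip_special_tokens=True))
```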