---
license: apache-2.0
datasets:
- Floppanacci/QWQ-LongCOT-AIMO
- qingy2024/QwQ-LongCoT-Verified-130K
- PowerInfer/QWQ-LONGCOT-500K
- Magpie-Align/Magpie-Reasoning-V2-250K-CoT-QwQ
language:
- en
base_model:
- Qwen/Qwen3-4B
pipeline_tag: text-generation
library_name: transformers
tags:
- text-generation-inference
- code
- math
- moe
---

![DFG.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/G3sOT3hywqnBeW7EeaRNx.png)

# Hatshepsut-Qwen3\_QWQ-LCoT-4B

> **Hatshepsut-Qwen3\_QWQ-LCoT-4B** is a fine-tuned variant of the **Qwen3-4B** architecture, trained on **QWQ synthetic datasets** with support for **Least-to-Complexity-of-Thought (LCoT)** prompting. The model is optimized for **precise mathematical reasoning**, **logic-driven multi-step solutions**, and **structured technical outputs**, while remaining compute-efficient and instruction-aligned.

> [!note]
> GGUF: https://huggingface.co/prithivMLmods/Hatshepsut-Qwen3_QWQ-LCoT-4B-Q4_K_M-GGUF
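
If you prefer the quantized GGUF build linked above, the sketch below shows one way to run it with `llama-cpp-python`. The `filename` pattern and generation settings are assumptions; check the GGUF repository for the exact file name.

```python
# Minimal sketch for the GGUF build, assuming llama-cpp-python is installed
# (pip install llama-cpp-python huggingface_hub). The filename glob is an
# assumption; adjust it to the actual quant file in the repository.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="prithivMLmods/Hatshepsut-Qwen3_QWQ-LCoT-4B-Q4_K_M-GGUF",
    filename="*q4_k_m.gguf",  # assumed pattern
    n_ctx=4096,               # session context length
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a step-by-step reasoning assistant."},
        {"role": "user", "content": "Solve using LCoT: If 3x - 7 = 2(x + 1), what is the value of x?"},
    ],
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])
```
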
## Key Features

1. **LCoT Prompting Mastery**
   Specifically tuned for Least-to-Complexity-of-Thought prompting, encouraging granular reasoning that builds from simple to complex steps in problem solving.

2. **QWQ-Based Precision Reasoning**
   Built on the QWQ synthetic datasets, ensuring high-fidelity outputs in symbolic logic, algebraic manipulation, and mathematical word problems.

3. **Code Understanding & Logic Generation**
   Interprets and writes concise, logically sound code snippets in Python, C++, and JavaScript, with a special focus on algorithmic steps and edge-case handling.

4. **Structured Output Control**
   Produces responses in JSON, Markdown, LaTeX, and table formats, ideal for educational material, notebooks, and structured reasoning chains.

5. **Multilingual Reasoning**
   Supports over 20 languages, enabling STEM-focused problem solving and translation tasks across global languages.

6. **Efficient 4B Parameter Footprint**
   Lightweight yet powerful, suitable for researchers, educators, and developers running on mid-tier GPUs (e.g., A10, 3090, or L4).

## Quickstart with Transformers

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/Hatshepsut-Qwen3_QWQ-LCoT-4B"

# Load the model and tokenizer
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Solve using LCoT: If 3x - 7 = 2(x + 1), what is the value of x?"

messages = [
    {"role": "system", "content": "You are a step-by-step reasoning assistant trained on QWQ datasets with LCoT support."},
    {"role": "user", "content": prompt}
]

# Apply the chat template and tokenize
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Generate, then strip the prompt tokens from the output
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```

## Intended Use

* LCoT-style multi-step problem solving
* Algebra, geometry, and logic question answering
* Code generation with algorithmic transparency
* Educational tools for math and programming
* Structured technical output in Markdown/LaTeX
* Multilingual STEM tutoring and reasoning

## Limitations

* May be sensitive to poorly formatted prompts
* Less creative for open-domain or fictional tasks
* Smaller context window than 14B+ variants
* Early-stage reasoning errors may propagate if the task is not prompted clearly

## References

1. QWQ Synthetic Dataset – specialized reasoning corpus (experimental)
2. [LIMO: Less is More for Reasoning](https://arxiv.org/pdf/2502.03387)
3. [AIMO-2 Math Benchmark – OpenMathReasoning](https://arxiv.org/pdf/2504.16891)
4. [YaRN: Efficient Context Window Extension of Large Language Models](https://arxiv.org/pdf/2309.00071)
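
## Structured Output Example

The Key Features above mention structured Markdown/LaTeX output. The sketch below reuses the `model` and `tokenizer` loaded in the Quickstart and asks for a Markdown table; the system-prompt wording is illustrative, not a fixed template shipped with the model.

```python
# Sketch: requesting structured (Markdown) output with an LCoT-style prompt.
# Reuses `model` and `tokenizer` from the Quickstart above; the system prompt
# wording is an assumption, not a required template.
messages = [
    {"role": "system", "content": "Reason step by step, then present the final answer as a Markdown table."},
    {"role": "user", "content": "Compare the average-case time complexity of bubble sort, merge sort, and quicksort."},
]

text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer([text], return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=512)

# Drop the prompt tokens before decoding, as in the Quickstart
answer = tokenizer.decode(output_ids[0][inputs.input_ids.shape[1]:], skip_special_tokens=True)
print(answer)
```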