Update README.md
README.md

tags:
- Qwen
- Deepseek
---
# **Elita-0.1-Distilled-R1-Abliterated**
Elita-0.1-Distilled-R1-Abliterated is based on the *Qwen [ KT ] model* as distilled in *DeepSeek-AI/DeepSeek-R1-Distill-Qwen-7B*. It has been fine-tuned on long chain-of-thought (CoT) reasoning data and specialized datasets, with a focus on CoT reasoning for problem-solving. The model is optimized for tasks that require logical reasoning, detailed explanations, and multi-step problem-solving, making it well suited for instruction-following, text generation, and complex reasoning applications.
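Below is a minimal usage sketch, assuming the model is published as a standard `transformers` causal language model with a chat template. The repository id, dtype, and generation settings are placeholders/assumptions, not values confirmed by this card.

```python
# Minimal inference sketch for a chat-style CoT model.
# MODEL_ID is a placeholder: replace it with the actual hosted repository id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "Elita-0.1-Distilled-R1-Abliterated"  # placeholder repo id (assumption)

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,  # assumption: half-precision weights fit on the target device
    device_map="auto",
)

# Chain-of-thought style prompt: ask for step-by-step reasoning.
messages = [
    {"role": "user", "content": "Solve step by step: if 3x + 7 = 22, what is x?"}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512, do_sample=True, temperature=0.6)
# Strip the prompt tokens and print only the newly generated reasoning/answer.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```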