Update README.md
README.md CHANGED
@@ -15,7 +15,7 @@ library_name: transformers
 <p align="center">🤗 <a href="https://huggingface.co/inclusionAI">Hugging Face</a>   |   🤖 <a href="https://modelscope.cn/organization/inclusionAI">ModelScope</a></p>
 
 
-## Ring-1T-preview
+## Ring-1T-preview, Deep Thinking, No Waiting
 
 Recently, we have been fully occupied with the post-training of Ling 2.0's __1T foundational language model__, striving to maximize the __natural language reasoning__ potential of this trillion-scale base model. Conducting post-training on such a huge model, particularly the "training" involved in large-scale reinforcement learning, stands as one of the most technically challenging tasks the Ling Team has encountered since its establishment. On the other hand, it has also been a process that continuously reshapes our technical understanding and reinforces the belief that "scaling is all you need."
 