Files changed (1): README.md (+6 −3)

README.md CHANGED
@@ -1,3 +1,6 @@
+
+
+
 ---
 license: mit
 pipeline_tag: text-generation
@@ -43,14 +46,14 @@ In the **AIME 25** benchmark, Ling-1T extends the **Pareto frontier** of reasoni
 
 Ling-1T excels in visual reasoning and front-end code generation tasks, combining deep semantic understanding with precise code synthesis.
 We introduce a hybrid *Syntax–Function–Aesthetics* reward mechanism, enabling the model to not only generate correct and functional code but also demonstrate a refined sense of **visual aesthetics**.
-On **ArtifactsBench**, Ling-1T ranks **first among open-source models**, and the benchmark visualizations in this card were, in fact, *generated by Ling-1T itself*.
+On **ArtifactsBench**, [Ling-1T](https://zenmux.ai/inclusionai/ling-1t?utm_source=hf_inclusionAI) ranks **first among open-source models**, and the benchmark visualizations in this card were, in fact, *generated by Ling-1T itself*.
 
 
 ### Emergent Intelligence at Trillion-Scale
 
 Scaling to the trillion-parameter level has revealed strong **emergent reasoning and transfer capabilities**.
 For example, in the **BFCL V3** tool-use benchmark, Ling-1T achieves **≈ 70% tool-call accuracy** with only light instruction tuning—despite having seen no large-scale trajectory data during training.
-Ling-1T can:
+[Ling-1T](https://zenmux.ai/inclusionai/ling-1t?utm_source=hf_inclusionAI) can:
 
 * Interpret complex natural-language instructions
 * Transform abstract logic into functional visual components
@@ -327,7 +330,7 @@ More usage can be found [here](https://docs.sglang.ai/basic_usage/send_request.h
 
 ## Limitations & Future Plans
 
-While **Ling-1T** has made strong progress in efficient reasoning, cross-domain generalization, and training efficiency, several limitations remain:
+While **[Ling-1T](https://zenmux.ai/inclusionai/ling-1t?utm_source=hf_inclusionAI)** has made strong progress in efficient reasoning, cross-domain generalization, and training efficiency, several limitations remain:
 
 * **GQA-based attention**: stable for long-context reasoning but relatively costly. Future versions will adopt **hybrid attention** to improve efficiency.
 * **Limited agentic ability**: current model has room to grow in multi-turn interaction, long-term memory, and tool use.
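
The unchanged context in the last hunk points readers to the SGLang send-request docs. As a minimal sketch of what that usage looks like, the snippet below assembles an OpenAI-style chat-completion payload and posts it to a locally served model. The host, port, and model name are assumptions about a typical local deployment, not values taken from this card; adjust them to your own server.

```python
# Hypothetical sketch: querying a Ling-1T model served by SGLang through its
# OpenAI-compatible HTTP endpoint. Base URL and model name are assumptions.
import json
import urllib.request


def build_chat_request(prompt: str, model: str = "inclusionAI/Ling-1T") -> dict:
    """Assemble an OpenAI-style chat-completion payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
    }


def send_chat_request(payload: dict, base_url: str = "http://localhost:30000") -> dict:
    """POST the payload to the serving endpoint and return the parsed JSON response."""
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())


if __name__ == "__main__":
    payload = build_chat_request("Explain tool-call accuracy in one sentence.")
    print(json.dumps(payload, indent=2))
    # With a server running locally, the call would be:
    # reply = send_chat_request(payload)
```

Only `build_chat_request` runs without a server; `send_chat_request` requires a live endpoint, so it is left behind the usage comment.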