---
license: mit
base_model:
- inclusionAI/Ling-1T-base-2.0
pipeline_tag: text-generation
library_name: transformers
---

🤗 Hugging Face   |   🤖 ModelScope

## Ring-1T-preview: Deep Thinking, No Waiting

Recently, we have been fully occupied with the post-training of Ling 2.0's __1T foundational language model__, striving to maximize the __natural language reasoning__ potential of this trillion-scale base model. Post-training a model of this size, particularly the "training" half of large-scale reinforcement learning, is one of the most technically challenging tasks the Ling Team has taken on since its founding. It has also been a process that continuously reshapes our technical understanding and reinforces our belief that "scaling is all you need."

In the early stages of large-scale reinforcement learning training, __Ring-1T__, the thinking-model version of the 1T foundational language model, has already demonstrated __powerful natural language reasoning capabilities__. On __AIME 2025__ (American Invitational Mathematics Examination), the model scored 92.6 through pure natural language reasoning, approaching the 94.6 achieved by GPT-5 with thinking (no tools). The model is also strongly competitive on the __Harvard-MIT Mathematics Tournament (HMMT) 2025__, on __competition-level code generation tasks__ such as LiveCodeBench v6 and CodeForces, and on the __abstract reasoning benchmark__ ARC-AGI-1.

To let the community explore the reasoning capabilities of the trillion-parameter thinking model Ring-1T early, we are open-sourcing its preview version, __Ring-1T-preview__, ahead of schedule. The model retains the efficient MoE architecture of Ling 2.0, was pre-trained on 20T tokens of corpora, and underwent RLVR training targeted at reasoning abilities within our self-developed, efficient reinforcement learning system __ASystem__ (whose AReal framework has been open-sourced), using the previously disclosed [icepop](https://ringtech.notion.site/icepop) method.

__Ring-1T__ remains under continuous training. While the preview version already demonstrates powerful natural language reasoning capabilities, it still exhibits issues such as language mixing and repetitive reasoning. We look forward to community exploration and feedback to help accelerate the iterative refinement of this trillion-parameter foundation model.
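For readers curious about the icepop method mentioned above, the authoritative description is the linked post. As a rough, non-authoritative illustration of the general idea (masking gradient contributions from tokens whose training-engine and inference-engine probabilities have drifted apart), here is a minimal sketch. The function name, the band thresholds, and the calling convention are our assumptions for illustration, not the official implementation.

```python
import torch

def masked_ratio(logp_train: torch.Tensor, logp_infer: torch.Tensor,
                 low: float = 0.5, high: float = 2.0) -> torch.Tensor:
    """Illustrative sketch only: keep gradients for tokens whose train/infer
    probability ratio stays inside a trusted band; stop gradients elsewhere."""
    ratio = torch.exp(logp_train - logp_infer)       # per-token probability ratio
    keep = (ratio >= low) & (ratio <= high)          # double-sided band
    return torch.where(keep, ratio, ratio.detach())  # out-of-band tokens contribute no gradient

# In a policy-gradient loss, this ratio would typically be multiplied by per-token
# advantages before averaging; that wiring is omitted here.
```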

### IMO Cases

## Quickstart

### 🤗 Hugging Face Transformers

Here is a code snippet showing how to use the chat model with `transformers`:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "inclusionAI/Ring-1T-preview"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    dtype="auto",
    device_map="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Give me a short introduction to large language models."
messages = [
    {"role": "system", "content": "You are Ling, an assistant created by inclusionAI"},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt", return_token_type_ids=False).to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=8192
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```

### 🤖 ModelScope

If you are in mainland China, we strongly recommend downloading the model from 🤖 ModelScope; a minimal download sketch is included at the end of this card.

## License

This code repository is licensed under [the MIT License](https://github.com/inclusionAI/Ling-V2/blob/master/LICENCE).
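As a convenience for the ModelScope route mentioned above, here is a minimal download sketch. It assumes the `modelscope` Python package is installed (`pip install modelscope`) and that the repository id on ModelScope mirrors the Hugging Face one; adjust the id if the namespace differs.

```python
from modelscope import snapshot_download

# Download the weights to a local cache directory
# (repo id assumed to mirror the Hugging Face one).
local_dir = snapshot_download("inclusionAI/Ring-1T-preview")

# The returned path can be passed to AutoModelForCausalLM.from_pretrained /
# AutoTokenizer.from_pretrained in place of the repo id in the snippet above.
print(local_dir)
```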