Files changed (2)
  1. README.md +5 -29
  2. config.json +0 -1
README.md CHANGED
@@ -10,8 +10,6 @@ language:
  - zh
  - ar
  - ru
- base_model:
- - HuggingFaceTB/SmolLM3-3B-Base
  ---


@@ -41,7 +39,7 @@ The model is a decoder-only transformer using GQA and NoPE (with 3:1 ratio), it
  ### Key features
  - Instruct model optimized for **hybrid reasoning**
  - **Fully open model**: open weights + full training details including public data mixture and training configs
- - **Long context:** Trained on 64k context and supports up to **128k tokens** using YARN extrapolation
+ - **Long context:** Trained on 64k context and suppots up to **128k tokens** using YARN extrapolation
  - **Multilingual**: 6 natively supported (English, French, Spanish, German, Italian, and Portuguese)

  For more details refer to our blog post: https://hf.co/blog/smollm3
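Since the shipped `config.json` keeps `"rope_scaling": null`, the extended context is enabled at load time rather than baked into the checkpoint. Below is a minimal sketch of doing that through transformers' YaRN `rope_scaling` override; the `HuggingFaceTB/SmolLM3-3B` repo id, the 2.0 factor, and the 64k base length are assumptions for illustration, not values taken from this diff.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "HuggingFaceTB/SmolLM3-3B"  # assumed instruct repo id
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

# Override RoPE scaling at load time to extrapolate past the 64k training context.
# factor=2.0 (~64k -> ~128k) is an assumption; check the card's recommended settings.
model = AutoModelForCausalLM.from_pretrained(
    checkpoint,
    torch_dtype="bfloat16",
    rope_scaling={
        "rope_type": "yarn",
        "factor": 2.0,
        "original_max_position_embeddings": 65536,
    },
)
```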
@@ -198,7 +196,7 @@ text = tokenizer.apply_chat_template(
  )
  ```

- For local inference, you can use `llama.cpp`, `ONNX`, `MLX`, `MLC` and `ExecuTorch`. You can find quantized checkpoints in this collection (https://huggingface.co/collections/HuggingFaceTB/smollm3-686d33c1fdffe8e635317e23)
+ For local inference, you can use `llama.cpp`, `ONNX`, `MLX` and `MLC`. You can find quantized checkpoints in this collection (https://huggingface.co/collections/HuggingFaceTB/smollm3-686d33c1fdffe8e635317e23)

  ### vLLM and SGLang

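The hunk above only shows the tail of the card's `apply_chat_template` snippet. For context, here is a self-contained sketch of that usage pattern with transformers; the repo id, prompt, and generation settings are placeholders rather than values taken from this diff.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "HuggingFaceTB/SmolLM3-3B"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint, torch_dtype="bfloat16")

# Render the chat template into a prompt string, then generate a completion.
messages = [{"role": "user", "content": "Give me a brief explanation of gravity."}]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)

# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```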
@@ -358,35 +356,13 @@ The model has also been trained on Arabic (standard), Chinese and Russian data,
  Here is an infographic with all the training details
  - The datasets used for pretraining can be found in this [collection](https://huggingface.co/collections/HuggingFaceTB/smollm3-pretraining-datasets-685a7353fdc01aecde51b1d9) and those used in mid-training and post-training will be uploaded later
  - The training and evaluation configs and code can be found in the [huggingface/smollm](https://github.com/huggingface/smollm) repository.
- - The training intermediate checkpoints (including the mid-training and SFT checkpoints) are available at [HuggingFaceTB/SmolLM3-3B-checkpoints](https://huggingface.co/HuggingFaceTB/SmolLM3-3B-checkpoints)
- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/651e96991b97c9f33d26bde6/qiE5ZYr9SD1CIAtfEfuC8.png)
-
- ### EU Summary of Public Content
-
- The EU AI Act requires all GPAI models to provide a Public Summary of Training Content according to a [given template](https://digital-strategy.ec.europa.eu/en/library/explanatory-notice-and-template-public-summary-training-content-general-purpose-ai-models).
- You can find the summary for this model below, as well as in its [development Space](https://huggingface.co/spaces/hfmlsoc/smollm3-eu-data-transparency).
-
- <iframe
-   src="https://hfmlsoc-smollm3-eu-data-transparency.hf.space"
-   frameborder="0"
-   width="850"
-   height="350"
- ></iframe>

+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/651e96991b97c9f33d26bde6/qiE5ZYr9SD1CIAtfEfuC8.png)

  ## Limitations

  SmolLM3 can produce text on a variety of topics, but the generated content may not always be factually accurate, logically consistent, or free from biases present in the training data. These models should be used as assistive tools rather than definitive sources of information. Users should always verify important information and critically evaluate any generated content.

- ## License
- [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)

- ## Citation
- ```bash
- @misc{bakouch2025smollm3,
- title={{SmolLM3: smol, multilingual, long-context reasoner}},
- author={Bakouch, Elie and Ben Allal, Loubna and Lozhkov, Anton and Tazi, Nouamane and Tunstall, Lewis and Patiño, Carlos Miguel and Beeching, Edward and Roucher, Aymeric and Reedi, Aksel Joonas and Gallouédec, Quentin and Rasul, Kashif and Habib, Nathan and Fourrier, Clémentine and Kydlicek, Hynek and Penedo, Guilherme and Larcher, Hugo and Morlon, Mathieu and Srivastav, Vaibhav and Lochner, Joshua and Nguyen, Xuan-Son and Raffel, Colin and von Werra, Leandro and Wolf, Thomas},
- year={2025},
- howpublished={\url{https://huggingface.co/blog/smollm3}}
- }
- ```
+ ## License
+ [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)
 
config.json CHANGED
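The hunk below drops the explicit `"tie_word_embeddings": true` entry, so the loaded config falls back to the architecture's default for that flag. A small sketch for checking the effective value and whether the LM head actually shares storage with the input embeddings after loading; the repo id is an assumption for illustration.

```python
from transformers import AutoConfig, AutoModelForCausalLM

checkpoint = "HuggingFaceTB/SmolLM3-3B"  # assumed repo id

# Effective flag once the explicit entry is gone from config.json.
config = AutoConfig.from_pretrained(checkpoint)
print("tie_word_embeddings:", config.tie_word_embeddings)

# Tied embeddings share one tensor, so the storage pointers match.
model = AutoModelForCausalLM.from_pretrained(checkpoint, torch_dtype="bfloat16")
tied = (
    model.get_input_embeddings().weight.data_ptr()
    == model.get_output_embeddings().weight.data_ptr()
)
print("embeddings tied:", tied)
```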
@@ -100,7 +100,6 @@
  "rope_scaling": null,
  "rope_theta": 5000000.0,
  "sliding_window": null,
- "tie_word_embeddings": true,
  "torch_dtype": "bfloat16",
  "transformers_version": "4.54.0.dev0",
  "use_cache": false,
 