Update README.md
README.md CHANGED
@@ -18,6 +18,12 @@ base_model:

We are releasing intermediate checkpoints of SmolLM3 to enable further research.

+For more details, check the [SmolLM GitHub repo](https://github.com/huggingface/smollm) with the end-to-end training and evaluation code:
+
+- ✓ Pretraining scripts (nanotron)
+- ✓ Post-training code SFT + APO (TRL/alignment-handbook)
+- ✓ Evaluation scripts to reproduce all reported metrics
+
## Pre-training

We release checkpoints every 40,000 steps, which equals 94.4B tokens.
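As a rough sketch of how one of these intermediate checkpoints could be used: since a checkpoint is published every 40,000 steps (94.4B tokens per interval, i.e. roughly 2.36M tokens per step), a specific checkpoint can presumably be pulled from the Hub by passing its revision to `transformers`. The repository id and revision name below are illustrative placeholders, not confirmed branch names from this repo.

```python
# Minimal sketch: load one intermediate SmolLM3 checkpoint from the Hub.
# The repo id and revision below are assumptions for illustration; check the
# repository's branch list for the actual checkpoint naming scheme.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "HuggingFaceTB/SmolLM3-3B-checkpoints"  # assumed repo id
revision = "step-40000"                           # assumed branch for the 40k-step checkpoint

tokenizer = AutoTokenizer.from_pretrained(repo_id, revision=revision)
model = AutoModelForCausalLM.from_pretrained(repo_id, revision=revision)

# Each 40,000-step interval covers 94.4B tokens,
# i.e. about 94.4e9 / 40_000 ≈ 2.36M tokens per training step.
print(f"Loaded {repo_id}@{revision}: {model.num_parameters():,} parameters")
```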