**At 1.4 Billion Model Size Trained on 350 Billion Tokens**
<img src="fig7.jpg" alt="fig7.jpg" style="width:1400px;"/>
**Figure 7:** Average scores of 1.4 Billion parameter models trained on 350 Billion tokens randomly sampled from state-of-the-art open datasets. Scores are averaged over 3 random seeds used for data sampling and are reported along with standard deviations. GneissWeb performs best among the large datasets.
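As a minimal illustration of the aggregation described in the caption, the sketch below averages per-seed scores and reports the sample standard deviation. The seed count matches the 3 seeds used in Figure 7, but the score values are placeholders, not results from this study.

```python
# Illustrative sketch only: aggregating benchmark scores over the 3
# data-sampling seeds, as reported in Figure 7. Scores are placeholder
# values, not actual GneissWeb results.
import statistics

# Hypothetical average benchmark score for one dataset, per random seed.
scores_by_seed = {0: 52.1, 1: 51.8, 2: 52.4}

values = list(scores_by_seed.values())
mean = statistics.mean(values)
stdev = statistics.stdev(values)  # sample standard deviation across seeds

print(f"score: {mean:.2f} +/- {stdev:.2f}")
```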
The datasets evaluated are split into those above 5 Trillion tokens in size and those below 5 Trillion. The former are suited to Stage-1 training and are the primary focus of this study; the latter are suited to Stage-2 training, and with appropriate tuning of the filtering parameters a version of GneissWeb can be produced for this space.