Update README.md

Fig 2 shows how the subsamples were created for the Fineweb baselines as well as for GneissWeb. A similar strategy to the one used for creating the Fineweb baseline was applied to the other baselines as well.
<img src="ablation_strategy.png" alt="ablation_strategy.png" style="width:1000px;"/>
Figure 2: Subsampling and Ablation Strategy
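
Since equivalent amounts of data are drawn from each candidate dataset, the subsampling step amounts to selecting documents up to a fixed token budget. As a rough illustration only (this is not the actual GneissWeb pipeline), a random subsample with a fixed token budget could be drawn as follows; the function name and the `(doc_id, num_tokens)` document representation are assumptions for this sketch:

```python
import random

def subsample_to_token_budget(docs, token_budget, seed=0):
    """Randomly select documents until a target token budget is reached,
    so that each dataset contributes a comparable amount of training data.
    `docs` is an iterable of (doc_id, num_tokens) pairs; names are assumed."""
    rng = random.Random(seed)
    pool = list(docs)
    rng.shuffle(pool)
    selected, total = [], 0
    for doc_id, num_tokens in pool:
        if total >= token_budget:
            break
        selected.append(doc_id)
        total += num_tokens
    return selected

# e.g. a ~35B-token subsample per dataset for one ablation run (placeholder input):
# sample_ids = subsample_to_token_budget(fineweb_docs, 35_000_000_000)
```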

We evaluated our ablation models using lm-evaluation-harness on two categories of tasks: High-Signal tasks and Extended tasks.
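
As a minimal sketch of how such an evaluation can be run with a recent version of lm-evaluation-harness (the checkpoint path, task list, and settings below are illustrative, not the exact configuration used for these ablations):

```python
import lm_eval

# Evaluate one ablation checkpoint on a few representative tasks (zero-shot).
results = lm_eval.simple_evaluate(
    model="hf",                                            # HuggingFace causal-LM backend
    model_args="pretrained=path/to/ablation-checkpoint",   # placeholder checkpoint path
    tasks=["hellaswag", "arc_easy", "piqa"],               # illustrative task names
    num_fewshot=0,                                         # zero-shot variation
    batch_size=8,
)
print(results["results"])
```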
Since ablations are performed by training ‘small’ models (1.4B parameter models) for a ‘few billion’ tokens (typically 35B tokens), it is important to identify benchmarks that provide a good signal at this relatively small scale. Similar to FineWeb, we used the following criteria for selecting the 11 High-Signal/Early-Signal tasks: accuracy above random guessing, accuracy increasing monotonically over training epochs, and small variance across runs. These tasks are shown in Fig 3 and cover the Commonsense Reasoning, Reading Comprehension, World Knowledge, and Language Understanding task categories. We used both the zero-shot and few-shot variations of these tasks.
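
The three selection criteria can be checked mechanically on per-checkpoint accuracies. The sketch below is only an illustration of the criteria as stated above; the thresholds and the function name are assumptions:

```python
import numpy as np

def is_high_signal(acc_by_checkpoint: np.ndarray,
                   random_baseline: float,
                   tol: float = 0.0,
                   max_std: float = 0.01) -> bool:
    """acc_by_checkpoint has shape (n_runs, n_checkpoints): benchmark accuracy
    for each training run at successive checkpoints. Thresholds are illustrative."""
    mean_curve = acc_by_checkpoint.mean(axis=0)
    above_random = bool(np.all(mean_curve > random_baseline))            # above random guessing
    monotonic = bool(np.all(np.diff(mean_curve) >= -tol))                # increasing over training
    low_variance = bool(acc_by_checkpoint.std(axis=0).max() <= max_std)  # small variance across runs
    return above_random and monotonic and low_variance
```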

<img src="HighSignal.png" alt="HighSignal.png" style="width:1000px;"/>
Figure 3: High Signal Tasks — provide good signal at relatively small scale (of 1.4B models trained on 35B to 100B tokens)

The High-Signal tasks were used to analyze individual ingredients and possible recipes.
The extended tasks shown in Fig 4 are a superset of the High-Signal tasks. Besides the task categories of Commonsense Reasoning, Reading Comprehension, World Knowledge, and Language Understanding, the extended set also includes the category of Symbolic Problem Solving. For the extended set, we again use both the zero-shot and few-shot variations of the tasks.

<img src="Extended_Tasks.png" alt="Extended_Tasks.png" style="width:1000px;"/>
Figure 4: Extended Tasks — a broader set of tasks to evaluate generalization at larger numbers of tokens and/or larger model sizes