bhatta1 committed (verified)
Commit 017962c · 1 Parent(s): 381e892

Update README.md

Files changed (1)
  1. README.md +1 -3
README.md CHANGED
@@ -40,15 +40,13 @@ To compare GneissWeb against the baselines, we trained decoder models of sizes 1
 
 The baselines from which equivalent data was subsampled and used for this comparison included:
 
-![Fig2.jpg](Fig2.jpg)
-
 <img src="Fig2.jpg" alt="Fig2.jpg" style="width:1000px;"/>
 
 Fig 2 shows how the subsamples were created for the FineWeb baselines as well as for GneissWeb. A similar strategy to the one used to create the FineWeb baseline was applied to the other baselines.
 
 
 ![ablation_strategy.png](ablation_strategy.png)
-
+<img src="ablation_strategy.png" alt="ablation_strategy.png" style="width:1000px;"/>
 Figure 2: Subsampling and Ablation Strategy
 
 We trained and evaluated our models on an LSF (Load Sharing Facility) cluster, with each node equipped with eight H100 GPUs. For training tasks involving 35 billion tokens, we typically trained models with 1.4 billion trainable parameters across 64 GPUs. For more compute-intensive tasks, we scaled up to 128 or 256 GPUs to reduce training time, and for evaluation tasks we generally used 8 GPUs.
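
The context in this hunk centers on the subsampling and ablation strategy of Figure 2: an equivalent amount of data is drawn from each baseline so that the 1.4B-parameter ablation models trained on roughly 35 billion tokens can be compared fairly. As a rough, hypothetical illustration of that idea (not the GneissWeb team's actual tooling), the sketch below streams a corpus, shuffles it, and keeps documents until a fixed token budget is reached; the `HuggingFaceFW/fineweb` dataset id, the `gpt2` tokenizer, and the helper name `subsample_to_budget` are assumptions made only for this example.

```python
# Minimal sketch of token-matched subsampling for ablation baselines.
# NOT the GneissWeb pipeline; the dataset id, tokenizer, and token budget
# below are illustrative assumptions chosen for this example.
import json

from datasets import load_dataset          # pip install datasets
from transformers import AutoTokenizer     # pip install transformers

TOKEN_BUDGET = 35_000_000_000   # ~35B tokens, matching the ablation training runs
SEED = 42


def subsample_to_budget(dataset_id: str, out_path: str,
                        tokenizer_id: str = "gpt2",
                        budget: int = TOKEN_BUDGET) -> int:
    """Stream a corpus, shuffle it, and keep documents until ~`budget` tokens."""
    tokenizer = AutoTokenizer.from_pretrained(tokenizer_id)
    stream = load_dataset(dataset_id, split="train", streaming=True)
    stream = stream.shuffle(seed=SEED, buffer_size=10_000)

    kept_tokens = 0
    with open(out_path, "w") as out:
        for doc in stream:
            # Token counts are approximate; any consistent tokenizer works
            # as long as the same one is used for every baseline.
            n = len(tokenizer.encode(doc["text"]))
            out.write(json.dumps({"text": doc["text"], "num_tokens": n}) + "\n")
            kept_tokens += n
            if kept_tokens >= budget:
                break
    return kept_tokens


if __name__ == "__main__":
    # One token-matched subsample per baseline keeps the comparison fair:
    # every 1.4B-parameter ablation model sees roughly the same amount of data.
    total = subsample_to_budget("HuggingFaceFW/fineweb", "fineweb_subsample.jsonl")
    print(f"kept ~{total / 1e9:.1f}B tokens")
```

Streaming with a buffered shuffle keeps the subsample roughly random without materializing the full corpus; repeating the call once per baseline yields the token-matched subsamples the figure describes.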