Committed by bhatta1 · Commit b035c60 · verified · 1 Parent(s): 4e3d7e6

Update README.md

Files changed (1): README.md (+1 -0)
README.md CHANGED
@@ -31,6 +31,7 @@ These were applied in the order shown in Fig 1
**Figure 1:** GneissWeb recipe

The net impact was that the dataset was filtered down from 15T tokens to approximately 10T tokens. In the subsequent sections we describe the overall performance obtained using GneissWeb compared to other baselines.
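As a rough illustration of how such a recipe operates (this is not the actual GneissWeb code), the sketch below applies a sequence of quality filters in a fixed order and reports the token count before and after each step; the filter functions and the token counter are hypothetical placeholders.

```python
# Minimal sketch: apply quality filters in a fixed order (as in Figure 1) and
# track how many tokens survive each step. All helpers below are placeholders,
# not the actual GneissWeb filters.

def count_tokens(doc: str) -> int:
    # Placeholder: whitespace splitting stands in for a real tokenizer.
    return len(doc.split())

def drop_short(doc: str) -> bool:
    return count_tokens(doc) >= 50           # keep documents with >= 50 tokens

def drop_boilerplate(doc: str) -> bool:
    return "lorem ipsum" not in doc.lower()  # keep documents without filler text

def apply_recipe(docs, filters):
    kept = list(docs)
    for keep_fn in filters:
        before = sum(count_tokens(d) for d in kept)
        kept = [d for d in kept if keep_fn(d)]
        after = sum(count_tokens(d) for d in kept)
        print(f"{keep_fn.__name__}: {before:,} -> {after:,} tokens kept")
    return kept

docs = ["word " * 200, "lorem ipsum " * 100, "too short"]
cleaned = apply_recipe(docs, [drop_short, drop_boilerplate])
```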
 
**Evaluation Strategy**

To compare GneissWeb against the baselines, we trained 7B-parameter decoder models on the Llama architecture. These were trained on 350B tokens to validate the performance of each processing step. The data was tokenized using the StarCoder tokenizer, and training used a sequence length of 8192.
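As a rough sketch of this kind of setup using the Hugging Face transformers API: the StarCoder tokenizer checkpoint name and the model width and depth below are illustrative assumptions rather than the actual training configuration; only the Llama architecture and the 8192 sequence length come from the text.

```python
from transformers import AutoTokenizer, LlamaConfig, LlamaForCausalLM

# Assumed checkpoint name for the StarCoder tokenizer.
tokenizer = AutoTokenizer.from_pretrained("bigcode/starcoder")

# Illustrative 7B-scale Llama configuration; the 8192 context length comes
# from the text, while the width/depth values are placeholders.
config = LlamaConfig(
    vocab_size=len(tokenizer),
    max_position_embeddings=8192,
    hidden_size=4096,
    num_hidden_layers=32,
    num_attention_heads=32,
)
model = LlamaForCausalLM(config)  # randomly initialized decoder-only model

# Pretraining examples would be packed or truncated to the 8192-token context.
batch = tokenizer("example pretraining text", truncation=True,
                  max_length=8192, return_tensors="pt")
```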
 