- transformers
---

## What is it?

- Recipe for producing a state-of-the-art LLM pre-training dataset with `10+ Trillion` tokens, derived from [FineWeb 1.1.0](https://huggingface.co/datasets/HuggingFaceFW/fineweb)
- Evaluation results showing more than `2%` average improvement (with multiple random seeds) over FineWeb 1.1.0 tokens on common benchmarks for a `7B` parameter ablation model
- [Data Prep Kit](https://github.com/IBM/data-prep-kit) [Notebook](https://github.com/IBM/data-prep-kit/blob/dev/examples/notebooks/GneissWeb/GneissWeb.ipynb) for reproducing the annotations and filters on top of FineWeb and [Notebook](https://github.com/ian-cho/data-prep-kit/blob/dev/transforms/universal/bloom/bloom_python.ipynb) for applying a Bloom filter on FineWeb to quickly reproduce an approximate version of GneissWeb (without annotations or filters); the Bloom-filter idea is sketched below
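
To make the Bloom-filter shortcut concrete, here is a minimal, self-contained sketch of the underlying idea: build a probabilistic membership set of retained document ids and keep only the FineWeb rows whose ids test positive. Everything here (the capacity, the example ids, the `BloomFilter` class itself) is illustrative; the linked notebook uses Data Prep Kit's bloom transform and the published filter instead.

```python
import hashlib
import math

class BloomFilter:
    """Minimal Bloom filter: k salted SHA-256 hashes over a fixed-size bit array."""

    def __init__(self, capacity: int, error_rate: float = 0.01):
        # Standard sizing formulas for the target capacity and false-positive rate.
        self.m = math.ceil(-capacity * math.log(error_rate) / (math.log(2) ** 2))
        self.k = max(1, round(self.m / capacity * math.log(2)))
        self.bits = bytearray((self.m + 7) // 8)

    def _positions(self, item: str):
        for salt in range(self.k):
            digest = hashlib.sha256(f"{salt}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, item: str) -> None:
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item: str) -> bool:
        return all(self.bits[pos // 8] >> (pos % 8) & 1 for pos in self._positions(item))

# Hypothetical usage: keep only the FineWeb rows whose document ids are in the filter.
kept = BloomFilter(capacity=1_000_000)
kept.add("<urn:uuid:1234>")  # ids of retained documents (illustrative values)

rows = [{"id": "<urn:uuid:1234>"}, {"id": "<urn:uuid:5678>"}]
gneissweb_like = [row for row in rows if row["id"] in kept]
```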
- Gneiss, pronounced "nice" (naɪs), is a durable metamorphic rock, just like IBM’s open-source [Granite](https://huggingface.co/ibm-granite) models trained on it

## The GneissWeb Recipe in a Nutshell: Building on Top of FineWeb

Hugging Face introduced [FineWeb V1.1.0](https://huggingface.co/datasets/HuggingFaceFW/fineweb), a large-scale dataset for LLM pre-training consisting of `15 Trillion` tokens (`44TB` of disk space). We started with the goal of producing `10+ Trillion` high-quality tokens from FineWeb V1.1.0, so as to obtain a sufficiently large set of quality tokens suitable for pre-training. Unlike FineWeb-Edu and similar domain-specific datasets, which rely on a single quality annotator and perform aggressive filtering, we developed a multi-faceted ensemble of quality annotators to enable fine-grained quality filtering. This allowed us to achieve a finer trade-off between the quality and quantity of the tokens retained. The GneissWeb recipe allows for tuning the filtering thresholds such that the resulting dataset is suitable for pre-training as well as annealing.
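
As a rough illustration of what ensemble-based filtering with tunable thresholds can look like, here is a small sketch; the annotator names, scores, and threshold values are invented for the example and are not the published GneissWeb settings.

```python
# Hypothetical per-document annotator scores; names and thresholds below are
# illustrative assumptions, not the published GneissWeb settings.
def passes_ensemble(doc: dict, thresholds: dict) -> bool:
    """Keep a document only if every annotator score clears its threshold."""
    return all(doc[name] >= cutoff for name, cutoff in thresholds.items())

# Looser thresholds retain more tokens (pre-training flavor); tighter ones
# retain fewer, higher-quality tokens (annealing flavor).
pretraining_cfg = {"quality_score": 0.3, "readability": 0.4}
annealing_cfg = {"quality_score": 0.7, "readability": 0.6}

docs = [
    {"text": "...", "quality_score": 0.8, "readability": 0.7},
    {"text": "...", "quality_score": 0.5, "readability": 0.5},
]
pretraining_set = [d for d in docs if passes_ensemble(d, pretraining_cfg)]
annealing_set = [d for d in docs if passes_ensemble(d, annealing_cfg)]
```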

### Recipe Steps

The GneissWeb dataset was obtained by applying the following processing steps to FineWeb (the quality-filter step is sketched after the list):
- Exact substring deduplication at line level
- Custom-built fastText quality filter
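
For a sense of how a fastText quality filter is applied, here is a minimal sketch using the `fasttext` library; the model path, label name, and threshold are assumptions for illustration — the actual classifiers are built in the linked Data Prep Kit notebook.

```python
import fasttext  # pip install fasttext

# Hypothetical model path and label name; the actual GneissWeb classifiers
# are produced by the linked Data Prep Kit notebook.
model = fasttext.load_model("quality_classifier.bin")

def keep(text: str, threshold: float = 0.5) -> bool:
    """Keep a document if its quality probability clears the threshold."""
    # fastText predicts on a single line of text, so strip newlines first.
    labels, probs = model.predict(text.replace("\n", " "))
    scores = dict(zip(labels, probs))
    return scores.get("__label__hq", 0.0) >= threshold
```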

These were applied in the order shown in `Fig 1`.

**Figure 1:** GneissWeb recipe

## Evaluation Strategy

To compare GneissWeb against the baselines, we trained `7B` parameter decoder models based on the Llama architecture. These were trained on `350B` tokens to validate the performance of each processing step. The data was tokenized using the StarCoder tokenizer, and training was done with the PyTorch FSDP stack and a sequence length of `8192`.
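
A minimal sketch of the tokenization setup described above, assuming the StarCoder tokenizer published at `bigcode/starcoder` on the Hugging Face Hub (the repo id is an assumption, and the checkpoint is gated, so access must be requested first):

```python
from transformers import AutoTokenizer

# StarCoder tokenizer from the Hugging Face Hub; the repo id is an assumption.
tokenizer = AutoTokenizer.from_pretrained("bigcode/starcoder")

# Token budgets such as the 350B training tokens are counted after tokenization.
ids = tokenizer("Gneiss, pronounced 'nice', is a durable rock.")["input_ids"]
print(f"{len(ids)} tokens")

# Ablation training used sequences of length 8192.
MAX_SEQ_LEN = 8192
```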

We used FineWeb 1.1.0 and FineWeb-Edu-score-2 as our comparison baselines.

We evaluated our ablation models using [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) on two categories of tasks: 11 High-Signal tasks (0-shot and few-shot) and 20 Extended tasks (0-shot and few-shot).
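
As a sketch of how such evaluations can be driven from Python with lm-evaluation-harness (v0.4+ exposes `simple_evaluate`); the checkpoint path and task list here are placeholders, not the exact GneissWeb task set:

```python
import lm_eval  # pip install lm-eval

# Illustrative: score an ablation checkpoint on a few tasks, 0-shot.
# The checkpoint path and task list are placeholders, not the GneissWeb set.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=/path/to/ablation-checkpoint",
    tasks=["hellaswag", "arc_easy", "sciq"],
    num_fewshot=0,
)
print(results["results"])
```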

### High-Signal tasks

Similar to FineWeb, we used the following criteria for selecting the 11 High-Signal/Early-Signal tasks: accuracy above random guessing, accuracy increasing monotonically over training epochs, and small variance across runs. These are shown in `Fig 3` and cover the Commonsense Reasoning, Reading Comprehension, World Knowledge, and Language Understanding task categories. We used both the zero-shot and few-shot variations of these tasks.
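
These selection criteria are mechanical enough to express in code. A sketch, with illustrative threshold values (the exact cutoffs used for GneissWeb are not stated here):

```python
import numpy as np

def is_high_signal(scores: np.ndarray, random_baseline: float,
                   max_final_std: float = 1.0) -> bool:
    """scores: (runs, checkpoints) accuracy matrix for one task, in percent.

    Threshold values are illustrative assumptions, not the published cutoffs.
    """
    mean_curve = scores.mean(axis=0)
    above_random = mean_curve[-1] > random_baseline
    # "Monotonically increasing" relaxed to non-decreasing between checkpoints.
    monotone = np.all(np.diff(mean_curve) >= 0)
    low_variance = scores[:, -1].std() <= max_final_std
    return bool(above_random and monotone and low_variance)
```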
The High-Signal tasks were used to analyze individual ingredients and possible recipe combinations via ablations. After we had narrowed down to a few candidate recipes using these signals, we used the extended set of benchmarks to evaluate the model’s ability to generalize.

### Extended tasks

The extended tasks shown in `Fig 4` are a superset of the High-Signal tasks. Besides the task categories of Commonsense Reasoning, Reading Comprehension, World Knowledge, and Language Understanding, the extended set also includes Symbolic Problem Solving. Here too we used both the zero-shot and few-shot variations of the tasks.
The Extended Task set includes some tasks which are not in the High-Signal set. These tasks are useful but, at ablation scale, may have high standard deviation (like `PubMedQA`), may stay at random guessing throughout the training cycle (like `MMLU`), or may score above random guessing yet show no improvement with training (like `GSM8k`). However, these tasks are useful indicators of larger-model performance and have therefore been retained in the Extended Tasks set.

### Evaluation Results, `7B` parameter model, `350B` Tokens

Given that training `7 Billion` parameter models requires a lot more compute, and so does their evaluation, we limited training to `350 Billion` tokens. We see that the models trained on GneissWeb outperform the models trained on FineWeb V1.1.0 and FineWeb-Edu-score-2.
**Figure 6:** Average evaluation score on High-Signal tasks versus the number of training tokens, for a `7B` parameter model trained on `350B` tokens. The model trained on GneissWeb consistently outperforms the one trained on FineWeb 1.1.0.

## Summary

**Developers**: IBM Research