bhatta1 committed commit 4e0a4e8 (verified) · 1 parent: bbb4ee2

Update README.md

Files changed (1): README.md (+1, -1)
README.md CHANGED
@@ -93,7 +93,7 @@ Given that training models of size `7 Billion` parameters requires a lot more compu
 
  1. fastText models used in the curation of GneissWeb
  1. [Quality Classifier](https://huggingface.co/ibm-granite/GneissWeb.Quality_annotator)
- The fastText model takes as input text and classifies whether the text is "high-quality" (labeled as `__label__hq`) or "low-quality" (labeled as `__label__cc`). The GneissWeb ensemble filter uses the confidence score given to `__label__hq` for filtering documents based on an appropriately chosen threshold. The fastText model is used along with [DCLM-fastText] (https://huggingface.co/mlfoundations/fasttext-oh-eli5) and other quality annotators. Please refer to the [example notebook](https://github.com/IBM/data-prep-kit/blob/dev/transforms/language/gneissweb_classification/gneissweb_classification.ipynb) for using a fastText model with Data-prep-kit.
+ The fastText model takes as input text and classifies whether the text is "high-quality" (labeled as `__label__hq`) or "low-quality" (labeled as `__label__cc`). The GneissWeb ensemble filter uses the confidence score given to `__label__hq` for filtering documents based on an appropriately chosen threshold. The fastText model is used along with [DCLM-fastText](https://huggingface.co/mlfoundations/fasttext-oh-eli5) and other quality annotators. Please refer to the [example notebook](https://github.com/IBM/data-prep-kit/blob/dev/transforms/language/gneissweb_classification/gneissweb_classification.ipynb) for using a fastText model with Data-prep-kit.
  2. Classifiers for [Science](https://huggingface.co/ibm-granite/GneissWeb.Sci_classifier), [Technology](https://huggingface.co/ibm-granite/GneissWeb.Tech_classifier), [Medical](https://huggingface.co/ibm-granite/GneissWeb.Med_classifier) and [Education](https://huggingface.co/ibm-granite/GneissWeb.Edu_classifier). Each classifier takes as input text and classifies whether the text belongs to the target topic (labeled as `__label__hq`) or to other categories (labeled as `__label__cc`). Please refer to the [example notebook](https://github.com/IBM/data-prep-kit/blob/dev/transforms/language/gneissweb_classification/gneissweb_classification.ipynb) for using the classifiers with Data-prep-kit. The GneissWeb ensemble filter uses the confidence score given to `__label__hq` for filtering documents based on an appropriately chosen threshold. The fastText models are used together along with other quality annotators.
 
  2. [Bloom filter](https://huggingface.co/ibm-granite/GneissWeb.bloom) built on the document ids contained in GneissWeb. This can be used to recreate GneissWeb using the document ids from FineWeb 1.1.0 or any other version of Commoncrawl. This filter offers a way to determine which documents of FineWeb 1.1.0 or Commoncrawl are part of GneissWeb. This [example](https://github.com/ian-cho/data-prep-kit/blob/dev/transforms/universal/bloom/bloom_python.ipynb) shows how to apply the bloom filter on any parquet file. The [Bloom annotator transform](https://github.com/ian-cho/data-prep-kit/tree/dev/transforms/universal/bloom) assigns a label of `1` if the document is present in the GneissWeb Bloom filter; otherwise, it assigns `0`. This approach provides a clear understanding of which documents in FineWeb 1.1.0 are also present in GneissWeb and which are not. The `id` column in FineWeb 1.1.0 looks like this: `<urn:uuid:39147604-bfbe-4ed5-b19c-54105f8ae8a7>`. The bloom filter is of the [rbloom](https://github.com/KenanHanke/rbloom) type and of size `28 GB`. Identifying documents via the bloom filter yields only an approximation of GneissWeb, because the "Exact substring deduplication" step, which changes the content of documents, is not applied. Applying "Exact substring deduplication" on top of the bloom filter leads to a much better approximation of the GneissWeb dataset.
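As a sketch of the thresholding described for the quality and topic classifiers above: fastText-style models return parallel sequences of labels and confidence scores, and a document is kept when the `__label__hq` score clears a chosen cutoff. The `keep_document` helper and the `THRESHOLD` value below are illustrative assumptions, not the actual GneissWeb code or its tuned threshold.

```python
# Sketch of filtering on a fastText-style prediction. A real pipeline
# would call model.predict(text), which yields (labels, probabilities);
# here those outputs are simulated with literal tuples.

THRESHOLD = 0.9  # hypothetical cutoff; the real pipeline tunes this per annotator

def keep_document(labels, probs, threshold=THRESHOLD):
    """Return True if the __label__hq confidence clears the threshold."""
    scores = dict(zip(labels, probs))
    return scores.get("__label__hq", 0.0) >= threshold

# Simulated predictions for two documents
print(keep_document(("__label__hq", "__label__cc"), (0.97, 0.03)))  # True
print(keep_document(("__label__cc", "__label__hq"), (0.85, 0.15)))  # False
```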
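The Bloom annotator's labeling step described above can be sketched as follows; a plain Python set stands in for the actual 28 GB rbloom filter, and the second id below is made up for illustration.

```python
# Minimal sketch of the Bloom annotator logic: label 1 if a document id
# is present in the GneissWeb filter, else 0. A Python set stands in for
# the real 28 GB rbloom filter, and membership here is hypothetical.

gneissweb_ids = {
    "<urn:uuid:39147604-bfbe-4ed5-b19c-54105f8ae8a7>",
}

def annotate(doc_ids):
    """Return a 1/0 membership label per FineWeb document id."""
    return [1 if doc_id in gneissweb_ids else 0 for doc_id in doc_ids]

labels = annotate([
    "<urn:uuid:39147604-bfbe-4ed5-b19c-54105f8ae8a7>",  # assumed in GneissWeb
    "<urn:uuid:00000000-0000-0000-0000-000000000000>",  # assumed not in GneissWeb
])
print(labels)  # [1, 0]
```

Note that a real Bloom filter admits a small false-positive rate, so a `1` label means "almost certainly present" while a `0` label is definitive; the set used in this sketch has no such error.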