Update README.md

README.md
@@ -131,6 +131,8 @@ Recently, IBM has introduced GneissWeb; a large dataset yielding around 10 trill
 Identification of the documents via the bloom filter would only get to an approximation of GneissWeb, since the "Exact substring deduplication" filtering step would not be applied. Applying that step from the DPK would get it closer to the GneissWeb dataset.

-4. IBM Data Prep Kit transforms for [Rep_removal](https://github.com/IBM/data-prep-kit/tree/dev/transforms/universal/rep_removal), [Classifications](https://github.com/IBM/data-prep-kit/tree/dev/transforms/language/gneissweb_classification), [Extreme_tokenized](https://github.com/IBM/data-prep-kit/tree/dev/transforms/language/extreme_tokenized)
-5.
+4. IBM Data Prep Kit transforms for [Rep_removal](https://github.com/IBM/data-prep-kit/tree/dev/transforms/universal/rep_removal), [Classifications](https://github.com/IBM/data-prep-kit/tree/dev/transforms/language/gneissweb_classification), [Extreme_tokenized](https://github.com/IBM/data-prep-kit/tree/dev/transforms/language/extreme_tokenized)
+5. [notebook](https://github.com/IBM/data-prep-kit/blob/dev/examples/notebooks/GneissWeb/GneissWeb.ipynb) to recreate GneissWeb using the methods described above:
+
+6. Notebook to recreate GneissWeb using a bloom filter built on the document ids of GneissWeb
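For illustration, the bloom-filter identification step described above could be sketched as follows. This is a minimal, self-contained sketch, not the actual GneissWeb artifact: the `BloomFilter` class, its parameters, and the document ids are all placeholders, and a real run would load a prebuilt filter over the GneissWeb document ids.

```python
import hashlib

class BloomFilter:
    """Minimal bloom filter over document-id strings (illustrative only)."""

    def __init__(self, size_bits: int = 1 << 20, num_hashes: int = 7):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item: str):
        # Derive k bit positions from one SHA-256 digest via double hashing.
        h = hashlib.sha256(item.encode("utf-8")).digest()
        h1 = int.from_bytes(h[:8], "big")
        h2 = int.from_bytes(h[8:16], "big")
        for i in range(self.num_hashes):
            yield (h1 + i * h2) % self.size

    def add(self, item: str) -> None:
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item: str) -> bool:
        # May return false positives, never false negatives.
        return all(
            self.bits[pos // 8] & (1 << (pos % 8))
            for pos in self._positions(item)
        )

# Hypothetical usage: keep only candidate documents whose ids are in the filter.
gneissweb_ids = BloomFilter()
for doc_id in ["doc-00001", "doc-00002"]:  # placeholder ids
    gneissweb_ids.add(doc_id)

candidates = ["doc-00001", "doc-99999"]
selected = [d for d in candidates if d in gneissweb_ids]
```

Because a bloom filter admits false positives but no false negatives, every true GneissWeb document would be kept, plus a small fraction of extra documents; as the README notes, applying the remaining DPK filtering steps would tighten the result toward the actual dataset.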