Dan Fu committed · Commit f80c196 · 1 Parent(s): 0a8b6b7
Remove links
README.md CHANGED
@@ -44,13 +44,13 @@ The dataset structure is as follows:
 
 ## Dataset Creation
 
-This dataset was created to follow the [LLaMa paper](…) as closely as possible to try to reproduce its recipe.
+This dataset was created to follow the LLaMa paper as closely as possible to try to reproduce its recipe.
 
 ### Source Data
 
 #### Commoncrawl
 
-We download five dumps from Commoncrawl, and run the dumps through the official [`cc_net` pipeline](…).
+We download five dumps from Commoncrawl, and run the dumps through the official `cc_net` pipeline.
 We then deduplicate on the paragraph level, and filter out low-quality text using a linear classifier trained to
 classify paragraphs as Wikipedia references or random Commoncrawl samples.
 
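For illustration, the two processing steps the changed text describes, paragraph-level deduplication followed by a linear quality classifier that separates Wikipedia-reference paragraphs from random Commoncrawl paragraphs, could look like the minimal sketch below. All function names, the hashing-trick features, and the 0.5 threshold are assumptions made for this sketch; this is not the dataset's actual pipeline code, which operates at scale on `cc_net` output shards.

```python
# Hypothetical sketch of paragraph-level dedup + linear quality filtering.
# Not the dataset's real code; names and parameters are illustrative only.
import hashlib

from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline


def dedup_paragraphs(paragraphs):
    """Keep the first occurrence of each paragraph, keyed by a SHA-1 digest."""
    seen, unique = set(), []
    for p in paragraphs:
        digest = hashlib.sha1(p.strip().lower().encode("utf-8")).digest()
        if digest not in seen:
            seen.add(digest)
            unique.append(p)
    return unique


def train_quality_classifier(wiki_ref_paragraphs, random_cc_paragraphs):
    """Linear classifier: Wikipedia-reference paragraphs (1) vs. random CC (0)."""
    texts = list(wiki_ref_paragraphs) + list(random_cc_paragraphs)
    labels = [1] * len(wiki_ref_paragraphs) + [0] * len(random_cc_paragraphs)
    model = make_pipeline(
        # Hashing trick keeps the sketch self-contained (no stored vocabulary).
        HashingVectorizer(n_features=2**18, alternate_sign=False),
        LogisticRegression(max_iter=1000),
    )
    model.fit(texts, labels)
    return model


def filter_low_quality(paragraphs, model, threshold=0.5):
    """Keep paragraphs the classifier scores as Wikipedia-reference-like."""
    probs = model.predict_proba(paragraphs)[:, 1]
    return [p for p, prob in zip(paragraphs, probs) if prob >= threshold]
```

A plausible order of operations, matching the README's description, is `filter_low_quality(dedup_paragraphs(paragraphs), model)`: dedup first so the classifier scores each unique paragraph once.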