Record schema:
| Column | Type |
|---|---|
| `repo_id` | string, 8-49 chars |
| `owner` | string, 3-24 chars |
| `tags` | string, 22-25.6k chars |
| `type` | 1 class (`community`) |
| `markdown_content` | string, 454-30.5k chars |
# Dataset: `fka/awesome-chatgpt-prompts`
## 📝 Metadata
- **Author/Owner:** fka
- **Downloads:** 33620
- **Likes:** 9309
- **Tags:** task_categories:question-answering, license:cc0-1.0, size_categories:n<1K, format:csv, modality:text, library:datasets, library:pandas, library:mlcroissant, library:polars, region:us, ChatGPT
- **License:** Not specified
## 📖 Description
```text
🧠 Awesome ChatGPT Prompts [CSV dataset]
This is a Dataset Repository of Awesome ChatGPT Prompts
View All Prompts on GitHub
License
CC-0...
```
## 📂 File System Sample
- `.gitattributes`
- `README.md`
- `prompts.csv`
## 📊 Data Structure
### Config: `default`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `act` | `str` |
| `prompt` | `str` |
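A minimal sketch of reading this split with the 🤗 `datasets` library (listed in the repo's tags); variable names are illustrative:
```python
from datasets import load_dataset

# Single CSV-backed config with one "train" split and two string columns.
prompts = load_dataset("fka/awesome-chatgpt-prompts", split="train")

row = prompts[0]
print(row["act"])     # the persona/role the prompt sets up
print(row["prompt"])  # the full prompt text
```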
# Dataset: `HuggingFaceFW/fineweb`
## 📝 Metadata
- **Author/Owner:** HuggingFaceFW
- **Downloads:** 277751
- **Likes:** 2407
- **Tags:** task_categories:text-generation, language:en, license:odc-by, size_categories:10B<n<100B, modality:tabular, modality:text, arxiv:2306.01116, arxiv:2109.07445, arxiv:2406.17557, doi:10.57967/hf/2493, region:us
- **License:** Not specified
## 📖 Description
```text
🍷 FineWeb
15 trillion tokens of the finest data the 🌐 web has to offer
What is it?
The 🍷 FineWeb dataset consists of more than 18.5T tokens (originally 15T tokens) of cleaned and deduplicated english web data from CommonCrawl. The data processing pipeline is optimized for LLM performance and ran on the 🏭 datatrove library, our large scale data processing library.
🍷 FineWeb was originally meant to be a fully open replication of 🦅 RefinedWeb, with a release… See the full description on the dataset page: https://huggingface.co/datasets/HuggingFaceFW/fineweb....
```
## 📂 File System Sample
- `.gitattributes`
- `README.md`
- `data/CC-MAIN-2013-20/000_00000.parquet`
- `data/CC-MAIN-2013-20/000_00001.parquet`
- `data/CC-MAIN-2013-20/000_00002.parquet`
- `data/CC-MAIN-2013-20/000_00003.parquet`
- `data/CC-MAIN-2013-20/000_00004.parquet`
- `data/CC-MAIN-2013-20/000_00005.parquet`
- `data/CC-MAIN-2013-20/000_00006.parquet`
- `data/CC-MAIN-2013-20/000_00007.parquet`
- `data/CC-MAIN-2013-20/000_00008.parquet`
- `data/CC-MAIN-2013-20/000_00009.parquet`
- `data/CC-MAIN-2013-20/000_00010.parquet`
- `data/CC-MAIN-2013-20/000_00011.parquet`
- `data/CC-MAIN-2013-20/000_00012.parquet`
- ... and more.
## 📊 Data Structure
### Config: `default`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `text` | `str` |
| `id` | `str` |
| `dump` | `str` |
| `url` | `str` |
| `date` | `str` |
| `file_path` | `str` |
| `language` | `str` |
| `language_score` | `float` |
| `token_count` | `int` |
### Config: `sample-10BT`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `text` | `str` |
| `id` | `str` |
| `dump` | `str` |
| `url` | `str` |
| `date` | `str` |
| `file_path` | `str` |
| `language` | `str` |
| `language_score` | `float` |
| `token_count` | `int` |
### Config: `sample-100BT`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `text` | `str` |
| `id` | `str` |
| `dump` | `str` |
| `url` | `str` |
| `date` | `str` |
| `file_path` | `str` |
| `language` | `str` |
| `language_score` | `float` |
| `token_count` | `int` |
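Given the size noted above (more than 18.5T tokens in the default config), a streaming read of the smaller `sample-10BT` config is a reasonable sketch; column names follow the tables above:
```python
from datasets import load_dataset

# Stream instead of downloading; "sample-10BT" is one of the listed configs.
fw = load_dataset("HuggingFaceFW/fineweb", name="sample-10BT",
                  split="train", streaming=True)

for doc in fw.take(3):
    print(doc["dump"], doc["language"], doc["token_count"], doc["url"])
    print(doc["text"][:200])
```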
# Dataset: `Anthropic/hh-rlhf`
## 📝 Metadata
- **Author/Owner:** Anthropic
- **Downloads:** 34144
- **Likes:** 1464
- **Tags:** license:mit, size_categories:100K<n<1M, format:json, modality:text, library:datasets, library:dask, library:mlcroissant, library:polars, arxiv:2204.05862, region:us, human-feedback
- **License:** Not specified
## 📖 Description
```text
Dataset Card for HH-RLHF
Dataset Summary
This repository provides access to two different kinds of data:
Human preference data about helpfulness and harmlessness from Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback. These data are meant to train preference (or reward) models for subsequent RLHF training. These data are not meant for supervised training of dialogue agents. Training dialogue agents on these data is likely to lead… See the full description on the dataset page: https://huggingface.co/datasets/Anthropic/hh-rlhf....
```
## 📂 File System Sample
- `.gitattributes`
- `README.md`
- `harmless-base/test.jsonl.gz`
- `harmless-base/train.jsonl.gz`
- `helpful-base/test.jsonl.gz`
- `helpful-base/train.jsonl.gz`
- `helpful-online/test.jsonl.gz`
- `helpful-online/train.jsonl.gz`
- `helpful-rejection-sampled/test.jsonl.gz`
- `helpful-rejection-sampled/train.jsonl.gz`
- `red-team-attempts/red_team_attempts.jsonl.gz`
## 📊 Data Structure
### Config: `default`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `chosen` | `str` |
| `rejected` | `str` |
#### Split: `test`
| Column Name | Data Type |
|---|---|
| `chosen` | `str` |
| `rejected` | `str` |
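Both splits expose the same `chosen`/`rejected` pair of conversation strings; a small sketch for inspecting the preference pairs:
```python
from datasets import load_dataset

hh = load_dataset("Anthropic/hh-rlhf")   # default config: train + test splits

pair = hh["train"][0]
print(pair["chosen"][:300])    # preferred conversation transcript
print(pair["rejected"][:300])  # rejected conversation transcript
print(len(hh["train"]), len(hh["test"]))
```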
# Dataset: `Open-Orca/OpenOrca`
## 📝 Metadata
- **Author/Owner:** Open-Orca
- **Downloads:** 8142
- **Likes:** 1457
- **Tags:** task_categories:text-classification, task_categories:token-classification, task_categories:table-question-answering, task_categories:question-answering, task_categories:zero-shot-classification, task_categories:summarization, task_categories:feature-extraction, task_categories:text-generation, language:en, license:mit, size_categories:1M<n<10M, format:parquet, modality:text, library:datasets, library:dask, library:mlcroissant, library:polars, arxiv:2306.02707, arxiv:2301.13688, arxiv:2302.13971, region:us
- **License:** Not specified
## 📖 Description
```text
🐋 The OpenOrca Dataset! 🐋
We are thrilled to announce the release of the OpenOrca dataset!
This rich collection of augmented FLAN data aligns, as best as possible, with the distributions outlined in the Orca paper.
It has been instrumental in generating high-performing model checkpoints and serves as a valuable resource for all NLP researchers and developers!
Official Models
Mistral-7B-OpenOrca
Our latest model, the first 7B to score better overall than all… See the full description on the dataset page: https://huggingface.co/datasets/Open-Orca/OpenOrca....
```
## 📂 File System Sample
- `.gitattributes`
- `1M-GPT4-Augmented.parquet`
- `3_5M-GPT3_5-Augmented.parquet`
- `OpenOrcaLogo.png`
- `README.md`
## 📊 Data Structure
### Config: `default`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `id` | `str` |
| `system_prompt` | `str` |
| `question` | `str` |
| `response` | `str` |
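A sketch of turning the three text columns into chat-style messages; the message schema below is an illustrative assumption, not something the card prescribes:
```python
from datasets import load_dataset

orca = load_dataset("Open-Orca/OpenOrca", split="train", streaming=True)

def to_messages(row):
    # Map system_prompt/question/response onto a generic chat format.
    msgs = []
    if row["system_prompt"]:
        msgs.append({"role": "system", "content": row["system_prompt"]})
    msgs.append({"role": "user", "content": row["question"]})
    msgs.append({"role": "assistant", "content": row["response"]})
    return msgs

print(to_messages(next(iter(orca))))
```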
# Dataset: `OpenAssistant/oasst1`
## 📝 Metadata
- **Author/Owner:** OpenAssistant
- **Downloads:** 8006
- **Likes:** 1440
- **Tags:** language:en, language:es, language:ru, language:de, language:pl, language:th, language:vi, language:sv, language:bn, language:da, language:he, language:it, language:fa, language:sk, language:id, language:nb, language:el, language:nl, language:hu, language:eu, language:zh, language:eo, language:ja, language:ca, language:cs, language:bg, language:fi, language:pt, language:tr, language:ro, language:ar, language:uk, language:gl, language:fr, language:ko, license:apache-2.0, size_categories:10K<n<100K, format:parquet, modality:tabular, modality:text, library:datasets, library:pandas, library:mlcroissant, library:polars, arxiv:2304.07327, region:us, human-feedback
- **License:** Not specified
## 📖 Description
```text
OpenAssistant Conversations Dataset (OASST1)
Dataset Summary
In an effort to democratize research on large-scale alignment, we release OpenAssistant
Conversations (OASST1), a human-generated, human-annotated assistant-style conversation
corpus consisting of 161,443 messages in 35 different languages, annotated with 461,292
quality ratings, resulting in over 10,000 fully annotated conversation trees. The corpus
is a product of a worldwide crowd-sourcing effort… See the full description on the dataset page: https://huggingface.co/datasets/OpenAssistant/oasst1....
```
## 📂 File System Sample
- `.gitattributes`
- `2023-04-12_oasst_all.messages.jsonl.gz`
- `2023-04-12_oasst_all.trees.jsonl.gz`
- `2023-04-12_oasst_prompts.messages.jsonl.gz`
- `2023-04-12_oasst_ready.messages.jsonl.gz`
- `2023-04-12_oasst_ready.trees.jsonl.gz`
- `2023-04-12_oasst_spam.messages.jsonl.gz`
- `LICENSE`
- `README.md`
- `data/train-00000-of-00001-b42a775f407cee45.parquet`
- `data/validation-00000-of-00001-134b8fd0c89408b6.parquet`
## 📊 Data Structure
### Config: `default`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `message_id` | `str` |
| `parent_id` | `NoneType` |
| `user_id` | `str` |
| `created_date` | `str` |
| `text` | `str` |
| `role` | `str` |
| `lang` | `str` |
| `review_count` | `int` |
| `review_result` | `bool` |
| `deleted` | `bool` |
| `rank` | `NoneType` |
| `synthetic` | `bool` |
| `model_name` | `NoneType` |
| `detoxify` | `dict` |
| `message_tree_id` | `str` |
| `tree_state` | `str` |
| `emojis` | `dict` |
| `labels` | `dict` |
#### Split: `validation`
| Column Name | Data Type |
|---|---|
| `message_id` | `str` |
| `parent_id` | `NoneType` |
| `user_id` | `str` |
| `created_date` | `str` |
| `text` | `str` |
| `role` | `str` |
| `lang` | `str` |
| `review_count` | `int` |
| `review_result` | `bool` |
| `deleted` | `bool` |
| `rank` | `NoneType` |
| `synthetic` | `bool` |
| `model_name` | `NoneType` |
| `detoxify` | `dict` |
| `message_tree_id` | `str` |
| `tree_state` | `str` |
| `emojis` | `dict` |
| `labels` | `dict` |
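The `message_id`/`parent_id`/`message_tree_id` columns describe conversation trees; a sketch of rebuilding one tree from the smaller validation split, assuming `parent_id` is null only for root prompts:
```python
from collections import defaultdict
from datasets import load_dataset

oasst = load_dataset("OpenAssistant/oasst1", split="validation")

children = defaultdict(list)   # parent message_id -> replies
roots = []                     # messages with no parent (conversation roots)
for msg in oasst:
    if msg["parent_id"] is None:
        roots.append(msg)
    else:
        children[msg["parent_id"]].append(msg)

root = roots[0]
print(root["lang"], root["role"], root["text"][:120])
for reply in children[root["message_id"]]:
    print("  ->", reply["role"], reply["text"][:80])
```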
# Dataset: `gsdf/EasyNegative`
## 📝 Metadata
- **Author/Owner:** gsdf
- **Downloads:** 62861
- **Likes:** 1158
- **Tags:** license:other, size_categories:n<1K, format:imagefolder, modality:image, library:datasets, library:mlcroissant, region:us
- **License:** Not specified
## 📖 Description
```text
Negative Embedding
This is a Negative Embedding trained with Counterfeit. Please use it in the "\stable-diffusion-webui\embeddings" folder. It can be used with other models, but the effectiveness is not certain.
Counterfeit-V2.0.safetensors
AbyssOrangeMix2_sfw.safetensors
anything-v4.0-pruned.safetensors...
```
## 📂 File System Sample
- `.gitattributes`
- `EasyNegative.pt`
- `EasyNegative.safetensors`
- `README.md`
- `sample01.png`
- `sample02.png`
- `sample03.png`
## 📊 Data Structure
### Config: `default`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `image` | `PngImageFile` |
# Dataset: `togethercomputer/RedPajama-Data-1T`
## 📝 Metadata
- **Author/Owner:** togethercomputer
- **Downloads:** 979
- **Likes:** 1105
- **Tags:** task_categories:text-generation, language:en, size_categories:1M<n<10M, modality:text, library:datasets, library:mlcroissant, region:us
- **License:** Not specified
## 📖 Description
```text
RedPajama is a clean-room, fully open-source implementation of the LLaMa dataset....
```
## 📂 File System Sample
- `.gitattributes`
- `README.md`
- `RedPajama-Data-1T.py`
- `urls/arxiv.txt`
- `urls/c4.txt`
- `urls/common_crawl.txt`
- `urls/github.txt`
- `urls/stackexchange.txt`
- `urls/wikipedia.txt`
## 📊 Data Structure
**Graceful Failure:**
```
Could not inspect the dataset's structure.
This is common for complex datasets that require executing remote code, which is disabled for stability.
Details: Dataset scripts are no longer supported, but found RedPajama-Data-1T.py
```
# Dataset: `wikimedia/wikipedia`
## 📝 Metadata
- **Author/Owner:** wikimedia
- **Downloads:** 59399
- **Likes:** 955
- **Tags:** task_categories:text-generation, task_categories:fill-mask, task_ids:language-modeling, task_ids:masked-language-modeling, language:ab, language:ace, language:ady, language:af, language:alt, language:am, language:ami, language:an, language:ang, language:anp, language:ar, language:arc, language:ary, language:arz, language:as, language:ast, language:atj, language:av, language:avk, language:awa, language:ay, language:az, language:azb, language:ba, language:ban, language:bar, language:bbc, language:bcl, language:be, language:bg, language:bh, language:bi, language:bjn, language:blk, language:bm, language:bn, language:bo, language:bpy, language:br, language:bs, language:bug, language:bxr, language:ca, language:cbk, language:cdo, language:ce, language:ceb, language:ch, language:chr, language:chy, language:ckb, language:co, language:cr, language:crh, language:cs, language:csb, language:cu, language:cv, language:cy, language:da, language:dag, language:de, language:dga, language:din, language:diq, language:dsb, language:dty, language:dv, language:dz, language:ee, language:el, language:eml, language:en, language:eo, language:es, language:et, language:eu, language:ext, language:fa, language:fat, language:ff, language:fi, language:fj, language:fo, language:fon, language:fr, language:frp, language:frr, language:fur, language:fy, language:ga, language:gag, language:gan, language:gcr, language:gd, language:gl, language:glk, language:gn, language:gom, language:gor, language:got, language:gpe, language:gsw, language:gu, language:guc, language:gur, language:guw, language:gv, language:ha, language:hak, language:haw, language:hbs, language:he, language:hi, language:hif, language:hr, language:hsb, language:ht, language:hu, language:hy, language:hyw, language:ia, language:id, language:ie, language:ig, language:ik, language:ilo, language:inh, language:io, language:is, language:it, language:iu, language:ja, language:jam, language:jbo, language:jv, language:ka, language:kaa, language:kab, language:kbd, language:kbp, language:kcg, language:kg, language:ki, language:kk, language:kl, language:km, language:kn, language:ko, language:koi, language:krc, language:ks, language:ksh, language:ku, language:kv, language:kw, language:ky, language:la, language:lad, language:lb, language:lbe, language:lez, language:lfn, language:lg, language:li, language:lij, language:lld, language:lmo, language:ln, language:lo, language:lt, language:ltg, language:lv, language:lzh, language:mad, language:mai, language:map, language:mdf, language:mg, language:mhr, language:mi, language:min, language:mk, language:ml, language:mn, language:mni, language:mnw, language:mr, language:mrj, language:ms, language:mt, language:mwl, language:my, language:myv, language:mzn, language:nah, language:nan, language:nap, language:nds, language:ne, language:new, language:nia, language:nl, language:nn, language:no, language:nov, language:nqo, language:nrf, language:nso, language:nv, language:ny, language:oc, language:olo, language:om, language:or, language:os, language:pa, language:pag, language:pam, language:pap, language:pcd, language:pcm, language:pdc, language:pfl, language:pi, language:pih, language:pl, language:pms, language:pnb, language:pnt, language:ps, language:pt, language:pwn, language:qu, language:rm, language:rmy, language:rn, language:ro, language:ru, language:rue, language:rup, language:rw, language:sa, language:sah, language:sat, language:sc, language:scn, language:sco, language:sd, language:se, language:sg, language:sgs, language:shi, 
language:shn, language:si, language:sk, language:skr, language:sl, language:sm, language:smn, language:sn, language:so, language:sq, language:sr, language:srn, language:ss, language:st, language:stq, language:su, language:sv, language:sw, language:szl, language:szy, language:ta, language:tay, language:tcy, language:te, language:tet, language:tg, language:th, language:ti, language:tk, language:tl, language:tly, language:tn, language:to, language:tpi, language:tr, language:trv, language:ts, language:tt, language:tum, language:tw, language:ty, language:tyv, language:udm, language:ug, language:uk, language:ur, language:uz, language:ve, language:vec, language:vep, language:vi, language:vls, language:vo, language:vro, language:wa, language:war, language:wo, language:wuu, language:xal, language:xh, language:xmf, language:yi, language:yo, language:yue, language:za, language:zea, language:zgh, language:zh, language:zu, license:cc-by-sa-3.0, license:gfdl, size_categories:10M<n<100M, format:parquet, modality:text, library:datasets, library:dask, library:mlcroissant, library:polars, region:us
- **License:** Not specified
## 📖 Description
```text
Dataset Card for Wikimedia Wikipedia
Dataset Summary
Wikipedia dataset containing cleaned articles of all languages.
The dataset is built from the Wikipedia dumps (https://dumps.wikimedia.org/)
with one subset per language, each containing a single train split.
Each example contains the content of one full Wikipedia article with cleaning to strip
markdown and unwanted sections (references, etc.).
All language subsets have already been processed for recent dump, and you… See the full description on the dataset page: https://huggingface.co/datasets/wikimedia/wikipedia....
```
## 📂 File System Sample
- `.gitattributes`
- `20231101.ab/train-00000-of-00001.parquet`
- `20231101.ace/train-00000-of-00001.parquet`
- `20231101.ady/train-00000-of-00001.parquet`
- `20231101.af/train-00000-of-00001.parquet`
- `20231101.als/train-00000-of-00001.parquet`
- `20231101.alt/train-00000-of-00001.parquet`
- `20231101.am/train-00000-of-00001.parquet`
- `20231101.ami/train-00000-of-00001.parquet`
- `20231101.an/train-00000-of-00001.parquet`
- `20231101.ang/train-00000-of-00001.parquet`
- `20231101.anp/train-00000-of-00001.parquet`
- `20231101.ar/train-00000-of-00007.parquet`
- `20231101.ar/train-00001-of-00007.parquet`
- `20231101.ar/train-00002-of-00007.parquet`
- ... and more.
## 📊 Data Structure
### Config: `20231101.ab`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `id` | `str` |
| `url` | `str` |
| `title` | `str` |
| `text` | `str` |
### Config: `20231101.ace`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `id` | `str` |
| `url` | `str` |
| `title` | `str` |
| `text` | `str` |
### Config: `20231101.ady`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `id` | `str` |
| `url` | `str` |
| `title` | `str` |
| `text` | `str` |
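Configs are named `<dump-date>.<language>`, mirroring the file layout above; a sketch loading the small Abkhazian subset:
```python
from datasets import load_dataset

wiki_ab = load_dataset("wikimedia/wikipedia", "20231101.ab", split="train")

article = wiki_ab[0]
print(article["id"], article["url"])
print(article["title"])
print(article["text"][:200])
```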
# Dataset: `allenai/dolma`
## 📝 Metadata
- **Author/Owner:** allenai
- **Downloads:** 1053
- **Likes:** 950
- **Tags:** task_categories:text-generation, language:en, license:odc-by, size_categories:n>1T, arxiv:2402.00159, arxiv:2301.13688, region:us, language-modeling, casual-lm, llm
- **License:** Not specified
## 📖 Description
```text
Dolma: an Open Corpus of Three Trillion Tokens for Language Model Pretraining Research...
```
## 📂 File System Sample
- `.gitattributes`
- `.gitignore`
- `LICENSE.md`
- `README.md`
- `dolma.py`
- `urls/v1.txt`
- `urls/v1_5-sample.txt`
- `urls/v1_5.txt`
- `urls/v1_6-sample.txt`
- `urls/v1_6.txt`
- `urls/v1_7.txt`
## 📊 Data Structure
**Graceful Failure:**
```
Could not inspect the dataset's structure.
This is common for complex datasets that require executing remote code, which is disabled for stability.
Details: Dataset scripts are no longer supported, but found dolma.py
```
# Dataset: `Nerfgun3/bad_prompt`
## 📝 Metadata
- **Author/Owner:** Nerfgun3
- **Downloads:** 3413
- **Likes:** 926
- **Tags:** language:en, license:creativeml-openrail-m, size_categories:n<1K, format:imagefolder, modality:image, library:datasets, library:mlcroissant, region:us, stable-diffusion, text-to-image, image-to-image
- **License:** Not specified
## 📖 Description
```text
Negative Embedding / Textual Inversion
Idea
The idea behind this embedding was to somehow train the negative prompt as an embedding, thus unifying the basis of the negative prompt into one word or embedding.
Side note: Embedding has proven to be very helpful for the generation of hands! :)
Usage
To use this embedding you have to download the file as well as drop it into the "\stable-diffusion-webui\embeddings" folder.
Please put the embedding in the negative… See the full description on the dataset page: https://huggingface.co/datasets/Nerfgun3/bad_prompt....
```
## 📂 File System Sample
- `.gitattributes`
- `README.md`
- `bad_prompt.pt`
- `bad_prompt_showcase.jpg`
- `bad_prompt_version2.pt`
## 📊 Data Structure
### Config: `default`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `image` | `JpegImageFile` |
# Dataset: `openai/gsm8k`
## 📝 Metadata
- **Author/Owner:** openai
- **Downloads:** 392383
- **Likes:** 916
- **Tags:** annotations_creators:crowdsourced, language_creators:crowdsourced, multilinguality:monolingual, source_datasets:original, language:en, license:mit, size_categories:10K<n<100K, format:parquet, modality:text, library:datasets, library:pandas, library:mlcroissant, library:polars, arxiv:2110.14168, region:us, math-word-problems
- **License:** Not specified
## 📖 Description
```text
Dataset Card for GSM8K
Dataset Summary
GSM8K (Grade School Math 8K) is a dataset of 8.5K high quality linguistically diverse grade school math word problems. The dataset was created to support the task of question answering on basic mathematical problems that require multi-step reasoning.
These problems take between 2 and 8 steps to solve.
Solutions primarily involve performing a sequence of elementary calculations using basic arithmetic operations (+ − ×÷) to reach the… See the full description on the dataset page: https://huggingface.co/datasets/openai/gsm8k....
```
## 📂 File System Sample
- `.gitattributes`
- `README.md`
- `main/test-00000-of-00001.parquet`
- `main/train-00000-of-00001.parquet`
- `socratic/test-00000-of-00001.parquet`
- `socratic/train-00000-of-00001.parquet`
## 📊 Data Structure
### Config: `main`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `question` | `str` |
| `answer` | `str` |
#### Split: `test`
| Column Name | Data Type |
|---|---|
| `question` | `str` |
| `answer` | `str` |
### Config: `socratic`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `question` | `str` |
| `answer` | `str` |
#### Split: `test`
| Column Name | Data Type |
|---|---|
| `question` | `str` |
| `answer` | `str` |
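Both configs share the same `question`/`answer` string columns across `train` and `test`; a minimal loading sketch:
```python
from datasets import load_dataset

main = load_dataset("openai/gsm8k", "main")
socratic = load_dataset("openai/gsm8k", "socratic")

ex = main["train"][0]
print(ex["question"])
print(ex["answer"])
print(len(main["train"]), len(main["test"]))
```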
# Dataset: `FreedomIntelligence/medical-o1-reasoning-SFT`
## 📝 Metadata
- **Author/Owner:** FreedomIntelligence
- **Downloads:** 8139
- **Likes:** 914
- **Tags:** task_categories:question-answering, task_categories:text-generation, language:en, language:zh, license:apache-2.0, size_categories:10K<n<100K, format:json, modality:text, library:datasets, library:pandas, library:mlcroissant, library:polars, arxiv:2412.18925, region:us, medical, biology
- **License:** Not specified
## 📖 Description
```text
News
[2025/04/22] We split the data and kept only the medical SFT dataset (medical_o1_sft.json). The file medical_o1_sft_mix.json contains a mix of medical and general instruction data.
[2025/02/22] We released the distilled dataset from Deepseek-R1 based on medical verifiable problems. You can use it to initialize your models with the reasoning chain from Deepseek-R1.
[2024/12/25] We open-sourced the medical reasoning dataset for SFT, built on medical verifiable problems and an LLM… See the full description on the dataset page: https://huggingface.co/datasets/FreedomIntelligence/medical-o1-reasoning-SFT....
```
## 📂 File System Sample
- `.gitattributes`
- `README.md`
- `medical_o1_sft.json`
- `medical_o1_sft_Chinese.json`
- `medical_o1_sft_mix.json`
- `medical_o1_sft_mix_Chinese.json`
## 📊 Data Structure
### Config: `en`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `Question` | `str` |
| `Complex_CoT` | `str` |
| `Response` | `str` |
### Config: `zh`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `Question` | `str` |
| `Complex_CoT` | `str` |
| `Response` | `str` |
### Config: `en_mix`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `Question` | `str` |
| `Complex_CoT` | `str` |
| `Response` | `str` |
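The config name selects the language or mixed variant (`en`, `zh`, `en_mix`); each exposes the same three columns. A small sketch:
```python
from datasets import load_dataset

med = load_dataset("FreedomIntelligence/medical-o1-reasoning-SFT", "en",
                   split="train")

row = med[0]
print(row["Question"])
print(row["Complex_CoT"][:300])  # reasoning chain
print(row["Response"][:300])     # final answer
```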
# Dataset: `tiiuae/falcon-refinedweb`
## 📝 Metadata
- **Author/Owner:** tiiuae
- **Downloads:** 18764
- **Likes:** 874
- **Tags:** task_categories:text-generation, language:en, license:odc-by, size_categories:100M<n<1B, format:parquet, modality:text, library:datasets, library:dask, library:mlcroissant, library:polars, arxiv:2306.01116, arxiv:2203.15556, arxiv:2107.06499, arxiv:2104.08758, arxiv:2109.07445, arxiv:1911.00359, arxiv:2112.11446, doi:10.57967/hf/0737, region:us
- **License:** Not specified
## 📖 Description
```text
📀 Falcon RefinedWeb
Falcon RefinedWeb is a massive English web dataset built by TII and released under an ODC-By 1.0 license.
See the 📓 paper on arXiv for more details.
RefinedWeb is built through stringent filtering and large-scale deduplication of CommonCrawl; we found models trained on RefinedWeb to achieve performance in-line or better than models trained on curated datasets, while only relying on web data.
RefinedWeb is also "multimodal-friendly": it contains links and alt… See the full description on the dataset page: https://huggingface.co/datasets/tiiuae/falcon-refinedweb....
```
## 📂 File System Sample
- `.gitattributes`
- `README.md`
- `data/train-00000-of-05534-b8fc5348cbe605a5.parquet`
- `data/train-00001-of-05534-9bca3ce859516338.parquet`
- `data/train-00002-of-05534-01680948bd81de83.parquet`
- `data/train-00003-of-05534-b7806bb8ca893c23.parquet`
- `data/train-00004-of-05534-cbe1137f523084a7.parquet`
- `data/train-00005-of-05534-e4a5eae6c1419c9b.parquet`
- `data/train-00006-of-05534-5bc49be138fd315b.parquet`
- `data/train-00007-of-05534-c185da59aece723d.parquet`
- `data/train-00008-of-05534-8ed029166c6795cf.parquet`
- `data/train-00009-of-05534-f0954d1648b16e88.parquet`
- `data/train-00010-of-05534-5373970b3705bfc5.parquet`
- `data/train-00011-of-05534-7d7970b7bb98922c.parquet`
- `data/train-00012-of-05534-acf3fb5fee2b7e50.parquet`
- ... and more.
## 📊 Data Structure
### Config: `default`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `content` | `str` |
| `url` | `str` |
| `timestamp` | `datetime` |
| `dump` | `str` |
| `segment` | `str` |
| `image_urls` | `list` |
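With roughly 5,534 parquet shards (see the file names above), streaming is the practical way to peek at rows; a sketch using the columns from the table:
```python
from datasets import load_dataset

rw = load_dataset("tiiuae/falcon-refinedweb", split="train", streaming=True)

for page in rw.take(2):
    print(page["url"], page["timestamp"], page["dump"], page["segment"])
    print(page["content"][:200])
    print(page["image_urls"][:2])  # links/alt data retained, per the card
```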
# Dataset: `databricks/databricks-dolly-15k`
## 📝 Metadata
- **Author/Owner:** databricks
- **Downloads:** 17847
- **Likes:** 873
- **Tags:** task_categories:question-answering, task_categories:summarization, language:en, license:cc-by-sa-3.0, size_categories:10K<n<100K, format:json, modality:text, library:datasets, library:pandas, library:mlcroissant, library:polars, arxiv:2203.02155, region:us
- **License:** Not specified
## 📖 Description
```text
Summary
databricks-dolly-15k is an open source dataset of instruction-following records generated by thousands of Databricks employees in several
of the behavioral categories outlined in the InstructGPT paper, including brainstorming, classification,
closed QA, generation, information extraction, open QA, and summarization.
This dataset can be used for any purpose, whether academic or commercial, under the terms of the
Creative Commons Attribution-ShareAlike 3.0 Unported… See the full description on the dataset page: https://huggingface.co/datasets/databricks/databricks-dolly-15k....
```
## 📂 File System Sample
- `.gitattributes`
- `README.md`
- `databricks-dolly-15k.jsonl`
## 📊 Data Structure
### Config: `default`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `instruction` | `str` |
| `context` | `str` |
| `response` | `str` |
| `category` | `str` |
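A sketch of assembling a training prompt from the four columns; the template itself is an assumption, not something the card mandates:
```python
from datasets import load_dataset

dolly = load_dataset("databricks/databricks-dolly-15k", split="train")

def build_prompt(row):
    # Include the context passage only when it is non-empty.
    if row["context"]:
        return (f"Instruction: {row['instruction']}\n"
                f"Context: {row['context']}\n"
                f"Response: {row['response']}")
    return f"Instruction: {row['instruction']}\nResponse: {row['response']}"

print(dolly[0]["category"])
print(build_prompt(dolly[0]))
```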
# Dataset: `bigcode/the-stack`
## 📝 Metadata
- **Author/Owner:** bigcode
- **Downloads:** 35573
- **Likes:** 865
- **Tags:** task_categories:text-generation, language_creators:crowdsourced, language_creators:expert-generated, multilinguality:multilingual, language:code, license:other, size_categories:100M<n<1B, format:parquet, modality:tabular, modality:text, library:datasets, library:dask, library:mlcroissant, library:polars, arxiv:2211.15533, arxiv:2107.03374, arxiv:2207.14157, region:us
- **License:** Not specified
## 📖 Description
```text
Dataset Card for The Stack
Changelog
Release
Description
v1.0
Initial release of the Stack. Included 30 programming languages and 18 permissive licenses. Note: Three included licenses (MPL/EPL/LGPL) are considered weak copyleft licenses. The resulting near-deduplicated dataset is 3TB in size.
v1.1
The three copyleft licenses (MPL/EPL/LGPL) were excluded and the list of permissive licenses extended to 193 licenses in total. The list of programming languages… See the full description on the dataset page: https://huggingface.co/datasets/bigcode/the-stack....
```
## 📂 File System Sample
- `.gitattributes`
- `README.md`
- `data/abap/train-00000-of-00001.parquet`
- `data/actionscript/train-00000-of-00002.parquet`
- `data/actionscript/train-00001-of-00002.parquet`
- `data/ada/train-00000-of-00001.parquet`
- `data/agda/train-00000-of-00001.parquet`
- `data/ags-script/train-00000-of-00001.parquet`
- `data/alloy/train-00000-of-00001.parquet`
- `data/ampl/train-00000-of-00001.parquet`
- `data/antlr/train-00000-of-00001.parquet`
- `data/apacheconf/train-00000-of-00001.parquet`
- `data/api-blueprint/train-00000-of-00001.parquet`
- `data/apl/train-00000-of-00001.parquet`
- `data/applescript/train-00000-of-00001.parquet`
- ... and more.
## 📊 Data Structure
**Graceful Failure:**
```
Could not inspect the dataset's structure.
This is common for complex datasets that require executing remote code, which is disabled for stability.
Details: Dataset 'bigcode/the-stack' is a gated dataset on the Hub. Visit the dataset page at https://huggingface.co/datasets/bigcode/the-stack to ask for access.
```
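Since the repo is gated, loading it requires accepting the terms on the dataset page and authenticating first; a sketch assuming access has already been granted, selecting one language folder from the layout above:
```python
from huggingface_hub import login
from datasets import load_dataset

login()  # or export HF_TOKEN before running; gated access must be granted first

# Select a single language via its data directory, e.g. Ada from the listing.
ada = load_dataset("bigcode/the-stack", data_dir="data/ada", split="train")
print(len(ada))
print(ada[0].keys())
```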
# Dataset: `anon8231489123/ShareGPT_Vicuna_unfiltered`
## 📝 Metadata
- **Author/Owner:** anon8231489123
- **Downloads:** 43135
- **Likes:** 822
- **Tags:** language:en, license:apache-2.0, region:us
- **License:** Not specified
## 📖 Description
```text
Further cleaning done. Please look through the dataset and ensure that I didn't miss anything.
Update: Confirmed working method for training the model: https://huggingface.co/AlekseyKorshuk/vicuna-7b/discussions/4#64346c08ef6d5abefe42c12c
Two choices:
Removes instances of "I'm sorry, but": https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered/blob/main/ShareGPT_V3_unfiltered_cleaned_split_no_imsorry.json
Has instances of "I'm sorry, but":… See the full description on the dataset page: https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered....
```
## 📂 File System Sample
- `.gitattributes`
- `HTML_cleaned_raw_dataset/sg_90k_part1.json`
- `HTML_cleaned_raw_dataset/sg_90k_part1_html_cleaned.json`
- `HTML_cleaned_raw_dataset/sg_90k_part2.json`
- `HTML_cleaned_raw_dataset/sg_90k_part2_html_cleaned.json`
- `README.md`
- `ShareGPT_V3_unfiltered_cleaned_split.json`
- `ShareGPT_V3_unfiltered_cleaned_split_no_imsorry.json`
- `Vicuna_unfiltered_train.ipynb`
- `clean_sharegpt.py`
- `flash_attn-0.2.8-cp39-cp39-linux_x86_64.whl`
- `optional_clean.py`
- `pretty_json.py`
- `split_long_conversation.py`
## 📊 Data Structure
**Graceful Failure:**
```
Could not inspect the dataset's structure.
This is common for complex datasets that require executing remote code, which is disabled for stability.
Details: No (supported) data files found in anon8231489123/ShareGPT_Vicuna_unfiltered
```
# Dataset: `tatsu-lab/alpaca`
## 📝 Metadata
- **Author/Owner:** tatsu-lab
- **Downloads:** 39139
- **Likes:** 812
- **Tags:** task_categories:text-generation, language:en, license:cc-by-nc-4.0, size_categories:10K<n<100K, format:parquet, modality:text, library:datasets, library:pandas, library:mlcroissant, library:polars, region:us, instruction-finetuning
- **License:** Not specified
## 📖 Description
```text
Dataset Card for Alpaca
Dataset Summary
Alpaca is a dataset of 52,000 instructions and demonstrations generated by OpenAI's text-davinci-003 engine. This instruction data can be used to conduct instruction-tuning for language models and make the language model follow instruction better.
The authors built on the data generation pipeline from Self-Instruct framework and made the following modifications:
The text-davinci-003 engine to generate the instruction data instead… See the full description on the dataset page: https://huggingface.co/datasets/tatsu-lab/alpaca....
```
## 📂 File System Sample
- `.gitattributes`
- `README.md`
- `data/train-00000-of-00001-a09b74b3ef9c3b56.parquet`
## 📊 Data Structure
### Config: `default`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `instruction` | `str` |
| `input` | `str` |
| `output` | `str` |
| `text` | `str` |
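A small loading sketch; `text` appears to hold a pre-rendered prompt built from the other three columns (treat that reading as an assumption):
```python
from datasets import load_dataset

alpaca = load_dataset("tatsu-lab/alpaca", split="train")

row = alpaca[0]
print(row["instruction"])
print(row["input"])       # may be empty for instructions without extra input
print(row["output"])
print(row["text"][:300])  # combined prompt/response rendering
```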
# Dataset: `HuggingFaceFW/fineweb-edu`
## 📝 Metadata
- **Author/Owner:** HuggingFaceFW
- **Downloads:** 360780
- **Likes:** 786
- **Tags:** task_categories:text-generation, language:en, license:odc-by, size_categories:1B<n<10B, format:parquet, modality:tabular, modality:text, library:datasets, library:dask, library:mlcroissant, library:polars, arxiv:2406.17557, arxiv:2404.14219, arxiv:2401.10020, arxiv:2109.07445, doi:10.57967/hf/2497, region:us
- **License:** Not specified
## 📖 Description
```text
📚 FineWeb-Edu
1.3 trillion tokens of the finest educational data the 🌐 web has to offer
Paper: https://arxiv.org/abs/2406.17557
What is it?
📚 FineWeb-Edu dataset consists of 1.3T tokens and 5.4T tokens (FineWeb-Edu-score-2) of educational web pages filtered from 🍷 FineWeb dataset. This is the 1.3 trillion version.
To enhance FineWeb's quality, we developed an educational quality classifier using annotations generated by LLama3-70B-Instruct. We then… See the full description on the dataset page: https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu....
```
## 📂 File System Sample
- `.gitattributes`
- `README.md`
- `data/CC-MAIN-2013-20/train-00000-of-00014.parquet`
- `data/CC-MAIN-2013-20/train-00001-of-00014.parquet`
- `data/CC-MAIN-2013-20/train-00002-of-00014.parquet`
- `data/CC-MAIN-2013-20/train-00003-of-00014.parquet`
- `data/CC-MAIN-2013-20/train-00004-of-00014.parquet`
- `data/CC-MAIN-2013-20/train-00005-of-00014.parquet`
- `data/CC-MAIN-2013-20/train-00006-of-00014.parquet`
- `data/CC-MAIN-2013-20/train-00007-of-00014.parquet`
- `data/CC-MAIN-2013-20/train-00008-of-00014.parquet`
- `data/CC-MAIN-2013-20/train-00009-of-00014.parquet`
- `data/CC-MAIN-2013-20/train-00010-of-00014.parquet`
- `data/CC-MAIN-2013-20/train-00011-of-00014.parquet`
- `data/CC-MAIN-2013-20/train-00012-of-00014.parquet`
- ... and more.
## 📊 Data Structure
### Config: `default`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `text` | `str` |
| `id` | `str` |
| `dump` | `str` |
| `url` | `str` |
| `date` | `NoneType` |
| `file_path` | `str` |
| `language` | `str` |
| `language_score` | `float` |
| `token_count` | `int` |
| `score` | `float` |
| `int_score` | `int` |
### Config: `sample-10BT`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `text` | `str` |
| `id` | `str` |
| `dump` | `str` |
| `url` | `str` |
| `file_path` | `str` |
| `language` | `str` |
| `language_score` | `float` |
| `token_count` | `int` |
| `score` | `float` |
| `int_score` | `int` |
### Config: `sample-100BT`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `text` | `str` |
| `id` | `str` |
| `dump` | `str` |
| `url` | `str` |
| `file_path` | `str` |
| `language` | `str` |
| `language_score` | `float` |
| `token_count` | `int` |
| `score` | `float` |
| `int_score` | `int` |
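The `score`/`int_score` columns come from the educational-quality classifier mentioned in the description, so they can be used to filter while streaming; the threshold below is purely illustrative:
```python
from datasets import load_dataset

edu = load_dataset("HuggingFaceFW/fineweb-edu", name="sample-10BT",
                   split="train", streaming=True)

# Keep only documents with a high integer quality score (illustrative cutoff).
high_quality = edu.filter(lambda doc: doc["int_score"] >= 4)

for doc in high_quality.take(3):
    print(doc["score"], doc["token_count"], doc["url"])
```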
# Dataset: `roneneldan/TinyStories`
## 📝 Metadata
- **Author/Owner:** roneneldan
- **Downloads:** 44344
- **Likes:** 769
- **Tags:** task_categories:text-generation, language:en, license:cdla-sharing-1.0, size_categories:1M<n<10M, format:parquet, modality:text, library:datasets, library:dask, library:mlcroissant, library:polars, arxiv:2305.07759, region:us
- **License:** Not specified
## 📖 Description
```text
Dataset containing synthetically generated (by GPT-3.5 and GPT-4) short stories that only use a small vocabulary.
Described in the following paper: https://arxiv.org/abs/2305.07759.
The models referred to in the paper were trained on TinyStories-train.txt (the file tinystories-valid.txt can be used for validation loss). These models can be found on Huggingface, at roneneldan/TinyStories-1M/3M/8M/28M/33M/1Layer-21M.
Additional resources:
tinystories_all_data.tar.gz - contains a superset of… See the full description on the dataset page: https://huggingface.co/datasets/roneneldan/TinyStories....
```
## 📂 File System Sample
- `.gitattributes`
- `Evaluation prompts.yaml`
- `MLP_input_output.npy`
- `README.md`
- `TinyStories-train.txt`
- `TinyStories-valid.txt`
- `TinyStoriesV2-GPT4-train.txt`
- `TinyStoriesV2-GPT4-valid.txt`
- `TinyStories_all_data.tar.gz`
- `data/train-00000-of-00004-2d5a1467fff1081b.parquet`
- `data/train-00001-of-00004-5852b56a2bd28fd9.parquet`
- `data/train-00002-of-00004-a26307300439e943.parquet`
- `data/train-00003-of-00004-d243063613e5a057.parquet`
- `data/validation-00000-of-00001-869c898b519ad725.parquet`
## 📊 Data Structure
### Config: `default`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `text` | `str` |
#### Split: `validation`
| Column Name | Data Type |
|---|---|
| `text` | `str` |
# Dataset: `open-thoughts/OpenThoughts-114k`
## 📝 Metadata
- **Author/Owner:** open-thoughts
- **Downloads:** 42796
- **Likes:** 766
- **Tags:** license:apache-2.0, size_categories:100K<n<1M, format:parquet, modality:text, library:datasets, library:dask, library:mlcroissant, library:polars, arxiv:2506.04178, region:us, curator, synthetic
- **License:** Not specified
## 📖 Description
```text
[!NOTE]
We have released a paper for OpenThoughts! See our paper here.
Open-Thoughts-114k
Open synthetic reasoning dataset with 114k high-quality examples covering math, science, code, and puzzles!
Inspect the content with rich formatting with Curator Viewer.
Available Subsets
default subset containing ready-to-train data used to finetune the OpenThinker-7B and OpenThinker-32B models:
ds = load_dataset("open-thoughts/OpenThoughts-114k", split="train")… See the full description on the dataset page: https://huggingface.co/datasets/open-thoughts/OpenThoughts-114k....
```
## 📂 File System Sample
- `.gitattributes`
- `README.md`
- `data/train-00000-of-00006.parquet`
- `data/train-00001-of-00006.parquet`
- `data/train-00002-of-00006.parquet`
- `data/train-00003-of-00006.parquet`
- `data/train-00004-of-00006.parquet`
- `data/train-00005-of-00006.parquet`
- `diagram.png`
- `diagram_dark.png`
- `metadata/train-00000-of-00012.parquet`
- `metadata/train-00001-of-00012.parquet`
- `metadata/train-00002-of-00012.parquet`
- `metadata/train-00003-of-00012.parquet`
- `metadata/train-00004-of-00012.parquet`
- ... and more.
## 📊 Data Structure
### Config: `default`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `system` | `str` |
| `conversations` | `list` |
### Config: `metadata`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `problem` | `str` |
| `deepseek_reasoning` | `str` |
| `deepseek_solution` | `str` |
| `ground_truth_solution` | `str` |
| `domain` | `str` |
| `source` | `str` |
| `test_cases` | `NoneType` |
| `starter_code` | `NoneType` |
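The description already shows the one-liner for the `default` subset; extending it slightly, the `metadata` config can be read the same way (streamed here because of its size):
```python
from datasets import load_dataset

ot = load_dataset("open-thoughts/OpenThoughts-114k", split="train")
meta = load_dataset("open-thoughts/OpenThoughts-114k", "metadata",
                    split="train", streaming=True)

sample = ot[0]
print(sample["system"][:200])
print(sample["conversations"][0])  # first chat turn in the conversation list

m = next(iter(meta))
print(m["domain"], m["source"])
print(m["problem"][:200])
```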
# Dataset: `teknium/OpenHermes-2.5`
## 📝 Metadata
- **Author/Owner:** teknium
- **Downloads:** 2983
- **Likes:** 762
- **Tags:** language:eng, size_categories:1M<n<10M, format:json, modality:text, library:datasets, library:pandas, library:mlcroissant, library:polars, region:us, synthetic, GPT-4, Distillation, Compilation
- **License:** Not specified
## 📖 Description
```text
Dataset Card for Dataset Name
This is the dataset that made OpenHermes 2.5 and Nous Hermes 2 series of models.
Support me on GitHub sponsors <3 : https://github.com/sponsors/teknium1
Dataset Details
Dataset Description
The Open Hermes 2/2.5 and Nous Hermes 2 models have made significant advancements of SOTA LLM's over recent months, and are underpinned by this exact compilation and curation of many open source datasets and custom created synthetic datasets.… See the full description on the dataset page: https://huggingface.co/datasets/teknium/OpenHermes-2.5....
```
## 📂 File System Sample
- `.gitattributes`
- `README.md`
- `openhermes2_5.json`
## 📊 Data Structure
### Config: `default`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `custom_instruction` | `NoneType` |
| `topic` | `NoneType` |
| `model_name` | `NoneType` |
| `model` | `NoneType` |
| `skip_prompt_formatting` | `bool` |
| `category` | `str` |
| `conversations` | `list` |
| `views` | `NoneType` |
| `language` | `NoneType` |
| `id` | `NoneType` |
| `title` | `NoneType` |
| `idx` | `NoneType` |
| `hash` | `NoneType` |
| `avatarUrl` | `NoneType` |
| `system_prompt` | `NoneType` |
| `source` | `str` |
# Dataset: `lmsys/lmsys-chat-1m`
## 📝 Metadata
- **Author/Owner:** lmsys
- **Downloads:** 9564
- **Likes:** 749
- **Tags:** size_categories:1M<n<10M, format:parquet, modality:text, library:datasets, library:dask, library:mlcroissant, library:polars, arxiv:2309.11998, region:us
- **License:** Not specified
## 📖 Description
```text
LMSYS-Chat-1M: A Large-Scale Real-World LLM Conversation Dataset
This dataset contains one million real-world conversations with 25 state-of-the-art LLMs.
It is collected from 210K unique IP addresses in the wild on the Vicuna demo and Chatbot Arena website from April to August 2023.
Each sample includes a conversation ID, model name, conversation text in OpenAI API JSON format, detected language tag, and OpenAI moderation API tag.
User consent is obtained through the "Terms of use"… See the full description on the dataset page: https://huggingface.co/datasets/lmsys/lmsys-chat-1m....
```
## 📂 File System Sample
- `.gitattributes`
- `README.md`
- `data/train-00000-of-00006-4feeb3f83346a0e9.parquet`
- `data/train-00001-of-00006-4030672591c2f478.parquet`
- `data/train-00002-of-00006-1779b7cec9462180.parquet`
- `data/train-00003-of-00006-2fa862bfed56af1f.parquet`
- `data/train-00004-of-00006-18f4bdd50c103e71.parquet`
- `data/train-00005-of-00006-fe1acc5d10a9f0e2.parquet`
## 📊 Data Structure
**Graceful Failure:**
```
Could not inspect the dataset's structure.
This is common for complex datasets that require executing remote code, which is disabled for stability.
Details: Dataset 'lmsys/lmsys-chat-1m' is a gated dataset on the Hub. Visit the dataset page at https://huggingface.co/datasets/lmsys/lmsys-chat-1m to ask for access.
```
# Dataset: `QingyiSi/Alpaca-CoT`
## 📝 Metadata
- **Author/Owner:** QingyiSi
- **Downloads:** 7634
- **Likes:** 737
- **Tags:** language:en, language:zh, language:ml, license:apache-2.0, region:us, Instruction, Cot
- **License:** Not specified
## 📖 Description
```text
Instruction-Finetuning Dataset Collection (Alpaca-CoT)
This repository will continuously collect various instruction tuning datasets. And we standardize different datasets into the same format, which can be directly loaded by the code of Alpaca model.
We also have conducted empirical study on various instruction-tuning datasets based on the Alpaca model, as shown in https://github.com/PhoebusSi/alpaca-CoT.
If you think this dataset collection is helpful to you, please like this… See the full description on the dataset page: https://huggingface.co/datasets/QingyiSi/Alpaca-CoT....
```
## 📂 File System Sample
- `.gitattributes`
- `Auto-CoT/Auto-CoT.json`
- `Auto-CoT/Auto.json`
- `COIG/counterfactural_correction_multi_round_chat.json`
- `COIG/counterfactural_correction_multi_round_chat_context.json`
- `COIG/exam.json`
- `COIG/human-value-alignment_common.json`
- `COIG/human-value-alignment_special.json`
- `COIG/leetcode.json`
- `COIG/translate_en.json`
- `COIG/translate_zh.json`
- `CSL/csl.json`
- `Chain-of-Thought/CoT_Chinese_data.json`
- `Chain-of-Thought/CoT_data.json`
- `Chain-of-Thought/README.md`
- ... and more.
## 📊 Data Structure
### Config: `default`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `instruction` | `str` |
| `input` | `str` |
| `output` | `str` |
#### Split: `validation`
| Column Name | Data Type |
|---|---|
| `instruction` | `str` |
| `input` | `str` |
| `output` | `str` |
#### Split: `test`
| Column Name | Data Type |
|---|---|
| `instruction` | `str` |
| `input` | `str` |
| `output` | `str` |
# Dataset: `yahma/alpaca-cleaned`
## 📝 Metadata
- **Author/Owner:** yahma
- **Downloads:** 27344
- **Likes:** 736
- **Tags:** task_categories:text-generation, language:en, license:cc-by-4.0, size_categories:10K<n<100K, format:json, modality:text, library:datasets, library:pandas, library:mlcroissant, library:polars, region:us, instruction-finetuning
- **License:** Not specified
## 📖 Description
```text
Dataset Card for Alpaca-Cleaned
Repository: https://github.com/gururise/AlpacaDataCleaned
Dataset Description
This is a cleaned version of the original Alpaca Dataset released by Stanford. The following issues have been identified in the original release and fixed in this dataset:
Hallucinations: Many instructions in the original dataset had instructions referencing data on the internet, which just caused GPT3 to hallucinate an answer.
"instruction":"Summarize the… See the full description on the dataset page: https://huggingface.co/datasets/yahma/alpaca-cleaned....
```
## 📂 File System Sample
- `.gitattributes`
- `README.md`
- `alpaca_data_cleaned.json`
## 📊 Data Structure
### Config: `default`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `output` | `str` |
| `input` | `str` |
| `instruction` | `str` |
# Dataset: `Congliu/Chinese-DeepSeek-R1-Distill-data-110k`
## 📝 Metadata
- **Author/Owner:** Congliu
- **Downloads:** 716
- **Likes:** 703
- **Tags:** task_categories:text-generation, task_categories:question-answering, language:zh, license:apache-2.0, size_categories:100K<n<1M, format:json, modality:tabular, modality:text, library:datasets, library:pandas, library:mlcroissant, library:polars, region:us
- **License:** Not specified
## 📖 Description
```text
Chinese dataset distilled from the full-strength DeepSeek-R1 (Chinese-Data-Distill-From-R1)
🤗 Hugging Face | 🤖 ModelScope | 🚀 Github | 📑 Blog
Note: a version ready for direct SFT use is also provided (click to download). It merges the reasoning trace and the answer of each record into a single output field, so most SFT training frameworks can load and train on it directly.
This is an open-source Chinese dataset distilled from the full-strength R1. It contains not only math data but also a large amount of general-domain data, 110K samples in total.
Why open-source this data?
R1 is extremely strong, and small models SFT-ed on R1-distilled data also show strong performance, yet a survey found that most open-source R1 distillation datasets are in English. Meanwhile, R1's report shows that some general-domain data was also used for the distilled models.
To help the community better reproduce the performance of R1-distilled models, this Chinese dataset is released. Its data distribution is as follows:
Math: 36,568 samples in total,
Exam: 2,432 samples in total,
STEM: 12,648 samples in total,… See the full description on the dataset page: https://huggingface.co/datasets/Congliu/Chinese-DeepSeek-R1-Distill-data-110k....
```
## 📂 File System Sample
- `.gitattributes`
- `README.md`
- `distill_r1_110k.jsonl`
## 📊 Data Structure
### Config: `default`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `input` | `str` |
| `content` | `str` |
| `reasoning_content` | `str` |
| `repo_name` | `str` |
| `prompt_tokens_len` | `int` |
| `content_tokens_len` | `int` |
| `reasoning_content_tokens_len` | `int` |
| `score` | `int` |
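The description notes that the ready-for-SFT variant merges the reasoning trace and the answer into one output field, while the raw split shown above keeps them separate as `reasoning_content` and `content`. A minimal sketch of doing that merge yourself; the `<think>` wrapper is an assumed convention, not a format mandated by the dataset:

```python
from datasets import load_dataset

ds = load_dataset("Congliu/Chinese-DeepSeek-R1-Distill-data-110k", split="train")

def merge_fields(example: dict) -> dict:
    # Combine the reasoning trace and the final answer into one SFT target.
    # Wrapping the trace in <think> tags is an assumption.
    example["output"] = (
        f"<think>\n{example['reasoning_content']}\n</think>\n{example['content']}"
    )
    return example

sft_ds = ds.map(merge_fields)
print(sft_ds[0]["input"])
print(sft_ds[0]["output"][:200])
```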
# Dataset: `HuggingFaceFW/fineweb-2`
## 📝 Metadata
- **Author/Owner:** HuggingFaceFW
- **Downloads:** 111209
- **Likes:** 676
- **Tags:** task_categories:text-generation, language:aai, language:aak, language:aau, language:aaz, language:aba, language:abi, language:abk, language:abn, language:abq, language:abs, language:abt, language:abx, language:aby, language:abz, language:aca, language:acd, language:ace, language:acf, language:ach, language:acm, language:acn, language:acr, language:acu, language:ada, language:ade, language:adh, language:adi, language:adj, language:adl, language:ady, language:adz, language:aeb, language:aer, language:aeu, language:aey, language:afr, language:agd, language:agg, language:agm, language:agn, language:agr, language:agt, language:agu, language:agw, language:agx, language:aha, language:ahk, language:aia, language:aii, language:aim, language:ain, language:ajg, language:aji, language:ajz, language:akb, language:ake, language:akh, language:akp, language:alj, language:aln, language:alp, language:alq, language:als, language:alt, language:aly, language:alz, language:ame, language:amf, language:amh, language:ami, language:amk, language:amm, language:amn, language:amp, language:amr, language:amu, language:amx, language:ang, language:anm, language:ann, language:anp, language:anv, language:any, language:aoi, language:aoj, language:aom, language:aoz, language:apb, language:apc, language:ape, language:apn, language:apr, language:apt, language:apu, language:apw, language:apy, language:apz, language:arb, language:are, language:arg, language:arl, language:arn, language:arp, language:arq, language:ars, language:ary, language:arz, language:asg, language:asm, language:aso, language:ast, language:ata, language:atb, language:atd, language:atg, language:ati, language:atj, language:atq, language:att, language:auc, language:aui, language:auy, language:ava, language:avk, language:avn, language:avt, language:avu, language:awa, language:awb, language:awx, language:ayo, language:ayp, language:ayr, language:azb, language:azg, language:azj, language:azz, language:bak, language:bam, language:ban, language:bao, language:bar, language:bas, language:bav, language:bba, language:bbb, language:bbc, language:bbj, language:bbk, language:bbo, language:bbr, language:bch, language:bci, language:bcl, language:bco, language:bcw, language:bdd, language:bdh, language:bdq, language:bea, language:bef, language:bel, language:bem, language:ben, language:beq, language:bew, language:bex, language:bfd, language:bfo, language:bgr, language:bgs, language:bgt, language:bgz, language:bhg, language:bhl, language:bho, language:bhp, language:bhw, language:bhz, language:bib, language:big, language:bim, language:bin, language:bis, language:biu, language:biv, language:bjn, language:bjp, language:bjr, language:bjv, language:bkd, language:bkl, language:bkq, language:bku, language:bkv, language:bla, language:blh, language:blk, language:blw, language:blz, language:bmh, language:bmk, language:bmq, language:bmr, language:bmu, language:bmv, language:bno, language:bnp, language:boa, language:bod, language:boj, language:bom, language:bon, language:bos, language:bov, language:box, language:bpr, language:bps, language:bpy, language:bqc, language:bqj, language:bqp, language:bre, language:brh, language:bru, language:brx, language:bsc, language:bsn, language:bsp, language:bsq, language:bss, language:btd, language:bth, language:bts, language:btt, language:btx, language:bud, language:bug, language:buk, language:bul, language:bum, language:bus, language:bvc, language:bvd, language:bvr, language:bvz, language:bwd, language:bwi, language:bwq, language:bwu, 
language:bxh, language:bxr, language:byr, language:byv, language:byx, language:bzd, language:bzh, language:bzi, language:bzj, language:caa, language:cab, language:cac, language:caf, language:cag, language:cak, language:cao, language:cap, language:caq, language:car, language:cas, language:cat, language:cav, language:cax, language:cbc, language:cbi, language:cbk, language:cbr, language:cbs, language:cbt, language:cbu, language:cbv, language:cce, language:cco, language:ccp, language:ceb, language:ceg, language:cek, language:ces, language:cfm, language:cgc, language:cgg, language:cha, language:chd, language:che, language:chf, language:chj, language:chk, language:cho, language:chq, language:chr, language:chu, language:chv, language:chw, language:chz, language:cjk, language:cjo, language:cjp, language:cjs, language:cjv, language:ckb, language:cko, language:ckt, language:cle, language:clu, language:cly, language:cme, language:cmn, language:cmo, language:cmr, language:cnh, language:cni, language:cnk, language:cnl, language:cnt, language:cnw, language:coe, language:cof, language:cok, language:con, language:cop, language:cor, language:cos, language:cot, language:cou, language:cpa, language:cpb, language:cpc, language:cpu, language:cpy, language:crh, language:crj, language:crk, language:crl, language:crm, language:crn, language:crs, language:crt, language:crx, language:csb, language:csk, language:cso, language:csw, language:csy, language:cta, language:ctd, language:cto, language:ctp, language:ctu, language:cub, language:cuc, language:cui, language:cuk, language:cul, language:cut, language:cux, language:cwe, language:cwt, language:cya, language:cym, language:czt, language:daa, language:dad, language:daf, language:dag, language:dah, language:dak, language:dan, language:dar, language:ddg, language:ddn, language:ded, language:des, language:deu, language:dga, language:dgc, language:dgi, language:dgr, language:dgz, language:dhg, language:dhm, language:dhv, language:did, language:dig, language:dik, language:diq, language:dis, language:diu, language:div, language:dje, language:djk, language:djr, language:dks, language:dln, language:dng, language:dnj, language:dnw, language:dob, language:doi, language:dop, language:dos, language:dow, language:drg, language:dru, language:dsb, language:dtb, language:dtp, language:dts, language:dty, language:dua, language:due, language:dug, language:duo, language:dur, language:dwr, language:dww, language:dyi, language:dyo, language:dyu, language:dzo, language:ebk, language:efi, language:eka, language:ekk, language:eko, language:ell, language:emi, language:eml, language:emp, language:enb, language:enl, language:enm, language:enq, language:enx, language:epo, language:eri, language:ese, language:esi, language:esk, language:ess, language:esu, language:eto, language:etr, language:etu, language:eus, language:eve, language:ewe, language:ewo, language:ext, language:eza, language:faa, language:fad, language:fai, language:fal, language:fan, language:fao, language:far, language:fas, language:fat, language:ffm, language:fij, language:fil, language:fin, language:fit, language:fkv, language:fmu, language:fon, language:for, language:fra, language:frd, language:fro, language:frp, language:frr, language:fry, language:fub, language:fud, language:fue, language:fuf, language:fuh, language:fuq, language:fur, language:fuv, language:gaa, language:gag, language:gah, language:gai, language:gam, language:gaw, language:gaz, language:gbi, language:gbo, language:gbr, language:gcf, language:gcr, 
language:gde, language:gdg, language:gdn, language:gdr, language:geb, language:gej, language:gfk, language:ghs, language:gid, language:gil, language:giz, language:gjn, language:gkn, language:gla, language:gle, language:glg, language:glk, language:glv, language:gmh, language:gmv, language:gna, language:gnb, language:gnd, language:gng, language:gnn, language:gnw, language:goa, language:gof, language:gog, language:goh, language:gom, language:gor, language:gos, language:got, language:gqr, language:grc, language:grt, language:gso, language:gsw, language:gub, language:guc, language:gud, language:gug, language:guh, language:gui, language:guj, language:guk, language:gul, language:gum, language:gun, language:guo, language:guq, language:gur, language:guu, language:guw, language:gux, language:guz, language:gvc, language:gvf, language:gvl, language:gvn, language:gwi, language:gwr, language:gya, language:gym, language:gyr, language:hac, language:hae, language:hag, language:hak, language:hat, language:hau, language:hav, language:haw, language:hay, language:hbo, language:hch, language:heb, language:heg, language:heh, language:her, language:hif, language:hig, language:hil, language:hin, language:hix, language:hla, language:hmo, language:hmr, language:hne, language:hnj, language:hnn, language:hns, language:hop, language:hot, language:hra, language:hrv, language:hrx, language:hsb, language:hto, language:hub, language:hui, language:hun, language:hus, language:huu, language:huv, language:hvn, language:hwc, language:hye, language:hyw, language:ian, language:iba, language:ibg, language:ibo, language:icr, language:ido, language:idu, language:ifa, language:ifb, language:ife, language:ifk, language:ifu, language:ify, language:ige, language:ign, language:ike, language:ikk, language:ikt, language:ikw, language:ilb, language:ile, language:ilo, language:imo, language:ina, language:inb, language:ind, language:inh, language:ino, language:iou, language:ipi, language:iqw, language:iri, language:irk, language:iry, language:isd, language:ish, language:isl, language:iso, language:ita, language:itv, language:ium, language:ivb, language:ivv, language:iws, language:ixl, language:izr, language:izz, language:jaa, language:jac, language:jae, language:jam, language:jav, language:jbo, language:jbu, language:jic, language:jiv, language:jmc, language:jpn, language:jra, language:jun, language:jvn, language:kaa, language:kab, language:kac, language:kak, language:kal, language:kam, language:kan, language:kao, language:kaq, language:kas, language:kat, language:kaz, language:kbc, language:kbd, language:kbh, language:kbm, language:kbo, language:kbp, language:kbq, language:kbr, language:kby, language:kca, language:kcg, language:kck, language:kdc, language:kde, language:kdh, language:kdi, language:kdj, language:kdl, language:kdr, language:kea, language:kei, language:kek, language:ken, language:keo, language:ker, language:kew, language:kez, language:kff, language:kgf, language:kgk, language:kgp, language:kgr, language:kha, language:khk, language:khm, language:khs, language:khz, language:kia, language:kij, language:kik, language:kin, language:kir, language:kiu, language:kix, language:kjb, language:kje, language:kjh, language:kjs, language:kkc, language:kki, language:kkj, language:kkl, language:kle, language:klt, language:klv, language:kmb, language:kmg, language:kmh, language:kmk, language:kmm, language:kmo, language:kmr, language:kms, language:kmu, language:kmy, language:knc, language:kne, language:knf, language:kng, language:knj, 
language:knk, language:kno, language:knv, language:knx, language:kny, language:kog, language:koi, language:koo, language:kor, language:kos, language:kpe, language:kpf, language:kpg, language:kpj, language:kpq, language:kpr, language:kpv, language:kpw, language:kpx, language:kpz, language:kqc, language:kqe, language:kqf, language:kql, language:kqn, language:kqo, language:kqp, language:kqs, language:kqw, language:kqy, language:krc, language:kri, language:krj, language:krl, language:kru, language:krx, language:ksb, language:ksc, language:ksd, language:ksf, language:ksh, language:ksj, language:ksp, language:ksr, language:kss, language:ksw, language:ktb, language:ktj, language:ktm, language:kto, language:ktu, language:ktz, language:kua, language:kub, language:kud, language:kue, language:kuj, language:kum, language:kup, language:kus, language:kvg, language:kvj, language:kvn, language:kwd, language:kwf, language:kwi, language:kwj, language:kwn, language:kwy, language:kxc, language:kxm, language:kxw, language:kyc, language:kyf, language:kyg, language:kyq, language:kyu, language:kyz, language:kze, language:kzf, language:kzj, language:lac, language:lad, language:lai, language:laj, language:lam, language:lao, language:lap, language:lat, language:lbb, language:lbe, language:lbj, language:lbk, language:lcm, language:lcp, language:ldi, language:ldn, language:lee, language:lef, language:leh, language:lem, language:leu, language:lew, language:lex, language:lez, language:lfn, language:lgg, language:lgl, language:lgm, language:lhi, language:lhu, language:lia, language:lid, language:lif, language:lij, language:lim, language:lin, language:lip, language:lis, language:lit, language:liv, language:ljp, language:lki, language:llb, language:lld, language:llg, language:lln, language:lmk, language:lmo, language:lmp, language:lnd, language:lob, language:loe, language:log, language:lok, language:lol, language:lom, language:loq, language:loz, language:lrc, language:lsi, language:lsm, language:ltg, language:ltz, language:lua, language:lub, language:luc, language:lud, language:lue, language:lug, language:lun, language:luo, language:lus, language:lvs, language:lwg, language:lwo, language:lww, language:lzh, language:maa, language:mad, language:maf, language:mag, language:mah, language:mai, language:maj, language:mak, language:mal, language:mam, language:maq, language:mar, language:mas, language:mau, language:mav, language:maw, language:maz, language:mbb, language:mbc, language:mbd, language:mbf, language:mbh, language:mbi, language:mbj, language:mbl, language:mbs, language:mbt, language:mca, language:mcb, language:mcd, language:mcf, language:mck, language:mcn, language:mco, language:mcp, language:mcq, language:mcu, language:mda, language:mdf, language:mdy, language:med, language:mee, language:mej, language:mek, language:men, language:meq, language:mer, language:met, language:meu, language:mev, language:mfe, language:mfg, language:mfh, language:mfi, language:mfk, language:mfq, language:mfy, language:mfz, language:mgc, language:mgh, language:mgo, language:mgr, language:mhi, language:mhl, language:mhr, language:mhw, language:mhx, language:mhy, language:mib, language:mic, language:mie, language:mif, language:mig, language:mih, language:mil, language:mim, language:min, language:mio, language:mip, language:miq, language:mir, language:mit, language:miy, language:miz, language:mjc, language:mjw, language:mkd, language:mkl, language:mkn, language:mks, language:mkz, language:mlh, language:mlp, language:mlt, language:mlu, 
language:mmn, language:mmo, language:mmx, language:mna, language:mnb, language:mnf, language:mni, language:mnk, language:mns, language:mnw, language:mnx, language:mny, language:moa, language:moc, language:mog, language:moh, language:mop, language:mor, language:mos, language:mox, language:mpg, language:mph, language:mpm, language:mpp, language:mps, language:mpt, language:mpx, language:mqb, language:mqj, language:mqy, language:mrg, language:mri, language:mrj, language:mrq, language:mrv, language:mrw, language:msb, language:msc, language:mse, language:msk, language:msy, language:mta, language:mtg, language:mti, language:mto, language:mtp, language:mua, language:mug, language:muh, language:mui, language:mup, language:mur, language:mus, language:mux, language:muy, language:mva, language:mvn, language:mvp, language:mwc, language:mwf, language:mwl, language:mwm, language:mwn, language:mwp, language:mwq, language:mwv, language:mww, language:mxb, language:mxp, language:mxq, language:mxt, language:mxv, language:mya, language:myb, language:myk, language:myu, language:myv, language:myw, language:myx, language:myy, language:mza, language:mzh, language:mzk, language:mzl, language:mzm, language:mzn, language:mzw, language:mzz, language:nab, language:naf, language:nah, language:nak, language:nan, language:nap, language:naq, language:nas, language:nav, language:naw, language:nba, language:nbc, language:nbe, language:nbl, language:nbq, language:nbu, language:nca, language:nch, language:ncj, language:ncl, language:ncq, language:nct, language:ncu, language:ncx, language:ndc, language:nde, language:ndh, language:ndi, language:ndj, language:ndo, language:nds, language:ndz, language:neb, language:new, language:nfa, language:nfr, language:ngb, language:ngc, language:ngl, language:ngp, language:ngu, language:nhd, language:nhe, language:nhg, language:nhi, language:nhk, language:nho, language:nhr, language:nhu, language:nhw, language:nhx, language:nhy, language:nia, language:nif, language:nii, language:nij, language:nim, language:nin, language:nio, language:niu, language:niy, language:njb, language:njm, language:njn, language:njo, language:njz, language:nkf, language:nko, language:nld, language:nlg, language:nma, language:nmf, language:nmh, language:nmo, language:nmw, language:nmz, language:nnb, language:nng, language:nnh, language:nnl, language:nno, language:nnp, language:nnq, language:nnw, language:noa, language:nob, language:nod, language:nog, language:non, language:nop, language:not, language:nou, language:nov, language:nph, language:npi, language:npl, language:npo, language:npy, language:nqo, language:nre, language:nrf, language:nri, language:nrm, language:nsa, language:nse, language:nsm, language:nsn, language:nso, language:nss, language:nst, language:nsu, language:ntp, language:ntr, language:ntu, language:nuj, language:nus, language:nuy, language:nvm, language:nwb, language:nwi, language:nwx, language:nxd, language:nya, language:nyf, language:nyk, language:nyn, language:nyo, language:nyu, language:nyy, language:nza, language:nzi, language:nzm, language:obo, language:oci, language:ogo, language:ojb, language:oke, language:oku, language:okv, language:old, language:olo, language:omb, language:omw, language:ong, language:ons, language:ood, language:opm, language:orv, language:ory, language:oss, language:ota, language:otd, language:ote, language:otm, language:otn, language:oto, language:otq, language:ots, language:otw, language:oym, language:ozm, language:pab, language:pad, language:pag, language:pah, 
language:pam, language:pan, language:pao, language:pap, language:pau, language:pbb, language:pbc, language:pbi, language:pbt, language:pcd, language:pck, language:pcm, language:pdc, language:pdt, language:pem, language:pfe, language:pfl, language:phm, language:pib, language:pio, language:pir, language:pis, language:pjt, language:pkb, language:plg, language:pls, language:plt, language:plu, language:plw, language:pma, language:pmf, language:pmq, language:pms, language:pmx, language:pnb, language:pne, language:pnt, language:pny, language:poe, language:poh, language:poi, language:pol, language:pon, language:por, language:pos, language:pot, language:pov, language:poy, language:ppk, language:ppo, language:pps, language:prf, language:prg, language:pri, language:prq, language:pse, language:pss, language:ptp, language:ptu, language:pui, language:pwg, language:pwn, language:pww, language:pxm, language:qub, language:quc, language:quf, language:qug, language:quh, language:qul, language:qup, language:qus, language:quw, language:quy, language:quz, language:qva, language:qvc, language:qve, language:qvh, language:qvi, language:qvm, language:qvn, language:qvo, language:qvs, language:qvw, language:qvz, language:qwh, language:qxh, language:qxl, language:qxn, language:qxo, language:qxr, language:rad, language:rai, language:rap, language:rar, language:rav, language:raw, language:rcf, language:rej, language:rel, language:rgu, language:rhg, language:ria, language:rim, language:rjs, language:rkb, language:rmc, language:rme, language:rml, language:rmn, language:rmo, language:rmq, language:rmy, language:rnd, language:rng, language:rnl, language:roh, language:ron, language:roo, language:rop, language:row, language:rro, language:rtm, language:rub, language:rue, language:ruf, language:rug, language:run, language:rup, language:rus, language:rwo, language:sab, language:sag, language:sah, language:san, language:sas, language:sat, language:sba, language:sbd, language:sbe, language:sbl, language:sbs, language:sby, language:sck, language:scn, language:sco, language:sda, language:sdc, language:sdh, language:sdo, language:sdq, language:seh, language:ses, language:sey, language:sfw, language:sgb, language:sgc, language:sgh, language:sgs, language:sgw, language:sgz, language:shi, language:shk, language:shn, language:shp, language:shu, language:sid, language:sig, language:sil, language:sim, language:sin, language:sja, language:sjo, language:sju, language:skg, language:skr, language:sld, language:slk, language:sll, language:slv, language:sma, language:sme, language:smj, language:smk, language:sml, language:smn, language:smo, language:sms, language:smt, language:sna, language:snc, language:snd, language:snf, language:snn, language:snp, language:snw, language:sny, language:soe, language:som, language:sop, language:soq, language:sot, language:soy, language:spa, language:spl, language:spm, language:spp, language:sps, language:spy, language:srd, language:sri, language:srm, language:srn, language:srp, language:srq, language:srr, language:ssd, language:ssg, language:ssw, language:ssx, language:stn, language:stp, language:stq, language:sua, language:suc, language:sue, language:suk, language:sun, language:sur, language:sus, language:suz, language:swb, language:swc, language:swe, language:swg, language:swh, language:swk, language:swp, language:sxb, language:sxn, language:syb, language:syc, language:syl, language:szl, language:szy, language:tab, language:tac, language:tah, language:taj, language:tam, language:tap, language:taq, 
language:tar, language:tat, language:tav, language:taw, language:tay, language:tbc, language:tbg, language:tbk, language:tbl, language:tbo, language:tbw, language:tby, language:tbz, language:tca, language:tcc, language:tcf, language:tcs, language:tcy, language:tcz, language:ted, language:tee, language:tel, language:tem, language:teo, language:ter, language:tet, language:tew, language:tfr, language:tgk, language:tgo, language:tgp, language:tha, language:thk, language:thl, language:tif, language:tig, language:tih, language:tik, language:tim, language:tir, language:tiv, language:tiy, language:tke, language:tkl, language:tkr, language:tku, language:tlb, language:tlf, language:tlh, language:tlj, language:tll, language:tly, language:tmc, language:tmd, language:tna, language:tnc, language:tnk, language:tnn, language:tnp, language:tnr, language:tob, language:toc, language:tod, language:tog, language:toh, language:toi, language:toj, language:tok, language:ton, language:too, language:top, language:tos, language:tpa, language:tpi, language:tpm, language:tpp, language:tpt, language:tpw, language:tpz, language:tqo, language:trc, language:trn, language:tro, language:trp, language:trq, language:trs, language:trv, language:tsc, language:tsg, language:tsn, language:tso, language:tsw, language:tsz, language:ttc, language:tte, language:ttj, language:ttq, language:tuc, language:tue, language:tuf, language:tui, language:tuk, language:tul, language:tum, language:tuo, language:tur, language:tuv, language:tvk, language:tvl, language:twi, language:twu, language:twx, language:txq, language:txu, language:tyv, language:tzh, language:tzj, language:tzl, language:tzm, language:tzo, language:ubr, language:ubu, language:udm, language:udu, language:uig, language:ukr, language:umb, language:upv, language:ura, language:urb, language:urd, language:urh, language:uri, language:urk, language:urt, language:urw, language:ury, language:usa, language:usp, language:uth, language:uvh, language:uvl, language:uzn, language:uzs, language:vag, language:vap, language:var, language:vec, language:ven, language:vep, language:vid, language:vie, language:viv, language:vls, language:vmk, language:vmw, language:vmy, language:vol, language:vot, language:vro, language:vun, language:vut, language:waj, language:wal, language:wap, language:war, language:wat, language:way, language:wba, language:wbm, language:wbp, language:wed, language:wer, language:wes, language:wew, language:whg, language:whk, language:wib, language:wim, language:wiu, language:wln, language:wls, language:wlv, language:wlx, language:wmt, language:wmw, language:wnc, language:wnu, language:wob, language:wol, language:wos, language:wrk, language:wrs, language:wsg, language:wsk, language:wuu, language:wuv, language:wwa, language:xal, language:xav, language:xbi, language:xbr, language:xed, language:xho, language:xla, language:xmf, language:xmm, language:xmv, language:xnn, language:xog, language:xon, language:xrb, language:xsb, language:xsi, language:xsm, language:xsr, language:xsu, language:xtd, language:xtm, language:xtn, language:xuo, language:yaa, language:yad, language:yal, language:yam, language:yan, language:yao, language:yap, language:yaq, language:yat, language:yaz, language:ybb, language:yby, language:ycn, language:ydd, language:yim, language:yka, language:yle, language:yli, language:yml, language:yom, language:yon, language:yor, language:yrb, language:yre, language:yrk, language:yrl, language:yss, language:yua, language:yue, language:yuj, language:yup, language:yut, 
language:yuw, language:yuz, language:yva, language:zaa, language:zab, language:zac, language:zad, language:zae, language:zai, language:zam, language:zao, language:zar, language:zas, language:zat, language:zav, language:zaw, language:zca, language:zdj, language:zea, language:zgh, language:zia, language:ziw, language:zne, language:zom, language:zos, language:zpa, language:zpc, language:zpg, language:zpi, language:zpj, language:zpl, language:zpm, language:zpo, language:zpq, language:zpt, language:zpu, language:zpv, language:zpz, language:zsm, language:zsr, language:ztq, language:zty, language:zul, language:zyb, language:zyp, license:odc-by, size_categories:1B<n<10B, modality:tabular, modality:text, arxiv:2506.20920, arxiv:2109.07445, arxiv:2406.17557, doi:10.57967/hf/3744, region:us
- **License:** Not specified
## 📖 Description
```text
🥂 FineWeb2
A sparkling update with 1000s of languages
What is it?
This is the second iteration of the popular 🍷 FineWeb dataset, bringing high quality pretraining data to over 1000 🗣️ languages.
The 🥂 FineWeb2 dataset is fully reproducible, available under the permissive ODC-By 1.0 license and extensively validated through hundreds of ablation experiments.
In particular, on the set of 9 diverse languages we used to guide our processing decisions, 🥂 FineWeb2… See the full description on the dataset page: https://huggingface.co/datasets/HuggingFaceFW/fineweb-2....
```
## 📂 File System Sample
- `.gitattributes`
- `README.md`
- `data/aai_Latn/test/000_00000.parquet`
- `data/aai_Latn/train/000_00000.parquet`
- `data/aai_Latn_removed/train/000_00000.parquet`
- `data/aak_Latn/test/000_00000.parquet`
- `data/aak_Latn/train/000_00000.parquet`
- `data/aak_Latn_removed/train/000_00000.parquet`
- `data/aau_Latn/test/000_00000.parquet`
- `data/aau_Latn/train/000_00000.parquet`
- `data/aau_Latn_removed/train/000_00000.parquet`
- `data/aaz_Latn/test/000_00000.parquet`
- `data/aaz_Latn/train/000_00000.parquet`
- `data/aaz_Latn_removed/train/000_00000.parquet`
- `data/aba_Latn/test/000_00000.parquet`
- ... and more.
## 📊 Data Structure
### Config: `aai_Latn`
#### Split: `test`
| Column Name | Data Type |
|---|---|
| `text` | `str` |
| `id` | `str` |
| `dump` | `str` |
| `url` | `str` |
| `date` | `str` |
| `file_path` | `str` |
| `language` | `str` |
| `language_score` | `float` |
| `language_script` | `str` |
| `minhash_cluster_size` | `int` |
| `top_langs` | `str` |
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `text` | `str` |
| `id` | `str` |
| `dump` | `str` |
| `url` | `str` |
| `date` | `str` |
| `file_path` | `str` |
| `language` | `str` |
| `language_score` | `float` |
| `language_script` | `str` |
| `minhash_cluster_size` | `int` |
| `top_langs` | `str` |
### Config: `aak_Latn`
#### Split: `test`
| Column Name | Data Type |
|---|---|
| `text` | `str` |
| `id` | `str` |
| `dump` | `str` |
| `url` | `str` |
| `date` | `str` |
| `file_path` | `str` |
| `language` | `str` |
| `language_score` | `float` |
| `language_script` | `str` |
| `minhash_cluster_size` | `int` |
| `top_langs` | `str` |
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `text` | `str` |
| `id` | `str` |
| `dump` | `str` |
| `url` | `str` |
| `date` | `str` |
| `file_path` | `str` |
| `language` | `str` |
| `language_score` | `float` |
| `language_script` | `str` |
| `minhash_cluster_size` | `int` |
| `top_langs` | `str` |
### Config: `aau_Latn`
#### Split: `test`
| Column Name | Data Type |
|---|---|
| `text` | `str` |
| `id` | `str` |
| `dump` | `str` |
| `url` | `str` |
| `date` | `str` |
| `file_path` | `str` |
| `language` | `str` |
| `language_score` | `float` |
| `language_script` | `str` |
| `minhash_cluster_size` | `int` |
| `top_langs` | `str` |
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `text` | `str` |
| `id` | `str` |
| `dump` | `str` |
| `url` | `str` |
| `date` | `str` |
| `file_path` | `str` |
| `language` | `str` |
| `language_score` | `float` |
| `language_script` | `str` |
| `minhash_cluster_size` | `int` |
| `top_langs` | `str` |
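With one config per language-script pair and a multi-terabyte total size, streaming a single config is usually the practical way to sample the corpus. A minimal sketch using the `aai_Latn` config shown above; any other listed config name works the same way:

```python
from datasets import load_dataset

# Stream one language config instead of downloading the whole corpus.
fw2 = load_dataset(
    "HuggingFaceFW/fineweb-2",
    name="aai_Latn",
    split="train",
    streaming=True,
)

for i, doc in enumerate(fw2):
    # Each record carries the text plus provenance and language-id metadata.
    print(doc["language"], doc["language_score"], doc["url"])
    print(doc["text"][:200])
    if i >= 2:
        break
```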
# Dataset: `BAAI/Infinity-Instruct`
## 📝 Metadata
- **Author/Owner:** BAAI
- **Downloads:** 2768
- **Likes:** 670
- **Tags:** task_categories:text-generation, language:en, language:zh, license:cc-by-sa-4.0, size_categories:10M<n<100M, format:parquet, modality:tabular, modality:text, library:datasets, library:dask, library:mlcroissant, library:polars, arxiv:2506.11116, arxiv:2402.00530, arxiv:2405.19327, arxiv:2409.07045, arxiv:2408.07089, region:us
- **License:** Not specified
## 📖 Description
```text
Infinity Instruct
Beijing Academy of Artificial Intelligence (BAAI)
[Paper][Code][🤗]
The quality and scale of instruction data are crucial for model performance. Recently, open-source models have increasingly relied on fine-tuning datasets comprising millions of instances, necessitating both high quality and large scale. However, the open-source community has long been constrained by the high costs associated with building such extensive and high-quality instruction… See the full description on the dataset page: https://huggingface.co/datasets/BAAI/Infinity-Instruct....
```
## 📂 File System Sample
- `.gitattributes`
- `0625/train-00000-of-00007.parquet`
- `0625/train-00001-of-00007.parquet`
- `0625/train-00002-of-00007.parquet`
- `0625/train-00003-of-00007.parquet`
- `0625/train-00004-of-00007.parquet`
- `0625/train-00005-of-00007.parquet`
- `0625/train-00006-of-00007.parquet`
- `3M/train-00000-of-00035.parquet`
- `3M/train-00001-of-00035.parquet`
- `3M/train-00002-of-00035.parquet`
- `3M/train-00003-of-00035.parquet`
- `3M/train-00004-of-00035.parquet`
- `3M/train-00005-of-00035.parquet`
- `3M/train-00006-of-00035.parquet`
- ... and more.
## 📊 Data Structure
**Graceful Failure:**
```
Could not inspect the dataset's structure.
This is common for complex datasets that require executing remote code, which is disabled for stability.
Details: Dataset 'BAAI/Infinity-Instruct' is a gated dataset on the Hub. Visit the dataset page at https://huggingface.co/datasets/BAAI/Infinity-Instruct to ask for access.
```
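Since structure inspection failed because the dataset is gated, a minimal sketch of how one would typically load it after being granted access on the Hub: authenticate with `huggingface_hub`, then call `load_dataset`. The config name `"3M"` is only inferred from the `3M/` directory in the file listing above and is not confirmed by the dataset card.

```python
from huggingface_hub import login
from datasets import load_dataset

# Authenticate with a Hub token that has been granted access to the gated repo.
# Request access first on the dataset page; the token can also come from `huggingface-cli login`.
login(token="hf_xxx")  # placeholder token

# "3M" is a guess based on the 3M/ directory in the file listing; adjust to the actual config name.
ds = load_dataset("BAAI/Infinity-Instruct", "3M", split="train")
print(ds)
```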
|
open-r1/OpenR1-Math-220k
|
open-r1
|
language:en, license:apache-2.0, size_categories:100K<n<1M, format:parquet, modality:text, library:datasets, library:dask, library:mlcroissant, library:polars, region:us
|
community
|
# Dataset: `open-r1/OpenR1-Math-220k`
## 📝 Metadata
- **Author/Owner:** open-r1
- **Downloads:** 7609
- **Likes:** 658
- **Tags:** language:en, license:apache-2.0, size_categories:100K<n<1M, format:parquet, modality:text, library:datasets, library:dask, library:mlcroissant, library:polars, region:us
- **License:** Not specified
## 📖 Description
```text
OpenR1-Math-220k
Dataset description
OpenR1-Math-220k is a large-scale dataset for mathematical reasoning. It consists of 220k math problems with two to four reasoning traces generated by DeepSeek R1 for problems from NuminaMath 1.5.
The traces were verified using Math Verify for most samples and Llama-3.3-70B-Instruct as a judge for 12% of the samples, and each problem contains at least one reasoning trace with a correct answer.
The dataset consists of two splits:… See the full description on the dataset page: https://huggingface.co/datasets/open-r1/OpenR1-Math-220k....
```
## 📂 File System Sample
- `.gitattributes`
- `README.md`
- `all/default-00000-of-00010.parquet`
- `all/default-00001-of-00010.parquet`
- `all/default-00002-of-00010.parquet`
- `all/default-00003-of-00010.parquet`
- `all/default-00004-of-00010.parquet`
- `all/default-00005-of-00010.parquet`
- `all/default-00006-of-00010.parquet`
- `all/default-00007-of-00010.parquet`
- `all/default-00008-of-00010.parquet`
- `all/default-00009-of-00010.parquet`
- `all/extended-00000-of-00010.parquet`
- `all/extended-00001-of-00010.parquet`
- `all/extended-00002-of-00010.parquet`
- ... and more.
## 📊 Data Structure
### Config: `all`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `problem` | `str` |
| `solution` | `str` |
| `answer` | `str` |
| `problem_type` | `str` |
| `question_type` | `str` |
| `source` | `str` |
| `uuid` | `str` |
| `is_reasoning_complete` | `list` |
| `generations` | `list` |
| `correctness_math_verify` | `list` |
| `correctness_llama` | `list` |
| `finish_reasons` | `NoneType` |
| `correctness_count` | `int` |
| `messages` | `list` |
### Config: `default`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `problem` | `str` |
| `solution` | `str` |
| `answer` | `str` |
| `problem_type` | `str` |
| `question_type` | `str` |
| `source` | `str` |
| `uuid` | `str` |
| `is_reasoning_complete` | `list` |
| `generations` | `list` |
| `correctness_math_verify` | `list` |
| `correctness_llama` | `NoneType` |
| `finish_reasons` | `NoneType` |
| `correctness_count` | `int` |
| `messages` | `list` |
### Config: `extended`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `problem` | `str` |
| `solution` | `str` |
| `answer` | `str` |
| `problem_type` | `str` |
| `question_type` | `str` |
| `source` | `str` |
| `uuid` | `str` |
| `is_reasoning_complete` | `list` |
| `generations` | `list` |
| `correctness_math_verify` | `list` |
| `correctness_llama` | `NoneType` |
| `finish_reasons` | `NoneType` |
| `correctness_count` | `int` |
| `messages` | `list` |
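As a rough usage sketch, the `default` config above can be loaded directly; each record carries the problem, the DeepSeek R1 reasoning traces in `generations`, and the per-trace verification flags in `correctness_math_verify`.

```python
from datasets import load_dataset

# Load the "default" config of OpenR1-Math-220k (config names taken from the tables above).
ds = load_dataset("open-r1/OpenR1-Math-220k", "default", split="train")

row = ds[0]
print(row["problem"][:200])
# `generations` holds the R1 traces; `correctness_math_verify` flags which traces verified.
for trace, ok in zip(row["generations"], row["correctness_math_verify"]):
    print(ok, trace[:80])
```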
|
m-a-p/COIG-CQIA
|
m-a-p
|
task_categories:question-answering, task_categories:text-classification, task_categories:text-generation, language:zh, size_categories:10K<n<100K, format:json, modality:text, library:datasets, library:dask, library:mlcroissant, library:polars, arxiv:2403.18058, arxiv:2304.07987, arxiv:2307.09705, region:us
|
community
|
# Dataset: `m-a-p/COIG-CQIA`
## 📝 Metadata
- **Author/Owner:** m-a-p
- **Downloads:** 4198
- **Likes:** 654
- **Tags:** task_categories:question-answering, task_categories:text-classification, task_categories:text-generation, language:zh, size_categories:10K<n<100K, format:json, modality:text, library:datasets, library:dask, library:mlcroissant, library:polars, arxiv:2403.18058, arxiv:2304.07987, arxiv:2307.09705, region:us
- **License:** Not specified
## 📖 Description
```text
COIG-CQIA: Quality is All You Need for Chinese Instruction Fine-tuning
Dataset Details
Dataset Description
Welcome to COIG-CQIA. COIG-CQIA (Chinese Open Instruction Generalist - Quality is All You Need) is an open-source, high-quality instruction fine-tuning dataset intended to provide the Chinese NLP community with instruction-tuning data that is high quality and consistent with human interaction behavior. COIG-CQIA takes question-answer pairs and articles collected from the Chinese internet as raw data, which were then deeply cleaned, restructured, and manually reviewed. Inspired by research such as LIMA: Less Is More for Alignment, which shows that a small amount of high-quality data is enough for a large language model to learn human interaction behavior, we paid close attention to the source, quality, and diversity of the data during construction; see the data introduction and our forthcoming paper for details.
Welcome to the COIG-CQIA… See the full description on the dataset page: https://huggingface.co/datasets/m-a-p/COIG-CQIA....
```
## 📂 File System Sample
- `.gitattributes`
- `COIG-CQIA-full.jsonl`
- `README.md`
- `Yi_logo.svg`
- `chinese_traditional/chengyu.jsonl`
- `chinese_traditional/poem.jsonl`
- `chinese_traditional/trad-multi-choice-100-2.jsonl`
- `chinese_traditional/trad-multi-choice-100.jsonl`
- `chinese_traditional/trad-multi-choice-40.jsonl`
- `chinese_traditional/translate_classical_chinese.jsonl`
- `coig_pc/coig_pc_core_sample.jsonl`
- `douban/book_introduce.jsonl`
- `douban/book_reviews.jsonl`
- `douban/movie_anime_synopsis_more_prompt.jsonl`
- `douban/movie_documentary_synopsis_more_prompt.jsonl`
- ... and more.
## 📊 Data Structure
### Config: `chinese_traditional`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `instruction` | `str` |
| `input` | `str` |
| `output` | `str` |
| `task_type` | `dict` |
| `domain` | `list` |
| `metadata` | `str` |
| `answer_from` | `str` |
| `human_verified` | `bool` |
| `copyright` | `str` |
### Config: `coig_pc`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `instruction` | `str` |
| `input` | `str` |
| `output` | `str` |
| `task_type` | `dict` |
| `domain` | `list` |
| `metadata` | `str` |
| `answer_from` | `str` |
| `human_verified` | `bool` |
| `copyright` | `str` |
### Config: `exam`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `instruction` | `str` |
| `input` | `str` |
| `output` | `str` |
| `task_type` | `dict` |
| `domain` | `list` |
| `metadata` | `str` |
| `answer_from` | `str` |
| `human_verified` | `bool` |
| `copyright` | `str` |
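A minimal loading sketch for one of the configs listed above. Since each config shares the same instruction/input/output schema, building a simple prompt string per record looks roughly like this (the prompt template itself is an illustrative assumption, not part of the dataset card):

```python
from datasets import load_dataset

# Load the chinese_traditional config (name taken from the structure above).
ds = load_dataset("m-a-p/COIG-CQIA", "chinese_traditional", split="train")

def to_prompt(example):
    # Illustrative template: concatenate instruction and optional input into one prompt field.
    prompt = example["instruction"]
    if example["input"]:
        prompt += "\n" + example["input"]
    return {"prompt": prompt, "response": example["output"]}

pairs = ds.map(to_prompt)
print(pairs[0]["prompt"][:120])
```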
|
HuggingFaceTB/cosmopedia
|
HuggingFaceTB
|
language:en, license:apache-2.0, size_categories:10M<n<100M, format:parquet, modality:text, library:datasets, library:dask, library:mlcroissant, library:polars, arxiv:2309.05463, arxiv:2306.11644, region:us, synthetic
|
community
|
# Dataset: `HuggingFaceTB/cosmopedia`
## 📝 Metadata
- **Author/Owner:** HuggingFaceTB
- **Downloads:** 38381
- **Likes:** 642
- **Tags:** language:en, license:apache-2.0, size_categories:10M<n<100M, format:parquet, modality:text, library:datasets, library:dask, library:mlcroissant, library:polars, arxiv:2309.05463, arxiv:2306.11644, region:us, synthetic
- **License:** Not specified
## 📖 Description
```text
Cosmopedia v0.1
Image generated by DALL-E, the prompt was generated by Mixtral-8x7B-Instruct-v0.1
Note: Cosmopedia v0.2 is available at smollm-corpus
User: What do you think "Cosmopedia" could mean? Hint: in our case it's not related to cosmology.
Mixtral-8x7B-Instruct-v0.1: A possible meaning for "Cosmopedia" could be an encyclopedia or collection of information about
different cultures, societies, and topics from around the world, emphasizing diversity and global… See the full description on the dataset page: https://huggingface.co/datasets/HuggingFaceTB/cosmopedia....
```
## 📂 File System Sample
- `.gitattributes`
- `README.md`
- `data/auto_math_text/train-00000-of-00018.parquet`
- `data/auto_math_text/train-00001-of-00018.parquet`
- `data/auto_math_text/train-00002-of-00018.parquet`
- `data/auto_math_text/train-00003-of-00018.parquet`
- `data/auto_math_text/train-00004-of-00018.parquet`
- `data/auto_math_text/train-00005-of-00018.parquet`
- `data/auto_math_text/train-00006-of-00018.parquet`
- `data/auto_math_text/train-00007-of-00018.parquet`
- `data/auto_math_text/train-00008-of-00018.parquet`
- `data/auto_math_text/train-00009-of-00018.parquet`
- `data/auto_math_text/train-00010-of-00018.parquet`
- `data/auto_math_text/train-00011-of-00018.parquet`
- `data/auto_math_text/train-00012-of-00018.parquet`
- ... and more.
## 📊 Data Structure
### Config: `auto_math_text`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `prompt` | `str` |
| `text_token_length` | `int` |
| `text` | `str` |
| `seed_data` | `str` |
| `format` | `str` |
| `audience` | `str` |
### Config: `khanacademy`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `prompt` | `str` |
| `text_token_length` | `int` |
| `text` | `str` |
| `seed_data` | `str` |
| `format` | `str` |
| `audience` | `str` |
### Config: `openstax`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `text_token_length` | `int` |
| `prompt` | `str` |
| `text` | `str` |
| `seed_data` | `str` |
| `format` | `str` |
| `audience` | `str` |
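Since all configs above share the same six columns, a small sketch of pulling a slice of one config into pandas for quick inspection (the slice size is arbitrary):

```python
from datasets import load_dataset

# Take only the first 1,000 rows of the auto_math_text config to keep the download small.
ds = load_dataset("HuggingFaceTB/cosmopedia", "auto_math_text", split="train[:1000]")

df = ds.to_pandas()
# Each row pairs the generation prompt with the synthetic text and its intended audience.
print(df[["audience", "format", "text_token_length"]].head())
print(df.loc[0, "text"][:200])
```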
|
proj-persona/PersonaHub
|
proj-persona
|
task_categories:text-generation, task_categories:text-classification, task_categories:token-classification, task_categories:fill-mask, task_categories:table-question-answering, language:en, language:zh, license:cc-by-nc-sa-4.0, size_categories:100K<n<1M, format:json, modality:text, library:datasets, library:dask, library:mlcroissant, library:polars, arxiv:2406.20094, region:us, synthetic, text, math, reasoning, instruction, tool, persona
|
community
|
# Dataset: `proj-persona/PersonaHub`
## 📝 Metadata
- **Author/Owner:** proj-persona
- **Downloads:** 12820
- **Likes:** 642
- **Tags:** task_categories:text-generation, task_categories:text-classification, task_categories:token-classification, task_categories:fill-mask, task_categories:table-question-answering, language:en, language:zh, license:cc-by-nc-sa-4.0, size_categories:100K<n<1M, format:json, modality:text, library:datasets, library:dask, library:mlcroissant, library:polars, arxiv:2406.20094, region:us, synthetic, text, math, reasoning, instruction, tool, persona
- **License:** Not specified
## 📖 Description
```text
Scaling Synthetic Data Creation with 1,000,000,000 Personas
This repo releases data introduced in our paper Scaling Synthetic Data Creation with 1,000,000,000 Personas:
We propose a novel persona-driven data synthesis methodology that leverages various perspectives within a large language model (LLM) to create diverse synthetic data. To fully exploit this methodology at scale, we introduce PERSONA HUB – a collection of 1 billion diverse personas automatically curated from web data.… See the full description on the dataset page: https://huggingface.co/datasets/proj-persona/PersonaHub....
```
## 📂 File System Sample
- `.gitattributes`
- `ElitePersonas/elite_personas.part1.jsonl`
- `ElitePersonas/elite_personas.part10.jsonl`
- `ElitePersonas/elite_personas.part11.jsonl`
- `ElitePersonas/elite_personas.part12.jsonl`
- `ElitePersonas/elite_personas.part13.jsonl`
- `ElitePersonas/elite_personas.part14.jsonl`
- `ElitePersonas/elite_personas.part15.jsonl`
- `ElitePersonas/elite_personas.part16.jsonl`
- `ElitePersonas/elite_personas.part17.jsonl`
- `ElitePersonas/elite_personas.part18.jsonl`
- `ElitePersonas/elite_personas.part19.jsonl`
- `ElitePersonas/elite_personas.part2.jsonl`
- `ElitePersonas/elite_personas.part3.jsonl`
- `ElitePersonas/elite_personas.part4.jsonl`
- ... and more.
## 📊 Data Structure
### Config: `math`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `input persona` | `str` |
| `synthesized text` | `str` |
| `description` | `str` |
### Config: `instruction`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `input persona` | `str` |
| `synthesized text` | `str` |
| `description` | `str` |
### Config: `reasoning`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `input persona` | `str` |
| `synthesized text` | `str` |
| `description` | `str` |
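Note that the column names above contain spaces (`input persona`, `synthesized text`), so records must be accessed with bracket notation. A minimal sketch of loading the `math` config:

```python
from datasets import load_dataset

# Load the math config (config names taken from the tables above).
ds = load_dataset("proj-persona/PersonaHub", "math", split="train")

row = ds[0]
# Column names contain spaces, so plain attribute access will not work.
print(row["input persona"][:120])
print(row["synthesized text"][:200])
```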
|
HuggingFaceFW/finepdfs
|
HuggingFaceFW
|
task_categories:text-generation, language:aai, language:aak, language:aau, language:aaz, language:aba, language:abi, language:abk, language:abn, language:abq, language:abs, language:abt, language:abx, language:aby, language:abz, language:aca, language:acd, language:ace, language:acf, language:ach, language:acm, language:acn, language:acr, language:acu, language:ada, language:ade, language:adh, language:adi, language:adj, language:adl, language:ady, language:adz, language:aeb, language:aer, language:aeu, language:aey, language:afr, language:agd, language:agg, language:agm, language:agn, language:agr, language:agt, language:agu, language:agw, language:agx, language:aha, language:ahk, language:aia, language:aii, language:aim, language:ain, language:ajg, language:aji, language:ajz, language:akb, language:ake, language:akh, language:akp, language:alj, language:aln, language:alp, language:alq, language:als, language:alt, language:aly, language:alz, language:ame, language:amf, language:amh, language:ami, language:amk, language:amm, language:amn, language:amp, language:amr, language:amu, language:amx, language:ang, language:anm, language:ann, language:anp, language:anv, language:any, language:aoi, language:aoj, language:aom, language:aoz, language:apb, language:apc, language:ape, language:apn, language:apr, language:apt, language:apu, language:apw, language:apy, language:apz, language:arb, language:are, language:arg, language:arl, language:arn, language:arp, language:arq, language:ars, language:ary, language:arz, language:asg, language:asm, language:aso, language:ast, language:ata, language:atb, language:atd, language:atg, language:ati, language:atj, language:atq, language:att, language:auc, language:aui, language:auy, language:ava, language:avk, language:avn, language:avt, language:avu, language:awa, language:awb, language:awx, language:ayo, language:ayp, language:ayr, language:azb, language:azg, language:azj, language:azz, language:bak, language:bam, language:ban, language:bao, language:bar, language:bas, language:bav, language:bba, language:bbb, language:bbc, language:bbj, language:bbk, language:bbo, language:bbr, language:bch, language:bci, language:bcl, language:bco, language:bcw, language:bdd, language:bdh, language:bdq, language:bea, language:bef, language:bel, language:bem, language:ben, language:beq, language:bew, language:bex, language:bfd, language:bfo, language:bgr, language:bgs, language:bgt, language:bgz, language:bhg, language:bhl, language:bho, language:bhp, language:bhw, language:bhz, language:bib, language:big, language:bim, language:bin, language:bis, language:biu, language:biv, language:bjn, language:bjp, language:bjr, language:bjv, language:bkd, language:bkl, language:bkq, language:bku, language:bkv, language:bla, language:blh, language:blk, language:blw, language:blz, language:bmh, language:bmk, language:bmq, language:bmr, language:bmu, language:bmv, language:bno, language:bnp, language:boa, language:bod, language:boj, language:bom, language:bon, language:bos, language:bov, language:box, language:bpr, language:bps, language:bpy, language:bqc, language:bqj, language:bqp, language:bre, language:brh, language:bru, language:brx, language:bsc, language:bsn, language:bsp, language:bsq, language:bss, language:btd, language:bth, language:bts, language:btt, language:btx, language:bud, language:bug, language:buk, language:bul, language:bum, language:bus, language:bvc, language:bvd, language:bvr, language:bvz, language:bwd, language:bwi, language:bwq, language:bwu, language:bxh, 
language:bxr, language:byr, language:byv, language:byx, language:bzd, language:bzh, language:bzi, language:bzj, language:caa, language:cab, language:cac, language:caf, language:cag, language:cak, language:cao, language:cap, language:caq, language:car, language:cas, language:cat, language:cav, language:cax, language:cbc, language:cbi, language:cbk, language:cbr, language:cbs, language:cbt, language:cbu, language:cbv, language:cce, language:cco, language:ccp, language:ceb, language:ceg, language:cek, language:ces, language:cfm, language:cgc, language:cgg, language:cha, language:chd, language:che, language:chf, language:chj, language:chk, language:cho, language:chq, language:chr, language:chu, language:chv, language:chw, language:chz, language:cjk, language:cjo, language:cjp, language:cjs, language:cjv, language:ckb, language:cko, language:ckt, language:cle, language:clu, language:cly, language:cme, language:cmn, language:cmo, language:cmr, language:cnh, language:cni, language:cnk, language:cnl, language:cnt, language:cnw, language:coe, language:cof, language:cok, language:con, language:cop, language:cor, language:cos, language:cot, language:cou, language:cpa, language:cpb, language:cpc, language:cpu, language:cpy, language:crh, language:crj, language:crk, language:crl, language:crm, language:crn, language:crs, language:crt, language:crx, language:csb, language:csk, language:cso, language:csw, language:csy, language:cta, language:ctd, language:cto, language:ctp, language:ctu, language:cub, language:cuc, language:cui, language:cuk, language:cul, language:cut, language:cux, language:cwe, language:cwt, language:cya, language:cym, language:czt, language:daa, language:dad, language:daf, language:dag, language:dah, language:dak, language:dan, language:dar, language:ddg, language:ddn, language:ded, language:des, language:deu, language:dga, language:dgc, language:dgi, language:dgr, language:dgz, language:dhg, language:dhm, language:dhv, language:did, language:dig, language:dik, language:diq, language:dis, language:diu, language:div, language:dje, language:djk, language:djr, language:dks, language:dln, language:dng, language:dnj, language:dnw, language:dob, language:doi, language:dop, language:dos, language:dow, language:drg, language:dru, language:dsb, language:dtb, language:dtp, language:dts, language:dty, language:dua, language:due, language:dug, language:duo, language:dur, language:dwr, language:dww, language:dyi, language:dyo, language:dyu, language:dzo, language:ebk, language:efi, language:eka, language:ekk, language:eko, language:ell, language:emi, language:eml, language:emp, language:enb, language:enl, language:enm, language:enq, language:enx, language:epo, language:eri, language:ese, language:esi, language:esk, language:ess, language:esu, language:eto, language:etr, language:etu, language:eus, language:eve, language:ewe, language:ewo, language:ext, language:eza, language:faa, language:fad, language:fai, language:fal, language:fan, language:fao, language:far, language:fas, language:fat, language:ffm, language:fij, language:fil, language:fin, language:fit, language:fkv, language:fmu, language:fon, language:for, language:fra, language:frd, language:fro, language:frp, language:frr, language:fry, language:fub, language:fud, language:fue, language:fuf, language:fuh, language:fuq, language:fur, language:fuv, language:gaa, language:gag, language:gah, language:gai, language:gam, language:gaw, language:gaz, language:gbi, language:gbo, language:gbr, language:gcf, language:gcr, language:gde, 
language:gdg, language:gdn, language:gdr, language:geb, language:gej, language:gfk, language:ghs, language:gid, language:gil, language:giz, language:gjn, language:gkn, language:gla, language:gle, language:glg, language:glk, language:glv, language:gmh, language:gmv, language:gna, language:gnb, language:gnd, language:gng, language:gnn, language:gnw, language:goa, language:gof, language:gog, language:goh, language:gom, language:gor, language:gos, language:got, language:gqr, language:grc, language:grt, language:gso, language:gsw, language:gub, language:guc, language:gud, language:gug, language:guh, language:gui, language:guj, language:guk, language:gul, language:gum, language:gun, language:guo, language:guq, language:gur, language:guu, language:guw, language:gux, language:guz, language:gvc, language:gvf, language:gvl, language:gvn, language:gwi, language:gwr, language:gya, language:gym, language:gyr, language:hac, language:hae, language:hag, language:hak, language:hat, language:hav, language:haw, language:hay, language:hbo, language:hch, language:heb, language:heg, language:heh, language:her, language:hif, language:hig, language:hil, language:hin, language:hix, language:hla, language:hmo, language:hmr, language:hne, language:hnj, language:hnn, language:hns, language:hop, language:hot, language:hra, language:hrv, language:hrx, language:hsb, language:hto, language:hub, language:hui, language:hun, language:hus, language:huu, language:huv, language:hvn, language:hwc, language:hye, language:hyw, language:ian, language:iba, language:ibg, language:ibo, language:icr, language:ido, language:idu, language:ifa, language:ifb, language:ife, language:ifk, language:ifu, language:ify, language:ige, language:ign, language:ike, language:ikk, language:ikt, language:ikw, language:ilb, language:ile, language:ilo, language:imo, language:ina, language:inb, language:ind, language:inh, language:ino, language:iou, language:ipi, language:iqw, language:iri, language:irk, language:iry, language:isd, language:ish, language:isl, language:iso, language:ita, language:itv, language:ium, language:ivb, language:ivv, language:iws, language:ixl, language:izr, language:izz, language:jaa, language:jac, language:jae, language:jam, language:jav, language:jbo, language:jbu, language:jic, language:jiv, language:jmc, language:jpn, language:jra, language:jun, language:jvn, language:kaa, language:kab, language:kac, language:kak, language:kal, language:kam, language:kan, language:kao, language:kaq, language:kas, language:kat, language:kaz, language:kbc, language:kbd, language:kbh, language:kbm, language:kbo, language:kbp, language:kbq, language:kbr, language:kby, language:kca, language:kcg, language:kck, language:kdc, language:kde, language:kdh, language:kdi, language:kdj, language:kdl, language:kdr, language:kea, language:kei, language:kek, language:ken, language:keo, language:ker, language:kew, language:kez, language:kff, language:kgf, language:kgk, language:kgp, language:kgr, language:kha, language:khk, language:khm, language:khs, language:khz, language:kia, language:kij, language:kik, language:kin, language:kir, language:kiu, language:kix, language:kjb, language:kje, language:kjh, language:kjs, language:kkc, language:kki, language:kkj, language:kkl, language:kle, language:klt, language:klv, language:kmb, language:kmg, language:kmh, language:kmk, language:kmm, language:kmo, language:kmr, language:kms, language:kmu, language:kmy, language:knc, language:kne, language:knf, language:kng, language:knj, language:knk, language:kno, 
language:knv, language:knx, language:kny, language:kog, language:koi, language:koo, language:kor, language:kos, language:kpe, language:kpf, language:kpg, language:kpj, language:kpq, language:kpr, language:kpv, language:kpw, language:kpx, language:kpz, language:kqc, language:kqe, language:kqf, language:kql, language:kqn, language:kqo, language:kqp, language:kqs, language:kqw, language:kqy, language:krc, language:kri, language:krj, language:krl, language:kru, language:krx, language:ksb, language:ksc, language:ksd, language:ksf, language:ksh, language:ksj, language:ksp, language:ksr, language:kss, language:ksw, language:ktb, language:ktj, language:ktm, language:kto, language:ktu, language:ktz, language:kua, language:kub, language:kud, language:kue, language:kuj, language:kum, language:kup, language:kus, language:kvg, language:kvj, language:kvn, language:kwd, language:kwf, language:kwi, language:kwj, language:kwn, language:kwy, language:kxc, language:kxm, language:kxw, language:kyc, language:kyf, language:kyg, language:kyq, language:kyu, language:kyz, language:kze, language:kzf, language:kzj, language:lac, language:lad, language:lai, language:laj, language:lam, language:lao, language:lap, language:lat, language:lbb, language:lbe, language:lbj, language:lbk, language:lcm, language:lcp, language:ldi, language:ldn, language:lee, language:lef, language:leh, language:lem, language:leu, language:lew, language:lex, language:lez, language:lfn, language:lgg, language:lgl, language:lgm, language:lhi, language:lhu, language:lia, language:lid, language:lif, language:lij, language:lim, language:lin, language:lip, language:lis, language:lit, language:liv, language:ljp, language:lki, language:llb, language:lld, language:llg, language:lln, language:lmk, language:lmo, language:lmp, language:lnd, language:lob, language:loe, language:log, language:lok, language:lol, language:lom, language:loq, language:loz, language:lrc, language:lsi, language:lsm, language:ltg, language:ltz, language:lua, language:lub, language:luc, language:lud, language:lue, language:lug, language:lun, language:luo, language:lus, language:lvs, language:lwg, language:lwo, language:lww, language:lzh, language:maa, language:mad, language:maf, language:mag, language:mah, language:mai, language:maj, language:mak, language:mal, language:mam, language:maq, language:mar, language:mas, language:mau, language:mav, language:maw, language:maz, language:mbb, language:mbc, language:mbd, language:mbf, language:mbh, language:mbi, language:mbj, language:mbl, language:mbs, language:mbt, language:mca, language:mcb, language:mcd, language:mcf, language:mck, language:mcn, language:mco, language:mcp, language:mcq, language:mcu, language:mda, language:mdf, language:mdy, language:med, language:mee, language:mej, language:mek, language:men, language:meq, language:mer, language:met, language:meu, language:mev, language:mfe, language:mfg, language:mfh, language:mfi, language:mfk, language:mfq, language:mfy, language:mfz, language:mgc, language:mgh, language:mgo, language:mgr, language:mhi, language:mhl, language:mhr, language:mhw, language:mhx, language:mhy, language:mib, language:mic, language:mie, language:mif, language:mig, language:mih, language:mil, language:mim, language:min, language:mio, language:mip, language:miq, language:mir, language:mit, language:miy, language:miz, language:mjc, language:mjw, language:mkd, language:mkl, language:mkn, language:mks, language:mkz, language:mlh, language:mlp, language:mlt, language:mlu, language:mmn, language:mmo, 
language:mmx, language:mna, language:mnb, language:mnf, language:mni, language:mnk, language:mns, language:mnw, language:mnx, language:mny, language:moa, language:moc, language:mog, language:moh, language:mop, language:mor, language:mos, language:mox, language:mpg, language:mph, language:mpm, language:mpp, language:mps, language:mpt, language:mpx, language:mqb, language:mqj, language:mqy, language:mrg, language:mri, language:mrj, language:mrq, language:mrv, language:mrw, language:msb, language:msc, language:mse, language:msk, language:msy, language:mta, language:mtg, language:mti, language:mto, language:mtp, language:mua, language:mug, language:muh, language:mui, language:mup, language:mur, language:mus, language:mux, language:muy, language:mva, language:mvn, language:mvp, language:mwc, language:mwf, language:mwl, language:mwm, language:mwn, language:mwp, language:mwq, language:mwv, language:mww, language:mxb, language:mxp, language:mxq, language:mxt, language:mxv, language:mya, language:myb, language:myk, language:myu, language:myv, language:myw, language:myx, language:myy, language:mza, language:mzh, language:mzk, language:mzl, language:mzm, language:mzn, language:mzw, language:mzz, language:nab, language:naf, language:nah, language:nak, language:nap, language:naq, language:nas, language:nav, language:naw, language:nba, language:nbc, language:nbe, language:nbl, language:nbq, language:nbu, language:nca, language:nch, language:ncj, language:ncl, language:ncq, language:nct, language:ncu, language:ncx, language:ndc, language:nde, language:ndh, language:ndi, language:ndj, language:ndo, language:nds, language:ndz, language:neb, language:new, language:nfa, language:nfr, language:ngb, language:ngc, language:ngl, language:ngp, language:ngu, language:nhd, language:nhe, language:nhg, language:nhi, language:nhk, language:nho, language:nhr, language:nhu, language:nhw, language:nhx, language:nhy, language:nia, language:nif, language:nii, language:nij, language:nim, language:nin, language:nio, language:niu, language:niy, language:njb, language:njm, language:njn, language:njo, language:njz, language:nkf, language:nko, language:nld, language:nlg, language:nma, language:nmf, language:nmh, language:nmo, language:nmw, language:nmz, language:nnb, language:nng, language:nnh, language:nnl, language:nno, language:nnp, language:nnq, language:nnw, language:noa, language:nob, language:nod, language:nog, language:non, language:nop, language:not, language:nou, language:nov, language:nph, language:npi, language:npl, language:npo, language:npy, language:nqo, language:nre, language:nrf, language:nri, language:nrm, language:nsa, language:nse, language:nsm, language:nsn, language:nso, language:nss, language:nst, language:nsu, language:ntp, language:ntr, language:ntu, language:nuj, language:nus, language:nuy, language:nvm, language:nwb, language:nwi, language:nwx, language:nxd, language:nya, language:nyf, language:nyk, language:nyn, language:nyo, language:nyu, language:nyy, language:nza, language:nzi, language:nzm, language:obo, language:oci, language:ogo, language:ojb, language:oke, language:oku, language:okv, language:old, language:olo, language:omb, language:omw, language:ong, language:ons, language:ood, language:opm, language:orv, language:ory, language:oss, language:ota, language:otd, language:ote, language:otm, language:otn, language:oto, language:otq, language:ots, language:otw, language:oym, language:ozm, language:pab, language:pad, language:pag, language:pah, language:pam, language:pan, language:pao, 
language:pap, language:pau, language:pbb, language:pbc, language:pbi, language:pbt, language:pcd, language:pck, language:pcm, language:pdc, language:pdt, language:pem, language:pfe, language:pfl, language:phm, language:pib, language:pio, language:pir, language:pis, language:pjt, language:pkb, language:plg, language:pls, language:plt, language:plu, language:plw, language:pma, language:pmf, language:pmq, language:pms, language:pmx, language:pnb, language:pne, language:pnt, language:pny, language:poe, language:poh, language:poi, language:pol, language:pon, language:por, language:pos, language:pot, language:pov, language:poy, language:ppk, language:ppo, language:pps, language:prf, language:prg, language:pri, language:prq, language:pse, language:pss, language:ptp, language:ptu, language:pui, language:pwg, language:pwn, language:pww, language:pxm, language:qub, language:quc, language:quf, language:qug, language:quh, language:qul, language:qup, language:qus, language:quw, language:quy, language:quz, language:qva, language:qvc, language:qve, language:qvh, language:qvi, language:qvm, language:qvn, language:qvo, language:qvs, language:qvw, language:qvz, language:qwh, language:qxh, language:qxl, language:qxn, language:qxo, language:qxr, language:rad, language:rai, language:rap, language:rar, language:rav, language:raw, language:rcf, language:rej, language:rel, language:rgu, language:rhg, language:ria, language:rim, language:rjs, language:rkb, language:rmc, language:rme, language:rml, language:rmn, language:rmo, language:rmq, language:rmy, language:rnd, language:rng, language:rnl, language:roh, language:ron, language:roo, language:rop, language:row, language:rro, language:rtm, language:rub, language:rue, language:ruf, language:rug, language:run, language:rup, language:rus, language:rwo, language:sab, language:sag, language:sah, language:san, language:sas, language:sat, language:sba, language:sbd, language:sbe, language:sbl, language:sbs, language:sby, language:sck, language:scn, language:sco, language:sda, language:sdc, language:sdh, language:sdo, language:sdq, language:seh, language:ses, language:sey, language:sfw, language:sgb, language:sgc, language:sgh, language:sgs, language:sgw, language:sgz, language:shi, language:shk, language:shn, language:shp, language:shu, language:sid, language:sig, language:sil, language:sim, language:sin, language:sja, language:sjo, language:sju, language:skg, language:skr, language:sld, language:slk, language:sll, language:slv, language:sma, language:sme, language:smj, language:smk, language:sml, language:smn, language:smo, language:sms, language:smt, language:sna, language:snc, language:snd, language:snf, language:snn, language:snp, language:snw, language:sny, language:soe, language:som, language:sop, language:soq, language:sot, language:soy, language:spa, language:spl, language:spm, language:spp, language:sps, language:spy, language:srd, language:sri, language:srm, language:srn, language:srp, language:srq, language:srr, language:ssd, language:ssg, language:ssw, language:ssx, language:stn, language:stp, language:stq, language:sua, language:suc, language:sue, language:suk, language:sun, language:sur, language:sus, language:suz, language:swb, language:swc, language:swe, language:swg, language:swh, language:swk, language:swp, language:sxb, language:sxn, language:syb, language:syc, language:syl, language:szl, language:szy, language:tab, language:tac, language:tah, language:taj, language:tam, language:tap, language:taq, language:tar, language:tat, language:tav, 
language:taw, language:tay, language:tbc, language:tbg, language:tbk, language:tbl, language:tbo, language:tbw, language:tby, language:tbz, language:tca, language:tcc, language:tcf, language:tcs, language:tcy, language:tcz, language:ted, language:tee, language:tel, language:tem, language:teo, language:ter, language:tet, language:tew, language:tfr, language:tgk, language:tgo, language:tgp, language:tha, language:thk, language:thl, language:tif, language:tig, language:tih, language:tik, language:tim, language:tir, language:tiv, language:tiy, language:tke, language:tkl, language:tkr, language:tku, language:tlb, language:tlf, language:tlh, language:tlj, language:tll, language:tly, language:tmc, language:tmd, language:tna, language:tnc, language:tnk, language:tnn, language:tnp, language:tnr, language:tob, language:toc, language:tod, language:tog, language:toh, language:toi, language:toj, language:tok, language:ton, language:too, language:top, language:tos, language:tpa, language:tpi, language:tpm, language:tpp, language:tpt, language:tpw, language:tpz, language:tqo, language:trc, language:trn, language:tro, language:trp, language:trq, language:trs, language:trv, language:tsc, language:tsg, language:tsn, language:tso, language:tsw, language:tsz, language:ttc, language:tte, language:ttj, language:ttq, language:tuc, language:tue, language:tuf, language:tui, language:tuk, language:tul, language:tum, language:tuo, language:tur, language:tuv, language:tvk, language:tvl, language:twi, language:twu, language:twx, language:txq, language:txu, language:tyv, language:tzh, language:tzj, language:tzl, language:tzm, language:tzo, language:ubr, language:ubu, language:udm, language:udu, language:uig, language:ukr, language:umb, language:upv, language:ura, language:urb, language:urd, language:urh, language:uri, language:urk, language:urt, language:urw, language:ury, language:usa, language:usp, language:uth, language:uvh, language:uvl, language:uzn, language:uzs, language:vag, language:vap, language:var, language:vec, language:ven, language:vep, language:vid, language:vie, language:viv, language:vls, language:vmk, language:vmw, language:vmy, language:vol, language:vot, language:vro, language:vun, language:vut, language:waj, language:wal, language:wap, language:war, language:wat, language:way, language:wba, language:wbm, language:wbp, language:wed, language:wer, language:wes, language:wew, language:whg, language:whk, language:wib, language:wim, language:wiu, language:wln, language:wls, language:wlv, language:wlx, language:wmt, language:wmw, language:wnc, language:wnu, language:wob, language:wol, language:wos, language:wrk, language:wrs, language:wsg, language:wsk, language:wuu, language:wuv, language:wwa, language:xal, language:xav, language:xbi, language:xbr, language:xed, language:xho, language:xla, language:xmf, language:xmm, language:xmv, language:xnn, language:xog, language:xon, language:xrb, language:xsb, language:xsi, language:xsm, language:xsr, language:xsu, language:xtd, language:xtm, language:xtn, language:xuo, language:yaa, language:yad, language:yal, language:yam, language:yan, language:yao, language:yap, language:yaq, language:yat, language:yaz, language:ybb, language:yby, language:ycn, language:ydd, language:yim, language:yka, language:yle, language:yli, language:yml, language:yom, language:yon, language:yor, language:yrb, language:yre, language:yrk, language:yrl, language:yss, language:yua, language:yue, language:yuj, language:yup, language:yut, language:yuw, language:yuz, language:yva, 
language:zaa, language:zab, language:zac, language:zad, language:zae, language:zai, language:zam, language:zao, language:zar, language:zas, language:zat, language:zav, language:zaw, language:zca, language:zdj, language:zea, language:zgh, language:zia, language:ziw, language:zne, language:zom, language:zos, language:zpa, language:zpc, language:zpg, language:zpi, language:zpj, language:zpl, language:zpm, language:zpo, language:zpq, language:zpt, language:zpu, language:zpv, language:zpz, language:zsm, language:zsr, language:ztq, language:zty, language:zul, language:zyb, language:zyp, license:odc-by, size_categories:100M<n<1B, format:parquet, modality:tabular, modality:text, library:datasets, library:dask, library:mlcroissant, library:polars, arxiv:2506.18421, arxiv:2109.07445, region:us
|
community
|
# Dataset: `HuggingFaceFW/finepdfs`
## 📝 Metadata
- **Author/Owner:** HuggingFaceFW
- **Downloads:** 47698
- **Likes:** 633
- **Tags:** task_categories:text-generation, language:aai, language:aak, language:aau, language:aaz, language:aba, language:abi, language:abk, language:abn, language:abq, language:abs, language:abt, language:abx, language:aby, language:abz, language:aca, language:acd, language:ace, language:acf, language:ach, language:acm, language:acn, language:acr, language:acu, language:ada, language:ade, language:adh, language:adi, language:adj, language:adl, language:ady, language:adz, language:aeb, language:aer, language:aeu, language:aey, language:afr, language:agd, language:agg, language:agm, language:agn, language:agr, language:agt, language:agu, language:agw, language:agx, language:aha, language:ahk, language:aia, language:aii, language:aim, language:ain, language:ajg, language:aji, language:ajz, language:akb, language:ake, language:akh, language:akp, language:alj, language:aln, language:alp, language:alq, language:als, language:alt, language:aly, language:alz, language:ame, language:amf, language:amh, language:ami, language:amk, language:amm, language:amn, language:amp, language:amr, language:amu, language:amx, language:ang, language:anm, language:ann, language:anp, language:anv, language:any, language:aoi, language:aoj, language:aom, language:aoz, language:apb, language:apc, language:ape, language:apn, language:apr, language:apt, language:apu, language:apw, language:apy, language:apz, language:arb, language:are, language:arg, language:arl, language:arn, language:arp, language:arq, language:ars, language:ary, language:arz, language:asg, language:asm, language:aso, language:ast, language:ata, language:atb, language:atd, language:atg, language:ati, language:atj, language:atq, language:att, language:auc, language:aui, language:auy, language:ava, language:avk, language:avn, language:avt, language:avu, language:awa, language:awb, language:awx, language:ayo, language:ayp, language:ayr, language:azb, language:azg, language:azj, language:azz, language:bak, language:bam, language:ban, language:bao, language:bar, language:bas, language:bav, language:bba, language:bbb, language:bbc, language:bbj, language:bbk, language:bbo, language:bbr, language:bch, language:bci, language:bcl, language:bco, language:bcw, language:bdd, language:bdh, language:bdq, language:bea, language:bef, language:bel, language:bem, language:ben, language:beq, language:bew, language:bex, language:bfd, language:bfo, language:bgr, language:bgs, language:bgt, language:bgz, language:bhg, language:bhl, language:bho, language:bhp, language:bhw, language:bhz, language:bib, language:big, language:bim, language:bin, language:bis, language:biu, language:biv, language:bjn, language:bjp, language:bjr, language:bjv, language:bkd, language:bkl, language:bkq, language:bku, language:bkv, language:bla, language:blh, language:blk, language:blw, language:blz, language:bmh, language:bmk, language:bmq, language:bmr, language:bmu, language:bmv, language:bno, language:bnp, language:boa, language:bod, language:boj, language:bom, language:bon, language:bos, language:bov, language:box, language:bpr, language:bps, language:bpy, language:bqc, language:bqj, language:bqp, language:bre, language:brh, language:bru, language:brx, language:bsc, language:bsn, language:bsp, language:bsq, language:bss, language:btd, language:bth, language:bts, language:btt, language:btx, language:bud, language:bug, language:buk, language:bul, language:bum, language:bus, language:bvc, language:bvd, language:bvr, language:bvz, language:bwd, language:bwi, language:bwq, language:bwu, 
language:bxh, language:bxr, language:byr, language:byv, language:byx, language:bzd, language:bzh, language:bzi, language:bzj, language:caa, language:cab, language:cac, language:caf, language:cag, language:cak, language:cao, language:cap, language:caq, language:car, language:cas, language:cat, language:cav, language:cax, language:cbc, language:cbi, language:cbk, language:cbr, language:cbs, language:cbt, language:cbu, language:cbv, language:cce, language:cco, language:ccp, language:ceb, language:ceg, language:cek, language:ces, language:cfm, language:cgc, language:cgg, language:cha, language:chd, language:che, language:chf, language:chj, language:chk, language:cho, language:chq, language:chr, language:chu, language:chv, language:chw, language:chz, language:cjk, language:cjo, language:cjp, language:cjs, language:cjv, language:ckb, language:cko, language:ckt, language:cle, language:clu, language:cly, language:cme, language:cmn, language:cmo, language:cmr, language:cnh, language:cni, language:cnk, language:cnl, language:cnt, language:cnw, language:coe, language:cof, language:cok, language:con, language:cop, language:cor, language:cos, language:cot, language:cou, language:cpa, language:cpb, language:cpc, language:cpu, language:cpy, language:crh, language:crj, language:crk, language:crl, language:crm, language:crn, language:crs, language:crt, language:crx, language:csb, language:csk, language:cso, language:csw, language:csy, language:cta, language:ctd, language:cto, language:ctp, language:ctu, language:cub, language:cuc, language:cui, language:cuk, language:cul, language:cut, language:cux, language:cwe, language:cwt, language:cya, language:cym, language:czt, language:daa, language:dad, language:daf, language:dag, language:dah, language:dak, language:dan, language:dar, language:ddg, language:ddn, language:ded, language:des, language:deu, language:dga, language:dgc, language:dgi, language:dgr, language:dgz, language:dhg, language:dhm, language:dhv, language:did, language:dig, language:dik, language:diq, language:dis, language:diu, language:div, language:dje, language:djk, language:djr, language:dks, language:dln, language:dng, language:dnj, language:dnw, language:dob, language:doi, language:dop, language:dos, language:dow, language:drg, language:dru, language:dsb, language:dtb, language:dtp, language:dts, language:dty, language:dua, language:due, language:dug, language:duo, language:dur, language:dwr, language:dww, language:dyi, language:dyo, language:dyu, language:dzo, language:ebk, language:efi, language:eka, language:ekk, language:eko, language:ell, language:emi, language:eml, language:emp, language:enb, language:enl, language:enm, language:enq, language:enx, language:epo, language:eri, language:ese, language:esi, language:esk, language:ess, language:esu, language:eto, language:etr, language:etu, language:eus, language:eve, language:ewe, language:ewo, language:ext, language:eza, language:faa, language:fad, language:fai, language:fal, language:fan, language:fao, language:far, language:fas, language:fat, language:ffm, language:fij, language:fil, language:fin, language:fit, language:fkv, language:fmu, language:fon, language:for, language:fra, language:frd, language:fro, language:frp, language:frr, language:fry, language:fub, language:fud, language:fue, language:fuf, language:fuh, language:fuq, language:fur, language:fuv, language:gaa, language:gag, language:gah, language:gai, language:gam, language:gaw, language:gaz, language:gbi, language:gbo, language:gbr, language:gcf, language:gcr, 
language:gde, language:gdg, language:gdn, language:gdr, language:geb, language:gej, language:gfk, language:ghs, language:gid, language:gil, language:giz, language:gjn, language:gkn, language:gla, language:gle, language:glg, language:glk, language:glv, language:gmh, language:gmv, language:gna, language:gnb, language:gnd, language:gng, language:gnn, language:gnw, language:goa, language:gof, language:gog, language:goh, language:gom, language:gor, language:gos, language:got, language:gqr, language:grc, language:grt, language:gso, language:gsw, language:gub, language:guc, language:gud, language:gug, language:guh, language:gui, language:guj, language:guk, language:gul, language:gum, language:gun, language:guo, language:guq, language:gur, language:guu, language:guw, language:gux, language:guz, language:gvc, language:gvf, language:gvl, language:gvn, language:gwi, language:gwr, language:gya, language:gym, language:gyr, language:hac, language:hae, language:hag, language:hak, language:hat, language:hav, language:haw, language:hay, language:hbo, language:hch, language:heb, language:heg, language:heh, language:her, language:hif, language:hig, language:hil, language:hin, language:hix, language:hla, language:hmo, language:hmr, language:hne, language:hnj, language:hnn, language:hns, language:hop, language:hot, language:hra, language:hrv, language:hrx, language:hsb, language:hto, language:hub, language:hui, language:hun, language:hus, language:huu, language:huv, language:hvn, language:hwc, language:hye, language:hyw, language:ian, language:iba, language:ibg, language:ibo, language:icr, language:ido, language:idu, language:ifa, language:ifb, language:ife, language:ifk, language:ifu, language:ify, language:ige, language:ign, language:ike, language:ikk, language:ikt, language:ikw, language:ilb, language:ile, language:ilo, language:imo, language:ina, language:inb, language:ind, language:inh, language:ino, language:iou, language:ipi, language:iqw, language:iri, language:irk, language:iry, language:isd, language:ish, language:isl, language:iso, language:ita, language:itv, language:ium, language:ivb, language:ivv, language:iws, language:ixl, language:izr, language:izz, language:jaa, language:jac, language:jae, language:jam, language:jav, language:jbo, language:jbu, language:jic, language:jiv, language:jmc, language:jpn, language:jra, language:jun, language:jvn, language:kaa, language:kab, language:kac, language:kak, language:kal, language:kam, language:kan, language:kao, language:kaq, language:kas, language:kat, language:kaz, language:kbc, language:kbd, language:kbh, language:kbm, language:kbo, language:kbp, language:kbq, language:kbr, language:kby, language:kca, language:kcg, language:kck, language:kdc, language:kde, language:kdh, language:kdi, language:kdj, language:kdl, language:kdr, language:kea, language:kei, language:kek, language:ken, language:keo, language:ker, language:kew, language:kez, language:kff, language:kgf, language:kgk, language:kgp, language:kgr, language:kha, language:khk, language:khm, language:khs, language:khz, language:kia, language:kij, language:kik, language:kin, language:kir, language:kiu, language:kix, language:kjb, language:kje, language:kjh, language:kjs, language:kkc, language:kki, language:kkj, language:kkl, language:kle, language:klt, language:klv, language:kmb, language:kmg, language:kmh, language:kmk, language:kmm, language:kmo, language:kmr, language:kms, language:kmu, language:kmy, language:knc, language:kne, language:knf, language:kng, language:knj, language:knk, 
language:kno, language:knv, language:knx, language:kny, language:kog, language:koi, language:koo, language:kor, language:kos, language:kpe, language:kpf, language:kpg, language:kpj, language:kpq, language:kpr, language:kpv, language:kpw, language:kpx, language:kpz, language:kqc, language:kqe, language:kqf, language:kql, language:kqn, language:kqo, language:kqp, language:kqs, language:kqw, language:kqy, language:krc, language:kri, language:krj, language:krl, language:kru, language:krx, language:ksb, language:ksc, language:ksd, language:ksf, language:ksh, language:ksj, language:ksp, language:ksr, language:kss, language:ksw, language:ktb, language:ktj, language:ktm, language:kto, language:ktu, language:ktz, language:kua, language:kub, language:kud, language:kue, language:kuj, language:kum, language:kup, language:kus, language:kvg, language:kvj, language:kvn, language:kwd, language:kwf, language:kwi, language:kwj, language:kwn, language:kwy, language:kxc, language:kxm, language:kxw, language:kyc, language:kyf, language:kyg, language:kyq, language:kyu, language:kyz, language:kze, language:kzf, language:kzj, language:lac, language:lad, language:lai, language:laj, language:lam, language:lao, language:lap, language:lat, language:lbb, language:lbe, language:lbj, language:lbk, language:lcm, language:lcp, language:ldi, language:ldn, language:lee, language:lef, language:leh, language:lem, language:leu, language:lew, language:lex, language:lez, language:lfn, language:lgg, language:lgl, language:lgm, language:lhi, language:lhu, language:lia, language:lid, language:lif, language:lij, language:lim, language:lin, language:lip, language:lis, language:lit, language:liv, language:ljp, language:lki, language:llb, language:lld, language:llg, language:lln, language:lmk, language:lmo, language:lmp, language:lnd, language:lob, language:loe, language:log, language:lok, language:lol, language:lom, language:loq, language:loz, language:lrc, language:lsi, language:lsm, language:ltg, language:ltz, language:lua, language:lub, language:luc, language:lud, language:lue, language:lug, language:lun, language:luo, language:lus, language:lvs, language:lwg, language:lwo, language:lww, language:lzh, language:maa, language:mad, language:maf, language:mag, language:mah, language:mai, language:maj, language:mak, language:mal, language:mam, language:maq, language:mar, language:mas, language:mau, language:mav, language:maw, language:maz, language:mbb, language:mbc, language:mbd, language:mbf, language:mbh, language:mbi, language:mbj, language:mbl, language:mbs, language:mbt, language:mca, language:mcb, language:mcd, language:mcf, language:mck, language:mcn, language:mco, language:mcp, language:mcq, language:mcu, language:mda, language:mdf, language:mdy, language:med, language:mee, language:mej, language:mek, language:men, language:meq, language:mer, language:met, language:meu, language:mev, language:mfe, language:mfg, language:mfh, language:mfi, language:mfk, language:mfq, language:mfy, language:mfz, language:mgc, language:mgh, language:mgo, language:mgr, language:mhi, language:mhl, language:mhr, language:mhw, language:mhx, language:mhy, language:mib, language:mic, language:mie, language:mif, language:mig, language:mih, language:mil, language:mim, language:min, language:mio, language:mip, language:miq, language:mir, language:mit, language:miy, language:miz, language:mjc, language:mjw, language:mkd, language:mkl, language:mkn, language:mks, language:mkz, language:mlh, language:mlp, language:mlt, language:mlu, language:mmn, 
language:mmo, language:mmx, language:mna, language:mnb, language:mnf, language:mni, language:mnk, language:mns, language:mnw, language:mnx, language:mny, language:moa, language:moc, language:mog, language:moh, language:mop, language:mor, language:mos, language:mox, language:mpg, language:mph, language:mpm, language:mpp, language:mps, language:mpt, language:mpx, language:mqb, language:mqj, language:mqy, language:mrg, language:mri, language:mrj, language:mrq, language:mrv, language:mrw, language:msb, language:msc, language:mse, language:msk, language:msy, language:mta, language:mtg, language:mti, language:mto, language:mtp, language:mua, language:mug, language:muh, language:mui, language:mup, language:mur, language:mus, language:mux, language:muy, language:mva, language:mvn, language:mvp, language:mwc, language:mwf, language:mwl, language:mwm, language:mwn, language:mwp, language:mwq, language:mwv, language:mww, language:mxb, language:mxp, language:mxq, language:mxt, language:mxv, language:mya, language:myb, language:myk, language:myu, language:myv, language:myw, language:myx, language:myy, language:mza, language:mzh, language:mzk, language:mzl, language:mzm, language:mzn, language:mzw, language:mzz, language:nab, language:naf, language:nah, language:nak, language:nap, language:naq, language:nas, language:nav, language:naw, language:nba, language:nbc, language:nbe, language:nbl, language:nbq, language:nbu, language:nca, language:nch, language:ncj, language:ncl, language:ncq, language:nct, language:ncu, language:ncx, language:ndc, language:nde, language:ndh, language:ndi, language:ndj, language:ndo, language:nds, language:ndz, language:neb, language:new, language:nfa, language:nfr, language:ngb, language:ngc, language:ngl, language:ngp, language:ngu, language:nhd, language:nhe, language:nhg, language:nhi, language:nhk, language:nho, language:nhr, language:nhu, language:nhw, language:nhx, language:nhy, language:nia, language:nif, language:nii, language:nij, language:nim, language:nin, language:nio, language:niu, language:niy, language:njb, language:njm, language:njn, language:njo, language:njz, language:nkf, language:nko, language:nld, language:nlg, language:nma, language:nmf, language:nmh, language:nmo, language:nmw, language:nmz, language:nnb, language:nng, language:nnh, language:nnl, language:nno, language:nnp, language:nnq, language:nnw, language:noa, language:nob, language:nod, language:nog, language:non, language:nop, language:not, language:nou, language:nov, language:nph, language:npi, language:npl, language:npo, language:npy, language:nqo, language:nre, language:nrf, language:nri, language:nrm, language:nsa, language:nse, language:nsm, language:nsn, language:nso, language:nss, language:nst, language:nsu, language:ntp, language:ntr, language:ntu, language:nuj, language:nus, language:nuy, language:nvm, language:nwb, language:nwi, language:nwx, language:nxd, language:nya, language:nyf, language:nyk, language:nyn, language:nyo, language:nyu, language:nyy, language:nza, language:nzi, language:nzm, language:obo, language:oci, language:ogo, language:ojb, language:oke, language:oku, language:okv, language:old, language:olo, language:omb, language:omw, language:ong, language:ons, language:ood, language:opm, language:orv, language:ory, language:oss, language:ota, language:otd, language:ote, language:otm, language:otn, language:oto, language:otq, language:ots, language:otw, language:oym, language:ozm, language:pab, language:pad, language:pag, language:pah, language:pam, language:pan, 
language:pao, language:pap, language:pau, language:pbb, language:pbc, language:pbi, language:pbt, language:pcd, language:pck, language:pcm, language:pdc, language:pdt, language:pem, language:pfe, language:pfl, language:phm, language:pib, language:pio, language:pir, language:pis, language:pjt, language:pkb, language:plg, language:pls, language:plt, language:plu, language:plw, language:pma, language:pmf, language:pmq, language:pms, language:pmx, language:pnb, language:pne, language:pnt, language:pny, language:poe, language:poh, language:poi, language:pol, language:pon, language:por, language:pos, language:pot, language:pov, language:poy, language:ppk, language:ppo, language:pps, language:prf, language:prg, language:pri, language:prq, language:pse, language:pss, language:ptp, language:ptu, language:pui, language:pwg, language:pwn, language:pww, language:pxm, language:qub, language:quc, language:quf, language:qug, language:quh, language:qul, language:qup, language:qus, language:quw, language:quy, language:quz, language:qva, language:qvc, language:qve, language:qvh, language:qvi, language:qvm, language:qvn, language:qvo, language:qvs, language:qvw, language:qvz, language:qwh, language:qxh, language:qxl, language:qxn, language:qxo, language:qxr, language:rad, language:rai, language:rap, language:rar, language:rav, language:raw, language:rcf, language:rej, language:rel, language:rgu, language:rhg, language:ria, language:rim, language:rjs, language:rkb, language:rmc, language:rme, language:rml, language:rmn, language:rmo, language:rmq, language:rmy, language:rnd, language:rng, language:rnl, language:roh, language:ron, language:roo, language:rop, language:row, language:rro, language:rtm, language:rub, language:rue, language:ruf, language:rug, language:run, language:rup, language:rus, language:rwo, language:sab, language:sag, language:sah, language:san, language:sas, language:sat, language:sba, language:sbd, language:sbe, language:sbl, language:sbs, language:sby, language:sck, language:scn, language:sco, language:sda, language:sdc, language:sdh, language:sdo, language:sdq, language:seh, language:ses, language:sey, language:sfw, language:sgb, language:sgc, language:sgh, language:sgs, language:sgw, language:sgz, language:shi, language:shk, language:shn, language:shp, language:shu, language:sid, language:sig, language:sil, language:sim, language:sin, language:sja, language:sjo, language:sju, language:skg, language:skr, language:sld, language:slk, language:sll, language:slv, language:sma, language:sme, language:smj, language:smk, language:sml, language:smn, language:smo, language:sms, language:smt, language:sna, language:snc, language:snd, language:snf, language:snn, language:snp, language:snw, language:sny, language:soe, language:som, language:sop, language:soq, language:sot, language:soy, language:spa, language:spl, language:spm, language:spp, language:sps, language:spy, language:srd, language:sri, language:srm, language:srn, language:srp, language:srq, language:srr, language:ssd, language:ssg, language:ssw, language:ssx, language:stn, language:stp, language:stq, language:sua, language:suc, language:sue, language:suk, language:sun, language:sur, language:sus, language:suz, language:swb, language:swc, language:swe, language:swg, language:swh, language:swk, language:swp, language:sxb, language:sxn, language:syb, language:syc, language:syl, language:szl, language:szy, language:tab, language:tac, language:tah, language:taj, language:tam, language:tap, language:taq, language:tar, language:tat, 
language:tav, language:taw, language:tay, language:tbc, language:tbg, language:tbk, language:tbl, language:tbo, language:tbw, language:tby, language:tbz, language:tca, language:tcc, language:tcf, language:tcs, language:tcy, language:tcz, language:ted, language:tee, language:tel, language:tem, language:teo, language:ter, language:tet, language:tew, language:tfr, language:tgk, language:tgo, language:tgp, language:tha, language:thk, language:thl, language:tif, language:tig, language:tih, language:tik, language:tim, language:tir, language:tiv, language:tiy, language:tke, language:tkl, language:tkr, language:tku, language:tlb, language:tlf, language:tlh, language:tlj, language:tll, language:tly, language:tmc, language:tmd, language:tna, language:tnc, language:tnk, language:tnn, language:tnp, language:tnr, language:tob, language:toc, language:tod, language:tog, language:toh, language:toi, language:toj, language:tok, language:ton, language:too, language:top, language:tos, language:tpa, language:tpi, language:tpm, language:tpp, language:tpt, language:tpw, language:tpz, language:tqo, language:trc, language:trn, language:tro, language:trp, language:trq, language:trs, language:trv, language:tsc, language:tsg, language:tsn, language:tso, language:tsw, language:tsz, language:ttc, language:tte, language:ttj, language:ttq, language:tuc, language:tue, language:tuf, language:tui, language:tuk, language:tul, language:tum, language:tuo, language:tur, language:tuv, language:tvk, language:tvl, language:twi, language:twu, language:twx, language:txq, language:txu, language:tyv, language:tzh, language:tzj, language:tzl, language:tzm, language:tzo, language:ubr, language:ubu, language:udm, language:udu, language:uig, language:ukr, language:umb, language:upv, language:ura, language:urb, language:urd, language:urh, language:uri, language:urk, language:urt, language:urw, language:ury, language:usa, language:usp, language:uth, language:uvh, language:uvl, language:uzn, language:uzs, language:vag, language:vap, language:var, language:vec, language:ven, language:vep, language:vid, language:vie, language:viv, language:vls, language:vmk, language:vmw, language:vmy, language:vol, language:vot, language:vro, language:vun, language:vut, language:waj, language:wal, language:wap, language:war, language:wat, language:way, language:wba, language:wbm, language:wbp, language:wed, language:wer, language:wes, language:wew, language:whg, language:whk, language:wib, language:wim, language:wiu, language:wln, language:wls, language:wlv, language:wlx, language:wmt, language:wmw, language:wnc, language:wnu, language:wob, language:wol, language:wos, language:wrk, language:wrs, language:wsg, language:wsk, language:wuu, language:wuv, language:wwa, language:xal, language:xav, language:xbi, language:xbr, language:xed, language:xho, language:xla, language:xmf, language:xmm, language:xmv, language:xnn, language:xog, language:xon, language:xrb, language:xsb, language:xsi, language:xsm, language:xsr, language:xsu, language:xtd, language:xtm, language:xtn, language:xuo, language:yaa, language:yad, language:yal, language:yam, language:yan, language:yao, language:yap, language:yaq, language:yat, language:yaz, language:ybb, language:yby, language:ycn, language:ydd, language:yim, language:yka, language:yle, language:yli, language:yml, language:yom, language:yon, language:yor, language:yrb, language:yre, language:yrk, language:yrl, language:yss, language:yua, language:yue, language:yuj, language:yup, language:yut, language:yuw, language:yuz, 
language:yva, language:zaa, language:zab, language:zac, language:zad, language:zae, language:zai, language:zam, language:zao, language:zar, language:zas, language:zat, language:zav, language:zaw, language:zca, language:zdj, language:zea, language:zgh, language:zia, language:ziw, language:zne, language:zom, language:zos, language:zpa, language:zpc, language:zpg, language:zpi, language:zpj, language:zpl, language:zpm, language:zpo, language:zpq, language:zpt, language:zpu, language:zpv, language:zpz, language:zsm, language:zsr, language:ztq, language:zty, language:zul, language:zyb, language:zyp, license:odc-by, size_categories:100M<n<1B, format:parquet, modality:tabular, modality:text, library:datasets, library:dask, library:mlcroissant, library:polars, arxiv:2506.18421, arxiv:2109.07445, region:us
- **License:** Not specified
## 📖 Description
```text
Liberating 3T of the finest tokens from PDFs
What is this?
As we run out of web pages to process, the natural question has always been: what to do next? Only a few knew about a data source that everyone avoided for ages, due to its incredible extraction cost and complexity: PDFs.
📄 FinePDFs is exactly that. It is the largest publicly available corpus sourced exclusively from PDFs, containing about 3 trillion tokens across 475 million documents in 1733 languages.
Compared to HTML… See the full description on the dataset page: https://huggingface.co/datasets/HuggingFaceFW/finepdfs....
```
## 📂 File System Sample
- `.gitattributes`
- `README.md`
- `data/aai_Latn/train/000_00000.parquet`
- `data/aak_Latn/train/000_00000.parquet`
- `data/aau_Latn/train/000_00000.parquet`
- `data/aaz_Latn/train/000_00000.parquet`
- `data/aba_Latn/train/000_00000.parquet`
- `data/abk_Cyrl/test/000_00000.parquet`
- `data/abk_Cyrl/train/000_00000.parquet`
- `data/abq_Cyrl/train/000_00000.parquet`
- `data/abs_Latn/test/000_00000.parquet`
- `data/abs_Latn/train/000_00000.parquet`
- `data/acm_Arab/test/000_00000.parquet`
- `data/acm_Arab/train/000_00000.parquet`
- `data/acr_Latn/train/000_00000.parquet`
- ... and more.
## 📊 Data Structure
### Config: `aai_Latn`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `text` | `str` |
| `id` | `str` |
| `dump` | `str` |
| `url` | `str` |
| `date` | `str` |
| `file_path` | `str` |
| `offset` | `int` |
| `token_count` | `int` |
| `language` | `str` |
| `page_average_lid` | `str` |
| `page_average_lid_score` | `float` |
| `full_doc_lid` | `str` |
| `full_doc_lid_score` | `float` |
| `per_page_languages` | `list` |
| `is_truncated` | `bool` |
| `extractor` | `str` |
| `page_ends` | `list` |
### Config: `aak_Latn`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `text` | `str` |
| `id` | `str` |
| `dump` | `str` |
| `url` | `str` |
| `date` | `str` |
| `file_path` | `str` |
| `offset` | `int` |
| `token_count` | `int` |
| `language` | `str` |
| `page_average_lid` | `str` |
| `page_average_lid_score` | `float` |
| `full_doc_lid` | `str` |
| `full_doc_lid_score` | `float` |
| `per_page_languages` | `list` |
| `is_truncated` | `bool` |
| `extractor` | `str` |
| `page_ends` | `list` |
### Config: `aau_Latn`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `text` | `str` |
| `id` | `str` |
| `dump` | `str` |
| `url` | `str` |
| `date` | `str` |
| `file_path` | `str` |
| `offset` | `int` |
| `token_count` | `int` |
| `language` | `str` |
| `page_average_lid` | `str` |
| `page_average_lid_score` | `float` |
| `full_doc_lid` | `str` |
| `full_doc_lid_score` | `float` |
| `per_page_languages` | `list` |
| `is_truncated` | `bool` |
| `extractor` | `str` |
| `page_ends` | `list` |
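Each per-language config follows the same schema shown above. A minimal sketch of reading one config with 🤗 `datasets` (the config name is taken from the listing above; streaming is used because the full corpus is large):

```python
from datasets import load_dataset

# Stream one per-language config so nothing has to be downloaded up front
ds = load_dataset("HuggingFaceFW/finepdfs", "aai_Latn", split="train", streaming=True)

for doc in ds:
    # Columns match the table above: text, language, token_count, page_ends, ...
    print(doc["language"], doc["token_count"])
    break
```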
|
legacy-datasets/wikipedia
|
legacy-datasets
|
task_categories:text-generation, task_categories:fill-mask, task_ids:language-modeling, task_ids:masked-language-modeling, annotations_creators:no-annotation, language_creators:crowdsourced, multilinguality:multilingual, source_datasets:original, language:aa, language:ab, language:ace, language:af, language:ak, language:als, language:am, language:an, language:ang, language:ar, language:arc, language:arz, language:as, language:ast, language:atj, language:av, language:ay, language:az, language:azb, language:ba, language:bar, language:bcl, language:be, language:bg, language:bh, language:bi, language:bjn, language:bm, language:bn, language:bo, language:bpy, language:br, language:bs, language:bug, language:bxr, language:ca, language:cbk, language:cdo, language:ce, language:ceb, language:ch, language:cho, language:chr, language:chy, language:ckb, language:co, language:cr, language:crh, language:cs, language:csb, language:cu, language:cv, language:cy, language:da, language:de, language:din, language:diq, language:dsb, language:dty, language:dv, language:dz, language:ee, language:el, language:eml, language:en, language:eo, language:es, language:et, language:eu, language:ext, language:fa, language:ff, language:fi, language:fj, language:fo, language:fr, language:frp, language:frr, language:fur, language:fy, language:ga, language:gag, language:gan, language:gd, language:gl, language:glk, language:gn, language:gom, language:gor, language:got, language:gu, language:gv, language:ha, language:hak, language:haw, language:he, language:hi, language:hif, language:ho, language:hr, language:hsb, language:ht, language:hu, language:hy, language:ia, language:id, language:ie, language:ig, language:ii, language:ik, language:ilo, language:inh, language:io, language:is, language:it, language:iu, language:ja, language:jam, language:jbo, language:jv, language:ka, language:kaa, language:kab, language:kbd, language:kbp, language:kg, language:ki, language:kj, language:kk, language:kl, language:km, language:kn, language:ko, language:koi, language:krc, language:ks, language:ksh, language:ku, language:kv, language:kw, language:ky, language:la, language:lad, language:lb, language:lbe, language:lez, language:lfn, language:lg, language:li, language:lij, language:lmo, language:ln, language:lo, language:lrc, language:lt, language:ltg, language:lv, language:lzh, language:mai, language:mdf, language:mg, language:mh, language:mhr, language:mi, language:min, language:mk, language:ml, language:mn, language:mr, language:mrj, language:ms, language:mt, language:mus, language:mwl, language:my, language:myv, language:mzn, language:na, language:nah, language:nan, language:nap, language:nds, language:ne, language:new, language:ng, language:nl, language:nn, language:no, language:nov, language:nrf, language:nso, language:nv, language:ny, language:oc, language:olo, language:om, language:or, language:os, language:pa, language:pag, language:pam, language:pap, language:pcd, language:pdc, language:pfl, language:pi, language:pih, language:pl, language:pms, language:pnb, language:pnt, language:ps, language:pt, language:qu, language:rm, language:rmy, language:rn, language:ro, language:ru, language:rue, language:rup, language:rw, language:sa, language:sah, language:sat, language:sc, language:scn, language:sco, language:sd, language:se, language:sg, language:sgs, language:sh, language:si, language:sk, language:sl, language:sm, language:sn, language:so, language:sq, language:sr, language:srn, language:ss, language:st, language:stq, language:su, 
language:sv, language:sw, language:szl, language:ta, language:tcy, language:tdt, language:te, language:tg, language:th, language:ti, language:tk, language:tl, language:tn, language:to, language:tpi, language:tr, language:ts, language:tt, language:tum, language:tw, language:ty, language:tyv, language:udm, language:ug, language:uk, language:ur, language:uz, language:ve, language:vec, language:vep, language:vi, language:vls, language:vo, language:vro, language:wa, language:war, language:wo, language:wuu, language:xal, language:xh, language:xmf, language:yi, language:yo, language:yue, language:za, language:zea, language:zh, language:zu, license:cc-by-sa-3.0, license:gfdl, size_categories:n<1K, region:us
|
community
|
# Dataset: `legacy-datasets/wikipedia`
## 📝 Metadata
- **Author/Owner:** legacy-datasets
- **Downloads:** 25757
- **Likes:** 604
- **Tags:** task_categories:text-generation, task_categories:fill-mask, task_ids:language-modeling, task_ids:masked-language-modeling, annotations_creators:no-annotation, language_creators:crowdsourced, multilinguality:multilingual, source_datasets:original, language:aa, language:ab, language:ace, language:af, language:ak, language:als, language:am, language:an, language:ang, language:ar, language:arc, language:arz, language:as, language:ast, language:atj, language:av, language:ay, language:az, language:azb, language:ba, language:bar, language:bcl, language:be, language:bg, language:bh, language:bi, language:bjn, language:bm, language:bn, language:bo, language:bpy, language:br, language:bs, language:bug, language:bxr, language:ca, language:cbk, language:cdo, language:ce, language:ceb, language:ch, language:cho, language:chr, language:chy, language:ckb, language:co, language:cr, language:crh, language:cs, language:csb, language:cu, language:cv, language:cy, language:da, language:de, language:din, language:diq, language:dsb, language:dty, language:dv, language:dz, language:ee, language:el, language:eml, language:en, language:eo, language:es, language:et, language:eu, language:ext, language:fa, language:ff, language:fi, language:fj, language:fo, language:fr, language:frp, language:frr, language:fur, language:fy, language:ga, language:gag, language:gan, language:gd, language:gl, language:glk, language:gn, language:gom, language:gor, language:got, language:gu, language:gv, language:ha, language:hak, language:haw, language:he, language:hi, language:hif, language:ho, language:hr, language:hsb, language:ht, language:hu, language:hy, language:ia, language:id, language:ie, language:ig, language:ii, language:ik, language:ilo, language:inh, language:io, language:is, language:it, language:iu, language:ja, language:jam, language:jbo, language:jv, language:ka, language:kaa, language:kab, language:kbd, language:kbp, language:kg, language:ki, language:kj, language:kk, language:kl, language:km, language:kn, language:ko, language:koi, language:krc, language:ks, language:ksh, language:ku, language:kv, language:kw, language:ky, language:la, language:lad, language:lb, language:lbe, language:lez, language:lfn, language:lg, language:li, language:lij, language:lmo, language:ln, language:lo, language:lrc, language:lt, language:ltg, language:lv, language:lzh, language:mai, language:mdf, language:mg, language:mh, language:mhr, language:mi, language:min, language:mk, language:ml, language:mn, language:mr, language:mrj, language:ms, language:mt, language:mus, language:mwl, language:my, language:myv, language:mzn, language:na, language:nah, language:nan, language:nap, language:nds, language:ne, language:new, language:ng, language:nl, language:nn, language:no, language:nov, language:nrf, language:nso, language:nv, language:ny, language:oc, language:olo, language:om, language:or, language:os, language:pa, language:pag, language:pam, language:pap, language:pcd, language:pdc, language:pfl, language:pi, language:pih, language:pl, language:pms, language:pnb, language:pnt, language:ps, language:pt, language:qu, language:rm, language:rmy, language:rn, language:ro, language:ru, language:rue, language:rup, language:rw, language:sa, language:sah, language:sat, language:sc, language:scn, language:sco, language:sd, language:se, language:sg, language:sgs, language:sh, language:si, language:sk, language:sl, language:sm, language:sn, language:so, language:sq, language:sr, language:srn, language:ss, language:st, language:stq, 
language:su, language:sv, language:sw, language:szl, language:ta, language:tcy, language:tdt, language:te, language:tg, language:th, language:ti, language:tk, language:tl, language:tn, language:to, language:tpi, language:tr, language:ts, language:tt, language:tum, language:tw, language:ty, language:tyv, language:udm, language:ug, language:uk, language:ur, language:uz, language:ve, language:vec, language:vep, language:vi, language:vls, language:vo, language:vro, language:wa, language:war, language:wo, language:wuu, language:xal, language:xh, language:xmf, language:yi, language:yo, language:yue, language:za, language:zea, language:zh, language:zu, license:cc-by-sa-3.0, license:gfdl, size_categories:n<1K, region:us
- **License:** Not specified
## 📖 Description
```text
Wikipedia dataset containing cleaned articles of all languages.
The datasets are built from the Wikipedia dump
(https://dumps.wikimedia.org/) with one split per language. Each example
contains the content of one full Wikipedia article with cleaning to strip
markdown and unwanted sections (references, etc.)....
```
## 📂 File System Sample
- `.gitattributes`
- `README.md`
- `data/20220301.de/train-00000-of-00018.parquet`
- `data/20220301.de/train-00001-of-00018.parquet`
- `data/20220301.de/train-00002-of-00018.parquet`
- `data/20220301.de/train-00003-of-00018.parquet`
- `data/20220301.de/train-00004-of-00018.parquet`
- `data/20220301.de/train-00005-of-00018.parquet`
- `data/20220301.de/train-00006-of-00018.parquet`
- `data/20220301.de/train-00007-of-00018.parquet`
- `data/20220301.de/train-00008-of-00018.parquet`
- `data/20220301.de/train-00009-of-00018.parquet`
- `data/20220301.de/train-00010-of-00018.parquet`
- `data/20220301.de/train-00011-of-00018.parquet`
- `data/20220301.de/train-00012-of-00018.parquet`
- ... and more.
## 📊 Data Structure
**Graceful Failure:**
```
Could not inspect the dataset's structure.
This is common for complex datasets that require executing remote code, which is disabled for stability.
Details: Dataset scripts are no longer supported, but found wikipedia.py
```
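The loading script is why inspection fails; the pre-built parquet shards can still be read directly. A hedged sketch (the `hf://` path pattern is inferred from the file listing above and not verified here):

```python
from datasets import load_dataset

# Read the 20220301.de parquet shards directly, bypassing wikipedia.py
de_wiki = load_dataset(
    "parquet",
    data_files="hf://datasets/legacy-datasets/wikipedia/data/20220301.de/train-*.parquet",
    split="train",
)
print(de_wiki[0].keys())
```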
|
gretelai/synthetic_text_to_sql
|
gretelai
|
task_categories:question-answering, task_categories:table-question-answering, task_categories:text-generation, language:en, license:apache-2.0, size_categories:100K<n<1M, format:parquet, modality:text, library:datasets, library:pandas, library:mlcroissant, library:polars, arxiv:2306.05685, region:us, synthetic, SQL, text-to-SQL, code
|
community
|
# Dataset: `gretelai/synthetic_text_to_sql`
## 📝 Metadata
- **Author/Owner:** gretelai
- **Downloads:** 2422
- **Likes:** 601
- **Tags:** task_categories:question-answering, task_categories:table-question-answering, task_categories:text-generation, language:en, license:apache-2.0, size_categories:100K<n<1M, format:parquet, modality:text, library:datasets, library:pandas, library:mlcroissant, library:polars, arxiv:2306.05685, region:us, synthetic, SQL, text-to-SQL, code
- **License:** Not specified
## 📖 Description
```text
Image generated by DALL-E. See prompt for more details
synthetic_text_to_sql
gretelai/synthetic_text_to_sql is a rich dataset of high quality synthetic Text-to-SQL samples,
designed and generated using Gretel Navigator, and released under Apache 2.0.
Please see our release blogpost for more details.
The dataset includes:
105,851 records partitioned into 100,000 train and 5,851 test records
~23M total tokens, including ~12M SQL tokens
Coverage across 100 distinct… See the full description on the dataset page: https://huggingface.co/datasets/gretelai/synthetic_text_to_sql....
```
## 📂 File System Sample
- `.gitattributes`
- `README.md`
- `bmc2_llm_judge_example_1.txt`
- `bmc2_llm_judge_example_2.txt`
- `dalle_prompt.txt`
- `llm_as_a_judge_rubric.txt`
- `synthetic_text_to_sql_test.snappy.parquet`
- `synthetic_text_to_sql_train.snappy.parquet`
## 📊 Data Structure
### Config: `default`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `id` | `int` |
| `domain` | `str` |
| `domain_description` | `str` |
| `sql_complexity` | `str` |
| `sql_complexity_description` | `str` |
| `sql_task_type` | `str` |
| `sql_task_type_description` | `str` |
| `sql_prompt` | `str` |
| `sql_context` | `str` |
| `sql` | `str` |
| `sql_explanation` | `str` |
#### Split: `test`
| Column Name | Data Type |
|---|---|
| `id` | `int` |
| `domain` | `str` |
| `domain_description` | `str` |
| `sql_complexity` | `str` |
| `sql_complexity_description` | `str` |
| `sql_task_type` | `str` |
| `sql_task_type_description` | `str` |
| `sql_prompt` | `str` |
| `sql_context` | `str` |
| `sql` | `str` |
| `sql_explanation` | `str` |
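Both splits share the columns above. A minimal sketch of loading them and reading one record (the usage of each column is inferred from the schema, not from the dataset card):

```python
from datasets import load_dataset

train = load_dataset("gretelai/synthetic_text_to_sql", split="train")
test = load_dataset("gretelai/synthetic_text_to_sql", split="test")

row = train[0]
print(row["sql_prompt"])   # natural-language question
print(row["sql_context"])  # schema / context the query runs against
print(row["sql"])          # reference SQL answer
```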
|
ILSVRC/imagenet-1k
|
ILSVRC
|
task_categories:image-classification, task_ids:multi-class-image-classification, annotations_creators:crowdsourced, language_creators:crowdsourced, multilinguality:monolingual, source_datasets:original, language:en, license:other, size_categories:1M<n<10M, format:parquet, modality:image, library:datasets, library:dask, library:mlcroissant, library:polars, arxiv:1409.0575, arxiv:1912.07726, arxiv:1811.12231, arxiv:2109.13228, region:us
|
community
|
# Dataset: `ILSVRC/imagenet-1k`
## 📝 Metadata
- **Author/Owner:** ILSVRC
- **Downloads:** 48259
- **Likes:** 592
- **Tags:** task_categories:image-classification, task_ids:multi-class-image-classification, annotations_creators:crowdsourced, language_creators:crowdsourced, multilinguality:monolingual, source_datasets:original, language:en, license:other, size_categories:1M<n<10M, format:parquet, modality:image, library:datasets, library:dask, library:mlcroissant, library:polars, arxiv:1409.0575, arxiv:1912.07726, arxiv:1811.12231, arxiv:2109.13228, region:us
- **License:** Not specified
## 📖 Description
```text
Dataset Card for ImageNet
Dataset Summary
ILSVRC 2012, commonly known as 'ImageNet', is an image dataset organized according to the WordNet hierarchy. Each meaningful concept in WordNet, possibly described by multiple words or word phrases, is called a "synonym set" or "synset". There are more than 100,000 synsets in WordNet, the majority of them nouns (80,000+). ImageNet aims to provide on average 1000 images to illustrate each synset. Images of each concept are… See the full description on the dataset page: https://huggingface.co/datasets/ILSVRC/imagenet-1k....
```
## 📂 File System Sample
- `.gitattributes`
- `README.md`
- `classes.py`
- `data/test-00000-of-00028.parquet`
- `data/test-00001-of-00028.parquet`
- `data/test-00002-of-00028.parquet`
- `data/test-00003-of-00028.parquet`
- `data/test-00004-of-00028.parquet`
- `data/test-00005-of-00028.parquet`
- `data/test-00006-of-00028.parquet`
- `data/test-00007-of-00028.parquet`
- `data/test-00008-of-00028.parquet`
- `data/test-00009-of-00028.parquet`
- `data/test-00010-of-00028.parquet`
- `data/test-00011-of-00028.parquet`
- ... and more.
## 📊 Data Structure
**Graceful Failure:**
```
Could not inspect the dataset's structure.
This is common for complex datasets that require executing remote code, which is disabled for stability.
Details: Dataset 'ILSVRC/imagenet-1k' is a gated dataset on the Hub. Visit the dataset page at https://huggingface.co/datasets/ILSVRC/imagenet-1k to ask for access.
```
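The failure above is expected for gated repositories: access must first be requested on the dataset page, after which an authenticated client can load it. A sketch assuming access has been granted (split names are not verified here; streaming avoids the full multi-hundred-GB download):

```python
from huggingface_hub import login
from datasets import load_dataset

login()  # or `huggingface-cli login`, or set the HF_TOKEN environment variable

ds = load_dataset("ILSVRC/imagenet-1k", split="train", streaming=True)
sample = next(iter(ds))
print(sample.keys())  # typically an image plus an integer label
```

The same pattern applies to the other gated datasets in this list (CulturaX, xlam-function-calling-60k, GuanacoDataset).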
|
HuggingFaceH4/ultrachat_200k
|
HuggingFaceH4
|
task_categories:text-generation, language:en, license:mit, size_categories:100K<n<1M, format:parquet, modality:text, library:datasets, library:dask, library:mlcroissant, library:polars, arxiv:2305.14233, region:us
|
community
|
# Dataset: `HuggingFaceH4/ultrachat_200k`
## 📝 Metadata
- **Author/Owner:** HuggingFaceH4
- **Downloads:** 21877
- **Likes:** 590
- **Tags:** task_categories:text-generation, language:en, license:mit, size_categories:100K<n<1M, format:parquet, modality:text, library:datasets, library:dask, library:mlcroissant, library:polars, arxiv:2305.14233, region:us
- **License:** Not specified
## 📖 Description
```text
Dataset Card for UltraChat 200k
Dataset Description
This is a heavily filtered version of the UltraChat dataset and was used to train Zephyr-7B-β, a state of the art 7b chat model.
The original dataset consists of 1.4M dialogues generated by ChatGPT, spanning a wide range of topics. To create UltraChat 200k, we applied the following logic:
Selection of a subset of data for faster supervised fine tuning.
Truecasing of the dataset, as we observed around 5% of the data… See the full description on the dataset page: https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k....
```
## 📂 File System Sample
- `.gitattributes`
- `README.md`
- `data/test_gen-00000-of-00001-3d4cd8309148a71f.parquet`
- `data/test_sft-00000-of-00001-f7dfac4afe5b93f4.parquet`
- `data/train_gen-00000-of-00003-a6c9fb894be3e50b.parquet`
- `data/train_gen-00001-of-00003-d6a0402e417f35ca.parquet`
- `data/train_gen-00002-of-00003-c0db75b92a2f48fd.parquet`
- `data/train_sft-00000-of-00003-a3ecf92756993583.parquet`
- `data/train_sft-00001-of-00003-0a1804bcb6ae68c6.parquet`
- `data/train_sft-00002-of-00003-ee46ed25cfae92c6.parquet`
## 📊 Data Structure
### Config: `default`
#### Split: `train_sft`
| Column Name | Data Type |
|---|---|
| `prompt` | `str` |
| `prompt_id` | `str` |
| `messages` | `list` |
#### Split: `test_sft`
| Column Name | Data Type |
|---|---|
| `prompt` | `str` |
| `prompt_id` | `str` |
| `messages` | `list` |
#### Split: `train_gen`
| Column Name | Data Type |
|---|---|
| `prompt` | `str` |
| `prompt_id` | `str` |
| `messages` | `list` |
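Note the non-default split names. A minimal sketch of loading the SFT portion (the role/content layout of `messages` is the usual chat format and is assumed here, not shown in the tables):

```python
from datasets import load_dataset

sft = load_dataset("HuggingFaceH4/ultrachat_200k", split="train_sft")

row = sft[0]
print(row["prompt"])
for turn in row["messages"][:2]:
    # each turn is assumed to be {"role": ..., "content": ...}
    print(turn["role"], ":", turn["content"][:80])
```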
|
nvidia/Llama-Nemotron-Post-Training-Dataset
|
nvidia
|
license:cc-by-4.0, size_categories:1M<n<10M, format:json, modality:text, library:datasets, library:dask, library:mlcroissant, arxiv:2505.00949, region:us
|
community
|
# Dataset: `nvidia/Llama-Nemotron-Post-Training-Dataset`
## 📝 Metadata
- **Author/Owner:** nvidia
- **Downloads:** 3681
- **Likes:** 588
- **Tags:** license:cc-by-4.0, size_categories:1M<n<10M, format:json, modality:text, library:datasets, library:dask, library:mlcroissant, arxiv:2505.00949, region:us
- **License:** Not specified
## 📖 Description
```text
Llama-Nemotron-Post-Training-Dataset-v1.1 Release
Update [4/8/2025]:
v1.1: We are releasing an additional 2.2M Math and 500K Code Reasoning Data in support of our release of Llama-3.1-Nemotron-Ultra-253B-v1. 🎉
Data Overview
This dataset is a compilation of SFT and RL data that supports improvements of math, code, general reasoning, and instruction following capabilities of the original Llama instruct model, in support of NVIDIA’s release of… See the full description on the dataset page: https://huggingface.co/datasets/nvidia/Llama-Nemotron-Post-Training-Dataset....
```
## 📂 File System Sample
- `.gitattributes`
- `README.md`
- `RL/instruction_following/instruction_following.jsonl`
- `SFT/chat/chat.jsonl`
- `SFT/code/code_v1.1.jsonl`
- `SFT/code/code_v1.jsonl`
- `SFT/math/math_v1.1.jsonl`
- `SFT/math/math_v1.jsonl`
- `SFT/safety/safety.jsonl`
- `SFT/science/science.jsonl`
- `train/when2call_train_pref.jsonl`
- `train/when2call_train_sft.jsonl`
## 📊 Data Structure
### Config: `SFT`
#### Split: `code`
| Column Name | Data Type |
|---|---|
| `input` | `list` |
| `output` | `str` |
| `category` | `str` |
| `license` | `str` |
| `reasoning` | `str` |
| `generator` | `str` |
| `used_in_training` | `str` |
| `version` | `str` |
| `system_prompt` | `str` |
#### Split: `math`
| Column Name | Data Type |
|---|---|
| `input` | `list` |
| `output` | `str` |
| `category` | `str` |
| `license` | `str` |
| `reasoning` | `str` |
| `generator` | `str` |
| `used_in_training` | `str` |
| `version` | `str` |
| `system_prompt` | `str` |
#### Split: `science`
| Column Name | Data Type |
|---|---|
| `input` | `list` |
| `output` | `str` |
| `category` | `str` |
| `license` | `str` |
| `reasoning` | `str` |
| `generator` | `str` |
| `used_in_training` | `str` |
| `version` | `str` |
| `system_prompt` | `str` |
### Config: `RL`
#### Split: `instruction_following`
| Column Name | Data Type |
|---|---|
| `input` | `list` |
| `args` | `dict` |
| `category` | `str` |
| `license` | `str` |
| `reasoning` | `str` |
| `used_in_training` | `str` |
| `version` | `str` |
| `system_prompt` | `str` |
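Here the subsets appear as named splits under the `SFT` and `RL` configs. A hedged sketch of pulling one of them (config and split names taken from the tables above):

```python
from datasets import load_dataset

# Config "SFT", split "math"
math_sft = load_dataset("nvidia/Llama-Nemotron-Post-Training-Dataset", "SFT", split="math")

row = math_sft[0]
print(row["category"], row["generator"])
print(row["output"][:200])  # `input` is a list of chat turns, `output` the target string
```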
|
cais/mmlu
|
cais
|
task_categories:question-answering, task_ids:multiple-choice-qa, annotations_creators:no-annotation, language_creators:expert-generated, multilinguality:monolingual, source_datasets:original, language:en, license:mit, size_categories:100K<n<1M, format:parquet, modality:text, library:datasets, library:pandas, library:mlcroissant, library:polars, arxiv:2009.03300, arxiv:2005.00700, arxiv:2005.14165, arxiv:2008.02275, region:us
|
community
|
# Dataset: `cais/mmlu`
## 📝 Metadata
- **Author/Owner:** cais
- **Downloads:** 256770
- **Likes:** 570
- **Tags:** task_categories:question-answering, task_ids:multiple-choice-qa, annotations_creators:no-annotation, language_creators:expert-generated, multilinguality:monolingual, source_datasets:original, language:en, license:mit, size_categories:100K<n<1M, format:parquet, modality:text, library:datasets, library:pandas, library:mlcroissant, library:polars, arxiv:2009.03300, arxiv:2005.00700, arxiv:2005.14165, arxiv:2008.02275, region:us
- **License:** Not specified
## 📖 Description
```text
Dataset Card for MMLU
Dataset Summary
Measuring Massive Multitask Language Understanding by Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt (ICLR 2021).
This is a massive multitask test consisting of multiple-choice questions from various branches of knowledge. The test spans subjects in the humanities, social sciences, hard sciences, and other areas that are important for some people to learn. This covers 57 tasks… See the full description on the dataset page: https://huggingface.co/datasets/cais/mmlu....
```
## 📂 File System Sample
- `.gitattributes`
- `README.md`
- `abstract_algebra/dev-00000-of-00001.parquet`
- `abstract_algebra/test-00000-of-00001.parquet`
- `abstract_algebra/validation-00000-of-00001.parquet`
- `all/auxiliary_train-00000-of-00001.parquet`
- `all/dev-00000-of-00001.parquet`
- `all/test-00000-of-00001.parquet`
- `all/validation-00000-of-00001.parquet`
- `anatomy/dev-00000-of-00001.parquet`
- `anatomy/test-00000-of-00001.parquet`
- `anatomy/validation-00000-of-00001.parquet`
- `astronomy/dev-00000-of-00001.parquet`
- `astronomy/test-00000-of-00001.parquet`
- `astronomy/validation-00000-of-00001.parquet`
- ... and more.
## 📊 Data Structure
### Config: `abstract_algebra`
#### Split: `test`
| Column Name | Data Type |
|---|---|
| `question` | `str` |
| `subject` | `str` |
| `choices` | `list` |
| `answer` | `int` |
#### Split: `validation`
| Column Name | Data Type |
|---|---|
| `question` | `str` |
| `subject` | `str` |
| `choices` | `list` |
| `answer` | `int` |
#### Split: `dev`
| Column Name | Data Type |
|---|---|
| `question` | `str` |
| `subject` | `str` |
| `choices` | `list` |
| `answer` | `int` |
### Config: `all`
#### Split: `test`
| Column Name | Data Type |
|---|---|
| `question` | `str` |
| `subject` | `str` |
| `choices` | `list` |
| `answer` | `int` |
#### Split: `validation`
| Column Name | Data Type |
|---|---|
| `question` | `str` |
| `subject` | `str` |
| `choices` | `list` |
| `answer` | `int` |
#### Split: `dev`
| Column Name | Data Type |
|---|---|
| `question` | `str` |
| `subject` | `str` |
| `choices` | `list` |
| `answer` | `int` |
### Config: `anatomy`
#### Split: `test`
| Column Name | Data Type |
|---|---|
| `question` | `str` |
| `subject` | `str` |
| `choices` | `list` |
| `answer` | `int` |
#### Split: `validation`
| Column Name | Data Type |
|---|---|
| `question` | `str` |
| `subject` | `str` |
| `choices` | `list` |
| `answer` | `int` |
#### Split: `dev`
| Column Name | Data Type |
|---|---|
| `question` | `str` |
| `subject` | `str` |
| `choices` | `list` |
| `answer` | `int` |
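Every subject is its own config, with `all` aggregating them. A minimal sketch of loading one subject (reading `answer` as an index into `choices` follows the standard MMLU convention and is an assumption here):

```python
from datasets import load_dataset

anatomy = load_dataset("cais/mmlu", "anatomy", split="test")

q = anatomy[0]
print(q["question"])
print(q["choices"])  # four answer options
print(q["answer"])   # integer index of the correct choice
```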
|
liwu/MNBVC
|
liwu
|
task_categories:text-generation, task_categories:fill-mask, task_ids:language-modeling, task_ids:masked-language-modeling, annotations_creators:other, language_creators:other, multilinguality:monolingual, source_datasets:original, language:zh, license:mit, region:us
|
community
|
# Dataset: `liwu/MNBVC`
## 📝 Metadata
- **Author/Owner:** liwu
- **Downloads:** 59395
- **Likes:** 560
- **Tags:** task_categories:text-generation, task_categories:fill-mask, task_ids:language-modeling, task_ids:masked-language-modeling, annotations_creators:other, language_creators:other, multilinguality:monolingual, source_datasets:original, language:zh, license:mit, region:us
- **License:** Not specified
## 📖 Description
```text
MNBVC: Massive Never-ending BT Vast Chinese corpus...
```
## 📂 File System Sample
- `.gitattributes`
- `MNBVC.py`
- `README.md`
- `academic_paper/arxiv/20230241/arxivCode.0.jsonl.gz`
- `academic_paper/arxiv/20230241/arxivCode.1.jsonl.gz`
- `academic_paper/arxiv/20230241/arxivCode.10.jsonl.gz`
- `academic_paper/arxiv/20230241/arxivCode.11.jsonl.gz`
- `academic_paper/arxiv/20230241/arxivCode.12.jsonl.gz`
- `academic_paper/arxiv/20230241/arxivCode.13.jsonl.gz`
- `academic_paper/arxiv/20230241/arxivCode.14.jsonl.gz`
- `academic_paper/arxiv/20230241/arxivCode.15.jsonl.gz`
- `academic_paper/arxiv/20230241/arxivCode.16.jsonl.gz`
- `academic_paper/arxiv/20230241/arxivCode.17.jsonl.gz`
- `academic_paper/arxiv/20230241/arxivCode.18.jsonl.gz`
- `academic_paper/arxiv/20230241/arxivCode.19.jsonl.gz`
- ... and more.
## 📊 Data Structure
**Graceful Failure:**
```
Could not inspect the dataset's structure.
This is common for complex datasets that require executing remote code, which is disabled for stability.
Details: Dataset scripts are no longer supported, but found MNBVC.py
```
|
liuhaotian/LLaVA-Instruct-150K
|
liuhaotian
|
task_categories:visual-question-answering, task_categories:question-answering, language:en, license:cc-by-4.0, size_categories:100K<n<1M, region:us
|
community
|
# Dataset: `liuhaotian/LLaVA-Instruct-150K`
## 📝 Metadata
- **Author/Owner:** liuhaotian
- **Downloads:** 3059
- **Likes:** 550
- **Tags:** task_categories:visual-question-answering, task_categories:question-answering, language:en, license:cc-by-4.0, size_categories:100K<n<1M, region:us
- **License:** Not specified
## 📖 Description
```text
LLaVA Visual Instruct 150K Dataset Card
Dataset details
Dataset type:
LLaVA Visual Instruct 150K is a set of GPT-generated multimodal instruction-following data.
It is constructed for visual instruction tuning and for building large multimodal models towards GPT-4 vision/language capability.
Dataset date:
LLaVA Visual Instruct 150K was collected in April 2023, by prompting GPT-4-0314 API.
Paper or resources for more information:
https://llava-vl.github.io/
License:
Creative… See the full description on the dataset page: https://huggingface.co/datasets/liuhaotian/LLaVA-Instruct-150K....
```
## 📂 File System Sample
- `.gitattributes`
- `README.md`
- `complex_reasoning_77k.json`
- `conversation_58k.json`
- `detail_23k.json`
- `llava_instruct_150k.json`
- `llava_instruct_80k.json`
- `llava_v1_5_mix665k.json`
## 📊 Data Structure
### Config: `default`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `id` | `str` |
| `image` | `str` |
| `conversations` | `list` |
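A hedged sketch of loading the default config (this assumes the JSON files resolve into the single `train` split shown above; the `image` column holds a file name, and the pixels themselves are typically fetched separately, e.g. from COCO):

```python
from datasets import load_dataset

ds = load_dataset("liuhaotian/LLaVA-Instruct-150K", split="train")

row = ds[0]
print(row["image"])          # image file name, not the image bytes
print(row["conversations"])  # list of instruction-following turns
```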
|
uonlp/CulturaX
|
uonlp
|
task_categories:text-generation, task_categories:fill-mask, task_ids:language-modeling, task_ids:masked-language-modeling, annotations_creators:no-annotation, language_creators:found, multilinguality:multilingual, source_datasets:original, language:af, language:als, language:am, language:an, language:ar, language:arz, language:as, language:ast, language:av, language:az, language:azb, language:ba, language:bar, language:bcl, language:be, language:bg, language:bh, language:bn, language:bo, language:bpy, language:br, language:bs, language:bxr, language:ca, language:cbk, language:ce, language:ceb, language:ckb, language:cs, language:cv, language:cy, language:da, language:de, language:dsb, language:dv, language:el, language:eml, language:en, language:eo, language:es, language:et, language:eu, language:fa, language:fi, language:fr, language:frr, language:fy, language:ga, language:gd, language:gl, language:gn, language:gom, language:gu, language:he, language:hi, language:hr, language:hsb, language:ht, language:hu, language:hy, language:ia, language:id, language:ie, language:ilo, language:io, language:is, language:it, language:ja, language:jbo, language:jv, language:ka, language:kk, language:km, language:kn, language:ko, language:krc, language:ku, language:kv, language:kw, language:ky, language:la, language:lb, language:lez, language:li, language:lmo, language:lo, language:lrc, language:lt, language:lv, language:mai, language:mg, language:mhr, language:min, language:mk, language:ml, language:mn, language:mr, language:mrj, language:ms, language:mt, language:mwl, language:my, language:myv, language:mzn, language:nah, language:nap, language:nds, language:ne, language:new, language:nl, language:nn, language:no, language:oc, language:or, language:os, language:pa, language:pam, language:pl, language:pms, language:pnb, language:ps, language:pt, language:qu, language:rm, language:ro, language:ru, language:rue, language:sa, language:sah, language:scn, language:sd, language:sh, language:si, language:sk, language:sl, language:so, language:sq, language:sr, language:su, language:sv, language:sw, language:ta, language:te, language:tg, language:th, language:tk, language:tl, language:tr, language:tt, language:tyv, language:ug, language:uk, language:ur, language:uz, language:vec, language:vi, language:vls, language:vo, language:wa, language:war, language:wuu, language:xal, language:xmf, language:yi, language:yo, language:yue, language:zh, size_categories:1B<n<10B, format:parquet, modality:text, library:datasets, library:dask, library:mlcroissant, library:polars, arxiv:2309.09400, region:us
|
community
|
# Dataset: `uonlp/CulturaX`
## 📝 Metadata
- **Author/Owner:** uonlp
- **Downloads:** 49428
- **Likes:** 547
- **Tags:** task_categories:text-generation, task_categories:fill-mask, task_ids:language-modeling, task_ids:masked-language-modeling, annotations_creators:no-annotation, language_creators:found, multilinguality:multilingual, source_datasets:original, language:af, language:als, language:am, language:an, language:ar, language:arz, language:as, language:ast, language:av, language:az, language:azb, language:ba, language:bar, language:bcl, language:be, language:bg, language:bh, language:bn, language:bo, language:bpy, language:br, language:bs, language:bxr, language:ca, language:cbk, language:ce, language:ceb, language:ckb, language:cs, language:cv, language:cy, language:da, language:de, language:dsb, language:dv, language:el, language:eml, language:en, language:eo, language:es, language:et, language:eu, language:fa, language:fi, language:fr, language:frr, language:fy, language:ga, language:gd, language:gl, language:gn, language:gom, language:gu, language:he, language:hi, language:hr, language:hsb, language:ht, language:hu, language:hy, language:ia, language:id, language:ie, language:ilo, language:io, language:is, language:it, language:ja, language:jbo, language:jv, language:ka, language:kk, language:km, language:kn, language:ko, language:krc, language:ku, language:kv, language:kw, language:ky, language:la, language:lb, language:lez, language:li, language:lmo, language:lo, language:lrc, language:lt, language:lv, language:mai, language:mg, language:mhr, language:min, language:mk, language:ml, language:mn, language:mr, language:mrj, language:ms, language:mt, language:mwl, language:my, language:myv, language:mzn, language:nah, language:nap, language:nds, language:ne, language:new, language:nl, language:nn, language:no, language:oc, language:or, language:os, language:pa, language:pam, language:pl, language:pms, language:pnb, language:ps, language:pt, language:qu, language:rm, language:ro, language:ru, language:rue, language:sa, language:sah, language:scn, language:sd, language:sh, language:si, language:sk, language:sl, language:so, language:sq, language:sr, language:su, language:sv, language:sw, language:ta, language:te, language:tg, language:th, language:tk, language:tl, language:tr, language:tt, language:tyv, language:ug, language:uk, language:ur, language:uz, language:vec, language:vi, language:vls, language:vo, language:wa, language:war, language:wuu, language:xal, language:xmf, language:yi, language:yo, language:yue, language:zh, size_categories:1B<n<10B, format:parquet, modality:text, library:datasets, library:dask, library:mlcroissant, library:polars, arxiv:2309.09400, region:us
- **License:** Not specified
## 📖 Description
```text
CulturaX
Cleaned, Enormous, and Public: The Multilingual Fuel to Democratize Large Language Models for 167 Languages
Dataset Summary
We present CulturaX, a substantial multilingual dataset with 6.3 trillion tokens in 167 languages, tailored for large language model (LLM) development. Our dataset undergoes meticulous cleaning and deduplication through a rigorous pipeline of multiple stages to accomplish the best quality for model training, including language… See the full description on the dataset page: https://huggingface.co/datasets/uonlp/CulturaX....
```
## 📂 File System Sample
- `.gitattributes`
- `CulturaX_loading_script.py`
- `README.md`
- `af/af_part_00000.parquet`
- `af/checksum.sha256`
- `als/als_part_00000.parquet`
- `als/checksum.sha256`
- `am/am_part_00000.parquet`
- `am/checksum.sha256`
- `an/an_part_00000.parquet`
- `an/checksum.sha256`
- `ar/ar_part_00000.parquet`
- `ar/ar_part_00001.parquet`
- `ar/ar_part_00002.parquet`
- `ar/ar_part_00003.parquet`
- ... and more.
## 📊 Data Structure
**Graceful Failure:**
```
Could not inspect the dataset's structure.
This is common for complex datasets that require executing remote code, which is disabled for stability.
Details: Dataset 'uonlp/CulturaX' is a gated dataset on the Hub. Visit the dataset page at https://huggingface.co/datasets/uonlp/CulturaX to ask for access.
```
|
Salesforce/xlam-function-calling-60k
|
Salesforce
|
task_categories:question-answering, task_categories:text-generation, task_categories:reinforcement-learning, language:en, license:cc-by-4.0, size_categories:10K<n<100K, format:json, modality:text, library:datasets, library:pandas, library:mlcroissant, library:polars, arxiv:2406.18518, region:us, function-calling, LLM Agent, code, synthetic
|
community
|
# Dataset: `Salesforce/xlam-function-calling-60k`
## 📝 Metadata
- **Author/Owner:** Salesforce
- **Downloads:** 5819
- **Likes:** 538
- **Tags:** task_categories:question-answering, task_categories:text-generation, task_categories:reinforcement-learning, language:en, license:cc-by-4.0, size_categories:10K<n<100K, format:json, modality:text, library:datasets, library:pandas, library:mlcroissant, library:polars, arxiv:2406.18518, region:us, function-calling, LLM Agent, code, synthetic
- **License:** Not specified
## 📖 Description
```text
APIGen Function-Calling Datasets
Paper | Website | Models
This repo contains 60,000 examples collected by APIGen, an automated data generation pipeline designed to produce verifiable high-quality datasets for function-calling applications. Each entry in our dataset is verified through three hierarchical stages: format checking, actual function executions, and semantic verification, ensuring its reliability and correctness.
We conducted human evaluation over 600 sampled data points, and… See the full description on the dataset page: https://huggingface.co/datasets/Salesforce/xlam-function-calling-60k....
```
## 📂 File System Sample
- `.gitattributes`
- `README.md`
- `figures/dataset_pie_chart.png`
- `figures/function-call-overview.png`
- `figures/overview.jpg`
- `xlam_function_calling_60k.json`
## 📊 Data Structure
**Graceful Failure:**
```
Could not inspect the dataset's structure.
This is common for complex datasets that require executing remote code, which is disabled for stability.
Details: Dataset 'Salesforce/xlam-function-calling-60k' is a gated dataset on the Hub. Visit the dataset page at https://huggingface.co/datasets/Salesforce/xlam-function-calling-60k to ask for access.
```
|
facebook/natural_reasoning
|
facebook
|
task_categories:text-generation, language:en, license:cc-by-nc-4.0, size_categories:1M<n<10M, format:json, modality:text, library:datasets, library:pandas, library:mlcroissant, library:polars, arxiv:2502.13124, region:us
|
community
|
# Dataset: `facebook/natural_reasoning`
## 📝 Metadata
- **Author/Owner:** facebook
- **Downloads:** 3326
- **Likes:** 538
- **Tags:** task_categories:text-generation, language:en, license:cc-by-nc-4.0, size_categories:1M<n<10M, format:json, modality:text, library:datasets, library:pandas, library:mlcroissant, library:polars, arxiv:2502.13124, region:us
- **License:** Not specified
## 📖 Description
```text
NaturalReasoning is a large-scale dataset for general reasoning tasks. It consists of high-quality challenging reasoning questions backtranslated from pretraining corpora DCLM and FineMath. The questions have been deduplicated and decontaminated from popular reasoning benchmarks including MATH, GPQA, MMLU-Pro, MMLU-STEM. For each question, we extract the reference final answer from the original document from the pretraining corpora if possible. We also provide a model-generated response from… See the full description on the dataset page: https://huggingface.co/datasets/facebook/natural_reasoning....
```
## 📂 File System Sample
- `.gitattributes`
- `NaturalReasoning.pdf`
- `README.md`
- `full.jsonl`
- `scaling_plot`
## 📊 Data Structure
### Config: `default`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `question` | `str` |
| `reference_answer` | `str` |
| `responses` | `list` |
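A minimal sketch of reading the single `train` split (that `reference_answer` can be empty is an assumption based on the description above, which says the reference answer is extracted only when possible):

```python
from datasets import load_dataset

ds = load_dataset("facebook/natural_reasoning", split="train")

row = ds[0]
print(row["question"])
print(row["reference_answer"] or "<no extracted reference answer>")
print(len(row["responses"]))  # model-generated responses
```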
|
poloclub/diffusiondb
|
poloclub
|
task_categories:text-to-image, task_categories:image-to-text, task_ids:image-captioning, annotations_creators:no-annotation, language_creators:found, multilinguality:multilingual, source_datasets:original, language:en, license:cc0-1.0, size_categories:n>1T, arxiv:2210.14896, region:us, stable diffusion, prompt engineering, prompts, research paper
|
community
|
# Dataset: `poloclub/diffusiondb`
## 📝 Metadata
- **Author/Owner:** poloclub
- **Downloads:** 34275
- **Likes:** 519
- **Tags:** task_categories:text-to-image, task_categories:image-to-text, task_ids:image-captioning, annotations_creators:no-annotation, language_creators:found, multilinguality:multilingual, source_datasets:original, language:en, license:cc0-1.0, size_categories:n>1T, arxiv:2210.14896, region:us, stable diffusion, prompt engineering, prompts, research paper
- **License:** Not specified
## 📖 Description
```text
DiffusionDB is the first large-scale text-to-image prompt dataset. It contains 2
million images generated by Stable Diffusion using prompts and hyperparameters
specified by real users. The unprecedented scale and diversity of this
human-actuated dataset provide exciting research opportunities in understanding
the interplay between prompts and generative models, detecting deepfakes, and
designing human-AI interaction tools to help users more easily use these models....
```
## 📂 File System Sample
- `.gitattributes`
- `.gitignore`
- `README.md`
- `diffusiondb-large-part-1/part-000001.zip`
- `diffusiondb-large-part-1/part-000002.zip`
- `diffusiondb-large-part-1/part-000003.zip`
- `diffusiondb-large-part-1/part-000004.zip`
- `diffusiondb-large-part-1/part-000005.zip`
- `diffusiondb-large-part-1/part-000006.zip`
- `diffusiondb-large-part-1/part-000007.zip`
- `diffusiondb-large-part-1/part-000008.zip`
- `diffusiondb-large-part-1/part-000009.zip`
- `diffusiondb-large-part-1/part-000010.zip`
- `diffusiondb-large-part-1/part-000011.zip`
- `diffusiondb-large-part-1/part-000012.zip`
- ... and more.
## 📊 Data Structure
**Graceful Failure:**
```
Could not inspect the dataset's structure.
This is common for complex datasets that require executing remote code, which is disabled for stability.
Details: Dataset scripts are no longer supported, but found diffusiondb.py
```
|
JosephusCheung/GuanacoDataset
|
JosephusCheung
|
task_categories:text-generation, task_categories:question-answering, language:zh, language:en, language:ja, language:de, license:gpl-3.0, size_categories:1M<n<10M, format:json, modality:text, library:datasets, library:dask, library:mlcroissant, doi:10.57967/hf/1423, region:us, alpaca, llama, guanaco
|
community
|
# Dataset: `JosephusCheung/GuanacoDataset`
## 📝 Metadata
- **Author/Owner:** JosephusCheung
- **Downloads:** 269
- **Likes:** 515
- **Tags:** task_categories:text-generation, task_categories:question-answering, language:zh, language:en, language:ja, language:de, license:gpl-3.0, size_categories:1M<n<10M, format:json, modality:text, library:datasets, library:dask, library:mlcroissant, doi:10.57967/hf/1423, region:us, alpaca, llama, guanaco
- **License:** Not specified
## 📖 Description
```text
Sorry, it's no longer available on Hugging Face. Please reach out to those who have already downloaded it. If you have a copy, please refrain from re-uploading it to Hugging Face. The people here don't deserve it. See also: https://twitter.com/RealJosephus/status/1779913520529707387
GuanacoDataset
News: We're heading towards multimodal VQA, with blip2-flan-t5-xxl Alignment to Guanaco 7B LLM.
Still under construction: GuanacoVQA weight & GuanacoVQA Dataset
Notice: Effective… See the full description on the dataset page: https://huggingface.co/datasets/JosephusCheung/GuanacoDataset....
```
## 📂 File System Sample
- `.gitattributes`
- `README.md`
- `additional/general_ans-utf8.json`
- `additional/general_ans.json`
- `additional/general_questions-utf8.json`
- `additional/general_questions.json`
- `additional/paper_answers-utf8.json`
- `additional/paper_answers.json`
- `additional/paper_questions-utf8.json`
- `additional/paper_questions.json`
- `guanaco_chat_all-utf8.json`
- `guanaco_chat_all.json`
- `guanaco_non_chat-utf8.json`
- `guanaco_non_chat.json`
- `guanaco_non_chat_mini_52K-utf8.json`
- ... and more.
## 📊 Data Structure
**Graceful Failure:**
```
Could not inspect the dataset's structure.
This is common for complex datasets that require executing remote code, which is disabled for stability.
Details: Dataset 'JosephusCheung/GuanacoDataset' is a gated dataset on the Hub. Visit the dataset page at https://huggingface.co/datasets/JosephusCheung/GuanacoDataset to ask for access.
```
|
Salesforce/wikitext
|
Salesforce
|
task_categories:text-generation, task_categories:fill-mask, task_ids:language-modeling, task_ids:masked-language-modeling, annotations_creators:no-annotation, language_creators:crowdsourced, multilinguality:monolingual, source_datasets:original, language:en, license:cc-by-sa-3.0, license:gfdl, size_categories:1M<n<10M, format:parquet, modality:text, library:datasets, library:dask, library:mlcroissant, library:polars, arxiv:1609.07843, region:us
|
community
|
# Dataset: `Salesforce/wikitext`
## 📝 Metadata
- **Author/Owner:** Salesforce
- **Downloads:** 884722
- **Likes:** 511
- **Tags:** task_categories:text-generation, task_categories:fill-mask, task_ids:language-modeling, task_ids:masked-language-modeling, annotations_creators:no-annotation, language_creators:crowdsourced, multilinguality:monolingual, source_datasets:original, language:en, license:cc-by-sa-3.0, license:gfdl, size_categories:1M<n<10M, format:parquet, modality:text, library:datasets, library:dask, library:mlcroissant, library:polars, arxiv:1609.07843, region:us
- **License:** Not specified
## 📖 Description
```text
Dataset Card for "wikitext"
Dataset Summary
The WikiText language modeling dataset is a collection of over 100 million tokens extracted from the set of verified
Good and Featured articles on Wikipedia. The dataset is available under the Creative Commons Attribution-ShareAlike License.
Compared to the preprocessed version of Penn Treebank (PTB), WikiText-2 is over 2 times larger and WikiText-103 is over
110 times larger. The WikiText dataset also features a far larger… See the full description on the dataset page: https://huggingface.co/datasets/Salesforce/wikitext....
```
## 📂 File System Sample
- `.gitattributes`
- `README.md`
- `wikitext-103-raw-v1/test-00000-of-00001.parquet`
- `wikitext-103-raw-v1/train-00000-of-00002.parquet`
- `wikitext-103-raw-v1/train-00001-of-00002.parquet`
- `wikitext-103-raw-v1/validation-00000-of-00001.parquet`
- `wikitext-103-v1/test-00000-of-00001.parquet`
- `wikitext-103-v1/train-00000-of-00002.parquet`
- `wikitext-103-v1/train-00001-of-00002.parquet`
- `wikitext-103-v1/validation-00000-of-00001.parquet`
- `wikitext-2-raw-v1/test-00000-of-00001.parquet`
- `wikitext-2-raw-v1/train-00000-of-00001.parquet`
- `wikitext-2-raw-v1/validation-00000-of-00001.parquet`
- `wikitext-2-v1/test-00000-of-00001.parquet`
- `wikitext-2-v1/train-00000-of-00001.parquet`
- ... and more.
## 📊 Data Structure
### Config: `wikitext-103-raw-v1`
#### Split: `test`
| Column Name | Data Type |
|---|---|
| `text` | `str` |
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `text` | `str` |
#### Split: `validation`
| Column Name | Data Type |
|---|---|
| `text` | `str` |
### Config: `wikitext-103-v1`
#### Split: `test`
| Column Name | Data Type |
|---|---|
| `text` | `str` |
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `text` | `str` |
#### Split: `validation`
| Column Name | Data Type |
|---|---|
| `text` | `str` |
### Config: `wikitext-2-raw-v1`
#### Split: `test`
| Column Name | Data Type |
|---|---|
| `text` | `str` |
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `text` | `str` |
#### Split: `validation`
| Column Name | Data Type |
|---|---|
| `text` | `str` |
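A minimal sketch of loading one of the four configs (that the `-raw-` variants keep the untokenized text follows the usual WikiText convention and is an assumption here rather than something stated above):

```python
from datasets import load_dataset

wt2 = load_dataset("Salesforce/wikitext", "wikitext-2-raw-v1")

print(wt2)                       # DatasetDict with train / validation / test
print(wt2["train"][10]["text"])  # each row is one line of the corpus
```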
|
HuggingFaceH4/no_robots
|
HuggingFaceH4
|
task_categories:text-generation, language:en, license:cc-by-nc-4.0, size_categories:10K<n<100K, format:parquet, modality:text, library:datasets, library:pandas, library:mlcroissant, library:polars, arxiv:2203.02155, region:us
|
community
|
# Dataset: `HuggingFaceH4/no_robots`
## 📝 Metadata
- **Author/Owner:** HuggingFaceH4
- **Downloads:** 2700
- **Likes:** 505
- **Tags:** task_categories:text-generation, language:en, license:cc-by-nc-4.0, size_categories:10K<n<100K, format:parquet, modality:text, library:datasets, library:pandas, library:mlcroissant, library:polars, arxiv:2203.02155, region:us
- **License:** Not specified
## 📖 Description
```text
Dataset Card for No Robots 🙅♂️🤖
Look Ma, an instruction dataset that wasn't generated by GPTs!
Dataset Summary
No Robots is a high-quality dataset of 10,000 instructions and demonstrations created by skilled human annotators. This data can be used for supervised fine-tuning (SFT) to make language models follow instructions better. No Robots was modelled after the instruction dataset described in OpenAI's InstructGPT paper, and is comprised mostly of single-turn… See the full description on the dataset page: https://huggingface.co/datasets/HuggingFaceH4/no_robots....
```
## 📂 File System Sample
- `.gitattributes`
- `README.md`
- `data/test-00000-of-00001.parquet`
- `data/test_sft-00000-of-00001.parquet`
- `data/train-00000-of-00001.parquet`
- `data/train_sft-00000-of-00001.parquet`
## 📊 Data Structure
### Config: `default`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `prompt` | `str` |
| `prompt_id` | `str` |
| `messages` | `list` |
| `category` | `str` |
#### Split: `test`
| Column Name | Data Type |
|---|---|
| `prompt` | `str` |
| `prompt_id` | `str` |
| `messages` | `list` |
| `category` | `str` |
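A minimal sketch of reading one example (the concrete category labels are not listed above and are assumptions):

```python
from datasets import load_dataset

ds = load_dataset("HuggingFaceH4/no_robots", split="train")

row = ds[0]
print(row["category"])  # e.g. "Generation", "Open QA", ... (assumed labels)
for turn in row["messages"]:
    print(turn["role"], ":", turn["content"][:60])
```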
|
Gustavosta/Stable-Diffusion-Prompts
|
Gustavosta
|
annotations_creators:no-annotation, language_creators:found, source_datasets:original, language:en, license:unknown, size_categories:10K<n<100K, format:parquet, modality:text, library:datasets, library:pandas, library:mlcroissant, library:polars, region:us
|
community
|
# Dataset: `Gustavosta/Stable-Diffusion-Prompts`
## 📝 Metadata
- **Author/Owner:** Gustavosta
- **Downloads:** 8750
- **Likes:** 503
- **Tags:** annotations_creators:no-annotation, language_creators:found, source_datasets:original, language:en, license:unknown, size_categories:10K<n<100K, format:parquet, modality:text, library:datasets, library:pandas, library:mlcroissant, library:polars, region:us
- **License:** Not specified
## 📖 Description
```text
Stable Diffusion Dataset
This is a set of about 80,000 prompts filtered and extracted from the image finder for Stable Diffusion: "Lexica.art". It was a little difficult to extract the data, since the search engine still doesn't offer a public API that isn't protected by Cloudflare.
If you want to test the model with a demo, you can go to: "spaces/Gustavosta/MagicPrompt-Stable-Diffusion".
If you want to see the model, go to: "Gustavosta/MagicPrompt-Stable-Diffusion"....
```
## 📂 File System Sample
- `.gitattributes`
- `README.md`
- `data/eval.parquet`
- `data/train.parquet`
## 📊 Data Structure
### Config: `default`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `Prompt` | `str` |
#### Split: `test`
| Column Name | Data Type |
|---|---|
| `Prompt` | `str` |
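A minimal sketch; note the capitalised column name `Prompt`:

```python
from datasets import load_dataset

ds = load_dataset("Gustavosta/Stable-Diffusion-Prompts", split="train")
print(ds[0]["Prompt"])
```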
|
cerebras/SlimPajama-627B
|
cerebras
|
task_categories:text-generation, language:en, arxiv:2306.01116, arxiv:2302.13971, region:us
|
community
|
# Dataset: `cerebras/SlimPajama-627B`
## 📝 Metadata
- **Author/Owner:** cerebras
- **Downloads:** 44433
- **Likes:** 502
- **Tags:** task_categories:text-generation, language:en, arxiv:2306.01116, arxiv:2302.13971, region:us
- **License:** Not specified
## 📖 Description
```text
The dataset consists of 59166 jsonl files and is ~895GB compressed. It is a cleaned and deduplicated version of Together's RedPajama.
Check out our blog post explaining our methods, our code on GitHub, and join the discussion on the Cerebras Discord.
Getting Started
You can download the dataset using Hugging Face datasets:
from datasets import load_dataset
ds = load_dataset("cerebras/SlimPajama-627B")
Background
Today we are releasing SlimPajama – the largest… See the full description on the dataset page: https://huggingface.co/datasets/cerebras/SlimPajama-627B....
```
## 📂 File System Sample
- `.gitattributes`
- `README.md`
- `test/chunk1/example_holdout_0.jsonl.zst`
- `test/chunk1/example_holdout_1.jsonl.zst`
- `test/chunk1/example_holdout_10.jsonl.zst`
- `test/chunk1/example_holdout_100.jsonl.zst`
- `test/chunk1/example_holdout_1000.jsonl.zst`
- `test/chunk1/example_holdout_1001.jsonl.zst`
- `test/chunk1/example_holdout_1002.jsonl.zst`
- `test/chunk1/example_holdout_1003.jsonl.zst`
- `test/chunk1/example_holdout_1004.jsonl.zst`
- `test/chunk1/example_holdout_1005.jsonl.zst`
- `test/chunk1/example_holdout_1006.jsonl.zst`
- `test/chunk1/example_holdout_1007.jsonl.zst`
- `test/chunk1/example_holdout_1008.jsonl.zst`
- ... and more.
## 📊 Data Structure
### Config: `default`
#### Split: `train`
**Graceful Failure:**
```
Could not inspect the dataset's structure.
This is common for complex datasets that require executing remote code, which is disabled for stability.
Details: Compression type zstd not supported
```
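
Because the full corpus is ~895GB, streaming is the practical way to sample it locally. A hedged sketch, assuming the `datasets` and `zstandard` packages are installed (the shards are `.jsonl.zst`); the record field names are not listed above, so they are left as something to inspect:

```python
# Hedged sketch: stream a few SlimPajama documents instead of downloading ~895GB.
from itertools import islice
from datasets import load_dataset

stream = load_dataset("cerebras/SlimPajama-627B", split="train", streaming=True)

for doc in islice(stream, 3):
    # Field names are an assumption; inspect the keys of the first record to confirm.
    print(list(doc.keys()))
```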
# Dataset: `HuggingFaceM4/the_cauldron`
## 📝 Metadata
- **Author/Owner:** HuggingFaceM4
- **Downloads:** 63741
- **Likes:** 502
- **Tags:** size_categories:1M<n<10M, format:parquet, modality:image, modality:text, library:datasets, library:dask, library:mlcroissant, library:polars, arxiv:1603.07396, arxiv:2206.01718, arxiv:2208.05358, arxiv:1612.06890, arxiv:2310.00367, arxiv:1710.07300, arxiv:2312.12241, arxiv:1912.03098, arxiv:2211.08545, arxiv:2306.05425, arxiv:1709.00103, arxiv:2003.12462, arxiv:1612.00837, arxiv:2205.00363, arxiv:2403.09029, arxiv:2405.02246, region:us
- **License:** Not specified
## 📖 Description
```text
Dataset Card for The Cauldron
Dataset description
The Cauldron is part of the Idefics2 release.
It is a massive collection of 50 vision-language datasets (training sets only) that were used for the fine-tuning of the vision-language model Idefics2.
Load the dataset
To load the dataset, install the library datasets with pip install datasets. Then,
from datasets import load_dataset
ds = load_dataset("HuggingFaceM4/the_cauldron", "ai2d")
to download and load the… See the full description on the dataset page: https://huggingface.co/datasets/HuggingFaceM4/the_cauldron....
```
## 📂 File System Sample
- `.gitattributes`
- `README.md`
- `ai2d/train-00000-of-00001-2ce340398c113b79.parquet`
- `aokvqa/train-00000-of-00002-88e828b9a932c295.parquet`
- `aokvqa/train-00001-of-00002-ceb27cfe85e08680.parquet`
- `chart2text/train-00000-of-00003-3a2ec464eb1cfc9b.parquet`
- `chart2text/train-00001-of-00003-a65d11892445678c.parquet`
- `chart2text/train-00002-of-00003-8626ac7f2c225705.parquet`
- `chartqa/train-00000-of-00002-7733d4ca73ccd12e.parquet`
- `chartqa/train-00001-of-00002-03251c406186eabb.parquet`
- `clevr/train-00000-of-00024-d244df5ec45319a1.parquet`
- `clevr/train-00001-of-00024-8711717f841a0ad7.parquet`
- `clevr/train-00002-of-00024-851e2670f82ad012.parquet`
- `clevr/train-00003-of-00024-fac2bc9a8da5a47a.parquet`
- `clevr/train-00004-of-00024-93ff8e7b6bd883e7.parquet`
- ... and more.
## 📊 Data Structure
### Config: `ai2d`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `images` | `list` |
| `texts` | `list` |
### Config: `aokvqa`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `images` | `list` |
| `texts` | `list` |
### Config: `chart2text`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `images` | `list` |
| `texts` | `list` |
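
Following the load instructions quoted in the description above, a minimal sketch for pulling one sub-dataset and inspecting its `images`/`texts` columns; the inner keys of each `texts` entry are an assumption to confirm by printing one element:

```python
# Minimal sketch: load one sub-dataset of The Cauldron, as suggested in the card above.
from datasets import load_dataset

ds = load_dataset("HuggingFaceM4/the_cauldron", "ai2d", split="train")

sample = ds[0]
print(len(sample["images"]), "image(s)")
# `texts` is a list of dialogue-style entries; the exact inner keys
# (e.g. "user"/"assistant") are an assumption to verify here.
print(sample["texts"][0])
```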
# Dataset: `openai/MMMLU`
## 📝 Metadata
- **Author/Owner:** openai
- **Downloads:** 13331
- **Likes:** 501
- **Tags:** task_categories:question-answering, language:ar, language:bn, language:de, language:es, language:fr, language:hi, language:id, language:it, language:ja, language:ko, language:pt, language:sw, language:yo, language:zh, license:mit, size_categories:100K<n<1M, format:csv, modality:text, library:datasets, library:dask, library:mlcroissant, library:polars, arxiv:2009.03300, region:us
- **License:** Not specified
## 📖 Description
```text
Multilingual Massive Multitask Language Understanding (MMMLU)
The MMLU is a widely recognized benchmark of general knowledge attained by AI models. It covers a broad range of topics from 57 different categories, spanning elementary-level knowledge up to advanced professional subjects like law, physics, history, and computer science.
We translated the MMLU’s test set into 14 languages using professional human translators. Relying on human translators for this evaluation increases… See the full description on the dataset page: https://huggingface.co/datasets/openai/MMMLU....
```
## 📂 File System Sample
- `.gitattributes`
- `README.md`
- `test/mmlu_AR-XY.csv`
- `test/mmlu_BN-BD.csv`
- `test/mmlu_DE-DE.csv`
- `test/mmlu_ES-LA.csv`
- `test/mmlu_FR-FR.csv`
- `test/mmlu_HI-IN.csv`
- `test/mmlu_ID-ID.csv`
- `test/mmlu_IT-IT.csv`
- `test/mmlu_JA-JP.csv`
- `test/mmlu_KO-KR.csv`
- `test/mmlu_PT-BR.csv`
- `test/mmlu_SW-KE.csv`
- `test/mmlu_YO-NG.csv`
- ... and more.
## 📊 Data Structure
### Config: `default`
#### Split: `test`
| Column Name | Data Type |
|---|---|
| `Unnamed: 0` | `int` |
| `Question` | `str` |
| `A` | `str` |
| `B` | `str` |
| `C` | `str` |
| `D` | `str` |
| `Answer` | `str` |
| `Subject` | `str` |
### Config: `AR_XY`
#### Split: `test`
| Column Name | Data Type |
|---|---|
| `Unnamed: 0` | `int` |
| `Question` | `str` |
| `A` | `str` |
| `B` | `str` |
| `C` | `str` |
| `D` | `str` |
| `Answer` | `str` |
| `Subject` | `str` |
### Config: `BN_BD`
#### Split: `test`
| Column Name | Data Type |
|---|---|
| `Unnamed: 0` | `int` |
| `Question` | `str` |
| `A` | `str` |
| `B` | `str` |
| `C` | `str` |
| `D` | `str` |
| `Answer` | `str` |
| `Subject` | `str` |
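
A minimal sketch that loads one language config and assembles a multiple-choice prompt from the columns above; the prompt template itself is an illustrative choice, not part of the benchmark:

```python
# Minimal sketch: load one MMMLU language config and format a multiple-choice prompt.
from datasets import load_dataset

ds = load_dataset("openai/MMMLU", "AR_XY", split="test")

row = ds[0]
prompt = (
    f"{row['Question']}\n"
    f"A. {row['A']}\nB. {row['B']}\nC. {row['C']}\nD. {row['D']}\n"
    "Answer:"
)
print(prompt)
print("Gold answer:", row["Answer"], "| Subject:", row["Subject"])
```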
# Dataset: `AI-MO/NuminaMath-CoT`
## 📝 Metadata
- **Author/Owner:** AI-MO
- **Downloads:** 12714
- **Likes:** 500
- **Tags:** task_categories:text-generation, language:en, license:apache-2.0, size_categories:100K<n<1M, format:parquet, modality:text, library:datasets, library:dask, library:mlcroissant, library:polars, region:us, aimo, math
- **License:** Not specified
## 📖 Description
```text
Dataset Card for NuminaMath CoT
Dataset Summary
Approximately 860k math problems, where each solution is formatted in a Chain of Thought (CoT) manner. The sources of the dataset range from Chinese high school math exercises to US and international mathematics olympiad competition problems. The data were primarily collected from online exam paper PDFs and mathematics discussion forums. The processing steps include (a) OCR from the original PDFs, (b) segmentation into… See the full description on the dataset page: https://huggingface.co/datasets/AI-MO/NuminaMath-CoT....
```
## 📂 File System Sample
- `.gitattributes`
- `README.md`
- `data/test-00000-of-00001.parquet`
- `data/train-00000-of-00005.parquet`
- `data/train-00001-of-00005.parquet`
- `data/train-00002-of-00005.parquet`
- `data/train-00003-of-00005.parquet`
- `data/train-00004-of-00005.parquet`
## 📊 Data Structure
### Config: `default`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `source` | `str` |
| `problem` | `str` |
| `solution` | `str` |
| `messages` | `list` |
#### Split: `test`
| Column Name | Data Type |
|---|---|
| `source` | `str` |
| `problem` | `str` |
| `solution` | `str` |
| `messages` | `list` |
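
A minimal sketch for reading one problem/solution pair using the columns above; the note about `messages` holding the same content as chat turns is an assumption based on the column type:

```python
# Minimal sketch: peek at a NuminaMath-CoT problem/solution pair.
from datasets import load_dataset

ds = load_dataset("AI-MO/NuminaMath-CoT", split="train")

row = ds[0]
print("Source:", row["source"])
print("Problem:", row["problem"][:200])
print("Solution:", row["solution"][:200])
# `messages` presumably holds the same content as chat turns for SFT (assumption).
```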
# Dataset: `nvidia/OpenCodeReasoning`
## 📝 Metadata
- **Author/Owner:** nvidia
- **Downloads:** 2405
- **Likes:** 500
- **Tags:** task_categories:text-generation, license:cc-by-4.0, size_categories:100K<n<1M, format:parquet, modality:text, library:datasets, library:dask, library:mlcroissant, library:polars, arxiv:2504.01943, region:us, synthetic
- **License:** Not specified
## 📖 Description
```text
OpenCodeReasoning: Advancing Data Distillation for Competitive Coding
Data Overview
OpenCodeReasoning is the largest reasoning-based synthetic dataset to date for coding, comprising 735,255 samples in Python across 28,319 unique competitive programming questions. OpenCodeReasoning is designed for supervised fine-tuning (SFT).
Technical Report - Discover the methodology and technical details behind OpenCodeReasoning.
Github Repo - Access the complete pipeline used to… See the full description on the dataset page: https://huggingface.co/datasets/nvidia/OpenCodeReasoning....
```
## 📂 File System Sample
- `.gitattributes`
- `README.md`
- `split_0/train-00000-of-00030.parquet`
- `split_0/train-00001-of-00030.parquet`
- `split_0/train-00002-of-00030.parquet`
- `split_0/train-00003-of-00030.parquet`
- `split_0/train-00004-of-00030.parquet`
- `split_0/train-00005-of-00030.parquet`
- `split_0/train-00006-of-00030.parquet`
- `split_0/train-00007-of-00030.parquet`
- `split_0/train-00008-of-00030.parquet`
- `split_0/train-00009-of-00030.parquet`
- `split_0/train-00010-of-00030.parquet`
- `split_0/train-00011-of-00030.parquet`
- `split_0/train-00012-of-00030.parquet`
- ... and more.
## 📊 Data Structure
### Config: `split_0`
#### Split: `split_0`
| Column Name | Data Type |
|---|---|
| `id` | `str` |
| `input` | `str` |
| `output` | `str` |
| `source` | `str` |
| `license` | `str` |
| `dataset` | `str` |
| `split` | `str` |
| `difficulty` | `str` |
| `solution` | `str` |
### Config: `split_1`
#### Split: `split_1`
| Column Name | Data Type |
|---|---|
| `id` | `str` |
| `input` | `str` |
| `output` | `str` |
| `source` | `str` |
| `license` | `str` |
| `dataset` | `str` |
| `split` | `str` |
| `difficulty` | `str` |
| `solution` | `str` |
| `index` | `str` |
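
Given the size of the corpus, streaming a few rows is a cheap way to inspect it. A hedged sketch using the config and split names listed above:

```python
# Hedged sketch: stream a few OpenCodeReasoning samples from the split_0 config.
from itertools import islice
from datasets import load_dataset

stream = load_dataset("nvidia/OpenCodeReasoning", "split_0", split="split_0", streaming=True)

for row in islice(stream, 2):
    print(row["difficulty"], "|", row["source"])
    print(row["input"][:120])
```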
# Dataset: `cais/hle`
## 📝 Metadata
- **Author/Owner:** cais
- **Downloads:** 12784
- **Likes:** 496
- **Tags:** license:mit, size_categories:1K<n<10K, format:parquet, modality:image, modality:text, library:datasets, library:pandas, library:mlcroissant, library:polars, region:us
- **License:** Not specified
## 📖 Description
```text
[!NOTE]
IMPORTANT: Please help us protect the integrity of this benchmark by not publicly sharing, re-uploading, or distributing the dataset.
Humanity's Last Exam
🌐 Website | 📄 Paper | GitHub
Center for AI Safety & Scale AI
Humanity's Last Exam (HLE) is a multi-modal benchmark at the frontier of human knowledge, designed to be the final closed-ended academic benchmark of its kind with broad subject coverage. Humanity's Last Exam consists of 2,500 questions across dozens of… See the full description on the dataset page: https://huggingface.co/datasets/cais/hle....
```
## 📂 File System Sample
- `.gitattributes`
- `README.md`
- `data/test-00000-of-00001.parquet`
## 📊 Data Structure
**Graceful Failure:**
```
Could not inspect the dataset's structure.
This is common for complex datasets that require executing remote code, which is disabled for stability.
Details: Dataset 'cais/hle' is a gated dataset on the Hub. Visit the dataset page at https://huggingface.co/datasets/cais/hle to ask for access.
```
# Dataset: `b-mc2/sql-create-context`
## 📝 Metadata
- **Author/Owner:** b-mc2
- **Downloads:** 3778
- **Likes:** 486
- **Tags:** task_categories:text-generation, task_categories:question-answering, task_categories:table-question-answering, language:en, license:cc-by-4.0, size_categories:10K<n<100K, format:json, modality:text, library:datasets, library:pandas, library:mlcroissant, library:polars, arxiv:1809.08887, region:us, SQL, code, NLP, text-to-sql, context-sql, spider, wikisql, sqlglot
- **License:** Not specified
## 📖 Description
```text
Overview
This dataset builds from WikiSQL and Spider.
There are 78,577 examples of natural language queries, SQL CREATE TABLE statements, and SQL Query answering the question using the CREATE statement as context. This dataset was built with text-to-sql LLMs in mind, intending to prevent hallucination of column and table names often seen when trained on text-to-sql datasets. The CREATE TABLE statement can often be copy and pasted from different DBMS and provides table names, column… See the full description on the dataset page: https://huggingface.co/datasets/b-mc2/sql-create-context....
```
## 📂 File System Sample
- `.gitattributes`
- `README.md`
- `sql_create_context_v4.json`
## 📊 Data Structure
### Config: `default`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `answer` | `str` |
| `question` | `str` |
| `context` | `str` |
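
A minimal sketch that turns one row into a text-to-SQL prompt/target pair; only the three column names come from the table above, and the prompt wording is an illustrative choice:

```python
# Minimal sketch: build a text-to-SQL prompt from a sql-create-context row.
from datasets import load_dataset

ds = load_dataset("b-mc2/sql-create-context", split="train")

row = ds[0]
prompt = (
    "Given the following schema:\n"
    f"{row['context']}\n\n"
    f"Write a SQL query that answers: {row['question']}\n"
)
target = row["answer"]
print(prompt)
print("Target SQL:", target)
```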
# Dataset: `allenai/c4`
## 📝 Metadata
- **Author/Owner:** allenai
- **Downloads:** 605825
- **Likes:** 476
- **Tags:** task_categories:text-generation, task_categories:fill-mask, task_ids:language-modeling, task_ids:masked-language-modeling, annotations_creators:no-annotation, language_creators:found, multilinguality:multilingual, source_datasets:original, language:af, language:am, language:ar, language:az, language:be, language:bg, language:bn, language:ca, language:ceb, language:co, language:cs, language:cy, language:da, language:de, language:el, language:en, language:eo, language:es, language:et, language:eu, language:fa, language:fi, language:fil, language:fr, language:fy, language:ga, language:gd, language:gl, language:gu, language:ha, language:haw, language:he, language:hi, language:hmn, language:ht, language:hu, language:hy, language:id, language:ig, language:is, language:it, language:iw, language:ja, language:jv, language:ka, language:kk, language:km, language:kn, language:ko, language:ku, language:ky, language:la, language:lb, language:lo, language:lt, language:lv, language:mg, language:mi, language:mk, language:ml, language:mn, language:mr, language:ms, language:mt, language:my, language:ne, language:nl, language:no, language:ny, language:pa, language:pl, language:ps, language:pt, language:ro, language:ru, language:sd, language:si, language:sk, language:sl, language:sm, language:sn, language:so, language:sq, language:sr, language:st, language:su, language:sv, language:sw, language:ta, language:te, language:tg, language:th, language:tr, language:uk, language:und, language:ur, language:uz, language:vi, language:xh, language:yi, language:yo, language:zh, language:zu, license:odc-by, size_categories:10B<n<100B, modality:text, arxiv:1910.10683, region:us
- **License:** Not specified
## 📖 Description
```text
C4
Dataset Summary
A colossal, cleaned version of Common Crawl's web crawl corpus. Based on Common Crawl dataset: "https://commoncrawl.org".
This is the processed version of Google's C4 dataset
We prepared five variants of the data: en, en.noclean, en.noblocklist, realnewslike, and multilingual (mC4).
For reference, these are the sizes of the variants:
en: 305GB
en.noclean: 2.3TB
en.noblocklist: 380GB
realnewslike: 15GB
multilingual (mC4): 9.7TB (108 subsets, one per… See the full description on the dataset page: https://huggingface.co/datasets/allenai/c4....
```
## 📂 File System Sample
- `.gitattributes`
- `README.md`
- `en.noblocklist/c4-train.00000-of-01024.json.gz`
- `en.noblocklist/c4-train.00001-of-01024.json.gz`
- `en.noblocklist/c4-train.00002-of-01024.json.gz`
- `en.noblocklist/c4-train.00003-of-01024.json.gz`
- `en.noblocklist/c4-train.00004-of-01024.json.gz`
- `en.noblocklist/c4-train.00005-of-01024.json.gz`
- `en.noblocklist/c4-train.00006-of-01024.json.gz`
- `en.noblocklist/c4-train.00007-of-01024.json.gz`
- `en.noblocklist/c4-train.00008-of-01024.json.gz`
- `en.noblocklist/c4-train.00009-of-01024.json.gz`
- `en.noblocklist/c4-train.00010-of-01024.json.gz`
- `en.noblocklist/c4-train.00011-of-01024.json.gz`
- `en.noblocklist/c4-train.00012-of-01024.json.gz`
- ... and more.
## 📊 Data Structure
### Config: `en`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `text` | `str` |
| `timestamp` | `str` |
| `url` | `str` |
#### Split: `validation`
| Column Name | Data Type |
|---|---|
| `text` | `str` |
| `timestamp` | `str` |
| `url` | `str` |
### Config: `en.noblocklist`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `text` | `str` |
| `timestamp` | `str` |
| `url` | `str` |
#### Split: `validation`
| Column Name | Data Type |
|---|---|
| `text` | `str` |
| `timestamp` | `str` |
| `url` | `str` |
### Config: `en.noclean`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `text` | `str` |
| `timestamp` | `str` |
| `url` | `str` |
#### Split: `validation`
| Column Name | Data Type |
|---|---|
| `text` | `str` |
| `timestamp` | `str` |
| `url` | `str` |
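
Given the variant sizes quoted in the description, streaming is the sensible way to look at C4 locally. A minimal sketch over the `en` config using the columns listed above:

```python
# Minimal sketch: stream the `en` variant of C4 rather than downloading ~305GB.
from itertools import islice
from datasets import load_dataset

stream = load_dataset("allenai/c4", "en", split="train", streaming=True)

for doc in islice(stream, 3):
    print(doc["url"], "|", doc["timestamp"])
    print(doc["text"][:100].replace("\n", " "))
```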
# Dataset: `gaia-benchmark/GAIA`
## 📝 Metadata
- **Author/Owner:** gaia-benchmark
- **Downloads:** 6226
- **Likes:** 470
- **Tags:** language:en, arxiv:2311.12983, region:us
- **License:** Not specified
## 📖 Description
```text
GAIA dataset
GAIA is a benchmark which aims at evaluating next-generation LLMs (LLMs with augmented capabilities due to added tooling, efficient prompting, access to search, etc).
We added gating to prevent bots from scraping the dataset. Please do not reshare the validation or test set in a crawlable format.
Data and leaderboard
GAIA is made of more than 450 non-trivial questions with unambiguous answers, requiring different levels of tooling and autonomy to solve. It… See the full description on the dataset page: https://huggingface.co/datasets/gaia-benchmark/GAIA....
```
## 📂 File System Sample
- `.gitattributes`
- `2023/test/021a5339-744f-42b7-bd9b-9368b3efda7a.pdf`
- `2023/test/03c577c9-4227-48a9-9b75-f8f598de14c1.mp3`
- `2023/test/063800f6-8832-4856-972b-17b877612533.png`
- `2023/test/07c3029f-7095-455d-a9e9-cd5e34001b38.json`
- `2023/test/0c393561-dd13-4b7c-ac49-20ac469aa276.MOV`
- `2023/test/171dd6d2-d1d4-439b-8d4e-7507018a816b.png`
- `2023/test/198ffd8f-6041-458d-bacc-fe49872cfa43.txt`
- `2023/test/23bcfab0-f47b-4dcb-8599-459c329ac153.mp3`
- `2023/test/2bb16c35-403a-4d4c-859e-a88ccd55f876.xml`
- `2023/test/32f386b9-73ee-4455-b412-ddad508aa979.pdf`
- `2023/test/355b827f-fff0-4e0c-9ff0-65dea0609838.xlsx`
- `2023/test/3cc53dbf-1ab9-4d21-a56a-fc0151c10f89.xlsx`
- `2023/test/4033181f-1988-476b-bc33-6da0f96d7bd0.xlsx`
- `2023/test/4044eab7-1282-42bd-a559-3bf3a4d5858e.pdf`
- ... and more.
## 📊 Data Structure
**Graceful Failure:**
```
Could not inspect the dataset's structure.
This is common for complex datasets that require executing remote code, which is disabled for stability.
Details: Dataset 'gaia-benchmark/GAIA' is a gated dataset on the Hub. Visit the dataset page at https://huggingface.co/datasets/gaia-benchmark/GAIA to ask for access.
```
# Dataset: `glaiveai/glaive-function-calling-v2`
## 📝 Metadata
- **Author/Owner:** glaiveai
- **Downloads:** 1870
- **Likes:** 469
- **Tags:** task_categories:text-generation, language:en, license:apache-2.0, size_categories:100K<n<1M, format:json, modality:text, library:datasets, library:pandas, library:mlcroissant, library:polars, region:us
- **License:** Not specified
## 📂 File System Sample
- `.gitattributes`
- `README.md`
- `glaive-function-calling-v2.json`
## 📊 Data Structure
### Config: `default`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `system` | `str` |
| `chat` | `str` |
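
A minimal sketch for inspecting one sample; both columns are plain strings, and the assumption that the callable function definitions live inside the `system` text should be verified on a few rows:

```python
# Minimal sketch: inspect one glaive-function-calling-v2 sample.
from datasets import load_dataset

ds = load_dataset("glaiveai/glaive-function-calling-v2", split="train")

row = ds[0]
# `system` and `chat` are plain strings; the function definitions appear to be
# embedded as text inside the system prompt (assumption to verify).
print(row["system"][:300])
print("---")
print(row["chat"][:300])
```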
# Dataset: `microsoft/orca-math-word-problems-200k`
## 📝 Metadata
- **Author/Owner:** microsoft
- **Downloads:** 2399
- **Likes:** 462
- **Tags:** task_categories:question-answering, language:en, license:mit, size_categories:100K<n<1M, format:parquet, modality:text, library:datasets, library:pandas, library:mlcroissant, library:polars, arxiv:2402.14830, region:us, math
- **License:** Not specified
## 📖 Description
```text
Dataset Card
This dataset contains ~200K grade school math word problems. All the answers in this dataset are generated using Azure GPT4-Turbo. Please refer to Orca-Math: Unlocking the potential of
SLMs in Grade School Math for details about the dataset construction.
Dataset Sources
Repository: microsoft/orca-math-word-problems-200k
Paper: Orca-Math: Unlocking the potential of
SLMs in Grade School Math
Direct Use
This dataset has been designed to… See the full description on the dataset page: https://huggingface.co/datasets/microsoft/orca-math-word-problems-200k....
```
## 📂 File System Sample
- `.gitattributes`
- `README.md`
- `data/train-00000-of-00001.parquet`
## 📊 Data Structure
### Config: `default`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `question` | `str` |
| `answer` | `str` |
# Dataset: `stingning/ultrachat`
## 📝 Metadata
- **Author/Owner:** stingning
- **Downloads:** 2394
- **Likes:** 460
- **Tags:** task_categories:text-generation, language:en, license:mit, size_categories:100K<n<1M, format:json, modality:text, library:datasets, library:dask, library:mlcroissant, library:polars, arxiv:2305.14233, region:us
- **License:** Not specified
## 📖 Description
```text
Dataset Card for Dataset Name
Dataset Description
An open-source, large-scale, multi-round dialogue dataset powered by Turbo APIs. In consideration of factors such as safeguarding privacy, we do not directly use any data available on the Internet as prompts.
To ensure generation quality, two separate ChatGPT Turbo APIs are adopted in generation, where one plays the role of the user to generate queries and the other generates the response.
We instruct the user model with… See the full description on the dataset page: https://huggingface.co/datasets/stingning/ultrachat....
```
## 📂 File System Sample
- `.gitattributes`
- `README.md`
- `train_0.jsonl`
- `train_1.jsonl`
- `train_2.jsonl`
- `train_3.jsonl`
- `train_4.jsonl`
- `train_5.jsonl`
- `train_6.jsonl`
- `train_7.jsonl`
- `train_8.jsonl`
- `train_9.jsonl`
## 📊 Data Structure
### Config: `default`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `id` | `str` |
| `data` | `list` |
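
A hedged sketch for reading one dialogue; `data` is a flat list of utterances, and the user/assistant alternation assumed below should be verified before using it for training:

```python
# Hedged sketch: stream one UltraChat dialogue and print its first few turns.
from datasets import load_dataset

stream = load_dataset("stingning/ultrachat", split="train", streaming=True)
row = next(iter(stream))

for i, utterance in enumerate(row["data"][:4]):
    # Alternating user/assistant order is an assumption about the `data` list.
    speaker = "user" if i % 2 == 0 else "assistant"
    print(f"{speaker}: {utterance[:80]}")
```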
# Dataset: `bigcode/starcoderdata`
## 📝 Metadata
- **Author/Owner:** bigcode
- **Downloads:** 5395
- **Likes:** 459
- **Tags:** task_categories:text-generation, language_creators:crowdsourced, language_creators:expert-generated, multilinguality:multilingual, language:code, license:other, size_categories:100M<n<1B, format:parquet, modality:text, library:datasets, library:dask, library:mlcroissant, library:polars, region:us
- **License:** Not specified
## 📖 Description
```text
StarCoder Training Dataset
Dataset description
This is the dataset used for training StarCoder and StarCoderBase. It contains 783GB of code in 86 programming languages, and includes 54GB GitHub Issues + 13GB Jupyter notebooks in scripts and text-code pairs,
and 32GB of GitHub commits, which is approximately 250 Billion tokens.
Dataset creation
The creation and filtering of The Stack is explained in the original dataset, we additionally decontaminate and… See the full description on the dataset page: https://huggingface.co/datasets/bigcode/starcoderdata....
```
## 📂 File System Sample
- `.gitattributes`
- `README.md`
- `ada/train-00000-of-00001.parquet`
- `agda/train-00000-of-00001.parquet`
- `alloy/train-00000-of-00001.parquet`
- `antlr/train-00000-of-00001.parquet`
- `applescript/train-00000-of-00001.parquet`
- `assembly/train-00000-of-00002.parquet`
- `assembly/train-00001-of-00002.parquet`
- `augeas/train-00000-of-00001.parquet`
- `awk/train-00000-of-00001.parquet`
- `batchfile/train-00000-of-00001.parquet`
- `bluespec/train-00000-of-00001.parquet`
- `c-sharp/train-00000-of-00045.parquet`
- `c-sharp/train-00001-of-00045.parquet`
- ... and more.
## 📊 Data Structure
**Graceful Failure:**
```
Could not inspect the dataset's structure.
This is common for complex datasets that require executing remote code, which is disabled for stability.
Details: Dataset 'bigcode/starcoderdata' is a gated dataset on the Hub. Visit the dataset page at https://huggingface.co/datasets/bigcode/starcoderdata to ask for access.
```
# Dataset: `Skylion007/openwebtext`
## 📝 Metadata
- **Author/Owner:** Skylion007
- **Downloads:** 29376
- **Likes:** 457
- **Tags:** task_categories:text-generation, task_categories:fill-mask, task_ids:language-modeling, task_ids:masked-language-modeling, annotations_creators:no-annotation, language_creators:found, multilinguality:monolingual, source_datasets:original, language:en, license:cc0-1.0, size_categories:1M<n<10M, region:us
- **License:** Not specified
## 📖 Description
```text
An open-source replication of the WebText dataset from OpenAI....
```
## 📂 File System Sample
- `.gitattributes`
- `README.md`
- `openwebtext.py`
- `subsets/urlsf_subset00.tar`
- `subsets/urlsf_subset01.tar`
- `subsets/urlsf_subset02.tar`
- `subsets/urlsf_subset03.tar`
- `subsets/urlsf_subset04.tar`
- `subsets/urlsf_subset05.tar`
- `subsets/urlsf_subset06.tar`
- `subsets/urlsf_subset07.tar`
- `subsets/urlsf_subset08.tar`
- `subsets/urlsf_subset09.tar`
- `subsets/urlsf_subset10.tar`
- `subsets/urlsf_subset11.tar`
- ... and more.
## 📊 Data Structure
**Graceful Failure:**
```
Could not inspect the dataset's structure.
This is common for complex datasets that require executing remote code, which is disabled for stability.
Details: Dataset scripts are no longer supported, but found openwebtext.py
```
# Dataset: `EleutherAI/pile`
## 📝 Metadata
- **Author/Owner:** EleutherAI
- **Downloads:** 8074
- **Likes:** 457
- **Tags:** task_categories:text-generation, task_categories:fill-mask, task_ids:language-modeling, task_ids:masked-language-modeling, annotations_creators:no-annotation, language_creators:found, multilinguality:monolingual, source_datasets:original, language:en, license:other, size_categories:100B<n<1T, arxiv:2201.07311, arxiv:2101.00027, region:us
- **License:** Not specified
## 📖 Description
```text
The Pile is an 825 GiB diverse, open source language modelling data set that consists of 22 smaller, high-quality
datasets combined together....
```
## 📂 File System Sample
- `.gitattributes`
- `README.md`
- `pile.py`
## 📊 Data Structure
**Graceful Failure:**
```
Could not inspect the dataset's structure.
This is common for complex datasets that require executing remote code, which is disabled for stability.
Details: Dataset scripts are no longer supported, but found pile.py
```
# Dataset: `microsoft/orca-agentinstruct-1M-v1`
## 📝 Metadata
- **Author/Owner:** microsoft
- **Downloads:** 699
- **Likes:** 451
- **Tags:** task_categories:question-answering, language:en, license:cdla-permissive-2.0, size_categories:1M<n<10M, format:parquet, modality:text, library:datasets, library:dask, library:mlcroissant, library:polars, region:us
- **License:** Not specified
## 📖 Description
```text
Dataset Card
This dataset is a fully synthetic set of instruction pairs where both the prompts and the responses have been synthetically generated, using the AgentInstruct framework.
AgentInstruct is an extensible agentic framework for synthetic data generation.
This dataset contains ~1 million instruction pairs generated by AgentInstruct, using only raw text content publicly available on the Web as seeds. The data covers different capabilities, such as text editing, creative… See the full description on the dataset page: https://huggingface.co/datasets/microsoft/orca-agentinstruct-1M-v1....
```
## 📂 File System Sample
- `.gitattributes`
- `README.md`
- `data/analytical_reasoning-00000-of-00001.parquet`
- `data/brain_teaser-00000-of-00001.parquet`
- `data/code_-00000-of-00002.parquet`
- `data/code_-00001-of-00002.parquet`
- `data/creative_content-00000-of-00001.parquet`
- `data/fermi-00000-of-00001.parquet`
- `data/follow_up-00000-of-00002.parquet`
- `data/follow_up-00001-of-00002.parquet`
- `data/fs_cot_flow-00000-of-00001.parquet`
- `data/mcq-00000-of-00001.parquet`
- `data/open_domain_qa-00000-of-00002.parquet`
- `data/open_domain_qa-00001-of-00002.parquet`
- `data/rag-00000-of-00001.parquet`
- ... and more.
## 📊 Data Structure
### Config: `default`
#### Split: `creative_content`
| Column Name | Data Type |
|---|---|
| `messages` | `str` |
#### Split: `text_modification`
| Column Name | Data Type |
|---|---|
| `messages` | `str` |
#### Split: `struct2text_flow`
| Column Name | Data Type |
|---|---|
| `messages` | `str` |
# Dataset: `BAAI/COIG`
## 📝 Metadata
- **Author/Owner:** BAAI
- **Downloads:** 536
- **Likes:** 449
- **Tags:** language:zh, license:apache-2.0, size_categories:100K<n<1M, modality:text, library:datasets, library:mlcroissant, arxiv:2204.07705, arxiv:2212.10560, arxiv:2212.09689, arxiv:2304.07987, region:us
- **License:** Not specified
## 📖 Description
```text
We propose the Chinese Open Instruction Generalist (COIG) project to maintain a harmless, helpful, and diverse set of Chinese instruction corpora. We welcome all researchers in the community to contribute to the corpus set and collaborate with us. We only release the first chip of COIG to help the Chinese LLMs' development in the exploration stage and appeal to more researchers joining us in building COIG. We introduce a manually verified translated general instruction corpus, a manually annotated exam instruction corpus, a human value alignment instruction corpus, a multi-round counterfactual correction chat corpus, and a leetcode instruction corpus. We provide these new instruction corpora to assist the community with instruction tuning on Chinese LLMs. These instruction corpora are also template workflows for how new Chinese instruction corpora can be built and expanded effectively....
```
## 📂 File System Sample
- `.gitattributes`
- `.gitignore`
- `COIG.py`
- `README.md`
- `counterfactural_correction_multi_round_chat.tar.gz`
- `exam_instructions.jsonl`
- `human_value_alignment_instructions_part1.json`
- `human_value_alignment_instructions_part2.json`
- `leetcode_instructions.jsonl`
- `translated_instructions.jsonl`
## 📊 Data Structure
**Graceful Failure:**
```
Could not inspect the dataset's structure.
This is common for complex datasets that require executing remote code, which is disabled for stability.
Details: Dataset scripts are no longer supported, but found COIG.py
```
# Dataset: `nyu-mll/glue`
## 📝 Metadata
- **Author/Owner:** nyu-mll
- **Downloads:** 373552
- **Likes:** 448
- **Tags:** task_categories:text-classification, task_ids:acceptability-classification, task_ids:natural-language-inference, task_ids:semantic-similarity-scoring, task_ids:sentiment-classification, task_ids:text-scoring, annotations_creators:other, language_creators:other, multilinguality:monolingual, source_datasets:original, language:en, license:other, size_categories:1M<n<10M, format:parquet, modality:tabular, modality:text, library:datasets, library:pandas, library:mlcroissant, library:polars, arxiv:1804.07461, region:us, qa-nli, coreference-nli, paraphrase-identification
- **License:** Not specified
## 📖 Description
```text
Dataset Card for GLUE
Dataset Summary
GLUE, the General Language Understanding Evaluation benchmark (https://gluebenchmark.com/) is a collection of resources for training, evaluating, and analyzing natural language understanding systems.
Supported Tasks and Leaderboards
The leaderboard for the GLUE benchmark can be found at this address. It comprises the following tasks:
ax
A manually-curated evaluation dataset for fine-grained analysis of system… See the full description on the dataset page: https://huggingface.co/datasets/nyu-mll/glue....
```
## 📂 File System Sample
- `.gitattributes`
- `README.md`
- `ax/test-00000-of-00001.parquet`
- `cola/test-00000-of-00001.parquet`
- `cola/train-00000-of-00001.parquet`
- `cola/validation-00000-of-00001.parquet`
- `mnli/test_matched-00000-of-00001.parquet`
- `mnli/test_mismatched-00000-of-00001.parquet`
- `mnli/train-00000-of-00001.parquet`
- `mnli/validation_matched-00000-of-00001.parquet`
- `mnli/validation_mismatched-00000-of-00001.parquet`
- `mnli_matched/test-00000-of-00001.parquet`
- `mnli_matched/validation-00000-of-00001.parquet`
- `mnli_mismatched/test-00000-of-00001.parquet`
- `mnli_mismatched/validation-00000-of-00001.parquet`
- ... and more.
## 📊 Data Structure
### Config: `ax`
#### Split: `test`
| Column Name | Data Type |
|---|---|
| `premise` | `str` |
| `hypothesis` | `str` |
| `label` | `int` |
| `idx` | `int` |
### Config: `cola`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `sentence` | `str` |
| `label` | `int` |
| `idx` | `int` |
#### Split: `validation`
| Column Name | Data Type |
|---|---|
| `sentence` | `str` |
| `label` | `int` |
| `idx` | `int` |
#### Split: `test`
| Column Name | Data Type |
|---|---|
| `sentence` | `str` |
| `label` | `int` |
| `idx` | `int` |
### Config: `mnli`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `premise` | `str` |
| `hypothesis` | `str` |
| `label` | `int` |
| `idx` | `int` |
#### Split: `validation_matched`
| Column Name | Data Type |
|---|---|
| `premise` | `str` |
| `hypothesis` | `str` |
| `label` | `int` |
| `idx` | `int` |
#### Split: `validation_mismatched`
| Column Name | Data Type |
|---|---|
| `premise` | `str` |
| `hypothesis` | `str` |
| `label` | `int` |
| `idx` | `int` |
|
GAIR/lima
|
GAIR
|
license:other, size_categories:1K<n<10K, modality:text, library:datasets, library:mlcroissant, arxiv:2305.11206, region:us
|
community
|
# Dataset: `GAIR/lima`
## 📝 Metadata
- **Author/Owner:** GAIR
- **Downloads:** 586
- **Likes:** 445
- **Tags:** license:other, size_categories:1K<n<10K, modality:text, library:datasets, library:mlcroissant, arxiv:2305.11206, region:us
- **License:** Not specified
## 📖 Description
```text
A high-quality dataset for efficient instruction tuning....
```
## 📂 File System Sample
- `.gitattributes`
- `README.md`
- `lima.py`
- `test.jsonl`
- `train.jsonl`
## 📊 Data Structure
**Graceful Failure:**
```
Could not inspect the dataset's structure.
This is common for complex datasets that require executing remote code, which is disabled for stability.
Details: Dataset 'GAIR/lima' is a gated dataset on the Hub. Visit the dataset page at https://huggingface.co/datasets/GAIR/lima to ask for access.
```
# Dataset: `Amod/mental_health_counseling_conversations`
## 📝 Metadata
- **Author/Owner:** Amod
- **Downloads:** 3019
- **Likes:** 435
- **Tags:** task_categories:text-generation, task_categories:question-answering, language:en, license:other, size_categories:1K<n<10K, format:json, modality:text, library:datasets, library:pandas, library:mlcroissant, library:polars, doi:10.57967/hf/1581, region:us, medical
- **License:** Not specified
## 📖 Description
```text
Amod/mental_health_counseling_conversations
This dataset is a compilation of high-quality, real one-on-one mental health counseling conversations between individuals and licensed professionals. Each exchange is structured as a clear question–answer pair, making it directly suitable for fine-tuning or instruction-tuning language models that need to handle sensitive, empathetic, and contextually aware dialogue.
Since its public release, it has been downloaded over 77,000 times (Aug… See the full description on the dataset page: https://huggingface.co/datasets/Amod/mental_health_counseling_conversations....
```
## 📂 File System Sample
- `.gitattributes`
- `LICENSE-RAIL-D.txt`
- `README.md`
- `combined_dataset.json`
## 📊 Data Structure
### Config: `default`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `Context` | `str` |
| `Response` | `str` |
# Dataset: `timdettmers/openassistant-guanaco`
## 📝 Metadata
- **Author/Owner:** timdettmers
- **Downloads:** 6129
- **Likes:** 434
- **Tags:** size_categories:10K<n<100K, format:json, modality:text, library:datasets, library:pandas, library:mlcroissant, library:polars, region:us
- **License:** Not specified
## 📖 Description
```text
This dataset is a subset of the Open Assistant dataset, which you can find here: https://huggingface.co/datasets/OpenAssistant/oasst1/tree/main
This subset of the data only contains the highest-rated paths in the conversation tree, with a total of 9,846 samples.
This dataset was used to train Guanaco with QLoRA.
For further information, please see the original dataset.
License: Apache 2.0...
```
## 📂 File System Sample
- `.gitattributes`
- `README.md`
- `openassistant_best_replies_eval.jsonl`
- `openassistant_best_replies_train.jsonl`
## 📊 Data Structure
### Config: `default`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `text` | `str` |
#### Split: `test`
| Column Name | Data Type |
|---|---|
| `text` | `str` |
# Dataset: `nvidia/HelpSteer2`
## 📝 Metadata
- **Author/Owner:** nvidia
- **Downloads:** 8290
- **Likes:** 431
- **Tags:** language:en, license:cc-by-4.0, size_categories:10K<n<100K, format:json, modality:tabular, modality:text, library:datasets, library:pandas, library:mlcroissant, library:polars, arxiv:2410.01257, arxiv:2406.08673, region:us, human-feedback
- **License:** Not specified
## 📖 Description
```text
HelpSteer2: Open-source dataset for training top-performing reward models
HelpSteer2 is an open-source Helpfulness Dataset (CC-BY-4.0) that supports aligning models to become more helpful, factually correct and coherent, while being adjustable in terms of the complexity and verbosity of its responses.
This dataset has been created in partnership with Scale AI.
When used to tune a Llama 3.1 70B Instruct Model, we achieve 94.1% on RewardBench, which makes it the best Reward Model as… See the full description on the dataset page: https://huggingface.co/datasets/nvidia/HelpSteer2....
```
## 📂 File System Sample
- `.gitattributes`
- `README.md`
- `disagreements/disagreements.jsonl.gz`
- `preference/preference.jsonl.gz`
- `train.jsonl.gz`
- `validation.jsonl.gz`
## 📊 Data Structure
### Config: `default`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `prompt` | `str` |
| `response` | `str` |
| `helpfulness` | `int` |
| `correctness` | `int` |
| `coherence` | `int` |
| `complexity` | `int` |
| `verbosity` | `int` |
#### Split: `validation`
| Column Name | Data Type |
|---|---|
| `prompt` | `str` |
| `response` | `str` |
| `helpfulness` | `int` |
| `correctness` | `int` |
| `coherence` | `int` |
| `complexity` | `int` |
| `verbosity` | `int` |
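
A minimal sketch that filters the training split on the attribute columns above; the 0-4 rating scale and the threshold of 3 are assumptions chosen for illustration:

```python
# Minimal sketch: keep only highly helpful, highly correct HelpSteer2 responses.
from datasets import load_dataset

ds = load_dataset("nvidia/HelpSteer2", split="train")

# Attribute scores are small integers (assumed 0-4); the cutoff is illustrative.
good = ds.filter(lambda row: row["helpfulness"] >= 3 and row["correctness"] >= 3)
print(len(good), "of", len(ds), "rows kept")
```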
# Dataset: `bigcode/the-stack-v2`
## 📝 Metadata
- **Author/Owner:** bigcode
- **Downloads:** 5957
- **Likes:** 421
- **Tags:** task_categories:text-generation, language_creators:crowdsourced, language_creators:expert-generated, multilinguality:multilingual, language:code, license:other, size_categories:1B<n<10B, format:parquet, modality:tabular, modality:text, library:datasets, library:dask, library:mlcroissant, library:polars, arxiv:2402.19173, arxiv:2107.03374, arxiv:2207.14157, region:us
- **License:** Not specified
## 📖 Description
```text
The Stack v2
The dataset consists of 4 versions:
bigcode/the-stack-v2: the full "The Stack v2" dataset <-- you are here
bigcode/the-stack-v2-dedup: based on the bigcode/the-stack-v2 but further near-deduplicated
bigcode/the-stack-v2-train-full-ids: based on the bigcode/the-stack-v2-dedup dataset but further filtered with heuristics and spanning 600+ programming languages. The data is grouped into repositories.
bigcode/the-stack-v2-train-smol-ids: based on the… See the full description on the dataset page: https://huggingface.co/datasets/bigcode/the-stack-v2....
```
## 📂 File System Sample
- `.gitattributes`
- `README.md`
- `data/1C_Enterprise/train-00000-of-00001.parquet`
- `data/2-Dimensional_Array/train-00000-of-00001.parquet`
- `data/4D/train-00000-of-00001.parquet`
- `data/ABAP/train-00000-of-00001.parquet`
- `data/ABAP_CDS/train-00000-of-00001.parquet`
- `data/ABNF/train-00000-of-00001.parquet`
- `data/AGS_Script/train-00000-of-00001.parquet`
- `data/AIDL/train-00000-of-00001.parquet`
- `data/AL/train-00000-of-00001.parquet`
- `data/AMPL/train-00000-of-00001.parquet`
- `data/ANTLR/train-00000-of-00001.parquet`
- `data/API_Blueprint/train-00000-of-00001.parquet`
- `data/APL/train-00000-of-00001.parquet`
- ... and more.
## 📊 Data Structure
**Graceful Failure:**
```
Could not inspect the dataset's structure.
This is common for complex datasets that require executing remote code, which is disabled for stability.
Details: Dataset 'bigcode/the-stack-v2' is a gated dataset on the Hub. Visit the dataset page at https://huggingface.co/datasets/bigcode/the-stack-v2 to ask for access.
```
# Dataset: `QuixiAI/dolphin`
## 📝 Metadata
- **Author/Owner:** QuixiAI
- **Downloads:** 491
- **Likes:** 420
- **Tags:** task_categories:text-generation, language:en, license:apache-2.0, size_categories:1M<n<10M, format:json, modality:text, library:datasets, library:pandas, library:mlcroissant, library:polars, region:us
- **License:** Not specified
## 📖 Description
```text
Dolphin 🐬
https://erichartford.com/dolphin
Dataset details
This dataset is an attempt to replicate the results of Microsoft's Orca.
Our dataset consists of:
~1 million FLANv2 examples augmented with GPT-4 completions (flan1m-alpaca-uncensored.jsonl)
~3.5 million FLANv2 examples augmented with GPT-3.5 completions (flan5m-alpaca-uncensored.jsonl)
We followed the submix and system prompt distribution outlined in the Orca paper, with a few exceptions. We included all 75k of CoT in the FLAN-1m… See the full description on the dataset page: https://huggingface.co/datasets/QuixiAI/dolphin....
```
## 📂 File System Sample
- `.gitattributes`
- `README.md`
- `convertToShareGpt.py`
- `dedupeToShareGpt.py`
- `flan1m-alpaca-uncensored-deduped.jsonl`
- `flan1m-alpaca-uncensored.jsonl`
- `flan1m-sharegpt-deduped.json`
- `flan5m-alpaca-uncensored-deduped.jsonl`
- `flan5m-alpaca-uncensored.jsonl`
- `flan5m-sharegpt-deduped.json`
- `fp32_to_fp16.py`
- `llama_flash_attn_monkey_patch.py`
## 📊 Data Structure
### Config: `flan1m-alpaca-uncensored`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `instruction` | `str` |
| `input` | `str` |
| `output` | `str` |
### Config: `flan5m-alpaca-uncensored`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `instruction` | `str` |
| `input` | `str` |
| `output` | `str` |
# Dataset: `lmsys/chatbot_arena_conversations`
## 📝 Metadata
- **Author/Owner:** lmsys
- **Downloads:** 1503
- **Likes:** 418
- **Tags:** license:cc, size_categories:10K<n<100K, format:parquet, modality:tabular, modality:text, library:datasets, library:pandas, library:mlcroissant, library:polars, arxiv:2306.05685, region:us
- **License:** Not specified
## 📖 Description
```text
Chatbot Arena Conversations Dataset
This dataset contains 33K cleaned conversations with pairwise human preferences.
It was collected from 13K unique IP addresses on the Chatbot Arena from April to June 2023.
Each sample includes a question ID, two model names, their full conversation text in OpenAI API JSON format, the user vote, the anonymized user ID, the detected language tag, the OpenAI moderation API tag, the additional toxic tag, and the timestamp.
To ensure the safe release… See the full description on the dataset page: https://huggingface.co/datasets/lmsys/chatbot_arena_conversations....
```
## 📂 File System Sample
- `.gitattributes`
- `README.md`
- `data/train-00000-of-00001-cced8514c7ed782a.parquet`
## 📊 Data Structure
**Graceful Failure:**
```
Could not inspect the dataset's structure.
This is common for complex datasets that require executing remote code, which is disabled for stability.
Details: Dataset 'lmsys/chatbot_arena_conversations' is a gated dataset on the Hub. Visit the dataset page at https://huggingface.co/datasets/lmsys/chatbot_arena_conversations to ask for access.
```
# Dataset: `meta-math/MetaMathQA`
## 📝 Metadata
- **Author/Owner:** meta-math
- **Downloads:** 6381
- **Likes:** 414
- **Tags:** license:mit, size_categories:100K<n<1M, format:json, modality:text, library:datasets, library:pandas, library:mlcroissant, library:polars, arxiv:2309.12284, region:us, math, math-qa
- **License:** Not specified
## 📖 Description
```text
View the project page:
https://meta-math.github.io/
see our paper at https://arxiv.org/abs/2309.12284
Note
All MetaMathQA data are augmented from the training sets of GSM8K and MATH.
None of the augmented data is from the testing set.
You can check the original_question in meta-math/MetaMathQA, each item is from the GSM8K or MATH train set.
Model Details
MetaMath-Mistral-7B is fully fine-tuned on the MetaMathQA datasets and based on the powerful Mistral-7B model. It is… See the full description on the dataset page: https://huggingface.co/datasets/meta-math/MetaMathQA....
```
## 📂 File System Sample
- `.gitattributes`
- `MetaMathQA-395K.json`
- `README.md`
## 📊 Data Structure
### Config: `default`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `type` | `str` |
| `query` | `str` |
| `original_question` | `str` |
| `response` | `str` |
# Dataset: `HuggingFaceM4/FineVision`
## 📝 Metadata
- **Author/Owner:** HuggingFaceM4
- **Downloads:** 242784
- **Likes:** 409
- **Tags:** size_categories:10M<n<100M, format:parquet, modality:image, modality:text, library:datasets, library:dask, library:mlcroissant, library:polars, arxiv:2510.17269, region:us
- **License:** Not specified
## 📖 Description
```text
Fine Vision
FineVision is a massive collection of datasets with 17.3M images, 24.3M samples, 88.9M turns, and 9.5B answer tokens, designed for training state-of-the-art open Vision-Language-Models.
More detail can be found in the blog post: https://huggingface.co/spaces/HuggingFaceM4/FineVision
Load the data
from datasets import load_dataset, get_dataset_config_names
# Get all subset names and load the first one
available_subsets =… See the full description on the dataset page: https://huggingface.co/datasets/HuggingFaceM4/FineVision....
```
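
The load snippet quoted above is truncated; a hedged sketch of the same idea, using `get_dataset_config_names` to enumerate subsets and streaming a couple of rows from the first one rather than downloading the full collection:

```python
# Hedged sketch: list FineVision subsets and stream a few rows from the first one.
from itertools import islice
from datasets import load_dataset, get_dataset_config_names

available_subsets = get_dataset_config_names("HuggingFaceM4/FineVision")
print(len(available_subsets), "subsets, e.g.:", available_subsets[:3])

stream = load_dataset(
    "HuggingFaceM4/FineVision", available_subsets[0], split="train", streaming=True
)
for row in islice(stream, 2):
    print(row["source"], "|", len(row["images"]), "image(s)")
```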
## 📂 File System Sample
- `.gitattributes`
- `CoSyn_400k_chart/train-00000-of-00052.parquet`
- `CoSyn_400k_chart/train-00001-of-00052.parquet`
- `CoSyn_400k_chart/train-00002-of-00052.parquet`
- `CoSyn_400k_chart/train-00003-of-00052.parquet`
- `CoSyn_400k_chart/train-00004-of-00052.parquet`
- `CoSyn_400k_chart/train-00005-of-00052.parquet`
- `CoSyn_400k_chart/train-00006-of-00052.parquet`
- `CoSyn_400k_chart/train-00007-of-00052.parquet`
- `CoSyn_400k_chart/train-00008-of-00052.parquet`
- `CoSyn_400k_chart/train-00009-of-00052.parquet`
- `CoSyn_400k_chart/train-00010-of-00052.parquet`
- `CoSyn_400k_chart/train-00011-of-00052.parquet`
- `CoSyn_400k_chart/train-00012-of-00052.parquet`
- `CoSyn_400k_chart/train-00013-of-00052.parquet`
- ... and more.
## 📊 Data Structure
### Config: `CoSyn_400k_chart`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `images` | `list` |
| `texts` | `list` |
| `source` | `str` |
| `relevance_ratings` | `list` |
| `relevance_min` | `int` |
| `visual_dependency_ratings` | `list` |
| `visual_dependency_min` | `int` |
| `image_correspondence_ratings` | `list` |
| `image_correspondence_min` | `int` |
| `formatting_ratings` | `list` |
| `formatting_min` | `int` |
### Config: `CoSyn_400k_chemical`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `images` | `list` |
| `texts` | `list` |
| `source` | `str` |
| `image_correspondence_ratings` | `list` |
| `image_correspondence_min` | `int` |
| `formatting_ratings` | `list` |
| `formatting_min` | `int` |
| `relevance_ratings` | `list` |
| `relevance_min` | `int` |
| `visual_dependency_ratings` | `list` |
| `visual_dependency_min` | `int` |
### Config: `CoSyn_400k_circuit`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `images` | `list` |
| `texts` | `list` |
| `source` | `str` |
| `image_correspondence_ratings` | `list` |
| `image_correspondence_min` | `int` |
| `formatting_ratings` | `list` |
| `formatting_min` | `int` |
| `relevance_ratings` | `list` |
| `relevance_min` | `int` |
| `visual_dependency_ratings` | `list` |
| `visual_dependency_min` | `int` |
# Dataset: `allenai/objaverse`
## 📝 Metadata
- **Author/Owner:** allenai
- **Downloads:** 424311
- **Likes:** 408
- **Tags:** language:en, license:odc-by, arxiv:2212.08051, region:us
- **License:** Not specified
## 📖 Description
```text
Objaverse
Objaverse is a Massive Dataset with 800K+ Annotated 3D Objects.
More documentation is coming soon. In the meantime, please see our paper and website for additional details.
License
The use of the dataset as a whole is licensed under the ODC-By v1.0 license. Individual objects in Objaverse are all licensed as creative commons distributable objects, and may be under the following licenses:
CC-BY 4.0 - 721K objects
CC-BY-NC 4.0 - 25K objects
CC-BY-NC-SA 4.0 - 52K… See the full description on the dataset page: https://huggingface.co/datasets/allenai/objaverse....
```
## 📂 File System Sample
- `.gitattributes`
- `.gitignore`
- `README.md`
- `glbs/000-000/000074a334c541878360457c672b6c2e.glb`
- `glbs/000-000/0000ecca9a234cae994be239f6fec552.glb`
- `glbs/000-000/00010d9634f645859d2e252d189b31d5.glb`
- `glbs/000-000/0002064ef02e46f9a10b4bff10bae805.glb`
- `glbs/000-000/0002c6eafa154e8bb08ebafb715a8d46.glb`
- `glbs/000-000/0002e50309b44e409c96f440202d90b3.glb`
- `glbs/000-000/0009a51366fb45e2876715bde6a574b7.glb`
- `glbs/000-000/0009f74807184fefa8eb58211edba390.glb`
- `glbs/000-000/000a00944e294f7a94f95d420fdd45eb.glb`
- `glbs/000-000/000a3d9fa4ff4c888e71e698694eb0b0.glb`
- `glbs/000-000/000a82b4e6bf4e909fbe5a3b0e6d67dc.glb`
- `glbs/000-000/000b76f2b03e44e8ab44e1a1614be0f4.glb`
- ... and more.
## 📊 Data Structure
### Config: `default`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `Band_Aid` | `list` |
| `Bible` | `list` |
| `CD_player` | `list` |
| `Christmas_tree` | `list` |
| `Dixie_cup` | `list` |
| `Ferris_wheel` | `list` |
| `Lego` | `list` |
| `Rollerblade` | `list` |
| `Sharpie` | `list` |
| `Tabasco_sauce` | `list` |
| `aerosol_can` | `list` |
| `air_conditioner` | `list` |
| `airplane` | `list` |
| `alarm_clock` | `list` |
| `alcohol` | `list` |
| `alligator` | `list` |
| `almond` | `list` |
| `ambulance` | `list` |
| `amplifier` | `list` |
| `anklet` | `list` |
| `antenna` | `list` |
| `apple` | `list` |
| `apricot` | `list` |
| `apron` | `list` |
| `aquarium` | `list` |
| `arctic_(type_of_shoe)` | `list` |
| `armband` | `list` |
| `armchair` | `list` |
| `armoire` | `list` |
| `armor` | `list` |
| `army_tank` | `list` |
| `artichoke` | `list` |
| `ashtray` | `list` |
| `asparagus` | `list` |
| `atomizer` | `list` |
| `automatic_washer` | `list` |
| `avocado` | `list` |
| `award` | `list` |
| `awning` | `list` |
| `ax` | `list` |
| `baboon` | `list` |
| `baby_buggy` | `list` |
| `backpack` | `list` |
| `bagel` | `list` |
| `baguet` | `list` |
| `bait` | `list` |
| `ball` | `list` |
| `ballet_skirt` | `list` |
| `balloon` | `list` |
| `bamboo` | `list` |
| `banana` | `list` |
| `bandage` | `list` |
| `bandanna` | `list` |
| `banjo` | `list` |
| `banner` | `list` |
| `barbell` | `list` |
| `barge` | `list` |
| `barrel` | `list` |
| `barrow` | `list` |
| `baseball` | `list` |
| `baseball_bat` | `list` |
| `baseball_cap` | `list` |
| `baseball_glove` | `list` |
| `basket` | `list` |
| `basketball` | `list` |
| `basketball_backboard` | `list` |
| `bass_horn` | `list` |
| `bat_(animal)` | `list` |
| `bath_mat` | `list` |
| `bath_towel` | `list` |
| `bathrobe` | `list` |
| `bathtub` | `list` |
| `battery` | `list` |
| `beachball` | `list` |
| `bead` | `list` |
| `beanbag` | `list` |
| `beanie` | `list` |
| `bear` | `list` |
| `bed` | `list` |
| `bedpan` | `list` |
| `bedspread` | `list` |
| `beef_(food)` | `list` |
| `beeper` | `list` |
| `beer_bottle` | `list` |
| `beer_can` | `list` |
| `beetle` | `list` |
| `bell` | `list` |
| `bell_pepper` | `list` |
| `belt` | `list` |
| `belt_buckle` | `list` |
| `bench` | `list` |
| `beret` | `list` |
| `bicycle` | `list` |
| `billboard` | `list` |
| `binder` | `list` |
| `binoculars` | `list` |
| `bird` | `list` |
| `birdbath` | `list` |
| `birdcage` | `list` |
| `birdfeeder` | `list` |
| `birdhouse` | `list` |
| `birthday_cake` | `list` |
| `birthday_card` | `list` |
| `blackberry` | `list` |
| `blackboard` | `list` |
| `blanket` | `list` |
| `blazer` | `list` |
| `blender` | `list` |
| `blimp` | `list` |
| `blouse` | `list` |
| `blueberry` | `list` |
| `boat` | `list` |
| `bob` | `list` |
| `bobbin` | `list` |
| `boiled_egg` | `list` |
| `bolo_tie` | `list` |
| `bolt` | `list` |
| `bonnet` | `list` |
| `book` | `list` |
| `bookcase` | `list` |
| `booklet` | `list` |
| `bookmark` | `list` |
| `boom_microphone` | `list` |
| `boot` | `list` |
| `bottle` | `list` |
| `bottle_cap` | `list` |
| `bottle_opener` | `list` |
| `bouquet` | `list` |
| `bow-tie` | `list` |
| `bow_(decorative_ribbons)` | `list` |
| `bow_(weapon)` | `list` |
| `bowl` | `list` |
| `bowler_hat` | `list` |
| `bowling_ball` | `list` |
| `box` | `list` |
| `boxing_glove` | `list` |
| `bracelet` | `list` |
| `brass_plaque` | `list` |
| `brassiere` | `list` |
| `bread` | `list` |
| `bread-bin` | `list` |
| `breechcloth` | `list` |
| `bridal_gown` | `list` |
| `briefcase` | `list` |
| `broach` | `list` |
| `broccoli` | `list` |
| `broom` | `list` |
| `brownie` | `list` |
| `brussels_sprouts` | `list` |
| `bubble_gum` | `list` |
| `bucket` | `list` |
| `bulldog` | `list` |
| `bulldozer` | `list` |
| `bullet_train` | `list` |
| `bulletin_board` | `list` |
| `bulletproof_vest` | `list` |
| `bullhorn` | `list` |
| `bun` | `list` |
| `bunk_bed` | `list` |
| `buoy` | `list` |
| `burrito` | `list` |
| `bus_(vehicle)` | `list` |
| `business_card` | `list` |
| `butter` | `list` |
| `butterfly` | `list` |
| `button` | `list` |
| `cab_(taxi)` | `list` |
| `cabana` | `list` |
| `cabin_car` | `list` |
| `cabinet` | `list` |
| `cake` | `list` |
| `calculator` | `list` |
| `calendar` | `list` |
| `calf` | `list` |
| `camcorder` | `list` |
| `camel` | `list` |
| `camera` | `list` |
| `camera_lens` | `list` |
| `camper_(vehicle)` | `list` |
| `can` | `list` |
| `can_opener` | `list` |
| `candle` | `list` |
| `candle_holder` | `list` |
| `candy_bar` | `list` |
| `candy_cane` | `list` |
| `canister` | `list` |
| `canoe` | `list` |
| `cantaloup` | `list` |
| `canteen` | `list` |
| `cap_(headwear)` | `list` |
| `cape` | `list` |
| `cappuccino` | `list` |
| `car_(automobile)` | `list` |
| `car_battery` | `list` |
| `card` | `list` |
| `cardigan` | `list` |
| `cargo_ship` | `list` |
| `carnation` | `list` |
| `carrot` | `list` |
| `cart` | `list` |
| `carton` | `list` |
| `cash_register` | `list` |
| `casserole` | `list` |
| `cassette` | `list` |
| `cast` | `list` |
| `cat` | `list` |
| `cauliflower` | `list` |
| `cayenne_(spice)` | `list` |
| `celery` | `list` |
| `cellular_telephone` | `list` |
| `chair` | `list` |
| `chaise_longue` | `list` |
| `chalice` | `list` |
| `chandelier` | `list` |
| `checkbook` | `list` |
| `checkerboard` | `list` |
| `cherry` | `list` |
| `chessboard` | `list` |
| `chicken_(animal)` | `list` |
| `chili_(vegetable)` | `list` |
| `chime` | `list` |
| `chinaware` | `list` |
| `chocolate_bar` | `list` |
| `chocolate_cake` | `list` |
| `chocolate_milk` | `list` |
| `chocolate_mousse` | `list` |
| `choker` | `list` |
| `chopping_board` | `list` |
| `chopstick` | `list` |
| `cider` | `list` |
| `cigar_box` | `list` |
| `cigarette` | `list` |
| `cigarette_case` | `list` |
| `cincture` | `list` |
| `cistern` | `list` |
| `clarinet` | `list` |
| `clasp` | `list` |
| `cleansing_agent` | `list` |
| `cleat_(for_securing_rope)` | `list` |
| `clementine` | `list` |
| `clip` | `list` |
| `clipboard` | `list` |
| `clippers_(for_plants)` | `list` |
| `cloak` | `list` |
| `clock` | `list` |
| `clock_tower` | `list` |
| `clothes_hamper` | `list` |
| `clothespin` | `list` |
| `clutch_bag` | `list` |
| `coaster` | `list` |
| `coat` | `list` |
| `coat_hanger` | `list` |
| `coatrack` | `list` |
| `cock` | `list` |
| `cockroach` | `list` |
| `cocoa_(beverage)` | `list` |
| `coconut` | `list` |
| `coffee_maker` | `list` |
| `coffee_table` | `list` |
| `coffeepot` | `list` |
| `coil` | `list` |
| `coin` | `list` |
| `colander` | `list` |
| `coloring_material` | `list` |
| `combination_lock` | `list` |
| `comic_book` | `list` |
| `compass` | `list` |
| `computer_keyboard` | `list` |
| `condiment` | `list` |
| `cone` | `list` |
| `control` | `list` |
| `convertible_(automobile)` | `list` |
| `cooker` | `list` |
| `cookie` | `list` |
| `cooking_utensil` | `list` |
| `cooler_(for_food)` | `list` |
| `cork_(bottle_plug)` | `list` |
| `corkboard` | `list` |
| `corkscrew` | `list` |
| `cornbread` | `list` |
| `cornet` | `list` |
| `cornice` | `list` |
| `cornmeal` | `list` |
| `corset` | `list` |
| `costume` | `list` |
| `cougar` | `list` |
| `cover` | `list` |
| `coverall` | `list` |
| `cow` | `list` |
| `cowbell` | `list` |
| `cowboy_hat` | `list` |
| `crab_(animal)` | `list` |
| `crabmeat` | `list` |
| `cracker` | `list` |
| `crape` | `list` |
| `crate` | `list` |
| `crawfish` | `list` |
| `crayon` | `list` |
| `cream_pitcher` | `list` |
| `crescent_roll` | `list` |
| `crib` | `list` |
| `crisp_(potato_chip)` | `list` |
| `crossbar` | `list` |
| `crouton` | `list` |
| `crow` | `list` |
| `crowbar` | `list` |
| `crown` | `list` |
| `crucifix` | `list` |
| `cruise_ship` | `list` |
| `crutch` | `list` |
| `cub_(animal)` | `list` |
| `cube` | `list` |
| `cucumber` | `list` |
| `cufflink` | `list` |
| `cup` | `list` |
| `cupboard` | `list` |
| `cupcake` | `list` |
| `curtain` | `list` |
| `cushion` | `list` |
| `cylinder` | `list` |
| `cymbal` | `list` |
| `dagger` | `list` |
| `dalmatian` | `list` |
| `dartboard` | `list` |
| `date_(fruit)` | `list` |
| `deadbolt` | `list` |
| `deck_chair` | `list` |
| `deer` | `list` |
| `desk` | `list` |
| `detergent` | `list` |
| `diaper` | `list` |
| `diary` | `list` |
| `die` | `list` |
| `dinghy` | `list` |
| `dining_table` | `list` |
| `dirt_bike` | `list` |
| `dish` | `list` |
| `dish_antenna` | `list` |
| `dishrag` | `list` |
| `dishtowel` | `list` |
| `dishwasher` | `list` |
| `dishwasher_detergent` | `list` |
| `dispenser` | `list` |
| `dog` | `list` |
| `dog_collar` | `list` |
| `doll` | `list` |
| `dollar` | `list` |
| `dollhouse` | `list` |
| `dolphin` | `list` |
| `domestic_ass` | `list` |
| `doorknob` | `list` |
| `doormat` | `list` |
| `doughnut` | `list` |
| `dove` | `list` |
| `dragonfly` | `list` |
| `drawer` | `list` |
| `dress` | `list` |
| `dress_hat` | `list` |
| `dress_suit` | `list` |
| `dresser` | `list` |
| `drill` | `list` |
| `drone` | `list` |
| `drum_(musical_instrument)` | `list` |
| `drumstick` | `list` |
| `duck` | `list` |
| `duckling` | `list` |
| `duct_tape` | `list` |
| `duffel_bag` | `list` |
| `dumbbell` | `list` |
| `dumpster` | `list` |
| `dustpan` | `list` |
| `eagle` | `list` |
| `earphone` | `list` |
| `earplug` | `list` |
| `earring` | `list` |
| `easel` | `list` |
| `eclair` | `list` |
| `edible_corn` | `list` |
| `eel` | `list` |
| `egg` | `list` |
| `egg_roll` | `list` |
| `egg_yolk` | `list` |
| `eggbeater` | `list` |
| `eggplant` | `list` |
| `elephant` | `list` |
| `elevator_car` | `list` |
| `elk` | `list` |
| `envelope` | `list` |
| `eraser` | `list` |
| `escargot` | `list` |
| `eyepatch` | `list` |
| `falcon` | `list` |
| `fan` | `list` |
| `faucet` | `list` |
| `fedora` | `list` |
| `ferret` | `list` |
| `ferry` | `list` |
| `fig_(fruit)` | `list` |
| `fighter_jet` | `list` |
| `figurine` | `list` |
| `file_(tool)` | `list` |
| `file_cabinet` | `list` |
| `fire_alarm` | `list` |
| `fire_engine` | `list` |
| `fire_extinguisher` | `list` |
| `fire_hose` | `list` |
| `fireplace` | `list` |
| `fireplug` | `list` |
| `first-aid_kit` | `list` |
| `fish` | `list` |
| `fish_(food)` | `list` |
| `fishbowl` | `list` |
| `fishing_rod` | `list` |
| `flag` | `list` |
| `flagpole` | `list` |
| `flamingo` | `list` |
| `flannel` | `list` |
| `flap` | `list` |
| `flash` | `list` |
| `flashlight` | `list` |
| `fleece` | `list` |
| `flip-flop_(sandal)` | `list` |
| `flipper_(footwear)` | `list` |
| `flower_arrangement` | `list` |
| `flowerpot` | `list` |
| `flute_glass` | `list` |
| `foal` | `list` |
| `folding_chair` | `list` |
| `food_processor` | `list` |
| `football_(American)` | `list` |
| `football_helmet` | `list` |
| `footstool` | `list` |
| `fork` | `list` |
| `forklift` | `list` |
| `freight_car` | `list` |
| `freshener` | `list` |
| `frisbee` | `list` |
| `frog` | `list` |
| `fruit_juice` | `list` |
| `frying_pan` | `list` |
| `fume_hood` | `list` |
| `funnel` | `list` |
| `futon` | `list` |
| `gameboard` | `list` |
| `garbage` | `list` |
| `garbage_truck` | `list` |
| `garden_hose` | `list` |
| `gargle` | `list` |
| `gargoyle` | `list` |
| `garlic` | `list` |
| `gasmask` | `list` |
| `gazelle` | `list` |
| `gelatin` | `list` |
| `gemstone` | `list` |
| `generator` | `list` |
| `giant_panda` | `list` |
| `gift_wrap` | `list` |
| `ginger` | `list` |
| `giraffe` | `list` |
| `glass_(drink_container)` | `list` |
| `globe` | `list` |
| `glove` | `list` |
| `goat` | `list` |
| `goggles` | `list` |
| `goldfish` | `list` |
| `golf_club` | `list` |
| `golfcart` | `list` |
| `gondola_(boat)` | `list` |
| `goose` | `list` |
| `gorilla` | `list` |
| `gourd` | `list` |
| `grape` | `list` |
| `grater` | `list` |
| `gravestone` | `list` |
| `gravy_boat` | `list` |
| `green_bean` | `list` |
| `green_onion` | `list` |
| `grill` | `list` |
| `grits` | `list` |
| `grizzly` | `list` |
| `grocery_bag` | `list` |
| `guitar` | `list` |
| `gull` | `list` |
| `gun` | `list` |
| `hair_dryer` | `list` |
| `hairbrush` | `list` |
| `hairnet` | `list` |
| `halter_top` | `list` |
| `ham` | `list` |
| `hamburger` | `list` |
| `hammer` | `list` |
| `hammock` | `list` |
| `hamper` | `list` |
| `hamster` | `list` |
| `hand_glass` | `list` |
| `hand_towel` | `list` |
| `handbag` | `list` |
| `handcart` | `list` |
| `handcuff` | `list` |
| `handkerchief` | `list` |
| `handle` | `list` |
| `handsaw` | `list` |
| `hardback_book` | `list` |
| `harmonium` | `list` |
| `hat` | `list` |
| `hatbox` | `list` |
| `headband` | `list` |
| `headboard` | `list` |
| `headlight` | `list` |
| `headscarf` | `list` |
| `headset` | `list` |
| `headstall_(for_horses)` | `list` |
| `heart` | `list` |
| `heater` | `list` |
| `helicopter` | `list` |
| `helmet` | `list` |
| `heron` | `list` |
| `highchair` | `list` |
| `hinge` | `list` |
| `hippopotamus` | `list` |
| `hockey_stick` | `list` |
| `hog` | `list` |
| `honey` | `list` |
| `hook` | `list` |
| `hookah` | `list` |
| `horned_cow` | `list` |
| `hornet` | `list` |
| `horse` | `list` |
| `horse_buggy` | `list` |
| `horse_carriage` | `list` |
| `hose` | `list` |
| `hot-air_balloon` | `list` |
| `hot_sauce` | `list` |
| `hotplate` | `list` |
| `hourglass` | `list` |
| `houseboat` | `list` |
| `hummingbird` | `list` |
| `iPod` | `list` |
| `ice_maker` | `list` |
| `ice_pack` | `list` |
| `ice_skate` | `list` |
| `icecream` | `list` |
| `identity_card` | `list` |
| `igniter` | `list` |
| `inhaler` | `list` |
| `inkpad` | `list` |
| `iron_(for_clothing)` | `list` |
| `ironing_board` | `list` |
| `jacket` | `list` |
| `jam` | `list` |
| `jar` | `list` |
| `jean` | `list` |
| `jeep` | `list` |
| `jersey` | `list` |
| `jet_plane` | `list` |
| `jewel` | `list` |
| `jewelry` | `list` |
| `joystick` | `list` |
| `jumpsuit` | `list` |
| `kayak` | `list` |
| `keg` | `list` |
| `kennel` | `list` |
| `kettle` | `list` |
| `key` | `list` |
| `keycard` | `list` |
| `kilt` | `list` |
| `kimono` | `list` |
| `kitchen_sink` | `list` |
| `kitchen_table` | `list` |
| `kite` | `list` |
| `kitten` | `list` |
| `kiwi_fruit` | `list` |
| `knee_pad` | `list` |
| `knife` | `list` |
| `knitting_needle` | `list` |
| `knob` | `list` |
| `knocker_(on_a_door)` | `list` |
| `koala` | `list` |
| `lab_coat` | `list` |
| `ladder` | `list` |
| `ladle` | `list` |
| `ladybug` | `list` |
| `lamb-chop` | `list` |
| `lamb_(animal)` | `list` |
| `lamp` | `list` |
| `lamppost` | `list` |
| `lampshade` | `list` |
| `lantern` | `list` |
| `laptop_computer` | `list` |
| `lasagna` | `list` |
| `latch` | `list` |
| `lawn_mower` | `list` |
| `leather` | `list` |
| `legging_(clothing)` | `list` |
| `legume` | `list` |
| `lemon` | `list` |
| `lemonade` | `list` |
| `lettuce` | `list` |
| `license_plate` | `list` |
| `life_buoy` | `list` |
| `life_jacket` | `list` |
| `lightbulb` | `list` |
| `lightning_rod` | `list` |
| `lime` | `list` |
| `limousine` | `list` |
| `lion` | `list` |
| `lip_balm` | `list` |
| `liquor` | `list` |
| `lizard` | `list` |
| `locker` | `list` |
| `log` | `list` |
| `lollipop` | `list` |
| `loveseat` | `list` |
| `machine_gun` | `list` |
| `magazine` | `list` |
| `magnet` | `list` |
| `mail_slot` | `list` |
| `mailbox_(at_home)` | `list` |
| `mallard` | `list` |
| `mallet` | `list` |
| `mammoth` | `list` |
| `manatee` | `list` |
| `mandarin_orange` | `list` |
| `manger` | `list` |
| `manhole` | `list` |
| `map` | `list` |
| `marker` | `list` |
| `martini` | `list` |
| `mascot` | `list` |
| `mashed_potato` | `list` |
| `mask` | `list` |
| `mast` | `list` |
| `mat_(gym_equipment)` | `list` |
| `matchbox` | `list` |
| `mattress` | `list` |
| `measuring_cup` | `list` |
| `measuring_stick` | `list` |
| `meatball` | `list` |
| `medicine` | `list` |
| `melon` | `list` |
| `microphone` | `list` |
| `microscope` | `list` |
| `microwave_oven` | `list` |
| `milestone` | `list` |
| `milk` | `list` |
| `milk_can` | `list` |
| `milkshake` | `list` |
| `minivan` | `list` |
| `mint_candy` | `list` |
| `mirror` | `list` |
| `mitten` | `list` |
| `mixer_(kitchen_tool)` | `list` |
| `money` | `list` |
| `monitor_(computer_equipment) computer_monitor` | `list` |
| `monkey` | `list` |
| `mop` | `list` |
| `motor` | `list` |
| `motor_scooter` | `list` |
| `motor_vehicle` | `list` |
| `motorcycle` | `list` |
| `mound_(baseball)` | `list` |
| `mouse_(computer_equipment)` | `list` |
| `mousepad` | `list` |
| `muffin` | `list` |
| `mug` | `list` |
| `mushroom` | `list` |
| `music_stool` | `list` |
| `musical_instrument` | `list` |
| `nailfile` | `list` |
| `napkin` | `list` |
| `neckerchief` | `list` |
| `necklace` | `list` |
| `necktie` | `list` |
| `needle` | `list` |
| `nest` | `list` |
| `newspaper` | `list` |
| `newsstand` | `list` |
| `nightshirt` | `list` |
| `notebook` | `list` |
| `notepad` | `list` |
| `nut` | `list` |
| `nutcracker` | `list` |
| `oar` | `list` |
| `octopus_(animal)` | `list` |
| `octopus_(food)` | `list` |
| `oil_lamp` | `list` |
| `olive_oil` | `list` |
| `omelet` | `list` |
| `onion` | `list` |
| `orange_(fruit)` | `list` |
| `orange_juice` | `list` |
| `ostrich` | `list` |
| `ottoman` | `list` |
| `oven` | `list` |
| `overalls_(clothing)` | `list` |
| `owl` | `list` |
| `pacifier` | `list` |
| `packet` | `list` |
| `paddle` | `list` |
| `padlock` | `list` |
| `paintbrush` | `list` |
| `painting` | `list` |
| `pajamas` | `list` |
| `palette` | `list` |
| `pan_(for_cooking)` | `list` |
| `pan_(metal_container)` | `list` |
| `pancake` | `list` |
| `papaya` | `list` |
| `paper_plate` | `list` |
| `paper_towel` | `list` |
| `paperback_book` | `list` |
| `paperweight` | `list` |
| `parachute` | `list` |
| `parakeet` | `list` |
| `parasail_(sports)` | `list` |
| `parasol` | `list` |
| `parchment` | `list` |
| `parka` | `list` |
| `parking_meter` | `list` |
| `parrot` | `list` |
| `passenger_car_(part_of_a_train)` | `list` |
| `passenger_ship` | `list` |
| `passport` | `list` |
| `pastry` | `list` |
| `patty_(food)` | `list` |
| `pea_(food)` | `list` |
| `peach` | `list` |
| `peanut_butter` | `list` |
| `pear` | `list` |
| `peeler_(tool_for_fruit_and_vegetables)` | `list` |
| `pegboard` | `list` |
| `pelican` | `list` |
| `pen` | `list` |
| `pencil` | `list` |
| `pencil_box` | `list` |
| `pencil_sharpener` | `list` |
| `pendulum` | `list` |
| `penguin` | `list` |
| `pennant` | `list` |
| `penny_(coin)` | `list` |
| `pepper` | `list` |
| `pepper_mill` | `list` |
| `perfume` | `list` |
| `persimmon` | `list` |
| `person` | `list` |
| `pet` | `list` |
| `pew_(church_bench)` | `list` |
| `phonebook` | `list` |
| `phonograph_record` | `list` |
| `piano` | `list` |
| `pickle` | `list` |
| `pickup_truck` | `list` |
| `pie` | `list` |
| `pigeon` | `list` |
| `piggy_bank` | `list` |
| `pillow` | `list` |
| `pineapple` | `list` |
| `pinecone` | `list` |
| `ping-pong_ball` | `list` |
| `pinwheel` | `list` |
| `pipe` | `list` |
| `pipe_bowl` | `list` |
| `pirate_flag` | `list` |
| `pistol` | `list` |
| `pita_(bread)` | `list` |
| `pitcher_(vessel_for_liquid)` | `list` |
| `pitchfork` | `list` |
| `pizza` | `list` |
| `place_mat` | `list` |
| `plastic_bag` | `list` |
| `plate` | `list` |
| `platter` | `list` |
| `playpen` | `list` |
| `pliers` | `list` |
| `plow_(farm_equipment)` | `list` |
| `plume` | `list` |
| `pocket_watch` | `list` |
| `pocketknife` | `list` |
| `poker_(fire_stirring_tool)` | `list` |
| `poker_chip` | `list` |
| `polar_bear` | `list` |
| `pole` | `list` |
| `police_cruiser` | `list` |
| `polo_shirt` | `list` |
| `poncho` | `list` |
| `pony` | `list` |
| `pool_table` | `list` |
| `pop_(soda)` | `list` |
| `popsicle` | `list` |
| `postbox_(public)` | `list` |
| `postcard` | `list` |
| `poster` | `list` |
| `pot` | `list` |
| `potato` | `list` |
| `potholder` | `list` |
| `pottery` | `list` |
| `pouch` | `list` |
| `power_shovel` | `list` |
| `prawn` | `list` |
| `pretzel` | `list` |
| `printer` | `list` |
| `projectile_(weapon)` | `list` |
| `projector` | `list` |
| `propeller` | `list` |
| `prune` | `list` |
| `pudding` | `list` |
| `puffer_(fish)` | `list` |
| `puffin` | `list` |
| `pug-dog` | `list` |
| `pumpkin` | `list` |
| `puncher` | `list` |
| `puppet` | `list` |
| `puppy` | `list` |
| `quesadilla` | `list` |
| `quiche` | `list` |
| `quilt` | `list` |
| `rabbit` | `list` |
| `race_car` | `list` |
| `racket` | `list` |
| `radar` | `list` |
| `radiator` | `list` |
| `radio_receiver` | `list` |
| `radish` | `list` |
| `raft` | `list` |
| `rag_doll` | `list` |
| `railcar_(part_of_a_train)` | `list` |
| `raincoat` | `list` |
| `ram_(animal)` | `list` |
| `raspberry` | `list` |
| `rat` | `list` |
| `reamer_(juicer)` | `list` |
| `rearview_mirror` | `list` |
| `receipt` | `list` |
| `recliner` | `list` |
| `record_player` | `list` |
| `reflector` | `list` |
| `refrigerator` | `list` |
| `remote_control` | `list` |
| `rhinoceros` | `list` |
| `rib_(food)` | `list` |
| `rifle` | `list` |
| `ring` | `list` |
| `river_boat` | `list` |
| `road_map` | `list` |
| `robe` | `list` |
| `rocking_chair` | `list` |
| `rodent` | `list` |
| `roller_skate` | `list` |
| `rolling_pin` | `list` |
| `root_beer` | `list` |
| `router_(computer_equipment)` | `list` |
| `rubber_band` | `list` |
| `runner_(carpet)` | `list` |
| `saddle_(on_an_animal)` | `list` |
| `saddle_blanket` | `list` |
| `saddlebag` | `list` |
| `safety_pin` | `list` |
| `sail` | `list` |
| `salad` | `list` |
| `salad_plate` | `list` |
| `salami` | `list` |
| `salmon_(fish)` | `list` |
| `salmon_(food)` | `list` |
| `salsa` | `list` |
| `saltshaker` | `list` |
| `sandal_(type_of_shoe)` | `list` |
| `sandwich` | `list` |
| `satchel` | `list` |
| `saucepan` | `list` |
| `saucer` | `list` |
| `sausage` | `list` |
| `sawhorse` | `list` |
| `saxophone` | `list` |
| `scale_(measuring_instrument)` | `list` |
| `scarecrow` | `list` |
| `scarf` | `list` |
| `school_bus` | `list` |
| `scissors` | `list` |
| `scoreboard` | `list` |
| `scraper` | `list` |
| `screwdriver` | `list` |
| `scrubbing_brush` | `list` |
| `sculpture` | `list` |
| `seabird` | `list` |
| `seahorse` | `list` |
| `seaplane` | `list` |
| `seashell` | `list` |
| `sewing_machine` | `list` |
| `shaker` | `list` |
| `shampoo` | `list` |
| `shark` | `list` |
| `sharpener` | `list` |
| `shaver_(electric)` | `list` |
| `shaving_cream` | `list` |
| `shawl` | `list` |
| `shears` | `list` |
| `sheep` | `list` |
| `shepherd_dog` | `list` |
| `sherbert` | `list` |
| `shield` | `list` |
| `shirt` | `list` |
| `shoe` | `list` |
| `shopping_bag` | `list` |
| `shopping_cart` | `list` |
| `short_pants` | `list` |
| `shot_glass` | `list` |
| `shoulder_bag` | `list` |
| `shovel` | `list` |
| `shower_cap` | `list` |
| `shower_curtain` | `list` |
| `shower_head` | `list` |
| `shredder_(for_paper)` | `list` |
| `signboard` | `list` |
| `silo` | `list` |
| `sink` | `list` |
| `skateboard` | `list` |
| `skewer` | `list` |
| `ski` | `list` |
| `ski_boot` | `list` |
| `ski_parka` | `list` |
| `ski_pole` | `list` |
| `skirt` | `list` |
| `skullcap` | `list` |
| `sled` | `list` |
| `sleeping_bag` | `list` |
| `slide` | `list` |
| `slipper_(footwear)` | `list` |
| `smoothie` | `list` |
| `snake` | `list` |
| `snowboard` | `list` |
| `snowman` | `list` |
| `snowmobile` | `list` |
| `soap` | `list` |
| `soccer_ball` | `list` |
| `sock` | `list` |
| `sofa` | `list` |
| `sofa_bed` | `list` |
| `softball` | `list` |
| `solar_array` | `list` |
| `sombrero` | `list` |
| `soup` | `list` |
| `soup_bowl` | `list` |
| `soupspoon` | `list` |
| `soya_milk` | `list` |
| `space_shuttle` | `list` |
| `sparkler_(fireworks)` | `list` |
| `spatula` | `list` |
| `speaker_(stero_equipment)` | `list` |
| `spear` | `list` |
| `spectacles` | `list` |
| `spice_rack` | `list` |
| `spider` | `list` |
| `sponge` | `list` |
| `spoon` | `list` |
| `sportswear` | `list` |
| `spotlight` | `list` |
| `squid_(food)` | `list` |
| `squirrel` | `list` |
| `stagecoach` | `list` |
| `stapler_(stapling_machine)` | `list` |
| `starfish` | `list` |
| `statue_(sculpture)` | `list` |
| `steak_(food)` | `list` |
| `steak_knife` | `list` |
| `steering_wheel` | `list` |
| `step_stool` | `list` |
| `stepladder` | `list` |
| `stereo_(sound_system)` | `list` |
| `stew` | `list` |
| `stirrer` | `list` |
| `stirrup` | `list` |
| `stool` | `list` |
| `stop_sign` | `list` |
| `stove` | `list` |
| `strainer` | `list` |
| `strap` | `list` |
| `straw_(for_drinking)` | `list` |
| `strawberry` | `list` |
| `street_sign` | `list` |
| `streetlight` | `list` |
| `string_cheese` | `list` |
| `stylus` | `list` |
| `subwoofer` | `list` |
| `sugar_bowl` | `list` |
| `sugarcane_(plant)` | `list` |
| `suit_(clothing)` | `list` |
| `suitcase` | `list` |
| `sunflower` | `list` |
| `sunglasses` | `list` |
| `sunhat` | `list` |
| `surfboard` | `list` |
| `sushi` | `list` |
| `suspenders` | `list` |
| `sweat_pants` | `list` |
| `sweatband` | `list` |
| `sweater` | `list` |
| `sweatshirt` | `list` |
| `sweet_potato` | `list` |
| `swimsuit` | `list` |
| `sword` | `list` |
| `syringe` | `list` |
| `table` | `list` |
| `table-tennis_table` | `list` |
| `table_lamp` | `list` |
| `tablecloth` | `list` |
| `tachometer` | `list` |
| `taco` | `list` |
| `tag` | `list` |
| `taillight` | `list` |
| `tambourine` | `list` |
| `tank_(storage_vessel)` | `list` |
| `tank_top_(clothing)` | `list` |
| `tape_(sticky_cloth_or_paper)` | `list` |
| `tape_measure` | `list` |
| `tapestry` | `list` |
| `tarp` | `list` |
| `tartan` | `list` |
| `tassel` | `list` |
| `teacup` | `list` |
| `teakettle` | `list` |
| `teapot` | `list` |
| `teddy_bear` | `list` |
| `telephone` | `list` |
| `telephone_booth` | `list` |
| `telephone_pole` | `list` |
| `telephoto_lens` | `list` |
| `television_camera` | `list` |
| `television_set` | `list` |
| `tennis_ball` | `list` |
| `tennis_racket` | `list` |
| `tequila` | `list` |
| `thermometer` | `list` |
| `thermos_bottle` | `list` |
| `thermostat` | `list` |
| `thimble` | `list` |
| `thread` | `list` |
| `thumbtack` | `list` |
| `tiara` | `list` |
| `tiger` | `list` |
| `tights_(clothing)` | `list` |
| `timer` | `list` |
| `tinfoil` | `list` |
| `tinsel` | `list` |
| `tissue_paper` | `list` |
| `toast_(food)` | `list` |
| `toaster` | `list` |
| `toaster_oven` | `list` |
| `tobacco_pipe` | `list` |
| `toilet` | `list` |
| `toilet_tissue` | `list` |
| `tomato` | `list` |
| `tongs` | `list` |
| `toolbox` | `list` |
| `toothbrush` | `list` |
| `toothpaste` | `list` |
| `toothpick` | `list` |
| `tortilla` | `list` |
| `tote_bag` | `list` |
| `tow_truck` | `list` |
| `towel` | `list` |
| `towel_rack` | `list` |
| `toy` | `list` |
| `tractor_(farm_equipment)` | `list` |
| `traffic_light` | `list` |
| `trailer_truck` | `list` |
| `train_(railroad_vehicle)` | `list` |
| `trampoline` | `list` |
| `trash_can` | `list` |
| `tray` | `list` |
| `trench_coat` | `list` |
| `triangle_(musical_instrument)` | `list` |
| `tricycle` | `list` |
| `tripod` | `list` |
| `trophy_cup` | `list` |
| `trousers` | `list` |
| `truck` | `list` |
| `truffle_(chocolate)` | `list` |
| `trunk` | `list` |
| `turban` | `list` |
| `turkey_(food)` | `list` |
| `turnip` | `list` |
| `turtle` | `list` |
| `turtleneck_(clothing)` | `list` |
| `tux` | `list` |
| `typewriter` | `list` |
| `umbrella` | `list` |
| `underdrawers` | `list` |
| `underwear` | `list` |
| `unicycle` | `list` |
| `urinal` | `list` |
| `urn` | `list` |
| `vacuum_cleaner` | `list` |
| `vase` | `list` |
| `veil` | `list` |
| `vending_machine` | `list` |
| `vent` | `list` |
| `vest` | `list` |
| `videotape` | `list` |
| `vinegar` | `list` |
| `violin` | `list` |
| `visor` | `list` |
| `vodka` | `list` |
| `volleyball` | `list` |
| `vulture` | `list` |
| `waffle` | `list` |
| `waffle_iron` | `list` |
| `wagon` | `list` |
| `wagon_wheel` | `list` |
| `walking_cane` | `list` |
| `walking_stick` | `list` |
| `wall_clock` | `list` |
| `wall_socket` | `list` |
| `wallet` | `list` |
| `walrus` | `list` |
| `wardrobe` | `list` |
| `washbasin` | `list` |
| `watch` | `list` |
| `water_bottle` | `list` |
| `water_cooler` | `list` |
| `water_faucet` | `list` |
| `water_gun` | `list` |
| `water_heater` | `list` |
| `water_jug` | `list` |
| `water_scooter` | `list` |
| `water_ski` | `list` |
| `water_tower` | `list` |
| `watering_can` | `list` |
| `watermelon` | `list` |
| `weathervane` | `list` |
| `webcam` | `list` |
| `wedding_cake` | `list` |
| `wedding_ring` | `list` |
| `wet_suit` | `list` |
| `wheel` | `list` |
| `wheelchair` | `list` |
| `whipped_cream` | `list` |
| `wig` | `list` |
| `wind_chime` | `list` |
| `windmill` | `list` |
| `window_box_(for_plants)` | `list` |
| `windsock` | `list` |
| `wine_bottle` | `list` |
| `wine_bucket` | `list` |
| `wineglass` | `list` |
| `wok` | `list` |
| `wolf` | `list` |
| `wooden_leg` | `list` |
| `wooden_spoon` | `list` |
| `wreath` | `list` |
| `wrench` | `list` |
| `wristband` | `list` |
| `wristlet` | `list` |
| `yacht` | `list` |
| `yogurt` | `list` |
| `zebra` | `list` |
| `zucchini` | `list` |
|
argilla/FinePersonas-v0.1
|
argilla
|
task_categories:text-generation, language:en, license:llama3, size_categories:10M<n<100M, format:parquet, modality:text, library:datasets, library:dask, library:mlcroissant, library:polars, library:distilabel, arxiv:2406.20094, region:us, synthetic, distilabel
|
community
|
# Dataset: `argilla/FinePersonas-v0.1`
## 📝 Metadata
- **Author/Owner:** argilla
- **Downloads:** 9132
- **Likes:** 407
- **Tags:** task_categories:text-generation, language:en, license:llama3, size_categories:10M<n<100M, format:parquet, modality:text, library:datasets, library:dask, library:mlcroissant, library:polars, library:distilabel, arxiv:2406.20094, region:us, synthetic, distilabel
- **License:** Not specified
## 📖 Description
```text
FinePersonas
Open dataset of 21 Million detailed personas for diverse and controllable synthetic text generation.
FinePersonas contains detailed personas for creating customized, realistic synthetic data.
With this dataset, AI researchers and engineers can easily integrate unique persona traits into text generation systems, enhancing the richness, diversity, and specificity of synthetic outputs without the complexity of crafting detailed attributes from… See the full description on the dataset page: https://huggingface.co/datasets/argilla/FinePersonas-v0.1....
```
## 📂 File System Sample
- `.gitattributes`
- `LICENSE`
- `README.md`
- `data/train-00000-of-00012.parquet`
- `data/train-00001-of-00012.parquet`
- `data/train-00002-of-00012.parquet`
- `data/train-00003-of-00012.parquet`
- `data/train-00004-of-00012.parquet`
- `data/train-00005-of-00012.parquet`
- `data/train-00006-of-00012.parquet`
- `data/train-00007-of-00012.parquet`
- `data/train-00008-of-00012.parquet`
- `data/train-00009-of-00012.parquet`
- `data/train-00010-of-00012.parquet`
- `data/train-00011-of-00012.parquet`
- ... and more.
## 📊 Data Structure
### Config: `default`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `id` | `str` |
| `persona` | `str` |
| `labels` | `str` |
### Config: `embeddings`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `id` | `str` |
| `model_name_embeddings` | `str` |
| `embedding` | `list` |
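A hedged loading sketch based only on the two configs listed above (`default` for persona text, `embeddings` for per-persona vectors keyed by `id`); it is not taken from the dataset card:

```python
from datasets import load_dataset

# `default` holds the persona text and its free-form labels.
personas = load_dataset(
    "argilla/FinePersonas-v0.1", "default", split="train", streaming=True
)
first = next(iter(personas))
print(first["id"], first["labels"])
print(first["persona"][:200])

# `embeddings` stores one vector per persona `id`, which can be joined back
# onto the `default` config for similarity search over the personas.
embeddings = load_dataset(
    "argilla/FinePersonas-v0.1", "embeddings", split="train", streaming=True
)
vec = next(iter(embeddings))
print(vec["model_name_embeddings"], len(vec["embedding"]))
```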
|
dair-ai/emotion
|
dair-ai
|
task_categories:text-classification, task_ids:multi-class-classification, annotations_creators:machine-generated, language_creators:machine-generated, multilinguality:monolingual, source_datasets:original, language:en, license:other, size_categories:100K<n<1M, format:parquet, modality:text, library:datasets, library:pandas, library:mlcroissant, library:polars, region:us, emotion-classification
|
community
|
# Dataset: `dair-ai/emotion`
## 📝 Metadata
- **Author/Owner:** dair-ai
- **Downloads:** 30228
- **Likes:** 404
- **Tags:** task_categories:text-classification, task_ids:multi-class-classification, annotations_creators:machine-generated, language_creators:machine-generated, multilinguality:monolingual, source_datasets:original, language:en, license:other, size_categories:100K<n<1M, format:parquet, modality:text, library:datasets, library:pandas, library:mlcroissant, library:polars, region:us, emotion-classification
- **License:** Not specified
## 📖 Description
```text
Dataset Card for "emotion"
Dataset Summary
Emotion is a dataset of English Twitter messages with six basic emotions: anger, fear, joy, love, sadness, and surprise. For more detailed information please refer to the paper.
Supported Tasks and Leaderboards
More Information Needed
Languages
More Information Needed
Dataset Structure
Data Instances
An example looks as follows.
{
"text": "im feeling quite sad and sorry for myself but… See the full description on the dataset page: https://huggingface.co/datasets/dair-ai/emotion....
```
## 📂 File System Sample
- `.gitattributes`
- `README.md`
- `split/test-00000-of-00001.parquet`
- `split/train-00000-of-00001.parquet`
- `split/validation-00000-of-00001.parquet`
- `unsplit/train-00000-of-00001.parquet`
## 📊 Data Structure
### Config: `split`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `text` | `str` |
| `label` | `int` |
#### Split: `validation`
| Column Name | Data Type |
|---|---|
| `text` | `str` |
| `label` | `int` |
#### Split: `test`
| Column Name | Data Type |
|---|---|
| `text` | `str` |
| `label` | `int` |
### Config: `unsplit`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `text` | `str` |
| `label` | `int` |
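A small usage sketch for the two configs above; the mapping from the integer `label` back to an emotion name assumes the column is a `ClassLabel` feature, which the schema table does not confirm:

```python
from datasets import load_dataset

# `split` ships train/validation/test; `unsplit` is one large train set.
ds = load_dataset("dair-ai/emotion", "split")
print(ds)  # DatasetDict with train / validation / test

example = ds["train"][0]
# Assumption: `label` is a ClassLabel feature; otherwise keep the raw integer.
label_feature = ds["train"].features["label"]
name = (
    label_feature.int2str(example["label"])
    if hasattr(label_feature, "int2str")
    else example["label"]
)
print(example["text"], "->", name)
```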
|
garage-bAInd/Open-Platypus
|
garage-bAInd
|
language:en, size_categories:10K<n<100K, format:parquet, modality:text, library:datasets, library:pandas, library:mlcroissant, library:polars, arxiv:2308.07317, arxiv:2305.20050, arxiv:2305.12524, region:us
|
community
|
# Dataset: `garage-bAInd/Open-Platypus`
## 📝 Metadata
- **Author/Owner:** garage-bAInd
- **Downloads:** 3516
- **Likes:** 404
- **Tags:** language:en, size_categories:10K<n<100K, format:parquet, modality:text, library:datasets, library:pandas, library:mlcroissant, library:polars, arxiv:2308.07317, arxiv:2305.20050, arxiv:2305.12524, region:us
- **License:** Not specified
## 📖 Description
```text
Open-Platypus
This dataset is focused on improving LLM logical reasoning skills and was used to train the Platypus2 models. It is comprised of the following datasets, which were filtered using keyword search and then Sentence Transformers to remove questions with a similarity above 80%:
Dataset Name
License Type
PRM800K
MIT
MATH
MIT
ScienceQA
Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International
SciBench
MIT
ReClor
Non-commercial
TheoremQA
MIT… See the full description on the dataset page: https://huggingface.co/datasets/garage-bAInd/Open-Platypus....
```
## 📂 File System Sample
- `.gitattributes`
- `README.md`
- `data/train-00000-of-00001-4fe2df04669d1669.parquet`
## 📊 Data Structure
### Config: `default`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `input` | `str` |
| `output` | `str` |
| `instruction` | `str` |
| `data_source` | `str` |
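One common way to consume the `instruction` / `input` / `output` columns above is to render them into instruction-tuning prompts; this is a hedged sketch, not the Platypus2 training recipe:

```python
from datasets import load_dataset

ds = load_dataset("garage-bAInd/Open-Platypus", split="train")

def to_prompt(row: dict) -> str:
    # Assemble a simple instruction-tuning prompt; `input` is often empty.
    if row["input"]:
        return (
            f"### Instruction:\n{row['instruction']}\n\n"
            f"### Input:\n{row['input']}\n\n### Response:\n{row['output']}"
        )
    return f"### Instruction:\n{row['instruction']}\n\n### Response:\n{row['output']}"

print(to_prompt(ds[0])[:500])
print("source dataset:", ds[0]["data_source"])
```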
|
allenai/WildChat-1M
|
allenai
|
task_categories:text-generation, task_categories:question-answering, task_categories:text2text-generation, license:odc-by, size_categories:100K<n<1M, format:parquet, modality:text, library:datasets, library:dask, library:mlcroissant, library:polars, arxiv:2405.01470, arxiv:2409.03753, arxiv:2406.13706, region:us, instruction-finetuning
|
community
|
# Dataset: `allenai/WildChat-1M`
## 📝 Metadata
- **Author/Owner:** allenai
- **Downloads:** 13556
- **Likes:** 396
- **Tags:** task_categories:text-generation, task_categories:question-answering, task_categories:text2text-generation, license:odc-by, size_categories:100K<n<1M, format:parquet, modality:text, library:datasets, library:dask, library:mlcroissant, library:polars, arxiv:2405.01470, arxiv:2409.03753, arxiv:2406.13706, region:us, instruction-finetuning
- **License:** Not specified
## 📖 Description
```text
Dataset Card for WildChat
Dataset Description
Paper: https://arxiv.org/abs/2405.01470
Interactive Search Tool: https://wildvisualizer.com (paper)
License: ODC-BY
Language(s) (NLP): multi-lingual
Point of Contact: Yuntian Deng
Dataset Summary
WildChat is a collection of 1 million conversations between human users and ChatGPT, alongside demographic data, including state, country, hashed IP addresses, and request headers. We collected WildChat by… See the full description on the dataset page: https://huggingface.co/datasets/allenai/WildChat-1M....
```
## 📂 File System Sample
- `.gitattributes`
- `LICENSE.md`
- `README.md`
- `data/train-00000-of-00014.parquet`
- `data/train-00001-of-00014.parquet`
- `data/train-00002-of-00014.parquet`
- `data/train-00003-of-00014.parquet`
- `data/train-00004-of-00014.parquet`
- `data/train-00005-of-00014.parquet`
- `data/train-00006-of-00014.parquet`
- `data/train-00007-of-00014.parquet`
- `data/train-00008-of-00014.parquet`
- `data/train-00009-of-00014.parquet`
- `data/train-00010-of-00014.parquet`
- `data/train-00011-of-00014.parquet`
- ... and more.
## 📊 Data Structure
### Config: `default`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `conversation_hash` | `str` |
| `model` | `str` |
| `timestamp` | `datetime` |
| `conversation` | `list` |
| `turn` | `int` |
| `language` | `str` |
| `openai_moderation` | `list` |
| `detoxify_moderation` | `list` |
| `toxic` | `bool` |
| `redacted` | `bool` |
| `state` | `str` |
| `country` | `str` |
| `hashed_ip` | `str` |
| `header` | `dict` |
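A hedged filtering sketch over the columns above (it assumes you have accepted any access terms on the Hub and that `language` stores full names such as "English"):

```python
from datasets import load_dataset

# Stream the corpus and keep a few single-turn, non-toxic English conversations.
ds = load_dataset("allenai/WildChat-1M", split="train", streaming=True)

kept = []
for row in ds:
    if row["language"] == "English" and not row["toxic"] and row["turn"] == 1:
        kept.append(row["conversation"])  # list of message dicts
    if len(kept) >= 5:
        break

print(len(kept), "conversations collected")
```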
|
amphion/Emilia-Dataset
|
amphion
|
task_categories:text-to-speech, task_categories:automatic-speech-recognition, language:zh, language:en, language:ja, language:fr, language:de, language:ko, license:cc-by-4.0, size_categories:10M<n<100M, format:webdataset, modality:audio, modality:text, library:datasets, library:webdataset, library:mlcroissant, arxiv:2407.05361, arxiv:2501.15907, region:us
|
community
|
# Dataset: `amphion/Emilia-Dataset`
## 📝 Metadata
- **Author/Owner:** amphion
- **Downloads:** 85071
- **Likes:** 392
- **Tags:** task_categories:text-to-speech, task_categories:automatic-speech-recognition, language:zh, language:en, language:ja, language:fr, language:de, language:ko, license:cc-by-4.0, size_categories:10M<n<100M, format:webdataset, modality:audio, modality:text, library:datasets, library:webdataset, library:mlcroissant, arxiv:2407.05361, arxiv:2501.15907, region:us
- **License:** Not specified
## 📖 Description
```text
Emilia: An Extensive, Multilingual, and Diverse Speech Dataset for Large-Scale Speech Generation
This is the official repository 👑 for the Emilia dataset and the source code for the Emilia-Pipe speech data preprocessing pipeline.
News 🔥
2025/02/26: The Emilia-Large dataset, featuring over 200,000 hours of data, is now available!!! Emilia-Large combines the original 101k-hour Emilia dataset (licensed under CC BY-NC 4.0) with the brand-new 114k-hour Emilia-YODAS… See the full description on the dataset page: https://huggingface.co/datasets/amphion/Emilia-Dataset....
```
## 📂 File System Sample
- `.gitattributes`
- `Emilia-YODAS/DE/DE-B000000.tar`
- `Emilia-YODAS/DE/DE-B000001.tar`
- `Emilia-YODAS/DE/DE-B000002.tar`
- `Emilia-YODAS/DE/DE-B000003.tar`
- `Emilia-YODAS/DE/DE-B000004.tar`
- `Emilia-YODAS/DE/DE-B000005.tar`
- `Emilia-YODAS/DE/DE-B000006.tar`
- `Emilia-YODAS/DE/DE-B000007.tar`
- `Emilia-YODAS/DE/DE-B000008.tar`
- `Emilia-YODAS/DE/DE-B000009.tar`
- `Emilia-YODAS/DE/DE-B000010.tar`
- `Emilia-YODAS/DE/DE-B000011.tar`
- `Emilia-YODAS/DE/DE-B000012.tar`
- `Emilia-YODAS/DE/DE-B000013.tar`
- ... and more.
## 📊 Data Structure
**Graceful Failure:**
```
Could not inspect the dataset's structure.
This is common for complex datasets that require executing remote code, which is disabled for stability.
Details: Dataset 'amphion/Emilia-Dataset' is a gated dataset on the Hub. Visit the dataset page at https://huggingface.co/datasets/amphion/Emilia-Dataset to ask for access.
```
|
Skywork/SkyPile-150B
|
Skywork
|
task_categories:text-generation, language:zh, size_categories:1M<n<10M, format:json, modality:text, library:datasets, library:dask, library:mlcroissant, library:polars, arxiv:2310.19341, region:us, llm , casual-lm, language-modeling
|
community
|
# Dataset: `Skywork/SkyPile-150B`
## 📝 Metadata
- **Author/Owner:** Skywork
- **Downloads:** 2204
- **Likes:** 391
- **Tags:** task_categories:text-generation, language:zh, size_categories:1M<n<10M, format:json, modality:text, library:datasets, library:dask, library:mlcroissant, library:polars, arxiv:2310.19341, region:us, llm , casual-lm, language-modeling
- **License:** Not specified
## 📖 Description
```text
SkyPile-150B
Dataset Summary
SkyPile-150B is a comprehensive, large-scale Chinese dataset specifically designed for the pre-training of large language models. It is derived from a broad array of publicly accessible Chinese Internet web pages. Rigorous filtering, extensive deduplication, and thorough sensitive data filtering have been employed to ensure its quality. Furthermore, we have utilized advanced tools such as fastText and BERT to filter out low-quality data.
The… See the full description on the dataset page: https://huggingface.co/datasets/Skywork/SkyPile-150B....
```
## 📂 File System Sample
- `.gitattributes`
- `README.md`
- `Skywork Community License.pdf`
- `Skywork 模型社区许可协议.pdf`
- `data/2020-40_zh_head_0000.jsonl`
- `data/2020-40_zh_head_0001.jsonl`
- `data/2020-40_zh_head_0002.jsonl`
- `data/2020-40_zh_head_0003.jsonl`
- `data/2020-40_zh_head_0004.jsonl`
- `data/2020-40_zh_head_0005.jsonl`
- `data/2020-40_zh_head_0006.jsonl`
- `data/2020-40_zh_head_0007.jsonl`
- `data/2020-40_zh_head_0008.jsonl`
- `data/2020-40_zh_head_0009.jsonl`
- `data/2020-40_zh_head_0010.jsonl`
- ... and more.
## 📊 Data Structure
### Config: `default`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `text` | `str` |
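Since the corpus is a single `text` column spread over many JSONL shards, streaming is the natural access pattern; a minimal sketch (assuming any access terms on the Hub are accepted):

```python
from datasets import load_dataset

# Stream rather than downloading the full set of JSONL shards.
ds = load_dataset("Skywork/SkyPile-150B", split="train", streaming=True)
for row in ds.take(3):
    print(len(row["text"]), row["text"][:80])
```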
|
shibing624/medical
|
shibing624
|
task_categories:text-generation, language:zh, license:apache-2.0, size_categories:n<1K, region:us, text-generation
|
community
|
# Dataset: `shibing624/medical`
## 📝 Metadata
- **Author/Owner:** shibing624
- **Downloads:** 577
- **Likes:** 389
- **Tags:** task_categories:text-generation, language:zh, license:apache-2.0, size_categories:n<1K, region:us, text-generation
- **License:** Not specified
## 📖 Description
```text
Plain-text data: a Chinese medical dataset comprising encyclopedia data for pre-training, instruction fine-tuning data, and reward-model data....
```
## 📂 File System Sample
- `.gitattributes`
- `README.md`
- `finetune/test_en_1.json`
- `finetune/test_zh_0.json`
- `finetune/train_en_1.json`
- `finetune/train_zh_0.json`
- `finetune/valid_en_1.json`
- `finetune/valid_zh_0.json`
- `medical.py`
- `pretrain/medical_book_zh.json`
- `pretrain/test_encyclopedia.json`
- `pretrain/train_encyclopedia.json`
- `pretrain/valid_encyclopedia.json`
- `reward/test.json`
- `reward/train.json`
- ... and more.
## 📊 Data Structure
**Graceful Failure:**
```
Could not inspect the dataset's structure.
This is common for complex datasets that require executing remote code, which is disabled for stability.
Details: Dataset scripts are no longer supported, but found medical.py
```
|
HuggingFaceTB/smollm-corpus
|
HuggingFaceTB
|
language:en, license:odc-by, size_categories:100M<n<1B, format:parquet, modality:tabular, modality:text, library:datasets, library:dask, library:mlcroissant, library:polars, region:us
|
community
|
# Dataset: `HuggingFaceTB/smollm-corpus`
## 📝 Metadata
- **Author/Owner:** HuggingFaceTB
- **Downloads:** 16865
- **Likes:** 388
- **Tags:** language:en, license:odc-by, size_categories:100M<n<1B, format:parquet, modality:tabular, modality:text, library:datasets, library:dask, library:mlcroissant, library:polars, region:us
- **License:** Not specified
## 📖 Description
```text
SmolLM-Corpus
This dataset is a curated collection of high-quality educational and synthetic data designed for training small language models.
You can find more details about the models trained on this dataset in our SmolLM blog post.
Dataset subsets
Cosmopedia v2
Cosmopedia v2 is an enhanced version of Cosmopedia, the largest synthetic dataset for pre-training, consisting of over 39 million textbooks, blog posts, and stories generated by… See the full description on the dataset page: https://huggingface.co/datasets/HuggingFaceTB/smollm-corpus....
```
## 📂 File System Sample
- `.gitattributes`
- `README.md`
- `cosmopedia-v2/train-00000-of-00104.parquet`
- `cosmopedia-v2/train-00001-of-00104.parquet`
- `cosmopedia-v2/train-00002-of-00104.parquet`
- `cosmopedia-v2/train-00003-of-00104.parquet`
- `cosmopedia-v2/train-00004-of-00104.parquet`
- `cosmopedia-v2/train-00005-of-00104.parquet`
- `cosmopedia-v2/train-00006-of-00104.parquet`
- `cosmopedia-v2/train-00007-of-00104.parquet`
- `cosmopedia-v2/train-00008-of-00104.parquet`
- `cosmopedia-v2/train-00009-of-00104.parquet`
- `cosmopedia-v2/train-00010-of-00104.parquet`
- `cosmopedia-v2/train-00011-of-00104.parquet`
- `cosmopedia-v2/train-00012-of-00104.parquet`
- ... and more.
## 📊 Data Structure
### Config: `cosmopedia-v2`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `prompt` | `str` |
| `text` | `str` |
| `token_length` | `int` |
| `audience` | `str` |
| `format` | `str` |
| `seed_data` | `str` |
### Config: `fineweb-edu-dedup`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `text` | `str` |
| `id` | `str` |
| `metadata` | `dict` |
### Config: `python-edu`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `blob_id` | `str` |
| `repo_name` | `str` |
| `path` | `str` |
| `length_bytes` | `int` |
| `score` | `float` |
| `int_score` | `int` |
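Each subset above is a separate config; a hedged sketch for streaming one of them (config names taken from the tables: `cosmopedia-v2`, `fineweb-edu-dedup`, `python-edu`):

```python
from datasets import load_dataset

ds = load_dataset(
    "HuggingFaceTB/smollm-corpus", "cosmopedia-v2", split="train", streaming=True
)
row = next(iter(ds))
print(row["audience"], row["format"], row["token_length"])
print(row["text"][:200])
```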
|
TIGER-Lab/MMLU-Pro
|
TIGER-Lab
|
task_categories:question-answering, language:en, license:mit, size_categories:10K<n<100K, format:parquet, modality:tabular, modality:text, library:datasets, library:pandas, library:mlcroissant, library:polars, arxiv:2406.01574, doi:10.57967/hf/2439, region:us, evaluation
|
community
|
# Dataset: `TIGER-Lab/MMLU-Pro`
## 📝 Metadata
- **Author/Owner:** TIGER-Lab
- **Downloads:** 52749
- **Likes:** 387
- **Tags:** task_categories:question-answering, language:en, license:mit, size_categories:10K<n<100K, format:parquet, modality:tabular, modality:text, library:datasets, library:pandas, library:mlcroissant, library:polars, arxiv:2406.01574, doi:10.57967/hf/2439, region:us, evaluation
- **License:** Not specified
## 📖 Description
```text
MMLU-Pro Dataset
MMLU-Pro dataset is a more robust and challenging massive multi-task understanding dataset tailored to more rigorously benchmark large language models' capabilities. This dataset contains 12K complex questions across various disciplines.
|Github | 🏆Leaderboard | 📖Paper |
🚀 What's New
[2025.10.25] Posted a consolidated note on Health-category issues and minor category updates (does not change overall micro-averaged scores; may slightly affect… See the full description on the dataset page: https://huggingface.co/datasets/TIGER-Lab/MMLU-Pro....
```
## 📂 File System Sample
- `.gitattributes`
- `README.md`
- `data/test-00000-of-00001.parquet`
- `data/validation-00000-of-00001.parquet`
- `run_claude3.py`
- `run_gpt4o.py`
## 📊 Data Structure
### Config: `default`
#### Split: `test`
| Column Name | Data Type |
|---|---|
| `question_id` | `int` |
| `question` | `str` |
| `options` | `list` |
| `answer` | `str` |
| `answer_index` | `int` |
| `cot_content` | `str` |
| `category` | `str` |
| `src` | `str` |
#### Split: `validation`
| Column Name | Data Type |
|---|---|
| `question_id` | `int` |
| `question` | `str` |
| `options` | `list` |
| `answer` | `str` |
| `answer_index` | `int` |
| `cot_content` | `str` |
| `category` | `str` |
| `src` | `str` |
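The schema above (free-form `options` plus `answer` / `answer_index`) lends itself to rendering lettered multiple-choice prompts; a minimal sketch, not the official evaluation harness:

```python
import string

from datasets import load_dataset

mmlu_pro = load_dataset("TIGER-Lab/MMLU-Pro", split="test")

def format_question(row: dict) -> str:
    # Render the options as lettered choices (A, B, C, ...).
    letters = string.ascii_uppercase
    choices = "\n".join(f"{letters[i]}. {opt}" for i, opt in enumerate(row["options"]))
    return f"[{row['category']}] {row['question']}\n{choices}"

row = mmlu_pro[0]
print(format_question(row))
print("gold:", row["answer"], "at index", row["answer_index"])
```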
|
openbmb/UltraFeedback
|
openbmb
|
task_categories:text-generation, language:en, license:mit, size_categories:10K<n<100K, format:json, modality:text, library:datasets, library:dask, library:mlcroissant, library:polars, arxiv:2310.01377, region:us
|
community
|
# Dataset: `openbmb/UltraFeedback`
## 📝 Metadata
- **Author/Owner:** openbmb
- **Downloads:** 1593
- **Likes:** 386
- **Tags:** task_categories:text-generation, language:en, license:mit, size_categories:10K<n<100K, format:json, modality:text, library:datasets, library:dask, library:mlcroissant, library:polars, arxiv:2310.01377, region:us
- **License:** Not specified
## 📖 Description
```text
Introduction
GitHub Repo
UltraRM-13b
UltraCM-13b
UltraFeedback is a large-scale, fine-grained, diverse preference dataset, used for training powerful reward models and critic models. We collect about 64k prompts from diverse resources (including UltraChat, ShareGPT, Evol-Instruct, TruthfulQA, FalseQA, and FLAN). We then use these prompts to query multiple LLMs (see Table for model lists) and generate 4 different responses for each prompt, resulting in a total of 256k samples.
To… See the full description on the dataset page: https://huggingface.co/datasets/openbmb/UltraFeedback....
```
## 📂 File System Sample
- `.gitattributes`
- `README.md`
- `evol_instruct.jsonl`
- `false_qa.jsonl`
- `flan.jsonl`
- `sharegpt.jsonl`
- `truthful_qa.jsonl`
- `ultrachat.jsonl`
## 📊 Data Structure
### Config: `default`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `source` | `str` |
| `instruction` | `str` |
| `models` | `list` |
| `completions` | `list` |
| `correct_answers` | `list` |
| `incorrect_answers` | `list` |
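A quick inspection sketch over the columns above; it makes no assumption about the inner structure of the `completions` entries beyond their count:

```python
from datasets import load_dataset

uf = load_dataset("openbmb/UltraFeedback", split="train")
row = uf[0]
print("source:", row["source"])
print("models queried:", row["models"])
print("responses for this prompt:", len(row["completions"]))
print(row["instruction"][:200])
```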
|
togethercomputer/RedPajama-Data-V2
|
togethercomputer
|
task_categories:text-generation, language:en, language:de, language:fr, language:es, language:it, arxiv:2302.03169, arxiv:2302.13971, arxiv:2204.02311, arxiv:2112.06905, arxiv:1910.10683, arxiv:2305.13169, arxiv:2306.01116, arxiv:2112.11446, arxiv:2411.12372, region:us
|
community
|
# Dataset: `togethercomputer/RedPajama-Data-V2`
## 📝 Metadata
- **Author/Owner:** togethercomputer
- **Downloads:** 3207
- **Likes:** 382
- **Tags:** task_categories:text-generation, language:en, language:de, language:fr, language:es, language:it, arxiv:2302.03169, arxiv:2302.13971, arxiv:2204.02311, arxiv:2112.06905, arxiv:1910.10683, arxiv:2305.13169, arxiv:2306.01116, arxiv:2112.11446, arxiv:2411.12372, region:us
- **License:** Not specified
## 📖 Description
```text
RedPajama V2: an Open Dataset for Training Large Language Models...
```
## 📂 File System Sample
- `.gitattributes`
- `.gitignore`
- `README.md`
- `RedPajama-Data-V2.py`
- `_CC_SNAPSHOT_IDS`
- `listings/de-2014-15-head_middle.txt`
- `listings/de-2014-15-tail.txt`
- `listings/de-2014-23-head_middle.txt`
- `listings/de-2014-23-tail.txt`
- `listings/de-2014-35-head_middle.txt`
- `listings/de-2014-35-tail.txt`
- `listings/de-2014-41-head_middle.txt`
- `listings/de-2014-41-tail.txt`
- `listings/de-2014-42-head_middle.txt`
- `listings/de-2014-42-tail.txt`
- ... and more.
## 📊 Data Structure
**Graceful Failure:**
```
Could not inspect the dataset's structure.
This is common for complex datasets that require executing remote code, which is disabled for stability.
Details: Dataset scripts are no longer supported, but found RedPajama-Data-V2.py
```
|
O1-OPEN/OpenO1-SFT
|
O1-OPEN
|
task_categories:text-generation, language:en, language:zh, license:apache-2.0, size_categories:10K<n<100K, format:json, modality:text, library:datasets, library:pandas, library:mlcroissant, library:polars, arxiv:2504.13828, region:us
|
community
|
# Dataset: `O1-OPEN/OpenO1-SFT`
## 📝 Metadata
- **Author/Owner:** O1-OPEN
- **Downloads:** 466
- **Likes:** 382
- **Tags:** task_categories:text-generation, language:en, language:zh, license:apache-2.0, size_categories:10K<n<100K, format:json, modality:text, library:datasets, library:pandas, library:mlcroissant, library:polars, arxiv:2504.13828, region:us
- **License:** Not specified
## 📖 Description
```text
This repository contains the dataset used for fine-tuning a language model using SFT for Chain-of-Thought Activation from the paper Generative AI Act II: Test Time Scaling Drives Cognition Engineering.
Code: https://github.com/GAIR-NLP/cognition-engineering
🎉🎉🎉This repository contains the dataset used for fine-tuning a language model using SFT for Chain-of-Thought Activation.
🌈🌈🌈The dataset is designed to enhance the model's ability to generate coherent and logical reasoning sequences.… See the full description on the dataset page: https://huggingface.co/datasets/O1-OPEN/OpenO1-SFT....
```
## 📂 File System Sample
- `.gitattributes`
- `OpenO1-SFT.jsonl`
- `README.md`
## 📊 Data Structure
### Config: `default`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `instruction` | `str` |
| `output` | `str` |
|
HuggingFaceM4/WebSight
|
HuggingFaceM4
|
language:en, license:cc-by-4.0, size_categories:1M<n<10M, format:parquet, modality:image, modality:text, library:datasets, library:dask, library:mlcroissant, library:polars, arxiv:2403.09029, region:us, code, synthetic
|
community
|
# Dataset: `HuggingFaceM4/WebSight`
## 📝 Metadata
- **Author/Owner:** HuggingFaceM4
- **Downloads:** 16787
- **Likes:** 372
- **Tags:** language:en, license:cc-by-4.0, size_categories:1M<n<10M, format:parquet, modality:image, modality:text, library:datasets, library:dask, library:mlcroissant, library:polars, arxiv:2403.09029, region:us, code, synthetic
- **License:** Not specified
## 📖 Description
```text
Dataset Card for WebSight
Dataset Description
WebSight is a large synthetic dataset containing HTML/CSS codes representing synthetically generated English websites, each accompanied by a corresponding screenshot.
This dataset serves as a valuable resource for tasks such as generating UI codes from a screenshot.
It comes in two versions:
v0.1: Websites are coded with HTML + CSS. They do not include real images.
v0.2: Websites are coded with HTML + Tailwind CSS. They do… See the full description on the dataset page: https://huggingface.co/datasets/HuggingFaceM4/WebSight....
```
## 📂 File System Sample
- `.gitattributes`
- `README.md`
- `data/train-00000-of-00071-eb722b04b83e13b7.parquet`
- `data/train-00001-of-00071-df5cc75986b4e6ff.parquet`
- `data/train-00002-of-00071-9d21c8aebfdd8330.parquet`
- `data/train-00003-of-00071-83f53528b74fb44b.parquet`
- `data/train-00004-of-00071-bfbc2f1f2948d57f.parquet`
- `data/train-00005-of-00071-0c56bff1917d8e38.parquet`
- `data/train-00006-of-00071-fd095993e12f047d.parquet`
- `data/train-00007-of-00071-fc9a17a7ca1d1339.parquet`
- `data/train-00008-of-00071-a5075d54a01fe126.parquet`
- `data/train-00009-of-00071-54f26a2d18f9ff73.parquet`
- `data/train-00010-of-00071-7c89af7fb68feb34.parquet`
- `data/train-00011-of-00071-20632198348587e3.parquet`
- `data/train-00012-of-00071-4f3f6587610b47e1.parquet`
- ... and more.
## 📊 Data Structure
### Config: `v0.2`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `image` | `JpegImageFile` |
| `text` | `str` |
| `llm_generated_idea` | `str` |
### Config: `v0.1`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `image` | `PngImageFile` |
| `text` | `str` |
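A hedged sketch for the `v0.2` config above, where each row pairs a PIL screenshot with the HTML + Tailwind CSS that produced it; the output filename is illustrative:

```python
from datasets import load_dataset

ws = load_dataset("HuggingFaceM4/WebSight", "v0.2", split="train", streaming=True)
example = next(iter(ws))
example["image"].save("website_screenshot.jpg")  # PIL image per the table above
print(example["llm_generated_idea"])
print(example["text"][:300])  # the generated HTML + Tailwind CSS
```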
|
bigcode/the-stack-dedup
|
bigcode
|
task_categories:text-generation, language_creators:crowdsourced, language_creators:expert-generated, multilinguality:multilingual, language:code, license:other, size_categories:100M<n<1B, format:parquet, modality:tabular, modality:text, library:datasets, library:dask, library:mlcroissant, library:polars, arxiv:2211.15533, arxiv:2107.03374, arxiv:2207.14157, region:us
|
community
|
# Dataset: `bigcode/the-stack-dedup`
## 📝 Metadata
- **Author/Owner:** bigcode
- **Downloads:** 6528
- **Likes:** 371
- **Tags:** task_categories:text-generation, language_creators:crowdsourced, language_creators:expert-generated, multilinguality:multilingual, language:code, license:other, size_categories:100M<n<1B, format:parquet, modality:tabular, modality:text, library:datasets, library:dask, library:mlcroissant, library:polars, arxiv:2211.15533, arxiv:2107.03374, arxiv:2207.14157, region:us
- **License:** Not specified
## 📖 Description
```text
Dataset Card for The Stack
Changelog
Release
Description
v1.0
Initial release of the Stack. Included 30 programming languages and 18 permissive licenses. Note: Three included licenses (MPL/EPL/LGPL) are considered weak copyleft licenses. The resulting near-deduplicated dataset is 1.5TB in size.
v1.1
The three copyleft licenses (MPL/EPL/LGPL) were excluded and the list of permissive licenses was extended to 193 licenses in total. The list of programming… See the full description on the dataset page: https://huggingface.co/datasets/bigcode/the-stack-dedup....
```
## 📂 File System Sample
- `.gitattributes`
- `README.md`
- `data/abap/data-00000-of-00001.parquet`
- `data/actionscript/data-00000-of-00002.parquet`
- `data/actionscript/data-00001-of-00002.parquet`
- `data/ada/data-00000-of-00001.parquet`
- `data/agda/data-00000-of-00001.parquet`
- `data/ags-script/data-00000-of-00001.parquet`
- `data/alloy/data-00000-of-00001.parquet`
- `data/ampl/data-00000-of-00001.parquet`
- `data/antlr/data-00000-of-00001.parquet`
- `data/apacheconf/data-00000-of-00001.parquet`
- `data/api-blueprint/data-00000-of-00001.parquet`
- `data/apl/data-00000-of-00001.parquet`
- `data/applescript/data-00000-of-00001.parquet`
- ... and more.
## 📊 Data Structure
**Graceful Failure:**
```
Could not inspect the dataset's structure.
This is common for complex datasets that require executing remote code, which is disabled for stability.
Details: Dataset 'bigcode/the-stack-dedup' is a gated dataset on the Hub. Visit the dataset page at https://huggingface.co/datasets/bigcode/the-stack-dedup to ask for access.
```
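Structure inspection fails only because the dataset is gated, so here is a sketch of how access might look once it has been granted on the Hub. The `data/python` directory is an assumption based on the per-language layout visible in the file listing:
```python
from huggingface_hub import login
from datasets import load_dataset

login()  # or export HF_TOKEN; access must first be requested on the dataset page

# Stream a single language subset; the per-language directory layout follows the
# file listing above ("data/python" itself is an assumed, not verified, path).
ds = load_dataset(
    "bigcode/the-stack-dedup",
    data_dir="data/python",
    split="train",
    streaming=True,
)
print(next(iter(ds)).keys())
```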
|
HuggingFaceTB/smoltalk
|
HuggingFaceTB
|
language:en, size_categories:1M<n<10M, format:parquet, modality:tabular, modality:text, library:datasets, library:dask, library:mlcroissant, library:polars, arxiv:2502.02737, region:us, synthetic
|
community
|
# Dataset: `HuggingFaceTB/smoltalk`
## 📝 Metadata
- **Author/Owner:** HuggingFaceTB
- **Downloads:** 5011
- **Likes:** 369
- **Tags:** language:en, size_categories:1M<n<10M, format:parquet, modality:tabular, modality:text, library:datasets, library:dask, library:mlcroissant, library:polars, arxiv:2502.02737, region:us, synthetic
- **License:** Not specified
## 📖 Description
```text
SmolTalk
Dataset description
This is a synthetic dataset designed for supervised finetuning (SFT) of LLMs. It was used to build the SmolLM2-Instruct family of models and contains 1M samples. More details are in our paper: https://arxiv.org/abs/2502.02737
During the development of SmolLM2, we observed that models finetuned on public SFT datasets underperformed compared to other models with proprietary instruction datasets. To address this gap, we created new synthetic datasets… See the full description on the dataset page: https://huggingface.co/datasets/HuggingFaceTB/smoltalk....
```
## 📂 File System Sample
- `.gitattributes`
- `README.md`
- `data/all/.gitattributes`
- `data/all/test-00000-of-00001.parquet`
- `data/all/train-00000-of-00009.parquet`
- `data/all/train-00001-of-00009.parquet`
- `data/all/train-00002-of-00009.parquet`
- `data/all/train-00003-of-00009.parquet`
- `data/all/train-00004-of-00009.parquet`
- `data/all/train-00005-of-00009.parquet`
- `data/all/train-00006-of-00009.parquet`
- `data/all/train-00007-of-00009.parquet`
- `data/all/train-00008-of-00009.parquet`
- `data/apigen-80k/.gitattributes`
- `data/apigen-80k/test-00000-of-00001.parquet`
- ... and more.
## 📊 Data Structure
### Config: `all`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `messages` | `list` |
| `source` | `str` |
#### Split: `test`
| Column Name | Data Type |
|---|---|
| `messages` | `list` |
| `source` | `str` |
### Config: `smol-magpie-ultra`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `messages` | `list` |
| `category` | `str` |
| `difficulty` | `str` |
| `quality` | `str` |
| `reward_model_score` | `float` |
| `conversation_tokens` | `int` |
#### Split: `test`
| Column Name | Data Type |
|---|---|
| `messages` | `list` |
| `category` | `str` |
| `difficulty` | `str` |
| `quality` | `str` |
| `reward_model_score` | `float` |
| `conversation_tokens` | `int` |
### Config: `smol-constraints`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `messages` | `list` |
#### Split: `test`
| Column Name | Data Type |
|---|---|
| `messages` | `list` |
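Each config is selected by name; `smol-magpie-ultra` carries the extra quality metadata shown above. A minimal sketch (the role/content layout of the `messages` entries is an assumption based on common chat-SFT formats):
```python
from datasets import load_dataset

ds = load_dataset("HuggingFaceTB/smoltalk", "smol-magpie-ultra", split="train")

row = ds[0]
for turn in row["messages"]:   # chat turns, typically {"role": ..., "content": ...}
    print(turn)
print(row["category"], row["difficulty"], row["quality"], row["reward_model_score"])
```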
|
PleIAs/YouTube-Commons
|
PleIAs
|
task_categories:text-generation, language:en, language:fr, language:es, language:pt, language:de, language:ru, license:cc-by-4.0, region:us, conversational
|
community
|
# Dataset: `PleIAs/YouTube-Commons`
## 📝 Metadata
- **Author/Owner:** PleIAs
- **Downloads:** 1857
- **Likes:** 366
- **Tags:** task_categories:text-generation, language:en, language:fr, language:es, language:pt, language:de, language:ru, license:cc-by-4.0, region:us, conversational
- **License:** Not specified
## 📖 Description
```text
📺 YouTube-Commons 📺
YouTube-Commons is a collection of audio transcripts of 2,063,066 videos shared on YouTube under a CC-By license.
Content
The collection comprises 22,709,724 original and automatically translated transcripts from 3,156,703 videos (721,136 individual channels).
In total, this represents nearly 45 billion words (44,811,518,375).
All the videos were shared on YouTube with a CC-BY license: the dataset provides all the necessary provenance information… See the full description on the dataset page: https://huggingface.co/datasets/PleIAs/YouTube-Commons....
```
## 📂 File System Sample
- `.gitattributes`
- `README.md`
- `cctube_0.parquet`
- `cctube_1.parquet`
- `cctube_10.parquet`
- `cctube_100.parquet`
- `cctube_101.parquet`
- `cctube_102.parquet`
- `cctube_103.parquet`
- `cctube_104.parquet`
- `cctube_105.parquet`
- `cctube_106.parquet`
- `cctube_107.parquet`
- `cctube_108.parquet`
- `cctube_109.parquet`
- ... and more.
## 📊 Data Structure
**Graceful Failure:**
```
Could not inspect the dataset's structure.
This is common for complex datasets that require executing remote code, which is disabled for stability.
Details: No (supported) data files found in PleIAs/YouTube-Commons
```
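The automated inspection fails, but the repository is a flat collection of root-level parquet shards, so a single shard can be pulled and examined directly. A sketch (column names are deliberately not asserted here, since the structure could not be inspected):
```python
import pandas as pd
from huggingface_hub import hf_hub_download

# Fetch one shard from the flat parquet layout shown in the file listing.
path = hf_hub_download(
    repo_id="PleIAs/YouTube-Commons",
    filename="cctube_0.parquet",
    repo_type="dataset",
)
df = pd.read_parquet(path)
print(df.columns.tolist())
print(df.head(2))
```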
|
neuralwork/arxiver
|
neuralwork
|
license:cc-by-nc-sa-4.0, size_categories:10K<n<100K, format:parquet, modality:text, library:datasets, library:pandas, library:mlcroissant, library:polars, region:us
|
community
|
# Dataset: `neuralwork/arxiver`
## 📝 Metadata
- **Author/Owner:** neuralwork
- **Downloads:** 539
- **Likes:** 364
- **Tags:** license:cc-by-nc-sa-4.0, size_categories:10K<n<100K, format:parquet, modality:text, library:datasets, library:pandas, library:mlcroissant, library:polars, region:us
- **License:** Not specified
## 📖 Description
```text
Arxiver Dataset
Arxiver consists of 63,357 arXiv papers converted to multi-markdown (.mmd) format. Our dataset includes original arXiv article IDs, titles, abstracts, authors, publication dates, URLs and corresponding markdown files published between January 2023 and October 2023.
We hope our dataset will be useful for various applications such as semantic search, domain specific language modeling, question answering and summarization.
Curation
The Arxiver dataset is… See the full description on the dataset page: https://huggingface.co/datasets/neuralwork/arxiver....
```
## 📂 File System Sample
- `.gitattributes`
- `README.md`
- `data/train.parquet`
## 📊 Data Structure
### Config: `default`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `id` | `str` |
| `title` | `str` |
| `abstract` | `str` |
| `authors` | `str` |
| `published_date` | `str` |
| `link` | `str` |
| `markdown` | `str` |
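With a single parquet shard and string-typed dates, simple filtering works directly on the loaded split. A minimal sketch, assuming `published_date` follows an ISO-like `YYYY-MM-DD` format:
```python
from datasets import load_dataset

ds = load_dataset("neuralwork/arxiver", split="train")

# Keep papers from March 2023; the date format is assumed, not verified.
march = ds.filter(lambda row: row["published_date"].startswith("2023-03"))
print(len(march), "papers")
print(march[0]["title"], march[0]["link"])
```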
|
Anthropic/EconomicIndex
|
Anthropic
|
language:en, license:mit, arxiv:2503.04761, region:us, AI, LLM, Economic Impacts, Anthropic
|
community
|
# Dataset: `Anthropic/EconomicIndex`
## 📝 Metadata
- **Author/Owner:** Anthropic
- **Downloads:** 1687
- **Likes:** 359
- **Tags:** language:en, license:mit, arxiv:2503.04761, region:us, AI, LLM, Economic Impacts, Anthropic
- **License:** Not specified
## 📖 Description
```text
The Anthropic Economic Index
Overview
The Anthropic Economic Index provides insights into how AI is being incorporated into real-world tasks across the modern economy.
Data Releases
This repository contains multiple data releases, each with its own documentation:
2025-09-15 Release: Updated analysis with geographic and first-party API data using Sonnet 4
2025-03-27 Release: Updated analysis with Claude 3.7 Sonnet data and cluster-level insights
2025-02-10… See the full description on the dataset page: https://huggingface.co/datasets/Anthropic/EconomicIndex....
```
## 📂 File System Sample
- `.gitattributes`
- `.gitignore`
- `README.md`
- `release_2025_02_10/README.md`
- `release_2025_02_10/SOC_Structure.csv`
- `release_2025_02_10/automation_vs_augmentation.csv`
- `release_2025_02_10/bls_employment_may_2023.csv`
- `release_2025_02_10/onet_task_mappings.csv`
- `release_2025_02_10/onet_task_statements.csv`
- `release_2025_02_10/plots.ipynb`
- `release_2025_02_10/plots/automation_vs_augmentation.png`
- `release_2025_02_10/plots/occupational_category_distribution.png`
- `release_2025_02_10/plots/occupational_category_distribution_bls.png`
- `release_2025_02_10/plots/occupations_distribution.png`
- `release_2025_02_10/plots/task_distribution.png`
- ... and more.
## 📊 Data Structure
### Config: `release_2025_09_15`
#### Split: `raw_claude_ai`
| Column Name | Data Type |
|---|---|
| `geo_id` | `str` |
| `geography` | `str` |
| `date_start` | `str` |
| `date_end` | `str` |
| `platform_and_product` | `str` |
| `facet` | `str` |
| `level` | `int` |
| `variable` | `str` |
| `cluster_name` | `str` |
| `value` | `float` |
#### Split: `raw_1p_api`
| Column Name | Data Type |
|---|---|
| `geo_id` | `str` |
| `geography` | `str` |
| `date_start` | `str` |
| `date_end` | `str` |
| `platform_and_product` | `str` |
| `facet` | `str` |
| `level` | `int` |
| `variable` | `str` |
| `cluster_name` | `str` |
| `value` | `float` |
#### Split: `enriched_claude_ai`
| Column Name | Data Type |
|---|---|
| `geo_id` | `str` |
| `geography` | `str` |
| `date_start` | `str` |
| `date_end` | `str` |
| `platform_and_product` | `str` |
| `facet` | `str` |
| `level` | `int` |
| `variable` | `str` |
| `cluster_name` | `NoneType` |
| `value` | `float` |
| `geo_name` | `str` |
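Each release is exposed as a config and the individual files surface as named splits. A minimal sketch for the 2025-09-15 release, using the columns listed above:
```python
from datasets import load_dataset

ds = load_dataset(
    "Anthropic/EconomicIndex",
    "release_2025_09_15",
    split="raw_claude_ai",
)

row = ds[0]
print(row["geography"], row["facet"], row["variable"], row["value"])
```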
|
stanfordnlp/imdb
|
stanfordnlp
|
task_categories:text-classification, task_ids:sentiment-classification, annotations_creators:expert-generated, language_creators:expert-generated, multilinguality:monolingual, source_datasets:original, language:en, license:other, size_categories:100K<n<1M, format:parquet, modality:text, library:datasets, library:pandas, library:mlcroissant, library:polars, region:us
|
community
|
# Dataset: `stanfordnlp/imdb`
## 📝 Metadata
- **Author/Owner:** stanfordnlp
- **Downloads:** 178836
- **Likes:** 346
- **Tags:** task_categories:text-classification, task_ids:sentiment-classification, annotations_creators:expert-generated, language_creators:expert-generated, multilinguality:monolingual, source_datasets:original, language:en, license:other, size_categories:100K<n<1M, format:parquet, modality:text, library:datasets, library:pandas, library:mlcroissant, library:polars, region:us
- **License:** Not specified
## 📖 Description
```text
Dataset Card for "imdb"
Dataset Summary
Large Movie Review Dataset.
This is a dataset for binary sentiment classification containing substantially more data than previous benchmark datasets. We provide a set of 25,000 highly polar movie reviews for training, and 25,000 for testing. There is additional unlabeled data for use as well.
Supported Tasks and Leaderboards
More Information Needed
Languages
More Information Needed
Dataset Structure… See the full description on the dataset page: https://huggingface.co/datasets/stanfordnlp/imdb....
```
## 📂 File System Sample
- `.gitattributes`
- `README.md`
- `plain_text/test-00000-of-00001.parquet`
- `plain_text/train-00000-of-00001.parquet`
- `plain_text/unsupervised-00000-of-00001.parquet`
## 📊 Data Structure
### Config: `plain_text`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `text` | `str` |
| `label` | `int` |
#### Split: `test`
| Column Name | Data Type |
|---|---|
| `text` | `str` |
| `label` | `int` |
#### Split: `unsupervised`
| Column Name | Data Type |
|---|---|
| `text` | `str` |
| `label` | `int` |
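The integer `label` column is backed by class-label metadata, which maps ids back to readable names. A minimal sketch (the neg/pos label names are an assumption about the standard IMDB setup):
```python
from datasets import load_dataset

ds = load_dataset("stanfordnlp/imdb", "plain_text", split="train")

# ClassLabel metadata converts the integer label back to its string name.
label_feature = ds.features["label"]
row = ds[0]
print(label_feature.int2str(row["label"]), row["text"][:120])
```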
|
RyokoAI/ShareGPT52K
|
RyokoAI
|
task_categories:text-generation, language:en, language:es, language:de, language:multilingual, license:cc0-1.0, size_categories:10K<n<100K, region:us, conversation, rlhf, chatgpt, gpt-3.5
|
community
|
# Dataset: `RyokoAI/ShareGPT52K`
## 📝 Metadata
- **Author/Owner:** RyokoAI
- **Downloads:** 527
- **Likes:** 346
- **Tags:** task_categories:text-generation, language:en, language:es, language:de, language:multilingual, license:cc0-1.0, size_categories:10K<n<100K, region:us, conversation, rlhf, chatgpt, gpt-3.5
- **License:** Not specified
## 📖 Description
```text
Dataset Card for ShareGPT 90K (formerly ShareGPT52K)
Dataset Summary
This dataset is a collection of approximately 90,000 conversations (up from the original 52,000) scraped via the ShareGPT API before it was shut down.
These conversations include both user prompts and responses from OpenAI's ChatGPT.
This repository now contains the new 90K conversations version. The previous 52K may
be found in the old/ directory.
Supported Tasks and Leaderboards
text-generation
Languages
This dataset is… See the full description on the dataset page: https://huggingface.co/datasets/RyokoAI/ShareGPT52K....
```
## 📂 File System Sample
- `.gitattributes`
- `README.md`
- `old/sg_52k.json`
- `sg_90k_part1.json`
- `sg_90k_part2.json`
## 📊 Data Structure
### Config: `default`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `id` | `str` |
| `conversations` | `list` |
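The data ships as raw JSON rather than parquet, so one of the two 90K part files can be loaded on its own. A sketch, assuming each record carries the `id` and `conversations` fields shown above:
```python
from datasets import load_dataset

# Load only the first of the two 90K JSON parts to keep the download small.
ds = load_dataset(
    "RyokoAI/ShareGPT52K",
    data_files="sg_90k_part1.json",
    split="train",
)
print(ds[0]["id"], len(ds[0]["conversations"]))
```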
|
NousResearch/hermes-function-calling-v1
|
NousResearch
|
task_categories:text-generation, task_categories:question-answering, task_categories:feature-extraction, language:en, license:apache-2.0, size_categories:10K<n<100K, format:json, modality:text, library:datasets, library:pandas, library:mlcroissant, library:polars, region:us
|
community
|
# Dataset: `NousResearch/hermes-function-calling-v1`
## 📝 Metadata
- **Author/Owner:** NousResearch
- **Downloads:** 1464
- **Likes:** 346
- **Tags:** task_categories:text-generation, task_categories:question-answering, task_categories:feature-extraction, language:en, license:apache-2.0, size_categories:10K<n<100K, format:json, modality:text, library:datasets, library:pandas, library:mlcroissant, library:polars, region:us
- **License:** Not specified
## 📖 Description
```text
Hermes Function-Calling V1
This dataset is the compilation of structured output and function calling data used in the Hermes 2 Pro series of models.
This repository contains a structured output dataset with function-calling conversations, json-mode, agentic json-mode and structured extraction samples, designed to train LLM models in performing function calls and returning structured output based on natural language instructions. The dataset features various conversational scenarios… See the full description on the dataset page: https://huggingface.co/datasets/NousResearch/hermes-function-calling-v1....
```
## 📂 File System Sample
- `.gitattributes`
- `README.md`
- `func-calling-singleturn.json`
- `func-calling.json`
- `glaive-function-calling-5k.json`
- `json-mode-agentic.json`
- `json-mode-singleturn.json`
## 📊 Data Structure
### Config: `func_calling_singleturn`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `id` | `str` |
| `conversations` | `list` |
| `category` | `str` |
| `subcategory` | `str` |
| `task` | `str` |
### Config: `func_calling`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `id` | `str` |
| `conversations` | `list` |
| `category` | `str` |
| `subcategory` | `str` |
| `task` | `str` |
### Config: `glaive_func_calling`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `id` | `str` |
| `conversations` | `list` |
| `tools` | `str` |
| `category` | `str` |
| `subcategory` | `NoneType` |
| `task` | `str` |
| `source` | `str` |
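The config names do not match the raw JSON file names, so it can help to list them programmatically before loading. A minimal sketch:
```python
from datasets import get_dataset_config_names, load_dataset

print(get_dataset_config_names("NousResearch/hermes-function-calling-v1"))

ds = load_dataset(
    "NousResearch/hermes-function-calling-v1",
    "glaive_func_calling",
    split="train",
)
row = ds[0]
print(row["tools"][:200])       # string field holding the tool/function specifications
print(row["conversations"][0])
```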
|
nvidia/OpenMathReasoning
|
nvidia
|
task_categories:question-answering, task_categories:text-generation, language:en, license:cc-by-4.0, size_categories:1M<n<10M, format:parquet, modality:text, library:datasets, library:dask, library:mlcroissant, library:polars, arxiv:2504.16891, region:us, math, nvidia
|
community
|
# Dataset: `nvidia/OpenMathReasoning`
## 📝 Metadata
- **Author/Owner:** nvidia
- **Downloads:** 5472
- **Likes:** 345
- **Tags:** task_categories:question-answering, task_categories:text-generation, language:en, license:cc-by-4.0, size_categories:1M<n<10M, format:parquet, modality:text, library:datasets, library:dask, library:mlcroissant, library:polars, arxiv:2504.16891, region:us, math, nvidia
- **License:** Not specified
## 📖 Description
```text
OpenMathReasoning
OpenMathReasoning is a large-scale math reasoning dataset for training large language models (LLMs).
This dataset contains
306K unique mathematical problems sourced from AoPS forums with:
3.2M long chain-of-thought (CoT) solutions
1.7M long tool-integrated reasoning (TIR) solutions
566K samples that select the most promising solution out of many candidates (GenSelect)
Additional 193K problems sourced from AoPS forums (problems only, no solutions)
We used… See the full description on the dataset page: https://huggingface.co/datasets/nvidia/OpenMathReasoning....
```
## 📂 File System Sample
- `.gitattributes`
- `README.md`
- `data/additional_problems-00000-of-00001.parquet`
- `data/cot-00000-of-00144.parquet`
- `data/cot-00001-of-00144.parquet`
- `data/cot-00002-of-00144.parquet`
- `data/cot-00003-of-00144.parquet`
- `data/cot-00004-of-00144.parquet`
- `data/cot-00005-of-00144.parquet`
- `data/cot-00006-of-00144.parquet`
- `data/cot-00007-of-00144.parquet`
- `data/cot-00008-of-00144.parquet`
- `data/cot-00009-of-00144.parquet`
- `data/cot-00010-of-00144.parquet`
- `data/cot-00011-of-00144.parquet`
- ... and more.
## 📊 Data Structure
### Config: `default`
#### Split: `cot`
| Column Name | Data Type |
|---|---|
| `expected_answer` | `str` |
| `problem_type` | `str` |
| `problem_source` | `str` |
| `generation_model` | `str` |
| `pass_rate_72b_tir` | `str` |
| `problem` | `str` |
| `generated_solution` | `str` |
| `inference_mode` | `str` |
| `used_in_kaggle` | `bool` |
#### Split: `tir`
| Column Name | Data Type |
|---|---|
| `expected_answer` | `str` |
| `problem_type` | `str` |
| `problem_source` | `str` |
| `generation_model` | `str` |
| `pass_rate_72b_tir` | `str` |
| `problem` | `str` |
| `generated_solution` | `str` |
| `inference_mode` | `str` |
| `used_in_kaggle` | `bool` |
#### Split: `genselect`
| Column Name | Data Type |
|---|---|
| `expected_answer` | `str` |
| `problem_type` | `str` |
| `problem_source` | `str` |
| `generation_model` | `str` |
| `pass_rate_72b_tir` | `str` |
| `problem` | `str` |
| `generated_solution` | `str` |
| `inference_mode` | `str` |
| `used_in_kaggle` | `bool` |
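With millions of rows per split, streaming is the practical way to sample the data; the split name (`cot`, `tir`, or `genselect`) selects the solution type. A minimal sketch:
```python
from datasets import load_dataset

cot = load_dataset("nvidia/OpenMathReasoning", split="cot", streaming=True)

for i, row in enumerate(cot):
    print(row["problem"][:120])
    print("expected:", row["expected_answer"])
    if i == 2:
        break
```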
|
openai/openai_humaneval
|
openai
|
annotations_creators:expert-generated, language_creators:expert-generated, multilinguality:monolingual, source_datasets:original, language:en, license:mit, size_categories:n<1K, format:parquet, modality:text, library:datasets, library:pandas, library:mlcroissant, library:polars, arxiv:2107.03374, region:us, code-generation
|
community
|
# Dataset: `openai/openai_humaneval`
## 📝 Metadata
- **Author/Owner:** openai
- **Downloads:** 82083
- **Likes:** 344
- **Tags:** annotations_creators:expert-generated, language_creators:expert-generated, multilinguality:monolingual, source_datasets:original, language:en, license:mit, size_categories:n<1K, format:parquet, modality:text, library:datasets, library:pandas, library:mlcroissant, library:polars, arxiv:2107.03374, region:us, code-generation
- **License:** Not specified
## 📖 Description
```text
Dataset Card for OpenAI HumanEval
Dataset Summary
The HumanEval dataset released by OpenAI includes 164 programming problems with a function signature, docstring, body, and several unit tests. They were handwritten to ensure they were not included in the training set of code generation models.
Supported Tasks and Leaderboards
Languages
The programming problems are written in Python and contain English natural text in comments and docstrings.… See the full description on the dataset page: https://huggingface.co/datasets/openai/openai_humaneval....
```
## 📂 File System Sample
- `.gitattributes`
- `README.md`
- `openai_humaneval/test-00000-of-00001.parquet`
## 📊 Data Structure
### Config: `openai_humaneval`
#### Split: `test`
| Column Name | Data Type |
|---|---|
| `task_id` | `str` |
| `prompt` | `str` |
| `canonical_solution` | `str` |
| `test` | `str` |
| `entry_point` | `str` |
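The columns are designed to be concatenated into a self-checking program: the prompt plus a completion plus the unit tests, with `check` called on the entry point. A sketch that runs the reference solution; this follows the conventional evaluation pattern rather than any official script, and a real harness would sandbox the `exec` call:
```python
from datasets import load_dataset

problems = load_dataset("openai/openai_humaneval", split="test")
task = problems[0]

# Assemble prompt + reference solution + tests into one program and run it.
# Never exec untrusted model output outside a sandbox.
program = task["prompt"] + task["canonical_solution"] + "\n" + task["test"]
program += f"\ncheck({task['entry_point']})\n"
exec(program, {})
print(task["task_id"], "passed")
```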
|
codeparrot/github-code
|
codeparrot
|
task_categories:text-generation, task_ids:language-modeling, language_creators:crowdsourced, language_creators:expert-generated, multilinguality:multilingual, language:code, license:other, region:us
|
community
|
# Dataset: `codeparrot/github-code`
## 📝 Metadata
- **Author/Owner:** codeparrot
- **Downloads:** 11189
- **Likes:** 344
- **Tags:** task_categories:text-generation, task_ids:language-modeling, language_creators:crowdsourced, language_creators:expert-generated, multilinguality:multilingual, language:code, license:other, region:us
- **License:** Not specified
## 📖 Description
```text
The GitHub Code dataset consists of 115M code files from GitHub in 32 programming languages with 60 extensions, totalling 1TB of text data. The dataset was created from the GitHub dataset on BigQuery....
```
## 📂 File System Sample
- `.gitattributes`
- `README.md`
- `data/train-00000-of-01126.parquet`
- `data/train-00001-of-01126.parquet`
- `data/train-00002-of-01126.parquet`
- `data/train-00003-of-01126.parquet`
- `data/train-00004-of-01126.parquet`
- `data/train-00005-of-01126.parquet`
- `data/train-00006-of-01126.parquet`
- `data/train-00007-of-01126.parquet`
- `data/train-00008-of-01126.parquet`
- `data/train-00009-of-01126.parquet`
- `data/train-00010-of-01126.parquet`
- `data/train-00011-of-01126.parquet`
- `data/train-00012-of-01126.parquet`
- ... and more.
## 📊 Data Structure
**Graceful Failure:**
```
Could not inspect the dataset's structure.
This is common for complex datasets that require executing remote code, which is disabled for stability.
Details: Dataset scripts are no longer supported, but found github-code.py
```
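The loading script is no longer supported, but the parquet shards under `data/` remain reachable through the Hub filesystem; reading only the parquet footer avoids pulling roughly gigabyte-sized shards. A sketch, assuming the shard naming in the file listing above:
```python
import pyarrow.parquet as pq
from huggingface_hub import HfFileSystem

fs = HfFileSystem()
shards = fs.glob("datasets/codeparrot/github-code/data/train-*.parquet")

# Reading just the footer exposes the schema without downloading a full shard.
with fs.open(shards[0]) as f:
    print(pq.ParquetFile(f).schema_arrow)
```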
|