| repo_id | owner | tags | type | markdown_content |
|---|---|---|---|---|
nvidia/Nemotron-Personas
|
nvidia
|
task_categories:text-generation, language:en, license:cc-by-4.0, size_categories:100K<n<1M, format:parquet, modality:text, library:datasets, library:pandas, library:mlcroissant, library:polars, region:us, synthetic, personas, NVIDIA
|
community
|
# Dataset: `nvidia/Nemotron-Personas`
## Metadata
- **Author/Owner:** nvidia
- **Downloads:** 17319
- **Likes:** 206
- **Tags:** task_categories:text-generation, language:en, license:cc-by-4.0, size_categories:100K<n<1M, format:parquet, modality:text, library:datasets, library:pandas, library:mlcroissant, library:polars, region:us, synthetic, personas, NVIDIA
- **License:** Not specified
## Description
```text
Nemotron-Personas: Synthetic Personas Aligned to Real-World Distributions
A compound AI approach to personas grounded in real-world distributions
Dataset Overview
Nemotron-Personas is an open-source (CC BY 4.0) dataset of synthetically-generated personas grounded in real-world demographic, geographic and personality trait distributions to capture the diversity and richness of the population. It is the first dataset of its kind aligned with statistics for names… See the full description on the dataset page: https://huggingface.co/datasets/nvidia/Nemotron-Personas....
```
## File System Sample
- `.gitattributes`
- `README.md`
- `data/train-00000-of-00001.parquet`
- `images/nemotron_persona_approach.png`
- `images/nemotron_personas_age_group_distribution.png`
- `images/nemotron_personas_education_distribution.png`
- `images/nemotron_personas_education_map.png`
- `images/nemotron_personas_field_stats.png`
- `images/nemotron_personas_marital_status_distribution.png`
- `images/nemotron_personas_occupation_tree_map.png`
- `images/nemotron_personas_professional_personas_clustering.png`
- `images/nemotron_personas_schema.png`
## Data Structure
### Config: `default`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `uuid` | `str` |
| `persona` | `str` |
| `professional_persona` | `str` |
| `sports_persona` | `str` |
| `arts_persona` | `str` |
| `travel_persona` | `str` |
| `culinary_persona` | `str` |
| `skills_and_expertise` | `str` |
| `skills_and_expertise_list` | `str` |
| `hobbies_and_interests` | `str` |
| `hobbies_and_interests_list` | `str` |
| `career_goals_and_ambitions` | `str` |
| `sex` | `str` |
| `age` | `int` |
| `marital_status` | `str` |
| `education_level` | `str` |
| `bachelors_field` | `NoneType` |
| `occupation` | `str` |
| `city` | `str` |
| `state` | `str` |
| `zipcode` | `str` |
| `country` | `str` |
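A minimal sketch of loading and sanity-checking rows against the schema above. It assumes the `datasets` library and Hub access; the download line is commented out so the snippet runs offline, and the sample values are invented for illustration.

```python
# Assumed usage (requires `datasets` and network access):
# from datasets import load_dataset
# ds = load_dataset("nvidia/Nemotron-Personas", split="train")
# record = ds[0]

record = {  # hypothetical row shaped like the column table above
    "uuid": "0000-demo",
    "persona": "A retired teacher who volunteers at the local library.",
    "age": 67,
    "sex": "Female",
    "occupation": "Retired",
    "state": "Ohio",
}

def check_types(rec):
    """Check the fields of `rec` that appear in this (partial) schema."""
    expected = {"uuid": str, "persona": str, "age": int,
                "sex": str, "occupation": str, "state": str}
    return all(isinstance(rec[k], t) for k, t in expected.items() if k in rec)

print(check_types(record))  # True
```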
|
EssentialAI/essential-web-v1.0
|
EssentialAI
|
license:odc-by, size_categories:10B<n<100B, arxiv:2506.14111, region:us
|
community
|
# Dataset: `EssentialAI/essential-web-v1.0`
## Metadata
- **Author/Owner:** EssentialAI
- **Downloads:** 81465
- **Likes:** 204
- **Tags:** license:odc-by, size_categories:10B<n<100B, arxiv:2506.14111, region:us
- **License:** Not specified
## Description
```text
Essential-Web: Complete 24-Trillion Token Dataset
Website | Code | Paper | AWS
Dataset Description
Essential-Web is a 24-trillion-token web dataset with document-level metadata designed for flexible dataset curation. The dataset provides metadata including subject matter classification, web page type, content complexity, and document quality scores for each of the 23.6 billion documents.
Researchers can filter and curate specialized datasets using… See the full description on the dataset page: https://huggingface.co/datasets/EssentialAI/essential-web-v1.0....
```
## File System Sample
- `.gitattributes`
- `README.md`
- `data/crawl=CC-MAIN-2013-20/train-00000-of-04233.parquet`
- `data/crawl=CC-MAIN-2013-20/train-00001-of-04233.parquet`
- `data/crawl=CC-MAIN-2013-20/train-00002-of-04233.parquet`
- `data/crawl=CC-MAIN-2013-20/train-00003-of-04233.parquet`
- `data/crawl=CC-MAIN-2013-20/train-00004-of-04233.parquet`
- `data/crawl=CC-MAIN-2013-20/train-00005-of-04233.parquet`
- `data/crawl=CC-MAIN-2013-20/train-00006-of-04233.parquet`
- `data/crawl=CC-MAIN-2013-20/train-00007-of-04233.parquet`
- `data/crawl=CC-MAIN-2013-20/train-00008-of-04233.parquet`
- `data/crawl=CC-MAIN-2013-20/train-00009-of-04233.parquet`
- `data/crawl=CC-MAIN-2013-20/train-00010-of-04233.parquet`
- `data/crawl=CC-MAIN-2013-20/train-00011-of-04233.parquet`
- `data/crawl=CC-MAIN-2013-20/train-00012-of-04233.parquet`
- ... and more.
## Data Structure
### Config: `default`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `id` | `int` |
| `text` | `str` |
| `metadata` | `dict` |
| `line_start_n_end_idx` | `dict` |
| `quality_signals` | `dict` |
| `eai_taxonomy` | `dict` |
| `pid` | `str` |
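Streaming is the practical way to sample a corpus this large without downloading it, and the dict-typed metadata columns drive the curation workflow the card describes. A hedged sketch: the load call is commented out so this runs offline; the document below is invented, and its inner keys (e.g. `"score"`) are assumptions, not confirmed field names.

```python
# Assumed usage (requires `datasets` and network access):
# from datasets import load_dataset
# ds = load_dataset("EssentialAI/essential-web-v1.0", split="train", streaming=True)
# doc = next(iter(ds))

doc = {  # invented document shaped like the column table above
    "id": 1,
    "text": "Example web document.",
    "metadata": {"url": "https://example.com"},
    "line_start_n_end_idx": {},
    "quality_signals": {"score": 0.9},
    "eai_taxonomy": {"subject": "technology"},
    "pid": "demo-pid",
}

def passes_quality(d, threshold=0.5):
    # Keep documents whose (assumed) quality score clears the threshold.
    return d["quality_signals"].get("score", 0.0) >= threshold

print(passes_quality(doc))  # True
```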
|
Hello-SimpleAI/HC3
|
Hello-SimpleAI
|
task_categories:text-classification, task_categories:question-answering, task_categories:sentence-similarity, task_categories:zero-shot-classification, language:en, language:zh, license:cc-by-sa-4.0, size_categories:10K<n<100K, modality:text, library:datasets, library:mlcroissant, arxiv:2301.07597, region:us, ChatGPT, SimpleAI, Detection, OOD
|
community
|
# Dataset: `Hello-SimpleAI/HC3`
## Metadata
- **Author/Owner:** Hello-SimpleAI
- **Downloads:** 2814
- **Likes:** 203
- **Tags:** task_categories:text-classification, task_categories:question-answering, task_categories:sentence-similarity, task_categories:zero-shot-classification, language:en, language:zh, license:cc-by-sa-4.0, size_categories:10K<n<100K, modality:text, library:datasets, library:mlcroissant, arxiv:2301.07597, region:us, ChatGPT, SimpleAI, Detection, OOD
- **License:** Not specified
## Description
```text
Human ChatGPT Comparison Corpus (HC3)...
```
## File System Sample
- `.gitattributes`
- `HC3.py`
- `README.md`
- `all.jsonl`
- `finance.jsonl`
- `medicine.jsonl`
- `open_qa.jsonl`
- `reddit_eli5.jsonl`
- `wiki_csai.jsonl`
## Data Structure
**Graceful Failure:**
```
Could not inspect the dataset's structure.
This is common for complex datasets that require executing remote code, which is disabled for stability.
Details: Dataset scripts are no longer supported, but found HC3.py
```
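Since script-based loading is no longer supported, the plain `.jsonl` files listed above can be parsed directly. A stdlib sketch; the field names below are illustrative assumptions about the HC3 row format, not confirmed by the card.

```python
import io
import json

# Stand-in for one line of e.g. `finance.jsonl`; field names are assumed.
sample = io.StringIO(
    '{"question": "What is inflation?", '
    '"human_answers": ["Prices rising over time..."], '
    '"chatgpt_answers": ["Inflation is a general increase..."]}\n'
)

rows = [json.loads(line) for line in sample]
print(rows[0]["question"])
```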
|
microsoft/ms_marco
|
microsoft
|
language:en, size_categories:1M<n<10M, format:parquet, modality:text, library:datasets, library:dask, library:mlcroissant, library:polars, arxiv:1611.09268, region:us
|
community
|
# Dataset: `microsoft/ms_marco`
## Metadata
- **Author/Owner:** microsoft
- **Downloads:** 13895
- **Likes:** 201
- **Tags:** language:en, size_categories:1M<n<10M, format:parquet, modality:text, library:datasets, library:dask, library:mlcroissant, library:polars, arxiv:1611.09268, region:us
- **License:** Not specified
## Description
```text
Dataset Card for "ms_marco"
Dataset Summary
Starting with a paper released at NIPS 2016, MS MARCO is a collection of datasets focused on deep learning in search.
The first dataset was a question answering dataset featuring 100,000 real Bing questions and a human generated answer.
Since then we released a 1,000,000 question dataset, a natural language generation dataset, a passage ranking dataset,
keyphrase extraction dataset, crawling dataset, and a conversational search.… See the full description on the dataset page: https://huggingface.co/datasets/microsoft/ms_marco....
```
## File System Sample
- `.gitattributes`
- `README.md`
- `v1.1/test-00000-of-00001.parquet`
- `v1.1/train-00000-of-00001.parquet`
- `v1.1/validation-00000-of-00001.parquet`
- `v2.1/test-00000-of-00001.parquet`
- `v2.1/train-00000-of-00007.parquet`
- `v2.1/train-00001-of-00007.parquet`
- `v2.1/train-00002-of-00007.parquet`
- `v2.1/train-00003-of-00007.parquet`
- `v2.1/train-00004-of-00007.parquet`
- `v2.1/train-00005-of-00007.parquet`
- `v2.1/train-00006-of-00007.parquet`
- `v2.1/validation-00000-of-00001.parquet`
## Data Structure
### Config: `v1.1`
#### Split: `validation`
| Column Name | Data Type |
|---|---|
| `answers` | `list` |
| `passages` | `dict` |
| `query` | `str` |
| `query_id` | `int` |
| `query_type` | `str` |
| `wellFormedAnswers` | `list` |
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `answers` | `list` |
| `passages` | `dict` |
| `query` | `str` |
| `query_id` | `int` |
| `query_type` | `str` |
| `wellFormedAnswers` | `list` |
#### Split: `test`
| Column Name | Data Type |
|---|---|
| `answers` | `list` |
| `passages` | `dict` |
| `query` | `str` |
| `query_id` | `int` |
| `query_type` | `str` |
| `wellFormedAnswers` | `list` |
### Config: `v2.1`
#### Split: `validation`
| Column Name | Data Type |
|---|---|
| `answers` | `list` |
| `passages` | `dict` |
| `query` | `str` |
| `query_id` | `int` |
| `query_type` | `str` |
| `wellFormedAnswers` | `list` |
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `answers` | `list` |
| `passages` | `dict` |
| `query` | `str` |
| `query_id` | `int` |
| `query_type` | `str` |
| `wellFormedAnswers` | `list` |
#### Split: `test`
| Column Name | Data Type |
|---|---|
| `answers` | `list` |
| `passages` | `dict` |
| `query` | `str` |
| `query_id` | `int` |
| `query_type` | `str` |
| `wellFormedAnswers` | `list` |
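The `v1.1` and `v2.1` configs share one schema, so switching versions is just the config argument. The load calls are commented out to keep this offline; the mock row and the `"No Answer Present."` sentinel are assumptions for illustration, not values confirmed by the card.

```python
# Assumed usage (requires `datasets` and network access):
# from datasets import load_dataset
# v1 = load_dataset("microsoft/ms_marco", "v1.1", split="validation")
# v2 = load_dataset("microsoft/ms_marco", "v2.1", split="validation")

row = {  # invented row shaped like the column tables above
    "answers": ["Yes"],
    "passages": {"passage_text": ["..."], "is_selected": [1]},
    "query": "is the sky blue",
    "query_id": 42,
    "query_type": "description",
    "wellFormedAnswers": [],
}

# Common preprocessing: drop queries without a usable answer.
has_answer = bool(row["answers"]) and row["answers"] != ["No Answer Present."]
print(has_answer)  # True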
|
agibot-world/AgiBotWorld-Alpha
|
agibot-world
|
task_categories:robotics, task_categories:other, language:en, size_categories:10M<n<100M, format:webdataset, modality:text, library:datasets, library:webdataset, library:mlcroissant, region:us, real-world, dual-arm, Robotics manipulation
|
community
|
# Dataset: `agibot-world/AgiBotWorld-Alpha`
## Metadata
- **Author/Owner:** agibot-world
- **Downloads:** 10753
- **Likes:** 201
- **Tags:** task_categories:robotics, task_categories:other, language:en, size_categories:10M<n<100M, format:webdataset, modality:text, library:datasets, library:webdataset, library:mlcroissant, region:us, real-world, dual-arm, Robotics manipulation
- **License:** Not specified
## Description
```text
⚠️ Important Notice !!!
Dear Users,
The Alpha Dataset has been updated as follows:
Frame Loss Data Removal: Several episodes with frame loss issues have been removed. For the complete list of removed episode IDs, please refer to this document.
Changes in Episode Count: The updated Alpha Dataset retains the original 36 tasks. The new version has been enriched with additional interactive objects, extending the total duration from 474.12… See the full description on the dataset page: https://huggingface.co/datasets/agibot-world/AgiBotWorld-Alpha....
```
## File System Sample
- `.gitattributes`
- `README.md`
- `observations/327/648642-685032.tar`
- `observations/327/685046-685393.tar`
- `observations/352/648544-655345.tar`
- `observations/352/655372-660414.tar`
- `observations/352/660443-664248.tar`
- `observations/352/664261-667703.tar`
- `observations/352/667707-671444.tar`
- `observations/352/671452-674501.tar`
- `observations/354/650214-671192.tar`
- `observations/354/671195-671825.tar`
- `observations/356/648562-659912.tar`
- `observations/356/659922-662981.tar`
- `observations/357/648751-673293.tar`
- ... and more.
## Data Structure
**Graceful Failure:**
```
Could not inspect the dataset's structure.
This is common for complex datasets that require executing remote code, which is disabled for stability.
Details: Dataset 'agibot-world/AgiBotWorld-Alpha' is a gated dataset on the Hub. Visit the dataset page at https://huggingface.co/datasets/agibot-world/AgiBotWorld-Alpha to ask for access.
```
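Access requires accepting the terms on the dataset page and authenticating (e.g. `huggingface-cli login`) before download. Independently of that, the shard names above appear to follow `observations/<task>/<start>-<end>.tar`; a small parser under that inferred, unconfirmed naming assumption:

```python
def episode_range(path):
    """Parse task id and episode range from a shard path (naming assumed)."""
    task, fname = path.split("/")[-2:]
    start, end = fname.rsplit(".", 1)[0].split("-")
    return task, int(start), int(end)

print(episode_range("observations/352/648544-655345.tar"))
```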
|
bigcode/ta-prompt
|
bigcode
|
language:code, license:apache-2.0, size_categories:n<1K, format:text, modality:text, library:datasets, library:mlcroissant, region:us
|
community
|
# Dataset: `bigcode/ta-prompt`
## Metadata
- **Author/Owner:** bigcode
- **Downloads:** 60
- **Likes:** 200
- **Tags:** language:code, license:apache-2.0, size_categories:n<1K, format:text, modality:text, library:datasets, library:mlcroissant, region:us
- **License:** Not specified
## Description
```text
Dataset summary
This repository is dedicated to prompts used to perform in-context learning with starcoder. As a matter of fact, the model is an
autoregressive language model that is trained on both code and natural language text. It can be turned into an AI-powered technical assistant by prepending conversations to
its 8192-tokens context window.
Format
The prompt is a .txt file which contains multiple conversations between a human and the assistant. Here is the format… See the full description on the dataset page: https://huggingface.co/datasets/bigcode/ta-prompt....
```
## File System Sample
- `.gitattributes`
- `README.md`
- `TA_prompt_v0.txt`
- `TA_prompt_v1.txt`
## Data Structure
### Config: `default`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `text` | `str` |
|
ShareGPT4Video/ShareGPT4Video
|
ShareGPT4Video
|
task_categories:visual-question-answering, task_categories:question-answering, language:en, license:cc-by-nc-4.0, size_categories:10K<n<100K, format:json, modality:image, modality:text, modality:video, library:datasets, library:pandas, library:mlcroissant, library:polars, arxiv:2406.04325, doi:10.57967/hf/2494, region:us
|
community
|
# Dataset: `ShareGPT4Video/ShareGPT4Video`
## Metadata
- **Author/Owner:** ShareGPT4Video
- **Downloads:** 2797
- **Likes:** 200
- **Tags:** task_categories:visual-question-answering, task_categories:question-answering, language:en, license:cc-by-nc-4.0, size_categories:10K<n<100K, format:json, modality:image, modality:text, modality:video, library:datasets, library:pandas, library:mlcroissant, library:polars, arxiv:2406.04325, doi:10.57967/hf/2494, region:us
- **License:** Not specified
## Description
```text
ShareGPT4Video 4.8M Dataset Card
Dataset details
Dataset type:
ShareGPT4Video Captions 4.8M is a set of GPT4-Vision-powered multi-modal captions data of videos.
It is constructed to enhance modality alignment and fine-grained visual concept perception in Large Video-Language Models (LVLMs) and Text-to-Video Models (T2VMs). This advancement aims to bring LVLMs and T2VMs towards the capabilities of GPT4V and Sora.
sharegpt4video_40k.jsonl is generated by GPT4-Vision… See the full description on the dataset page: https://huggingface.co/datasets/ShareGPT4Video/ShareGPT4Video....
```
## File System Sample
- `.gitattributes`
- `README.md`
- `llava_v1_5_mix665k_with_video_chatgpt72k_share4video28k.json`
- `sharegpt4video_40k.jsonl`
- `sharegpt4video_mix181k_vqa-153k_share-cap-28k.json`
- `user_download/download_dataset.py`
- `user_download/download_panda_video.py`
- `user_download/jsonl_to_csv.py`
- `user_download/panda.csv`
- `zip_folder/bdd100k/bdd100k_videos.zip`
- `zip_folder/ego4d/ego4d_videos_1.zip`
- `zip_folder/ego4d/ego4d_videos_2.zip`
- `zip_folder/ego4d/ego4d_videos_3.zip`
- `zip_folder/ego4d/ego4d_videos_4.zip`
- `zip_folder/instruction_data/bdd.zip`
- ... and more.
## Data Structure
### Config: `ShareGPT4Video`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `video_id` | `str` |
| `video_path` | `str` |
| `timestamp` | `list` |
| `keyframe` | `list` |
| `captions` | `list` |
| `zip_folder` | `str` |
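The list-typed columns look parallel (one timestamp and keyframe per caption); under that assumption, pairing them is a plain zip. The row below is invented for illustration.

```python
row = {  # invented row shaped like the column table above
    "video_id": "demo",
    "timestamp": ["00:00", "00:05"],
    "keyframe": ["frame_0.png", "frame_1.png"],
    "captions": ["A man walks in.", "He sits down."],
}

# Pair each caption with its timestamp (parallel-list assumption).
pairs = list(zip(row["timestamp"], row["captions"]))
print(pairs[0])
```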
|
alpindale/two-million-bluesky-posts
|
alpindale
|
language:en, license:apache-2.0, size_categories:1M<n<10M, format:json, modality:text, library:datasets, library:dask, library:mlcroissant, region:us, bluesky
|
community
|
# Dataset: `alpindale/two-million-bluesky-posts`
## Metadata
- **Author/Owner:** alpindale
- **Downloads:** 732
- **Likes:** 200
- **Tags:** language:en, license:apache-2.0, size_categories:1M<n<10M, format:json, modality:text, library:datasets, library:dask, library:mlcroissant, region:us, bluesky
- **License:** Not specified
## Description
```text
2 Million Bluesky Posts
This dataset contains 2 million public posts collected from Bluesky Social's firehose API, intended for machine learning research and experimentation with social media data.
The with-language-predictions config contains the same data as the default config but with language predictions added using the glotlid model.
Dataset Details
Dataset Description
This dataset consists of 2 million public posts from Bluesky Social, collected through the platform's firehose… See the full description on the dataset page: https://huggingface.co/datasets/alpindale/two-million-bluesky-posts....
```
## File System Sample
- `.gitattributes`
- `README.md`
- `posts_20241127_075347.jsonl`
- `posts_20241127_075630.jsonl`
- `posts_20241127_082640.jsonl`
- `posts_20241127_085625.jsonl`
- `posts_20241127_092509.jsonl`
- `posts_20241127_095318.jsonl`
- `posts_20241127_102059.jsonl`
- `posts_20241127_104826.jsonl`
- `posts_20241127_111413.jsonl`
- `posts_20241127_113858.jsonl`
- `posts_20241127_120237.jsonl`
- `posts_20241127_122447.jsonl`
- `posts_20241127_124559.jsonl`
- ... and more.
## Data Structure
### Config: `default`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `text` | `str` |
| `created_at` | `str` |
| `author` | `str` |
| `uri` | `str` |
| `has_images` | `bool` |
| `reply_to` | `NoneType` |
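A sketch of a typical filter over the schema above: keep top-level, text-only posts (`reply_to` is None and no images). The rows are invented stand-ins for the JSONL data.

```python
posts = [  # invented rows shaped like the column table above
    {"text": "hello", "has_images": False, "reply_to": None},
    {"text": "look at this", "has_images": True, "reply_to": None},
    {"text": "replying", "has_images": False, "reply_to": "at://did:plc/..."},
]

top_level_text_only = [
    p for p in posts if p["reply_to"] is None and not p["has_images"]
]
print(len(top_level_text_only))  # 1
```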
|
ylecun/mnist
|
ylecun
|
task_categories:image-classification, task_ids:multi-class-image-classification, annotations_creators:expert-generated, language_creators:found, multilinguality:monolingual, source_datasets:extended|other-nist, language:en, license:mit, size_categories:10K<n<100K, format:parquet, modality:image, library:datasets, library:pandas, library:mlcroissant, library:polars, region:us
|
community
|
# Dataset: `ylecun/mnist`
## Metadata
- **Author/Owner:** ylecun
- **Downloads:** 54847
- **Likes:** 199
- **Tags:** task_categories:image-classification, task_ids:multi-class-image-classification, annotations_creators:expert-generated, language_creators:found, multilinguality:monolingual, source_datasets:extended|other-nist, language:en, license:mit, size_categories:10K<n<100K, format:parquet, modality:image, library:datasets, library:pandas, library:mlcroissant, library:polars, region:us
- **License:** Not specified
## Description
```text
Dataset Card for MNIST
Dataset Summary
The MNIST dataset consists of 70,000 28x28 black-and-white images of handwritten digits extracted from two NIST databases. There are 60,000 images in the training dataset and 10,000 images in the validation dataset, one class per digit so a total of 10 classes, with 7,000 images (6,000 train images and 1,000 test images) per class.
Half of the images were drawn by Census Bureau employees and the other half by high school students… See the full description on the dataset page: https://huggingface.co/datasets/ylecun/mnist....
```
## File System Sample
- `.gitattributes`
- `README.md`
- `mnist/test-00000-of-00001.parquet`
- `mnist/train-00000-of-00001.parquet`
## Data Structure
### Config: `mnist`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `image` | `PngImageFile` |
| `label` | `int` |
#### Split: `test`
| Column Name | Data Type |
|---|---|
| `image` | `PngImageFile` |
| `label` | `int` |
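The usual first step with MNIST is scaling the 0-255 grayscale values to [0, 1]. A tiny nested list stands in for the 28x28 `PngImageFile` so the sketch runs without PIL or a download.

```python
pixels = [[0, 128], [255, 64]]  # stand-in for a 28x28 grayscale image
normalized = [[v / 255.0 for v in row] for row in pixels]
print(normalized[1][0])  # 1.0
```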
|
oscar-corpus/oscar
|
oscar-corpus
|
task_categories:text-generation, task_categories:fill-mask, task_ids:language-modeling, task_ids:masked-language-modeling, annotations_creators:no-annotation, language_creators:found, multilinguality:multilingual, source_datasets:original, language:af, language:als, language:am, language:an, language:ar, language:arz, language:as, language:ast, language:av, language:az, language:azb, language:ba, language:bar, language:bcl, language:be, language:bg, language:bh, language:bn, language:bo, language:bpy, language:br, language:bs, language:bxr, language:ca, language:cbk, language:ce, language:ceb, language:ckb, language:cs, language:cv, language:cy, language:da, language:de, language:diq, language:dsb, language:dv, language:el, language:eml, language:en, language:eo, language:es, language:et, language:eu, language:fa, language:fi, language:fr, language:frr, language:fy, language:ga, language:gd, language:gl, language:gn, language:gom, language:gu, language:he, language:hi, language:hr, language:hsb, language:ht, language:hu, language:hy, language:ia, language:id, language:ie, language:ilo, language:io, language:is, language:it, language:ja, language:jbo, language:jv, language:ka, language:kk, language:km, language:kn, language:ko, language:krc, language:ku, language:kv, language:kw, language:ky, language:la, language:lb, language:lez, language:li, language:lmo, language:lo, language:lrc, language:lt, language:lv, language:mai, language:mg, language:mhr, language:min, language:mk, language:ml, language:mn, language:mr, language:mrj, language:ms, language:mt, language:mwl, language:my, language:myv, language:mzn, language:nah, language:nap, language:nds, language:ne, language:new, language:nl, language:nn, language:no, language:oc, language:or, language:os, language:pa, language:pam, language:pl, language:pms, language:pnb, language:ps, language:pt, language:qu, language:rm, language:ro, language:ru, language:sa, language:sah, language:scn, language:sd, language:sh, 
language:si, language:sk, language:sl, language:so, language:sq, language:sr, language:su, language:sv, language:sw, language:ta, language:te, language:tg, language:th, language:tk, language:tl, language:tr, language:tt, language:tyv, language:ug, language:uk, language:ur, language:uz, language:vec, language:vi, language:vo, language:wa, language:war, language:wuu, language:xal, language:xmf, language:yi, language:yo, language:yue, language:zh, license:cc0-1.0, size_categories:100K<n<1M, arxiv:2010.14571, region:us
|
community
|
# Dataset: `oscar-corpus/oscar`
## Metadata
- **Author/Owner:** oscar-corpus
- **Downloads:** 902
- **Likes:** 199
- **Tags:** task_categories:text-generation, task_categories:fill-mask, task_ids:language-modeling, task_ids:masked-language-modeling, annotations_creators:no-annotation, language_creators:found, multilinguality:multilingual, source_datasets:original, language:af, language:als, language:am, language:an, language:ar, language:arz, language:as, language:ast, language:av, language:az, language:azb, language:ba, language:bar, language:bcl, language:be, language:bg, language:bh, language:bn, language:bo, language:bpy, language:br, language:bs, language:bxr, language:ca, language:cbk, language:ce, language:ceb, language:ckb, language:cs, language:cv, language:cy, language:da, language:de, language:diq, language:dsb, language:dv, language:el, language:eml, language:en, language:eo, language:es, language:et, language:eu, language:fa, language:fi, language:fr, language:frr, language:fy, language:ga, language:gd, language:gl, language:gn, language:gom, language:gu, language:he, language:hi, language:hr, language:hsb, language:ht, language:hu, language:hy, language:ia, language:id, language:ie, language:ilo, language:io, language:is, language:it, language:ja, language:jbo, language:jv, language:ka, language:kk, language:km, language:kn, language:ko, language:krc, language:ku, language:kv, language:kw, language:ky, language:la, language:lb, language:lez, language:li, language:lmo, language:lo, language:lrc, language:lt, language:lv, language:mai, language:mg, language:mhr, language:min, language:mk, language:ml, language:mn, language:mr, language:mrj, language:ms, language:mt, language:mwl, language:my, language:myv, language:mzn, language:nah, language:nap, language:nds, language:ne, language:new, language:nl, language:nn, language:no, language:oc, language:or, language:os, language:pa, language:pam, language:pl, language:pms, language:pnb, language:ps, language:pt, language:qu, language:rm, language:ro, language:ru, language:sa, language:sah, language:scn, language:sd, 
language:sh, language:si, language:sk, language:sl, language:so, language:sq, language:sr, language:su, language:sv, language:sw, language:ta, language:te, language:tg, language:th, language:tk, language:tl, language:tr, language:tt, language:tyv, language:ug, language:uk, language:ur, language:uz, language:vec, language:vi, language:vo, language:wa, language:war, language:wuu, language:xal, language:xmf, language:yi, language:yo, language:yue, language:zh, license:cc0-1.0, size_categories:100K<n<1M, arxiv:2010.14571, region:us
- **License:** Not specified
## Description
```text
The Open Super-large Crawled ALMAnaCH coRpus is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the goclassy architecture....
```
## File System Sample
- `.gitattributes`
- `README.md`
- `generate_dummy.py`
- `oscar.py`
## Data Structure
**Graceful Failure:**
```
Could not inspect the dataset's structure.
This is common for complex datasets that require executing remote code, which is disabled for stability.
Details: Dataset 'oscar-corpus/oscar' is a gated dataset on the Hub. Visit the dataset page at https://huggingface.co/datasets/oscar-corpus/oscar to ask for access.
```
|
a686d380/h-corpus-2023
|
a686d380
|
language:zh, region:us
|
community
|
# Dataset: `a686d380/h-corpus-2023`
## Metadata
- **Author/Owner:** a686d380
- **Downloads:** 136
- **Likes:** 199
- **Tags:** language:zh, region:us
- **License:** Not specified
## Description
```text
Cleaned and deduplicated H-novels (Chinese).
205,028 articles in total; 17.0 GB uncompressed.
For scientific research only...
```
## File System Sample
- `.gitattributes`
- `README.md`
- `h-corpus.zip`
## Data Structure
### Config: `default`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `text` | `str` |
|
common-pile/caselaw_access_project
|
common-pile
|
task_categories:text-generation, language:en, size_categories:1M<n<10M, format:json, modality:text, library:datasets, library:dask, library:mlcroissant, arxiv:2506.05209, region:us
|
community
|
# Dataset: `common-pile/caselaw_access_project`
## Metadata
- **Author/Owner:** common-pile
- **Downloads:** 679
- **Likes:** 199
- **Tags:** task_categories:text-generation, language:en, size_categories:1M<n<10M, format:json, modality:text, library:datasets, library:dask, library:mlcroissant, arxiv:2506.05209, region:us
- **License:** Not specified
## Description
```text
Caselaw Access Project
Description
This dataset contains 6.7 million cases from the Caselaw Access Project and Court Listener.
The Caselaw Access Project consists of nearly 40 million pages of U.S. federal and state court decisions and judges' opinions from the last 365 years.
In addition, Court Listener adds over 900 thousand cases scraped from 479 courts.
The Caselaw Access Project and Court Listener source legal data from a wide variety of resources such as the… See the full description on the dataset page: https://huggingface.co/datasets/common-pile/caselaw_access_project....
```
## File System Sample
- `.gitattributes`
- `README.md`
- `cap_00000.jsonl.gz`
- `cap_00001.jsonl.gz`
- `cap_00002.jsonl.gz`
- `cap_00003.jsonl.gz`
- `cap_00004.jsonl.gz`
- `cap_00005.jsonl.gz`
- `cap_00006.jsonl.gz`
- `cap_00007.jsonl.gz`
- `cap_00008.jsonl.gz`
- `cap_00009.jsonl.gz`
- `cap_00010.jsonl.gz`
- `cap_00011.jsonl.gz`
- `cap_00012.jsonl.gz`
- ... and more.
## Data Structure
### Config: `default`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `id` | `str` |
| `source` | `str` |
| `added` | `str` |
| `created` | `str` |
| `metadata` | `dict` |
| `text` | `str` |
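The shards are gzip-compressed JSONL; a stdlib sketch of the read path, written to a temporary file here so the example runs offline. The record content is invented; the column names follow the table above.

```python
import gzip
import json
import os
import tempfile

record = {"id": "demo", "source": "cap", "added": "", "created": "",
          "metadata": {}, "text": "Opinion text."}
path = os.path.join(tempfile.mkdtemp(), "cap_00000.jsonl.gz")

# Write one JSONL record, gzip-compressed, then read it back.
with gzip.open(path, "wt", encoding="utf-8") as f:
    f.write(json.dumps(record) + "\n")

with gzip.open(path, "rt", encoding="utf-8") as f:
    rows = [json.loads(line) for line in f]
print(rows[0]["id"])  # demo
```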
|
Josephgflowers/Finance-Instruct-500k
|
Josephgflowers
|
license:apache-2.0, size_categories:100K<n<1M, format:json, modality:text, library:datasets, library:pandas, library:mlcroissant, library:polars, region:us, finance, fine-tuning, conversational-ai, named-entity-recognition, sentiment-analysis, topic-classification, rag, multilingual, lightweight-llm
|
community
|
# Dataset: `Josephgflowers/Finance-Instruct-500k`
## Metadata
- **Author/Owner:** Josephgflowers
- **Downloads:** 1300
- **Likes:** 199
- **Tags:** license:apache-2.0, size_categories:100K<n<1M, format:json, modality:text, library:datasets, library:pandas, library:mlcroissant, library:polars, region:us, finance, fine-tuning, conversational-ai, named-entity-recognition, sentiment-analysis, topic-classification, rag, multilingual, lightweight-llm
- **License:** Not specified
## Description
```text
Finance-Instruct-500k Dataset
Overview
Finance-Instruct-500k is a comprehensive and meticulously curated dataset designed to train advanced language models for financial tasks, reasoning, and multi-turn conversations. Combining data from numerous high-quality financial datasets, this corpus provides over 500,000 entries, offering unparalleled depth and versatility for finance-related instruction tuning and fine-tuning.
The dataset includes content tailored for financial… See the full description on the dataset page: https://huggingface.co/datasets/Josephgflowers/Finance-Instruct-500k....
```
## File System Sample
- `.gitattributes`
- `README.md`
- `train.json`
## Data Structure
### Config: `default`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `system` | `str` |
| `user` | `str` |
| `assistant` | `str` |
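The three columns map directly onto chat-style messages; a sketch of flattening one (invented) row for instruction tuning.

```python
row = {  # invented row shaped like the column table above
    "system": "You are a financial analyst.",
    "user": "What is EBITDA?",
    "assistant": "EBITDA is earnings before interest, taxes, depreciation, and amortization.",
}

messages = [{"role": role, "content": row[role]}
            for role in ("system", "user", "assistant")]
print(messages[1])
```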
|
Locutusque/UltraTextbooks
|
Locutusque
|
task_categories:text-generation, language:en, language:code, license:cc-by-sa-4.0, size_categories:1M<n<10M, format:parquet, modality:text, library:datasets, library:dask, library:mlcroissant, library:polars, region:us, math, code, science, synthetic
|
community
|
# Dataset: `Locutusque/UltraTextbooks`
## Metadata
- **Author/Owner:** Locutusque
- **Downloads:** 775
- **Likes:** 196
- **Tags:** task_categories:text-generation, language:en, language:code, license:cc-by-sa-4.0, size_categories:1M<n<10M, format:parquet, modality:text, library:datasets, library:dask, library:mlcroissant, library:polars, region:us, math, code, science, synthetic
- **License:** Not specified
## Description
```text
Dataset Card for "UltraTextbooks"
In the digital expanse, a Tree of Knowledge grows,
Its branches of code and words intertwine in prose.
Synthetic leaves shimmer, human insights compose,
A binary symphony where wisdom forever flows.
Dataset Description
Repository
The "UltraTextbooks" dataset is hosted on the Hugging Face platform.
Purpose
The "UltraTextbooks" dataset is a comprehensive collection of high-quality synthetic and… See the full description on the dataset page: https://huggingface.co/datasets/Locutusque/UltraTextbooks....
```
## File System Sample
- `.gitattributes`
- `README.md`
- `data/train-00000-of-00023-ed119fecaaa10760.parquet`
- `data/train-00001-of-00023-78a8bfc48def01d2.parquet`
- `data/train-00002-of-00023-87434277373b3c35.parquet`
- `data/train-00003-of-00023-a8895eb4c9049443.parquet`
- `data/train-00004-of-00023-30cb60417ba22dee.parquet`
- `data/train-00005-of-00023-c0e51b86be4ed0a8.parquet`
- `data/train-00006-of-00023-3ef934e18e8e28b1.parquet`
- `data/train-00007-of-00023-6db61953a165b41c.parquet`
- `data/train-00008-of-00023-6cba0a7545163a7b.parquet`
- `data/train-00009-of-00023-687dff5bb820868f.parquet`
- `data/train-00010-of-00023-8366b1cc3dff57a1.parquet`
- `data/train-00011-of-00023-d8c5642c367962b8.parquet`
- `data/train-00012-of-00023-90f1d383a82a535a.parquet`
- ... and more.
## Data Structure
### Config: `default`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `text` | `str` |
| `__index_level_0__` | `int` |
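`__index_level_0__` is most likely a pandas index that leaked into the parquet files when the dataset was exported (an assumption, but a common artifact); dropping it leaves only the text column.

```python
row = {"text": "A chapter on linear algebra...", "__index_level_0__": 7}
clean = {k: v for k, v in row.items() if k != "__index_level_0__"}
print(list(clean))  # ['text']
```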
|
Anthropic/persuasion
|
Anthropic
|
language:en, license:cc-by-nc-sa-4.0, size_categories:1K<n<10K, format:csv, modality:text, library:datasets, library:pandas, library:mlcroissant, library:polars, region:us
|
community
|
# Dataset: `Anthropic/persuasion`
## Metadata
- **Author/Owner:** Anthropic
- **Downloads:** 374
- **Likes:** 196
- **Tags:** language:en, license:cc-by-nc-sa-4.0, size_categories:1K<n<10K, format:csv, modality:text, library:datasets, library:pandas, library:mlcroissant, library:polars, region:us
- **License:** Not specified
## Description
```text
Dataset Card for Persuasion Dataset
Dataset Summary
The Persuasion Dataset contains claims and corresponding human-written and model-generated arguments, along with persuasiveness scores.
This dataset was created for research on measuring the persuasiveness of language models, as described in this blog post: Measuring the Persuasiveness of Language Models.
Dataset Description
The dataset consists of a CSV file with the following columns:
worker_id: Id of the… See the full description on the dataset page: https://huggingface.co/datasets/Anthropic/persuasion....
```
## ๐ File System Sample
- `.gitattributes`
- `README.md`
- `persuasion_data.csv`
## ๐ Data Structure
### Config: `default`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `worker_id` | `str` |
| `claim` | `str` |
| `argument` | `str` |
| `source` | `str` |
| `prompt_type` | `str` |
| `rating_initial` | `str` |
| `rating_final` | `str` |
| `persuasiveness_metric` | `int` |
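As a hedged illustration of the schema above: the rating columns arrive as strings and must be parsed before arithmetic. The record below is invented, and treating `persuasiveness_metric` as the final-minus-initial rating shift is an assumption, not something the card states.

```python
# Invented record shaped like the schema above; ratings are stored as strings.
record = {
    "worker_id": "w_001",
    "claim": "Remote work increases productivity.",
    "argument": "Fewer interruptions mean longer stretches of focused work ...",
    "source": "model",
    "prompt_type": "compelling_case",
    "rating_initial": "3",
    "rating_final": "5",
    "persuasiveness_metric": 2,
}

def rating_shift(rec):
    """Change in stated agreement after reading the argument."""
    return int(rec["rating_final"]) - int(rec["rating_initial"])

print(rating_shift(record))  # -> 2
```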
# Dataset: `SirNeural/flan_v2`
## ๐ Metadata
- **Author/Owner:** SirNeural
- **Downloads:** 480
- **Likes:** 195
- **Tags:** license:apache-2.0, size_categories:100M<n<1B, format:json, modality:text, library:datasets, library:dask, library:mlcroissant, arxiv:2301.13688, region:us, flan, flan 2022, flan v2
- **License:** Not specified
## ๐ Description
```text
Dataset Card for Flan V2
Dataset Summary
This is a processed version of the Flan V2 dataset.
I'm not affiliated with the creators; I'm just releasing the files in an easier-to-access format after processing.
The authors of the Flan Collection recommend experimenting with different mixing ratios of tasks to get optimal results downstream.
Setup Instructions
Here are the steps I followed to get everything working:
Build AESLC and WinoGrande datasetsโฆ See the full description on the dataset page: https://huggingface.co/datasets/SirNeural/flan_v2....
```
## ๐ File System Sample
- `.gitattributes`
- `README.md`
- `cot_fs_noopt_train.jsonl.gz`
- `cot_fs_opt_train.jsonl.gz`
- `cot_zs_noopt_train.jsonl.gz`
- `cot_zs_opt_train.jsonl.gz`
- `dialog_fs_noopt_train.jsonl.gz`
- `dialog_fs_opt_train.jsonl.gz`
- `dialog_zs_noopt_train.jsonl.gz`
- `dialog_zs_opt_train.jsonl.gz`
- `flan_fs_noopt_train.jsonl.gz`
- `flan_fs_opt_train_part1.jsonl.gz`
- `flan_fs_opt_train_part2.jsonl.gz`
- `flan_fs_opt_train_part3.jsonl.gz`
- `flan_zs_noopt_train.jsonl.gz`
- ... and more.
## ๐ Data Structure
### Config: `default`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `inputs` | `str` |
| `targets` | `str` |
| `task` | `str` |
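A sketch of concatenating one row into a single training string; the row and the separator are illustrative choices, not something the dataset prescribes.

```python
# Invented row with the three columns listed above.
row = {"inputs": "Translate to French: Hello", "targets": "Bonjour", "task": "wmt16_translate"}

def to_training_text(r, sep="\n### Answer:\n"):
    """Join inputs and targets with a separator for causal-LM fine-tuning."""
    return r["inputs"] + sep + r["targets"]

print(to_training_text(row))
```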
# Dataset: `derek-thomas/ScienceQA`
## ๐ Metadata
- **Author/Owner:** derek-thomas
- **Downloads:** 6402
- **Likes:** 194
- **Tags:** task_categories:multiple-choice, task_categories:question-answering, task_categories:other, task_categories:visual-question-answering, task_categories:text-classification, task_ids:multiple-choice-qa, task_ids:closed-domain-qa, task_ids:open-domain-qa, task_ids:visual-question-answering, task_ids:multi-class-classification, annotations_creators:expert-generated, annotations_creators:found, language_creators:expert-generated, language_creators:found, multilinguality:monolingual, source_datasets:original, language:en, license:cc-by-sa-4.0, size_categories:10K<n<100K, format:parquet, modality:image, modality:text, library:datasets, library:pandas, library:mlcroissant, library:polars, arxiv:2209.09513, region:us, multi-modal-qa, science, chemistry, biology, physics, earth-science, engineering, geography, history, world-history, civics, economics, global-studies, grammar, writing, vocabulary, natural-science, language-science, social-science
- **License:** Not specified
## ๐ Description
```text
Dataset Card Creation Guide
Dataset Summary
Learn to Explain: Multimodal Reasoning via Thought Chains for Science Question Answering
Supported Tasks and Leaderboards
Multi-modal Multiple Choice
Languages
English
Dataset Structure
Data Instances
Explore more samples here.
{'image': Image,
'question': 'Which of these states is farthest north?',
'choices': ['West Virginia', 'Louisiana', 'Arizona', 'Oklahoma'],
'answer': 0โฆ See the full description on the dataset page: https://huggingface.co/datasets/derek-thomas/ScienceQA....
```
## ๐ File System Sample
- `.gitattributes`
- `.gitignore`
- `README.md`
- `data/test-00000-of-00001-f0e719df791966ff.parquet`
- `data/train-00000-of-00001-1028f23e353fbe3e.parquet`
- `data/validation-00000-of-00001-6c7328ff6c84284c.parquet`
- `tutorial/ScienceQA.py`
- `tutorial/create_dataset.ipynb`
- `tutorial/download.sh`
## ๐ Data Structure
### Config: `default`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `image` | `PngImageFile` |
| `question` | `str` |
| `choices` | `list` |
| `answer` | `int` |
| `hint` | `str` |
| `task` | `str` |
| `grade` | `str` |
| `subject` | `str` |
| `topic` | `str` |
| `category` | `str` |
| `skill` | `str` |
| `lecture` | `str` |
| `solution` | `str` |
#### Split: `validation`
| Column Name | Data Type |
|---|---|
| `image` | `NoneType` |
| `question` | `str` |
| `choices` | `list` |
| `answer` | `int` |
| `hint` | `str` |
| `task` | `str` |
| `grade` | `str` |
| `subject` | `str` |
| `topic` | `str` |
| `category` | `str` |
| `skill` | `str` |
| `lecture` | `str` |
| `solution` | `str` |
#### Split: `test`
| Column Name | Data Type |
|---|---|
| `image` | `NoneType` |
| `question` | `str` |
| `choices` | `list` |
| `answer` | `int` |
| `hint` | `str` |
| `task` | `str` |
| `grade` | `str` |
| `subject` | `str` |
| `topic` | `str` |
| `category` | `str` |
| `skill` | `str` |
| `lecture` | `str` |
| `solution` | `str` |
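Using the sample record quoted in the description above, resolving the integer `answer` to its choice text is a one-liner; a minimal sketch:

```python
# Sample record from the dataset description (image field omitted here).
sample = {
    "question": "Which of these states is farthest north?",
    "choices": ["West Virginia", "Louisiana", "Arizona", "Oklahoma"],
    "answer": 0,
}

def answer_text(rec):
    """Map the answer index onto the choice strings."""
    return rec["choices"][rec["answer"]]

print(answer_text(sample))  # -> West Virginia
```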
# Dataset: `yizhongw/self_instruct`
## ๐ Metadata
- **Author/Owner:** yizhongw
- **Downloads:** 232
- **Likes:** 194
- **Tags:** license:apache-2.0, size_categories:100K<n<1M, modality:text, library:datasets, library:mlcroissant, arxiv:2212.10560, arxiv:2204.07705, region:us
- **License:** Not specified
## ๐ Description
```text
Self-Instruct is a dataset that contains 52k instructions, paired with 82K instance inputs and outputs. This instruction data can be used to conduct instruction tuning for language models, making them follow instructions better....
```
## ๐ File System Sample
- `.gitattributes`
- `README.md`
- `self_instruct.py`
## ๐ Data Structure
**Graceful Failure:**
```
Could not inspect the dataset's structure.
This is common for complex datasets that require executing remote code, which is disabled for stability.
Details: Dataset scripts are no longer supported, but found self_instruct.py
```
# Dataset: `openbmb/RLAIF-V-Dataset`
## ๐ Metadata
- **Author/Owner:** openbmb
- **Downloads:** 1168
- **Likes:** 194
- **Tags:** task_categories:image-text-to-text, task_categories:visual-question-answering, task_categories:any-to-any, language:en, license:cc-by-nc-4.0, size_categories:10K<n<100K, arxiv:2405.17220, arxiv:2509.18154, arxiv:2312.00849, region:us, multimodal, feedback, preference-alignment, mllm
- **License:** Not specified
## ๐ Description
```text
Dataset Card for RLAIF-V-Dataset
This dataset was introduced in RLAIF-V: Open-Source AI Feedback Leads to Super GPT-4V Trustworthiness.
GitHub
This dataset was also used in MiniCPM-V 4.5: Cooking Efficient MLLMs via Architecture, Data, and Training Recipe
News:
[2025.09.18] ๐ Our data is used in the powerful MiniCPM-V 4.5 model, which represents a state-of-the-art end-side MLLM achieving GPT-4o level performance!
[2025.03.01] ๐ RLAIF-V is accepted by CVPR 2025!โฆ See the full description on the dataset page: https://huggingface.co/datasets/openbmb/RLAIF-V-Dataset....
```
## ๐ File System Sample
- `.gitattributes`
- `README.md`
- `RLAIF-V-Dataset_000.parquet`
- `RLAIF-V-Dataset_001.parquet`
- `RLAIF-V-Dataset_002.parquet`
- `RLAIF-V-Dataset_003.parquet`
- `RLAIF-V-Dataset_004.parquet`
- `RLAIF-V-Dataset_005.parquet`
- `RLAIF-V-Dataset_006.parquet`
- `RLAIF-V-Dataset_007.parquet`
- `RLAIF-V-Dataset_008.parquet`
- `RLAIF-V-Dataset_009.parquet`
- `RLAIF-V-Dataset_010.parquet`
- `RLAIF-V-Dataset_011.parquet`
- `RLAIF-V-Dataset_012.parquet`
- ... and more.
## ๐ Data Structure
### Config: `default`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `ds_name` | `str` |
| `image` | `JpegImageFile` |
| `question` | `str` |
| `chosen` | `str` |
| `rejected` | `str` |
| `origin_dataset` | `str` |
| `origin_split` | `str` |
| `idx` | `str` |
| `image_path` | `str` |
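The `chosen`/`rejected` columns are natural inputs to preference-tuning methods such as DPO; a sketch with an invented row (the image column is omitted here):

```python
# Invented row with the preference-relevant fields from the schema above.
row = {
    "question": "What is shown in the image?",
    "chosen": "A dog sitting on a red couch.",
    "rejected": "A cat sleeping on a blue chair.",
}

def to_preference_pair(r):
    """Shape one row into the (prompt, chosen, rejected) triple DPO-style trainers expect."""
    return {"prompt": r["question"], "chosen": r["chosen"], "rejected": r["rejected"]}

pair = to_preference_pair(row)
print(pair["prompt"])
```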
# Dataset: `HuggingFaceH4/MATH-500`
## ๐ Metadata
- **Author/Owner:** HuggingFaceH4
- **Downloads:** 55781
- **Likes:** 193
- **Tags:** task_categories:text-generation, language:en, size_categories:n<1K, format:json, modality:text, library:datasets, library:pandas, library:mlcroissant, library:polars, region:us
- **License:** Not specified
## ๐ Description
```text
Dataset Card for MATH-500
This dataset contains a subset of 500 problems from the MATH benchmark that OpenAI created in their Let's Verify Step by Step paper. See their GitHub repo for the source file: https://github.com/openai/prm800k/tree/main?tab=readme-ov-file#math-splits...
```
## ๐ File System Sample
- `.gitattributes`
- `README.md`
- `test.jsonl`
## ๐ Data Structure
### Config: `default`
#### Split: `test`
| Column Name | Data Type |
|---|---|
| `problem` | `str` |
| `solution` | `str` |
| `answer` | `str` |
| `subject` | `str` |
| `level` | `int` |
| `unique_id` | `str` |
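A naive exact-match scorer over the `answer` column might look like the sketch below; real MATH evaluations normalize LaTeX far more aggressively than this.

```python
def is_correct(predicted: str, reference: str) -> bool:
    """Whitespace-insensitive exact match -- a deliberately minimal scorer."""
    norm = lambda s: "".join(s.split())
    return norm(predicted) == norm(reference)

print(is_correct(" 3/4 ", "3/4"))  # -> True
print(is_correct("0.75", "3/4"))   # -> False
```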
# Dataset: `deepmind/code_contests`
## ๐ Metadata
- **Author/Owner:** deepmind
- **Downloads:** 4693
- **Likes:** 192
- **Tags:** task_categories:translation, annotations_creators:found, language_creators:found, multilinguality:monolingual, source_datasets:original, language:en, license:cc-by-4.0, size_categories:1K<n<10K, format:parquet, modality:tabular, modality:text, library:datasets, library:dask, library:mlcroissant, library:polars, arxiv:2203.07814, arxiv:2105.12655, region:us
- **License:** Not specified
## ๐ Description
```text
Dataset Card for CodeContests
Dataset Summary
CodeContests is a competitive programming dataset for machine learning; it was
used when training AlphaCode.
It consists of programming problems from a variety of sources:

Site        | URL                        | Source
Aizu        | https://judge.u-aizu.ac.jp | CodeNet
AtCoder     | https://atcoder.jp         | CodeNet
CodeChef    | https://www.codechef.com   | description2code
Codeforces  | https://codeforces.com     | description2code and Codeforces
HackerEarthโฆ See the full description on the dataset page: https://huggingface.co/datasets/deepmind/code_contests....
```
## ๐ File System Sample
- `.gitattributes`
- `README.md`
- `data/test-00000-of-00001-9c49eeff30aacaa8.parquet`
- `data/train-00000-of-00039-e991a271dbfa9925.parquet`
- `data/train-00001-of-00039-e092fe56fda18715.parquet`
- `data/train-00002-of-00039-9cea23812e920e41.parquet`
- `data/train-00003-of-00039-e3822fccad6e083a.parquet`
- `data/train-00004-of-00039-cefe355b4667b27e.parquet`
- `data/train-00005-of-00039-b7580d2d846c2136.parquet`
- `data/train-00006-of-00039-65184bb9f7d61fde.parquet`
- `data/train-00007-of-00039-05785de21e8b8429.parquet`
- `data/train-00008-of-00039-7246e6b7423b404f.parquet`
- `data/train-00009-of-00039-b8c920f6629b57b2.parquet`
- `data/train-00010-of-00039-6de28ba20654f69b.parquet`
- `data/train-00011-of-00039-5de236be5188959d.parquet`
- ... and more.
## ๐ Data Structure
### Config: `default`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `name` | `str` |
| `description` | `str` |
| `public_tests` | `dict` |
| `private_tests` | `dict` |
| `generated_tests` | `dict` |
| `source` | `int` |
| `difficulty` | `int` |
| `solutions` | `dict` |
| `incorrect_solutions` | `dict` |
| `cf_contest_id` | `int` |
| `cf_index` | `str` |
| `cf_points` | `float` |
| `cf_rating` | `int` |
| `cf_tags` | `list` |
| `is_description_translated` | `bool` |
| `untranslated_description` | `str` |
| `time_limit` | `NoneType` |
| `memory_limit_bytes` | `int` |
| `input_file` | `str` |
| `output_file` | `str` |
#### Split: `test`
| Column Name | Data Type |
|---|---|
| `name` | `str` |
| `description` | `str` |
| `public_tests` | `dict` |
| `private_tests` | `dict` |
| `generated_tests` | `dict` |
| `source` | `int` |
| `difficulty` | `int` |
| `solutions` | `dict` |
| `incorrect_solutions` | `dict` |
| `cf_contest_id` | `int` |
| `cf_index` | `str` |
| `cf_points` | `float` |
| `cf_rating` | `int` |
| `cf_tags` | `list` |
| `is_description_translated` | `bool` |
| `untranslated_description` | `str` |
| `time_limit` | `dict` |
| `memory_limit_bytes` | `int` |
| `input_file` | `str` |
| `output_file` | `str` |
#### Split: `valid`
| Column Name | Data Type |
|---|---|
| `name` | `str` |
| `description` | `str` |
| `public_tests` | `dict` |
| `private_tests` | `dict` |
| `generated_tests` | `dict` |
| `source` | `int` |
| `difficulty` | `int` |
| `solutions` | `dict` |
| `incorrect_solutions` | `dict` |
| `cf_contest_id` | `int` |
| `cf_index` | `str` |
| `cf_points` | `float` |
| `cf_rating` | `int` |
| `cf_tags` | `list` |
| `is_description_translated` | `bool` |
| `untranslated_description` | `str` |
| `time_limit` | `dict` |
| `memory_limit_bytes` | `int` |
| `input_file` | `str` |
| `output_file` | `str` |
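The test columns (`public_tests` and friends) are dicts; assuming the usual CodeContests layout of parallel `input`/`output` string lists, pairing them yields runnable (stdin, expected-stdout) cases. The problem and solution below are invented:

```python
# Assumed layout: parallel lists of stdin strings and expected stdout strings.
public_tests = {"input": ["1 2\n", "3 4\n"], "output": ["3\n", "7\n"]}

def iter_cases(tests):
    """Pair each stdin string with its expected stdout string."""
    return list(zip(tests["input"], tests["output"]))

def solve(stdin: str) -> str:
    """Toy solution for a hypothetical A+B problem."""
    a, b = map(int, stdin.split())
    return f"{a + b}\n"

print(all(solve(i) == o for i, o in iter_cases(public_tests)))  # -> True
```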
# Dataset: `WizardLMTeam/WizardLM_evol_instruct_70k`
## ๐ Metadata
- **Author/Owner:** WizardLMTeam
- **Downloads:** 510
- **Likes:** 191
- **Tags:** license:mit, size_categories:10K<n<100K, format:json, modality:text, library:datasets, library:pandas, library:mlcroissant, library:polars, arxiv:2308.09583, arxiv:2304.12244, arxiv:2306.08568, region:us
- **License:** Not specified
## ๐ Description
```text
This is the training data of WizardLM.
News
๐ฅ ๐ฅ ๐ฅ [08/11/2023] We release WizardMath Models.
๐ฅ Our WizardMath-70B-V1.0 model slightly outperforms some closed-source LLMs on the GSM8K, including ChatGPT 3.5, Claude Instant 1 and PaLM 2 540B.
๐ฅ Our WizardMath-70B-V1.0 model achieves 81.6 pass@1 on the GSM8k Benchmarks, which is 24.8 points higher than the SOTA open-source LLM.
๐ฅ Our WizardMath-70B-V1.0 model achieves 22.7 pass@1 on the MATH Benchmarks, which is 9.2 pointsโฆ See the full description on the dataset page: https://huggingface.co/datasets/WizardLMTeam/WizardLM_evol_instruct_70k....
```
## ๐ File System Sample
- `.gitattributes`
- `README.md`
- `alpaca_evol_instruct_70k.json`
## ๐ Data Structure
### Config: `default`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `output` | `str` |
| `instruction` | `str` |
# Dataset: `UCSC-VLAA/Recap-DataComp-1B`
## ๐ Metadata
- **Author/Owner:** UCSC-VLAA
- **Downloads:** 12050
- **Likes:** 191
- **Tags:** task_categories:zero-shot-classification, task_categories:text-retrieval, task_categories:image-to-text, task_categories:text-to-image, license:cc-by-4.0, size_categories:1B<n<10B, format:parquet, modality:image, modality:tabular, modality:text, library:datasets, library:dask, library:mlcroissant, library:polars, arxiv:2406.08478, region:us
- **License:** Not specified
## ๐ Description
```text
Dataset Card for Recap-DataComp-1B
Recap-DataComp-1B is a large-scale image-text dataset that has been recaptioned using an advanced LLaVA-1.5-LLaMA3-8B model to enhance the alignment and detail of textual descriptions.
Dataset Details
Dataset Description
Our paper aims to bridge this community effort, leveraging the powerful and open-sourced LLaMA-3, a GPT-4 level LLM.
Our recaptioning pipeline is simple: first, we fine-tune a LLaMA-3-8B powered LLaVA-1.5โฆ See the full description on the dataset page: https://huggingface.co/datasets/UCSC-VLAA/Recap-DataComp-1B....
```
## ๐ File System Sample
- `.gitattributes`
- `README.md`
- `data/preview_data/preview-00000-of-00001.parquet`
- `data/train_data/train-00000-of-04627.parquet`
- `data/train_data/train-00001-of-04627.parquet`
- `data/train_data/train-00002-of-04627.parquet`
- `data/train_data/train-00003-of-04627.parquet`
- `data/train_data/train-00004-of-04627.parquet`
- `data/train_data/train-00005-of-04627.parquet`
- `data/train_data/train-00006-of-04627.parquet`
- `data/train_data/train-00007-of-04627.parquet`
- `data/train_data/train-00008-of-04627.parquet`
- `data/train_data/train-00009-of-04627.parquet`
- `data/train_data/train-00010-of-04627.parquet`
- `data/train_data/train-00011-of-04627.parquet`
- ... and more.
## ๐ Data Structure
### Config: `condition_diverse_topk`
#### Split: `preview`
| Column Name | Data Type |
|---|---|
| `url` | `str` |
| `re_caption` | `str` |
| `org_caption` | `str` |
| `sha256` | `str` |
| `key` | `str` |
| `re_clip_score` | `float` |
| `org_clip_score` | `float` |
| `re_length` | `int` |
| `org_length` | `int` |
| `re_gpt4v_score` | `int` |
| `org_gpt4v_score` | `int` |
| `re_caption_condition_diverse_topk` | `str` |
| `re_condition_length` | `int` |
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `url` | `str` |
| `re_caption` | `str` |
| `org_caption` | `str` |
| `sha256` | `str` |
| `key` | `str` |
| `re_clip_score` | `float` |
| `org_clip_score` | `float` |
| `re_length` | `int` |
| `org_length` | `int` |
| `re_gpt4v_score` | `int` |
| `org_gpt4v_score` | `int` |
| `re_caption_condition_diverse_topk` | `str` |
| `re_condition_length` | `int` |
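Since each row carries CLIP scores for both the recaption and the original caption, a simple quality filter keeps rows where the recaption scores at least as well as the original; the rows and margin below are illustrative.

```python
# Invented rows with the two CLIP-score columns from the schema above.
rows = [
    {"key": "a", "re_clip_score": 0.31, "org_clip_score": 0.27},
    {"key": "b", "re_clip_score": 0.22, "org_clip_score": 0.29},
]

def improved_recaptions(records, margin=0.0):
    """Keys of rows whose recaption beats the original caption by >= margin."""
    return [r["key"] for r in records
            if r["re_clip_score"] - r["org_clip_score"] >= margin]

print(improved_recaptions(rows))  # -> ['a']
```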
### Config: `default`
#### Split: `preview`
**Graceful Failure:**
```
Could not inspect the dataset's structure.
This is common for complex datasets that require executing remote code, which is disabled for stability.
Details: Couldn't cast
url: string
re_caption: string
org_caption: string
sha256: string
key: string
re_clip_score: double
org_clip_score: double
re_length: int64
org_length: int64
re_gpt4v_score: int64
org_gpt4v_score: int64
re_caption_condition_diverse_topk: string
re_condition_length: int64
-- schema metadata --
huggingface: '{"info": {"features": {"url": {"dtype": "string", "_type": ' + 681
to
{'url': Value('string'), 're_caption': Value('string'), 'org_caption': Value('string'), 'sha256': Value('string'), 'key': Value('string'), 're_clip_score': Value('float64'), 'org_clip_score': Value('float64'), 're_length': Value('int64'), 'org_length': Value('int64'), 're_gpt4v_score': Value('int64'), 'org_gpt4v_score': Value('int64')}
because column names don't match
```
# Dataset: `zai-org/LongWriter-6k`
## ๐ Metadata
- **Author/Owner:** zai-org
- **Downloads:** 2202
- **Likes:** 191
- **Tags:** task_categories:text-generation, language:en, language:zh, license:apache-2.0, size_categories:1K<n<10K, format:json, modality:text, library:datasets, library:pandas, library:mlcroissant, library:polars, arxiv:2408.07055, region:us, Long Context, sft, writing
- **License:** Not specified
## ๐ Description
```text
LongWriter-6k
๐ค [LongWriter Dataset] โข ๐ป [Github Repo] โข ๐ [LongWriter Paper]
The LongWriter-6k dataset contains 6,000 SFT samples with ultra-long outputs ranging from 2k to 32k words in length (both English and Chinese). The data can support training LLMs to extend their maximum output window size to 10,000+ words.
All Models
We open-sourced the following list of models trained on LongWriter-6k:
Model
Huggingface Repo
Description
LongWriter-glm4-9b
๐คโฆ See the full description on the dataset page: https://huggingface.co/datasets/zai-org/LongWriter-6k....
```
## ๐ File System Sample
- `.gitattributes`
- `README.md`
- `long.jsonl`
## ๐ Data Structure
### Config: `default`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `messages` | `list` |
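Each row is just a `messages` list; assuming the usual role/content chat format, checking whether a sample reaches the advertised output lengths reduces to a word count (the example row is invented).

```python
# Invented row in the assumed {"role", "content"} chat format.
example = {
    "messages": [
        {"role": "user", "content": "Write a long essay on the history of tea."},
        {"role": "assistant", "content": "Tea has shaped trade and culture. " * 3},
    ]
}

def assistant_word_count(rec):
    """Total words across all assistant turns in one row."""
    return sum(len(m["content"].split())
               for m in rec["messages"] if m["role"] == "assistant")

print(assistant_word_count(example))  # -> 18
```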
# Dataset: `AI4Math/MathVista`
## ๐ Metadata
- **Author/Owner:** AI4Math
- **Downloads:** 12039
- **Likes:** 190
- **Tags:** task_categories:multiple-choice, task_categories:question-answering, task_categories:visual-question-answering, task_categories:text-classification, task_ids:multiple-choice-qa, task_ids:closed-domain-qa, task_ids:open-domain-qa, task_ids:visual-question-answering, task_ids:multi-class-classification, annotations_creators:expert-generated, annotations_creators:found, language_creators:expert-generated, language_creators:found, multilinguality:monolingual, source_datasets:original, language:en, language:zh, language:fa, license:cc-by-sa-4.0, size_categories:1K<n<10K, format:parquet, modality:image, modality:text, library:datasets, library:dask, library:mlcroissant, library:polars, arxiv:2310.02255, region:us, multi-modal-qa, math-qa, figure-qa, geometry-qa, math-word-problem, textbook-qa, vqa, arithmetic-reasoning, statistical-reasoning, algebraic-reasoning, geometry-reasoning, numeric-common-sense, scientific-reasoning, logical-reasoning, geometry-diagram, synthetic-scene, chart, plot, scientific-figure, table, function-plot, abstract-scene, puzzle-test, document-image, medical-image, mathematics, science, chemistry, biology, physics, engineering, natural-science
- **License:** Not specified
## ๐ Description
```text
Dataset Card for MathVista
Dataset Description
MathVista is a consolidated Mathematical reasoning benchmark within Visual contexts. It consists of three newly created datasets, IQTest, FunctionQA, and PaperQA, which address the missing visual domains and are tailored to evaluate logicalโฆ See the full description on the dataset page: https://huggingface.co/datasets/AI4Math/MathVista....
```
## ๐ File System Sample
- `.gitattributes`
- `README.md`
- `annot_testmini.json`
- `data/test-00000-of-00002-6b81bd7f7e2065e6.parquet`
- `data/test-00001-of-00002-6a611c71596db30f.parquet`
- `data/testmini-00000-of-00001-725687bf7a18d64b.parquet`
- `images.zip`
- `source.json`
## ๐ Data Structure
### Config: `default`
#### Split: `testmini`
| Column Name | Data Type |
|---|---|
| `pid` | `str` |
| `question` | `str` |
| `image` | `str` |
| `decoded_image` | `PngImageFile` |
| `choices` | `NoneType` |
| `unit` | `NoneType` |
| `precision` | `float` |
| `answer` | `str` |
| `question_type` | `str` |
| `answer_type` | `str` |
| `metadata` | `dict` |
| `query` | `str` |
#### Split: `test`
| Column Name | Data Type |
|---|---|
| `pid` | `str` |
| `question` | `str` |
| `image` | `str` |
| `decoded_image` | `PngImageFile` |
| `choices` | `NoneType` |
| `unit` | `NoneType` |
| `precision` | `NoneType` |
| `answer` | `str` |
| `question_type` | `str` |
| `answer_type` | `str` |
| `metadata` | `dict` |
| `query` | `str` |
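Scoring differs by `answer_type`: float answers should be rounded to the row's `precision` before comparison, while text answers can be compared directly. A hedged sketch (the records below are invented):

```python
def score(rec, prediction: str) -> bool:
    """Compare a prediction against `answer`, honoring float precision."""
    if rec["answer_type"] == "float":
        digits = int(rec["precision"]) if rec["precision"] is not None else 2
        return round(float(prediction), digits) == round(float(rec["answer"]), digits)
    return prediction.strip() == rec["answer"].strip()

print(score({"answer": "1.46", "answer_type": "float", "precision": 2.0}, "1.460"))  # -> True
print(score({"answer": "B", "answer_type": "text", "precision": None}, " B "))       # -> True
```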
# Dataset: `GAIR/MathPile`
## ๐ Metadata
- **Author/Owner:** GAIR
- **Downloads:** 145
- **Likes:** 188
- **Tags:** language:en, license:cc-by-nc-sa-4.0, size_categories:1B<n<10B, library:mlcroissant, arxiv:2312.17120, region:us, croissant
- **License:** Not specified
## ๐ Description
```text
๐ฅUpdate:
[2024/01/06] We release the commercial-use version of MathPile, namely MathPile_Commercial.
[2024/01/06] We release the new version (v0.2, cleaner version) of MathPile. It has been updated to the main branch (also the v0.2 branch). The main updates are as follows:
fixed a problem with the display of mathematical formulas in the Wikipedia subset, which was caused by the HTML conversion to markdown;
fixed unclosed caption parentheses in the image environment in arXiv and macroโฆ See the full description on the dataset page: https://huggingface.co/datasets/GAIR/MathPile....
```
## ๐ File System Sample
- `.gitattributes`
- `README.md`
- `imgs/mathpile-features.png`
- `imgs/mathpile-overview.png`
- `train/arXiv/math_arXiv_v0.2_chunk_1.jsonl.gz`
- `train/arXiv/math_arXiv_v0.2_chunk_2.jsonl.gz`
- `train/arXiv/math_arXiv_v0.2_chunk_3.jsonl.gz`
- `train/arXiv/math_arXiv_v0.2_chunk_4.jsonl.gz`
- `train/commoncrawl/C4_math_docs_chunk_0.jsonl.gz`
- `train/commoncrawl/CC_math_docs_chunk_0.jsonl.gz`
- `train/proofwiki/ProofWiki_definitions.jsonl.gz`
- `train/proofwiki/ProofWiki_theorem_proofs.jsonl.gz`
- `train/stackexchange/cs.stackexchange.com.jsonl.gz`
- `train/stackexchange/cstheory.stackexchange.com.jsonl.gz`
- `train/stackexchange/datascience.stackexchange.com.jsonl.gz`
- ... and more.
## ๐ Data Structure
**Graceful Failure:**
```
Could not inspect the dataset's structure.
This is common for complex datasets that require executing remote code, which is disabled for stability.
Details: Dataset 'GAIR/MathPile' is a gated dataset on the Hub. Visit the dataset page at https://huggingface.co/datasets/GAIR/MathPile to ask for access.
```
# Dataset: `open-r1/codeforces-cots`
## ๐ Metadata
- **Author/Owner:** open-r1
- **Downloads:** 1034
- **Likes:** 188
- **Tags:** license:cc-by-4.0, size_categories:100K<n<1M, format:parquet, modality:tabular, modality:text, library:datasets, library:dask, library:mlcroissant, library:polars, region:us
- **License:** Not specified
## ๐ Description
```text
Dataset Card for CodeForces-CoTs
Dataset description
CodeForces-CoTs is a large-scale dataset for training reasoning models on competitive programming tasks. It consists of 10k CodeForces problems with up to five reasoning traces generated by DeepSeek R1. We did not filter the traces for correctness, but found that around 84% of the Python ones pass the public tests.
The dataset consists of several subsets:
solutions: we prompt R1 to solve the problem and produce code.โฆ See the full description on the dataset page: https://huggingface.co/datasets/open-r1/codeforces-cots....
```
## ๐ File System Sample
- `.gitattributes`
- `README.md`
- `checker_interactor/train-00000-of-00002.parquet`
- `checker_interactor/train-00001-of-00002.parquet`
- `solutions/train-00000-of-00010.parquet`
- `solutions/train-00001-of-00010.parquet`
- `solutions/train-00002-of-00010.parquet`
- `solutions/train-00003-of-00010.parquet`
- `solutions/train-00004-of-00010.parquet`
- `solutions/train-00005-of-00010.parquet`
- `solutions/train-00006-of-00010.parquet`
- `solutions/train-00007-of-00010.parquet`
- `solutions/train-00008-of-00010.parquet`
- `solutions/train-00009-of-00010.parquet`
- `solutions_decontaminated/train-00000-of-00014.parquet`
- ... and more.
## ๐ Data Structure
### Config: `checker_interactor`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `id` | `str` |
| `aliases` | `NoneType` |
| `contest_id` | `str` |
| `contest_name` | `str` |
| `contest_type` | `str` |
| `contest_start` | `int` |
| `contest_start_year` | `int` |
| `index` | `str` |
| `time_limit` | `float` |
| `memory_limit` | `float` |
| `title` | `str` |
| `description` | `str` |
| `input_format` | `str` |
| `output_format` | `str` |
| `interaction_format` | `NoneType` |
| `note` | `NoneType` |
| `examples` | `list` |
| `editorial` | `NoneType` |
| `prompt` | `str` |
| `generation` | `str` |
| `finish_reason` | `str` |
| `api_metadata` | `dict` |
| `messages` | `list` |
### Config: `solutions`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `id` | `str` |
| `aliases` | `NoneType` |
| `contest_id` | `str` |
| `contest_name` | `str` |
| `contest_type` | `str` |
| `contest_start` | `int` |
| `contest_start_year` | `int` |
| `index` | `str` |
| `time_limit` | `float` |
| `memory_limit` | `int` |
| `title` | `str` |
| `description` | `str` |
| `input_format` | `str` |
| `output_format` | `str` |
| `examples` | `list` |
| `note` | `str` |
| `editorial` | `str` |
| `prompt` | `str` |
| `generation` | `str` |
| `finish_reason` | `str` |
| `api_metadata` | `dict` |
| `interaction_format` | `NoneType` |
| `messages` | `list` |
### Config: `solutions_decontaminated`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `id` | `str` |
| `aliases` | `NoneType` |
| `contest_id` | `str` |
| `contest_name` | `str` |
| `contest_type` | `str` |
| `contest_start` | `int` |
| `contest_start_year` | `int` |
| `index` | `str` |
| `time_limit` | `float` |
| `memory_limit` | `float` |
| `title` | `str` |
| `description` | `str` |
| `input_format` | `str` |
| `output_format` | `str` |
| `examples` | `list` |
| `note` | `str` |
| `editorial` | `str` |
| `problem` | `str` |
| `generation` | `str` |
| `finish_reason` | `str` |
| `api_metadata` | `dict` |
| `interaction_format` | `NoneType` |
| `messages` | `list` |
| `problem_type` | `str` |
| `public_tests` | `NoneType` |
| `private_tests` | `NoneType` |
| `generated_tests` | `NoneType` |
| `public_tests_ms` | `NoneType` |
| `failed_solutions` | `NoneType` |
| `accepted_solutions` | `list` |
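
The `solutions` rows pair a problem statement (`prompt`) with a model `generation`; a minimal sketch of assembling one such row into chat-format messages (the sample record below is hypothetical, not an actual row from the dataset):

```python
# Sketch: turning one `solutions`-style record into chat-format training
# messages. The record below is a hypothetical example following the
# schema above, not an actual row from the dataset.
record = {
    "title": "A. Watermelon",
    "description": "Given an even weight w, decide whether it can be split ...",
    "prompt": "Solve the following problem in Python.",
    "generation": "w = int(input())\nprint('YES' if w > 2 and w % 2 == 0 else 'NO')",
}

def to_messages(rec):
    """Pair the problem prompt with the model generation as a two-turn chat."""
    user = f"{rec['prompt']}\n\n{rec['title']}\n{rec['description']}"
    return [
        {"role": "user", "content": user},
        {"role": "assistant", "content": rec["generation"]},
    ]

messages = to_messages(record)
```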
---

# Dataset: `pixparse/idl-wds`
## 📌 Metadata
- **Author/Owner:** pixparse
- **Downloads:** 4279
- **Likes:** 187
- **Tags:** task_categories:image-to-text, license:other, size_categories:1M<n<10M, format:webdataset, modality:image, modality:text, library:datasets, library:webdataset, library:mlcroissant, region:us
- **Type:** community
- **License:** Not specified
## ๐ Description
```text
Dataset Card for Industry Documents Library (IDL)
Dataset Summary
Industry Documents Library (IDL) is a document dataset filtered from UCSF documents library with 19 million pages kept as valid samples.
Each document exists as a collection of a pdf, a tiff image with the same contents rendered, a json file containing extensive Textract OCR annotations from the idl_data project, and a .ocr file with the original, older OCR annotation. In each pdf, there may be from 1 to upโฆ See the full description on the dataset page: https://huggingface.co/datasets/pixparse/idl-wds....
```
## ๐ File System Sample
- `.gitattributes`
- `README.md`
- `_idl-train-info.json`
- `_info.json`
- `doc_images/arrows_plot.png`
- `doc_images/arrows_plot_straight.png`
- `doc_images/bounding_boxes.png`
- `doc_images/bounding_boxes_straight.png`
- `doc_images/idl_page_example.png`
- `idl-train-00000.tar`
- `idl-train-00001.tar`
- `idl-train-00002.tar`
- `idl-train-00003.tar`
- `idl-train-00004.tar`
- `idl-train-00005.tar`
- ... and more.
## ๐ Data Structure
### Config: `default`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `json` | `dict` |
| `ocr` | `bytes` |
| `pdf` | `bytes` |
| `tif` | `TiffImageFile` |
| `__key__` | `str` |
| `__url__` | `str` |
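
Each sample spans several files that share a basename (`__key__`); a minimal sketch of how a WebDataset-style loader groups shard members into samples (the member names below are hypothetical):

```python
from itertools import groupby

# Sketch of how a WebDataset loader derives samples: files inside a shard
# that share a basename (the `__key__`) are grouped into one record whose
# fields are the file extensions (json, ocr, pdf, tif). Member names
# here are hypothetical.
members = [
    "doc_00001.json", "doc_00001.ocr", "doc_00001.pdf", "doc_00001.tif",
    "doc_00002.json", "doc_00002.pdf",
]

def key_of(name):
    return name.rsplit(".", 1)[0]

def group_samples(names):
    samples = []
    for key, grp in groupby(sorted(names), key=key_of):
        fields = {n.rsplit(".", 1)[1] for n in grp}
        samples.append({"__key__": key, "fields": fields})
    return samples

samples = group_samples(members)
```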
---

# Dataset: `storytracer/US-PD-Books`
## 📌 Metadata
- **Author/Owner:** storytracer
- **Downloads:** 180
- **Likes:** 186
- **Tags:** task_categories:text-generation, language:en, license:cc0-1.0, size_categories:100K<n<1M, format:parquet, modality:tabular, modality:text, library:datasets, library:pandas, library:mlcroissant, library:polars, region:us, books, public domain, ocr, open culture
- **Type:** community
- **License:** Not specified
## ๐ Description
```text
UPDATE: The Internet Archive has requested that this dataset be deleted (see discussion #2) because they consider the IA's metadata too unreliable to determine whether a book is in the public domain. To alleviate the IA's concerns, the full texts of the books have been removed from this dataset until a more reliable way to curate public domain books from the IA collections is established. The metadata and documentation remain for reference purposes.
I was able to recreate one subcollectionโฆ See the full description on the dataset page: https://huggingface.co/datasets/storytracer/US-PD-Books....
```
## ๐ File System Sample
- `.gitattributes`
- `README.md`
- `metadata.parquet`
## ๐ Data Structure
### Config: `default`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `ocaid` | `str` |
| `title` | `str` |
| `author` | `str` |
| `year` | `int` |
| `page_count` | `int` |
| `openlibrary_edition` | `str` |
| `openlibrary_work` | `str` |
| `full_text_url` | `str` |
---

# Dataset: `ystemsrx/Erotic_Literature_Collection`
## 📌 Metadata
- **Author/Owner:** ystemsrx
- **Downloads:** 536
- **Likes:** 186
- **Tags:** task_categories:text-generation, language:zh, license:cc-by-nc-4.0, size_categories:10K<n<100K, format:json, modality:text, library:datasets, library:dask, library:mlcroissant, region:us, porn, Pre-training, Fine-tuning, Explicit Content, Chinese, Erotic Literature
- **Type:** community
- **License:** Not specified
## ๐ Description
```text
Chinese Erotic Literature Dataset Collection
Overview
This repository contains 51 Chinese erotic literature datasets. Each dataset consists of short erotic fiction, shared personal erotic experiences, and other forms of erotic content. The datasets are in JSON format; each file contains an array of objects, and each object represents one document:
[
{"text": "document"},
{"text": "document"}
]
These datasets can be used for language-model pre-training and, with appropriate adjustment, for fine-tuning.
Dataset Format
File format: JSON
Content: short erotic fiction, shared personal erotic experiences, and other erotic content
Structure:
Each file contains an array of objects
Each object contains a single key "text" whose value is the corresponding document content
Usage
These datasets are intended primarily for research purposes, particularly in the development and fine-tuning of language models. Given the sensitive nature of the content, users should handle these datasets with care and ensure compliance with local laws, regulations, and related guidelines.
Example usage
import json
# Load the dataset
with open('path_to_json_file.json', 'r'… See the full description on the dataset page: https://huggingface.co/datasets/ystemsrx/Erotic_Literature_Collection....
```
## ๐ File System Sample
- `.gitattributes`
- `README.en.md`
- `README.md`
- `ไนก้ด่ฎฐ่ถฃ.json`
- `ไบบๅฆป็พๅฆ.json`
- `ไผดไพฃไบคๆข.json`
- `ๅจๆผซๆน็ผ.json`
- `ๅป็ๆคๅฃซ.json`
- `ๅๅคๆช่ฐ.json`
- `ๅๅฒๆ
่ฒ.json`
- `ๅ่บซ็ณปๅ.json`
- `ๅคๅ
ธๆ
่ฒ.json`
- `ๅฆ็ฑปๅ
ถไป.json`
- `ๅไบไน้ด.json`
- `ๅๆงๆ
่ฒ.json`
- ... and more.
## ๐ Data Structure
### Config: `default`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `text` | `str` |
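
Each file is a JSON array of `{"text": ...}` objects; a self-contained sketch of the loading pattern the card describes (using a temporary stand-in file rather than an actual dataset file):

```python
import json
import tempfile

# Sketch of the loading pattern the card describes: each file is a JSON
# array of {"text": ...} objects. We write a tiny stand-in file first so
# the example is self-contained.
docs = [{"text": "document one"}, {"text": "document two"}]
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False,
                                 encoding="utf-8") as f:
    json.dump(docs, f, ensure_ascii=False)
    path = f.name

with open(path, "r", encoding="utf-8") as f:
    loaded = json.load(f)

texts = [d["text"] for d in loaded]  # one string per document
```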
---

# Dataset: `NovaSky-AI/Sky-T1_data_17k`
## 📌 Metadata
- **Author/Owner:** NovaSky-AI
- **Downloads:** 252
- **Likes:** 186
- **Tags:** license:apache-2.0, size_categories:10K<n<100K, format:json, modality:text, library:datasets, library:pandas, library:mlcroissant, library:polars, region:us
- **Type:** community
- **License:** Not specified
## ๐ Description
```text
Sky-T1_data_17k.json: The 17k training data used to train Sky-T1-32B-Preview. The final data contains 5k coding data from APPs and TACO, and 10k math data from AIME, MATH, and Olympiads subsets of the NuminaMATH dataset. In addition, we maintain 1k science and puzzle data from STILL-2....
```
## ๐ File System Sample
- `.gitattributes`
- `README.md`
- `Sky-T1_data_17k.json`
## ๐ Data Structure
### Config: `default`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `system` | `str` |
| `conversations` | `list` |
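
A minimal sketch of flattening one row (`system` string plus a `conversations` list) into a single message list; the `from`/`value` keys assume ShareGPT-style conversations, and the record is hypothetical:

```python
# Sketch: flattening one Sky-T1-style record (`system` plus `conversations`)
# into a single message list. The `from`/`value` keys assume ShareGPT-style
# conversation turns; adjust if actual rows differ.
record = {
    "system": "You are a helpful assistant.",
    "conversations": [
        {"from": "user", "value": "Compute 2 + 2."},
        {"from": "assistant", "value": "4"},
    ],
}

role_map = {"human": "user", "user": "user",
            "gpt": "assistant", "assistant": "assistant"}

messages = [{"role": "system", "content": record["system"]}]
for turn in record["conversations"]:
    messages.append({"role": role_map[turn["from"]],
                     "content": turn["value"]})
```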
---

# Dataset: `codeparrot/apps`
## 📌 Metadata
- **Author/Owner:** codeparrot
- **Downloads:** 9611
- **Likes:** 185
- **Tags:** task_categories:text-generation, task_ids:language-modeling, language_creators:crowdsourced, language_creators:expert-generated, multilinguality:monolingual, language:code, license:mit, arxiv:2105.09938, arxiv:2203.07814, region:us
- **Type:** community
- **License:** Not specified
## ๐ Description
```text
APPS is a benchmark for Python code generation, it includes 10,000 problems, which range from having simple oneline solutions to being substantial algorithmic challenges, for more details please refer to this paper: https://arxiv.org/pdf/2105.09938.pdf....
```
## ๐ File System Sample
- `.gitattributes`
- `README.md`
- `apps.py`
- `test.jsonl`
- `train.jsonl`
## ๐ Data Structure
**Graceful Failure:**
```
Could not inspect the dataset's structure.
This is common for complex datasets that require executing remote code, which is disabled for stability.
Details: Dataset scripts are no longer supported, but found apps.py
```
---

# Dataset: `OpenGVLab/ShareGPT-4o`
## 📌 Metadata
- **Author/Owner:** OpenGVLab
- **Downloads:** 93723
- **Likes:** 185
- **Tags:** task_categories:visual-question-answering, task_categories:question-answering, language:en, license:mit, size_categories:10K<n<100K, format:json, modality:tabular, modality:text, library:datasets, library:pandas, library:mlcroissant, library:polars, region:us
- **Type:** community
- **License:** Not specified
## ๐ File System Sample
- `.gitattributes`
- `README.md`
- `audio_benchmark/AI2D/AI2D_audio.jsonl`
- `audio_benchmark/AI2D/audio/ai2d_audio_0.wav`
- `audio_benchmark/AI2D/audio/ai2d_audio_1.wav`
- `audio_benchmark/AI2D/audio/ai2d_audio_10.wav`
- `audio_benchmark/AI2D/audio/ai2d_audio_100.wav`
- `audio_benchmark/AI2D/audio/ai2d_audio_1000.wav`
- `audio_benchmark/AI2D/audio/ai2d_audio_1001.wav`
- `audio_benchmark/AI2D/audio/ai2d_audio_1002.wav`
- `audio_benchmark/AI2D/audio/ai2d_audio_1003.wav`
- `audio_benchmark/AI2D/audio/ai2d_audio_1004.wav`
- `audio_benchmark/AI2D/audio/ai2d_audio_1005.wav`
- `audio_benchmark/AI2D/audio/ai2d_audio_1006.wav`
- `audio_benchmark/AI2D/audio/ai2d_audio_1007.wav`
- ... and more.
## ๐ Data Structure
**Graceful Failure:**
```
Could not inspect the dataset's structure.
This is common for complex datasets that require executing remote code, which is disabled for stability.
Details: Dataset 'OpenGVLab/ShareGPT-4o' is a gated dataset on the Hub. Visit the dataset page at https://huggingface.co/datasets/OpenGVLab/ShareGPT-4o to ask for access.
```
---

# Dataset: `google-research-datasets/mbpp`
## 📌 Metadata
- **Author/Owner:** google-research-datasets
- **Downloads:** 32388
- **Likes:** 183
- **Tags:** annotations_creators:crowdsourced, annotations_creators:expert-generated, language_creators:crowdsourced, language_creators:expert-generated, multilinguality:monolingual, source_datasets:original, language:en, license:cc-by-4.0, size_categories:1K<n<10K, format:parquet, modality:text, library:datasets, library:pandas, library:mlcroissant, library:polars, arxiv:2108.07732, region:us, code-generation
- **Type:** community
- **License:** Not specified
## ๐ Description
```text
Dataset Card for Mostly Basic Python Problems (mbpp)
Dataset Summary
The benchmark consists of around 1,000 crowd-sourced Python programming problems, designed to be solvable by entry level programmers, covering programming fundamentals, standard library functionality, and so on. Each problem consists of a task description, code solution and 3 automated test cases. As described in the paper, a subset of the data has been hand-verified by us.
Released here as part ofโฆ See the full description on the dataset page: https://huggingface.co/datasets/google-research-datasets/mbpp....
```
## ๐ File System Sample
- `.gitattributes`
- `README.md`
- `full/prompt-00000-of-00001.parquet`
- `full/test-00000-of-00001.parquet`
- `full/train-00000-of-00001.parquet`
- `full/validation-00000-of-00001.parquet`
- `sanitized/prompt-00000-of-00001.parquet`
- `sanitized/test-00000-of-00001.parquet`
- `sanitized/train-00000-of-00001.parquet`
- `sanitized/validation-00000-of-00001.parquet`
## ๐ Data Structure
### Config: `full`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `task_id` | `int` |
| `text` | `str` |
| `code` | `str` |
| `test_list` | `list` |
| `test_setup_code` | `str` |
| `challenge_test_list` | `list` |
#### Split: `test`
| Column Name | Data Type |
|---|---|
| `task_id` | `int` |
| `text` | `str` |
| `code` | `str` |
| `test_list` | `list` |
| `test_setup_code` | `str` |
| `challenge_test_list` | `list` |
#### Split: `validation`
| Column Name | Data Type |
|---|---|
| `task_id` | `int` |
| `text` | `str` |
| `code` | `str` |
| `test_list` | `list` |
| `test_setup_code` | `str` |
| `challenge_test_list` | `list` |
### Config: `sanitized`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `source_file` | `str` |
| `task_id` | `int` |
| `prompt` | `str` |
| `code` | `str` |
| `test_imports` | `list` |
| `test_list` | `list` |
#### Split: `test`
| Column Name | Data Type |
|---|---|
| `source_file` | `str` |
| `task_id` | `int` |
| `prompt` | `str` |
| `code` | `str` |
| `test_imports` | `list` |
| `test_list` | `list` |
#### Split: `validation`
| Column Name | Data Type |
|---|---|
| `source_file` | `str` |
| `task_id` | `int` |
| `prompt` | `str` |
| `code` | `str` |
| `test_imports` | `list` |
| `test_list` | `list` |
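
Each MBPP sample bundles a reference `code` solution with assertion strings in `test_list`; a minimal sketch of checking a sample by executing both (the sample below is hypothetical but follows the schema above):

```python
# Sketch: how an MBPP-style sample can be checked by executing its `code`
# and then each assertion in `test_list`. The sample is hypothetical but
# follows the schema above.
sample = {
    "task_id": 1,
    "prompt": "Write a function to add two numbers.",
    "code": "def add(a, b):\n    return a + b",
    "test_list": ["assert add(2, 3) == 5", "assert add(-1, 1) == 0"],
}

def passes(s):
    env = {}
    exec(s["code"], env)          # define the candidate solution
    try:
        for t in s["test_list"]:  # run each assertion against it
            exec(t, env)
    except AssertionError:
        return False
    return True

ok = passes(sample)
```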
---

# Dataset: `allenai/peS2o`
## 📌 Metadata
- **Author/Owner:** allenai
- **Downloads:** 1991
- **Likes:** 183
- **Tags:** task_categories:text-generation, task_categories:fill-mask, source_datasets:allenai/s2orc, language:en, license:odc-by, size_categories:10B<n<100B, region:us, biology, chemistry, engineering, computer science, physics, material science, math, psychology, economics, political science, business, geology, sociology, geography, environmental science, art, history, philosophy
- **Type:** community
- **License:** Not specified
## ๐ Description
```text
Pretraining Effectively on S2ORC!
The peS2o dataset is a collection of ~40M creative open-access academic papers,
cleaned, filtered, and formatted for pre-training of language models. It is derived from
the Semantic Scholar Open Research Corpus(Lo et al, 2020), or S2ORC.
We release multiple version of peS2o, each with different processing and knowledge cutoff
date. We recommend you to use the latest version available.
If you use this dataset, please cite:
@techreport{peS2o,
author =โฆ See the full description on the dataset page: https://huggingface.co/datasets/allenai/peS2o....
```
## ๐ File System Sample
- `.flake8`
- `.gitattributes`
- `.gitignore`
- `README.md`
- `data/v1/train-00000-of-00020.json.gz`
- `data/v1/train-00001-of-00020.json.gz`
- `data/v1/train-00002-of-00020.json.gz`
- `data/v1/train-00003-of-00020.json.gz`
- `data/v1/train-00004-of-00020.json.gz`
- `data/v1/train-00005-of-00020.json.gz`
- `data/v1/train-00006-of-00020.json.gz`
- `data/v1/train-00007-of-00020.json.gz`
- `data/v1/train-00008-of-00020.json.gz`
- `data/v1/train-00009-of-00020.json.gz`
- `data/v1/train-00010-of-00020.json.gz`
- ... and more.
## ๐ Data Structure
**Graceful Failure:**
```
Could not inspect the dataset's structure.
This is common for complex datasets that require executing remote code, which is disabled for stability.
Details: Dataset scripts are no longer supported, but found peS2o.py
```
---

# Dataset: `Open-Orca/FLAN`
## 📌 Metadata
- **Author/Owner:** Open-Orca
- **Downloads:** 8070
- **Likes:** 182
- **Tags:** language:en, license:cc-by-4.0, size_categories:100M<n<1B, format:parquet, modality:text, library:datasets, library:dask, library:mlcroissant, library:polars, arxiv:2301.13688, arxiv:2109.01652, arxiv:2110.08207, arxiv:2204.07705, region:us
- **Type:** community
- **License:** Not specified
## ๐ Description
```text
๐ฎ The WHOLE FLAN Collection! ๐ฎ
Overview
This repository includes the full dataset from the FLAN Collection, totalling ~300GB as parquets.
Generated using the official seqio templating from the Google FLAN Collection GitHub repo.
The data is subject to all the same licensing of the component datasets.
To keep up with our continued work on OpenOrca and other exciting research, find our Discord here:
https://AlignmentLab.ai
Motivation
This work was done as part ofโฆ See the full description on the dataset page: https://huggingface.co/datasets/Open-Orca/FLAN....
```
## ๐ File System Sample
- `.gitattributes`
- `OOFlanLogo.png`
- `README.md`
- `cot_fsopt_data/part.0.parquet`
- `cot_submix_data.jsonl`
- `cot_zs_submix_data.json`
- `cot_zsopt_data/part.0.parquet`
- `dialog_fsopt_data/part.0.parquet`
- `dialog_fsopt_data/part.1.parquet`
- `dialog_fsopt_data/part.10.parquet`
- `dialog_fsopt_data/part.11.parquet`
- `dialog_fsopt_data/part.12.parquet`
- `dialog_fsopt_data/part.13.parquet`
- `dialog_fsopt_data/part.14.parquet`
- `dialog_fsopt_data/part.15.parquet`
- ... and more.
## ๐ Data Structure
### Config: `default`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `inputs` | `str` |
| `targets` | `str` |
| `_template_idx` | `int` |
| `_task_source` | `str` |
| `_task_name` | `str` |
| `_template_type` | `str` |
---

# Dataset: `SciPhi/textbooks-are-all-you-need-lite`
## 📌 Metadata
- **Author/Owner:** SciPhi
- **Downloads:** 101
- **Likes:** 182
- **Tags:** license:llama2, size_categories:100K<n<1M, format:parquet, modality:text, library:datasets, library:dask, library:mlcroissant, library:polars, region:us
- **Type:** community
- **License:** Not specified
## ๐ Description
```text
Textbooks are all you need : A SciPhi Collection
Dataset Description
With LLMs, we can create a fully open-source Library of Alexandria.
As a first attempt, we have generated 650,000 unique textbook samples from a diverse span of courses, kindergarten through graduate school.
These are open source samples, which likely fall under the Llama-2 license. They were generated using the SciPhi repository.
All samples were created with TheBloke/Phind-CodeLlama-34B-v2-AWQ.
Lastly, I oweโฆ See the full description on the dataset page: https://huggingface.co/datasets/SciPhi/textbooks-are-all-you-need-lite....
```
## ๐ File System Sample
- `.gitattributes`
- `README.md`
- `data/train-00000-of-00007-eb50287110fc883d.parquet`
- `data/train-00001-of-00007-046f12126f63c8d6.parquet`
- `data/train-00002-of-00007-e4af4ced172630e6.parquet`
- `data/train-00003-of-00007-f668cf5e855ca670.parquet`
- `data/train-00004-of-00007-915a5924313bb1bf.parquet`
- `data/train-00005-of-00007-4fe686d29c2f407c.parquet`
- `data/train-00006-of-00007-6bfd1affc76349bd.parquet`
## ๐ Data Structure
### Config: `default`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `formatted_prompt` | `str` |
| `completion` | `str` |
| `first_task` | `str` |
| `second_task` | `str` |
| `last_task` | `str` |
| `notes` | `str` |
| `title` | `str` |
| `model` | `str` |
| `temperature` | `float` |
---

# Dataset: `argilla/distilabel-capybara-dpo-7k-binarized`
## 📌 Metadata
- **Author/Owner:** argilla
- **Downloads:** 3251
- **Likes:** 182
- **Tags:** task_categories:question-answering, task_categories:text-generation, language:en, license:apache-2.0, size_categories:1K<n<10K, format:parquet, modality:tabular, modality:text, library:datasets, library:pandas, library:mlcroissant, library:polars, library:distilabel, library:argilla, region:us, Physics, Biology, Math, Chemistry, Culture, Logic, Roleplay, rlaif, rlhf, dpo, distilabel, synthetic, argilla
- **Type:** community
- **License:** Not specified
## ๐ Description
```text
Capybara-DPO 7K binarized
A DPO dataset built with distilabel atop the awesome LDJnr/Capybara
This is a preview version to collect feedback from the community. v2 will include the full base dataset and responses from more powerful models.
Why?
Multi-turn dialogue data is key to fine-tune capable chat models. Multi-turn preference data has been used by the most relevant RLHF works (Anthropic, Meta Llama2, etc.). Unfortunately, there are very fewโฆ See the full description on the dataset page: https://huggingface.co/datasets/argilla/distilabel-capybara-dpo-7k-binarized....
```
## ๐ File System Sample
- `.gitattributes`
- `README.md`
- `data/train-00000-of-00001.parquet`
## ๐ Data Structure
### Config: `default`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `source` | `str` |
| `conversation` | `list` |
| `original_response` | `str` |
| `generation_prompt` | `list` |
| `raw_generation_responses` | `list` |
| `new_generations` | `list` |
| `prompt` | `str` |
| `chosen` | `list` |
| `rejected` | `list` |
| `rating_chosen` | `int` |
| `rating_rejected` | `int` |
| `chosen_model` | `str` |
| `rejected_model` | `str` |
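
With per-pair `rating_chosen`/`rating_rejected` columns, a common preprocessing step is keeping only pairs whose chosen rating beats the rejected rating by some margin; a small sketch over hypothetical rows:

```python
# Sketch: filtering DPO pairs by rating margin. The rows below are
# hypothetical but follow the columns above.
rows = [
    {"prompt": "q1", "rating_chosen": 5, "rating_rejected": 2},
    {"prompt": "q2", "rating_chosen": 4, "rating_rejected": 4},
    {"prompt": "q3", "rating_chosen": 5, "rating_rejected": 4},
]

def filter_pairs(rows, min_margin=1):
    """Keep pairs where chosen outranks rejected by at least min_margin."""
    return [r for r in rows
            if r["rating_chosen"] - r["rating_rejected"] >= min_margin]

kept = filter_pairs(rows, min_margin=1)
```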
---

# Dataset: `openlifescienceai/medmcqa`
## 📌 Metadata
- **Author/Owner:** openlifescienceai
- **Downloads:** 14183
- **Likes:** 181
- **Tags:** task_categories:question-answering, task_categories:multiple-choice, task_ids:multiple-choice-qa, task_ids:open-domain-qa, annotations_creators:no-annotation, language_creators:expert-generated, multilinguality:monolingual, source_datasets:original, language:en, license:apache-2.0, size_categories:100K<n<1M, format:parquet, modality:text, library:datasets, library:pandas, library:mlcroissant, library:polars, region:us
- **Type:** community
- **License:** Not specified
## ๐ Description
```text
Dataset Card for MedMCQA
Dataset Summary
MedMCQA is a large-scale, Multiple-Choice Question Answering (MCQA) dataset designed to address real-world medical entrance exam questions.
MedMCQA has more than 194k high-quality AIIMS & NEET PG entrance exam MCQs covering 2.4k healthcare topics and 21 medical subjects are collected with an average token length of 12.77 and high topical diversity.
Each sample contains a question, correct answer(s), and other options which requireโฆ See the full description on the dataset page: https://huggingface.co/datasets/openlifescienceai/medmcqa....
```
## ๐ File System Sample
- `.gitattributes`
- `README.md`
- `data/test-00000-of-00001.parquet`
- `data/train-00000-of-00001.parquet`
- `data/validation-00000-of-00001.parquet`
## ๐ Data Structure
### Config: `default`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `id` | `str` |
| `question` | `str` |
| `opa` | `str` |
| `opb` | `str` |
| `opc` | `str` |
| `opd` | `str` |
| `cop` | `int` |
| `choice_type` | `str` |
| `exp` | `str` |
| `subject_name` | `str` |
| `topic_name` | `str` |
#### Split: `test`
| Column Name | Data Type |
|---|---|
| `id` | `str` |
| `question` | `str` |
| `opa` | `str` |
| `opb` | `str` |
| `opc` | `str` |
| `opd` | `str` |
| `cop` | `int` |
| `choice_type` | `str` |
| `exp` | `str` |
| `subject_name` | `str` |
| `topic_name` | `NoneType` |
#### Split: `validation`
| Column Name | Data Type |
|---|---|
| `id` | `str` |
| `question` | `str` |
| `opa` | `str` |
| `opb` | `str` |
| `opc` | `str` |
| `opd` | `str` |
| `cop` | `int` |
| `choice_type` | `str` |
| `exp` | `NoneType` |
| `subject_name` | `str` |
| `topic_name` | `NoneType` |
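
A minimal sketch of formatting one row as a multiple-choice prompt, assuming `cop` is a 0-based index into the options (a)-(d) (the row below is hypothetical):

```python
# Sketch: formatting one MedMCQA-style row as a multiple-choice prompt,
# assuming `cop` is a 0-based index into options (a)-(d). The row is
# hypothetical.
row = {
    "question": "Which vitamin is synthesized in the skin?",
    "opa": "Vitamin A", "opb": "Vitamin B12",
    "opc": "Vitamin C", "opd": "Vitamin D",
    "cop": 3,
}

def format_mcq(r):
    options = [r["opa"], r["opb"], r["opc"], r["opd"]]
    lines = [r["question"]]
    lines += [f"({'abcd'[i]}) {opt}" for i, opt in enumerate(options)]
    answer = "abcd"[r["cop"]]
    return "\n".join(lines), answer

prompt, answer = format_mcq(row)
```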
---

# Dataset: `argilla/distilabel-intel-orca-dpo-pairs`
## 📌 Metadata
- **Author/Owner:** argilla
- **Downloads:** 3478
- **Likes:** 181
- **Tags:** license:apache-2.0, size_categories:10K<n<100K, format:parquet, modality:text, library:datasets, library:pandas, library:mlcroissant, library:polars, region:us
- **Type:** community
- **License:** Not specified
## ๐ Description
```text
distilabel Orca Pairs for DPO
The dataset is a "distilabeled" version of the widely used dataset: Intel/orca_dpo_pairs. The original dataset has been used by 100s of open-source practitioners and models. We knew from fixing UltraFeedback (and before that, Alpacas and Dollys) that this dataset could be highly improved.
Continuing with our mission to build the best alignment datasets for open-source LLMs and the community, we spent a few hours improving it withโฆ See the full description on the dataset page: https://huggingface.co/datasets/argilla/distilabel-intel-orca-dpo-pairs....
```
## ๐ File System Sample
- `.gitattributes`
- `README.md`
- `data/train-00000-of-00001.parquet`
## ๐ Data Structure
### Config: `default`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `system` | `str` |
| `input` | `str` |
| `chosen` | `str` |
| `rejected` | `str` |
| `generations` | `list` |
| `order` | `list` |
| `labelling_model` | `str` |
| `labelling_prompt` | `list` |
| `raw_labelling_response` | `str` |
| `rating` | `list` |
| `rationale` | `str` |
| `status` | `str` |
| `original_chosen` | `str` |
| `original_rejected` | `str` |
| `chosen_score` | `float` |
| `in_gsm8k_train` | `bool` |
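
The extra columns support downstream filtering; one plausible recipe (an illustration, not necessarily the authors' exact pipeline) drops rows flagged `in_gsm8k_train` and keeps confidently rated pairs via `chosen_score`, sketched over hypothetical rows:

```python
# Sketch: filtering distilabeled DPO pairs using the extra columns. This
# is one plausible recipe, not the authors' exact pipeline; rows are
# hypothetical.
rows = [
    {"input": "a", "chosen_score": 9.0, "in_gsm8k_train": False},
    {"input": "b", "chosen_score": 4.0, "in_gsm8k_train": False},
    {"input": "c", "chosen_score": 9.5, "in_gsm8k_train": True},
]

kept = [r for r in rows
        if not r["in_gsm8k_train"] and r["chosen_score"] >= 8.0]
```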
---

# Dataset: `math-ai/AutoMathText`
## 📌 Metadata
- **Author/Owner:** math-ai
- **Downloads:** 4414
- **Likes:** 181
- **Tags:** task_categories:text-generation, task_categories:question-answering, language:en, license:cc-by-sa-4.0, size_categories:1M<n<10M, modality:text, arxiv:2402.07625, region:us, mathematical-reasoning, reasoning, finetuning, pretraining, llm
- **Type:** community
- **License:** Not specified
## ๐ Description
```text
๐ This work, introducing the AutoMathText dataset and the AutoDS method, has been accepted to The 63rd Annual Meeting of the Association for Computational Linguistics (ACL 2025 Findings)! ๐
AutoMathText
AutoMathText is an extensive and carefully curated dataset encompassing around 200 GB of mathematical texts. It's a compilation sourced from a diverse range of platforms including various websites, arXiv, and GitHub (OpenWebMath, RedPajama, Algebraic Stack). This rich repositoryโฆ See the full description on the dataset page: https://huggingface.co/datasets/math-ai/AutoMathText....
```
## ๐ File System Sample
- `.gitattributes`
- `LICENSE`
- `README.md`
- `data/arxiv/0.00-0.50/0.00-0.01.jsonl`
- `data/arxiv/0.00-0.50/0.01-0.02.jsonl`
- `data/arxiv/0.00-0.50/0.02-0.03.jsonl`
- `data/arxiv/0.00-0.50/0.03-0.04.jsonl`
- `data/arxiv/0.00-0.50/0.04-0.05.jsonl`
- `data/arxiv/0.00-0.50/0.05-0.06.jsonl`
- `data/arxiv/0.00-0.50/0.06-0.07.jsonl`
- `data/arxiv/0.00-0.50/0.07-0.08.jsonl`
- `data/arxiv/0.00-0.50/0.08-0.09.jsonl`
- `data/arxiv/0.00-0.50/0.09-0.10.jsonl`
- `data/arxiv/0.00-0.50/0.10-0.11.jsonl`
- `data/arxiv/0.00-0.50/0.11-0.12.jsonl`
- ... and more.
## ๐ Data Structure
### Config: `web-0.50-to-1.00`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `url` | `str` |
| `text` | `str` |
| `date` | `datetime` |
| `meta` | `dict` |
### Config: `web-0.60-to-1.00`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `url` | `str` |
| `text` | `str` |
| `date` | `datetime` |
| `meta` | `dict` |
### Config: `web-0.70-to-1.00`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `url` | `str` |
| `text` | `str` |
| `date` | `datetime` |
| `meta` | `dict` |
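
The config names encode a quality-score range; a small helper (an assumption about the naming convention shown above, not an official API) can parse them into numeric bounds for programmatic config selection:

```python
# Sketch: parsing config names like "web-0.50-to-1.00" into a source and a
# numeric score range. This assumes the naming convention shown above; it
# is not an official API.
def parse_config(name):
    source, rng = name.split("-", 1)   # "web", "0.50-to-1.00"
    lo, hi = rng.split("-to-")
    return source, float(lo), float(hi)

source, lo, hi = parse_config("web-0.50-to-1.00")
```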
---
# Dataset: `allenai/tulu-3-sft-mixture`
## ๐ Metadata
- **Author/Owner:** allenai
- **Downloads:** 14586
- **Likes:** 181
- **Tags:** task_categories:other, annotations_creators:crowdsourced, annotations_creators:expert-generated, annotations_creators:machine-generated, multilinguality:multilingual, source_datasets:allenai/coconot, source_datasets:ai2-adapt-dev/flan_v2_converted, source_datasets:HuggingFaceH4/no_robots, source_datasets:OpenAssistant/oasst1, source_datasets:allenai/tulu-3-personas-math, source_datasets:allenai/tulu-3-sft-personas-math-grade, source_datasets:allenai/tulu-3-sft-personas-code, source_datasets:allenai/tulu-3-personas-algebra, source_datasets:allenai/tulu-3-sft-personas-instruction-following, source_datasets:AI-MO/NuminaMath-TIR, source_datasets:allenai/wildguardmix, source_datasets:allenai/wildjailbreak, source_datasets:allenai/tulu-3-hard-coded, source_datasets:CohereForAI/aya_dataset, source_datasets:allenai/WildChat-1M, source_datasets:LipengCS/Table-GPT, source_datasets:allenai/SciRIFF, source_datasets:theblackcat102/evol-codealpaca-v1, language:amh, language:arb, language:ary, language:ars, language:acq, language:arz, language:apc, language:ben, language:ceb, language:dan, language:deu, language:ell, language:eng, language:eus, language:fil, language:fin, language:fra, language:gle, language:guj, language:hat, language:hau, language:hin, language:hun, language:ibo, language:ind, language:ita, language:jav, language:jpn, language:kan, language:kir, language:kor, language:kur, language:lit, language:mal, language:mar, language:mlg, language:msa, language:mya, language:nep, language:nld, language:nso, language:nya, language:pan, language:pes, language:pol, language:por, language:pus, language:rus, language:sin, language:sna, language:snd, language:som, language:spa, language:sqi, language:srp, language:sun, language:swa, language:swe, language:tam, language:tel, language:tha, language:tur, language:ukr, language:urd, language:vie, language:wol, language:xho, language:yor, language:zho, language:zul, license:odc-by, size_categories:100K<n<1M, 
format:parquet, modality:text, library:datasets, library:dask, library:mlcroissant, library:polars, region:us
- **License:** Not specified
## Description
```text
Tulu 3 SFT Mixture
Note that this collection is licensed under the ODC-BY-1.0 license; different licenses apply to subsets of the data. Some portions of the dataset are non-commercial. We present the mixture as a research artifact.
The Tulu 3 SFT mixture was used to train the Tulu 3 series of models.
It contains 939,344 samples from the following sets:
CoCoNot (ODC-BY-1.0), 10,983 prompts (Brahman et al., 2024)
FLAN v2 via ai2-adapt-dev/flan_v2_converted, 89,982 prompts (Longpre et… See the full description on the dataset page: https://huggingface.co/datasets/allenai/tulu-3-sft-mixture....
```
## File System Sample
- `.gitattributes`
- `README.md`
- `data/train-00000-of-00006.parquet`
- `data/train-00001-of-00006.parquet`
- `data/train-00002-of-00006.parquet`
- `data/train-00003-of-00006.parquet`
- `data/train-00004-of-00006.parquet`
- `data/train-00005-of-00006.parquet`
## Data Structure
### Config: `default`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `id` | `str` |
| `messages` | `list` |
| `source` | `str` |
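The `messages` column holds a list of chat turns. Assuming the common `role`/`content` keys used by chat-style SFT mixtures (an assumption worth verifying against a loaded row), a minimal sketch of flattening one row into a single training string:

```python
# Sketch: flatten a Tulu-style `messages` list into one training string.
# The `role`/`content` keys follow the common chat schema and are an
# assumption here; check them against actual rows after loading.

def flatten_messages(messages):
    """Join chat turns into a single tagged string."""
    return "\n".join(f"<|{m['role']}|>\n{m['content']}" for m in messages)

# Made-up row mirroring the id/messages/source columns above.
row = {
    "id": "example-0",
    "source": "ai2-adapt-dev/flan_v2_converted",
    "messages": [
        {"role": "user", "content": "What is 2 + 2?"},
        {"role": "assistant", "content": "4"},
    ],
}
print(flatten_messages(row["messages"]))
```

In practice a tokenizer's chat template would replace the ad-hoc `<|role|>` tags; the sketch only shows how the column nests.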
|
hotpotqa/hotpot_qa
|
hotpotqa
|
task_categories:question-answering, annotations_creators:crowdsourced, language_creators:found, multilinguality:monolingual, source_datasets:original, language:en, license:cc-by-sa-4.0, size_categories:100K<n<1M, format:parquet, modality:text, library:datasets, library:dask, library:mlcroissant, library:polars, arxiv:1809.09600, region:us, multi-hop
|
community
|
# Dataset: `hotpotqa/hotpot_qa`
## Metadata
- **Author/Owner:** hotpotqa
- **Downloads:** 26801
- **Likes:** 180
- **Tags:** task_categories:question-answering, annotations_creators:crowdsourced, language_creators:found, multilinguality:monolingual, source_datasets:original, language:en, license:cc-by-sa-4.0, size_categories:100K<n<1M, format:parquet, modality:text, library:datasets, library:dask, library:mlcroissant, library:polars, arxiv:1809.09600, region:us, multi-hop
- **License:** Not specified
## Description
```text
Dataset Card for "hotpot_qa"
Dataset Summary
HotpotQA is a new dataset with 113k Wikipedia-based question-answer pairs with four key features: (1) the questions require finding and reasoning over multiple supporting documents to answer; (2) the questions are diverse and not constrained to any pre-existing knowledge bases or knowledge schemas; (3) we provide sentence-level supporting facts required for reasoning, allowing QA systems to reason… See the full description on the dataset page: https://huggingface.co/datasets/hotpotqa/hotpot_qa....
```
## File System Sample
- `.gitattributes`
- `README.md`
- `distractor/train-00000-of-00002.parquet`
- `distractor/train-00001-of-00002.parquet`
- `distractor/validation-00000-of-00001.parquet`
- `fullwiki/test-00000-of-00001.parquet`
- `fullwiki/train-00000-of-00002.parquet`
- `fullwiki/train-00001-of-00002.parquet`
- `fullwiki/validation-00000-of-00001.parquet`
## Data Structure
### Config: `distractor`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `id` | `str` |
| `question` | `str` |
| `answer` | `str` |
| `type` | `str` |
| `level` | `str` |
| `supporting_facts` | `dict` |
| `context` | `dict` |
#### Split: `validation`
| Column Name | Data Type |
|---|---|
| `id` | `str` |
| `question` | `str` |
| `answer` | `str` |
| `type` | `str` |
| `level` | `str` |
| `supporting_facts` | `dict` |
| `context` | `dict` |
### Config: `fullwiki`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `id` | `str` |
| `question` | `str` |
| `answer` | `str` |
| `type` | `str` |
| `level` | `str` |
| `supporting_facts` | `dict` |
| `context` | `dict` |
#### Split: `validation`
| Column Name | Data Type |
|---|---|
| `id` | `str` |
| `question` | `str` |
| `answer` | `str` |
| `type` | `str` |
| `level` | `str` |
| `supporting_facts` | `dict` |
| `context` | `dict` |
#### Split: `test`
| Column Name | Data Type |
|---|---|
| `id` | `str` |
| `question` | `str` |
| `answer` | `NoneType` |
| `type` | `NoneType` |
| `level` | `NoneType` |
| `supporting_facts` | `dict` |
| `context` | `dict` |
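In both configs, `supporting_facts` and `context` are columnar dicts of parallel lists (titles alongside sentence ids, titles alongside lists of sentences). A hedged sketch of resolving the supporting facts to their actual sentences, using a made-up example row shaped like the schema above:

```python
# Sketch: resolve HotpotQA supporting facts to the sentences they point at.
# The parallel-list layout of `supporting_facts` and `context` matches the
# flattened Hub schema; double-check field names against a loaded row.

def supporting_sentences(example):
    """Return the sentences flagged as supporting facts."""
    by_title = dict(zip(example["context"]["title"],
                        example["context"]["sentences"]))
    return [
        by_title[title][sent_id]
        for title, sent_id in zip(example["supporting_facts"]["title"],
                                  example["supporting_facts"]["sent_id"])
    ]

# Toy example row, not taken from the dataset.
example = {
    "question": "Which city is the Eiffel Tower in?",
    "supporting_facts": {"title": ["Eiffel Tower"], "sent_id": [0]},
    "context": {
        "title": ["Eiffel Tower", "Louvre"],
        "sentences": [["The Eiffel Tower is in Paris."],
                      ["The Louvre is a museum."]],
    },
}
print(supporting_sentences(example))
```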
|
vikp/textbook_quality_programming
|
vikp
|
language:en, size_categories:10K<n<100K, format:parquet, modality:text, library:datasets, library:pandas, library:mlcroissant, library:polars, region:us
|
community
|
# Dataset: `vikp/textbook_quality_programming`
## Metadata
- **Author/Owner:** vikp
- **Downloads:** 50
- **Likes:** 180
- **Tags:** language:en, size_categories:10K<n<100K, format:parquet, modality:text, library:datasets, library:pandas, library:mlcroissant, library:polars, region:us
- **License:** Not specified
## Description
```text
Dataset Card for "textbook_quality_programming"
Synthetic programming textbooks generated with GPT-3.5 and retrieval. Very high quality, aimed at being used in a phi replication. Currently 115M tokens. Covers many languages and technologies, with a bias towards python.
~10k of the books (65M tokens) use an older generation method, and average 6k tokens in length. ~1.5k books (50M tokens) use a newer generation method, with a more detailed outline, and average 33k tokens in… See the full description on the dataset page: https://huggingface.co/datasets/vikp/textbook_quality_programming....
```
## File System Sample
- `.gitattributes`
- `README.md`
- `data/train-00000-of-00001-6815e36c7337e2db.parquet`
## Data Structure
### Config: `default`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `topic` | `str` |
| `model` | `str` |
| `concepts` | `list` |
| `outline` | `list` |
| `markdown` | `str` |
|
openslr/librispeech_asr
|
openslr
|
task_categories:automatic-speech-recognition, task_categories:audio-classification, task_ids:speaker-identification, annotations_creators:expert-generated, language_creators:crowdsourced, language_creators:expert-generated, multilinguality:monolingual, source_datasets:original, language:en, license:cc-by-4.0, size_categories:100K<n<1M, format:parquet, modality:audio, modality:text, library:datasets, library:dask, library:mlcroissant, library:polars, region:us
|
community
|
# Dataset: `openslr/librispeech_asr`
## Metadata
- **Author/Owner:** openslr
- **Downloads:** 38088
- **Likes:** 179
- **Tags:** task_categories:automatic-speech-recognition, task_categories:audio-classification, task_ids:speaker-identification, annotations_creators:expert-generated, language_creators:crowdsourced, language_creators:expert-generated, multilinguality:monolingual, source_datasets:original, language:en, license:cc-by-4.0, size_categories:100K<n<1M, format:parquet, modality:audio, modality:text, library:datasets, library:dask, library:mlcroissant, library:polars, region:us
- **License:** Not specified
## Description
```text
Dataset Card for librispeech_asr
Dataset Summary
LibriSpeech is a corpus of approximately 1000 hours of 16kHz read English speech, prepared by Vassil Panayotov with the assistance of Daniel Povey. The data is derived from read audiobooks from the LibriVox project, and has been carefully segmented and aligned.
Supported Tasks and Leaderboards
automatic-speech-recognition, audio-speaker-identification: The dataset can be used to train a model for Automatic… See the full description on the dataset page: https://huggingface.co/datasets/openslr/librispeech_asr....
```
## File System Sample
- `.gitattributes`
- `README.md`
- `all/test.clean/0000.parquet`
- `all/test.other/0000.parquet`
- `all/train.clean.100/0000.parquet`
- `all/train.clean.100/0001.parquet`
- `all/train.clean.100/0002.parquet`
- `all/train.clean.100/0003.parquet`
- `all/train.clean.100/0004.parquet`
- `all/train.clean.100/0005.parquet`
- `all/train.clean.100/0006.parquet`
- `all/train.clean.100/0007.parquet`
- `all/train.clean.100/0008.parquet`
- `all/train.clean.100/0009.parquet`
- `all/train.clean.100/0010.parquet`
- ... and more.
## Data Structure
### Config: `clean`
#### Split: `test`
**Graceful Failure:**
```
Could not inspect the dataset's structure.
This is common for complex datasets that require executing remote code, which is disabled for stability.
Details: To support decoding audio data, please install 'torchcodec'.
```
|
huggan/wikiart
|
huggan
|
task_categories:image-classification, task_categories:text-to-image, task_categories:image-to-text, license:unknown, size_categories:10K<n<100K, format:parquet, modality:image, library:datasets, library:dask, library:mlcroissant, library:polars, region:us, art
|
community
|
# Dataset: `huggan/wikiart`
## Metadata
- **Author/Owner:** huggan
- **Downloads:** 6086
- **Likes:** 178
- **Tags:** task_categories:image-classification, task_categories:text-to-image, task_categories:image-to-text, license:unknown, size_categories:10K<n<100K, format:parquet, modality:image, library:datasets, library:dask, library:mlcroissant, library:polars, region:us, art
- **License:** Not specified
## Description
```text
Dataset Summary
Dataset containing 81,444 pieces of visual art from various artists, taken from WikiArt.org, along with class labels for each image:
"artist": 129 artist classes, including an "Unknown Artist" class
"genre": 11 genre classes, including an "Unknown Genre" class
"style": 27 style classes
On WikiArt.org, the description for the "Artworks by Genre" page reads:
A genre system divides artworks according to depicted themes and objects. A classical hierarchy of genres… See the full description on the dataset page: https://huggingface.co/datasets/huggan/wikiart....
```
## File System Sample
- `.gitattributes`
- `README.md`
- `data/train-00000-of-00072.parquet`
- `data/train-00001-of-00072.parquet`
- `data/train-00002-of-00072.parquet`
- `data/train-00003-of-00072.parquet`
- `data/train-00004-of-00072.parquet`
- `data/train-00005-of-00072.parquet`
- `data/train-00006-of-00072.parquet`
- `data/train-00007-of-00072.parquet`
- `data/train-00008-of-00072.parquet`
- `data/train-00009-of-00072.parquet`
- `data/train-00010-of-00072.parquet`
- `data/train-00011-of-00072.parquet`
- `data/train-00012-of-00072.parquet`
- ... and more.
## Data Structure
### Config: `default`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `image` | `JpegImageFile` |
| `artist` | `int` |
| `genre` | `int` |
| `style` | `int` |
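The `artist`, `genre`, and `style` columns are integer class ids. A minimal sketch of decoding them with a placeholder lookup table (the real id-to-name mappings live in the dataset's ClassLabel features; the entries below are hypothetical, for illustration only):

```python
# Sketch: map a WikiArt integer class id back to a readable name.
# The real id-to-name tables come from the dataset's ClassLabel features;
# the two entries below are hypothetical placeholders.

GENRE_NAMES = {0: "abstract_painting", 1: "cityscape"}  # hypothetical subset

def genre_name(genre_id, names=GENRE_NAMES):
    """Return the genre name for an id, defaulting to the unknown class."""
    return names.get(genre_id, "Unknown Genre")

print(genre_name(1))    # a known id
print(genre_name(99))   # falls back to the "Unknown Genre" class
```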
|
ibrahimhamamci/CT-RATE
|
ibrahimhamamci
|
license:cc-by-nc-sa-4.0, size_categories:100K<n<1M, format:csv, modality:tabular, modality:text, library:datasets, library:pandas, library:mlcroissant, library:polars, arxiv:2403.17834, region:us
|
community
|
# Dataset: `ibrahimhamamci/CT-RATE`
## Metadata
- **Author/Owner:** ibrahimhamamci
- **Downloads:** 82492
- **Likes:** 177
- **Tags:** license:cc-by-nc-sa-4.0, size_categories:100K<n<1M, format:csv, modality:tabular, modality:text, library:datasets, library:pandas, library:mlcroissant, library:polars, arxiv:2403.17834, region:us
- **License:** Not specified
## Description
```text
The CT-RATE team is organizing the VLM3D Challenge!
Challenge Finals and Presentations โ MICCAI 2025
Workshop โ ICCV 2025
Workshop papers will be published in the ICCV 2025 Proceedings!
Workshops
Challenge Tasks
CT-CHAT Demo
Developing Generalist Foundation Models from a Multimodal Dataset for 3D Computed Tomography
Welcome to the official page for our paper, which introduces CT-RATE, a pioneering dataset in 3D… See the full description on the dataset page: https://huggingface.co/datasets/ibrahimhamamci/CT-RATE....
```
## File System Sample
- `.gitattributes`
- `README.md`
- `dataset/README.md`
- `dataset/anatomy_segmentation_labels/train_label_summary.xlsx`
- `dataset/anatomy_segmentation_labels/valid_label_summary.xlsx`
- `dataset/data_correction_note.md`
- `dataset/metadata/Metadata_Attributes.xlsx`
- `dataset/metadata/no_chest_train.txt`
- `dataset/metadata/no_chest_valid.txt`
- `dataset/metadata/train_metadata.csv`
- `dataset/metadata/validation_metadata.csv`
- `dataset/multi_abnormality_labels/train_predicted_labels.csv`
- `dataset/multi_abnormality_labels/valid_predicted_labels.csv`
- `dataset/radiology_text_reports/train_reports.csv`
- `dataset/radiology_text_reports/validation_reports.csv`
- ... and more.
## Data Structure
**Graceful Failure:**
```
Could not inspect the dataset's structure.
This is common for complex datasets that require executing remote code, which is disabled for stability.
Details: Dataset 'ibrahimhamamci/CT-RATE' is a gated dataset on the Hub. Visit the dataset page at https://huggingface.co/datasets/ibrahimhamamci/CT-RATE to ask for access.
```
|
aps/super_glue
|
aps
|
task_categories:text-classification, task_categories:token-classification, task_categories:question-answering, task_ids:natural-language-inference, task_ids:word-sense-disambiguation, task_ids:coreference-resolution, task_ids:extractive-qa, annotations_creators:expert-generated, language_creators:other, multilinguality:monolingual, source_datasets:extended|other, language:en, license:other, size_categories:100K<n<1M, format:parquet, modality:tabular, modality:text, library:datasets, library:pandas, library:mlcroissant, library:polars, arxiv:1905.00537, region:us, superglue, NLU, natural language understanding
|
community
|
# Dataset: `aps/super_glue`
## Metadata
- **Author/Owner:** aps
- **Downloads:** 121833
- **Likes:** 176
- **Tags:** task_categories:text-classification, task_categories:token-classification, task_categories:question-answering, task_ids:natural-language-inference, task_ids:word-sense-disambiguation, task_ids:coreference-resolution, task_ids:extractive-qa, annotations_creators:expert-generated, language_creators:other, multilinguality:monolingual, source_datasets:extended|other, language:en, license:other, size_categories:100K<n<1M, format:parquet, modality:tabular, modality:text, library:datasets, library:pandas, library:mlcroissant, library:polars, arxiv:1905.00537, region:us, superglue, NLU, natural language understanding
- **License:** Not specified
## Description
```text
Dataset Card for "super_glue"
Dataset Summary
SuperGLUE (https://super.gluebenchmark.com/) is a new benchmark styled after
GLUE with a new set of more difficult language understanding tasks, improved
resources, and a new public leaderboard.
Supported Tasks and Leaderboards
More Information Needed
Languages
More Information Needed
Dataset Structure
Data Instances
axb
Size of downloaded dataset files: 0.03 MB
Size of… See the full description on the dataset page: https://huggingface.co/datasets/aps/super_glue....
```
## File System Sample
- `.gitattributes`
- `README.md`
- `axb/test-00000-of-00001.parquet`
- `axg/test-00000-of-00001.parquet`
- `boolq/test-00000-of-00001.parquet`
- `boolq/train-00000-of-00001.parquet`
- `boolq/validation-00000-of-00001.parquet`
- `cb/test-00000-of-00001.parquet`
- `cb/train-00000-of-00001.parquet`
- `cb/validation-00000-of-00001.parquet`
- `copa/test-00000-of-00001.parquet`
- `copa/train-00000-of-00001.parquet`
- `copa/validation-00000-of-00001.parquet`
- `multirc/test-00000-of-00001.parquet`
- `multirc/train-00000-of-00001.parquet`
- ... and more.
## Data Structure
### Config: `axb`
#### Split: `test`
| Column Name | Data Type |
|---|---|
| `sentence1` | `str` |
| `sentence2` | `str` |
| `idx` | `int` |
| `label` | `int` |
### Config: `axg`
#### Split: `test`
| Column Name | Data Type |
|---|---|
| `premise` | `str` |
| `hypothesis` | `str` |
| `idx` | `int` |
| `label` | `int` |
### Config: `boolq`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `question` | `str` |
| `passage` | `str` |
| `idx` | `int` |
| `label` | `int` |
#### Split: `validation`
| Column Name | Data Type |
|---|---|
| `question` | `str` |
| `passage` | `str` |
| `idx` | `int` |
| `label` | `int` |
#### Split: `test`
| Column Name | Data Type |
|---|---|
| `question` | `str` |
| `passage` | `str` |
| `idx` | `int` |
| `label` | `int` |
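Note that the `boolq` test split carries a `label` column even though the gold answers are hidden. A hedged sketch of decoding it, assuming the usual SuperGLUE convention of 0/1 for False/True and -1 as the placeholder on unlabeled test rows (confirm against the ClassLabel feature after loading):

```python
# Sketch: decode labels from the super_glue `boolq` config. The 0/1 ->
# False/True mapping and the -1 placeholder on test rows follow the usual
# SuperGLUE convention and are assumptions here, not verified facts.

def decode_boolq_label(label):
    """Map a BoolQ integer label to a boolean, or None if unlabeled."""
    return {0: False, 1: True}.get(label)

print(decode_boolq_label(1))   # labeled train/validation row
print(decode_boolq_label(-1))  # hidden test-set label -> None
```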
|
UCSC-VLAA/MedTrinity-25M
|
UCSC-VLAA
|
task_categories:question-answering, language:en, size_categories:10M<n<100M, format:parquet, modality:image, modality:text, library:datasets, library:dask, library:mlcroissant, library:polars, arxiv:2408.02900, region:us, medical
|
community
|
# Dataset: `UCSC-VLAA/MedTrinity-25M`
## Metadata
- **Author/Owner:** UCSC-VLAA
- **Downloads:** 1763
- **Likes:** 176
- **Tags:** task_categories:question-answering, language:en, size_categories:10M<n<100M, format:parquet, modality:image, modality:text, library:datasets, library:dask, library:mlcroissant, library:polars, arxiv:2408.02900, region:us, medical
- **License:** Not specified
## Description
```text
Tutorial of using Medtrinity-25M
MedTrinity-25M, a comprehensive, large-scale multimodal dataset for medicine, covering over 25 million images across 10 modalities, with multigranular annotations for more than 65 diseases. These enriched annotations encompass both global textual information, such as disease/lesion type, modality, region-specific descriptions, and inter-regional relationships, as well as detailed local annotations for regions of interest (ROIs), including bounding… See the full description on the dataset page: https://huggingface.co/datasets/UCSC-VLAA/MedTrinity-25M....
```
## File System Sample
- `.gitattributes`
- `25M_accessible/dataset_shard_1.tar.zst`
- `25M_accessible/dataset_shard_11.tar.zst`
- `25M_accessible/dataset_shard_12.tar.zst`
- `25M_accessible/dataset_shard_13.tar.zst`
- `25M_accessible/dataset_shard_14.tar.zst`
- `25M_accessible/dataset_shard_15.tar.zst`
- `25M_accessible/dataset_shard_16.tar.zst`
- `25M_accessible/dataset_shard_17.tar.zst`
- `25M_accessible/dataset_shard_18.tar.zst`
- `25M_accessible/dataset_shard_19.tar.zst`
- `25M_accessible/dataset_shard_2.tar.zst`
- `25M_accessible/dataset_shard_20.tar.zst`
- `25M_accessible/dataset_shard_21.tar.zst`
- `25M_accessible/dataset_shard_22.tar.zst`
- ... and more.
## Data Structure
**Graceful Failure:**
```
Could not inspect the dataset's structure.
This is common for complex datasets that require executing remote code, which is disabled for stability.
Details: Dataset 'UCSC-VLAA/MedTrinity-25M' is a gated dataset on the Hub. Visit the dataset page at https://huggingface.co/datasets/UCSC-VLAA/MedTrinity-25M to ask for access.
```
|
GAIR/LIMO
|
GAIR
|
language:en, license:apache-2.0, size_categories:n<1K, format:json, modality:text, library:datasets, library:pandas, library:mlcroissant, library:polars, arxiv:2502.03387, region:us
|
community
|
# Dataset: `GAIR/LIMO`
## Metadata
- **Author/Owner:** GAIR
- **Downloads:** 1304
- **Likes:** 175
- **Tags:** language:en, license:apache-2.0, size_categories:n<1K, format:json, modality:text, library:datasets, library:pandas, library:mlcroissant, library:polars, arxiv:2502.03387, region:us
- **License:** Not specified
## Description
```text
Dataset for LIMO: Less is More for Reasoning
Usage
from datasets import load_dataset
dataset = load_dataset("GAIR/LIMO", split="train")
Citation
If you find our dataset useful, please cite:
@misc{ye2025limoreasoning,
title={LIMO: Less is More for Reasoning},
author={Yixin Ye and Zhen Huang and Yang Xiao and Ethan Chern and Shijie Xia and Pengfei Liu},
year={2025},
eprint={2502.03387},
archivePrefix={arXiv},
primaryClass={cs.CL}… See the full description on the dataset page: https://huggingface.co/datasets/GAIR/LIMO....
```
## File System Sample
- `.gitattributes`
- `README.md`
- `limo.jsonl`
## Data Structure
### Config: `default`
#### Split: `train`
| Column Name | Data Type |
|---|---|
| `question` | `str` |
| `solution` | `str` |
| `answer` | `str` |
|