| url (stringlengths 61–61) | repository_url (stringclasses 1 value) | labels_url (stringlengths 75–75) | comments_url (stringlengths 70–70) | events_url (stringlengths 68–68) | html_url (stringlengths 49–51) | id (int64 1.89B–1.93B) | node_id (stringlengths 18–19) | number (int64 6.23k–6.28k) | title (stringlengths 16–140) | user (dict) | labels (list) | state (stringclasses 2 values) | locked (bool, 1 class) | assignee (dict) | assignees (list) | milestone (dict) | comments (list) | created_at (timestamp[s]) | updated_at (timestamp[s]) | closed_at (timestamp[s]) | author_association (stringclasses 3 values) | active_lock_reason (null) | body (stringlengths 10–6.69k, ⌀) | reactions (dict) | timeline_url (stringlengths 70–70) | performed_via_github_app (null) | state_reason (stringclasses 1 value) | draft (bool, 2 classes) | pull_request (dict) | is_pull_request (bool, 2 classes) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/6284
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6284/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6284/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6284/events
|
https://github.com/huggingface/datasets/issues/6284
| 1,929,551,712
|
I_kwDODunzps5zAp9g
| 6,284
|
Add Belebele multiple-choice machine reading comprehension (MRC) dataset
|
{
"login": "rajveer43",
"id": 64583161,
"node_id": "MDQ6VXNlcjY0NTgzMTYx",
"avatar_url": "https://avatars.githubusercontent.com/u/64583161?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rajveer43",
"html_url": "https://github.com/rajveer43",
"followers_url": "https://api.github.com/users/rajveer43/followers",
"following_url": "https://api.github.com/users/rajveer43/following{/other_user}",
"gists_url": "https://api.github.com/users/rajveer43/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rajveer43/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rajveer43/subscriptions",
"organizations_url": "https://api.github.com/users/rajveer43/orgs",
"repos_url": "https://api.github.com/users/rajveer43/repos",
"events_url": "https://api.github.com/users/rajveer43/events{/privacy}",
"received_events_url": "https://api.github.com/users/rajveer43/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null |
[] | 2023-10-06T06:58:03
| 2023-10-06T06:58:03
| null |
NONE
| null |
### Feature request
Belebele is a multiple-choice machine reading comprehension (MRC) dataset spanning 122 language variants. This dataset enables the evaluation of mono- and multilingual models in high-, medium-, and low-resource languages. Each question has four multiple-choice answers and is linked to a short passage from the [FLORES-200](https://github.com/facebookresearch/flores/tree/main/flores200) dataset. The human annotation procedure was carefully designed to create questions that discriminate between different levels of generalizable language comprehension, and it was reinforced by extensive quality checks. While all questions directly relate to the passage, the English dataset on its own proves difficult enough to challenge state-of-the-art language models. Being fully parallel, this dataset enables direct comparison of model performance across all languages. Belebele opens up new avenues for evaluating and analyzing the multilingual abilities of language models and NLP systems.
Please refer to the paper for more details: [The Belebele Benchmark: a Parallel Reading Comprehension Dataset in 122 Language Variants](https://arxiv.org/abs/2308.16884).
## Composition
- 900 questions per language variant
- 488 distinct passages, each with 1–2 associated questions.
- For each question, there are 4 multiple-choice answers, exactly 1 of which is correct.
- 122 languages/language variants (including English).
- 900 x 122 = 109,800 total questions (see the loading sketch below).
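If the dataset is added, usage could look like the sketch below. This is a sketch only: the Hub repo id `facebook/belebele`, the per-language config names (FLORES-200 codes), and the field names are assumptions based on the official release, not part of this request.
```
from datasets import load_dataset

# Sketch: assumes a Hub repo "facebook/belebele" with one config per
# FLORES-200 language code (e.g. "eng_Latn"); ids and fields may differ.
belebele = load_dataset("facebook/belebele", "eng_Latn", split="test")

row = belebele[0]
print(row["flores_passage"][:80])   # short FLORES-200 passage (assumed field)
print(row["question"])              # one of ~900 questions per language (assumed field)
print(row["mc_answer1"], row["correct_answer_num"])  # 4 answers, 1 correct (assumed fields)
```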
### Motivation
Official repo: https://github.com/facebookresearch/belebele
### Your contribution
-
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/6284/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/6284/timeline
| null | null | null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/6283
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6283/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6283/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6283/events
|
https://github.com/huggingface/datasets/pull/6283
| 1,928,552,257
|
PR_kwDODunzps5cBlKq
| 6,283
|
Fix `array.values` handling in array cast/embed
|
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006278 / 0.011353 (-0.005075) | 0.003692 / 0.011008 (-0.007316) | 0.080464 / 0.038508 (0.041956) | 0.064751 / 0.023109 (0.041642) | 0.318586 / 0.275898 (0.042688) | 0.351435 / 0.323480 (0.027955) | 0.005044 / 0.007986 (-0.002942) | 0.003034 / 0.004328 (-0.001295) | 0.063710 / 0.004250 (0.059460) | 0.050607 / 0.037052 (0.013555) | 0.318491 / 0.258489 (0.060001) | 0.365688 / 0.293841 (0.071847) | 0.027818 / 0.128546 (-0.100729) | 0.008119 / 0.075646 (-0.067527) | 0.262141 / 0.419271 (-0.157131) | 0.044710 / 0.043533 (0.001177) | 0.318875 / 0.255139 (0.063736) | 0.344559 / 0.283200 (0.061360) | 0.022861 / 0.141683 (-0.118822) | 1.452402 / 1.452155 (0.000247) | 1.502340 / 1.492716 (0.009624) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.219355 / 0.018006 (0.201349) | 0.433311 / 0.000490 (0.432822) | 0.006545 / 0.000200 (0.006345) | 0.000078 / 0.000054 (0.000024) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024538 / 0.037411 (-0.012874) | 0.073346 / 0.014526 (0.058821) | 0.083824 / 0.176557 (-0.092733) | 0.145176 / 0.737135 (-0.591959) | 0.085941 / 0.296338 (-0.210397) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.395153 / 0.215209 (0.179944) | 3.944734 / 2.077655 (1.867080) | 1.883910 / 1.504120 (0.379790) | 1.690560 / 1.541195 (0.149365) | 1.775180 / 1.468490 
(0.306690) | 0.506873 / 4.584777 (-4.077904) | 3.111095 / 3.745712 (-0.634617) | 2.915358 / 5.269862 (-2.354504) | 1.892886 / 4.565676 (-2.672791) | 0.058690 / 0.424275 (-0.365585) | 0.006550 / 0.007607 (-0.001057) | 0.463372 / 0.226044 (0.237328) | 4.640511 / 2.268929 (2.371583) | 2.321051 / 55.444624 (-53.123573) | 1.986330 / 6.876477 (-4.890147) | 2.160046 / 2.142072 (0.017973) | 0.597833 / 4.805227 (-4.207394) | 0.127946 / 6.500664 (-6.372718) | 0.059709 / 0.075469 (-0.015760) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.278966 / 1.841788 (-0.562822) | 17.863102 / 8.074308 (9.788794) | 13.896057 / 10.191392 (3.704665) | 0.147512 / 0.680424 (-0.532912) | 0.016771 / 0.534201 (-0.517430) | 0.335260 / 0.579283 (-0.244024) | 0.383019 / 0.434364 (-0.051345) | 0.384821 / 0.540337 (-0.155516) | 0.550143 / 1.386936 (-0.836793) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006234 / 0.011353 (-0.005118) | 0.003695 / 0.011008 (-0.007313) | 0.062654 / 0.038508 (0.024146) | 0.059397 / 0.023109 (0.036287) | 0.458375 / 0.275898 (0.182477) | 0.488951 / 0.323480 (0.165471) | 0.004971 / 0.007986 (-0.003014) | 0.002914 / 0.004328 (-0.001415) | 0.061184 / 0.004250 (0.056934) | 0.051246 / 0.037052 (0.014194) | 0.458035 / 0.258489 (0.199546) | 0.490838 / 0.293841 (0.196997) | 0.028746 / 0.128546 (-0.099800) | 0.008167 / 0.075646 (-0.067480) | 0.068006 / 0.419271 (-0.351265) | 0.041809 / 0.043533 (-0.001724) | 0.453896 / 0.255139 (0.198757) | 0.477583 / 0.283200 (0.194383) | 0.020906 / 0.141683 (-0.120777) | 1.443275 / 1.452155 (-0.008879) | 1.493431 / 1.492716 (0.000714) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.219903 / 0.018006 (0.201896) | 0.410275 / 0.000490 (0.409785) | 0.003919 / 0.000200 (0.003719) | 0.000078 / 0.000054 (0.000024) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027850 / 0.037411 (-0.009561) | 0.080444 / 0.014526 (0.065918) | 0.089943 / 0.176557 (-0.086614) | 0.145810 / 0.737135 (-0.591326) | 0.090908 / 0.296338 (-0.205430) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.464386 / 0.215209 (0.249177) | 4.633787 / 2.077655 (2.556133) | 2.581658 / 1.504120 (1.077538) | 2.408486 / 1.541195 (0.867291) | 2.460491 / 1.468490 (0.992001) | 0.507512 / 4.584777 (-4.077265) | 3.190363 / 3.745712 (-0.555349) | 2.895581 / 5.269862 (-2.374280) | 1.871506 / 4.565676 (-2.694171) | 0.058469 / 0.424275 (-0.365806) | 0.006526 / 0.007607 (-0.001082) | 0.537641 / 0.226044 (0.311596) | 5.396660 / 2.268929 (3.127731) | 3.027028 / 55.444624 (-52.417596) | 2.703771 / 6.876477 (-4.172705) | 2.865576 / 2.142072 (0.723503) | 0.600103 / 4.805227 (-4.205124) | 0.127109 / 6.500664 (-6.373555) | 0.060985 / 0.075469 (-0.014484) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.365030 / 1.841788 (-0.476758) | 17.988218 / 8.074308 (9.913909) | 14.900796 / 10.191392 (4.709404) | 0.158211 / 0.680424 (-0.522213) | 0.018291 / 0.534201 (-0.515910) | 0.337437 / 0.579283 (-0.241846) | 0.383710 / 0.434364 (-0.050654) | 0.392341 / 0.540337 (-0.147997) | 0.561584 / 1.386936 (-0.825352) |\n\n</details>\n</details>\n\n\n",
"CI failures are unrelated"
] | 2023-10-05T15:24:05
| 2023-10-05T15:55:10
| null |
CONTRIBUTOR
| null |
Fix #6280
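The PR body above is terse; as a rough sketch of the kind of `array.values` pitfall the title suggests (this reading is an assumption, not taken from the actual diff): on a sliced fixed-size list array, `.values` still spans the entire un-sliced child buffer, so cast/embed code has to account for the slice offset itself.
```
import pyarrow as pa

# Illustration only; the real fix lives in `datasets`' cast/embed helpers.
arr = pa.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]],
               type=pa.list_(pa.float64(), 2))   # fixed_size_list<item: double>[2]
sliced = arr.slice(1)                            # logically [[3.0, 4.0], [5.0, 6.0]]

print(sliced.values)  # full child buffer [1, 2, 3, 4, 5, 6] -- slice offset ignored
size = sliced.type.list_size
trimmed = sliced.values.slice(sliced.offset * size, len(sliced) * size)
print(trimmed)        # [3, 4, 5, 6] -- manually offset-corrected
```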
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/6283/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/6283/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6283",
"html_url": "https://github.com/huggingface/datasets/pull/6283",
"diff_url": "https://github.com/huggingface/datasets/pull/6283.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6283.patch",
"merged_at": null
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/6282
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6282/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6282/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6282/events
|
https://github.com/huggingface/datasets/pull/6282
| 1,928,473,630
|
PR_kwDODunzps5cBT5p
| 6,282
|
Drop data_files duplicates
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006934 / 0.011353 (-0.004419) | 0.004097 / 0.011008 (-0.006911) | 0.084662 / 0.038508 (0.046154) | 0.077106 / 0.023109 (0.053996) | 0.355035 / 0.275898 (0.079137) | 0.381466 / 0.323480 (0.057986) | 0.004182 / 0.007986 (-0.003803) | 0.003411 / 0.004328 (-0.000917) | 0.065279 / 0.004250 (0.061029) | 0.058192 / 0.037052 (0.021140) | 0.372363 / 0.258489 (0.113874) | 0.401621 / 0.293841 (0.107780) | 0.031719 / 0.128546 (-0.096827) | 0.008753 / 0.075646 (-0.066893) | 0.287125 / 0.419271 (-0.132146) | 0.052943 / 0.043533 (0.009410) | 0.349680 / 0.255139 (0.094541) | 0.364004 / 0.283200 (0.080805) | 0.026705 / 0.141683 (-0.114977) | 1.472708 / 1.452155 (0.020553) | 1.556559 / 1.492716 (0.063842) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.224868 / 0.018006 (0.206862) | 0.458793 / 0.000490 (0.458304) | 0.009434 / 0.000200 (0.009234) | 0.000356 / 0.000054 (0.000301) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029670 / 0.037411 (-0.007741) | 0.086517 / 0.014526 (0.071991) | 0.097342 / 0.176557 (-0.079215) | 0.153722 / 0.737135 (-0.583413) | 0.098465 / 0.296338 (-0.197874) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.400739 / 0.215209 (0.185530) | 3.998087 / 2.077655 (1.920432) | 2.025772 / 1.504120 (0.521652) | 1.858679 / 1.541195 (0.317485) | 1.951573 / 1.468490 
(0.483083) | 0.483028 / 4.584777 (-4.101749) | 3.554085 / 3.745712 (-0.191627) | 3.306983 / 5.269862 (-1.962879) | 2.087043 / 4.565676 (-2.478633) | 0.057127 / 0.424275 (-0.367148) | 0.007252 / 0.007607 (-0.000355) | 0.480180 / 0.226044 (0.254136) | 4.787183 / 2.268929 (2.518255) | 2.489667 / 55.444624 (-52.954957) | 2.150774 / 6.876477 (-4.725703) | 2.403197 / 2.142072 (0.261124) | 0.581843 / 4.805227 (-4.223384) | 0.134915 / 6.500664 (-6.365749) | 0.061283 / 0.075469 (-0.014186) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.285700 / 1.841788 (-0.556088) | 19.474093 / 8.074308 (11.399785) | 14.336349 / 10.191392 (4.144957) | 0.170932 / 0.680424 (-0.509492) | 0.018348 / 0.534201 (-0.515853) | 0.391909 / 0.579283 (-0.187374) | 0.414706 / 0.434364 (-0.019658) | 0.458156 / 0.540337 (-0.082182) | 0.656303 / 1.386936 (-0.730633) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006738 / 0.011353 (-0.004615) | 0.004029 / 0.011008 (-0.006979) | 0.064411 / 0.038508 (0.025903) | 0.078225 / 0.023109 (0.055116) | 0.408468 / 0.275898 (0.132569) | 0.445585 / 0.323480 (0.122105) | 0.005490 / 0.007986 (-0.002495) | 0.003419 / 0.004328 (-0.000910) | 0.063966 / 0.004250 (0.059715) | 0.056779 / 0.037052 (0.019727) | 0.415258 / 0.258489 (0.156769) | 0.461258 / 0.293841 (0.167418) | 0.032051 / 0.128546 (-0.096495) | 0.008471 / 0.075646 (-0.067176) | 0.071004 / 0.419271 (-0.348267) | 0.049068 / 0.043533 (0.005536) | 0.409575 / 0.255139 (0.154436) | 0.430748 / 0.283200 (0.147548) | 0.023784 / 0.141683 (-0.117899) | 1.507894 / 1.452155 (0.055739) | 1.586575 / 1.492716 (0.093859) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.228574 / 0.018006 (0.210568) | 0.451389 / 0.000490 (0.450900) | 0.006312 / 0.000200 (0.006112) | 0.000100 / 0.000054 (0.000045) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033391 / 0.037411 (-0.004020) | 0.096816 / 0.014526 (0.082290) | 0.107269 / 0.176557 (-0.069288) | 0.159749 / 0.737135 (-0.577387) | 0.108240 / 0.296338 (-0.188098) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.437643 / 0.215209 (0.222434) | 4.378173 / 2.077655 (2.300518) | 2.367218 / 1.504120 (0.863098) | 2.229493 / 1.541195 (0.688298) | 2.329849 / 1.468490 (0.861359) | 0.494985 / 4.584777 (-4.089792) | 3.578540 / 3.745712 (-0.167172) | 3.338220 / 5.269862 (-1.931642) | 2.092482 / 4.565676 (-2.473194) | 0.058495 / 0.424275 (-0.365780) | 0.007396 / 0.007607 (-0.000211) | 0.511001 / 0.226044 (0.284957) | 5.113497 / 2.268929 (2.844568) | 2.806215 / 55.444624 (-52.638409) | 2.485428 / 6.876477 (-4.391048) | 2.764907 / 2.142072 (0.622835) | 0.598824 / 4.805227 (-4.206404) | 0.134988 / 6.500664 (-6.365676) | 0.061752 / 0.075469 (-0.013717) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.365583 / 1.841788 (-0.476205) | 20.270297 / 8.074308 (12.195989) | 15.331673 / 10.191392 (5.140281) | 0.166152 / 0.680424 (-0.514272) | 0.020678 / 0.534201 (-0.513523) | 0.394821 / 0.579283 (-0.184462) | 0.420493 / 0.434364 (-0.013871) | 0.468551 / 0.540337 (-0.071787) | 0.654903 / 1.386936 (-0.732033) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007803 / 0.011353 (-0.003550) | 0.004664 / 0.011008 (-0.006344) | 0.099908 / 0.038508 (0.061400) | 0.090674 / 0.023109 (0.067565) | 0.406009 / 0.275898 (0.130111) | 0.465098 / 0.323480 (0.141618) | 0.004667 / 0.007986 (-0.003319) | 0.003880 / 0.004328 (-0.000449) | 0.076552 / 0.004250 (0.072301) | 0.066345 / 0.037052 (0.029292) | 0.419195 / 0.258489 (0.160706) | 0.478581 / 0.293841 (0.184741) | 0.036967 / 0.128546 (-0.091579) | 0.010000 / 0.075646 (-0.065647) | 0.347126 / 0.419271 (-0.072145) | 0.062265 / 0.043533 (0.018733) | 0.406653 / 0.255139 (0.151514) | 0.439044 / 0.283200 (0.155845) | 0.031289 / 0.141683 (-0.110394) | 1.797674 / 1.452155 (0.345520) | 1.835183 / 1.492716 (0.342467) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.268194 / 0.018006 (0.250187) | 0.493614 / 0.000490 (0.493124) | 0.015636 / 0.000200 (0.015436) | 0.000417 / 0.000054 (0.000362) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034188 / 0.037411 (-0.003223) | 0.099127 / 0.014526 (0.084601) | 0.113949 / 0.176557 (-0.062607) | 0.181209 / 0.737135 (-0.555926) | 0.114943 / 0.296338 (-0.181395) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.455767 / 0.215209 (0.240558) | 4.542947 / 2.077655 (2.465293) | 2.214605 / 1.504120 (0.710485) | 2.015163 / 1.541195 (0.473969) | 2.084945 / 1.468490 
(0.616455) | 0.583827 / 4.584777 (-4.000950) | 4.187009 / 3.745712 (0.441297) | 3.920841 / 5.269862 (-1.349020) | 2.447260 / 4.565676 (-2.118417) | 0.069139 / 0.424275 (-0.355137) | 0.008734 / 0.007607 (0.001127) | 0.544673 / 0.226044 (0.318629) | 5.445094 / 2.268929 (3.176165) | 2.788284 / 55.444624 (-52.656340) | 2.395863 / 6.876477 (-4.480614) | 2.622632 / 2.142072 (0.480560) | 0.703931 / 4.805227 (-4.101297) | 0.160502 / 6.500664 (-6.340162) | 0.073734 / 0.075469 (-0.001735) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.498992 / 1.841788 (-0.342795) | 22.761476 / 8.074308 (14.687168) | 17.123919 / 10.191392 (6.932527) | 0.170272 / 0.680424 (-0.510151) | 0.021307 / 0.534201 (-0.512894) | 0.467548 / 0.579283 (-0.111735) | 0.480777 / 0.434364 (0.046413) | 0.542168 / 0.540337 (0.001830) | 0.771092 / 1.386936 (-0.615844) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007923 / 0.011353 (-0.003430) | 0.004664 / 0.011008 (-0.006344) | 0.077795 / 0.038508 (0.039286) | 0.090293 / 0.023109 (0.067184) | 0.494682 / 0.275898 (0.218784) | 0.539973 / 0.323480 (0.216494) | 0.006302 / 0.007986 (-0.001684) | 0.003794 / 0.004328 (-0.000535) | 0.076567 / 0.004250 (0.072317) | 0.067141 / 0.037052 (0.030089) | 0.501279 / 0.258489 (0.242790) | 0.555670 / 0.293841 (0.261829) | 0.037773 / 0.128546 (-0.090773) | 0.009930 / 0.075646 (-0.065716) | 0.084839 / 0.419271 (-0.334433) | 0.056876 / 0.043533 (0.013344) | 0.499329 / 0.255139 (0.244190) | 0.518449 / 0.283200 (0.235249) | 0.026041 / 0.141683 (-0.115642) | 1.787259 / 1.452155 (0.335105) | 1.853505 / 1.492716 (0.360788) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.238413 / 0.018006 (0.220407) | 0.488889 / 0.000490 (0.488399) | 0.007476 / 0.000200 (0.007277) | 0.000141 / 0.000054 (0.000087) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.038701 / 0.037411 (0.001290) | 0.115391 / 0.014526 (0.100865) | 0.125553 / 0.176557 (-0.051004) | 0.190267 / 0.737135 (-0.546868) | 0.126401 / 0.296338 (-0.169937) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.509270 / 0.215209 (0.294061) | 5.087631 / 2.077655 (3.009976) | 2.745863 / 1.504120 (1.241743) | 2.560259 / 1.541195 (1.019064) | 2.653124 / 1.468490 (1.184634) | 0.582118 / 4.584777 (-4.002659) | 4.181144 / 3.745712 (0.435431) | 3.871179 / 5.269862 (-1.398683) | 2.459849 / 4.565676 (-2.105827) | 0.068844 / 0.424275 (-0.355431) | 0.008672 / 0.007607 (0.001065) | 0.604898 / 0.226044 (0.378854) | 6.073263 / 2.268929 (3.804334) | 3.366638 / 55.444624 (-52.077986) | 2.937261 / 6.876477 (-3.939215) | 3.181173 / 2.142072 (1.039100) | 0.700478 / 4.805227 (-4.104750) | 0.158361 / 6.500664 (-6.342303) | 0.072860 / 0.075469 (-0.002609) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.621363 / 1.841788 (-0.220425) | 23.614315 / 8.074308 (15.540007) | 17.607213 / 10.191392 (7.415821) | 0.198031 / 0.680424 (-0.482393) | 0.023859 / 0.534201 (-0.510342) | 0.474674 / 0.579283 (-0.104609) | 0.491173 / 0.434364 (0.056809) | 0.581995 / 0.540337 (0.041658) | 0.792168 / 1.386936 (-0.594768) |\n\n</details>\n</details>\n\n\n"
] | 2023-10-05T14:43:08
| 2023-10-05T15:28:26
| null |
MEMBER
| null |
I just added `drop_duplicates=True` to `.from_patterns`. I used a dict to deduplicate while preserving order.
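A minimal sketch of that dict trick (the helper name is illustrative, not the actual `datasets` code): since Python 3.7, dicts preserve insertion order, so `dict.fromkeys` doubles as an ordered set.
```
def drop_duplicates(data_files):
    # dict keys keep insertion order (Python 3.7+), so duplicates are dropped
    # while the first occurrence of each path stays in place.
    return list(dict.fromkeys(data_files))

assert drop_duplicates(["a.csv", "b.csv", "a.csv"]) == ["a.csv", "b.csv"]
```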
close https://github.com/huggingface/datasets/issues/6259
close https://github.com/huggingface/datasets/issues/6272
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/6282/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/6282/timeline
| null | null | true
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6282",
"html_url": "https://github.com/huggingface/datasets/pull/6282",
"diff_url": "https://github.com/huggingface/datasets/pull/6282.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6282.patch",
"merged_at": null
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/6281
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6281/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6281/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6281/events
|
https://github.com/huggingface/datasets/pull/6281
| 1,928,456,959
|
PR_kwDODunzps5cBQPd
| 6,281
|
Improve documentation of dataset.from_generator
|
{
"login": "hartmans",
"id": 53510,
"node_id": "MDQ6VXNlcjUzNTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/53510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hartmans",
"html_url": "https://github.com/hartmans",
"followers_url": "https://api.github.com/users/hartmans/followers",
"following_url": "https://api.github.com/users/hartmans/following{/other_user}",
"gists_url": "https://api.github.com/users/hartmans/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hartmans/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hartmans/subscriptions",
"organizations_url": "https://api.github.com/users/hartmans/orgs",
"repos_url": "https://api.github.com/users/hartmans/repos",
"events_url": "https://api.github.com/users/hartmans/events{/privacy}",
"received_events_url": "https://api.github.com/users/hartmans/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"I have looked at the doc failures, and I do not think that my change caused the doc build failure, but I'm not 100% sure about that.\r\nI have high confidence that the integration test failures are not something I introduced:-)",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008557 / 0.011353 (-0.002796) | 0.005224 / 0.011008 (-0.005784) | 0.109402 / 0.038508 (0.070893) | 0.075008 / 0.023109 (0.051899) | 0.388910 / 0.275898 (0.113012) | 0.425481 / 0.323480 (0.102002) | 0.005046 / 0.007986 (-0.002939) | 0.004166 / 0.004328 (-0.000162) | 0.079890 / 0.004250 (0.075639) | 0.061992 / 0.037052 (0.024940) | 0.409933 / 0.258489 (0.151444) | 0.444096 / 0.293841 (0.150255) | 0.043958 / 0.128546 (-0.084588) | 0.013655 / 0.075646 (-0.061991) | 0.402620 / 0.419271 (-0.016651) | 0.062784 / 0.043533 (0.019251) | 0.399653 / 0.255139 (0.144514) | 0.432926 / 0.283200 (0.149727) | 0.034631 / 0.141683 (-0.107052) | 1.801450 / 1.452155 (0.349296) | 1.965007 / 1.492716 (0.472290) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.305744 / 0.018006 (0.287738) | 0.590825 / 0.000490 (0.590335) | 0.014561 / 0.000200 (0.014361) | 0.000430 / 0.000054 (0.000375) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030449 / 0.037411 (-0.006962) | 0.091753 / 0.014526 (0.077227) | 0.106259 / 0.176557 (-0.070298) | 0.174599 / 0.737135 (-0.562537) | 0.107069 / 0.296338 (-0.189269) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.607544 / 0.215209 (0.392335) | 6.182592 / 2.077655 (4.104937) | 2.699782 / 1.504120 (1.195663) | 2.386915 / 1.541195 (0.845720) | 2.441763 / 1.468490 
(0.973273) | 0.811360 / 4.584777 (-3.773417) | 5.253799 / 3.745712 (1.508087) | 4.762054 / 5.269862 (-0.507807) | 3.045161 / 4.565676 (-1.520515) | 0.095983 / 0.424275 (-0.328292) | 0.008653 / 0.007607 (0.001046) | 0.714218 / 0.226044 (0.488174) | 7.279325 / 2.268929 (5.010397) | 3.356107 / 55.444624 (-52.088517) | 2.765867 / 6.876477 (-4.110610) | 2.997756 / 2.142072 (0.855684) | 1.008740 / 4.805227 (-3.796487) | 0.201462 / 6.500664 (-6.299202) | 0.075780 / 0.075469 (0.000311) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.677034 / 1.841788 (-0.164754) | 23.546919 / 8.074308 (15.472610) | 21.576985 / 10.191392 (11.385593) | 0.239253 / 0.680424 (-0.441171) | 0.028740 / 0.534201 (-0.505460) | 0.468519 / 0.579283 (-0.110765) | 0.593935 / 0.434364 (0.159571) | 0.536830 / 0.540337 (-0.003507) | 0.779925 / 1.386936 (-0.607011) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009582 / 0.011353 (-0.001771) | 0.004971 / 0.011008 (-0.006037) | 0.081304 / 0.038508 (0.042796) | 0.077588 / 0.023109 (0.054478) | 0.486610 / 0.275898 (0.210712) | 0.580228 / 0.323480 (0.256748) | 0.006707 / 0.007986 (-0.001279) | 0.004325 / 0.004328 (-0.000004) | 0.086170 / 0.004250 (0.081920) | 0.060591 / 0.037052 (0.023539) | 0.501723 / 0.258489 (0.243234) | 0.548633 / 0.293841 (0.254793) | 0.050306 / 0.128546 (-0.078240) | 0.017458 / 0.075646 (-0.058188) | 0.093295 / 0.419271 (-0.325977) | 0.064588 / 0.043533 (0.021056) | 0.519395 / 0.255139 (0.264256) | 0.526021 / 0.283200 (0.242821) | 0.035795 / 0.141683 (-0.105888) | 1.792927 / 1.452155 (0.340772) | 1.956499 / 1.492716 (0.463783) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.296249 / 0.018006 (0.278243) | 0.594482 / 0.000490 (0.593992) | 0.007318 / 0.000200 (0.007118) | 0.000182 / 0.000054 (0.000128) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.036110 / 0.037411 (-0.001301) | 0.107924 / 0.014526 (0.093399) | 0.119975 / 0.176557 (-0.056582) | 0.177499 / 0.737135 (-0.559636) | 0.123299 / 0.296338 (-0.173039) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.632994 / 0.215209 (0.417785) | 6.481663 / 2.077655 (4.404008) | 3.231259 / 1.504120 (1.727139) | 2.768298 / 1.541195 (1.227103) | 2.694543 / 1.468490 (1.226053) | 0.837384 / 4.584777 (-3.747393) | 5.405278 / 3.745712 (1.659566) | 4.639424 / 5.269862 (-0.630437) | 2.944251 / 4.565676 (-1.621426) | 0.094978 / 0.424275 (-0.329297) | 0.008716 / 0.007607 (0.001108) | 0.795820 / 0.226044 (0.569776) | 8.514233 / 2.268929 (6.245304) | 3.800463 / 55.444624 (-51.644161) | 3.000005 / 6.876477 (-3.876472) | 3.298853 / 2.142072 (1.156781) | 0.994112 / 4.805227 (-3.811115) | 0.209435 / 6.500664 (-6.291229) | 0.075610 / 0.075469 (0.000141) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.681127 / 1.841788 (-0.160661) | 23.874465 / 8.074308 (15.800156) | 21.638567 / 10.191392 (11.447175) | 0.233303 / 0.680424 (-0.447121) | 0.032504 / 0.534201 (-0.501697) | 0.460462 / 0.579283 (-0.118821) | 0.560043 / 0.434364 (0.125679) | 0.555059 / 0.540337 (0.014721) | 0.831444 / 1.386936 (-0.555492) |\n\n</details>\n</details>\n\n\n"
] | 2023-10-05T14:34:49
| 2023-10-05T19:09:07
| 2023-10-05T18:57:41
|
CONTRIBUTOR
| null |
Improve documentation to clarify sharding behavior (#6270)
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/6281/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/6281/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6281",
"html_url": "https://github.com/huggingface/datasets/pull/6281",
"diff_url": "https://github.com/huggingface/datasets/pull/6281.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6281.patch",
"merged_at": "2023-10-05T18:57:41"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/6280
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6280/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6280/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6280/events
|
https://github.com/huggingface/datasets/issues/6280
| 1,928,215,278
|
I_kwDODunzps5y7jru
| 6,280
|
Couldn't cast array of type fixed_size_list to Sequence(Value(float64))
|
{
"login": "jmif",
"id": 1000442,
"node_id": "MDQ6VXNlcjEwMDA0NDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1000442?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmif",
"html_url": "https://github.com/jmif",
"followers_url": "https://api.github.com/users/jmif/followers",
"following_url": "https://api.github.com/users/jmif/following{/other_user}",
"gists_url": "https://api.github.com/users/jmif/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmif/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmif/subscriptions",
"organizations_url": "https://api.github.com/users/jmif/orgs",
"repos_url": "https://api.github.com/users/jmif/repos",
"events_url": "https://api.github.com/users/jmif/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmif/received_events",
"type": "User",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[
"Thanks for reporting! I've opened a PR with a fix.",
"Thanks for the quick response @mariosasko! I just installed your branch via `poetry add 'git+https://github.com/huggingface/datasets#fix-array_values'` and I can confirm it works on the example provided.\r\n\r\nFollow up question for you, should `None`s be supported in these types of features as they are in others?\r\n\r\nFor example, the following script:\r\n\r\n```\r\nfrom datasets import Features, Value, Sequence, ClassLabel, Dataset\r\n\r\ndataset_features = Features({\r\n 'text': Value('string'),\r\n 'embedding': Sequence(Value('double'), length=2),\r\n 'categories': Sequence(ClassLabel(names=sorted([\r\n 'one',\r\n 'two',\r\n 'three'\r\n ]))),\r\n})\r\n\r\ndataset = Dataset.from_dict(\r\n {\r\n 'text': ['A'] * 10000,\r\n \"embedding\": [None] * 10000, # THIS LINE CHANGED\r\n 'categories': [[0]] * 10000,\r\n },\r\n features=dataset_features\r\n)\r\n\r\ndef test_mapper(r):\r\n r['text'] = list(map(lambda t: t + ' b', r['text']))\r\n return r\r\n\r\n\r\ndataset = dataset.map(test_mapper, batched=True, batch_size=10, features=dataset_features, num_proc=2)\r\n```\r\n\r\nfails with\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/home/jmif/.virtualenvs/llm-training/lib/python3.10/site-packages/multiprocess/pool.py\", line 125, in worker\r\n result = (True, func(*args, **kwds))\r\n File \"/home/jmif/.virtualenvs/llm-training/lib/python3.10/site-packages/datasets/utils/py_utils.py\", line 1354, in _write_generator_to_queue\r\n for i, result in enumerate(func(**kwargs)):\r\n File \"/home/jmif/.virtualenvs/llm-training/lib/python3.10/site-packages/datasets/arrow_dataset.py\", line 3493, in _map_single\r\n writer.write_batch(batch)\r\n File \"/home/jmif/.virtualenvs/llm-training/lib/python3.10/site-packages/datasets/arrow_writer.py\", line 549, in write_batch\r\n array = cast_array_to_feature(col_values, col_type) if col_type is not None else col_values\r\n File \"/home/jmif/.virtualenvs/llm-training/lib/python3.10/site-packages/datasets/table.py\", line 1831, in wrapper\r\n return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])\r\n File \"/home/jmif/.virtualenvs/llm-training/lib/python3.10/site-packages/datasets/table.py\", line 1831, in <listcomp>\r\n return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])\r\n File \"/home/jmif/.virtualenvs/llm-training/lib/python3.10/site-packages/datasets/table.py\", line 2160, in cast_array_to_feature\r\n raise TypeError(f\"Couldn't cast array of type\\n{array.type}\\nto\\n{feature}\")\r\nTypeError: Couldn't cast array of type\r\nfixed_size_list<item: double>[2]\r\nto\r\nSequence(feature=Value(dtype='float64', id=None), length=2, id=None)\r\n```\r\n\r\nIdeally we can have empty embedding columns as well!"
] | 2023-10-05T12:48:31
| 2023-10-05T16:55:46
| null |
NONE
| null |
### Describe the bug
I have a dataset with an embedding column; when I try to map that dataset, I get the following exception:
```
Traceback (most recent call last):
File "/Users/jmif/.virtualenvs/llm-training/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 3189, in map
for rank, done, content in iflatmap_unordered(
File "/Users/jmif/.virtualenvs/llm-training/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 1387, in iflatmap_unordered
[async_result.get(timeout=0.05) for async_result in async_results]
File "/Users/jmif/.virtualenvs/llm-training/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 1387, in <listcomp>
[async_result.get(timeout=0.05) for async_result in async_results]
File "/Users/jmif/.virtualenvs/llm-training/lib/python3.10/site-packages/multiprocess/pool.py", line 774, in get
raise self._value
TypeError: Couldn't cast array of type
fixed_size_list<item: float>[2]
to
Sequence(feature=Value(dtype='float32', id=None), length=2, id=None)
```
### Steps to reproduce the bug
Here's a simple repro script:
```
from datasets import Features, Value, Sequence, ClassLabel, Dataset

dataset_features = Features({
    'text': Value('string'),
    'embedding': Sequence(Value('double'), length=2),
    'categories': Sequence(ClassLabel(names=sorted([
        'one',
        'two',
        'three'
    ]))),
})

dataset = Dataset.from_dict(
    {
        'text': ['A'] * 10000,
        'embedding': [[0.0, 0.1]] * 10000,
        'categories': [[0]] * 10000,
    },
    features=dataset_features
)

def test_mapper(r):
    r['text'] = list(map(lambda t: t + ' b', r['text']))
    return r

dataset = dataset.map(test_mapper, batched=True, batch_size=10, features=dataset_features, num_proc=2)
```
Removing the embedding column fixes the issue!
### Expected behavior
The mapping completes successfully.
### Environment info
- `datasets` version: 2.14.4
- Platform: macOS-14.0-arm64-arm-64bit
- Python version: 3.10.12
- Huggingface_hub version: 0.17.1
- PyArrow version: 13.0.0
- Pandas version: 2.0.3
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/6280/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/6280/timeline
| null | null | null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/6279
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6279/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6279/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6279/events
|
https://github.com/huggingface/datasets/issues/6279
| 1,928,028,226
|
I_kwDODunzps5y62BC
| 6,279
|
Batched IterableDataset
|
{
"login": "lneukom",
"id": 7010688,
"node_id": "MDQ6VXNlcjcwMTA2ODg=",
"avatar_url": "https://avatars.githubusercontent.com/u/7010688?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lneukom",
"html_url": "https://github.com/lneukom",
"followers_url": "https://api.github.com/users/lneukom/followers",
"following_url": "https://api.github.com/users/lneukom/following{/other_user}",
"gists_url": "https://api.github.com/users/lneukom/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lneukom/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lneukom/subscriptions",
"organizations_url": "https://api.github.com/users/lneukom/orgs",
"repos_url": "https://api.github.com/users/lneukom/repos",
"events_url": "https://api.github.com/users/lneukom/events{/privacy}",
"received_events_url": "https://api.github.com/users/lneukom/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null |
[
"This is exactly what I was looking for. It would also be very useful for me :-)"
] | 2023-10-05T11:12:49
| 2023-10-05T11:50:28
| null |
NONE
| null |
### Feature request
Hi,
could you add an implementation of a batched `IterableDataset`? It already supports batch iteration via `.iter(batch_size=...)`, but this cannot be used in combination with a torch `DataLoader` since it just returns an iterator.
### Motivation
The current implementation loads each element of a batch individually, which can be very slow for large batch sizes. I did some experiments [here](https://discuss.huggingface.co/t/slow-dataloader-with-big-batch-size/57224) and using batched iteration would speed up data loading significantly.
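For illustration, a rough sketch (my own, not part of `datasets`) of how `.iter(batch_size=...)` could be wrapped so a torch `DataLoader` yields whole batches; `batch_size=None` disables the DataLoader's own collation:
```python
import torch

class BatchedIterable(torch.utils.data.IterableDataset):
    """Wraps a Hugging Face iterable dataset to yield ready-made batches."""
    def __init__(self, hf_dataset, batch_size):
        self.hf_dataset = hf_dataset
        self.batch_size = batch_size

    def __iter__(self):
        yield from self.hf_dataset.iter(batch_size=self.batch_size)

# loader = torch.utils.data.DataLoader(BatchedIterable(ds, 256), batch_size=None)
```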
### Your contribution
N/A
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/6279/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/6279/timeline
| null | null | null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/6278
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6278/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6278/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6278/events
|
https://github.com/huggingface/datasets/pull/6278
| 1,927,957,877
|
PR_kwDODunzps5b_iKb
| 6,278
|
No data files duplicates
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009624 / 0.011353 (-0.001729) | 0.005121 / 0.011008 (-0.005887) | 0.105560 / 0.038508 (0.067052) | 0.090749 / 0.023109 (0.067640) | 0.430274 / 0.275898 (0.154376) | 0.443399 / 0.323480 (0.119919) | 0.006575 / 0.007986 (-0.001411) | 0.004396 / 0.004328 (0.000068) | 0.080900 / 0.004250 (0.076649) | 0.064921 / 0.037052 (0.027868) | 0.410092 / 0.258489 (0.151603) | 0.470058 / 0.293841 (0.176217) | 0.054160 / 0.128546 (-0.074386) | 0.014367 / 0.075646 (-0.061279) | 0.384844 / 0.419271 (-0.034428) | 0.072818 / 0.043533 (0.029285) | 0.429341 / 0.255139 (0.174202) | 0.430968 / 0.283200 (0.147769) | 0.038437 / 0.141683 (-0.103246) | 1.814456 / 1.452155 (0.362301) | 1.832122 / 1.492716 (0.339406) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.329266 / 0.018006 (0.311260) | 0.596848 / 0.000490 (0.596358) | 0.018291 / 0.000200 (0.018091) | 0.000113 / 0.000054 (0.000058) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030505 / 0.037411 (-0.006907) | 0.097394 / 0.014526 (0.082869) | 0.127144 / 0.176557 (-0.049412) | 0.190251 / 0.737135 (-0.546884) | 0.116543 / 0.296338 (-0.179795) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.592124 / 0.215209 (0.376915) | 5.979801 / 2.077655 (3.902146) | 2.837753 / 1.504120 (1.333633) | 2.492942 / 1.541195 (0.951747) | 2.548083 / 1.468490 
(1.079593) | 0.870446 / 4.584777 (-3.714330) | 5.493718 / 3.745712 (1.748006) | 4.945135 / 5.269862 (-0.324727) | 3.133994 / 4.565676 (-1.431683) | 0.097742 / 0.424275 (-0.326533) | 0.008750 / 0.007607 (0.001143) | 0.723304 / 0.226044 (0.497260) | 7.353766 / 2.268929 (5.084838) | 3.504808 / 55.444624 (-51.939816) | 2.872490 / 6.876477 (-4.003987) | 3.186628 / 2.142072 (1.044556) | 1.035470 / 4.805227 (-3.769758) | 0.211980 / 6.500664 (-6.288684) | 0.080356 / 0.075469 (0.004887) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.623389 / 1.841788 (-0.218399) | 23.492350 / 8.074308 (15.418042) | 21.053525 / 10.191392 (10.862133) | 0.225668 / 0.680424 (-0.454756) | 0.028311 / 0.534201 (-0.505890) | 0.472672 / 0.579283 (-0.106611) | 0.581536 / 0.434364 (0.147172) | 0.525180 / 0.540337 (-0.015158) | 0.790420 / 1.386936 (-0.596516) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009091 / 0.011353 (-0.002262) | 0.004978 / 0.011008 (-0.006030) | 0.077633 / 0.038508 (0.039125) | 0.103189 / 0.023109 (0.080080) | 0.500194 / 0.275898 (0.224296) | 0.524310 / 0.323480 (0.200831) | 0.006656 / 0.007986 (-0.001329) | 0.004586 / 0.004328 (0.000257) | 0.075535 / 0.004250 (0.071284) | 0.065100 / 0.037052 (0.028048) | 0.513776 / 0.258489 (0.255287) | 0.528483 / 0.293841 (0.234642) | 0.049877 / 0.128546 (-0.078669) | 0.012494 / 0.075646 (-0.063152) | 0.090225 / 0.419271 (-0.329046) | 0.054648 / 0.043533 (0.011116) | 0.510369 / 0.255139 (0.255230) | 0.540042 / 0.283200 (0.256842) | 0.035966 / 0.141683 (-0.105717) | 1.825965 / 1.452155 (0.373810) | 1.965647 / 1.492716 (0.472931) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.295921 / 0.018006 (0.277914) | 0.605751 / 0.000490 (0.605262) | 0.007243 / 0.000200 (0.007043) | 0.000134 / 0.000054 (0.000079) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032954 / 0.037411 (-0.004457) | 0.093613 / 0.014526 (0.079087) | 0.120010 / 0.176557 (-0.056546) | 0.176168 / 0.737135 (-0.560967) | 0.113978 / 0.296338 (-0.182360) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.682904 / 0.215209 (0.467695) | 6.674640 / 2.077655 (4.596986) | 3.360660 / 1.504120 (1.856540) | 3.227246 / 1.541195 (1.686051) | 3.188852 / 1.468490 (1.720362) | 0.862293 / 4.584777 (-3.722484) | 5.518455 / 3.745712 (1.772743) | 4.881904 / 5.269862 (-0.387957) | 3.066964 / 4.565676 (-1.498712) | 0.099284 / 0.424275 (-0.324991) | 0.008644 / 0.007607 (0.001037) | 0.789231 / 0.226044 (0.563186) | 7.872017 / 2.268929 (5.603089) | 4.037105 / 55.444624 (-51.407519) | 3.318921 / 6.876477 (-3.557555) | 3.621953 / 2.142072 (1.479881) | 1.012049 / 4.805227 (-3.793178) | 0.204541 / 6.500664 (-6.296123) | 0.074509 / 0.075469 (-0.000960) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.748215 / 1.841788 (-0.093573) | 24.274974 / 8.074308 (16.200665) | 20.582389 / 10.191392 (10.390997) | 0.251001 / 0.680424 (-0.429423) | 0.032390 / 0.534201 (-0.501811) | 0.479211 / 0.579283 (-0.100072) | 0.607482 / 0.434364 (0.173118) | 0.587867 / 0.540337 (0.047530) | 0.822399 / 1.386936 (-0.564537) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009715 / 0.011353 (-0.001638) | 0.005449 / 0.011008 (-0.005559) | 0.108556 / 0.038508 (0.070048) | 0.080512 / 0.023109 (0.057403) | 0.450736 / 0.275898 (0.174838) | 0.487771 / 0.323480 (0.164291) | 0.005155 / 0.007986 (-0.002830) | 0.004213 / 0.004328 (-0.000115) | 0.087247 / 0.004250 (0.082997) | 0.063962 / 0.037052 (0.026909) | 0.454153 / 0.258489 (0.195664) | 0.499917 / 0.293841 (0.206076) | 0.052605 / 0.128546 (-0.075942) | 0.013019 / 0.075646 (-0.062627) | 0.379716 / 0.419271 (-0.039555) | 0.073241 / 0.043533 (0.029708) | 0.473488 / 0.255139 (0.218349) | 0.482944 / 0.283200 (0.199745) | 0.041541 / 0.141683 (-0.100142) | 1.829415 / 1.452155 (0.377261) | 1.953280 / 1.492716 (0.460564) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.313725 / 0.018006 (0.295719) | 0.591336 / 0.000490 (0.590847) | 0.021224 / 0.000200 (0.021025) | 0.000969 / 0.000054 (0.000914) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031874 / 0.037411 (-0.005537) | 0.099786 / 0.014526 (0.085260) | 0.116987 / 0.176557 (-0.059569) | 0.205538 / 0.737135 (-0.531597) | 0.118716 / 0.296338 (-0.177622) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.617145 / 0.215209 (0.401936) | 6.079144 / 2.077655 (4.001489) | 2.567233 / 1.504120 (1.063113) | 2.265301 / 1.541195 (0.724107) | 2.314001 / 1.468490 
(0.845511) | 0.871561 / 4.584777 (-3.713216) | 5.477049 / 3.745712 (1.731337) | 4.720552 / 5.269862 (-0.549309) | 3.107515 / 4.565676 (-1.458162) | 0.100438 / 0.424275 (-0.323838) | 0.008586 / 0.007607 (0.000979) | 0.716913 / 0.226044 (0.490869) | 7.108417 / 2.268929 (4.839489) | 3.391336 / 55.444624 (-52.053288) | 2.734052 / 6.876477 (-4.142425) | 2.857226 / 2.142072 (0.715153) | 1.024121 / 4.805227 (-3.781106) | 0.216735 / 6.500664 (-6.283929) | 0.081605 / 0.075469 (0.006136) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.678176 / 1.841788 (-0.163611) | 23.606037 / 8.074308 (15.531729) | 21.485331 / 10.191392 (11.293939) | 0.218312 / 0.680424 (-0.462112) | 0.027061 / 0.534201 (-0.507140) | 0.481188 / 0.579283 (-0.098096) | 0.620592 / 0.434364 (0.186228) | 0.574778 / 0.540337 (0.034441) | 0.831529 / 1.386936 (-0.555407) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.011666 / 0.011353 (0.000313) | 0.005187 / 0.011008 (-0.005821) | 0.080692 / 0.038508 (0.042184) | 0.079159 / 0.023109 (0.056049) | 0.530823 / 0.275898 (0.254925) | 0.577807 / 0.323480 (0.254327) | 0.006246 / 0.007986 (-0.001740) | 0.004355 / 0.004328 (0.000026) | 0.080702 / 0.004250 (0.076452) | 0.062279 / 0.037052 (0.025226) | 0.553712 / 0.258489 (0.295223) | 0.579112 / 0.293841 (0.285271) | 0.056374 / 0.128546 (-0.072172) | 0.014681 / 0.075646 (-0.060966) | 0.097110 / 0.419271 (-0.322161) | 0.061040 / 0.043533 (0.017507) | 0.524718 / 0.255139 (0.269579) | 0.568586 / 0.283200 (0.285386) | 0.035774 / 0.141683 (-0.105909) | 1.864590 / 1.452155 (0.412435) | 1.953715 / 1.492716 (0.460998) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.271315 / 0.018006 (0.253309) | 0.571343 / 0.000490 (0.570854) | 0.015812 / 0.000200 (0.015612) | 0.000115 / 0.000054 (0.000060) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.038582 / 0.037411 (0.001170) | 0.117523 / 0.014526 (0.102997) | 0.128864 / 0.176557 (-0.047693) | 0.191164 / 0.737135 (-0.545971) | 0.133161 / 0.296338 (-0.163178) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.679305 / 0.215209 (0.464096) | 6.814451 / 2.077655 (4.736796) | 3.377431 / 1.504120 (1.873311) | 3.011008 / 1.541195 (1.469813) | 3.093200 / 1.468490 (1.624710) | 0.905827 / 4.584777 (-3.678950) | 5.456094 / 3.745712 (1.710382) | 4.848511 / 5.269862 (-0.421351) | 3.064230 / 4.565676 (-1.501447) | 0.107478 / 0.424275 (-0.316798) | 0.009234 / 0.007607 (0.001627) | 0.833944 / 0.226044 (0.607899) | 8.286100 / 2.268929 (6.017171) | 4.241455 / 55.444624 (-51.203169) | 3.405460 / 6.876477 (-3.471017) | 3.660618 / 2.142072 (1.518546) | 1.046310 / 4.805227 (-3.758917) | 0.210891 / 6.500664 (-6.289773) | 0.079413 / 0.075469 (0.003944) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.825448 / 1.841788 (-0.016340) | 24.639059 / 8.074308 (16.564750) | 21.970417 / 10.191392 (11.779025) | 0.247708 / 0.680424 (-0.432715) | 0.033810 / 0.534201 (-0.500391) | 0.495517 / 0.579283 (-0.083766) | 0.601820 / 0.434364 (0.167456) | 0.585618 / 0.540337 (0.045280) | 0.858722 / 1.386936 (-0.528214) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006137 / 0.011353 (-0.005216) | 0.003685 / 0.011008 (-0.007324) | 0.079985 / 0.038508 (0.041476) | 0.060937 / 0.023109 (0.037828) | 0.390583 / 0.275898 (0.114685) | 0.425307 / 0.323480 (0.101827) | 0.003433 / 0.007986 (-0.004552) | 0.002868 / 0.004328 (-0.001461) | 0.062572 / 0.004250 (0.058322) | 0.048642 / 0.037052 (0.011590) | 0.401096 / 0.258489 (0.142607) | 0.436988 / 0.293841 (0.143147) | 0.027645 / 0.128546 (-0.100901) | 0.007973 / 0.075646 (-0.067673) | 0.261997 / 0.419271 (-0.157275) | 0.045393 / 0.043533 (0.001860) | 0.394266 / 0.255139 (0.139127) | 0.414448 / 0.283200 (0.131248) | 0.022551 / 0.141683 (-0.119131) | 1.438458 / 1.452155 (-0.013697) | 1.501568 / 1.492716 (0.008852) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.224335 / 0.018006 (0.206329) | 0.421918 / 0.000490 (0.421428) | 0.006883 / 0.000200 (0.006683) | 0.000210 / 0.000054 (0.000155) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023505 / 0.037411 (-0.013906) | 0.072438 / 0.014526 (0.057912) | 0.083576 / 0.176557 (-0.092981) | 0.142906 / 0.737135 (-0.594229) | 0.083910 / 0.296338 (-0.212428) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.396004 / 0.215209 (0.180795) | 3.969852 / 2.077655 (1.892197) | 1.966000 / 1.504120 (0.461880) | 1.786453 / 1.541195 (0.245258) | 1.866082 / 1.468490 
(0.397592) | 0.502633 / 4.584777 (-4.082144) | 3.114331 / 3.745712 (-0.631382) | 2.940003 / 5.269862 (-2.329859) | 1.901844 / 4.565676 (-2.663832) | 0.058109 / 0.424275 (-0.366166) | 0.006502 / 0.007607 (-0.001105) | 0.463465 / 0.226044 (0.237420) | 4.641531 / 2.268929 (2.372603) | 2.315759 / 55.444624 (-53.128865) | 2.253088 / 6.876477 (-4.623389) | 2.151399 / 2.142072 (0.009326) | 0.592225 / 4.805227 (-4.213002) | 0.125072 / 6.500664 (-6.375592) | 0.059966 / 0.075469 (-0.015503) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.231392 / 1.841788 (-0.610396) | 17.533893 / 8.074308 (9.459585) | 13.710478 / 10.191392 (3.519086) | 0.147389 / 0.680424 (-0.533035) | 0.017932 / 0.534201 (-0.516269) | 0.334144 / 0.579283 (-0.245139) | 0.368817 / 0.434364 (-0.065547) | 0.383790 / 0.540337 (-0.156547) | 0.540262 / 1.386936 (-0.846674) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006066 / 0.011353 (-0.005287) | 0.003804 / 0.011008 (-0.007205) | 0.062474 / 0.038508 (0.023966) | 0.060547 / 0.023109 (0.037437) | 0.448643 / 0.275898 (0.172745) | 0.487005 / 0.323480 (0.163525) | 0.004884 / 0.007986 (-0.003102) | 0.002911 / 0.004328 (-0.001418) | 0.062950 / 0.004250 (0.058700) | 0.049672 / 0.037052 (0.012620) | 0.477491 / 0.258489 (0.219002) | 0.488234 / 0.293841 (0.194393) | 0.028711 / 0.128546 (-0.099835) | 0.008101 / 0.075646 (-0.067545) | 0.068333 / 0.419271 (-0.350939) | 0.040959 / 0.043533 (-0.002574) | 0.450716 / 0.255139 (0.195577) | 0.471089 / 0.283200 (0.187890) | 0.020710 / 0.141683 (-0.120973) | 1.474850 / 1.452155 (0.022695) | 1.540115 / 1.492716 (0.047399) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.229811 / 0.018006 (0.211805) | 0.419526 / 0.000490 (0.419036) | 0.003818 / 0.000200 (0.003618) | 0.000084 / 0.000054 (0.000030) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026045 / 0.037411 (-0.011366) | 0.080325 / 0.014526 (0.065799) | 0.091549 / 0.176557 (-0.085007) | 0.145253 / 0.737135 (-0.591882) | 0.091849 / 0.296338 (-0.204489) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.463047 / 0.215209 (0.247838) | 4.598727 / 2.077655 (2.521072) | 2.558996 / 1.504120 (1.054877) | 2.405896 / 1.541195 (0.864701) | 2.447291 / 1.468490 (0.978801) | 0.510393 / 4.584777 (-4.074384) | 3.173344 / 3.745712 (-0.572368) | 2.901201 / 5.269862 (-2.368661) | 1.896440 / 4.565676 (-2.669236) | 0.058374 / 0.424275 (-0.365901) | 0.006449 / 0.007607 (-0.001158) | 0.539653 / 0.226044 (0.313608) | 5.408217 / 2.268929 (3.139289) | 3.042453 / 55.444624 (-52.402172) | 2.656724 / 6.876477 (-4.219753) | 2.838165 / 2.142072 (0.696092) | 0.598663 / 4.805227 (-4.206565) | 0.126211 / 6.500664 (-6.374453) | 0.062830 / 0.075469 (-0.012639) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.392412 / 1.841788 (-0.449376) | 18.195170 / 8.074308 (10.120862) | 14.788251 / 10.191392 (4.596859) | 0.132579 / 0.680424 (-0.547845) | 0.017867 / 0.534201 (-0.516334) | 0.340020 / 0.579283 (-0.239263) | 0.386719 / 0.434364 (-0.047645) | 0.398863 / 0.540337 (-0.141475) | 0.579320 / 1.386936 (-0.807617) |\n\n</details>\n</details>\n\n\n",
"closing in favor of https://github.com/huggingface/datasets/pull/6282"
] | 2023-10-05T10:31:58
| 2023-10-05T14:43:17
| 2023-10-05T14:43:17
|
MEMBER
| null |
I added a new DataFilesSet class to disallow duplicate data files.
I also deprecated DataFilesList.
EDIT: actually I might just add `drop_duplicates=True` to `.from_patterns` (a rough dedup sketch follows below)
close https://github.com/huggingface/datasets/issues/6259
close https://github.com/huggingface/datasets/issues/6272
TODO:
- [ ] tests
- [ ] preserve data files order
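For the order-preserving part, a minimal dedup sketch (an illustration of the idea, not the final API):
```python
def drop_duplicates(data_files: list) -> list:
    # dict preserves insertion order, so the original resolution order is kept
    return list(dict.fromkeys(data_files))
```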
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/6278/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/6278/timeline
| null | null | true
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6278",
"html_url": "https://github.com/huggingface/datasets/pull/6278",
"diff_url": "https://github.com/huggingface/datasets/pull/6278.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6278.patch",
"merged_at": null
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/6277
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6277/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6277/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6277/events
|
https://github.com/huggingface/datasets/issues/6277
| 1,927,044,546
|
I_kwDODunzps5y3F3C
| 6,277
|
FileNotFoundError: Couldn't find a module script at /content/paws-x/paws-x.py. Module 'paws-x' doesn't exist on the Hugging Face Hub either.
|
{
"login": "diegogonzalezc",
"id": 66733346,
"node_id": "MDQ6VXNlcjY2NzMzMzQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/66733346?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/diegogonzalezc",
"html_url": "https://github.com/diegogonzalezc",
"followers_url": "https://api.github.com/users/diegogonzalezc/followers",
"following_url": "https://api.github.com/users/diegogonzalezc/following{/other_user}",
"gists_url": "https://api.github.com/users/diegogonzalezc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/diegogonzalezc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/diegogonzalezc/subscriptions",
"organizations_url": "https://api.github.com/users/diegogonzalezc/orgs",
"repos_url": "https://api.github.com/users/diegogonzalezc/repos",
"events_url": "https://api.github.com/users/diegogonzalezc/events{/privacy}",
"received_events_url": "https://api.github.com/users/diegogonzalezc/received_events",
"type": "User",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[
"`evaluate.load(\"paws-x\", \"es\")` throws the error because there is no such metric in the `evaluate` lib.\r\n\r\nSo, this is unrelated to our lib."
] | 2023-10-04T22:01:25
| 2023-10-05T14:00:58
| null |
NONE
| null |
### Describe the bug
I'm encountering a "FileNotFoundError" while attempting to use the "paws-x" dataset to retrain the DistilRoBERTa-base model. The error message is as follows:
FileNotFoundError: Couldn't find a module script at /content/paws-x/paws-x.py. Module 'paws-x' doesn't exist on the Hugging Face Hub either.
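Per the maintainer comment above, the error comes from trying to load "paws-x" as a metric rather than as a dataset. A hedged sketch of the distinction (assuming the notebook needs the dataset plus an accuracy metric):
```python
from datasets import load_dataset
import evaluate

dataset = load_dataset("paws-x", "es")  # the dataset loads fine from the Hub
metric = evaluate.load("accuracy")      # "paws-x" is a dataset, not an evaluate metric
```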
### Steps to reproduce the bug
https://colab.research.google.com/drive/11xUUFxloClpmqLvDy_Xxfmo3oUzjY5nx#scrollTo=kUn74FigzhHm
### Expected behavior
The trained model should be produced.
### Environment info
colab, "paws-x" dataset , DistilRoBERTa-base model
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/6277/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/6277/timeline
| null | null | null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/6276
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6276/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6276/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6276/events
|
https://github.com/huggingface/datasets/issues/6276
| 1,925,961,878
|
I_kwDODunzps5yy9iW
| 6,276
|
I'm trying to fine-tune the openai/whisper model from Hugging Face using a Jupyter notebook and I keep getting this error
|
{
"login": "valaofficial",
"id": 50768065,
"node_id": "MDQ6VXNlcjUwNzY4MDY1",
"avatar_url": "https://avatars.githubusercontent.com/u/50768065?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/valaofficial",
"html_url": "https://github.com/valaofficial",
"followers_url": "https://api.github.com/users/valaofficial/followers",
"following_url": "https://api.github.com/users/valaofficial/following{/other_user}",
"gists_url": "https://api.github.com/users/valaofficial/gists{/gist_id}",
"starred_url": "https://api.github.com/users/valaofficial/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/valaofficial/subscriptions",
"organizations_url": "https://api.github.com/users/valaofficial/orgs",
"repos_url": "https://api.github.com/users/valaofficial/repos",
"events_url": "https://api.github.com/users/valaofficial/events{/privacy}",
"received_events_url": "https://api.github.com/users/valaofficial/received_events",
"type": "User",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[
"Since you are using Windows, maybe moving the `map` call inside `if __name__ == \"__main__\"` can fix the issue:\r\n```python\r\nif __name__ == \"__main__\":\r\n common_voice = common_voice.map(prepare_dataset, remove_columns=common_voice.column_names[\"train\"], num_proc=4)\r\n```\r\n\r\nOtherwise, the only solution is to set `num_proc=1`.",
 Since you are">
"> Since you are using Windows, maybe moving the `map` call inside `if __name__ == \"__main__\"` can fix the issue:\r\n> \r\n> ```python\r\n> if __name__ == \"__main__\":\r\n> common_voice = common_voice.map(prepare_dataset, remove_columns=common_voice.column_names[\"train\"], num_proc=4)\r\n> ```\r\n> \r\n> Otherwise, the only solution is to set `num_proc=1`.\r\n\r\nThank you very much for the response, I eventually tried setting `num_proc=1` and now the Jupyter notebook kernel keeps dying after running the command. What do you think the issue could be? Could it be that my system is not capable of running the command? \"I'm using a Lenovo Thinkpad T440 with no GPU\""
] | 2023-10-04T11:03:41
| 2023-10-04T22:14:38
| null |
NONE
| null |
### Describe the bug
I'm trying to fine-tune the openai/whisper model from Hugging Face using a Jupyter notebook and I keep getting this error. I'm following the steps in this blog post:
https://huggingface.co/blog/fine-tune-whisper
I tried Google Colab and it works, but because I'm on the free tier the training doesn't complete.
The error occurs in the Jupyter notebook when I run this line:
`common_voice = common_voice.map(prepare_dataset, remove_columns=common_voice.column_names["train"], num_proc=4)`
here is the error message
```
Map (num_proc=4): 0% 0/2506 [00:52<?, ? examples/s]
The above exception was the direct cause of the following exception:
NameError Traceback (most recent call last) Cell In[19], line 1 ----> 1 common_voice = common_voice.map(prepare_dataset, remove_columns=common_voice.column_names["train"], num_proc=4)
File ~\anaconda\Lib\site-packages\datasets\dataset_dict.py:853, in DatasetDict.map(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_names, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, desc) 850 if cache_file_names is None: 851 cache_file_names = {k: None for k in self} 852 return DatasetDict( --> 853 { 854 k: dataset.map( 855 function=function, 856 with_indices=with_indices, 857 with_rank=with_rank, 858 input_columns=input_columns, 859 batched=batched, 860 batch_size=batch_size, 861 drop_last_batch=drop_last_batch, 862 remove_columns=remove_columns, 863 keep_in_memory=keep_in_memory, 864 load_from_cache_file=load_from_cache_file, 865 cache_file_name=cache_file_names[k], 866 writer_batch_size=writer_batch_size, 867 features=features, 868 disable_nullable=disable_nullable, 869 fn_kwargs=fn_kwargs, 870 num_proc=num_proc, 871 desc=desc, 872 ) 873 for k, dataset in self.items() 874 } 875 )
File ~\anaconda\Lib\site-packages\datasets\dataset_dict.py:854, in <dictcomp>(.0) 850 if cache_file_names is None: 851 cache_file_names = {k: None for k in self} 852 return DatasetDict( 853 { --> 854 k: dataset.map( 855 function=function, 856 with_indices=with_indices, 857 with_rank=with_rank, 858 input_columns=input_columns, 859 batched=batched, 860 batch_size=batch_size, 861 drop_last_batch=drop_last_batch, 862 remove_columns=remove_columns, 863 keep_in_memory=keep_in_memory, 864 load_from_cache_file=load_from_cache_file, 865 cache_file_name=cache_file_names[k], 866 writer_batch_size=writer_batch_size, 867 features=features, 868 disable_nullable=disable_nullable, 869 fn_kwargs=fn_kwargs, 870 num_proc=num_proc, 871 desc=desc, 872 ) 873 for k, dataset in self.items() 874 } 875 )
File ~\anaconda\Lib\site-packages\datasets\arrow_dataset.py:592, in transmit_tasks.<locals>.wrapper(*args, **kwargs) 590 self: "Dataset" = kwargs.pop("self") 591 # apply actual function --> 592 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) 593 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out] 594 for dataset in datasets: 595 # Remove task templates if a column mapping of the template is no longer valid
File ~\anaconda\Lib\site-packages\datasets\arrow_dataset.py:557, in transmit_format.<locals>.wrapper(*args, **kwargs) 550 self_format = { 551 "type": self._format_type, 552 "format_kwargs": self._format_kwargs, 553 "columns": self._format_columns, 554 "output_all_columns": self._output_all_columns, 555 } 556 # apply actual function --> 557 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) 558 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out] 559 # re-apply format to the output
File ~\anaconda\Lib\site-packages\datasets\arrow_dataset.py:3189, in Dataset.map(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc) 3182 logger.info(f"Spawning {num_proc} processes") 3183 with logging.tqdm( 3184 disable=not logging.is_progress_bar_enabled(), 3185 unit=" examples", 3186 total=pbar_total, 3187 desc=(desc or "Map") + f" (num_proc={num_proc})", 3188 ) as pbar: -> 3189 for rank, done, content in iflatmap_unordered( 3190 pool, Dataset._map_single, kwargs_iterable=kwargs_per_job 3191 ): 3192 if done: 3193 shards_done += 1
File ~\anaconda\Lib\site-packages\datasets\utils\py_utils.py:1394, in iflatmap_unordered(pool, func, kwargs_iterable) 1391 finally: 1392 if not pool_changed: 1393 # we get the result in case there's an error to raise -> 1394 [async_result.get(timeout=0.05) for async_result in async_results]
File ~\anaconda\Lib\site-packages\datasets\utils\py_utils.py:1394, in <listcomp>(.0) 1391 finally: 1392 if not pool_changed: 1393 # we get the result in case there's an error to raise -> 1394 [async_result.get(timeout=0.05) for async_result in async_results]
File ~\anaconda\Lib\site-packages\multiprocess\pool.py:774, in ApplyResult.get(self, timeout) 772 return self._value 773 else: --> 774 raise self._value
NameError: name 'feature_extractor' is not defined
```
### Steps to reproduce the bug
1. Follow the steps in this blog post:
https://huggingface.co/blog/fine-tune-whisper
2. Run this line of code:
`common_voice = common_voice.map(prepare_dataset, remove_columns=common_voice.column_names["train"], num_proc=4)`
3. I'm using a Jupyter notebook from Anaconda.
### Expected behavior
No error message
### Environment info
datasets version: 2.8.0
Python version: 3.11
Windows 10
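For reference, a minimal sketch of a script layout that avoids the `NameError` with Windows multiprocessing (assumptions: `common_voice` is loaded earlier as in the blog post, and `feature_extractor` must exist at module import time because spawned worker processes re-import the script):
```python
from transformers import WhisperFeatureExtractor

# defined at module top level so "spawn"-ed worker processes can recreate it
feature_extractor = WhisperFeatureExtractor.from_pretrained("openai/whisper-small")

def prepare_dataset(batch):
    audio = batch["audio"]
    batch["input_features"] = feature_extractor(
        audio["array"], sampling_rate=audio["sampling_rate"]
    ).input_features[0]
    return batch

if __name__ == "__main__":
    # common_voice is assumed to be loaded earlier, as in the blog post
    common_voice = common_voice.map(
        prepare_dataset,
        remove_columns=common_voice.column_names["train"],
        num_proc=4,
    )
```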
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/6276/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/6276/timeline
| null | null | null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/6275
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6275/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6275/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6275/events
|
https://github.com/huggingface/datasets/issues/6275
| 1,921,354,680
|
I_kwDODunzps5yhYu4
| 6,275
|
Would like to Contribute a dataset
|
{
"login": "vikas70607",
"id": 97907750,
"node_id": "U_kgDOBdX0Jg",
"avatar_url": "https://avatars.githubusercontent.com/u/97907750?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vikas70607",
"html_url": "https://github.com/vikas70607",
"followers_url": "https://api.github.com/users/vikas70607/followers",
"following_url": "https://api.github.com/users/vikas70607/following{/other_user}",
"gists_url": "https://api.github.com/users/vikas70607/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vikas70607/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vikas70607/subscriptions",
"organizations_url": "https://api.github.com/users/vikas70607/orgs",
"repos_url": "https://api.github.com/users/vikas70607/repos",
"events_url": "https://api.github.com/users/vikas70607/events{/privacy}",
"received_events_url": "https://api.github.com/users/vikas70607/received_events",
"type": "User",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[
"Hi! The process of contributing a dataset is explained here: https://huggingface.co/docs/datasets/upload_dataset. Also, check https://huggingface.co/docs/datasets/image_dataset for a more detailed explanation of how to share an image dataset."
] | 2023-10-02T07:00:21
| 2023-10-02T15:56:34
| null |
NONE
| null |
I have a dataset of 2,500 images that can be used for color-blindness machine-learning algorithms. Since no such dataset was available online, I made this dataset myself and would now like to contribute it to the community.
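Per the docs linked in the comment above, a minimal upload sketch (the repo id and folder layout here are hypothetical):
```python
from datasets import load_dataset

# assumes the images are organized in class-named subfolders under data_dir
ds = load_dataset("imagefolder", data_dir="path/to/color_blind_images")
ds.push_to_hub("your-username/color-blindness-images")  # hypothetical repo id
```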
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/6275/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/6275/timeline
| null | null | null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/6274
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6274/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6274/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6274/events
|
https://github.com/huggingface/datasets/issues/6274
| 1,921,036,328
|
I_kwDODunzps5ygLAo
| 6,274
|
FileNotFoundError for dataset with multiple builder configs
|
{
"login": "LouisChen15",
"id": 97120485,
"node_id": "U_kgDOBcnw5Q",
"avatar_url": "https://avatars.githubusercontent.com/u/97120485?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LouisChen15",
"html_url": "https://github.com/LouisChen15",
"followers_url": "https://api.github.com/users/LouisChen15/followers",
"following_url": "https://api.github.com/users/LouisChen15/following{/other_user}",
"gists_url": "https://api.github.com/users/LouisChen15/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LouisChen15/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LouisChen15/subscriptions",
"organizations_url": "https://api.github.com/users/LouisChen15/orgs",
"repos_url": "https://api.github.com/users/LouisChen15/repos",
"events_url": "https://api.github.com/users/LouisChen15/events{/privacy}",
"received_events_url": "https://api.github.com/users/LouisChen15/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"Please tell me if the above info is not enough for solving the problem. I will then make my dataset public temporarily so that you can really reproduce the bug. "
] | 2023-10-01T23:45:56
| 2023-10-02T20:09:38
| 2023-10-02T20:09:38
|
NONE
| null |
### Describe the bug
When there is only one config and only the dataset name is passed to `datasets.load_dataset()`, everything works fine. But if I create a second builder config for my dataset and pass the config name to `datasets.load_dataset()`, the following error occurs:
FileNotFoundError: [Errno 2] No such file or directory: 'C:/Users/chenx/.cache/huggingface/datasets/my_dataset/0_shot_multiple_choice/1.0.0/97c3854a012cfd6b045e3be4c864739902af2d818bb9235b047baa94c302e9a2.incomplete/my_dataset-test-00000-00000-of-NNNNN.arrow'
The "XXX.incomplete" folder in my dataset's cache directory disappears before "generating test split", which does not happen when no config name is passed and the config name falls back to "default":
C:\Users\chenx\.cache\huggingface\datasets\my_dataset\0_shot_multiple_choice\1.0.0
The folder that is supposed to remain under the above directory disappears, so the data generator has nowhere to write its output.
### Steps to reproduce the bug
test = load_dataset('my_dataset', '0_shot_multiple_choice')
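Since the reporter's script isn't public, here is a minimal two-config builder sketch (entirely hypothetical) that could serve as a scaffold for reproducing this:
```python
import datasets

class MyDataset(datasets.GeneratorBasedBuilder):
    BUILDER_CONFIGS = [
        datasets.BuilderConfig(name="default", version=datasets.Version("1.0.0")),
        datasets.BuilderConfig(name="0_shot_multiple_choice", version=datasets.Version("1.0.0")),
    ]
    DEFAULT_CONFIG_NAME = "default"

    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features({"question": datasets.Value("string")})
        )

    def _split_generators(self, dl_manager):
        return [datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={})]

    def _generate_examples(self):
        for i in range(3):
            yield i, {"question": f"{self.config.name} question {i}"}
```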
### Expected behavior
The dataset should load successfully instead of raising:
FileNotFoundError: [Errno 2] No such file or directory: 'C:/Users/chenx/.cache/huggingface/datasets/my_dataset/0_shot_multiple_choice/1.0.0/97c3854a012cfd6b045e3be4c864739902af2d818bb9235b047baa94c302e9a2.incomplete/my_dataset-test-00000-00000-of-NNNNN.arrow'
### Environment info
datasets 2.14.5
python 3.8.18
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/6274/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/6274/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/6273
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6273/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6273/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6273/events
|
https://github.com/huggingface/datasets/issues/6273
| 1,920,922,260
|
I_kwDODunzps5yfvKU
| 6,273
|
Broken link to PubMed Abstracts dataset.
|
{
"login": "sameemqureshi",
"id": 100606327,
"node_id": "U_kgDOBf8hdw",
"avatar_url": "https://avatars.githubusercontent.com/u/100606327?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sameemqureshi",
"html_url": "https://github.com/sameemqureshi",
"followers_url": "https://api.github.com/users/sameemqureshi/followers",
"following_url": "https://api.github.com/users/sameemqureshi/following{/other_user}",
"gists_url": "https://api.github.com/users/sameemqureshi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sameemqureshi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sameemqureshi/subscriptions",
"organizations_url": "https://api.github.com/users/sameemqureshi/orgs",
"repos_url": "https://api.github.com/users/sameemqureshi/repos",
"events_url": "https://api.github.com/users/sameemqureshi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sameemqureshi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[
"This has already been reported in the HF Course repo (https://github.com/huggingface/course/issues/623).",
"@lhoestq @albertvillanova @lewtun I don't think we are allowed to host these data files on the Hub (due to DMCA), which means the only option is to use a different dataset in the course (and to re-record the video 🙂), no?",
"Keeping the video is maybe fine, we can add a note on youtube to suggest to load a dataset with a different name. Maybe C4 ? And update the code snippets on the website ?"
] | 2023-10-01T19:08:48
| 2023-10-02T16:40:18
| null |
NONE
| null |
### Describe the bug
The link provided for the dataset is broken:
data_files =
[https://the-eye.eu/public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst](url)
### Steps to reproduce the bug
Steps to reproduce:
1) Head over to [https://huggingface.co/learn/nlp-course/chapter5/4?fw=pt#big-data-datasets-to-the-rescue](url)
2) In the section "What is the Pile?", you can see a code snippet that contains the broken link.
### Expected behavior
The link should redirect to the "PubMed Abstracts" dataset as expected.
### Environment info
.
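Until the link is fixed, one of the maintainer comments above suggests switching the course example to a different large dataset such as C4; a hedged sketch:
```python
from datasets import load_dataset

# streaming avoids downloading the full corpus, as in the course's big-data section
c4 = load_dataset("allenai/c4", "en", split="train", streaming=True)
print(next(iter(c4)))
```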
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/6273/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/6273/timeline
| null | null | null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/6272
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6272/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6272/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6272/events
|
https://github.com/huggingface/datasets/issues/6272
| 1,920,831,487
|
I_kwDODunzps5yfY__
| 6,272
|
Duplicate `data_files` when named `<split>/<split>.parquet`
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null |
[
"Also reported in https://github.com/huggingface/datasets/issues/6259",
"I think it's best to drop duplicates with a `set` (as a temporary fix) and improve the patterns when/if https://github.com/fsspec/filesystem_spec/pull/1382 gets merged. @lhoestq Do you have some other ideas?",
"Alternatively we could just use this no ?\r\n\r\n```python\r\nif config.FSSPEC_VERSION < version.parse(\"2023.9.0\"):\r\n KEYWORDS_IN_PATH_NAME_BASE_PATTERNS = [\r\n \"{keyword}[{sep}/]**\",\r\n \"**[{sep}]{keyword}[{sep}/]**\",\r\n \"**/{keyword}[{sep}/]**\",\r\n ]\r\nelse:\r\n KEYWORDS_IN_PATH_NAME_BASE_PATTERNS = [\r\n \"{keyword}[{sep}/]**\",\r\n \"**/*[{sep}]{keyword}[{sep}/]**\",\r\n \"**/*/{keyword}[{sep}/]**\",\r\n ]\r\n```\r\n\r\nThis way no need to implement sets, which would require a bit of work since we've always considered a list of pattern to be resolved as the concatenated list of resolved files for each pattern (including duplicates)\r\n",
"Arf `\"**/*/{keyword}[{sep}/]**\"` does return `data/keyword.txt` in latest `fsspec` but not in `glob.glob`\r\n\r\nEDIT: actually forgot to set `recursive=True`",
"Actually `glob.glob` does return it with `recursive=True` ! my bad",
"Pff just tested and my idea sucks, pattern 1 and 3 obviously give duplicates ",
"> I think it's best to drop duplicates with a set (as a temporary fix)\r\n\r\nI started https://github.com/huggingface/datasets/pull/6278 to use DataFilesSet objects instead of DataFilesList"
] | 2023-10-01T15:43:56
| 2023-10-05T10:32:27
| null |
MEMBER
| null |
e.g. with `u23429/stock_1_minute_ticker`
```ipython
In [1]: from datasets import *
In [2]: b = load_dataset_builder("u23429/stock_1_minute_ticker")
Downloading readme: 100%|██████████████████████████| 627/627 [00:00<00:00, 246kB/s]
In [3]: b.config.data_files
Out[3]:
{NamedSplit('train'): ['hf://datasets/u23429/stock_1_minute_ticker@65c973cf4ec061f01a363b40da4c1bb128ba4166/train/train.parquet',
'hf://datasets/u23429/stock_1_minute_ticker@65c973cf4ec061f01a363b40da4c1bb128ba4166/train/train.parquet'],
NamedSplit('validation'): ['hf://datasets/u23429/stock_1_minute_ticker@65c973cf4ec061f01a363b40da4c1bb128ba4166/validation/validation.parquet',
'hf://datasets/u23429/stock_1_minute_ticker@65c973cf4ec061f01a363b40da4c1bb128ba4166/validation/validation.parquet'],
NamedSplit('test'): ['hf://datasets/u23429/stock_1_minute_ticker@65c973cf4ec061f01a363b40da4c1bb128ba4166/test/test.parquet',
'hf://datasets/u23429/stock_1_minute_ticker@65c973cf4ec061f01a363b40da4c1bb128ba4166/test/test.parquet']}
```
This bug is present in the current `datasets` 2.14.5 and also on `main`, even after https://github.com/huggingface/datasets/pull/6244. cc @mariosasko
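Until the pattern resolution is fixed, a user-side workaround sketch (hacky, and it mutates the builder config in place), continuing the IPython session above:
```python
b.config.data_files = {
    split: list(dict.fromkeys(files))  # order-preserving dedup
    for split, files in b.config.data_files.items()
}
```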
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/6272/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/6272/timeline
| null | null | null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/6271
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6271/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6271/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6271/events
|
https://github.com/huggingface/datasets/issues/6271
| 1,920,420,295
|
I_kwDODunzps5yd0nH
| 6,271
|
Overwriting Split overwrites data but not metadata, corrupting dataset
|
{
"login": "govindrai",
"id": 13859249,
"node_id": "MDQ6VXNlcjEzODU5MjQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/13859249?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/govindrai",
"html_url": "https://github.com/govindrai",
"followers_url": "https://api.github.com/users/govindrai/followers",
"following_url": "https://api.github.com/users/govindrai/following{/other_user}",
"gists_url": "https://api.github.com/users/govindrai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/govindrai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/govindrai/subscriptions",
"organizations_url": "https://api.github.com/users/govindrai/orgs",
"repos_url": "https://api.github.com/users/govindrai/repos",
"events_url": "https://api.github.com/users/govindrai/events{/privacy}",
"received_events_url": "https://api.github.com/users/govindrai/received_events",
"type": "User",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2023-09-30T22:37:31
| 2023-09-30T22:37:31
| null |
NONE
| null |
### Describe the bug
I want to be able to overwrite/update/delete splits in my dataset. Currently, the only way to do so is to manually go into the dataset repo and delete the split. If I try to overwrite programmatically, I end up in an error state and (somewhat) corrupt the dataset. Read below.
**Current Behavior**
When I push to an existing split I get this error:
`ValueError: Split complexRoofLocation_01Apr2023_to_31May2023test already present`
This seems to suggest that the library doesn't support overwriting splits.
**Potential Bug**
What’s strange is that datasets, despite the operation erroring out with the ValueError above, does, in fact, overwrite the split:
`Pushing dataset shards to the dataset hub: 100% [.....................] 1/1 [00:00<00:00, 55.04it/s]`
Even though you get an error message and your code fails, your dataset has now changed. That seems like a bug: either don't change the dataset, or don't throw the error and let the script proceed.
**Additional Bug**
While it overwrites the split's data, it doesn't overwrite the split's metadata. Because of this, when you pull down the dataset you may get a `NonMatchingSplitsSizesError` if the size of the split changed during the overwrite. For example, my original split had 5 rows, but my overwrite only had 4. When I then try to download the dataset, I get a `NonMatchingSplitsSizesError` because the dataset's data.json states there are 5 rows but only 4 exist in the split.
**Expected Behavior**
This corrupts the dataset, rendering it unusable until you intervene manually. Either the library should let the overwrite happen (and also update the metadata) or it shouldn't change anything.
### Steps to reproduce the bug
[Colab Notebook](https://colab.research.google.com/drive/1bqVkD06Ngs9MQNdSk_ygCG6y1UqXA4pC?usp=sharing)
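For readers who prefer an inline reproduction over the notebook, a minimal sketch of the same flow (the repo id `user/demo-ds` is hypothetical):
```python
# Hedged sketch of the reported flow; "user/demo-ds" is a placeholder repo id.
from datasets import Dataset, load_dataset

Dataset.from_dict({"x": [1, 2, 3, 4, 5]}).push_to_hub("user/demo-ds", split="mysplit")  # first push: 5 rows

# Second push with fewer rows raises "ValueError: Split mysplit already present",
# yet the parquet shard on the Hub is overwritten anyway (now 4 rows).
Dataset.from_dict({"x": [1, 2, 3, 4]}).push_to_hub("user/demo-ds", split="mysplit")

# The stored split metadata still says 5 rows, so loading fails:
load_dataset("user/demo-ds")  # NonMatchingSplitsSizesError
```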
### Expected behavior
The split should be overwritten and I should be able to use the new version of the dataset without issue.
### Environment info
- `datasets` version: 2.14.5
- Platform: Linux-5.15.120+-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.17.3
- PyArrow version: 9.0.0
- Pandas version: 1.5.3
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/6271/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/6271/timeline
| null | null | null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/6270
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6270/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6270/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6270/events
|
https://github.com/huggingface/datasets/issues/6270
| 1,920,329,373
|
I_kwDODunzps5ydead
| 6,270
|
Dataset.from_generator raises with sharded gen_args
|
{
"login": "hartmans",
"id": 53510,
"node_id": "MDQ6VXNlcjUzNTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/53510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hartmans",
"html_url": "https://github.com/hartmans",
"followers_url": "https://api.github.com/users/hartmans/followers",
"following_url": "https://api.github.com/users/hartmans/following{/other_user}",
"gists_url": "https://api.github.com/users/hartmans/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hartmans/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hartmans/subscriptions",
"organizations_url": "https://api.github.com/users/hartmans/orgs",
"repos_url": "https://api.github.com/users/hartmans/repos",
"events_url": "https://api.github.com/users/hartmans/events{/privacy}",
"received_events_url": "https://api.github.com/users/hartmans/received_events",
"type": "User",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[
"`gen_kwargs` should be a `dict`, as stated in the docstring, but you are passing a `list`.\r\n\r\nSo, to fix the error, replace the list of dicts with a dict of lists (and slightly modify the generator function):\r\n```python\r\nfrom pathlib import Path\r\nimport datasets\r\n\r\ndef process_yaml(files):\r\n for f in files:\r\n # process\r\n yield dict(...)\r\n\r\n\r\nif __name__ == '__main__':\r\n import sys\r\n dir = Path(sys.argv[0]).parent\r\n ds = datasets.Dataset.from_generator(process_yaml, gen_kwargs={'files': [f for f in dir.glob('*.yml')]})\r\n ds.to_json('training.jsonl')\r\n```",
"That runs, and because my dataset is small, it's what I did to get past the problem.\r\nHowever, it does not produce a sharded dataset. From the doc string I expect there ought to be a way to call from_generator such that num_shards in the resulting data set is equal to the number of items in the list.\r\nThe part of the doc string that your suggestion is not responsive to is:\r\n` You can define a sharded dataset by passing the list of shards in *g\r\nen_kwargs*.\r\n`\r\n\r\nWhat your suggestion does is calls the generator once, with the list argument, and produces a single shard dataset.\r\n",
"The sharding mentioned here refers to using this function with `num_proc` (multiprocessing splits the `kwargs` into shards and passes them to the generator function)\r\n\r\n> That runs, and because my dataset is small, it's what I did to get past the problem.\r\n\r\n`from_generator` generates a memory-mapped dataset (can be larger than RAM), so the dataset size should not be an issue unless the generator function's implementation does not properly free the memory.\r\n",
"It sounds like you are saying that num_proc affects the form of gen_kwargs.\r\nAre you saying that for non-zero num_proc gen_kwargs should be a list whose length is the same as num_proc?\r\nOr are you saying that for non-zero num_proc, gen_kwargs should be a dict whose elements are lists the length of num_proc?\r\n",
"I ran some tests. So, it looks like with num_proc greater than 1, gen_kwargs is expected to be a dict of lists. It calls the generator also with a dict of lists, but the lists are split.\r\nI.E. if my original has `gen_kwargs=dict(a=[0,1,2])`, then my generator might get called with `gen_kwalrgs=dict([0])`.\r\nThat all makes sense, but I definitely think there is room for improvement in the doc string here.\r\nIn order to suggest improvements to the doc string, I need to look at how the gen_kwargs are split, and figure out if:\r\n* num_proc needs to exactly equal the length of the lists\r\n* num_proc needs to evenly divide the length of the lists\r\n* Or there's no required relationship.\r\nI'll look into that and then propose an improved doc string if no one else gets to it first.",
"Okay, that was fun; I took a dive through the dataset code and feel like I have a much better understanding.\r\nHere is my understanding of the behavior:\r\n* max_proc is an upper limit on the number of shards that `from_generator` produces\r\n* If `max_proc` is greater than 1, then all lists in *gen_kwargs* must be the same length\r\n* If the lists in *gen_kwargs* are shorter than *num_proc* elements, *num_proc* will be reduced and a warning produced. Put another way, `min(list_length, num_shards)` shards will be produced\r\n* The members of the lists in *gen_kwargs* will be partitioned among the created jobs.\r\nTo validate the above, take a look at\r\n`_number_of_shards_in_gen_kwargs` and `_distribute_shards` and `_split_gen_kwargs` in utils/sharding.py.\r\nI've also chased down starting at *from_generator* all the way through to GeneratorBuilder and the calls to the functions in sharding.py.\r\nTomorrow I'll take a look at the contributing guidelines and see what's involved in putting together a PR to improve the doc string."
] | 2023-09-30T16:50:06
| 2023-10-03T01:21:39
| null |
CONTRIBUTOR
| null |
### Describe the bug
According to the docs of `Dataset.from_generator`:
```
gen_kwargs(`dict`, *optional*):
Keyword arguments to be passed to the `generator` callable.
You can define a sharded dataset by passing the list of shards in `gen_kwargs`.
```
So I'd expect that if gen_kwargs were a list, my generator would be called once for each element in the list, receiving that element's dict.
It doesn't work that way though.
### Steps to reproduce the bug
```python
#!/usr/bin/python
from pathlib import Path
import datasets
def process_yaml(file):
yield dict(example=42)
if __name__ == '__main__':
import sys
dir = Path(sys.argv[0]).parent
ds = datasets.Dataset.from_generator(process_yaml, gen_kwargs=[{'file':f} for f in dir.glob('*.yml')],
)
ds.to_json('training.jsonl')
```
```
Generating train split: 0 examples [00:00, ? examples/s]
Traceback (most recent call last):
File "/tmp/dataset_bug.py", line 13, in <module>
ds = datasets.Dataset.from_generator(process_yaml, gen_kwargs=[{'file':f} for f in dir.glob('*.yml')],
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hartmans/ai/venv/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 1072, in from_generator
).read()
^^^^^^
File "/home/hartmans/ai/venv/lib/python3.11/site-packages/datasets/io/generator.py", line 47, in read
self.builder.download_and_prepare(
File "/home/hartmans/ai/venv/lib/python3.11/site-packages/datasets/builder.py", line 954, in download_and_prepare
self._download_and_prepare(
File "/home/hartmans/ai/venv/lib/python3.11/site-packages/datasets/builder.py", line 1717, in _download_and_prepare
super()._download_and_prepare(
File "/home/hartmans/ai/venv/lib/python3.11/site-packages/datasets/builder.py", line 1049, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/hartmans/ai/venv/lib/python3.11/site-packages/datasets/builder.py", line 1555, in _prepare_split
for job_id, done, content in self._prepare_split_single(
File "/home/hartmans/ai/venv/lib/python3.11/site-packages/datasets/builder.py", line 1656, in _prepare_split_single
generator = self._generate_examples(**gen_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: datasets.packaged_modules.generator.generator.Generator._generate_examples() argument after ** must be a mapping, not list
```
### Expected behavior
I would expect that process_yaml would be called once for each yaml file in the directory where the script is run.
I also tried putting the list inside `gen_kwargs`, but in that case `process_yaml` gets called with the whole list.
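For contrast, a sketch of the dict-of-lists form that does shard, per the maintainer comments above (`num_proc` partitions the lists across workers):
```python
# Hedged sketch (not from the issue): sharded from_generator call.
# gen_kwargs must be a dict whose values are lists; with num_proc > 1
# the lists are split into sub-lists, one per worker.
from pathlib import Path
import datasets

def process_yaml(files):
    for f in files:               # each worker receives a sub-list of `files`
        yield dict(example=42)

files = sorted(Path(".").glob("*.yml"))
ds = datasets.Dataset.from_generator(
    process_yaml,
    gen_kwargs={"files": files},  # dict of lists, not a list of dicts
    num_proc=2,                   # should not exceed len(files)
)
```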
### Environment info
- `datasets` version: 2.14.6.dev0 (git commit 0cc77d7f45c7369; also tested with 2.14.0)
- Platform: Linux-6.1.0-10-amd64-x86_64-with-glibc2.36
- Python version: 3.11.2
- Huggingface_hub version: 0.16.4
- PyArrow version: 12.0.1
- Pandas version: 2.0.3
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/6270/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/6270/timeline
| null | null | null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/6269
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6269/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6269/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6269/events
|
https://github.com/huggingface/datasets/pull/6269
| 1,919,572,790
|
PR_kwDODunzps5bjbDc
| 6,269
|
Test single commit `push_to_hub` API
|
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6269). All of your documentation changes will be reflected on that endpoint.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005864 / 0.011353 (-0.005489) | 0.003535 / 0.011008 (-0.007474) | 0.080732 / 0.038508 (0.042224) | 0.057072 / 0.023109 (0.033963) | 0.334342 / 0.275898 (0.058444) | 0.361345 / 0.323480 (0.037865) | 0.003290 / 0.007986 (-0.004696) | 0.003794 / 0.004328 (-0.000534) | 0.063414 / 0.004250 (0.059163) | 0.046901 / 0.037052 (0.009848) | 0.335973 / 0.258489 (0.077484) | 0.377929 / 0.293841 (0.084088) | 0.027199 / 0.128546 (-0.101348) | 0.008049 / 0.075646 (-0.067597) | 0.261810 / 0.419271 (-0.157462) | 0.044669 / 0.043533 (0.001136) | 0.333600 / 0.255139 (0.078461) | 0.356362 / 0.283200 (0.073162) | 0.020325 / 0.141683 (-0.121358) | 1.458138 / 1.452155 (0.005984) | 1.505923 / 1.492716 (0.013207) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.216456 / 0.018006 (0.198450) | 0.421750 / 0.000490 (0.421261) | 0.007359 / 0.000200 (0.007159) | 0.000246 / 0.000054 (0.000191) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023400 / 0.037411 (-0.014012) | 0.073363 / 0.014526 (0.058838) | 0.083533 / 0.176557 (-0.093023) | 0.144045 / 0.737135 (-0.593090) | 0.084050 / 0.296338 (-0.212288) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.398354 / 0.215209 (0.183145) | 3.982875 / 2.077655 (1.905220) | 2.047299 / 1.504120 (0.543180) | 1.873780 / 1.541195 (0.332585) | 1.977044 / 1.468490 
(0.508554) | 0.497038 / 4.584777 (-4.087739) | 3.039743 / 3.745712 (-0.705969) | 2.832885 / 5.269862 (-2.436977) | 1.827300 / 4.565676 (-2.738377) | 0.057503 / 0.424275 (-0.366772) | 0.006272 / 0.007607 (-0.001335) | 0.468681 / 0.226044 (0.242637) | 4.696551 / 2.268929 (2.427622) | 2.413805 / 55.444624 (-53.030819) | 2.157199 / 6.876477 (-4.719278) | 2.345986 / 2.142072 (0.203914) | 0.584632 / 4.805227 (-4.220595) | 0.124684 / 6.500664 (-6.375980) | 0.060090 / 0.075469 (-0.015379) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.293551 / 1.841788 (-0.548236) | 17.198292 / 8.074308 (9.123984) | 13.677910 / 10.191392 (3.486518) | 0.146633 / 0.680424 (-0.533791) | 0.016711 / 0.534201 (-0.517490) | 0.331644 / 0.579283 (-0.247639) | 0.360148 / 0.434364 (-0.074215) | 0.381194 / 0.540337 (-0.159143) | 0.537952 / 1.386936 (-0.848984) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006020 / 0.011353 (-0.005333) | 0.003557 / 0.011008 (-0.007451) | 0.061926 / 0.038508 (0.023418) | 0.056246 / 0.023109 (0.033137) | 0.446679 / 0.275898 (0.170781) | 0.479843 / 0.323480 (0.156363) | 0.004656 / 0.007986 (-0.003330) | 0.002823 / 0.004328 (-0.001505) | 0.061366 / 0.004250 (0.057115) | 0.045793 / 0.037052 (0.008740) | 0.460807 / 0.258489 (0.202318) | 0.485467 / 0.293841 (0.191626) | 0.028555 / 0.128546 (-0.099991) | 0.007973 / 0.075646 (-0.067674) | 0.068305 / 0.419271 (-0.350966) | 0.040844 / 0.043533 (-0.002689) | 0.463715 / 0.255139 (0.208576) | 0.474553 / 0.283200 (0.191354) | 0.019959 / 0.141683 (-0.121723) | 1.432527 / 1.452155 (-0.019628) | 1.485410 / 1.492716 (-0.007307) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.205555 / 0.018006 (0.187549) | 0.408271 / 0.000490 (0.407781) | 0.004325 / 0.000200 (0.004125) | 0.000076 / 0.000054 (0.000022) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026338 / 0.037411 (-0.011074) | 0.080534 / 0.014526 (0.066008) | 0.093935 / 0.176557 (-0.082622) | 0.146446 / 0.737135 (-0.590689) | 0.092890 / 0.296338 (-0.203448) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.463879 / 0.215209 (0.248670) | 4.646411 / 2.077655 (2.568756) | 2.567320 / 1.504120 (1.063200) | 2.384376 / 1.541195 (0.843181) | 2.412738 / 1.468490 (0.944248) | 0.510240 / 4.584777 (-4.074537) | 3.094988 / 3.745712 (-0.650724) | 2.837700 / 5.269862 (-2.432161) | 1.850163 / 4.565676 (-2.715513) | 0.059320 / 0.424275 (-0.364955) | 0.006330 / 0.007607 (-0.001277) | 0.537770 / 0.226044 (0.311726) | 5.385556 / 2.268929 (3.116627) | 3.036088 / 55.444624 (-52.408536) | 2.650464 / 6.876477 (-4.226013) | 2.755676 / 2.142072 (0.613603) | 0.607353 / 4.805227 (-4.197875) | 0.124589 / 6.500664 (-6.376075) | 0.060778 / 0.075469 (-0.014691) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.343243 / 1.841788 (-0.498545) | 17.630281 / 8.074308 (9.555973) | 14.401219 / 10.191392 (4.209827) | 0.143252 / 0.680424 (-0.537172) | 0.017880 / 0.534201 (-0.516321) | 0.337391 / 0.579283 (-0.241892) | 0.373531 / 0.434364 (-0.060833) | 0.398408 / 0.540337 (-0.141929) | 0.558925 / 1.386936 (-0.828011) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006552 / 0.011353 (-0.004801) | 0.003853 / 0.011008 (-0.007155) | 0.077673 / 0.038508 (0.039165) | 0.066043 / 0.023109 (0.042934) | 0.289858 / 0.275898 (0.013960) | 0.299009 / 0.323480 (-0.024471) | 0.004806 / 0.007986 (-0.003179) | 0.003517 / 0.004328 (-0.000811) | 0.058227 / 0.004250 (0.053977) | 0.052134 / 0.037052 (0.015082) | 0.328800 / 0.258489 (0.070311) | 0.317616 / 0.293841 (0.023776) | 0.028344 / 0.128546 (-0.100202) | 0.007853 / 0.075646 (-0.067794) | 0.291207 / 0.419271 (-0.128065) | 0.052977 / 0.043533 (0.009444) | 0.287548 / 0.255139 (0.032409) | 0.307647 / 0.283200 (0.024448) | 0.023899 / 0.141683 (-0.117784) | 1.382267 / 1.452155 (-0.069888) | 1.589915 / 1.492716 (0.097199) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.246244 / 0.018006 (0.228238) | 0.478255 / 0.000490 (0.477766) | 0.014115 / 0.000200 (0.013915) | 0.000305 / 0.000054 (0.000250) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027033 / 0.037411 (-0.010378) | 0.073988 / 0.014526 (0.059462) | 0.088337 / 0.176557 (-0.088219) | 0.144067 / 0.737135 (-0.593069) | 0.091295 / 0.296338 (-0.205043) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.365904 / 0.215209 (0.150695) | 3.537330 / 2.077655 (1.459675) | 1.678341 / 1.504120 (0.174221) | 1.530297 / 1.541195 (-0.010898) | 1.605634 / 1.468490 
(0.137144) | 0.437461 / 4.584777 (-4.147316) | 3.419040 / 3.745712 (-0.326672) | 3.203549 / 5.269862 (-2.066312) | 1.913214 / 4.565676 (-2.652463) | 0.052675 / 0.424275 (-0.371600) | 0.006681 / 0.007607 (-0.000926) | 0.429269 / 0.226044 (0.203225) | 4.214051 / 2.268929 (1.945122) | 2.217928 / 55.444624 (-53.226696) | 1.842679 / 6.876477 (-5.033798) | 1.867961 / 2.142072 (-0.274111) | 0.550566 / 4.805227 (-4.254661) | 0.118015 / 6.500664 (-6.382649) | 0.054749 / 0.075469 (-0.020720) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.170547 / 1.841788 (-0.671241) | 18.410567 / 8.074308 (10.336259) | 12.729992 / 10.191392 (2.538600) | 0.160426 / 0.680424 (-0.519998) | 0.021259 / 0.534201 (-0.512942) | 0.369573 / 0.579283 (-0.209710) | 0.440350 / 0.434364 (0.005986) | 0.443755 / 0.540337 (-0.096582) | 0.645614 / 1.386936 (-0.741322) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005913 / 0.011353 (-0.005440) | 0.003542 / 0.011008 (-0.007466) | 0.057621 / 0.038508 (0.019113) | 0.065822 / 0.023109 (0.042713) | 0.390847 / 0.275898 (0.114949) | 0.393127 / 0.323480 (0.069647) | 0.005040 / 0.007986 (-0.002945) | 0.002944 / 0.004328 (-0.001384) | 0.069058 / 0.004250 (0.064808) | 0.051594 / 0.037052 (0.014542) | 0.383745 / 0.258489 (0.125256) | 0.414372 / 0.293841 (0.120531) | 0.030038 / 0.128546 (-0.098508) | 0.008109 / 0.075646 (-0.067538) | 0.065444 / 0.419271 (-0.353828) | 0.045974 / 0.043533 (0.002441) | 0.401695 / 0.255139 (0.146556) | 0.417834 / 0.283200 (0.134635) | 0.020137 / 0.141683 (-0.121546) | 1.452130 / 1.452155 (-0.000025) | 1.455259 / 1.492716 (-0.037458) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.228262 / 0.018006 (0.210255) | 0.455155 / 0.000490 (0.454665) | 0.006667 / 0.000200 (0.006467) | 0.000207 / 0.000054 (0.000153) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030159 / 0.037411 (-0.007252) | 0.098478 / 0.014526 (0.083952) | 0.101409 / 0.176557 (-0.075147) | 0.148689 / 0.737135 (-0.588446) | 0.103067 / 0.296338 (-0.193272) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.444095 / 0.215209 (0.228886) | 3.991588 / 2.077655 (1.913934) | 2.147845 / 1.504120 (0.643725) | 2.007871 / 1.541195 (0.466676) | 2.042074 / 1.468490 (0.573584) | 0.451592 / 4.584777 (-4.133185) | 3.439400 / 3.745712 (-0.306312) | 3.107756 / 5.269862 (-2.162106) | 1.909785 / 4.565676 (-2.655891) | 0.051718 / 0.424275 (-0.372558) | 0.006597 / 0.007607 (-0.001010) | 0.480822 / 0.226044 (0.254777) | 4.913235 / 2.268929 (2.644307) | 2.631882 / 55.444624 (-52.812742) | 2.397209 / 6.876477 (-4.479267) | 2.487191 / 2.142072 (0.345119) | 0.566321 / 4.805227 (-4.238906) | 0.121741 / 6.500664 (-6.378924) | 0.053399 / 0.075469 (-0.022070) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.256599 / 1.841788 (-0.585189) | 18.891127 / 8.074308 (10.816819) | 13.219662 / 10.191392 (3.028270) | 0.154570 / 0.680424 (-0.525854) | 0.022599 / 0.534201 (-0.511602) | 0.361998 / 0.579283 (-0.217286) | 0.413287 / 0.434364 (-0.021077) | 0.464867 / 0.540337 (-0.075470) | 0.638880 / 1.386936 (-0.748056) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010625 / 0.011353 (-0.000728) | 0.005129 / 0.011008 (-0.005879) | 0.119975 / 0.038508 (0.081467) | 0.100128 / 0.023109 (0.077019) | 0.448678 / 0.275898 (0.172780) | 0.533150 / 0.323480 (0.209670) | 0.005881 / 0.007986 (-0.002105) | 0.007451 / 0.004328 (0.003123) | 0.090792 / 0.004250 (0.086542) | 0.073416 / 0.037052 (0.036363) | 0.455395 / 0.258489 (0.196906) | 0.497572 / 0.293841 (0.203731) | 0.053112 / 0.128546 (-0.075434) | 0.014619 / 0.075646 (-0.061027) | 0.388023 / 0.419271 (-0.031248) | 0.074004 / 0.043533 (0.030471) | 0.435319 / 0.255139 (0.180180) | 0.465985 / 0.283200 (0.182785) | 0.046991 / 0.141683 (-0.094692) | 1.895717 / 1.452155 (0.443563) | 2.086600 / 1.492716 (0.593884) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.334412 / 0.018006 (0.316406) | 0.645510 / 0.000490 (0.645020) | 0.019175 / 0.000200 (0.018975) | 0.000429 / 0.000054 (0.000374) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034385 / 0.037411 (-0.003026) | 0.108939 / 0.014526 (0.094413) | 0.125937 / 0.176557 (-0.050619) | 0.205643 / 0.737135 (-0.531493) | 0.127662 / 0.296338 (-0.168676) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.674093 / 0.215209 (0.458884) | 6.646554 / 2.077655 (4.568900) | 2.837698 / 1.504120 (1.333578) | 2.397199 / 1.541195 (0.856004) | 2.485856 / 1.468490 
(1.017366) | 0.955142 / 4.584777 (-3.629635) | 5.667462 / 3.745712 (1.921750) | 5.354129 / 5.269862 (0.084268) | 3.301609 / 4.565676 (-1.264068) | 0.106051 / 0.424275 (-0.318224) | 0.009287 / 0.007607 (0.001680) | 0.766678 / 0.226044 (0.540634) | 7.786701 / 2.268929 (5.517772) | 3.665463 / 55.444624 (-51.779161) | 2.982912 / 6.876477 (-3.893564) | 3.053363 / 2.142072 (0.911290) | 1.141090 / 4.805227 (-3.664137) | 0.223975 / 6.500664 (-6.276689) | 0.093024 / 0.075469 (0.017555) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.728175 / 1.841788 (-0.113613) | 25.640134 / 8.074308 (17.565826) | 22.124769 / 10.191392 (11.933377) | 0.237489 / 0.680424 (-0.442935) | 0.030353 / 0.534201 (-0.503848) | 0.509371 / 0.579283 (-0.069913) | 0.642320 / 0.434364 (0.207956) | 0.576889 / 0.540337 (0.036552) | 0.899377 / 1.386936 (-0.487559) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010846 / 0.011353 (-0.000507) | 0.005876 / 0.011008 (-0.005132) | 0.090810 / 0.038508 (0.052302) | 0.106651 / 0.023109 (0.083542) | 0.551064 / 0.275898 (0.275166) | 0.608328 / 0.323480 (0.284848) | 0.007563 / 0.007986 (-0.000423) | 0.004595 / 0.004328 (0.000267) | 0.089125 / 0.004250 (0.084874) | 0.076577 / 0.037052 (0.039525) | 0.579970 / 0.258489 (0.321481) | 0.620214 / 0.293841 (0.326373) | 0.052577 / 0.128546 (-0.075970) | 0.013734 / 0.075646 (-0.061912) | 0.099825 / 0.419271 (-0.319447) | 0.068391 / 0.043533 (0.024858) | 0.564733 / 0.255139 (0.309594) | 0.593925 / 0.283200 (0.310726) | 0.037201 / 0.141683 (-0.104482) | 1.880969 / 1.452155 (0.428815) | 2.065094 / 1.492716 (0.572377) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.426148 / 0.018006 (0.408141) | 0.673935 / 0.000490 (0.673445) | 0.124190 / 0.000200 (0.123990) | 0.001219 / 0.000054 (0.001164) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.040280 / 0.037411 (0.002868) | 0.122042 / 0.014526 (0.107516) | 0.131333 / 0.176557 (-0.045223) | 0.203039 / 0.737135 (-0.534096) | 0.134851 / 0.296338 (-0.161487) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.684599 / 0.215209 (0.469390) | 6.727529 / 2.077655 (4.649874) | 3.255228 / 1.504120 (1.751108) | 2.925865 / 1.541195 (1.384670) | 2.978762 / 1.468490 (1.510272) | 0.931769 / 4.584777 (-3.653008) | 5.988956 / 3.745712 (2.243244) | 5.228049 / 5.269862 (-0.041812) | 3.341470 / 4.565676 (-1.224206) | 0.106737 / 0.424275 (-0.317539) | 0.009847 / 0.007607 (0.002240) | 0.813954 / 0.226044 (0.587909) | 8.137071 / 2.268929 (5.868143) | 4.140725 / 55.444624 (-51.303899) | 3.500579 / 6.876477 (-3.375898) | 3.623120 / 2.142072 (1.481047) | 1.096634 / 4.805227 (-3.708593) | 0.236938 / 6.500664 (-6.263726) | 0.083099 / 0.075469 (0.007630) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.856112 / 1.841788 (0.014324) | 26.531325 / 8.074308 (18.457017) | 24.435866 / 10.191392 (14.244474) | 0.264093 / 0.680424 (-0.416331) | 0.034872 / 0.534201 (-0.499329) | 0.520682 / 0.579283 (-0.058601) | 0.635010 / 0.434364 (0.200646) | 0.645451 / 0.540337 (0.105113) | 0.914616 / 1.386936 (-0.472320) |\n\n</details>\n</details>\n\n\n"
] | 2023-09-29T16:22:31
| 2023-10-02T14:53:06
| null |
CONTRIBUTOR
| null |
Test PR to check the compatibility with https://github.com/huggingface/huggingface_hub/pull/1699
cc @Wauplin
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/6269/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/6269/timeline
| null | null | true
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6269",
"html_url": "https://github.com/huggingface/datasets/pull/6269",
"diff_url": "https://github.com/huggingface/datasets/pull/6269.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6269.patch",
"merged_at": null
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/6268
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6268/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6268/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6268/events
|
https://github.com/huggingface/datasets/pull/6268
| 1,919,010,645
|
PR_kwDODunzps5bhgs7
| 6,268
|
Add repo_id to DatasetInfo
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6268). All of your documentation changes will be reflected on that endpoint.",
"In https://github.com/huggingface/datasets/issues/4129 we want to track the origin of a dataset, e.g. if it comes from multiple datasets.\r\n\r\nI think it's out of scope of DatasetInfo alone, which has info for one dataset only.\r\nTherefore it makes sense to add repo_id, which is for one dataset only.\r\n\r\nIMO if we want to track multiple origins we will need a new DatasetInfo that would have fields relevant to a mix of datasets (out of scope of this PR)\r\n\r\ncc @mariosasko I'd like your opinion on this",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009009 / 0.011353 (-0.002344) | 0.004169 / 0.011008 (-0.006840) | 0.098634 / 0.038508 (0.060126) | 0.069526 / 0.023109 (0.046417) | 0.337963 / 0.275898 (0.062065) | 0.379737 / 0.323480 (0.056257) | 0.004318 / 0.007986 (-0.003668) | 0.005347 / 0.004328 (0.001019) | 0.069875 / 0.004250 (0.065624) | 0.055964 / 0.037052 (0.018912) | 0.340305 / 0.258489 (0.081816) | 0.429718 / 0.293841 (0.135877) | 0.045101 / 0.128546 (-0.083445) | 0.012610 / 0.075646 (-0.063036) | 0.312366 / 0.419271 (-0.106905) | 0.064711 / 0.043533 (0.021178) | 0.345216 / 0.255139 (0.090077) | 0.367245 / 0.283200 (0.084046) | 0.034638 / 0.141683 (-0.107045) | 1.541947 / 1.452155 (0.089793) | 1.645268 / 1.492716 (0.152551) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.233501 / 0.018006 (0.215495) | 0.514207 / 0.000490 (0.513717) | 0.014271 / 0.000200 (0.014072) | 0.000366 / 0.000054 (0.000311) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026288 / 0.037411 (-0.011124) | 0.083206 / 0.014526 (0.068680) | 0.098172 / 0.176557 (-0.078385) | 0.158529 / 0.737135 (-0.578606) | 0.095183 / 0.296338 (-0.201155) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.538300 / 0.215209 (0.323091) | 5.486939 / 2.077655 (3.409285) | 2.321812 / 1.504120 (0.817692) | 2.002124 / 1.541195 (0.460929) | 2.045043 / 1.468490 
(0.576553) | 0.852772 / 4.584777 (-3.732005) | 5.014897 / 3.745712 (1.269185) | 4.428115 / 5.269862 (-0.841746) | 2.750126 / 4.565676 (-1.815550) | 0.099028 / 0.424275 (-0.325247) | 0.007678 / 0.007607 (0.000070) | 0.664463 / 0.226044 (0.438418) | 6.617811 / 2.268929 (4.348883) | 2.888382 / 55.444624 (-52.556242) | 2.190753 / 6.876477 (-4.685724) | 2.414586 / 2.142072 (0.272513) | 1.010302 / 4.805227 (-3.794925) | 0.194925 / 6.500664 (-6.305739) | 0.063490 / 0.075469 (-0.011979) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.543464 / 1.841788 (-0.298323) | 20.566666 / 8.074308 (12.492358) | 19.410745 / 10.191392 (9.219353) | 0.207077 / 0.680424 (-0.473347) | 0.028895 / 0.534201 (-0.505306) | 0.427525 / 0.579283 (-0.151758) | 0.535450 / 0.434364 (0.101086) | 0.494632 / 0.540337 (-0.045705) | 0.723705 / 1.386936 (-0.663231) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008209 / 0.011353 (-0.003144) | 0.004184 / 0.011008 (-0.006824) | 0.072420 / 0.038508 (0.033912) | 0.066851 / 0.023109 (0.043742) | 0.424137 / 0.275898 (0.148239) | 0.473156 / 0.323480 (0.149676) | 0.005394 / 0.007986 (-0.002591) | 0.003898 / 0.004328 (-0.000430) | 0.069996 / 0.004250 (0.065746) | 0.053113 / 0.037052 (0.016061) | 0.453214 / 0.258489 (0.194725) | 0.495921 / 0.293841 (0.202080) | 0.043028 / 0.128546 (-0.085519) | 0.012320 / 0.075646 (-0.063326) | 0.080270 / 0.419271 (-0.339002) | 0.053337 / 0.043533 (0.009804) | 0.436604 / 0.255139 (0.181465) | 0.463422 / 0.283200 (0.180223) | 0.030277 / 0.141683 (-0.111406) | 1.560261 / 1.452155 (0.108106) | 1.647209 / 1.492716 (0.154493) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.232556 / 0.018006 (0.214550) | 0.502387 / 0.000490 (0.501897) | 0.006688 / 0.000200 (0.006488) | 0.000118 / 0.000054 (0.000064) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030204 / 0.037411 (-0.007207) | 0.089438 / 0.014526 (0.074912) | 0.118939 / 0.176557 (-0.057617) | 0.160537 / 0.737135 (-0.576598) | 0.113432 / 0.296338 (-0.182906) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.586469 / 0.215209 (0.371260) | 5.916156 / 2.077655 (3.838502) | 2.904960 / 1.504120 (1.400840) | 2.346838 / 1.541195 (0.805644) | 2.373688 / 1.468490 (0.905198) | 0.829917 / 4.584777 (-3.754860) | 4.851283 / 3.745712 (1.105571) | 4.220103 / 5.269862 (-1.049758) | 2.706139 / 4.565676 (-1.859538) | 0.094095 / 0.424275 (-0.330180) | 0.008201 / 0.007607 (0.000594) | 0.699099 / 0.226044 (0.473054) | 7.046940 / 2.268929 (4.778011) | 3.374837 / 55.444624 (-52.069788) | 2.690839 / 6.876477 (-4.185638) | 2.845717 / 2.142072 (0.703645) | 0.989698 / 4.805227 (-3.815529) | 0.190413 / 6.500664 (-6.310251) | 0.066233 / 0.075469 (-0.009236) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.513607 / 1.841788 (-0.328180) | 21.544200 / 8.074308 (13.469892) | 20.297337 / 10.191392 (10.105945) | 0.216390 / 0.680424 (-0.464034) | 0.029962 / 0.534201 (-0.504239) | 0.451531 / 0.579283 (-0.127752) | 0.530147 / 0.434364 (0.095783) | 0.520739 / 0.540337 (-0.019598) | 0.716431 / 1.386936 (-0.670505) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006509 / 0.011353 (-0.004844) | 0.003987 / 0.011008 (-0.007022) | 0.085233 / 0.038508 (0.046725) | 0.077765 / 0.023109 (0.054656) | 0.310467 / 0.275898 (0.034569) | 0.343363 / 0.323480 (0.019883) | 0.005557 / 0.007986 (-0.002429) | 0.003430 / 0.004328 (-0.000898) | 0.064948 / 0.004250 (0.060697) | 0.056864 / 0.037052 (0.019812) | 0.314005 / 0.258489 (0.055516) | 0.360638 / 0.293841 (0.066798) | 0.031134 / 0.128546 (-0.097412) | 0.008869 / 0.075646 (-0.066777) | 0.286409 / 0.419271 (-0.132862) | 0.051338 / 0.043533 (0.007805) | 0.311329 / 0.255139 (0.056190) | 0.334373 / 0.283200 (0.051174) | 0.024816 / 0.141683 (-0.116867) | 1.502872 / 1.452155 (0.050718) | 1.569941 / 1.492716 (0.077224) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.269639 / 0.018006 (0.251633) | 0.558510 / 0.000490 (0.558020) | 0.011748 / 0.000200 (0.011548) | 0.000234 / 0.000054 (0.000180) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029139 / 0.037411 (-0.008272) | 0.083586 / 0.014526 (0.069060) | 0.102426 / 0.176557 (-0.074131) | 0.162398 / 0.737135 (-0.574737) | 0.101364 / 0.296338 (-0.194975) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.382281 / 0.215209 (0.167072) | 3.826412 / 2.077655 (1.748758) | 1.815911 / 1.504120 (0.311791) | 1.644539 / 1.541195 (0.103344) | 1.688487 / 1.468490 
(0.219996) | 0.482115 / 4.584777 (-4.102662) | 3.574773 / 3.745712 (-0.170939) | 3.262733 / 5.269862 (-2.007129) | 2.058115 / 4.565676 (-2.507562) | 0.056367 / 0.424275 (-0.367908) | 0.007233 / 0.007607 (-0.000374) | 0.456859 / 0.226044 (0.230815) | 4.565935 / 2.268929 (2.297006) | 2.311802 / 55.444624 (-53.132823) | 1.943936 / 6.876477 (-4.932541) | 2.129811 / 2.142072 (-0.012261) | 0.575098 / 4.805227 (-4.230129) | 0.130495 / 6.500664 (-6.370169) | 0.059757 / 0.075469 (-0.015712) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.238495 / 1.841788 (-0.603293) | 18.940000 / 8.074308 (10.865692) | 14.034240 / 10.191392 (3.842848) | 0.166418 / 0.680424 (-0.514006) | 0.018420 / 0.534201 (-0.515781) | 0.395330 / 0.579283 (-0.183953) | 0.413518 / 0.434364 (-0.020846) | 0.461499 / 0.540337 (-0.078838) | 0.661371 / 1.386936 (-0.725565) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006673 / 0.011353 (-0.004680) | 0.004335 / 0.011008 (-0.006673) | 0.064568 / 0.038508 (0.026060) | 0.072763 / 0.023109 (0.049653) | 0.429488 / 0.275898 (0.153590) | 0.456900 / 0.323480 (0.133420) | 0.005481 / 0.007986 (-0.002505) | 0.003649 / 0.004328 (-0.000680) | 0.064975 / 0.004250 (0.060724) | 0.056839 / 0.037052 (0.019786) | 0.439451 / 0.258489 (0.180962) | 0.461691 / 0.293841 (0.167850) | 0.031455 / 0.128546 (-0.097092) | 0.008848 / 0.075646 (-0.066798) | 0.071719 / 0.419271 (-0.347553) | 0.047116 / 0.043533 (0.003583) | 0.429055 / 0.255139 (0.173916) | 0.434204 / 0.283200 (0.151004) | 0.022594 / 0.141683 (-0.119089) | 1.539231 / 1.452155 (0.087077) | 1.568111 / 1.492716 (0.075394) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.267374 / 0.018006 (0.249368) | 0.553202 / 0.000490 (0.552712) | 0.005410 / 0.000200 (0.005210) | 0.000101 / 0.000054 (0.000046) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031478 / 0.037411 (-0.005933) | 0.092438 / 0.014526 (0.077912) | 0.103874 / 0.176557 (-0.072682) | 0.158428 / 0.737135 (-0.578708) | 0.111617 / 0.296338 (-0.184721) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.434783 / 0.215209 (0.219574) | 4.332536 / 2.077655 (2.254881) | 2.354522 / 1.504120 (0.850402) | 2.220271 / 1.541195 (0.679076) | 2.338524 / 1.468490 (0.870034) | 0.494508 / 4.584777 (-4.090269) | 3.619592 / 3.745712 (-0.126120) | 3.320897 / 5.269862 (-1.948964) | 2.107475 / 4.565676 (-2.458202) | 0.058479 / 0.424275 (-0.365796) | 0.007427 / 0.007607 (-0.000180) | 0.509298 / 0.226044 (0.283254) | 5.067940 / 2.268929 (2.799012) | 2.815336 / 55.444624 (-52.629288) | 2.470958 / 6.876477 (-4.405519) | 2.672801 / 2.142072 (0.530728) | 0.588199 / 4.805227 (-4.217028) | 0.134062 / 6.500664 (-6.366602) | 0.060951 / 0.075469 (-0.014518) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.353955 / 1.841788 (-0.487832) | 20.386012 / 8.074308 (12.311704) | 15.032463 / 10.191392 (4.841071) | 0.167243 / 0.680424 (-0.513181) | 0.020426 / 0.534201 (-0.513775) | 0.396815 / 0.579283 (-0.182469) | 0.421806 / 0.434364 (-0.012558) | 0.471866 / 0.540337 (-0.068471) | 0.667206 / 1.386936 (-0.719730) |\n\n</details>\n</details>\n\n\n",
"Really happy to see this! It could also be helpful to track some other metadata about how the dataset was built in the future. i.e. for the Stack loaded like this:\r\n\r\n```\r\nds = load_dataset(\"bigcode/the-stack\", data_dir=\"data/dockerfile\", split=\"train\")\r\n```\r\nIt could be helpful to have easy access to the `data_dir` argument used during loading since that changes the training data quite a bit vs. loading the full dataset. You can also recover this from `download_checksums`, which seems a bit hacky. That is not necessary for this PR, though.\r\n",
"Perhaps it is also interesting to track the revision? I suppose the version also kind of covers that.\r\n\r\nThat said, this is looking great already! I'm quite excited about this. Losing the `repo_id` after merging (different) datasets also makes sense to me, well done.",
"One other thought. Is it worth tracking if a `token` was passed during loading? \r\n\r\nThe Hub ID for private datasets could in some cases contain information someone wouldn't want to make public i.e. `davanstrien/super_secret_dataset_using_GPT_created_data`. \r\n\r\nAdding a bool like `is_private` could then be used by another library to determine if the dataset ID should be shared or not (or default to not sharing the ID for private datasets). i.e. in SpanMarker @tomaarsen might do a check like \r\n\r\n```python\r\nif ds.is_private and not push_hub_id_for_private_ds:\r\n\tds_name = None\r\n```\r\nPotentially this is overkill but could be useful for downstream libraries who might use this information for creating automatic model cards. \r\n\r\n\r\n",
"We should probably find a way to remove `DatasetInfo`, as (most of) its attributes are outdated (homepage, description, etc.), not introduce new ones :). But I guess storing `repo_id` there is the simplest solution for now, so I'm OK with it.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007757 / 0.011353 (-0.003595) | 0.004543 / 0.011008 (-0.006465) | 0.100193 / 0.038508 (0.061685) | 0.082333 / 0.023109 (0.059224) | 0.374586 / 0.275898 (0.098688) | 0.412617 / 0.323480 (0.089137) | 0.006148 / 0.007986 (-0.001838) | 0.003826 / 0.004328 (-0.000503) | 0.077077 / 0.004250 (0.072827) | 0.064057 / 0.037052 (0.027005) | 0.391435 / 0.258489 (0.132946) | 0.436439 / 0.293841 (0.142599) | 0.036534 / 0.128546 (-0.092012) | 0.009986 / 0.075646 (-0.065660) | 0.344243 / 0.419271 (-0.075028) | 0.062013 / 0.043533 (0.018480) | 0.378113 / 0.255139 (0.122974) | 0.398476 / 0.283200 (0.115276) | 0.026552 / 0.141683 (-0.115131) | 1.740505 / 1.452155 (0.288350) | 1.835684 / 1.492716 (0.342968) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.267917 / 0.018006 (0.249911) | 0.510676 / 0.000490 (0.510186) | 0.010810 / 0.000200 (0.010610) | 0.000383 / 0.000054 (0.000328) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032113 / 0.037411 (-0.005299) | 0.097679 / 0.014526 (0.083153) | 0.113213 / 0.176557 (-0.063344) | 0.177897 / 0.737135 (-0.559238) | 0.111761 / 0.296338 (-0.184577) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.450544 / 0.215209 (0.235335) | 4.476746 / 2.077655 (2.399091) | 2.205391 / 1.504120 (0.701271) | 2.006457 / 1.541195 (0.465262) | 2.058859 / 1.468490 
(0.590369) | 0.571549 / 4.584777 (-4.013228) | 4.175039 / 3.745712 (0.429327) | 3.815445 / 5.269862 (-1.454416) | 2.376673 / 4.565676 (-2.189004) | 0.067048 / 0.424275 (-0.357227) | 0.008544 / 0.007607 (0.000937) | 0.536384 / 0.226044 (0.310340) | 5.386232 / 2.268929 (3.117304) | 2.825620 / 55.444624 (-52.619004) | 2.339821 / 6.876477 (-4.536656) | 2.535736 / 2.142072 (0.393663) | 0.679572 / 4.805227 (-4.125655) | 0.156799 / 6.500664 (-6.343865) | 0.071667 / 0.075469 (-0.003802) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.512198 / 1.841788 (-0.329590) | 21.786760 / 8.074308 (13.712452) | 16.386274 / 10.191392 (6.194882) | 0.169108 / 0.680424 (-0.511316) | 0.021312 / 0.534201 (-0.512889) | 0.466153 / 0.579283 (-0.113130) | 0.496192 / 0.434364 (0.061829) | 0.549420 / 0.540337 (0.009082) | 0.780974 / 1.386936 (-0.605962) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007644 / 0.011353 (-0.003709) | 0.004654 / 0.011008 (-0.006354) | 0.075280 / 0.038508 (0.036772) | 0.083044 / 0.023109 (0.059935) | 0.481704 / 0.275898 (0.205805) | 0.514828 / 0.323480 (0.191348) | 0.006245 / 0.007986 (-0.001740) | 0.003715 / 0.004328 (-0.000614) | 0.074498 / 0.004250 (0.070248) | 0.064406 / 0.037052 (0.027353) | 0.481874 / 0.258489 (0.223385) | 0.518527 / 0.293841 (0.224686) | 0.037549 / 0.128546 (-0.090997) | 0.010106 / 0.075646 (-0.065541) | 0.084266 / 0.419271 (-0.335006) | 0.056659 / 0.043533 (0.013126) | 0.497707 / 0.255139 (0.242568) | 0.503201 / 0.283200 (0.220001) | 0.027086 / 0.141683 (-0.114597) | 1.834715 / 1.452155 (0.382560) | 1.919927 / 1.492716 (0.427210) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.249288 / 0.018006 (0.231282) | 0.500950 / 0.000490 (0.500460) | 0.005856 / 0.000200 (0.005656) | 0.000120 / 0.000054 (0.000065) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.037674 / 0.037411 (0.000263) | 0.111141 / 0.014526 (0.096615) | 0.123408 / 0.176557 (-0.053149) | 0.186604 / 0.737135 (-0.550531) | 0.125360 / 0.296338 (-0.170979) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.520480 / 0.215209 (0.305271) | 5.171108 / 2.077655 (3.093453) | 2.812746 / 1.504120 (1.308626) | 2.602941 / 1.541195 (1.061746) | 2.666196 / 1.468490 (1.197706) | 0.578684 / 4.584777 (-4.006092) | 4.238722 / 3.745712 (0.493010) | 3.844361 / 5.269862 (-1.425501) | 2.369214 / 4.565676 (-2.196462) | 0.068543 / 0.424275 (-0.355732) | 0.008695 / 0.007607 (0.001088) | 0.621869 / 0.226044 (0.395825) | 6.200566 / 2.268929 (3.931637) | 3.340846 / 55.444624 (-52.103779) | 2.920691 / 6.876477 (-3.955786) | 3.132438 / 2.142072 (0.990366) | 0.697394 / 4.805227 (-4.107834) | 0.158385 / 6.500664 (-6.342280) | 0.072566 / 0.075469 (-0.002903) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.599070 / 1.841788 (-0.242717) | 22.767139 / 8.074308 (14.692831) | 17.053988 / 10.191392 (6.862596) | 0.188414 / 0.680424 (-0.492009) | 0.023409 / 0.534201 (-0.510792) | 0.472092 / 0.579283 (-0.107191) | 0.486107 / 0.434364 (0.051743) | 0.562190 / 0.540337 (0.021852) | 0.791606 / 1.386936 (-0.595330) |\n\n</details>\n</details>\n\n\n"
] | 2023-09-29T10:24:55
| 2023-10-01T15:29:45
| null |
MEMBER
| null |
```python
from datasets import load_dataset
ds = load_dataset("lhoestq/demo1", split="train")
ds = ds.map(lambda x: {}, num_proc=2).filter(lambda x: True).remove_columns(["id"])
print(ds.repo_id)
# lhoestq/demo1
```
- repo_id is None when the dataset doesn't come from the Hub, e.g. from Dataset.from_dict
- repo_id is set to None when concatenating datasets with different repo ids (see the sketch below)
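A minimal sketch of the two `None` cases above, assuming the `repo_id` attribute introduced in this PR (the second repo name, `lhoestq/demo2`, is hypothetical, and both Hub repos are assumed to share the same features so they can be concatenated):
```python
from datasets import Dataset, concatenate_datasets, load_dataset

# Case 1: not loaded from the Hub -> repo_id is None
local_ds = Dataset.from_dict({"text": ["a", "b"]})
print(local_ds.repo_id)  # None

# Case 2: concatenating datasets with different repo ids -> repo_id is None
ds_a = load_dataset("lhoestq/demo1", split="train")
ds_b = load_dataset("lhoestq/demo2", split="train")  # hypothetical repo
print(concatenate_datasets([ds_a, ds_b]).repo_id)  # None
```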
related to https://github.com/huggingface/datasets/issues/4129
TODO:
- [ ] discuss if it's ok for now
- [ ] tests
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/6268/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/6268/timeline
| null | null | true
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6268",
"html_url": "https://github.com/huggingface/datasets/pull/6268",
"diff_url": "https://github.com/huggingface/datasets/pull/6268.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6268.patch",
"merged_at": null
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/6267
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6267/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6267/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6267/events
|
https://github.com/huggingface/datasets/issues/6267
| 1,916,443,262
|
I_kwDODunzps5yOpp-
| 6,267
|
Multi label class encoding
|
{
"login": "jmif",
"id": 1000442,
"node_id": "MDQ6VXNlcjEwMDA0NDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1000442?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmif",
"html_url": "https://github.com/jmif",
"followers_url": "https://api.github.com/users/jmif/followers",
"following_url": "https://api.github.com/users/jmif/following{/other_user}",
"gists_url": "https://api.github.com/users/jmif/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmif/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmif/subscriptions",
"organizations_url": "https://api.github.com/users/jmif/orgs",
"repos_url": "https://api.github.com/users/jmif/repos",
"events_url": "https://api.github.com/users/jmif/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmif/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null |
[
"You can use a `Sequence(ClassLabel(...))` feature type to represent a list of labels, and `cast_column`/`cast` to perform the \"string to label\" conversion (`class_encode_column` does support nested fields), e.g., in your case:\r\n```python\r\nfrom datasets import Dataset, Sequence, ClassLabel\r\ndata = {\r\n 'text': ['one', 'two', 'three', 'four'],\r\n 'labels': [['a', 'b'], ['b'], ['b', 'c'], ['a', 'd']]\r\n}\r\n\r\ndataset = Dataset.from_dict(data)\r\ndataset = dataset.cast_column('labels', Sequence(ClassLabel(names=[\"a\", \"b\", \"c\", \"d\"])))\r\n```",
"Great! Can you elaborate on \"class_encode_column does support nested fields\"? Do you mean that there is a way to `class_encode_column` on a Sequence?",
"Yes, exactly! This would be a nice contribution, though.",
"Sorry, I'm still not following. Are you saying that there currently exists a way to call `class_encode_column` on a `Sequence(ClassLabel)` type? Or that the underlying data structures support it and a contribution of a method to do that would be welcome?"
] | 2023-09-27T22:48:08
| 2023-10-05T12:51:23
| null |
NONE
| null |
### Feature request
I have a multi-label dataset and I'd like to be able to class-encode the column and store the mapping directly in the features, just as I can with a single-label column. `class_encode_column` currently does not support multi-label columns.
Here's an example of what I'd like to encode:
```python
from datasets import Dataset

data = {
    'text': ['one', 'two', 'three', 'four'],
    'labels': [['a', 'b'], ['b'], ['b', 'c'], ['a', 'd']]
}
dataset = Dataset.from_dict(data)
# Desired behavior (currently unsupported for multi-label columns):
dataset = dataset.class_encode_column('labels')
```
I did some digging into the code base to evaluate the feasibility of this (note that I'm very new to this code base). From what I can tell, the `ClassLabel` feature is ultimately stored as a raw int type, so I thought a `MultiLabel` feature could similarly be stored as a Sequence of ints, thus not requiring significant serialization/conversion work to/from Arrow.
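For reference, a minimal sketch of how today's feature types already express "a sequence of ints backed by label names", which mirrors the proposed storage (this is not an actual `MultiLabel` implementation):
```python
from datasets import ClassLabel, Dataset, Features, Sequence, Value

features = Features({
    "text": Value("string"),
    "labels": Sequence(ClassLabel(names=["a", "b", "c", "d"])),
})
# The labels are stored as plain ints; the name <-> int mapping lives in
# the feature metadata, not in the data itself.
dataset = Dataset.from_dict(
    {"text": ["one", "two"], "labels": [[0, 1], [1]]},
    features=features,
)
print(dataset.features["labels"].feature.int2str([0, 1]))  # ['a', 'b']
```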
I did a POC of this [here](https://github.com/huggingface/datasets/commit/15443098e9ce053943172f7ec6fce3769d7dff6e) and included a simple test case (please excuse all the commented-out tests; I was going for POC speed and didn't want to fight the IDE to debug a single test). In the test I just assert that `num_classes` is the same to show that things serialize properly, but if you set a breakpoint after loading from disk you'll see that the dataset is correct and the feature is as expected.
After digging more, I noticed a few issues:
- After loading from disk, I noticed the type of the `labels` feature is `Sequence`, not `MultiLabel` (though the added `feature` attribute came through). This doesn't happen for `ClassLabel`, but I couldn't find the encode/decode code paths that handle this.
- I subclass `Sequence` in `MultiLabel` to leverage the existing serialization, but this misses the custom encode logic that `ClassLabel` has. I'm not sure of the best way to approach this, as I haven't fully understood the encode/decode flow for datasets. I suspect my simple implementation will need some improvement, as it'll require a significant amount of repeated logic to mimic `ClassLabel` behavior.
### Motivation
See above: we would like support for multi-label class encodings.
### Your contribution
This would be a big help for us and we're open to contributing, but I'll likely need some guidance on how to implement this to fit the encode/decode flow. Some suggestions on tests would be great too; I'm guessing that, in addition to the class-encode tests (which I'll need to expand), we'll need encode/decode tests.
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/6267/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/6267/timeline
| null | null | null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/6266
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6266/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6266/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6266/events
|
https://github.com/huggingface/datasets/pull/6266
| 1,916,334,394
|
PR_kwDODunzps5bYYb8
| 6,266
|
Use LibYAML with PyYAML if available
|
{
"login": "bryant1410",
"id": 3905501,
"node_id": "MDQ6VXNlcjM5MDU1MDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/3905501?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bryant1410",
"html_url": "https://github.com/bryant1410",
"followers_url": "https://api.github.com/users/bryant1410/followers",
"following_url": "https://api.github.com/users/bryant1410/following{/other_user}",
"gists_url": "https://api.github.com/users/bryant1410/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bryant1410/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bryant1410/subscriptions",
"organizations_url": "https://api.github.com/users/bryant1410/orgs",
"repos_url": "https://api.github.com/users/bryant1410/repos",
"events_url": "https://api.github.com/users/bryant1410/events{/privacy}",
"received_events_url": "https://api.github.com/users/bryant1410/received_events",
"type": "User",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6266). All of your documentation changes will be reflected on that endpoint.",
"On Ubuntu, if `libyaml-dev` is installed, you can install PyYAML 6.0.1 with LibYAML with the following command (as it's automatically detected):\r\n\r\n```bash\r\npip install git+https://github.com/yaml/[email protected]\r\n```",
"Are the failing tests flaky?",
"We use `huggingface_hub`'s RepoCard API instead of these modules to parse the YAML block (notice the deprecations), so the `huggingface_hub` repo is the right place to suggest these changes.\r\n\r\nPersonally, I'm not a fan of these changes, as a single non-standard usage of the `ClassLabel` type is not a sufficient reason to merge them. Also, the dataset in question stores data in a single Parquet file, with the features info embedded in its (schema) metadata, which means the YAML parsing can be skipped while preserving the features by directly loading the Parquet file:\r\n```python\r\nfrom datasets import load_dataset\r\nds = load_dataset(\"parquet\", data_files=\"https://huggingface.co/datasets/HuggingFaceM4/SugarCrepe_swap_obj/resolve/main/data/test-00000-of-00001-ca2ae6017a2336d7.parquet\")\r\n```\r\n\r\nPS: Yes, these tests are flaky. We are working on fixing them.",
"Oh, I didn't realize they were deprecated. Thanks for the tip on how to work around this issue!\r\n\r\nFor future reference, the places to change the code in `huggingface_hub` would be:\r\n\r\nhttps://github.com/huggingface/huggingface_hub/blob/89cc69105074f1d071e0471144605f3cdfe1dab3/src/huggingface_hub/repocard.py#L506\r\n\r\nhttps://github.com/huggingface/huggingface_hub/blob/89cc69105074f1d071e0471144605f3cdfe1dab3/src/huggingface_hub/utils/_fixes.py#L34"
] | 2023-09-27T21:13:36
| 2023-09-28T14:29:24
| null |
CONTRIBUTOR
| null |
PyYAML, the YAML framework used in this library, allows the use of LibYAML to accelerate the `load` and `dump` methods. To use it, a user first needs to install a PyYAML build that uses LibYAML (not available on PyPI; it has to be installed manually). Then, to actually use the fast classes, PyYAML suggests importing the LibYAML versions of the `Loader` and `Dumper` and falling back to the default ones. This PR implements this change. See the [PyYAML docs](https://pyyaml.org/wiki/PyYAMLDocumentation) for more info.
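For reference, a minimal sketch of the fallback idiom from the PyYAML docs that this PR follows (not the exact patch):
```python
import yaml

# Prefer the LibYAML-backed classes when PyYAML was built with LibYAML;
# fall back to the pure-Python implementations otherwise.
try:
    from yaml import CSafeLoader as SafeLoader, CSafeDumper as SafeDumper
except ImportError:
    from yaml import SafeLoader, SafeDumper

metadata = yaml.load("dataset_info:\n  size: 1\n", Loader=SafeLoader)
print(yaml.dump(metadata, Dumper=SafeDumper))
```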
This change was motivated after trying to use any of [the SugarCREPE datasets in the Hub](https://huggingface.co/datasets?search=sugarcrepe) provided by [the org HuggingFaceM4](https://huggingface.co/datasets/HuggingFaceM4). Such datasets save a lot of information (~1MB) in the YAML metadata of the `README.md` file, and I noticed this slows down the data loading process. BTW, I also noticed that computing the cache file names for them is slow because it tries to hash an instance of `DatasetInfo`, which in turn contains all this metadata.
Also, I changed two list comprehensions into generator expressions to avoid allocating extra memory unnecessarily.
And BTW, there's [an issue in PyYAML suggesting to make this automatic](https://github.com/yaml/pyyaml/issues/437).
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/6266/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/6266/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6266",
"html_url": "https://github.com/huggingface/datasets/pull/6266",
"diff_url": "https://github.com/huggingface/datasets/pull/6266.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6266.patch",
"merged_at": null
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/6265
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6265/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6265/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6265/events
|
https://github.com/huggingface/datasets/pull/6265
| 1,915,651,566
|
PR_kwDODunzps5bWDfc
| 6,265
|
Remove `apache_beam` import in `BeamBasedBuilder._save_info`
|
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005896 / 0.011353 (-0.005457) | 0.003642 / 0.011008 (-0.007366) | 0.081917 / 0.038508 (0.043409) | 0.059513 / 0.023109 (0.036404) | 0.341422 / 0.275898 (0.065524) | 0.359278 / 0.323480 (0.035798) | 0.004707 / 0.007986 (-0.003279) | 0.002938 / 0.004328 (-0.001390) | 0.063095 / 0.004250 (0.058845) | 0.051777 / 0.037052 (0.014725) | 0.321114 / 0.258489 (0.062625) | 0.363823 / 0.293841 (0.069982) | 0.027590 / 0.128546 (-0.100957) | 0.007846 / 0.075646 (-0.067800) | 0.261197 / 0.419271 (-0.158074) | 0.045812 / 0.043533 (0.002279) | 0.319787 / 0.255139 (0.064648) | 0.341839 / 0.283200 (0.058640) | 0.021913 / 0.141683 (-0.119770) | 1.397525 / 1.452155 (-0.054630) | 1.495902 / 1.492716 (0.003186) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.224815 / 0.018006 (0.206809) | 0.425780 / 0.000490 (0.425290) | 0.006934 / 0.000200 (0.006734) | 0.000225 / 0.000054 (0.000171) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024342 / 0.037411 (-0.013070) | 0.073923 / 0.014526 (0.059398) | 0.082108 / 0.176557 (-0.094448) | 0.143017 / 0.737135 (-0.594119) | 0.083163 / 0.296338 (-0.213175) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.398244 / 0.215209 (0.183035) | 3.957688 / 2.077655 (1.880033) | 1.904615 / 1.504120 (0.400495) | 1.710353 / 1.541195 (0.169158) | 1.798980 / 1.468490 
(0.330490) | 0.499307 / 4.584777 (-4.085470) | 3.026734 / 3.745712 (-0.718978) | 2.923940 / 5.269862 (-2.345922) | 1.831870 / 4.565676 (-2.733807) | 0.058551 / 0.424275 (-0.365724) | 0.006403 / 0.007607 (-0.001204) | 0.464164 / 0.226044 (0.238119) | 4.644556 / 2.268929 (2.375628) | 2.341455 / 55.444624 (-53.103169) | 2.004385 / 6.876477 (-4.872092) | 2.051819 / 2.142072 (-0.090253) | 0.585610 / 4.805227 (-4.219617) | 0.124735 / 6.500664 (-6.375929) | 0.061150 / 0.075469 (-0.014319) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.224665 / 1.841788 (-0.617122) | 17.476227 / 8.074308 (9.401919) | 13.867617 / 10.191392 (3.676225) | 0.144177 / 0.680424 (-0.536247) | 0.017045 / 0.534201 (-0.517156) | 0.337468 / 0.579283 (-0.241815) | 0.374476 / 0.434364 (-0.059888) | 0.393428 / 0.540337 (-0.146910) | 0.535335 / 1.386936 (-0.851601) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006208 / 0.011353 (-0.005145) | 0.003650 / 0.011008 (-0.007359) | 0.062843 / 0.038508 (0.024335) | 0.062272 / 0.023109 (0.039162) | 0.446336 / 0.275898 (0.170438) | 0.477476 / 0.323480 (0.153996) | 0.004862 / 0.007986 (-0.003124) | 0.002822 / 0.004328 (-0.001506) | 0.063427 / 0.004250 (0.059177) | 0.049023 / 0.037052 (0.011971) | 0.453633 / 0.258489 (0.195144) | 0.486494 / 0.293841 (0.192653) | 0.028634 / 0.128546 (-0.099912) | 0.008187 / 0.075646 (-0.067460) | 0.068846 / 0.419271 (-0.350425) | 0.041104 / 0.043533 (-0.002429) | 0.446646 / 0.255139 (0.191507) | 0.468860 / 0.283200 (0.185660) | 0.020980 / 0.141683 (-0.120703) | 1.455565 / 1.452155 (0.003410) | 1.511142 / 1.492716 (0.018426) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.224242 / 0.018006 (0.206236) | 0.408483 / 0.000490 (0.407993) | 0.003495 / 0.000200 (0.003296) | 0.000076 / 0.000054 (0.000022) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027286 / 0.037411 (-0.010125) | 0.081151 / 0.014526 (0.066625) | 0.096598 / 0.176557 (-0.079959) | 0.146193 / 0.737135 (-0.590942) | 0.092213 / 0.296338 (-0.204125) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.463837 / 0.215209 (0.248628) | 4.636820 / 2.077655 (2.559165) | 2.576100 / 1.504120 (1.071980) | 2.396974 / 1.541195 (0.855779) | 2.461526 / 1.468490 (0.993036) | 0.502360 / 4.584777 (-4.082417) | 3.099973 / 3.745712 (-0.645739) | 2.937260 / 5.269862 (-2.332602) | 1.871274 / 4.565676 (-2.694402) | 0.057913 / 0.424275 (-0.366362) | 0.006511 / 0.007607 (-0.001096) | 0.536917 / 0.226044 (0.310873) | 5.396966 / 2.268929 (3.128038) | 3.015646 / 55.444624 (-52.428978) | 2.673793 / 6.876477 (-4.202684) | 2.712376 / 2.142072 (0.570304) | 0.591632 / 4.805227 (-4.213595) | 0.124872 / 6.500664 (-6.375792) | 0.061820 / 0.075469 (-0.013649) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.356828 / 1.841788 (-0.484960) | 18.076995 / 8.074308 (10.002687) | 15.116482 / 10.191392 (4.925090) | 0.151375 / 0.680424 (-0.529049) | 0.017867 / 0.534201 (-0.516334) | 0.335012 / 0.579283 (-0.244271) | 0.384137 / 0.434364 (-0.050226) | 0.397792 / 0.540337 (-0.142546) | 0.551521 / 1.386936 (-0.835415) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009418 / 0.011353 (-0.001935) | 0.005186 / 0.011008 (-0.005822) | 0.112270 / 0.038508 (0.073761) | 0.114856 / 0.023109 (0.091747) | 0.402267 / 0.275898 (0.126369) | 0.445213 / 0.323480 (0.121733) | 0.005588 / 0.007986 (-0.002398) | 0.004315 / 0.004328 (-0.000013) | 0.083561 / 0.004250 (0.079311) | 0.087319 / 0.037052 (0.050267) | 0.400989 / 0.258489 (0.142500) | 0.455636 / 0.293841 (0.161795) | 0.045168 / 0.128546 (-0.083378) | 0.010939 / 0.075646 (-0.064707) | 0.400120 / 0.419271 (-0.019151) | 0.071599 / 0.043533 (0.028066) | 0.418112 / 0.255139 (0.162973) | 0.443889 / 0.283200 (0.160690) | 0.032433 / 0.141683 (-0.109250) | 1.886313 / 1.452155 (0.434159) | 2.012909 / 1.492716 (0.520193) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.306991 / 0.018006 (0.288985) | 0.590426 / 0.000490 (0.589937) | 0.011811 / 0.000200 (0.011611) | 0.000596 / 0.000054 (0.000542) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.042520 / 0.037411 (0.005108) | 0.129808 / 0.014526 (0.115283) | 0.125481 / 0.176557 (-0.051075) | 0.199181 / 0.737135 (-0.537954) | 0.130426 / 0.296338 (-0.165913) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.526455 / 0.215209 (0.311246) | 5.213304 / 2.077655 (3.135649) | 2.643406 / 1.504120 (1.139286) | 2.611214 / 1.541195 (1.070019) | 2.586730 / 1.468490 
(1.118240) | 0.639103 / 4.584777 (-3.945674) | 5.197421 / 3.745712 (1.451709) | 4.634642 / 5.269862 (-0.635220) | 2.741079 / 4.565676 (-1.824598) | 0.073064 / 0.424275 (-0.351211) | 0.009441 / 0.007607 (0.001834) | 0.635984 / 0.226044 (0.409940) | 6.283268 / 2.268929 (4.014339) | 3.337205 / 55.444624 (-52.107419) | 3.192362 / 6.876477 (-3.684114) | 2.910367 / 2.142072 (0.768294) | 0.767937 / 4.805227 (-4.037290) | 0.177467 / 6.500664 (-6.323198) | 0.081162 / 0.075469 (0.005693) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.803717 / 1.841788 (-0.038071) | 26.823235 / 8.074308 (18.748927) | 19.714471 / 10.191392 (9.523079) | 0.204048 / 0.680424 (-0.476376) | 0.025992 / 0.534201 (-0.508209) | 0.521438 / 0.579283 (-0.057845) | 0.596524 / 0.434364 (0.162160) | 0.600763 / 0.540337 (0.060425) | 0.945971 / 1.386936 (-0.440965) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009126 / 0.011353 (-0.002226) | 0.005109 / 0.011008 (-0.005899) | 0.083046 / 0.038508 (0.044538) | 0.115930 / 0.023109 (0.092821) | 0.534311 / 0.275898 (0.258413) | 0.552846 / 0.323480 (0.229366) | 0.007240 / 0.007986 (-0.000746) | 0.004617 / 0.004328 (0.000289) | 0.083927 / 0.004250 (0.079676) | 0.075926 / 0.037052 (0.038873) | 0.534750 / 0.258489 (0.276261) | 0.575122 / 0.293841 (0.281281) | 0.041001 / 0.128546 (-0.087545) | 0.010851 / 0.075646 (-0.064795) | 0.096574 / 0.419271 (-0.322697) | 0.063533 / 0.043533 (0.020001) | 0.546850 / 0.255139 (0.291711) | 0.547122 / 0.283200 (0.263922) | 0.032437 / 0.141683 (-0.109245) | 1.926191 / 1.452155 (0.474036) | 2.029841 / 1.492716 (0.537125) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.275582 / 0.018006 (0.257576) | 0.574212 / 0.000490 (0.573722) | 0.006863 / 0.000200 (0.006663) | 0.000236 / 0.000054 (0.000181) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.045340 / 0.037411 (0.007928) | 0.129196 / 0.014526 (0.114670) | 0.136637 / 0.176557 (-0.039920) | 0.200040 / 0.737135 (-0.537096) | 0.136328 / 0.296338 (-0.160011) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.612379 / 0.215209 (0.397170) | 5.874664 / 2.077655 (3.797010) | 3.070626 / 1.504120 (1.566506) | 2.999319 / 1.541195 (1.458124) | 3.000571 / 1.468490 (1.532081) | 0.732119 / 4.584777 (-3.852658) | 5.193226 / 3.745712 (1.447514) | 4.714571 / 5.269862 (-0.555291) | 2.870438 / 4.565676 (-1.695239) | 0.075793 / 0.424275 (-0.348482) | 0.009238 / 0.007607 (0.001631) | 0.695192 / 0.226044 (0.469148) | 6.897996 / 2.268929 (4.629067) | 3.923474 / 55.444624 (-51.521150) | 3.458326 / 6.876477 (-3.418151) | 3.331652 / 2.142072 (1.189579) | 0.821132 / 4.805227 (-3.984095) | 0.182252 / 6.500664 (-6.318412) | 0.084730 / 0.075469 (0.009260) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.919861 / 1.841788 (0.078073) | 27.437228 / 8.074308 (19.362920) | 21.109899 / 10.191392 (10.918507) | 0.245998 / 0.680424 (-0.434426) | 0.025817 / 0.534201 (-0.508384) | 0.517757 / 0.579283 (-0.061526) | 0.576375 / 0.434364 (0.142011) | 0.625283 / 0.540337 (0.084945) | 0.956877 / 1.386936 (-0.430059) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008099 / 0.011353 (-0.003254) | 0.004815 / 0.011008 (-0.006194) | 0.099657 / 0.038508 (0.061149) | 0.064737 / 0.023109 (0.041628) | 0.461773 / 0.275898 (0.185875) | 0.444810 / 0.323480 (0.121330) | 0.004247 / 0.007986 (-0.003739) | 0.004956 / 0.004328 (0.000628) | 0.068664 / 0.004250 (0.064414) | 0.052039 / 0.037052 (0.014986) | 0.406750 / 0.258489 (0.148261) | 0.452832 / 0.293841 (0.158991) | 0.044518 / 0.128546 (-0.084028) | 0.013220 / 0.075646 (-0.062426) | 0.317713 / 0.419271 (-0.101558) | 0.061897 / 0.043533 (0.018364) | 0.398664 / 0.255139 (0.143525) | 0.531494 / 0.283200 (0.248294) | 0.064033 / 0.141683 (-0.077650) | 1.590385 / 1.452155 (0.138231) | 1.769918 / 1.492716 (0.277202) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.230795 / 0.018006 (0.212789) | 0.568797 / 0.000490 (0.568308) | 0.013498 / 0.000200 (0.013298) | 0.000448 / 0.000054 (0.000393) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028394 / 0.037411 (-0.009017) | 0.081973 / 0.014526 (0.067447) | 0.097623 / 0.176557 (-0.078934) | 0.158691 / 0.737135 (-0.578445) | 0.101548 / 0.296338 (-0.194791) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.574459 / 0.215209 (0.359249) | 5.709871 / 2.077655 (3.632217) | 2.521460 / 1.504120 (1.017340) | 2.239463 / 1.541195 (0.698268) | 2.195067 / 1.468490 
(0.726577) | 0.792390 / 4.584777 (-3.792387) | 4.841665 / 3.745712 (1.095952) | 4.201620 / 5.269862 (-1.068241) | 2.664081 / 4.565676 (-1.901595) | 0.097661 / 0.424275 (-0.326614) | 0.008428 / 0.007607 (0.000821) | 0.698729 / 0.226044 (0.472684) | 6.908867 / 2.268929 (4.639939) | 3.247480 / 55.444624 (-52.197145) | 2.563921 / 6.876477 (-4.312556) | 2.738249 / 2.142072 (0.596177) | 0.972066 / 4.805227 (-3.833161) | 0.191196 / 6.500664 (-6.309468) | 0.064732 / 0.075469 (-0.010737) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.421910 / 1.841788 (-0.419877) | 20.633538 / 8.074308 (12.559230) | 18.054562 / 10.191392 (7.863170) | 0.194125 / 0.680424 (-0.486299) | 0.028097 / 0.534201 (-0.506104) | 0.417857 / 0.579283 (-0.161426) | 0.518758 / 0.434364 (0.084394) | 0.500199 / 0.540337 (-0.040138) | 0.754662 / 1.386936 (-0.632274) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008452 / 0.011353 (-0.002901) | 0.004646 / 0.011008 (-0.006362) | 0.077286 / 0.038508 (0.038778) | 0.072507 / 0.023109 (0.049398) | 0.439580 / 0.275898 (0.163682) | 0.506166 / 0.323480 (0.182686) | 0.006035 / 0.007986 (-0.001950) | 0.003886 / 0.004328 (-0.000442) | 0.075091 / 0.004250 (0.070841) | 0.063163 / 0.037052 (0.026110) | 0.468550 / 0.258489 (0.210061) | 0.523273 / 0.293841 (0.229432) | 0.048728 / 0.128546 (-0.079818) | 0.012991 / 0.075646 (-0.062655) | 0.087964 / 0.419271 (-0.331308) | 0.058920 / 0.043533 (0.015387) | 0.451247 / 0.255139 (0.196108) | 0.489827 / 0.283200 (0.206628) | 0.031164 / 0.141683 (-0.110519) | 1.675504 / 1.452155 (0.223349) | 1.806098 / 1.492716 (0.313382) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.253567 / 0.018006 (0.235561) | 0.508971 / 0.000490 (0.508481) | 0.010882 / 0.000200 (0.010682) | 0.000111 / 0.000054 (0.000057) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029490 / 0.037411 (-0.007921) | 0.090255 / 0.014526 (0.075729) | 0.110075 / 0.176557 (-0.066482) | 0.159375 / 0.737135 (-0.577760) | 0.109313 / 0.296338 (-0.187025) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.580252 / 0.215209 (0.365043) | 5.911741 / 2.077655 (3.834086) | 2.659405 / 1.504120 (1.155285) | 2.344943 / 1.541195 (0.803749) | 2.390748 / 1.468490 (0.922258) | 0.827823 / 4.584777 (-3.756954) | 4.973544 / 3.745712 (1.227832) | 4.300220 / 5.269862 (-0.969642) | 2.826181 / 4.565676 (-1.739495) | 0.101013 / 0.424275 (-0.323263) | 0.008025 / 0.007607 (0.000418) | 0.728414 / 0.226044 (0.502369) | 7.508045 / 2.268929 (5.239117) | 3.687627 / 55.444624 (-51.756997) | 2.902953 / 6.876477 (-3.973524) | 3.094624 / 2.142072 (0.952551) | 1.054696 / 4.805227 (-3.750531) | 0.212297 / 6.500664 (-6.288367) | 0.070211 / 0.075469 (-0.005258) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.567117 / 1.841788 (-0.274670) | 21.420746 / 8.074308 (13.346438) | 19.857467 / 10.191392 (9.666075) | 0.228554 / 0.680424 (-0.451870) | 0.032278 / 0.534201 (-0.501923) | 0.459966 / 0.579283 (-0.119317) | 0.541219 / 0.434364 (0.106855) | 0.549599 / 0.540337 (0.009261) | 0.731476 / 1.386936 (-0.655460) |\n\n</details>\n</details>\n\n\n"
] | 2023-09-27T13:56:34
| 2023-09-28T18:34:02
| 2023-09-28T18:23:35
|
CONTRIBUTOR
| null |
... to avoid an `ImportError` raised in `BeamBasedBuilder._save_info` when `apache_beam` is not installed (e.g., when downloading the processed version of a dataset from the HF GCS)
Fix https://github.com/huggingface/datasets/issues/6260
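
A minimal sketch of the deferred-import pattern this PR describes (illustrative only; names such as `_output_dir` and the exact fallback are assumptions, not the upstream diff):

```python
import posixpath


class BeamBasedBuilder(DatasetBuilder):
    def _save_info(self):
        # Import apache_beam lazily, inside the method, so that merely loading a
        # preprocessed dataset (e.g. from the HF GCS bucket) never triggers the import.
        try:
            import apache_beam as beam
        except ImportError:
            beam = None

        if beam is not None:
            # Beam is available: write dataset_info.json through Beam's
            # filesystem abstraction (works for local paths and GCS alike).
            fs = beam.io.filesystems.FileSystems
            with fs.create(posixpath.join(self._output_dir, "dataset_info.json")) as f:
                self.info._dump_info(f)
        else:
            # apache_beam is not installed: fall back to a plain local write.
            self.info.write_to_directory(self._output_dir)
```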
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/6265/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/6265/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6265",
"html_url": "https://github.com/huggingface/datasets/pull/6265",
"diff_url": "https://github.com/huggingface/datasets/pull/6265.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6265.patch",
"merged_at": "2023-09-28T18:23:35"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/6264
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6264/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6264/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6264/events
|
https://github.com/huggingface/datasets/pull/6264
| 1,914,958,781
|
PR_kwDODunzps5bTvzh
| 6,264
|
Temporarily pin tensorflow < 2.14.0
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008356 / 0.011353 (-0.002997) | 0.004553 / 0.011008 (-0.006455) | 0.101025 / 0.038508 (0.062517) | 0.090194 / 0.023109 (0.067085) | 0.427127 / 0.275898 (0.151229) | 0.469116 / 0.323480 (0.145636) | 0.007593 / 0.007986 (-0.000393) | 0.003751 / 0.004328 (-0.000578) | 0.077432 / 0.004250 (0.073182) | 0.082744 / 0.037052 (0.045692) | 0.433638 / 0.258489 (0.175149) | 0.482387 / 0.293841 (0.188546) | 0.040658 / 0.128546 (-0.087888) | 0.009799 / 0.075646 (-0.065848) | 0.345274 / 0.419271 (-0.073998) | 0.076642 / 0.043533 (0.033109) | 0.424417 / 0.255139 (0.169278) | 0.457045 / 0.283200 (0.173846) | 0.033642 / 0.141683 (-0.108041) | 1.765446 / 1.452155 (0.313291) | 1.859279 / 1.492716 (0.366562) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.273629 / 0.018006 (0.255623) | 0.505743 / 0.000490 (0.505253) | 0.009300 / 0.000200 (0.009100) | 0.000359 / 0.000054 (0.000305) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032510 / 0.037411 (-0.004901) | 0.099628 / 0.014526 (0.085103) | 0.112904 / 0.176557 (-0.063652) | 0.179118 / 0.737135 (-0.558018) | 0.115946 / 0.296338 (-0.180393) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.456431 / 0.215209 (0.241222) | 4.556559 / 2.077655 (2.478904) | 2.207893 / 1.504120 (0.703773) | 2.024706 / 1.541195 (0.483512) | 2.165424 / 1.468490 
(0.696934) | 0.571745 / 4.584777 (-4.013031) | 4.341017 / 3.745712 (0.595305) | 3.980520 / 5.269862 (-1.289342) | 2.333077 / 4.565676 (-2.232599) | 0.067200 / 0.424275 (-0.357075) | 0.008563 / 0.007607 (0.000956) | 0.545294 / 0.226044 (0.319250) | 5.445152 / 2.268929 (3.176224) | 2.740657 / 55.444624 (-52.703968) | 2.370635 / 6.876477 (-4.505842) | 2.451642 / 2.142072 (0.309570) | 0.679385 / 4.805227 (-4.125842) | 0.155967 / 6.500664 (-6.344697) | 0.072812 / 0.075469 (-0.002657) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.494483 / 1.841788 (-0.347305) | 23.673700 / 8.074308 (15.599392) | 16.608529 / 10.191392 (6.417137) | 0.170220 / 0.680424 (-0.510204) | 0.021630 / 0.534201 (-0.512571) | 0.470771 / 0.579283 (-0.108512) | 0.535874 / 0.434364 (0.101510) | 0.550376 / 0.540337 (0.010039) | 0.776633 / 1.386936 (-0.610303) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007899 / 0.011353 (-0.003454) | 0.004581 / 0.011008 (-0.006427) | 0.076520 / 0.038508 (0.038012) | 0.090374 / 0.023109 (0.067265) | 0.495016 / 0.275898 (0.219118) | 0.532384 / 0.323480 (0.208904) | 0.006160 / 0.007986 (-0.001825) | 0.003780 / 0.004328 (-0.000548) | 0.077164 / 0.004250 (0.072914) | 0.064444 / 0.037052 (0.027391) | 0.501642 / 0.258489 (0.243153) | 0.549170 / 0.293841 (0.255329) | 0.038051 / 0.128546 (-0.090495) | 0.010081 / 0.075646 (-0.065565) | 0.083752 / 0.419271 (-0.335520) | 0.061334 / 0.043533 (0.017801) | 0.493502 / 0.255139 (0.238363) | 0.518018 / 0.283200 (0.234818) | 0.029534 / 0.141683 (-0.112149) | 1.929432 / 1.452155 (0.477277) | 1.889985 / 1.492716 (0.397268) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.254802 / 0.018006 (0.236795) | 0.494463 / 0.000490 (0.493974) | 0.005040 / 0.000200 (0.004840) | 0.000120 / 0.000054 (0.000065) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.038372 / 0.037411 (0.000960) | 0.112247 / 0.014526 (0.097721) | 0.124365 / 0.176557 (-0.052191) | 0.187142 / 0.737135 (-0.549993) | 0.126070 / 0.296338 (-0.170269) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.513418 / 0.215209 (0.298209) | 5.132267 / 2.077655 (3.054613) | 2.773676 / 1.504120 (1.269556) | 2.576840 / 1.541195 (1.035645) | 2.681729 / 1.468490 (1.213238) | 0.581809 / 4.584777 (-4.002968) | 4.327075 / 3.745712 (0.581363) | 4.040264 / 5.269862 (-1.229598) | 2.436192 / 4.565676 (-2.129484) | 0.067819 / 0.424275 (-0.356456) | 0.008760 / 0.007607 (0.001153) | 0.610765 / 0.226044 (0.384720) | 6.105679 / 2.268929 (3.836750) | 3.341341 / 55.444624 (-52.103284) | 2.926695 / 6.876477 (-3.949781) | 3.017269 / 2.142072 (0.875196) | 0.707289 / 4.805227 (-4.097938) | 0.157379 / 6.500664 (-6.343285) | 0.072549 / 0.075469 (-0.002920) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.666738 / 1.841788 (-0.175050) | 23.698567 / 8.074308 (15.624259) | 17.806437 / 10.191392 (7.615045) | 0.172103 / 0.680424 (-0.508321) | 0.023508 / 0.534201 (-0.510693) | 0.473171 / 0.579283 (-0.106112) | 0.524834 / 0.434364 (0.090470) | 0.562562 / 0.540337 (0.022224) | 0.788667 / 1.386936 (-0.598269) |\n\n</details>\n</details>\n\n\n",
"CI 404 errors are unrelated. See:\r\n- #6262 ",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006657 / 0.011353 (-0.004696) | 0.003975 / 0.011008 (-0.007033) | 0.084614 / 0.038508 (0.046106) | 0.074557 / 0.023109 (0.051448) | 0.309213 / 0.275898 (0.033315) | 0.338245 / 0.323480 (0.014765) | 0.005375 / 0.007986 (-0.002610) | 0.003355 / 0.004328 (-0.000973) | 0.064406 / 0.004250 (0.060156) | 0.061763 / 0.037052 (0.024711) | 0.313405 / 0.258489 (0.054916) | 0.352149 / 0.293841 (0.058308) | 0.031597 / 0.128546 (-0.096949) | 0.008499 / 0.075646 (-0.067147) | 0.289098 / 0.419271 (-0.130174) | 0.054415 / 0.043533 (0.010882) | 0.313210 / 0.255139 (0.058071) | 0.326728 / 0.283200 (0.043528) | 0.024597 / 0.141683 (-0.117086) | 1.449916 / 1.452155 (-0.002239) | 1.526314 / 1.492716 (0.033598) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.231435 / 0.018006 (0.213429) | 0.537224 / 0.000490 (0.536734) | 0.007287 / 0.000200 (0.007088) | 0.000227 / 0.000054 (0.000172) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028340 / 0.037411 (-0.009071) | 0.084085 / 0.014526 (0.069560) | 0.428211 / 0.176557 (0.251655) | 0.157360 / 0.737135 (-0.579775) | 0.139470 / 0.296338 (-0.156868) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.389311 / 0.215209 (0.174102) | 3.871329 / 2.077655 (1.793674) | 1.861533 / 1.504120 (0.357413) | 1.688082 / 1.541195 (0.146887) | 1.804036 / 1.468490 
(0.335546) | 0.489154 / 4.584777 (-4.095623) | 3.603843 / 3.745712 (-0.141869) | 3.424868 / 5.269862 (-1.844994) | 2.013525 / 4.565676 (-2.552152) | 0.057387 / 0.424275 (-0.366888) | 0.007274 / 0.007607 (-0.000333) | 0.462340 / 0.226044 (0.236295) | 4.620095 / 2.268929 (2.351167) | 2.326641 / 55.444624 (-53.117984) | 1.990082 / 6.876477 (-4.886395) | 2.037841 / 2.142072 (-0.104232) | 0.581973 / 4.805227 (-4.223254) | 0.135932 / 6.500664 (-6.364732) | 0.061092 / 0.075469 (-0.014377) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.249586 / 1.841788 (-0.592202) | 19.036233 / 8.074308 (10.961925) | 14.083365 / 10.191392 (3.891973) | 0.169802 / 0.680424 (-0.510622) | 0.018547 / 0.534201 (-0.515654) | 0.392926 / 0.579283 (-0.186357) | 0.409993 / 0.434364 (-0.024371) | 0.460081 / 0.540337 (-0.080257) | 0.643836 / 1.386936 (-0.743100) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006889 / 0.011353 (-0.004464) | 0.004060 / 0.011008 (-0.006948) | 0.064332 / 0.038508 (0.025824) | 0.077067 / 0.023109 (0.053958) | 0.401235 / 0.275898 (0.125337) | 0.437139 / 0.323480 (0.113659) | 0.005510 / 0.007986 (-0.002476) | 0.003338 / 0.004328 (-0.000991) | 0.064446 / 0.004250 (0.060195) | 0.055537 / 0.037052 (0.018485) | 0.432689 / 0.258489 (0.174200) | 0.460005 / 0.293841 (0.166164) | 0.033122 / 0.128546 (-0.095424) | 0.008637 / 0.075646 (-0.067010) | 0.071088 / 0.419271 (-0.348183) | 0.049024 / 0.043533 (0.005491) | 0.400258 / 0.255139 (0.145119) | 0.419324 / 0.283200 (0.136124) | 0.022050 / 0.141683 (-0.119632) | 1.475744 / 1.452155 (0.023589) | 1.546565 / 1.492716 (0.053848) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.226241 / 0.018006 (0.208235) | 0.448574 / 0.000490 (0.448085) | 0.004732 / 0.000200 (0.004533) | 0.000097 / 0.000054 (0.000042) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033260 / 0.037411 (-0.004151) | 0.092622 / 0.014526 (0.078096) | 0.105501 / 0.176557 (-0.071056) | 0.157981 / 0.737135 (-0.579155) | 0.105993 / 0.296338 (-0.190345) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.445716 / 0.215209 (0.230507) | 4.451848 / 2.077655 (2.374194) | 2.404769 / 1.504120 (0.900649) | 2.232594 / 1.541195 (0.691399) | 2.312735 / 1.468490 (0.844245) | 0.491208 / 4.584777 (-4.093569) | 3.561629 / 3.745712 (-0.184083) | 3.444269 / 5.269862 (-1.825592) | 2.060365 / 4.565676 (-2.505311) | 0.057723 / 0.424275 (-0.366552) | 0.007392 / 0.007607 (-0.000215) | 0.526447 / 0.226044 (0.300403) | 5.264307 / 2.268929 (2.995379) | 2.951481 / 55.444624 (-52.493143) | 2.593178 / 6.876477 (-4.283299) | 2.689780 / 2.142072 (0.547707) | 0.588649 / 4.805227 (-4.216579) | 0.133566 / 6.500664 (-6.367098) | 0.060462 / 0.075469 (-0.015008) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.381008 / 1.841788 (-0.460780) | 19.452394 / 8.074308 (11.378086) | 15.255912 / 10.191392 (5.064520) | 0.171043 / 0.680424 (-0.509381) | 0.020395 / 0.534201 (-0.513806) | 0.396429 / 0.579283 (-0.182854) | 0.422820 / 0.434364 (-0.011544) | 0.477305 / 0.540337 (-0.063032) | 0.658274 / 1.386936 (-0.728663) |\n\n</details>\n</details>\n\n\n"
] | 2023-09-27T08:16:06
| 2023-09-27T08:45:24
| 2023-09-27T08:36:39
|
MEMBER
| null |
Temporarily pin tensorflow < 2.14.0 until a permanent solution is found.
Hotfix for #6263.
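
For reference, a hypothetical excerpt of what such a temporary pin looks like in `setup.py` (the exact requirement string and extras list in this repo may differ):

```python
# setup.py (sketch): add an upper bound until TF 2.14 compatibility is restored
TESTS_REQUIRE = [
    "tensorflow>=2.3,<2.14.0",  # temporary pin, see issue #6263
]
```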
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/6264/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/6264/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6264",
"html_url": "https://github.com/huggingface/datasets/pull/6264",
"diff_url": "https://github.com/huggingface/datasets/pull/6264.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6264.patch",
"merged_at": "2023-09-27T08:36:39"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/6263
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6263/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6263/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6263/events
|
https://github.com/huggingface/datasets/issues/6263
| 1,914,951,043
|
I_kwDODunzps5yI9WD
| 6,263
|
CI is broken: ImportError: cannot import name 'context' from 'tensorflow.python'
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null |
[] | 2023-09-27T08:12:05
| 2023-09-27T08:36:40
| 2023-09-27T08:36:40
|
MEMBER
| null |
Python 3.10 CI is broken for `test_py310`.
See: https://github.com/huggingface/datasets/actions/runs/6322990957/job/17169678812?pr=6262
```
FAILED tests/test_py_utils.py::TempSeedTest::test_tensorflow - ImportError: cannot import name 'context' from 'tensorflow.python' (/opt/hostedtoolcache/Python/3.10.13/x64/lib/python3.10/site-packages/tensorflow/python/__init__.py)
```
```
_________________________ TempSeedTest.test_tensorflow _________________________
[gw1] linux -- Python 3.10.13 /opt/hostedtoolcache/Python/3.10.13/x64/bin/python
self = <tests.test_py_utils.TempSeedTest testMethod=test_tensorflow>
    @require_tf
    def test_tensorflow(self):
        import tensorflow as tf
        from tensorflow.keras import layers
        model = layers.Dense(2)
        def gen_random_output():
            x = tf.random.uniform((1, 3))
            return model(x).numpy()
>       with temp_seed(42, set_tensorflow=True):
tests/test_py_utils.py:155:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/opt/hostedtoolcache/Python/3.10.13/x64/lib/python3.10/contextlib.py:135: in __enter__
    return next(self.gen)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
seed = 42, set_pytorch = False, set_tensorflow = True
    @contextmanager
    def temp_seed(seed: int, set_pytorch=False, set_tensorflow=False):
        """Temporarily set the random seed. This works for python numpy, pytorch and tensorflow."""
        np_state = np.random.get_state()
        np.random.seed(seed)
        if set_pytorch and config.TORCH_AVAILABLE:
            import torch
            torch_state = torch.random.get_rng_state()
            torch.random.manual_seed(seed)
            if torch.cuda.is_available():
                torch_cuda_states = torch.cuda.get_rng_state_all()
                torch.cuda.manual_seed_all(seed)
        if set_tensorflow and config.TF_AVAILABLE:
            import tensorflow as tf
>           from tensorflow.python import context as tfpycontext
E           ImportError: cannot import name 'context' from 'tensorflow.python' (/opt/hostedtoolcache/Python/3.10.13/x64/lib/python3.10/site-packages/tensorflow/python/__init__.py)
/opt/hostedtoolcache/Python/3.10.13/x64/lib/python3.10/site-packages/datasets/utils/py_utils.py:257: ImportError
```
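
A possible compatibility shim (a sketch, not necessarily the fix that was shipped): fall back to the eager-context module, which is where `context` actually lives, when the old `tensorflow.python` re-export is gone.

```python
# Sketch: tolerate both module layouts (the TF >= 2.14 branch is an assumption).
try:
    from tensorflow.python import context as tfpycontext  # re-export, TF < 2.14
except ImportError:
    from tensorflow.python.eager import context as tfpycontext  # canonical location

tf_context = tfpycontext.context()  # the eager runtime context used by temp_seed
```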
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/6263/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/6263/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/6262
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6262/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6262/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6262/events
|
https://github.com/huggingface/datasets/pull/6262
| 1,914,895,459
|
PR_kwDODunzps5bTh6H
| 6,262
|
Fix CI 404 errors
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008220 / 0.011353 (-0.003133) | 0.005560 / 0.011008 (-0.005448) | 0.100147 / 0.038508 (0.061639) | 0.070106 / 0.023109 (0.046996) | 0.411906 / 0.275898 (0.136008) | 0.432825 / 0.323480 (0.109345) | 0.004795 / 0.007986 (-0.003190) | 0.004094 / 0.004328 (-0.000235) | 0.075719 / 0.004250 (0.071468) | 0.067426 / 0.037052 (0.030374) | 0.428531 / 0.258489 (0.170042) | 0.437114 / 0.293841 (0.143273) | 0.045603 / 0.128546 (-0.082943) | 0.013333 / 0.075646 (-0.062313) | 0.353137 / 0.419271 (-0.066134) | 0.067902 / 0.043533 (0.024369) | 0.396633 / 0.255139 (0.141494) | 0.399185 / 0.283200 (0.115985) | 0.036377 / 0.141683 (-0.105306) | 1.624249 / 1.452155 (0.172094) | 1.792575 / 1.492716 (0.299859) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.315847 / 0.018006 (0.297840) | 0.595009 / 0.000490 (0.594519) | 0.018876 / 0.000200 (0.018676) | 0.000613 / 0.000054 (0.000558) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029886 / 0.037411 (-0.007526) | 0.085765 / 0.014526 (0.071239) | 0.108680 / 0.176557 (-0.067877) | 0.174588 / 0.737135 (-0.562548) | 0.104494 / 0.296338 (-0.191844) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.594429 / 0.215209 (0.379220) | 5.912352 / 2.077655 (3.834698) | 2.408501 / 1.504120 (0.904381) | 2.050914 / 1.541195 (0.509720) | 2.199349 / 1.468490 
(0.730859) | 0.813797 / 4.584777 (-3.770980) | 5.169577 / 3.745712 (1.423864) | 4.653951 / 5.269862 (-0.615911) | 2.805423 / 4.565676 (-1.760253) | 0.092278 / 0.424275 (-0.331997) | 0.007394 / 0.007607 (-0.000213) | 0.684029 / 0.226044 (0.457985) | 6.964260 / 2.268929 (4.695331) | 3.108408 / 55.444624 (-52.336217) | 2.470907 / 6.876477 (-4.405569) | 2.460153 / 2.142072 (0.318081) | 0.986445 / 4.805227 (-3.818782) | 0.213069 / 6.500664 (-6.287596) | 0.074061 / 0.075469 (-0.001408) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.590732 / 1.841788 (-0.251056) | 23.736918 / 8.074308 (15.662609) | 21.223910 / 10.191392 (11.032518) | 0.236173 / 0.680424 (-0.444251) | 0.030056 / 0.534201 (-0.504145) | 0.489461 / 0.579283 (-0.089822) | 0.607582 / 0.434364 (0.173218) | 0.539889 / 0.540337 (-0.000449) | 0.817942 / 1.386936 (-0.568994) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008042 / 0.011353 (-0.003311) | 0.004836 / 0.011008 (-0.006173) | 0.075434 / 0.038508 (0.036926) | 0.080818 / 0.023109 (0.057709) | 0.474797 / 0.275898 (0.198899) | 0.526168 / 0.323480 (0.202689) | 0.006463 / 0.007986 (-0.001522) | 0.004031 / 0.004328 (-0.000297) | 0.074141 / 0.004250 (0.069891) | 0.068265 / 0.037052 (0.031212) | 0.562550 / 0.258489 (0.304061) | 0.544820 / 0.293841 (0.250979) | 0.047263 / 0.128546 (-0.081283) | 0.014113 / 0.075646 (-0.061534) | 0.086061 / 0.419271 (-0.333210) | 0.062475 / 0.043533 (0.018942) | 0.479912 / 0.255139 (0.224773) | 0.494784 / 0.283200 (0.211584) | 0.035847 / 0.141683 (-0.105836) | 1.726452 / 1.452155 (0.274297) | 1.770113 / 1.492716 (0.277396) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.286713 / 0.018006 (0.268707) | 0.609704 / 0.000490 (0.609214) | 0.009342 / 0.000200 (0.009143) | 0.000134 / 0.000054 (0.000080) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.035137 / 0.037411 (-0.002275) | 0.099331 / 0.014526 (0.084805) | 0.108971 / 0.176557 (-0.067586) | 0.170952 / 0.737135 (-0.566183) | 0.111736 / 0.296338 (-0.184603) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.617434 / 0.215209 (0.402225) | 6.204351 / 2.077655 (4.126697) | 2.854347 / 1.504120 (1.350227) | 2.557424 / 1.541195 (1.016229) | 2.638173 / 1.468490 (1.169683) | 0.854234 / 4.584777 (-3.730543) | 5.383288 / 3.745712 (1.637576) | 4.698098 / 5.269862 (-0.571763) | 2.903860 / 4.565676 (-1.661817) | 0.094689 / 0.424275 (-0.329586) | 0.007892 / 0.007607 (0.000285) | 0.729420 / 0.226044 (0.503376) | 7.356691 / 2.268929 (5.087763) | 3.708039 / 55.444624 (-51.736585) | 2.979734 / 6.876477 (-3.896743) | 2.978983 / 2.142072 (0.836911) | 1.040554 / 4.805227 (-3.764673) | 0.211246 / 6.500664 (-6.289418) | 0.079880 / 0.075469 (0.004411) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.676057 / 1.841788 (-0.165731) | 23.428443 / 8.074308 (15.354135) | 21.016293 / 10.191392 (10.824901) | 0.260927 / 0.680424 (-0.419497) | 0.030689 / 0.534201 (-0.503512) | 0.495652 / 0.579283 (-0.083632) | 0.622976 / 0.434364 (0.188612) | 0.561175 / 0.540337 (0.020837) | 0.786733 / 1.386936 (-0.600203) |\n\n</details>\n</details>\n\n\n",
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005942 / 0.011353 (-0.005410) | 0.003706 / 0.011008 (-0.007302) | 0.081002 / 0.038508 (0.042493) | 0.056854 / 0.023109 (0.033745) | 0.358668 / 0.275898 (0.082770) | 0.369718 / 0.323480 (0.046238) | 0.005202 / 0.007986 (-0.002784) | 0.002841 / 0.004328 (-0.001487) | 0.062976 / 0.004250 (0.058726) | 0.051308 / 0.037052 (0.014255) | 0.373636 / 0.258489 (0.115147) | 0.390480 / 0.293841 (0.096639) | 0.027480 / 0.128546 (-0.101067) | 0.007960 / 0.075646 (-0.067686) | 0.262719 / 0.419271 (-0.156552) | 0.046488 / 0.043533 (0.002955) | 0.347299 / 0.255139 (0.092160) | 0.393448 / 0.283200 (0.110249) | 0.019445 / 0.141683 (-0.122238) | 1.431314 / 1.452155 (-0.020841) | 1.495578 / 1.492716 (0.002862) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.223724 / 0.018006 (0.205718) | 0.416929 / 0.000490 (0.416440) | 0.005253 / 0.000200 (0.005053) | 0.000217 / 0.000054 (0.000163) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023571 / 0.037411 (-0.013841) | 0.073503 / 0.014526 (0.058978) | 0.081366 / 0.176557 (-0.095190) | 0.142716 / 0.737135 (-0.594420) | 0.082612 / 0.296338 (-0.213727) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.407319 / 0.215209 (0.192109) | 4.141404 / 2.077655 (2.063749) | 1.910842 / 1.504120 (0.406722) | 1.731694 / 1.541195 (0.190499) | 1.805228 / 1.468490 
(0.336738) | 0.497109 / 4.584777 (-4.087668) | 3.107624 / 3.745712 (-0.638088) | 2.890687 / 5.269862 (-2.379174) | 1.795913 / 4.565676 (-2.769763) | 0.057099 / 0.424275 (-0.367176) | 0.006414 / 0.007607 (-0.001194) | 0.482127 / 0.226044 (0.256083) | 4.835158 / 2.268929 (2.566229) | 2.368909 / 55.444624 (-53.075715) | 2.001608 / 6.876477 (-4.874868) | 2.004492 / 2.142072 (-0.137580) | 0.579910 / 4.805227 (-4.225317) | 0.123541 / 6.500664 (-6.377123) | 0.059651 / 0.075469 (-0.015818) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.242364 / 1.841788 (-0.599424) | 16.982676 / 8.074308 (8.908368) | 13.718885 / 10.191392 (3.527493) | 0.132759 / 0.680424 (-0.547665) | 0.017012 / 0.534201 (-0.517189) | 0.333447 / 0.579283 (-0.245836) | 0.360149 / 0.434364 (-0.074215) | 0.385526 / 0.540337 (-0.154811) | 0.536915 / 1.386936 (-0.850021) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005946 / 0.011353 (-0.005407) | 0.003442 / 0.011008 (-0.007566) | 0.062595 / 0.038508 (0.024087) | 0.058699 / 0.023109 (0.035590) | 0.442626 / 0.275898 (0.166728) | 0.473773 / 0.323480 (0.150293) | 0.004622 / 0.007986 (-0.003364) | 0.002812 / 0.004328 (-0.001516) | 0.064099 / 0.004250 (0.059849) | 0.046784 / 0.037052 (0.009731) | 0.466049 / 0.258489 (0.207560) | 0.487912 / 0.293841 (0.194071) | 0.028372 / 0.128546 (-0.100174) | 0.007992 / 0.075646 (-0.067654) | 0.068151 / 0.419271 (-0.351120) | 0.041010 / 0.043533 (-0.002523) | 0.442331 / 0.255139 (0.187192) | 0.469686 / 0.283200 (0.186487) | 0.019694 / 0.141683 (-0.121989) | 1.467928 / 1.452155 (0.015774) | 1.525635 / 1.492716 (0.032918) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.204459 / 0.018006 (0.186453) | 0.407766 / 0.000490 (0.407276) | 0.003898 / 0.000200 (0.003698) | 0.000077 / 0.000054 (0.000023) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025909 / 0.037411 (-0.011503) | 0.080341 / 0.014526 (0.065816) | 0.088231 / 0.176557 (-0.088325) | 0.144056 / 0.737135 (-0.593079) | 0.089769 / 0.296338 (-0.206569) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.462876 / 0.215209 (0.247667) | 4.625983 / 2.077655 (2.548329) | 2.580079 / 1.504120 (1.075959) | 2.402792 / 1.541195 (0.861597) | 2.424982 / 1.468490 (0.956491) | 0.503654 / 4.584777 (-4.081123) | 3.178995 / 3.745712 (-0.566717) | 2.956126 / 5.269862 (-2.313735) | 1.847837 / 4.565676 (-2.717840) | 0.057964 / 0.424275 (-0.366311) | 0.006405 / 0.007607 (-0.001202) | 0.536036 / 0.226044 (0.309992) | 5.374416 / 2.268929 (3.105487) | 3.036440 / 55.444624 (-52.408184) | 2.682054 / 6.876477 (-4.194422) | 2.683462 / 2.142072 (0.541390) | 0.592751 / 4.805227 (-4.212477) | 0.124313 / 6.500664 (-6.376351) | 0.061127 / 0.075469 (-0.014342) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.383539 / 1.841788 (-0.458249) | 17.766221 / 8.074308 (9.691913) | 15.306600 / 10.191392 (5.115208) | 0.145035 / 0.680424 (-0.535389) | 0.018078 / 0.534201 (-0.516123) | 0.330102 / 0.579283 (-0.249181) | 0.375380 / 0.434364 (-0.058984) | 0.388531 / 0.540337 (-0.151807) | 0.548720 / 1.386936 (-0.838216) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006757 / 0.011353 (-0.004596) | 0.004110 / 0.011008 (-0.006898) | 0.084727 / 0.038508 (0.046219) | 0.074328 / 0.023109 (0.051219) | 0.310467 / 0.275898 (0.034569) | 0.343209 / 0.323480 (0.019729) | 0.004228 / 0.007986 (-0.003757) | 0.003400 / 0.004328 (-0.000929) | 0.065546 / 0.004250 (0.061296) | 0.063057 / 0.037052 (0.026005) | 0.315023 / 0.258489 (0.056534) | 0.356395 / 0.293841 (0.062554) | 0.031959 / 0.128546 (-0.096588) | 0.008577 / 0.075646 (-0.067069) | 0.289075 / 0.419271 (-0.130196) | 0.055011 / 0.043533 (0.011478) | 0.308861 / 0.255139 (0.053722) | 0.328691 / 0.283200 (0.045491) | 0.027037 / 0.141683 (-0.114646) | 1.464314 / 1.452155 (0.012159) | 1.549644 / 1.492716 (0.056927) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.238330 / 0.018006 (0.220324) | 0.451570 / 0.000490 (0.451080) | 0.010873 / 0.000200 (0.010673) | 0.000341 / 0.000054 (0.000286) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029909 / 0.037411 (-0.007503) | 0.085222 / 0.014526 (0.070696) | 0.100180 / 0.176557 (-0.076377) | 0.154842 / 0.737135 (-0.582293) | 0.099253 / 0.296338 (-0.197086) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.401603 / 0.215209 (0.186394) | 4.009781 / 2.077655 (1.932126) | 2.021807 / 1.504120 (0.517687) | 1.861017 / 1.541195 (0.319822) | 2.009072 / 1.468490 
(0.540582) | 0.483798 / 4.584777 (-4.100979) | 3.580394 / 3.745712 (-0.165318) | 3.464587 / 5.269862 (-1.805275) | 2.018400 / 4.565676 (-2.547276) | 0.057134 / 0.424275 (-0.367141) | 0.007303 / 0.007607 (-0.000304) | 0.473627 / 0.226044 (0.247582) | 4.722634 / 2.268929 (2.453706) | 2.490884 / 55.444624 (-52.953741) | 2.121568 / 6.876477 (-4.754909) | 2.200699 / 2.142072 (0.058626) | 0.576728 / 4.805227 (-4.228499) | 0.135633 / 6.500664 (-6.365032) | 0.061625 / 0.075469 (-0.013844) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.250545 / 1.841788 (-0.591243) | 19.167642 / 8.074308 (11.093334) | 14.189891 / 10.191392 (3.998499) | 0.164552 / 0.680424 (-0.515872) | 0.018215 / 0.534201 (-0.515986) | 0.389962 / 0.579283 (-0.189321) | 0.413972 / 0.434364 (-0.020392) | 0.460253 / 0.540337 (-0.080085) | 0.647897 / 1.386936 (-0.739039) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006714 / 0.011353 (-0.004639) | 0.004081 / 0.011008 (-0.006927) | 0.065627 / 0.038508 (0.027119) | 0.077644 / 0.023109 (0.054535) | 0.409950 / 0.275898 (0.134052) | 0.442940 / 0.323480 (0.119460) | 0.005523 / 0.007986 (-0.002463) | 0.003366 / 0.004328 (-0.000962) | 0.065425 / 0.004250 (0.061174) | 0.056222 / 0.037052 (0.019169) | 0.429928 / 0.258489 (0.171439) | 0.457136 / 0.293841 (0.163296) | 0.032356 / 0.128546 (-0.096190) | 0.008676 / 0.075646 (-0.066970) | 0.071785 / 0.419271 (-0.347486) | 0.048458 / 0.043533 (0.004925) | 0.408003 / 0.255139 (0.152864) | 0.433529 / 0.283200 (0.150330) | 0.023232 / 0.141683 (-0.118450) | 1.483640 / 1.452155 (0.031485) | 1.552425 / 1.492716 (0.059709) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.282347 / 0.018006 (0.264341) | 0.448742 / 0.000490 (0.448253) | 0.039590 / 0.000200 (0.039390) | 0.000407 / 0.000054 (0.000353) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032516 / 0.037411 (-0.004896) | 0.095269 / 0.014526 (0.080744) | 0.106363 / 0.176557 (-0.070193) | 0.157945 / 0.737135 (-0.579191) | 0.106783 / 0.296338 (-0.189556) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.436334 / 0.215209 (0.221125) | 4.348147 / 2.077655 (2.270492) | 2.326830 / 1.504120 (0.822710) | 2.162586 / 1.541195 (0.621391) | 2.257769 / 1.468490 (0.789279) | 0.491677 / 4.584777 (-4.093099) | 3.707385 / 3.745712 (-0.038328) | 3.567147 / 5.269862 (-1.702715) | 2.099451 / 4.565676 (-2.466226) | 0.058486 / 0.424275 (-0.365789) | 0.007324 / 0.007607 (-0.000283) | 0.510962 / 0.226044 (0.284917) | 5.106550 / 2.268929 (2.837622) | 2.785723 / 55.444624 (-52.658901) | 2.452928 / 6.876477 (-4.423548) | 2.545034 / 2.142072 (0.402961) | 0.611124 / 4.805227 (-4.194103) | 0.133503 / 6.500664 (-6.367161) | 0.061118 / 0.075469 (-0.014351) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.386640 / 1.841788 (-0.455148) | 20.485670 / 8.074308 (12.411362) | 15.332223 / 10.191392 (5.140831) | 0.164070 / 0.680424 (-0.516354) | 0.019962 / 0.534201 (-0.514239) | 0.394217 / 0.579283 (-0.185066) | 0.428442 / 0.434364 (-0.005922) | 0.473784 / 0.540337 (-0.066553) | 0.665141 / 1.386936 (-0.721795) |\n\n</details>\n</details>\n\n\n",
"The CI errors seem unrelated to this PR but I think they need further investigation in another PR.\r\n```\r\nFAILED tests/test_upstream_hub.py::TestPushToHub::test_push_dataset_dict_to_hub_multiple_files - KeyError: 'url'\r\n```",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008766 / 0.011353 (-0.002587) | 0.005289 / 0.011008 (-0.005720) | 0.097220 / 0.038508 (0.058712) | 0.072246 / 0.023109 (0.049137) | 0.369359 / 0.275898 (0.093461) | 0.422571 / 0.323480 (0.099091) | 0.004941 / 0.007986 (-0.003044) | 0.006103 / 0.004328 (0.001774) | 0.075828 / 0.004250 (0.071578) | 0.065795 / 0.037052 (0.028743) | 0.412835 / 0.258489 (0.154346) | 0.430062 / 0.293841 (0.136221) | 0.045806 / 0.128546 (-0.082741) | 0.013760 / 0.075646 (-0.061887) | 0.351542 / 0.419271 (-0.067729) | 0.064836 / 0.043533 (0.021304) | 0.370162 / 0.255139 (0.115023) | 0.434949 / 0.283200 (0.151749) | 0.039198 / 0.141683 (-0.102485) | 1.670940 / 1.452155 (0.218785) | 1.809677 / 1.492716 (0.316961) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.295104 / 0.018006 (0.277097) | 0.594584 / 0.000490 (0.594095) | 0.010923 / 0.000200 (0.010723) | 0.000479 / 0.000054 (0.000425) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029174 / 0.037411 (-0.008237) | 0.094637 / 0.014526 (0.080111) | 0.102948 / 0.176557 (-0.073608) | 0.171048 / 0.737135 (-0.566087) | 0.111465 / 0.296338 (-0.184873) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.582017 / 0.215209 (0.366808) | 5.727008 / 2.077655 (3.649354) | 2.563211 / 1.504120 (1.059091) | 2.308912 / 1.541195 (0.767717) | 2.301258 / 1.468490 
(0.832768) | 0.819594 / 4.584777 (-3.765183) | 5.177536 / 3.745712 (1.431824) | 4.473602 / 5.269862 (-0.796260) | 2.743819 / 4.565676 (-1.821857) | 0.090052 / 0.424275 (-0.334223) | 0.007903 / 0.007607 (0.000295) | 0.679142 / 0.226044 (0.453097) | 6.887891 / 2.268929 (4.618962) | 3.337926 / 55.444624 (-52.106699) | 2.659228 / 6.876477 (-4.217249) | 2.641289 / 2.142072 (0.499216) | 0.974829 / 4.805227 (-3.830398) | 0.205775 / 6.500664 (-6.294890) | 0.075268 / 0.075469 (-0.000201) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.500562 / 1.841788 (-0.341226) | 22.688483 / 8.074308 (14.614175) | 19.634878 / 10.191392 (9.443486) | 0.227409 / 0.680424 (-0.453015) | 0.029794 / 0.534201 (-0.504407) | 0.475204 / 0.579283 (-0.104079) | 0.579379 / 0.434364 (0.145016) | 0.541244 / 0.540337 (0.000907) | 0.739187 / 1.386936 (-0.647749) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008641 / 0.011353 (-0.002712) | 0.006139 / 0.011008 (-0.004870) | 0.075048 / 0.038508 (0.036540) | 0.074070 / 0.023109 (0.050961) | 0.508288 / 0.275898 (0.232390) | 0.539770 / 0.323480 (0.216290) | 0.006092 / 0.007986 (-0.001894) | 0.003748 / 0.004328 (-0.000581) | 0.077945 / 0.004250 (0.073695) | 0.056989 / 0.037052 (0.019936) | 0.526889 / 0.258489 (0.268400) | 0.560862 / 0.293841 (0.267021) | 0.046507 / 0.128546 (-0.082040) | 0.013249 / 0.075646 (-0.062397) | 0.088363 / 0.419271 (-0.330908) | 0.058776 / 0.043533 (0.015243) | 0.495869 / 0.255139 (0.240730) | 0.538615 / 0.283200 (0.255415) | 0.034055 / 0.141683 (-0.107628) | 1.658713 / 1.452155 (0.206558) | 1.736599 / 1.492716 (0.243883) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.288355 / 0.018006 (0.270349) | 0.571481 / 0.000490 (0.570991) | 0.006765 / 0.000200 (0.006565) | 0.000101 / 0.000054 (0.000047) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031836 / 0.037411 (-0.005575) | 0.101312 / 0.014526 (0.086786) | 0.111433 / 0.176557 (-0.065124) | 0.169599 / 0.737135 (-0.567536) | 0.114595 / 0.296338 (-0.181743) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.645258 / 0.215209 (0.430049) | 6.446653 / 2.077655 (4.368998) | 2.983498 / 1.504120 (1.479379) | 2.573820 / 1.541195 (1.032625) | 2.624286 / 1.468490 (1.155796) | 0.815997 / 4.584777 (-3.768780) | 5.140248 / 3.745712 (1.394536) | 4.636915 / 5.269862 (-0.632947) | 2.866313 / 4.565676 (-1.699364) | 0.096643 / 0.424275 (-0.327633) | 0.008452 / 0.007607 (0.000845) | 0.765837 / 0.226044 (0.539793) | 7.622897 / 2.268929 (5.353968) | 3.796247 / 55.444624 (-51.648378) | 3.019349 / 6.876477 (-3.857128) | 3.034187 / 2.142072 (0.892115) | 1.001682 / 4.805227 (-3.803546) | 0.211841 / 6.500664 (-6.288823) | 0.073351 / 0.075469 (-0.002119) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.740254 / 1.841788 (-0.101534) | 23.465619 / 8.074308 (15.391311) | 21.651670 / 10.191392 (11.460278) | 0.226129 / 0.680424 (-0.454294) | 0.029611 / 0.534201 (-0.504590) | 0.441140 / 0.579283 (-0.138143) | 0.605591 / 0.434364 (0.171227) | 0.552427 / 0.540337 (0.012090) | 0.771975 / 1.386936 (-0.614961) |\n\n</details>\n</details>\n\n\n",
"> The CI errors seem unrelated to this PR but I think they need further investigation in another PR.\r\n> \r\n> ```\r\n> FAILED tests/test_upstream_hub.py::TestPushToHub::test_push_dataset_dict_to_hub_multiple_files - KeyError: 'url'\r\n> ```\r\n\r\nWe need to wait for `huggingface_hub`'s next release to fix this (see https://github.com/huggingface/huggingface_hub/pull/1675; 409 error is currently ignored, hence the `KeyError`)\r\n\r\nAlso, we should be able to fix `test_push_dataset_dict_to_hub_overwrite_files` by inserting `gc.collect()` (to drop the \"reference\" to an Arrow file) between the `load_dataset` calls to avoid the `PermissionError` (also reported in https://github.com/huggingface/datasets/issues/3139)\r\n\r\n(Indeed, this can be addressed in subsequent PRs.)\r\n\r\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008988 / 0.011353 (-0.002365) | 0.005270 / 0.011008 (-0.005738) | 0.114577 / 0.038508 (0.076068) | 0.091630 / 0.023109 (0.068521) | 0.409217 / 0.275898 (0.133319) | 0.440903 / 0.323480 (0.117424) | 0.005226 / 0.007986 (-0.002760) | 0.004289 / 0.004328 (-0.000040) | 0.082246 / 0.004250 (0.077995) | 0.084926 / 0.037052 (0.047873) | 0.407822 / 0.258489 (0.149333) | 0.440891 / 0.293841 (0.147051) | 0.052225 / 0.128546 (-0.076321) | 0.014218 / 0.075646 (-0.061429) | 0.436994 / 0.419271 (0.017722) | 0.066433 / 0.043533 (0.022901) | 0.413909 / 0.255139 (0.158770) | 0.425729 / 0.283200 (0.142530) | 0.039576 / 0.141683 (-0.102107) | 1.905604 / 1.452155 (0.453449) | 1.907032 / 1.492716 (0.414315) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.313662 / 0.018006 (0.295655) | 0.614541 / 0.000490 (0.614051) | 0.015631 / 0.000200 (0.015431) | 0.000507 / 0.000054 (0.000453) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029049 / 0.037411 (-0.008362) | 0.094626 / 0.014526 (0.080100) | 0.104718 / 0.176557 (-0.071838) | 0.187346 / 0.737135 (-0.549790) | 0.108001 / 0.296338 (-0.188337) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.578997 / 0.215209 (0.363788) | 5.815546 / 2.077655 (3.737892) | 2.411301 / 1.504120 (0.907181) | 2.110088 / 1.541195 (0.568893) | 2.147839 / 1.468490 
(0.679349) | 0.861285 / 4.584777 (-3.723492) | 5.264245 / 3.745712 (1.518533) | 4.695786 / 5.269862 (-0.574076) | 2.867522 / 4.565676 (-1.698154) | 0.096523 / 0.424275 (-0.327752) | 0.008777 / 0.007607 (0.001170) | 0.716316 / 0.226044 (0.490272) | 7.257574 / 2.268929 (4.988645) | 3.141502 / 55.444624 (-52.303123) | 2.480604 / 6.876477 (-4.395872) | 2.530031 / 2.142072 (0.387958) | 1.054274 / 4.805227 (-3.750953) | 0.210781 / 6.500664 (-6.289883) | 0.073837 / 0.075469 (-0.001632) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.607689 / 1.841788 (-0.234099) | 23.856780 / 8.074308 (15.782472) | 19.507196 / 10.191392 (9.315804) | 0.232712 / 0.680424 (-0.447712) | 0.027037 / 0.534201 (-0.507164) | 0.466613 / 0.579283 (-0.112670) | 0.571139 / 0.434364 (0.136775) | 0.543109 / 0.540337 (0.002771) | 0.785558 / 1.386936 (-0.601378) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008104 / 0.011353 (-0.003249) | 0.004923 / 0.011008 (-0.006086) | 0.075093 / 0.038508 (0.036585) | 0.075218 / 0.023109 (0.052109) | 0.476615 / 0.275898 (0.200717) | 0.506984 / 0.323480 (0.183504) | 0.006371 / 0.007986 (-0.001614) | 0.004818 / 0.004328 (0.000489) | 0.075634 / 0.004250 (0.071383) | 0.059513 / 0.037052 (0.022461) | 0.523763 / 0.258489 (0.265274) | 0.531858 / 0.293841 (0.238017) | 0.048168 / 0.128546 (-0.080379) | 0.014110 / 0.075646 (-0.061537) | 0.086052 / 0.419271 (-0.333219) | 0.058369 / 0.043533 (0.014836) | 0.475537 / 0.255139 (0.220398) | 0.509429 / 0.283200 (0.226229) | 0.033924 / 0.141683 (-0.107758) | 1.657490 / 1.452155 (0.205336) | 1.762544 / 1.492716 (0.269828) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.263863 / 0.018006 (0.245857) | 0.584468 / 0.000490 (0.583978) | 0.007063 / 0.000200 (0.006863) | 0.000181 / 0.000054 (0.000126) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032229 / 0.037411 (-0.005183) | 0.096750 / 0.014526 (0.082224) | 0.117798 / 0.176557 (-0.058758) | 0.173376 / 0.737135 (-0.563760) | 0.117241 / 0.296338 (-0.179098) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.701935 / 0.215209 (0.486726) | 6.544655 / 2.077655 (4.467001) | 3.055531 / 1.504120 (1.551411) | 2.896339 / 1.541195 (1.355144) | 3.013157 / 1.468490 (1.544667) | 0.852989 / 4.584777 (-3.731788) | 5.399355 / 3.745712 (1.653643) | 5.119811 / 5.269862 (-0.150051) | 3.167269 / 4.565676 (-1.398407) | 0.096962 / 0.424275 (-0.327313) | 0.008843 / 0.007607 (0.001236) | 0.776170 / 0.226044 (0.550125) | 7.735093 / 2.268929 (5.466164) | 3.792629 / 55.444624 (-51.651996) | 3.249911 / 6.876477 (-3.626565) | 3.235590 / 2.142072 (1.093517) | 1.046426 / 4.805227 (-3.758801) | 0.239854 / 6.500664 (-6.260810) | 0.100648 / 0.075469 (0.025179) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.774488 / 1.841788 (-0.067300) | 25.646958 / 8.074308 (17.572650) | 23.181577 / 10.191392 (12.990185) | 0.231948 / 0.680424 (-0.448476) | 0.030147 / 0.534201 (-0.504054) | 0.464161 / 0.579283 (-0.115122) | 0.598980 / 0.434364 (0.164616) | 0.571156 / 0.540337 (0.030819) | 0.833221 / 1.386936 (-0.553715) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006010 / 0.011353 (-0.005343) | 0.003662 / 0.011008 (-0.007346) | 0.079971 / 0.038508 (0.041463) | 0.066790 / 0.023109 (0.043681) | 0.311387 / 0.275898 (0.035489) | 0.346781 / 0.323480 (0.023301) | 0.003500 / 0.007986 (-0.004485) | 0.002831 / 0.004328 (-0.001498) | 0.063238 / 0.004250 (0.058988) | 0.056163 / 0.037052 (0.019110) | 0.317456 / 0.258489 (0.058967) | 0.356106 / 0.293841 (0.062265) | 0.027358 / 0.128546 (-0.101188) | 0.007906 / 0.075646 (-0.067741) | 0.261779 / 0.419271 (-0.157492) | 0.046385 / 0.043533 (0.002852) | 0.312587 / 0.255139 (0.057448) | 0.339513 / 0.283200 (0.056314) | 0.021474 / 0.141683 (-0.120209) | 1.418637 / 1.452155 (-0.033518) | 1.510257 / 1.492716 (0.017540) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.211761 / 0.018006 (0.193755) | 0.424387 / 0.000490 (0.423898) | 0.002579 / 0.000200 (0.002379) | 0.000065 / 0.000054 (0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024038 / 0.037411 (-0.013374) | 0.072524 / 0.014526 (0.057998) | 0.083443 / 0.176557 (-0.093113) | 0.144835 / 0.737135 (-0.592300) | 0.084754 / 0.296338 (-0.211585) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.392423 / 0.215209 (0.177214) | 3.927220 / 2.077655 (1.849565) | 1.877853 / 1.504120 (0.373733) | 1.699275 / 1.541195 (0.158081) | 1.793144 / 1.468490 
(0.324654) | 0.503809 / 4.584777 (-4.080968) | 3.052569 / 3.745712 (-0.693143) | 2.907432 / 5.269862 (-2.362429) | 1.811220 / 4.565676 (-2.754457) | 0.057249 / 0.424275 (-0.367026) | 0.006433 / 0.007607 (-0.001174) | 0.463257 / 0.226044 (0.237213) | 4.631038 / 2.268929 (2.362109) | 2.315870 / 55.444624 (-53.128754) | 2.000476 / 6.876477 (-4.876001) | 2.043581 / 2.142072 (-0.098492) | 0.588911 / 4.805227 (-4.216317) | 0.125370 / 6.500664 (-6.375295) | 0.061721 / 0.075469 (-0.013748) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.244486 / 1.841788 (-0.597301) | 17.862422 / 8.074308 (9.788114) | 13.890205 / 10.191392 (3.698813) | 0.145467 / 0.680424 (-0.534957) | 0.016856 / 0.534201 (-0.517345) | 0.329357 / 0.579283 (-0.249926) | 0.367550 / 0.434364 (-0.066814) | 0.377541 / 0.540337 (-0.162796) | 0.534087 / 1.386936 (-0.852849) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006030 / 0.011353 (-0.005323) | 0.003650 / 0.011008 (-0.007359) | 0.063300 / 0.038508 (0.024792) | 0.058877 / 0.023109 (0.035767) | 0.454662 / 0.275898 (0.178764) | 0.489362 / 0.323480 (0.165882) | 0.004856 / 0.007986 (-0.003130) | 0.002909 / 0.004328 (-0.001420) | 0.063356 / 0.004250 (0.059105) | 0.047867 / 0.037052 (0.010814) | 0.465461 / 0.258489 (0.206972) | 0.506684 / 0.293841 (0.212843) | 0.028599 / 0.128546 (-0.099947) | 0.008076 / 0.075646 (-0.067570) | 0.068695 / 0.419271 (-0.350576) | 0.041487 / 0.043533 (-0.002045) | 0.448676 / 0.255139 (0.193537) | 0.471206 / 0.283200 (0.188007) | 0.020401 / 0.141683 (-0.121282) | 1.461181 / 1.452155 (0.009026) | 1.517079 / 1.492716 (0.024363) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.222827 / 0.018006 (0.204821) | 0.425074 / 0.000490 (0.424585) | 0.004153 / 0.000200 (0.003953) | 0.000081 / 0.000054 (0.000026) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026980 / 0.037411 (-0.010431) | 0.080786 / 0.014526 (0.066260) | 0.092040 / 0.176557 (-0.084517) | 0.146082 / 0.737135 (-0.591053) | 0.092739 / 0.296338 (-0.203600) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.461663 / 0.215209 (0.246454) | 4.604828 / 2.077655 (2.527173) | 2.566926 / 1.504120 (1.062806) | 2.394419 / 1.541195 (0.853224) | 2.458375 / 1.468490 (0.989885) | 0.505140 / 4.584777 (-4.079637) | 3.155916 / 3.745712 (-0.589796) | 3.014474 / 5.269862 (-2.255388) | 1.900296 / 4.565676 (-2.665380) | 0.058063 / 0.424275 (-0.366212) | 0.006409 / 0.007607 (-0.001198) | 0.541165 / 0.226044 (0.315120) | 5.410700 / 2.268929 (3.141772) | 3.010239 / 55.444624 (-52.434386) | 2.668103 / 6.876477 (-4.208373) | 2.730418 / 2.142072 (0.588346) | 0.603471 / 4.805227 (-4.201756) | 0.129852 / 6.500664 (-6.370812) | 0.061507 / 0.075469 (-0.013962) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.355272 / 1.841788 (-0.486516) | 18.170088 / 8.074308 (10.095780) | 15.583855 / 10.191392 (5.392463) | 0.146246 / 0.680424 (-0.534178) | 0.018093 / 0.534201 (-0.516108) | 0.331695 / 0.579283 (-0.247588) | 0.380845 / 0.434364 (-0.053519) | 0.388564 / 0.540337 (-0.151774) | 0.551465 / 1.386936 (-0.835471) |\n\n</details>\n</details>\n\n\n"
] | 2023-09-27T07:40:18
| 2023-09-28T15:39:16
| 2023-09-28T15:30:40
|
MEMBER
| null |
Currently our CI usually raises 404 errors when trying to delete temporary repositories. See, e.g.: https://github.com/huggingface/datasets/actions/runs/6314980985/job/17146507884
```
FAILED tests/test_upstream_hub.py::TestPushToHub::test_push_dataset_dict_to_hub_multiple_files_with_max_shard_size - huggingface_hub.utils._errors.RepositoryNotFoundError: 404 Client Error. (Request ID: Root=1-6512fb99-4a52c561752ece3d77eb6d57;2b61cae4-613d-4a73-bbb1-2faf9e32b02d)
Repository Not Found for url: https://hub-ci.huggingface.co/api/repos/delete.
Please make sure you specified the correct `repo_id` and `repo_type`.
If you are trying to access a private or gated repo, make sure you are authenticated.
FAILED tests/test_upstream_hub.py::TestPushToHub::test_push_dataset_to_hub_custom_features_audio - huggingface_hub.utils._errors.RepositoryNotFoundError: 404 Client Error. (Request ID: Root=1-6512fbb2-0333dd666d42f0e173c2bb68;dfdc4271-b49b-4008-8c49-f05cf7c1d53d)
Repository Not Found for url: https://hub-ci.huggingface.co/api/repos/delete.
Please make sure you specified the correct `repo_id` and `repo_type`.
If you are trying to access a private or gated repo, make sure you are authenticated.
FAILED tests/test_upstream_hub.py::TestPushToHub::test_push_dataset_dict_to_hub_custom_splits - huggingface_hub.utils._errors.RepositoryNotFoundError: 404 Client Error. (Request ID: Root=1-6512fbca-167690694f39770a5b3a444e;baeaa905-0a57-4585-ac97-9aaae12dd47d)
Repository Not Found for url: https://hub-ci.huggingface.co/api/repos/delete.
Please make sure you specified the correct `repo_id` and `repo_type`.
If you are trying to access a private or gated repo, make sure you are authenticated.
```
I think this can be caused by collisions in temporary repository IDs because we create them in multiprocessing:
```python
with temporary_repo(f"{CI_HUB_USER}/test-{int(time.time() * 10e3)}") as ds_name:
```
This can also happen when another issue prevents the repository from being created in the first place, which then makes it impossible to delete.
This PR tries to fix this issue by increasing the precision of the timestamp in the repository ID: `10e6` instead of `10e3`.
Additionally, this PR catches `RepositoryNotFoundError` when deleting the temporary repositories.
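As a rough sketch of the combined idea (the real `temporary_repo` fixture lives in the test suite; this standalone version and the `CI_HUB_USER` value are illustrative only):
```python
import time
from contextlib import contextmanager

from huggingface_hub import HfApi
from huggingface_hub.utils import RepositoryNotFoundError

CI_HUB_USER = "ci-user"  # illustrative placeholder for the CI user


@contextmanager
def temporary_repo(repo_id):
    # Yield the repo id to the test, then attempt cleanup afterwards.
    try:
        yield repo_id
    finally:
        try:
            HfApi().delete_repo(repo_id, repo_type="dataset")
        except RepositoryNotFoundError:
            # The repo was never created (or was already deleted),
            # so there is nothing to clean up.
            pass


# 10e6 instead of 10e3: more timestamp digits, fewer ID collisions
# across concurrent test processes.
with temporary_repo(f"{CI_HUB_USER}/test-{int(time.time() * 10e6)}") as ds_name:
    ...
```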
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/6262/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/6262/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6262",
"html_url": "https://github.com/huggingface/datasets/pull/6262",
"diff_url": "https://github.com/huggingface/datasets/pull/6262.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6262.patch",
"merged_at": "2023-09-28T15:30:40"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/6261
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6261/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6261/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6261/events
|
https://github.com/huggingface/datasets/issues/6261
| 1,913,813,178
|
I_kwDODunzps5yEni6
| 6,261
|
Can't load a dataset
|
{
"login": "joaopedrosdmm",
"id": 37955817,
"node_id": "MDQ6VXNlcjM3OTU1ODE3",
"avatar_url": "https://avatars.githubusercontent.com/u/37955817?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joaopedrosdmm",
"html_url": "https://github.com/joaopedrosdmm",
"followers_url": "https://api.github.com/users/joaopedrosdmm/followers",
"following_url": "https://api.github.com/users/joaopedrosdmm/following{/other_user}",
"gists_url": "https://api.github.com/users/joaopedrosdmm/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joaopedrosdmm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joaopedrosdmm/subscriptions",
"organizations_url": "https://api.github.com/users/joaopedrosdmm/orgs",
"repos_url": "https://api.github.com/users/joaopedrosdmm/repos",
"events_url": "https://api.github.com/users/joaopedrosdmm/events{/privacy}",
"received_events_url": "https://api.github.com/users/joaopedrosdmm/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"I believe is due to the fact that doesn't work with .tgz files.",
"`JourneyDB/JourneyDB` is a gated dataset, so this error means you are not authenticated to access it, either by using an invalid token or by not agreeing to the terms in the dialog on the dataset page.\r\n\r\n> I believe is due to the fact that doesn't work with .tgz files.\r\n\r\nIndeed, the dataset's data files structure is not supported natively by `datasets`. To load it, one option is to clone the repo (or download it with `huggingface_hub.snapshot_download`) and use `Dataset.from_generator` to process the files.",
"> JourneyDB/JourneyDB is a gated dataset, so this error means you are not authenticated to access it, either by using an invalid token or by not agreeing to the terms in the dialog on the dataset page.´\r\n\r\nI did authentication with:\r\n\r\n```\r\nfrom huggingface_hub import notebook_login\r\nnotebook_login()\r\n```\r\n\r\nIsn't that the correct way to do it?\r\n\r\n> Indeed, the dataset's data files structure is not supported natively by datasets. To load it, one option is to clone the repo (or download it with huggingface_hub.snapshot_download) and use Dataset.from_generator to process the files.\r\n\r\nGreat suggestion I will give it a try.",
"Have you accepted the terms in the dialog [here](https://huggingface.co/datasets/JourneyDB/JourneyDB)?\r\n\r\nIIRC Kaggle preinstalls an outdated `datasets` version, so it's also a good idea to update it before importing `datasets` (and do the same for `huggingface_hub`)",
"Sorry for the late reply. Yes, I did. Thanks for the tip!"
] | 2023-09-26T15:46:25
| 2023-10-05T10:23:23
| 2023-10-05T10:23:22
|
NONE
| null |
### Describe the bug
Can't seem to load the JourneyDB dataset.
It throws the following error:
```
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
Cell In[15], line 2
1 # If the dataset is gated/private, make sure you have run huggingface-cli login
----> 2 dataset = load_dataset("JourneyDB/JourneyDB", data_files="data", use_auth_token=True)
File /opt/conda/lib/python3.10/site-packages/datasets/load.py:1664, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)
1661 ignore_verifications = ignore_verifications or save_infos
1663 # Create a dataset builder
-> 1664 builder_instance = load_dataset_builder(
1665 path=path,
1666 name=name,
1667 data_dir=data_dir,
1668 data_files=data_files,
1669 cache_dir=cache_dir,
1670 features=features,
1671 download_config=download_config,
1672 download_mode=download_mode,
1673 revision=revision,
1674 use_auth_token=use_auth_token,
1675 **config_kwargs,
1676 )
1678 # Return iterable dataset in case of streaming
1679 if streaming:
File /opt/conda/lib/python3.10/site-packages/datasets/load.py:1490, in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, use_auth_token, **config_kwargs)
1488 download_config = download_config.copy() if download_config else DownloadConfig()
1489 download_config.use_auth_token = use_auth_token
-> 1490 dataset_module = dataset_module_factory(
1491 path,
1492 revision=revision,
1493 download_config=download_config,
1494 download_mode=download_mode,
1495 data_dir=data_dir,
1496 data_files=data_files,
1497 )
1499 # Get dataset builder class from the processing script
1500 builder_cls = import_main_class(dataset_module.module_path)
File /opt/conda/lib/python3.10/site-packages/datasets/load.py:1238, in dataset_module_factory(path, revision, download_config, download_mode, force_local_path, dynamic_modules_path, data_dir, data_files, **download_kwargs)
1236 raise ConnectionError(f"Couln't reach the Hugging Face Hub for dataset '{path}': {e1}") from None
1237 if isinstance(e1, FileNotFoundError):
-> 1238 raise FileNotFoundError(
1239 f"Couldn't find a dataset script at {relative_to_absolute_path(combined_path)} or any data file in the same directory. "
1240 f"Couldn't find '{path}' on the Hugging Face Hub either: {type(e1).__name__}: {e1}"
1241 ) from None
1242 raise e1 from None
1243 else:
FileNotFoundError: Couldn't find a dataset script at /kaggle/working/JourneyDB/JourneyDB/JourneyDB.py or any data file in the same directory. Couldn't find 'JourneyDB/JourneyDB' on the Hugging Face Hub either: FileNotFoundError: Unable to find data in dataset repository JourneyDB/JourneyDB with any supported extension ['csv', 'tsv', 'json', 'jsonl', 'parquet', 'txt', 'blp', 'bmp', 'dib', 'bufr', 'cur', 'pcx', 'dcx', 'dds', 'ps', 'eps', 'fit', 'fits', 'fli', 'flc', 'ftc', 'ftu', 'gbr', 'gif', 'grib', 'h5', 'hdf', 'png', 'apng', 'jp2', 'j2k', 'jpc', 'jpf', 'jpx', 'j2c', 'icns', 'ico', 'im', 'iim', 'tif', 'tiff', 'jfif', 'jpe', 'jpg', 'jpeg', 'mpg', 'mpeg', 'msp', 'pcd', 'pxr', 'pbm', 'pgm', 'ppm', 'pnm', 'psd', 'bw', 'rgb', 'rgba', 'sgi', 'ras', 'tga', 'icb', 'vda', 'vst', 'webp', 'wmf', 'emf', 'xbm', 'xpm', 'zip']
```
### Steps to reproduce the bug
1)
```
from huggingface_hub import notebook_login
notebook_login()
```
2)
```
!pip install -q datasets
from datasets import load_dataset
```
3)
`dataset = load_dataset("JourneyDB/JourneyDB", data_files="data", use_auth_token=True)`
### Expected behavior
Load the dataset
### Environment info
Notebook
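For reference, a minimal sketch of the workaround suggested in the comments (download the repo with `huggingface_hub.snapshot_download`, then build the dataset with `Dataset.from_generator`; the `.tgz` parsing below is a placeholder, since the actual archive layout isn't shown here):
```python
import pathlib
import tarfile

from datasets import Dataset
from huggingface_hub import snapshot_download

# Requires being authenticated and having accepted the gated-dataset terms.
local_dir = snapshot_download(repo_id="JourneyDB/JourneyDB", repo_type="dataset")


def gen():
    # Hypothetical parsing loop: walk the downloaded .tgz archives and
    # yield one record per member file.
    for archive in pathlib.Path(local_dir).rglob("*.tgz"):
        with tarfile.open(archive) as tar:
            for member in tar.getmembers():
                yield {"archive": archive.name, "file_name": member.name}


ds = Dataset.from_generator(gen)
```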
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/6261/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/6261/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/6260
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6260/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6260/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6260/events
|
https://github.com/huggingface/datasets/issues/6260
| 1,912,593,466
|
I_kwDODunzps5x_9w6
| 6,260
|
REUSE_DATASET_IF_EXISTS doesn't work
|
{
"login": "rangehow",
"id": 88258534,
"node_id": "MDQ6VXNlcjg4MjU4NTM0",
"avatar_url": "https://avatars.githubusercontent.com/u/88258534?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rangehow",
"html_url": "https://github.com/rangehow",
"followers_url": "https://api.github.com/users/rangehow/followers",
"following_url": "https://api.github.com/users/rangehow/following{/other_user}",
"gists_url": "https://api.github.com/users/rangehow/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rangehow/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rangehow/subscriptions",
"organizations_url": "https://api.github.com/users/rangehow/orgs",
"repos_url": "https://api.github.com/users/rangehow/repos",
"events_url": "https://api.github.com/users/rangehow/events{/privacy}",
"received_events_url": "https://api.github.com/users/rangehow/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"Hi! Unfortunately, the current behavior is to delete the downloaded data when this error happens. So, I've opened a PR that removes the problematic import to avoid losing data due to `apache_beam` not being installed (we host the preprocessed version of `natual_questions` on the HF GCS, so requiring `apache_beam` in that case doesn't make sense)",
"Thanks for your reply. I met another question that I set `export HF_DATASETS_CACHE=/data/lxy/.cache` , but each time I run load_datasets, the datasets module still looking for NQ in the wrong default cache dir '/home/lxy/.cache' 。How to avoid this incorrect behavior. I am sure HF_DATASETS_CACHE was set correctly since I use echo & to check it.\r\n\r\nby the way I delete the file in '/home/lxy/.cache' since I found there has some kb size file seems useless.",
"You need to set this variable before the `datasets` import. Then, you can use `import datasets; datasets.config.HF_DATASETS_CACHE` to verify the cache location."
] | 2023-09-26T03:02:16
| 2023-09-28T18:23:36
| 2023-09-28T18:23:36
|
NONE
| null |
### Describe the bug
I use the following code to download the natural_questions dataset. Even though I have completely downloaded it, the next time I run this code a new download procedure starts and overwrites the original /data/lxy/NQ:
```python
config = datasets.DownloadConfig(resume_download=True, max_retries=100, cache_dir=r'/data/lxy/NQ', download_desc='NQ')
data = datasets.load_dataset('natural_questions', cache_dir=r'/data/lxy/NQ', download_config=config, download_mode=DownloadMode.REUSE_DATASET_IF_EXISTS)
```
---
Since I don't have apache_beam installed, it throws an exception. After I pip install apache_beam, the download restarts.

### Steps to reproduce the bug
Run these two lines of code:
```python
config = datasets.DownloadConfig(resume_download=True, max_retries=100, cache_dir=r'/data/lxy/NQ', download_desc='NQ')
data = datasets.load_dataset('natural_questions', cache_dir=r'/data/lxy/NQ', download_config=config, download_mode=DownloadMode.REUSE_DATASET_IF_EXISTS)
```
### Expected behavior
The download behavior should correctly follow the specified DownloadMode.
### Environment info
- `datasets` version: 2.14.4
- Platform: Linux-3.10.0-1160.88.1.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.9.17
- Huggingface_hub version: 0.16.4
- PyArrow version: 11.0.0
- Pandas version: 2.0.3
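Related note from the comments: `HF_DATASETS_CACHE` must be set before `datasets` is imported, otherwise the default `~/.cache` path is resolved at import time. A minimal sketch:
```python
import os

# Set the cache location before importing `datasets`.
os.environ["HF_DATASETS_CACHE"] = "/data/lxy/.cache"

import datasets

print(datasets.config.HF_DATASETS_CACHE)  # verify the cache location
```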
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/6260/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/6260/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/6259
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6259/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6259/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6259/events
|
https://github.com/huggingface/datasets/issues/6259
| 1,911,965,758
|
I_kwDODunzps5x9kg-
| 6,259
|
Duplicated Rows When Loading Parquet Files from Root Directory with Subdirectories
|
{
"login": "MF-FOOM",
"id": 141304309,
"node_id": "U_kgDOCGwh9Q",
"avatar_url": "https://avatars.githubusercontent.com/u/141304309?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MF-FOOM",
"html_url": "https://github.com/MF-FOOM",
"followers_url": "https://api.github.com/users/MF-FOOM/followers",
"following_url": "https://api.github.com/users/MF-FOOM/following{/other_user}",
"gists_url": "https://api.github.com/users/MF-FOOM/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MF-FOOM/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MF-FOOM/subscriptions",
"organizations_url": "https://api.github.com/users/MF-FOOM/orgs",
"repos_url": "https://api.github.com/users/MF-FOOM/repos",
"events_url": "https://api.github.com/users/MF-FOOM/events{/privacy}",
"received_events_url": "https://api.github.com/users/MF-FOOM/received_events",
"type": "User",
"site_admin": false
}
|
[] |
open
| false
|
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
] | null |
[
"Thanks for reporting this issue! We should be able to avoid this by making our `glob` patterns more precise. In the meantime, you can load the dataset by directly assigning splits to the data files: \r\n```python\r\nfrom datasets import load_dataset\r\nds = load_dataset(\"parquet\", data_files={\"train\": \"testing123/train/output_train.parquet\", \"validation\": \"testing123/val/output_val.parquet\"})\r\n```"
] | 2023-09-25T17:20:54
| 2023-09-26T17:54:08
| null |
NONE
| null |
### Describe the bug
When parquet files are saved in "train" and "val" subdirectories under a root directory, and datasets are then loaded using `load_dataset("parquet", data_dir="root_directory")`, the resulting dataset has duplicated rows for both the training and validation sets.
### Steps to reproduce the bug
1. Create a root directory, e.g., "testing123".
2. Under "testing123", create two subdirectories: "train" and "val".
3. Create and save a parquet file with 3 unique rows in the "train" subdirectory.
4. Create and save a parquet file with 4 unique rows in the "val" subdirectory.
5. Load the datasets from the root directory using `load_dataset("parquet", data_dir="testing123")`
6. Iterate through the datasets and print the rows
Here's a Colab reproducing these steps:
https://colab.research.google.com/drive/11NEdImnQ3OqJlwKSHRMhr7jCBesNdLY4?usp=sharing
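A minimal local sketch of the same steps (the `output_*.parquet` file names follow the workaround comment above; assumes `pandas` and `pyarrow` are installed):
```python
import os

import pandas as pd
from datasets import load_dataset

os.makedirs("testing123/train", exist_ok=True)
os.makedirs("testing123/val", exist_ok=True)

# 3 unique rows for train, 4 unique rows for val.
pd.DataFrame({"x": [1, 2, 3]}).to_parquet("testing123/train/output_train.parquet")
pd.DataFrame({"x": [4, 5, 6, 7]}).to_parquet("testing123/val/output_val.parquet")

ds = load_dataset("parquet", data_dir="testing123")
print(ds)  # each split reports more rows than were written
```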
### Expected behavior
- Training set should contain 3 unique rows.
- Validation set should contain 4 unique rows.
### Environment info
- `datasets` version: 2.14.5
- Platform: Linux-5.15.120+-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.17.2
- PyArrow version: 9.0.0
- Pandas version: 1.5.3
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/6259/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/6259/timeline
| null | null | null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/6258
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6258/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6258/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6258/events
|
https://github.com/huggingface/datasets/pull/6258
| 1,911,445,373
|
PR_kwDODunzps5bHxHl
| 6,258
|
[DOCS] Fix typo: Elasticsearch
|
{
"login": "leemthompo",
"id": 32779855,
"node_id": "MDQ6VXNlcjMyNzc5ODU1",
"avatar_url": "https://avatars.githubusercontent.com/u/32779855?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/leemthompo",
"html_url": "https://github.com/leemthompo",
"followers_url": "https://api.github.com/users/leemthompo/followers",
"following_url": "https://api.github.com/users/leemthompo/following{/other_user}",
"gists_url": "https://api.github.com/users/leemthompo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/leemthompo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leemthompo/subscriptions",
"organizations_url": "https://api.github.com/users/leemthompo/orgs",
"repos_url": "https://api.github.com/users/leemthompo/repos",
"events_url": "https://api.github.com/users/leemthompo/events{/privacy}",
"received_events_url": "https://api.github.com/users/leemthompo/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006131 / 0.011353 (-0.005222) | 0.003682 / 0.011008 (-0.007327) | 0.081108 / 0.038508 (0.042600) | 0.061580 / 0.023109 (0.038471) | 0.395880 / 0.275898 (0.119982) | 0.427429 / 0.323480 (0.103949) | 0.003570 / 0.007986 (-0.004416) | 0.003874 / 0.004328 (-0.000455) | 0.063322 / 0.004250 (0.059072) | 0.049742 / 0.037052 (0.012690) | 0.396547 / 0.258489 (0.138058) | 0.434759 / 0.293841 (0.140918) | 0.028137 / 0.128546 (-0.100409) | 0.008103 / 0.075646 (-0.067544) | 0.262504 / 0.419271 (-0.156767) | 0.045944 / 0.043533 (0.002411) | 0.397659 / 0.255139 (0.142520) | 0.416479 / 0.283200 (0.133280) | 0.022870 / 0.141683 (-0.118813) | 1.478280 / 1.452155 (0.026126) | 1.543748 / 1.492716 (0.051031) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.228851 / 0.018006 (0.210845) | 0.432845 / 0.000490 (0.432355) | 0.005922 / 0.000200 (0.005722) | 0.000227 / 0.000054 (0.000172) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025545 / 0.037411 (-0.011867) | 0.073506 / 0.014526 (0.058980) | 0.087622 / 0.176557 (-0.088935) | 0.145455 / 0.737135 (-0.591680) | 0.085236 / 0.296338 (-0.211102) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.433083 / 0.215209 (0.217874) | 4.323121 / 2.077655 (2.245466) | 2.297947 / 1.504120 (0.793827) | 2.126405 / 1.541195 (0.585211) | 2.201635 / 1.468490 
(0.733145) | 0.509902 / 4.584777 (-4.074875) | 3.116877 / 3.745712 (-0.628835) | 2.892949 / 5.269862 (-2.376912) | 1.866833 / 4.565676 (-2.698844) | 0.058087 / 0.424275 (-0.366189) | 0.006464 / 0.007607 (-0.001143) | 0.503594 / 0.226044 (0.277550) | 5.027634 / 2.268929 (2.758705) | 2.718030 / 55.444624 (-52.726595) | 2.373876 / 6.876477 (-4.502600) | 2.515496 / 2.142072 (0.373423) | 0.602648 / 4.805227 (-4.202579) | 0.126119 / 6.500664 (-6.374545) | 0.060623 / 0.075469 (-0.014846) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.236429 / 1.841788 (-0.605359) | 17.760532 / 8.074308 (9.686224) | 13.970093 / 10.191392 (3.778701) | 0.145455 / 0.680424 (-0.534969) | 0.017110 / 0.534201 (-0.517091) | 0.329649 / 0.579283 (-0.249634) | 0.366942 / 0.434364 (-0.067421) | 0.384418 / 0.540337 (-0.155920) | 0.552330 / 1.386936 (-0.834606) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006302 / 0.011353 (-0.005051) | 0.003677 / 0.011008 (-0.007331) | 0.062836 / 0.038508 (0.024328) | 0.063317 / 0.023109 (0.040207) | 0.449970 / 0.275898 (0.174072) | 0.480903 / 0.323480 (0.157423) | 0.005013 / 0.007986 (-0.002972) | 0.002934 / 0.004328 (-0.001394) | 0.062975 / 0.004250 (0.058724) | 0.051285 / 0.037052 (0.014233) | 0.448417 / 0.258489 (0.189928) | 0.486022 / 0.293841 (0.192181) | 0.029215 / 0.128546 (-0.099332) | 0.008189 / 0.075646 (-0.067457) | 0.068203 / 0.419271 (-0.351068) | 0.041942 / 0.043533 (-0.001591) | 0.445749 / 0.255139 (0.190610) | 0.465442 / 0.283200 (0.182243) | 0.020681 / 0.141683 (-0.121002) | 1.500704 / 1.452155 (0.048549) | 1.550511 / 1.492716 (0.057795) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.224922 / 0.018006 (0.206915) | 0.419714 / 0.000490 (0.419224) | 0.003804 / 0.000200 (0.003604) | 0.000082 / 0.000054 (0.000028) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026924 / 0.037411 (-0.010487) | 0.082400 / 0.014526 (0.067874) | 0.092193 / 0.176557 (-0.084363) | 0.147045 / 0.737135 (-0.590090) | 0.093173 / 0.296338 (-0.203166) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.462510 / 0.215209 (0.247300) | 4.635249 / 2.077655 (2.557594) | 2.627127 / 1.504120 (1.123007) | 2.442879 / 1.541195 (0.901684) | 2.502456 / 1.468490 (1.033966) | 0.506607 / 4.584777 (-4.078170) | 3.127348 / 3.745712 (-0.618364) | 2.901818 / 5.269862 (-2.368044) | 1.906876 / 4.565676 (-2.658801) | 0.058025 / 0.424275 (-0.366250) | 0.006442 / 0.007607 (-0.001165) | 0.534438 / 0.226044 (0.308394) | 5.352481 / 2.268929 (3.083553) | 3.058068 / 55.444624 (-52.386556) | 2.697310 / 6.876477 (-4.179167) | 2.873141 / 2.142072 (0.731069) | 0.594517 / 4.805227 (-4.210710) | 0.125369 / 6.500664 (-6.375295) | 0.061411 / 0.075469 (-0.014058) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.369549 / 1.841788 (-0.472238) | 17.933507 / 8.074308 (9.859199) | 14.890107 / 10.191392 (4.698715) | 0.154398 / 0.680424 (-0.526026) | 0.018021 / 0.534201 (-0.516180) | 0.335163 / 0.579283 (-0.244120) | 0.350396 / 0.434364 (-0.083968) | 0.397694 / 0.540337 (-0.142643) | 0.554853 / 1.386936 (-0.832083) |\n\n</details>\n</details>\n\n\n"
] | 2023-09-25T12:50:59
| 2023-09-26T14:55:35
| 2023-09-26T13:36:40
|
CONTRIBUTOR
| null |
Not ElasticSearch :)
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/6258/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/6258/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6258",
"html_url": "https://github.com/huggingface/datasets/pull/6258",
"diff_url": "https://github.com/huggingface/datasets/pull/6258.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6258.patch",
"merged_at": "2023-09-26T13:36:40"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/6257
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6257/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6257/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6257/events
|
https://github.com/huggingface/datasets/issues/6257
| 1,910,741,044
|
I_kwDODunzps5x45g0
| 6,257
|
HfHubHTTPError - exceeded our hourly quotas for action: commit
|
{
"login": "yuvalkirstain",
"id": 57996478,
"node_id": "MDQ6VXNlcjU3OTk2NDc4",
"avatar_url": "https://avatars.githubusercontent.com/u/57996478?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yuvalkirstain",
"html_url": "https://github.com/yuvalkirstain",
"followers_url": "https://api.github.com/users/yuvalkirstain/followers",
"following_url": "https://api.github.com/users/yuvalkirstain/following{/other_user}",
"gists_url": "https://api.github.com/users/yuvalkirstain/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yuvalkirstain/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yuvalkirstain/subscriptions",
"organizations_url": "https://api.github.com/users/yuvalkirstain/orgs",
"repos_url": "https://api.github.com/users/yuvalkirstain/repos",
"events_url": "https://api.github.com/users/yuvalkirstain/events{/privacy}",
"received_events_url": "https://api.github.com/users/yuvalkirstain/received_events",
"type": "User",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[
"how is your dataset structured? (file types, how many commits and files are you trying to push, etc)",
"I succeeded in uploading it after several attempts with an hour gap between each attempt (inconvenient but worked). The final dataset is [here](https://huggingface.co/datasets/yuvalkirstain/pickapic_v2), code and context to the dataset can be found [here](https://github.com/yuvalkirstain/PickScore/).\r\nI can close the issue if this behavior is intended, as most users probably do not need to upload large-scale datasets.",
"We could fix this by creating a single commit for all the (Parquet) shards in `push_to_hub` instead of one commit per shard, as we currently do. \r\n\r\n@Wauplin Any updates on the 2-step commit process suggested by you that we need to implement this?",
"> Any updates on the 2-step commit process suggested by you that we need to implement this?\r\n\r\nRe-prioritizing this, sorry. Will let you know but probably can be done this week."
] | 2023-09-25T06:11:43
| 2023-09-27T07:04:59
| null |
NONE
| null |
### Describe the bug
I am trying to upload a very large dataset of images and get the following error:
```
File /fsx-multigen/yuvalkirstain/miniconda/envs/pickapic/lib/python3.10/site-packages/huggingface_hub/hf_api.py:2712, in HfApi.create_commit(self, repo_id, operations, commit_message, commit_description, token, repo_type, revision, create_pr, num_threads, parent_commit, run_as_future)
2710 try:
2711 commit_resp = get_session().post(url=commit_url, headers=headers, data=data, params=params)
-> 2712 hf_raise_for_status(commit_resp, endpoint_name="commit")
2713 except RepositoryNotFoundError as e:
2714 e.append_to_message(_CREATE_COMMIT_NO_REPO_ERROR_MESSAGE)
File /fsx-multigen/yuvalkirstain/miniconda/envs/pickapic/lib/python3.10/site-packages/huggingface_hub/utils/_errors.py:301, in hf_raise_for_status(response, endpoint_name)
297 raise BadRequestError(message, response=response) from e
299 # Convert `HTTPError` into a `HfHubHTTPError` to display request information
300 # as well (request id and/or server error message)
--> 301 raise HfHubHTTPError(str(e), response=response) from e
HfHubHTTPError: 429 Client Error: Too Many Requests for url: https://huggingface.co/api/datasets/yuvalkirstain/pickapic_v2/commit/main (Request ID: Root=1-65112399-12d63f7d7f28bfa40a36a0fd)
You have exceeded our hourly quotas for action: commit. We invite you to retry later.
```
This makes it much less convenient to host large datasets on the HF Hub.
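In the meantime, a minimal sketch of a client-side workaround (retrying with an hourly backoff, essentially the manual retry described in the comments above; the local path is a placeholder):
```python
import time

from datasets import load_from_disk
from huggingface_hub.utils import HfHubHTTPError

ds = load_from_disk("/path/to/local/dataset")  # placeholder path

for attempt in range(24):
    try:
        ds.push_to_hub("yuvalkirstain/pickapic_v2")
        break  # all shards committed
    except HfHubHTTPError as err:
        # 429 means the hourly commit quota was exceeded;
        # the quota is hourly, so wait an hour before retrying
        if err.response is not None and err.response.status_code == 429:
            time.sleep(60 * 60)
        else:
            raise
```
Note that each retry restarts `push_to_hub` from the beginning, so this is a stopgap until shards are batched into fewer commits.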
### Steps to reproduce the bug
Upload a very large dataset of images
### Expected behavior
The upload should complete without hitting the hourly quota.
### Environment info
- `datasets` version: 2.13.1
- Platform: Linux-5.15.0-1033-aws-x86_64-with-glibc2.31
- Python version: 3.10.11
- Huggingface_hub version: 0.15.1
- PyArrow version: 12.0.1
- Pandas version: 1.5.3
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/6257/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/6257/timeline
| null | null | null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/6256
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6256/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6256/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6256/events
|
https://github.com/huggingface/datasets/issues/6256
| 1,910,275,199
|
I_kwDODunzps5x3Hx_
| 6,256
|
load_dataset() function's cache_dir does not seem to work
|
{
"login": "andyzhu",
"id": 171831,
"node_id": "MDQ6VXNlcjE3MTgzMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/171831?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/andyzhu",
"html_url": "https://github.com/andyzhu",
"followers_url": "https://api.github.com/users/andyzhu/followers",
"following_url": "https://api.github.com/users/andyzhu/following{/other_user}",
"gists_url": "https://api.github.com/users/andyzhu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/andyzhu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/andyzhu/subscriptions",
"organizations_url": "https://api.github.com/users/andyzhu/orgs",
"repos_url": "https://api.github.com/users/andyzhu/repos",
"events_url": "https://api.github.com/users/andyzhu/events{/privacy}",
"received_events_url": "https://api.github.com/users/andyzhu/received_events",
"type": "User",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[
"Can you share the error message?\r\n\r\nAlso, it would help if you could check whether `huggingface_hub`'s download behaves the same:\r\n```python\r\nfrom huggingface_hub import snapshot_download\r\nsnapshot_download(\"trec\", repo_type=\"dataset\", cache_dir='/path/to/my/dir)\r\n```\r\n\r\nIn the next major release, we aim to switch to `huggingface_hub` for file download/caching, but we could align the `cache_dir`'s `umask` behavior earlier than this if their solution works for your use case."
] | 2023-09-24T15:34:06
| 2023-09-27T13:40:45
| null |
NONE
| null |
### Describe the bug
datasets version: 2.14.5
When trying to run the following command
trec = load_dataset('trec', split='train[:1000]', cache_dir='/path/to/my/dir')
I keep getting an error saying the command does not have permission to access the default cache directory on my MacBook Pro.
It seems the cache_dir parameter cannot change the dataset saving directory from the default;
whatever is explained in https://huggingface.co/docs/datasets/cache does not seem to work.
### Steps to reproduce the bug
datasets version: 2.14.5
When trying to run the following command
trec = load_dataset('trec', split='train[:1000]', cache_dir='/path/to/my/dir')
I keep getting an error saying the command does not have permission to access the default cache directory on my MacBook Pro.
It seems the cache_dir parameter cannot change the dataset saving directory from the default;
whatever is explained in https://huggingface.co/docs/datasets/cache does not seem to work.
### Expected behavior
The dataset should be saved to the directory that cache_dir points to.
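A minimal sketch of the two supported ways to redirect the cache (the path is a placeholder): passing cache_dir per call, or setting the HF_DATASETS_CACHE environment variable before importing datasets.
```python
import os

# Option 1: set the cache location globally, before importing datasets
os.environ["HF_DATASETS_CACHE"] = "/path/to/my/dir"  # placeholder path

from datasets import load_dataset

# Option 2: pass cache_dir explicitly for a single call
trec = load_dataset("trec", split="train[:1000]", cache_dir="/path/to/my/dir")

# cache_files shows where the Arrow files were actually written
print(trec.cache_files)
```
If both options still write to the default location, the problem is more likely filesystem permissions than the parameter itself.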
### Environment info
datasets version: 2.14.5
macOS: Ventura 13.4.1 (c)
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/6256/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/6256/timeline
| null | null | null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/6255
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6255/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6255/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6255/events
|
https://github.com/huggingface/datasets/pull/6255
| 1,909,842,977
|
PR_kwDODunzps5bCioS
| 6,255
|
Parallelize builder configs creation
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005905 / 0.011353 (-0.005448) | 0.003623 / 0.011008 (-0.007385) | 0.079616 / 0.038508 (0.041108) | 0.059840 / 0.023109 (0.036730) | 0.392281 / 0.275898 (0.116383) | 0.434539 / 0.323480 (0.111059) | 0.004746 / 0.007986 (-0.003239) | 0.002935 / 0.004328 (-0.001394) | 0.062907 / 0.004250 (0.058657) | 0.048233 / 0.037052 (0.011181) | 0.394170 / 0.258489 (0.135681) | 0.427430 / 0.293841 (0.133589) | 0.027382 / 0.128546 (-0.101164) | 0.007890 / 0.075646 (-0.067756) | 0.259681 / 0.419271 (-0.159591) | 0.044085 / 0.043533 (0.000552) | 0.388640 / 0.255139 (0.133501) | 0.412665 / 0.283200 (0.129465) | 0.021256 / 0.141683 (-0.120427) | 1.485672 / 1.452155 (0.033518) | 1.531410 / 1.492716 (0.038694) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.220346 / 0.018006 (0.202340) | 0.425329 / 0.000490 (0.424840) | 0.006224 / 0.000200 (0.006024) | 0.000208 / 0.000054 (0.000153) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024864 / 0.037411 (-0.012547) | 0.072925 / 0.014526 (0.058399) | 0.083711 / 0.176557 (-0.092845) | 0.144213 / 0.737135 (-0.592923) | 0.084201 / 0.296338 (-0.212137) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.399467 / 0.215209 (0.184258) | 3.978979 / 2.077655 (1.901325) | 1.916994 / 1.504120 (0.412874) | 1.753098 / 1.541195 (0.211903) | 1.809866 / 1.468490 
(0.341376) | 0.506806 / 4.584777 (-4.077971) | 3.051044 / 3.745712 (-0.694668) | 2.857624 / 5.269862 (-2.412237) | 1.872033 / 4.565676 (-2.693644) | 0.058543 / 0.424275 (-0.365732) | 0.006569 / 0.007607 (-0.001038) | 0.472630 / 0.226044 (0.246586) | 4.724862 / 2.268929 (2.455934) | 2.413068 / 55.444624 (-53.031556) | 2.046910 / 6.876477 (-4.829567) | 2.190455 / 2.142072 (0.048383) | 0.595228 / 4.805227 (-4.210000) | 0.125942 / 6.500664 (-6.374722) | 0.059474 / 0.075469 (-0.015995) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.235927 / 1.841788 (-0.605861) | 17.367803 / 8.074308 (9.293495) | 13.550362 / 10.191392 (3.358970) | 0.131664 / 0.680424 (-0.548760) | 0.016331 / 0.534201 (-0.517870) | 0.331295 / 0.579283 (-0.247988) | 0.367641 / 0.434364 (-0.066723) | 0.382595 / 0.540337 (-0.157742) | 0.540361 / 1.386936 (-0.846575) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006120 / 0.011353 (-0.005233) | 0.003691 / 0.011008 (-0.007318) | 0.062768 / 0.038508 (0.024259) | 0.058045 / 0.023109 (0.034936) | 0.443616 / 0.275898 (0.167718) | 0.473854 / 0.323480 (0.150374) | 0.004710 / 0.007986 (-0.003275) | 0.002915 / 0.004328 (-0.001414) | 0.062922 / 0.004250 (0.058672) | 0.048557 / 0.037052 (0.011505) | 0.446136 / 0.258489 (0.187647) | 0.479235 / 0.293841 (0.185394) | 0.028704 / 0.128546 (-0.099842) | 0.008170 / 0.075646 (-0.067477) | 0.068853 / 0.419271 (-0.350419) | 0.041393 / 0.043533 (-0.002140) | 0.444683 / 0.255139 (0.189544) | 0.466607 / 0.283200 (0.183407) | 0.020890 / 0.141683 (-0.120793) | 1.473745 / 1.452155 (0.021590) | 1.498772 / 1.492716 (0.006055) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.216875 / 0.018006 (0.198868) | 0.411700 / 0.000490 (0.411211) | 0.003337 / 0.000200 (0.003137) | 0.000079 / 0.000054 (0.000024) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027054 / 0.037411 (-0.010357) | 0.080617 / 0.014526 (0.066092) | 0.091052 / 0.176557 (-0.085505) | 0.144126 / 0.737135 (-0.593009) | 0.090123 / 0.296338 (-0.206216) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.461132 / 0.215209 (0.245922) | 4.598662 / 2.077655 (2.521008) | 2.539213 / 1.504120 (1.035093) | 2.362782 / 1.541195 (0.821587) | 2.428648 / 1.468490 (0.960157) | 0.506305 / 4.584777 (-4.078472) | 3.091132 / 3.745712 (-0.654581) | 2.884870 / 5.269862 (-2.384992) | 1.880806 / 4.565676 (-2.684870) | 0.058727 / 0.424275 (-0.365548) | 0.006452 / 0.007607 (-0.001155) | 0.533519 / 0.226044 (0.307474) | 5.346406 / 2.268929 (3.077478) | 2.987920 / 55.444624 (-52.456704) | 2.667591 / 6.876477 (-4.208885) | 2.848696 / 2.142072 (0.706623) | 0.601018 / 4.805227 (-4.204209) | 0.124929 / 6.500664 (-6.375735) | 0.061583 / 0.075469 (-0.013886) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.356825 / 1.841788 (-0.484962) | 17.964503 / 8.074308 (9.890195) | 14.691471 / 10.191392 (4.500079) | 0.132525 / 0.680424 (-0.547899) | 0.018061 / 0.534201 (-0.516140) | 0.335459 / 0.579283 (-0.243824) | 0.378260 / 0.434364 (-0.056104) | 0.390681 / 0.540337 (-0.149657) | 0.547030 / 1.386936 (-0.839906) |\n\n</details>\n</details>\n\n\n",
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006624 / 0.011353 (-0.004729) | 0.004039 / 0.011008 (-0.006970) | 0.085862 / 0.038508 (0.047354) | 0.077183 / 0.023109 (0.054074) | 0.319132 / 0.275898 (0.043234) | 0.350818 / 0.323480 (0.027338) | 0.004122 / 0.007986 (-0.003864) | 0.003395 / 0.004328 (-0.000934) | 0.065237 / 0.004250 (0.060987) | 0.056675 / 0.037052 (0.019623) | 0.321040 / 0.258489 (0.062551) | 0.362011 / 0.293841 (0.068170) | 0.030988 / 0.128546 (-0.097559) | 0.008623 / 0.075646 (-0.067023) | 0.289433 / 0.419271 (-0.129839) | 0.052755 / 0.043533 (0.009222) | 0.323291 / 0.255139 (0.068152) | 0.340110 / 0.283200 (0.056911) | 0.026299 / 0.141683 (-0.115383) | 1.509405 / 1.452155 (0.057250) | 1.559993 / 1.492716 (0.067277) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.233285 / 0.018006 (0.215279) | 0.451633 / 0.000490 (0.451143) | 0.009954 / 0.000200 (0.009754) | 0.000098 / 0.000054 (0.000043) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029623 / 0.037411 (-0.007788) | 0.083942 / 0.014526 (0.069416) | 0.097378 / 0.176557 (-0.079178) | 0.152630 / 0.737135 (-0.584506) | 0.098379 / 0.296338 (-0.197959) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.386237 / 0.215209 (0.171028) | 3.850805 / 2.077655 (1.773150) | 1.896032 / 1.504120 (0.391912) | 1.729746 / 1.541195 (0.188551) | 1.867831 / 1.468490 
(0.399341) | 0.481496 / 4.584777 (-4.103281) | 3.564432 / 3.745712 (-0.181280) | 3.336084 / 5.269862 (-1.933777) | 2.040944 / 4.565676 (-2.524732) | 0.057247 / 0.424275 (-0.367028) | 0.007275 / 0.007607 (-0.000332) | 0.464600 / 0.226044 (0.238556) | 4.648562 / 2.268929 (2.379634) | 2.394430 / 55.444624 (-53.050195) | 2.029748 / 6.876477 (-4.846728) | 2.280975 / 2.142072 (0.138902) | 0.619073 / 4.805227 (-4.186154) | 0.150504 / 6.500664 (-6.350160) | 0.061206 / 0.075469 (-0.014263) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.267309 / 1.841788 (-0.574479) | 19.062725 / 8.074308 (10.988417) | 14.192565 / 10.191392 (4.001173) | 0.162908 / 0.680424 (-0.517515) | 0.018445 / 0.534201 (-0.515756) | 0.392110 / 0.579283 (-0.187173) | 0.415340 / 0.434364 (-0.019024) | 0.456783 / 0.540337 (-0.083554) | 0.653019 / 1.386936 (-0.733917) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006995 / 0.011353 (-0.004358) | 0.004027 / 0.011008 (-0.006981) | 0.064124 / 0.038508 (0.025616) | 0.076004 / 0.023109 (0.052895) | 0.401760 / 0.275898 (0.125862) | 0.432339 / 0.323480 (0.108859) | 0.005471 / 0.007986 (-0.002515) | 0.003335 / 0.004328 (-0.000993) | 0.064164 / 0.004250 (0.059913) | 0.058101 / 0.037052 (0.021048) | 0.401698 / 0.258489 (0.143209) | 0.436033 / 0.293841 (0.142192) | 0.032789 / 0.128546 (-0.095757) | 0.008482 / 0.075646 (-0.067165) | 0.070707 / 0.419271 (-0.348565) | 0.048287 / 0.043533 (0.004755) | 0.395501 / 0.255139 (0.140362) | 0.419385 / 0.283200 (0.136186) | 0.024043 / 0.141683 (-0.117640) | 1.503310 / 1.452155 (0.051156) | 1.562160 / 1.492716 (0.069444) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.227629 / 0.018006 (0.209623) | 0.457306 / 0.000490 (0.456816) | 0.005835 / 0.000200 (0.005635) | 0.000109 / 0.000054 (0.000054) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032991 / 0.037411 (-0.004420) | 0.093265 / 0.014526 (0.078739) | 0.106595 / 0.176557 (-0.069961) | 0.158557 / 0.737135 (-0.578578) | 0.106805 / 0.296338 (-0.189533) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.436573 / 0.215209 (0.221364) | 4.355777 / 2.077655 (2.278122) | 2.323151 / 1.504120 (0.819031) | 2.164101 / 1.541195 (0.622906) | 2.252808 / 1.468490 (0.784318) | 0.494902 / 4.584777 (-4.089875) | 3.615073 / 3.745712 (-0.130639) | 3.329738 / 5.269862 (-1.940124) | 2.059137 / 4.565676 (-2.506539) | 0.058384 / 0.424275 (-0.365891) | 0.007330 / 0.007607 (-0.000277) | 0.512326 / 0.226044 (0.286281) | 5.125652 / 2.268929 (2.856724) | 2.861981 / 55.444624 (-52.582644) | 2.500172 / 6.876477 (-4.376305) | 2.715862 / 2.142072 (0.573789) | 0.597299 / 4.805227 (-4.207928) | 0.134346 / 6.500664 (-6.366318) | 0.060396 / 0.075469 (-0.015074) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.353771 / 1.841788 (-0.488017) | 19.334801 / 8.074308 (11.260493) | 14.669875 / 10.191392 (4.478483) | 0.167607 / 0.680424 (-0.512817) | 0.019839 / 0.534201 (-0.514362) | 0.395473 / 0.579283 (-0.183810) | 0.419822 / 0.434364 (-0.014542) | 0.471400 / 0.540337 (-0.068938) | 0.648297 / 1.386936 (-0.738639) |\n\n</details>\n</details>\n\n\n",
"@mariosasko let me know what you think or if you have better ideas to make it faster",
"Yea lazy data files resolution seems a better approach actually"
] | 2023-09-23T11:56:20
| 2023-09-26T15:44:47
| 2023-09-26T15:44:19
|
MEMBER
| null |
For datasets with lots of configs defined in YAML.
E.g. `load_dataset("uonlp/CulturaX", "fr", revision="refs/pr/6")` goes from >1 min to 15 s.
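A minimal sketch of the general idea (not the actual implementation in this PR; `make_config` and the kwargs list are hypothetical): build the per-config `BuilderConfig` objects in a thread pool so that any per-config I/O, such as data file resolution, can overlap.
```python
from concurrent.futures import ThreadPoolExecutor

from datasets.builder import BuilderConfig

def make_config(kwargs: dict) -> BuilderConfig:
    # hypothetical helper: turn one YAML config entry into a BuilderConfig
    return BuilderConfig(**kwargs)

# hypothetical YAML entries; real ones would also carry data_files patterns
config_kwargs_list = [{"name": "fr"}, {"name": "en"}, {"name": "de"}]

# threads suffice here, assuming the per-config work is mostly I/O bound
with ThreadPoolExecutor(max_workers=16) as pool:
    configs = list(pool.map(make_config, config_kwargs_list))
```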
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/6255/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/6255/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6255",
"html_url": "https://github.com/huggingface/datasets/pull/6255",
"diff_url": "https://github.com/huggingface/datasets/pull/6255.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6255.patch",
"merged_at": null
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/6254
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6254/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6254/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6254/events
|
https://github.com/huggingface/datasets/issues/6254
| 1,909,672,104
|
I_kwDODunzps5x00io
| 6,254
|
Dataset.from_generator() costs much more time in VS Code debugging mode than in running mode
|
{
"login": "dontnet-wuenze",
"id": 56437469,
"node_id": "MDQ6VXNlcjU2NDM3NDY5",
"avatar_url": "https://avatars.githubusercontent.com/u/56437469?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dontnet-wuenze",
"html_url": "https://github.com/dontnet-wuenze",
"followers_url": "https://api.github.com/users/dontnet-wuenze/followers",
"following_url": "https://api.github.com/users/dontnet-wuenze/following{/other_user}",
"gists_url": "https://api.github.com/users/dontnet-wuenze/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dontnet-wuenze/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dontnet-wuenze/subscriptions",
"organizations_url": "https://api.github.com/users/dontnet-wuenze/orgs",
"repos_url": "https://api.github.com/users/dontnet-wuenze/repos",
"events_url": "https://api.github.com/users/dontnet-wuenze/events{/privacy}",
"received_events_url": "https://api.github.com/users/dontnet-wuenze/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"Answered on the forum: https://discuss.huggingface.co/t/dataset-from-generator-cost-much-more-time-in-vscode-debugging-mode-then-running-mode/56005/2"
] | 2023-09-23T02:07:26
| 2023-10-03T14:42:53
| 2023-10-03T14:42:53
|
NONE
| null |
### Describe the bug
Hey there,
I’m using Dataset.from_generator() to convert a torch dataset to a Hugging Face Dataset.
However, when I debug my code in VS Code, I find that Dataset.from_generator() runs really slowly: it can take even 20 times longer than running the script from the terminal.
### Steps to reproduce the bug
I wrote a simple test script:
```python
import time
from typing import Callable

from torch.utils.data import Dataset as TorchDataset
from datasets import Dataset as HFDataset


class SimpleDataset(TorchDataset):
    def __init__(self, data):
        self.data = data
        self.keys = list(data[0].keys())

    def __len__(self):
        return len(self.data)

    def __getitem__(self, index):
        sample = self.data[index]
        return {key: sample[key] for key in self.keys}


def TorchDataset2HuggingfaceDataset(torch_dataset: TorchDataset, cache_dir: str = None) -> HFDataset:
    """Convert a torch dataset to a Hugging Face dataset."""
    generator: Callable = lambda: (sample for sample in torch_dataset)
    return HFDataset.from_generator(generator, cache_dir=cache_dir)


if __name__ == '__main__':
    data = [
        {'id': 1, 'name': 'Alice'},
        {'id': 2, 'name': 'Bob'},
        {'id': 3, 'name': 'Charlie'}
    ]
    torch_dataset = SimpleDataset(data)

    start_time = time.time()
    huggingface_dataset = TorchDataset2HuggingfaceDataset(torch_dataset)
    end_time = time.time()

    print("time: ", end_time - start_time)
    print(huggingface_dataset)
```
### Expected behavior
On my machine, this test reports a running time of 0.086 s in the terminal,
but 0.25 s in VS Code debugging mode, which I think is much longer than expected.
I’d like to know whether there is anything wrong in the code, or whether this is just an effect of debugging.
I have traced the code and found the function where it gets stuck:
```python
def create_config_id(
    self,
    config_kwargs: dict,
    custom_features: Optional[Features] = None,
) -> str:
    ...
    # stuck on this line
    suffix = Hasher.hash(config_kwargs_to_add_to_suffix)
```
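To confirm the trace, a quick check is to profile the conversion in both modes with the standard library profiler; a minimal sketch reusing the script above:
```python
import cProfile
import pstats

profiler = cProfile.Profile()
profiler.enable()
huggingface_dataset = TorchDataset2HuggingfaceDataset(torch_dataset)
profiler.disable()

# print the 10 slowest cumulative calls; per the trace above,
# the extra time should show up under Hasher.hash
pstats.Stats(profiler).sort_stats("cumulative").print_stats(10)
```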
### Environment info
- `datasets` version: 2.12.0
- Platform: Linux-5.11.0-27-generic-x86_64-with-glibc2.31
- Python version: 3.11.3
- Huggingface_hub version: 0.17.2
- PyArrow version: 11.0.0
- Pandas version: 2.0.1
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/6254/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/6254/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/6253
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6253/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6253/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6253/events
|
https://github.com/huggingface/datasets/pull/6253
| 1,906,618,910
|
PR_kwDODunzps5a3s__
| 6,253
|
Check builder cls default config name in inspect
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006591 / 0.011353 (-0.004762) | 0.003991 / 0.011008 (-0.007017) | 0.085197 / 0.038508 (0.046689) | 0.080312 / 0.023109 (0.057202) | 0.342026 / 0.275898 (0.066128) | 0.370749 / 0.323480 (0.047269) | 0.004124 / 0.007986 (-0.003861) | 0.003413 / 0.004328 (-0.000916) | 0.064363 / 0.004250 (0.060113) | 0.055920 / 0.037052 (0.018868) | 0.340667 / 0.258489 (0.082178) | 0.380138 / 0.293841 (0.086297) | 0.031115 / 0.128546 (-0.097431) | 0.008511 / 0.075646 (-0.067135) | 0.289065 / 0.419271 (-0.130207) | 0.052266 / 0.043533 (0.008734) | 0.343808 / 0.255139 (0.088669) | 0.353578 / 0.283200 (0.070378) | 0.024006 / 0.141683 (-0.117676) | 1.490322 / 1.452155 (0.038168) | 1.591133 / 1.492716 (0.098417) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.234718 / 0.018006 (0.216712) | 0.447023 / 0.000490 (0.446533) | 0.009343 / 0.000200 (0.009143) | 0.000259 / 0.000054 (0.000204) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030466 / 0.037411 (-0.006945) | 0.083367 / 0.014526 (0.068841) | 0.100532 / 0.176557 (-0.076024) | 0.158018 / 0.737135 (-0.579117) | 0.098280 / 0.296338 (-0.198059) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.408501 / 0.215209 (0.193292) | 4.066937 / 2.077655 (1.989282) | 2.034029 / 1.504120 (0.529909) | 1.842982 / 1.541195 (0.301788) | 1.987319 / 1.468490 
(0.518829) | 0.492126 / 4.584777 (-4.092651) | 3.554027 / 3.745712 (-0.191685) | 3.289023 / 5.269862 (-1.980839) | 2.069796 / 4.565676 (-2.495880) | 0.057930 / 0.424275 (-0.366346) | 0.007308 / 0.007607 (-0.000299) | 0.482596 / 0.226044 (0.256552) | 4.830714 / 2.268929 (2.561785) | 2.506787 / 55.444624 (-52.937838) | 2.163498 / 6.876477 (-4.712979) | 2.389135 / 2.142072 (0.247062) | 0.597538 / 4.805227 (-4.207689) | 0.134268 / 6.500664 (-6.366396) | 0.061189 / 0.075469 (-0.014280) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.245328 / 1.841788 (-0.596460) | 19.145151 / 8.074308 (11.070843) | 14.742121 / 10.191392 (4.550729) | 0.144749 / 0.680424 (-0.535675) | 0.018433 / 0.534201 (-0.515768) | 0.391867 / 0.579283 (-0.187416) | 0.416555 / 0.434364 (-0.017809) | 0.454341 / 0.540337 (-0.085997) | 0.646833 / 1.386936 (-0.740103) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006669 / 0.011353 (-0.004684) | 0.004031 / 0.011008 (-0.006978) | 0.064347 / 0.038508 (0.025839) | 0.076857 / 0.023109 (0.053748) | 0.415864 / 0.275898 (0.139966) | 0.468615 / 0.323480 (0.145135) | 0.005383 / 0.007986 (-0.002603) | 0.003314 / 0.004328 (-0.001015) | 0.064829 / 0.004250 (0.060578) | 0.057182 / 0.037052 (0.020129) | 0.417055 / 0.258489 (0.158566) | 0.472725 / 0.293841 (0.178884) | 0.031938 / 0.128546 (-0.096608) | 0.008564 / 0.075646 (-0.067082) | 0.070649 / 0.419271 (-0.348623) | 0.047439 / 0.043533 (0.003906) | 0.409589 / 0.255139 (0.154450) | 0.433700 / 0.283200 (0.150500) | 0.024132 / 0.141683 (-0.117551) | 1.500825 / 1.452155 (0.048670) | 1.592059 / 1.492716 (0.099343) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.225652 / 0.018006 (0.207646) | 0.444188 / 0.000490 (0.443698) | 0.004581 / 0.000200 (0.004381) | 0.000104 / 0.000054 (0.000050) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033272 / 0.037411 (-0.004139) | 0.096833 / 0.014526 (0.082307) | 0.107134 / 0.176557 (-0.069422) | 0.159299 / 0.737135 (-0.577836) | 0.107533 / 0.296338 (-0.188806) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.429100 / 0.215209 (0.213890) | 4.281051 / 2.077655 (2.203396) | 2.318713 / 1.504120 (0.814593) | 2.165645 / 1.541195 (0.624451) | 2.250224 / 1.468490 (0.781734) | 0.495791 / 4.584777 (-4.088986) | 3.591953 / 3.745712 (-0.153760) | 3.303426 / 5.269862 (-1.966436) | 2.076861 / 4.565676 (-2.488816) | 0.058369 / 0.424275 (-0.365906) | 0.007387 / 0.007607 (-0.000220) | 0.501270 / 0.226044 (0.275225) | 5.014987 / 2.268929 (2.746059) | 2.800951 / 55.444624 (-52.643673) | 2.464316 / 6.876477 (-4.412161) | 2.685259 / 2.142072 (0.543187) | 0.584797 / 4.805227 (-4.220430) | 0.131889 / 6.500664 (-6.368775) | 0.061021 / 0.075469 (-0.014448) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.366982 / 1.841788 (-0.474806) | 19.820376 / 8.074308 (11.746068) | 14.968664 / 10.191392 (4.777272) | 0.165344 / 0.680424 (-0.515080) | 0.019956 / 0.534201 (-0.514245) | 0.395843 / 0.579283 (-0.183441) | 0.420854 / 0.434364 (-0.013510) | 0.465065 / 0.540337 (-0.075272) | 0.651531 / 1.386936 (-0.735405) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005974 / 0.011353 (-0.005379) | 0.003714 / 0.011008 (-0.007294) | 0.080049 / 0.038508 (0.041541) | 0.061233 / 0.023109 (0.038124) | 0.317187 / 0.275898 (0.041289) | 0.352725 / 0.323480 (0.029245) | 0.004867 / 0.007986 (-0.003119) | 0.002953 / 0.004328 (-0.001376) | 0.063156 / 0.004250 (0.058905) | 0.046752 / 0.037052 (0.009700) | 0.320171 / 0.258489 (0.061682) | 0.367572 / 0.293841 (0.073731) | 0.027253 / 0.128546 (-0.101293) | 0.008100 / 0.075646 (-0.067546) | 0.261206 / 0.419271 (-0.158066) | 0.044581 / 0.043533 (0.001048) | 0.331169 / 0.255139 (0.076030) | 0.348719 / 0.283200 (0.065519) | 0.021397 / 0.141683 (-0.120286) | 1.528315 / 1.452155 (0.076160) | 1.533789 / 1.492716 (0.041073) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.233336 / 0.018006 (0.215329) | 0.416866 / 0.000490 (0.416376) | 0.008805 / 0.000200 (0.008605) | 0.000240 / 0.000054 (0.000186) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024754 / 0.037411 (-0.012657) | 0.073311 / 0.014526 (0.058785) | 0.085419 / 0.176557 (-0.091138) | 0.146380 / 0.737135 (-0.590756) | 0.085545 / 0.296338 (-0.210793) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.431426 / 0.215209 (0.216217) | 4.315899 / 2.077655 (2.238244) | 2.232492 / 1.504120 (0.728372) | 2.064174 / 1.541195 (0.522979) | 2.158982 / 1.468490 
(0.690492) | 0.499375 / 4.584777 (-4.085402) | 3.093259 / 3.745712 (-0.652454) | 2.848260 / 5.269862 (-2.421601) | 1.853097 / 4.565676 (-2.712579) | 0.057143 / 0.424275 (-0.367132) | 0.006349 / 0.007607 (-0.001258) | 0.507747 / 0.226044 (0.281702) | 5.078872 / 2.268929 (2.809944) | 2.717697 / 55.444624 (-52.726927) | 2.363564 / 6.876477 (-4.512913) | 2.485756 / 2.142072 (0.343684) | 0.595888 / 4.805227 (-4.209340) | 0.127285 / 6.500664 (-6.373379) | 0.060639 / 0.075469 (-0.014830) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.219287 / 1.841788 (-0.622501) | 17.300038 / 8.074308 (9.225730) | 13.747230 / 10.191392 (3.555838) | 0.144841 / 0.680424 (-0.535583) | 0.016587 / 0.534201 (-0.517614) | 0.336891 / 0.579283 (-0.242392) | 0.376128 / 0.434364 (-0.058236) | 0.385749 / 0.540337 (-0.154588) | 0.552218 / 1.386936 (-0.834718) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006477 / 0.011353 (-0.004876) | 0.003709 / 0.011008 (-0.007299) | 0.064708 / 0.038508 (0.026200) | 0.062627 / 0.023109 (0.039518) | 0.444721 / 0.275898 (0.168823) | 0.477825 / 0.323480 (0.154345) | 0.004890 / 0.007986 (-0.003096) | 0.002896 / 0.004328 (-0.001432) | 0.063781 / 0.004250 (0.059530) | 0.050488 / 0.037052 (0.013436) | 0.453466 / 0.258489 (0.194977) | 0.483303 / 0.293841 (0.189462) | 0.028814 / 0.128546 (-0.099732) | 0.008207 / 0.075646 (-0.067440) | 0.070140 / 0.419271 (-0.349131) | 0.041487 / 0.043533 (-0.002045) | 0.454599 / 0.255139 (0.199460) | 0.468374 / 0.283200 (0.185174) | 0.019758 / 0.141683 (-0.121925) | 1.437542 / 1.452155 (-0.014613) | 1.507965 / 1.492716 (0.015249) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.223358 / 0.018006 (0.205352) | 0.413824 / 0.000490 (0.413334) | 0.004593 / 0.000200 (0.004393) | 0.000089 / 0.000054 (0.000035) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026278 / 0.037411 (-0.011134) | 0.081992 / 0.014526 (0.067466) | 0.089969 / 0.176557 (-0.086587) | 0.143668 / 0.737135 (-0.593467) | 0.091273 / 0.296338 (-0.205066) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.461198 / 0.215209 (0.245989) | 4.615398 / 2.077655 (2.537743) | 2.552291 / 1.504120 (1.048171) | 2.373789 / 1.541195 (0.832595) | 2.431591 / 1.468490 (0.963101) | 0.507683 / 4.584777 (-4.077094) | 3.148771 / 3.745712 (-0.596941) | 2.849118 / 5.269862 (-2.420744) | 1.883001 / 4.565676 (-2.682675) | 0.059423 / 0.424275 (-0.364852) | 0.006463 / 0.007607 (-0.001144) | 0.535129 / 0.226044 (0.309085) | 5.362870 / 2.268929 (3.093941) | 3.016548 / 55.444624 (-52.428076) | 2.666205 / 6.876477 (-4.210271) | 2.821396 / 2.142072 (0.679324) | 0.606596 / 4.805227 (-4.198631) | 0.125991 / 6.500664 (-6.374673) | 0.063566 / 0.075469 (-0.011903) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.364771 / 1.841788 (-0.477017) | 18.000713 / 8.074308 (9.926404) | 14.840330 / 10.191392 (4.648937) | 0.144770 / 0.680424 (-0.535653) | 0.018060 / 0.534201 (-0.516141) | 0.334470 / 0.579283 (-0.244813) | 0.387386 / 0.434364 (-0.046978) | 0.398743 / 0.540337 (-0.141595) | 0.555437 / 1.386936 (-0.831499) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006491 / 0.011353 (-0.004862) | 0.004058 / 0.011008 (-0.006950) | 0.084462 / 0.038508 (0.045954) | 0.072310 / 0.023109 (0.049201) | 0.352458 / 0.275898 (0.076560) | 0.385829 / 0.323480 (0.062350) | 0.003978 / 0.007986 (-0.004007) | 0.003455 / 0.004328 (-0.000873) | 0.064070 / 0.004250 (0.059819) | 0.055577 / 0.037052 (0.018525) | 0.361288 / 0.258489 (0.102799) | 0.400147 / 0.293841 (0.106306) | 0.030785 / 0.128546 (-0.097761) | 0.008676 / 0.075646 (-0.066971) | 0.287481 / 0.419271 (-0.131791) | 0.052643 / 0.043533 (0.009110) | 0.354670 / 0.255139 (0.099531) | 0.382322 / 0.283200 (0.099122) | 0.025657 / 0.141683 (-0.116026) | 1.486798 / 1.452155 (0.034643) | 1.588439 / 1.492716 (0.095723) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.240881 / 0.018006 (0.222875) | 0.463997 / 0.000490 (0.463507) | 0.009688 / 0.000200 (0.009488) | 0.000601 / 0.000054 (0.000546) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029071 / 0.037411 (-0.008340) | 0.083077 / 0.014526 (0.068551) | 0.119857 / 0.176557 (-0.056699) | 0.153387 / 0.737135 (-0.583749) | 0.132162 / 0.296338 (-0.164177) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.383822 / 0.215209 (0.168613) | 3.828572 / 2.077655 (1.750918) | 1.877629 / 1.504120 (0.373509) | 1.708757 / 1.541195 (0.167562) | 1.771658 / 1.468490 
(0.303168) | 0.482439 / 4.584777 (-4.102338) | 3.496247 / 3.745712 (-0.249466) | 3.282055 / 5.269862 (-1.987807) | 2.053069 / 4.565676 (-2.512607) | 0.056626 / 0.424275 (-0.367649) | 0.007338 / 0.007607 (-0.000269) | 0.461257 / 0.226044 (0.235213) | 4.605326 / 2.268929 (2.336397) | 2.408365 / 55.444624 (-53.036260) | 1.986550 / 6.876477 (-4.889926) | 2.225220 / 2.142072 (0.083148) | 0.601301 / 4.805227 (-4.203927) | 0.132217 / 6.500664 (-6.368447) | 0.061217 / 0.075469 (-0.014252) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.268706 / 1.841788 (-0.573081) | 18.892026 / 8.074308 (10.817717) | 14.093892 / 10.191392 (3.902500) | 0.162483 / 0.680424 (-0.517941) | 0.018372 / 0.534201 (-0.515829) | 0.391901 / 0.579283 (-0.187382) | 0.401578 / 0.434364 (-0.032786) | 0.456741 / 0.540337 (-0.083596) | 0.646760 / 1.386936 (-0.740176) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006657 / 0.011353 (-0.004696) | 0.003981 / 0.011008 (-0.007027) | 0.066126 / 0.038508 (0.027617) | 0.072673 / 0.023109 (0.049564) | 0.409970 / 0.275898 (0.134072) | 0.430797 / 0.323480 (0.107317) | 0.005477 / 0.007986 (-0.002508) | 0.003362 / 0.004328 (-0.000966) | 0.065532 / 0.004250 (0.061282) | 0.056018 / 0.037052 (0.018966) | 0.406676 / 0.258489 (0.148187) | 0.438516 / 0.293841 (0.144675) | 0.032795 / 0.128546 (-0.095751) | 0.008580 / 0.075646 (-0.067066) | 0.072692 / 0.419271 (-0.346579) | 0.048110 / 0.043533 (0.004577) | 0.396826 / 0.255139 (0.141687) | 0.418442 / 0.283200 (0.135242) | 0.023269 / 0.141683 (-0.118414) | 1.499438 / 1.452155 (0.047283) | 1.568842 / 1.492716 (0.076126) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.218729 / 0.018006 (0.200723) | 0.450771 / 0.000490 (0.450281) | 0.004996 / 0.000200 (0.004796) | 0.000086 / 0.000054 (0.000031) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031484 / 0.037411 (-0.005928) | 0.092927 / 0.014526 (0.078401) | 0.107849 / 0.176557 (-0.068707) | 0.156658 / 0.737135 (-0.580478) | 0.106373 / 0.296338 (-0.189965) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.434658 / 0.215209 (0.219449) | 4.336386 / 2.077655 (2.258731) | 2.322577 / 1.504120 (0.818457) | 2.149505 / 1.541195 (0.608310) | 2.201967 / 1.468490 (0.733476) | 0.496994 / 4.584777 (-4.087783) | 3.533065 / 3.745712 (-0.212647) | 3.235750 / 5.269862 (-2.034112) | 2.034854 / 4.565676 (-2.530823) | 0.058258 / 0.424275 (-0.366017) | 0.007260 / 0.007607 (-0.000347) | 0.509115 / 0.226044 (0.283071) | 5.088427 / 2.268929 (2.819499) | 2.793551 / 55.444624 (-52.651073) | 2.430588 / 6.876477 (-4.445889) | 2.625998 / 2.142072 (0.483926) | 0.611676 / 4.805227 (-4.193552) | 0.133343 / 6.500664 (-6.367321) | 0.059888 / 0.075469 (-0.015581) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.377292 / 1.841788 (-0.464496) | 19.214299 / 8.074308 (11.139991) | 14.629146 / 10.191392 (4.437754) | 0.171283 / 0.680424 (-0.509141) | 0.020348 / 0.534201 (-0.513853) | 0.397823 / 0.579283 (-0.181461) | 0.411590 / 0.434364 (-0.022774) | 0.470850 / 0.540337 (-0.069487) | 0.658667 / 1.386936 (-0.728269) |\n\n</details>\n</details>\n\n\n"
] | 2023-09-21T10:15:32
| 2023-09-21T14:16:44
| 2023-09-21T14:08:00
|
MEMBER
| null |
Fix https://github.com/huggingface/datasets-server/issues/1812
This bug was causing the following inconsistency:
```ipython
In [1]: from datasets import *
In [2]: inspect.get_dataset_config_names("aakanksha/udpos")
Out[2]: ['default']
In [3]: load_dataset_builder("aakanksha/udpos").config.name
Out[3]: 'en'
```
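
A quick consistency check after the fix (a sketch; the dataset name is taken from the example above, and the assertion assumes the fix makes both APIs agree):

```python
from datasets import get_dataset_config_names, load_dataset_builder

# After the fix, both APIs should report the same configuration name.
configs = get_dataset_config_names("aakanksha/udpos")
builder = load_dataset_builder("aakanksha/udpos")
assert builder.config.name in configs
```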
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/6253/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/6253/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6253",
"html_url": "https://github.com/huggingface/datasets/pull/6253",
"diff_url": "https://github.com/huggingface/datasets/pull/6253.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6253.patch",
"merged_at": "2023-09-21T14:08:00"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/6252
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6252/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6252/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6252/events
|
https://github.com/huggingface/datasets/issues/6252
| 1,906,375,378
|
I_kwDODunzps5xoPrS
| 6,252
|
exif_transpose not done to Image (PIL problem)
|
{
"login": "rhajou",
"id": 108274349,
"node_id": "U_kgDOBnQirQ",
"avatar_url": "https://avatars.githubusercontent.com/u/108274349?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rhajou",
"html_url": "https://github.com/rhajou",
"followers_url": "https://api.github.com/users/rhajou/followers",
"following_url": "https://api.github.com/users/rhajou/following{/other_user}",
"gists_url": "https://api.github.com/users/rhajou/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rhajou/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rhajou/subscriptions",
"organizations_url": "https://api.github.com/users/rhajou/orgs",
"repos_url": "https://api.github.com/users/rhajou/repos",
"events_url": "https://api.github.com/users/rhajou/events{/privacy}",
"received_events_url": "https://api.github.com/users/rhajou/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] |
open
| false
| null |
[] |
{
"url": "https://api.github.com/repos/huggingface/datasets/milestones/10",
"html_url": "https://github.com/huggingface/datasets/milestone/10",
"labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/10/labels",
"id": 9038583,
"node_id": "MI_kwDODunzps4Aier3",
"number": 10,
"title": "3.0",
"description": "Next major release",
"creator": {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
},
"open_issues": 4,
"closed_issues": 0,
"state": "open",
"created_at": "2023-02-13T16:22:42",
"updated_at": "2023-09-22T14:07:52",
"due_on": null,
"closed_at": null
}
|
[
"Indeed, it makes sense to do this by default. \r\n\r\nIn the meantime, you can use `.with_transform` to transpose the images when accessing them:\r\n\r\n```python\r\nimport PIL.ImageOps\r\n\r\ndef exif_transpose_transform(batch):\r\n batch[\"image\"] = [PIL.ImageOps.exif_transpose(image) for image in batch[\"image\"]]\r\n return batch\r\n\r\ndataset = dataset.with_transform(exif_transpose_transform)\r\n```",
"This operation sets some `Image` attributes to `None` (`.format`, `.filename`, etc.), causing our tests to fail, so I think we should wait for Datasets 3.0 to make this change. In version 3.0, storing image paths will be replaced by embedding image bytes, so there will be fewer instances where we use the `.filename` attribute."
] | 2023-09-21T08:11:46
| 2023-09-22T14:07:52
| null |
NONE
| null |
### Feature request
I noticed that some of my images loaded with PIL carry EXIF metadata that can rotate them at load time.
Since `datasets.features.Image` uses PIL for loading, the loaded image may be rotated (width and height swapped); for tasks such as object detection and LayoutLM, this can create inconsistencies between the input bounding boxes and the input images.
For now there is no option in `datasets.features.Image` to handle this. We need to do the following when preparing examples (for training, testing or inference):
```
from PIL import Image, ImageOps

pil = Image.open("example.jpg")     # placeholder path; the file may carry an EXIF orientation tag
pil = ImageOps.exif_transpose(pil)  # bake the EXIF orientation into the pixel data
```
reference: https://stackoverflow.com/a/63950647/5720150
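In the meantime, a dataset-level workaround could look like this (a minimal sketch, assuming an `imagefolder`-style dataset; the data directory is a placeholder):

```python
import PIL.ImageOps
from datasets import load_dataset

ds = load_dataset("imagefolder", data_dir="/path/to/images")  # placeholder path

def fix_orientation(example):
    # Bake the EXIF orientation into the pixel data so width/height match the bboxes.
    example["image"] = PIL.ImageOps.exif_transpose(example["image"])
    return example

ds = ds.map(fix_orientation)
```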
Is it possible to add this by default to `datasets.features.Image`, or to add an option to apply `ImageOps.exif_transpose`?
Thank you
### Motivation
Prevent having inverted data related to exif metadata that may affect object detection tasks
### Your contribution
I can help with changing `datasets.features.Image` to do that.
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/6252/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/6252/timeline
| null | null | null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/6251
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6251/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6251/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6251/events
|
https://github.com/huggingface/datasets/pull/6251
| 1,904,418,426
|
PR_kwDODunzps5awQsy
| 6,251
|
Support streaming datasets with pyarrow.parquet.read_table
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"This function reads an entire Arrow table in one go, which is not ideal memory-wise, so I don't think we should encourage using this function, considering we want to keep RAM usage as low as possible in the streaming mode. \r\n\r\n(Note that Parquet files are compressed, meaning the loaded table can be significantly larger than the size in Parquet.)\r\n\r\nInstead, we should suggest the authors to use:\r\n```python\r\nwith open(doc_path, \"rb\") as f:\r\n parquet_file = pq.ParquetFile(f)\r\n for batch in parquet_file.iter_batches():\r\n pa_table = pa.Table.from_batches([batch])\r\n yield idx, pa_table\r\n idx += 1\r\n```",
"@mariosasko I think the potential problem you evoke is independent of whether or not we support streaming mode:\r\n- if the user's script with `read_table` works in non-streaming mode, it will also work in streaming mode after this PR\r\n\r\nIn fact, what we should suggest instead is to follow the scriptless approach, so that our `parquet` packaged module is used, with all the optimizations implemented. But this approach is not possible in all cases, and some use cases need to implement a script. And if they have small Parquet files and use `read_table`, I think we should support streaming.\r\n\r\nIn summary, let me clarify the goal and the scope of this PR:\r\n- a user needs using a loading script\r\n- their files are small enough so that they use `read_table`\r\n- their loading script works in non-streaming mode\r\n- therefore, this PR allows loading their dataset in streaming mode as well",
"Yes, the no-script approach with metadata configs makes the most sense.\r\n\r\n> their files are small enough so that they use read_table\r\n\r\nSome of the Parquet files in that repo are larger than 1 GB ...\r\n\r\nAlso, I'd wait for more instances of people using the `read_table` function on the Hub before merging this PR.",
"@mariosasko, yes, this solution is not specifically for the \"uonlp/CulturaX\" dataset, but for other use cases as I explained above: indeed, they finally removed the use of `read_table` because their data files are too large.\r\n\r\n> Also, I'd wait for more instances of people using the `read_table` function on the Hub before merging this PR.\r\n\r\nDo you know how many datasets are currently using `read_table`?",
"> Do you know how many datasets are currently using read_table?\r\n\r\nZero (based on the script that checks the script contents of the public Hub datasets). ",
"I see... Thanks! :hugs: ",
"@mariosasko thanks for pointing the script! :hugs: \r\n\r\nHowever, I have found some Hub datasets that are using `read_table`, e.g.:\r\n- https://huggingface.co/datasets/jglaser/protein_ligand_contacts\r\n- https://huggingface.co/datasets/AresEkb/prof_standards_sbert_large_mt_nlu_ru\r\n- https://huggingface.co/datasets/victorcosta/pt_legislation\r\n- https://huggingface.co/datasets/jglaser/binding_affinity\r\n- https://huggingface.co/datasets/jglaser/pdbbind_complexes\r\n- https://huggingface.co/datasets/victorcosta/ria_pt__proems_format",
"I'm merging this PR as discussed in private.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008267 / 0.011353 (-0.003086) | 0.005813 / 0.011008 (-0.005195) | 0.108802 / 0.038508 (0.070294) | 0.093996 / 0.023109 (0.070886) | 0.403115 / 0.275898 (0.127217) | 0.457299 / 0.323480 (0.133819) | 0.006277 / 0.007986 (-0.001709) | 0.004701 / 0.004328 (0.000373) | 0.080700 / 0.004250 (0.076449) | 0.077906 / 0.037052 (0.040854) | 0.409972 / 0.258489 (0.151483) | 0.477707 / 0.293841 (0.183867) | 0.041816 / 0.128546 (-0.086731) | 0.011250 / 0.075646 (-0.064397) | 0.390634 / 0.419271 (-0.028637) | 0.065361 / 0.043533 (0.021828) | 0.404501 / 0.255139 (0.149362) | 0.448162 / 0.283200 (0.164962) | 0.032823 / 0.141683 (-0.108860) | 1.899892 / 1.452155 (0.447737) | 2.044561 / 1.492716 (0.551844) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.241093 / 0.018006 (0.223086) | 0.482111 / 0.000490 (0.481622) | 0.005505 / 0.000200 (0.005305) | 0.000094 / 0.000054 (0.000039) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034861 / 0.037411 (-0.002551) | 0.109296 / 0.014526 (0.094770) | 0.127594 / 0.176557 (-0.048962) | 0.191815 / 0.737135 (-0.545320) | 0.122630 / 0.296338 (-0.173709) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.452194 / 0.215209 (0.236985) | 4.486200 / 2.077655 (2.408545) | 2.155635 / 1.504120 (0.651515) | 2.004569 / 1.541195 (0.463374) | 2.142570 / 1.468490 
(0.674080) | 0.561488 / 4.584777 (-4.023289) | 4.381102 / 3.745712 (0.635390) | 3.914920 / 5.269862 (-1.354942) | 2.474271 / 4.565676 (-2.091406) | 0.067528 / 0.424275 (-0.356747) | 0.008723 / 0.007607 (0.001116) | 0.536077 / 0.226044 (0.310033) | 5.342050 / 2.268929 (3.073122) | 2.735747 / 55.444624 (-52.708877) | 2.353938 / 6.876477 (-4.522539) | 2.442878 / 2.142072 (0.300805) | 0.685404 / 4.805227 (-4.119823) | 0.156657 / 6.500664 (-6.344007) | 0.071714 / 0.075469 (-0.003755) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.562852 / 1.841788 (-0.278935) | 24.538203 / 8.074308 (16.463895) | 16.857777 / 10.191392 (6.666385) | 0.184221 / 0.680424 (-0.496203) | 0.021688 / 0.534201 (-0.512513) | 0.470700 / 0.579283 (-0.108583) | 0.470593 / 0.434364 (0.036229) | 0.645066 / 0.540337 (0.104729) | 0.756075 / 1.386936 (-0.630861) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009486 / 0.011353 (-0.001867) | 0.004694 / 0.011008 (-0.006314) | 0.080216 / 0.038508 (0.041708) | 0.093479 / 0.023109 (0.070369) | 0.537353 / 0.275898 (0.261455) | 0.551631 / 0.323480 (0.228151) | 0.007373 / 0.007986 (-0.000613) | 0.004044 / 0.004328 (-0.000285) | 0.075301 / 0.004250 (0.071051) | 0.069408 / 0.037052 (0.032355) | 0.527962 / 0.258489 (0.269473) | 0.559423 / 0.293841 (0.265582) | 0.039351 / 0.128546 (-0.089195) | 0.010801 / 0.075646 (-0.064845) | 0.092803 / 0.419271 (-0.326468) | 0.058876 / 0.043533 (0.015343) | 0.513742 / 0.255139 (0.258603) | 0.574666 / 0.283200 (0.291466) | 0.030277 / 0.141683 (-0.111406) | 1.884936 / 1.452155 (0.432782) | 2.008260 / 1.492716 (0.515543) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.242162 / 0.018006 (0.224156) | 0.467400 / 0.000490 (0.466910) | 0.005348 / 0.000200 (0.005148) | 0.000103 / 0.000054 (0.000049) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.038022 / 0.037411 (0.000611) | 0.108239 / 0.014526 (0.093713) | 0.121514 / 0.176557 (-0.055042) | 0.184951 / 0.737135 (-0.552184) | 0.123138 / 0.296338 (-0.173200) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.558587 / 0.215209 (0.343377) | 5.740312 / 2.077655 (3.662657) | 3.172164 / 1.504120 (1.668044) | 2.852908 / 1.541195 (1.311713) | 2.894435 / 1.468490 (1.425945) | 0.586399 / 4.584777 (-3.998378) | 4.498342 / 3.745712 (0.752630) | 4.000569 / 5.269862 (-1.269292) | 2.610988 / 4.565676 (-1.954688) | 0.068415 / 0.424275 (-0.355860) | 0.008602 / 0.007607 (0.000994) | 0.614731 / 0.226044 (0.388686) | 6.068158 / 2.268929 (3.799229) | 3.301070 / 55.444624 (-52.143554) | 2.868034 / 6.876477 (-4.008443) | 2.959072 / 2.142072 (0.816999) | 0.684174 / 4.805227 (-4.121053) | 0.154099 / 6.500664 (-6.346565) | 0.070641 / 0.075469 (-0.004828) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.835667 / 1.841788 (-0.006120) | 24.981645 / 8.074308 (16.907337) | 17.218517 / 10.191392 (7.027125) | 0.197055 / 0.680424 (-0.483368) | 0.025465 / 0.534201 (-0.508736) | 0.523498 / 0.579283 (-0.055785) | 0.528268 / 0.434364 (0.093904) | 0.599518 / 0.540337 (0.059180) | 0.887206 / 1.386936 (-0.499730) |\n\n</details>\n</details>\n\n\n"
] | 2023-09-20T08:07:02
| 2023-09-27T06:37:03
| 2023-09-27T06:26:24
|
MEMBER
| null |
Support streaming datasets with `pyarrow.parquet.read_table`.
See: https://huggingface.co/datasets/uonlp/CulturaX/discussions/2
CC: @AndreaFrancis
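
For context, the pattern this PR enables inside a loading script looks roughly like this (a minimal sketch, assuming small Parquet files and a `filepath` passed via `gen_kwargs`; in streaming mode `open` is patched by `datasets`, so the same code also works on remote files):

```python
import pyarrow.parquet as pq

def _generate_tables(self, filepath):
    # Reads the whole file into one Arrow table; only reasonable for small files.
    with open(filepath, "rb") as f:
        pa_table = pq.read_table(f)
    yield 0, pa_table
```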
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/6251/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/6251/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6251",
"html_url": "https://github.com/huggingface/datasets/pull/6251",
"diff_url": "https://github.com/huggingface/datasets/pull/6251.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6251.patch",
"merged_at": "2023-09-27T06:26:24"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/6247
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6247/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6247/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6247/events
|
https://github.com/huggingface/datasets/pull/6247
| 1,901,390,945
|
PR_kwDODunzps5amAQ1
| 6,247
|
Update create_dataset.mdx
|
{
"login": "EswarDivi",
"id": 76403422,
"node_id": "MDQ6VXNlcjc2NDAzNDIy",
"avatar_url": "https://avatars.githubusercontent.com/u/76403422?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/EswarDivi",
"html_url": "https://github.com/EswarDivi",
"followers_url": "https://api.github.com/users/EswarDivi/followers",
"following_url": "https://api.github.com/users/EswarDivi/following{/other_user}",
"gists_url": "https://api.github.com/users/EswarDivi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/EswarDivi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/EswarDivi/subscriptions",
"organizations_url": "https://api.github.com/users/EswarDivi/orgs",
"repos_url": "https://api.github.com/users/EswarDivi/repos",
"events_url": "https://api.github.com/users/EswarDivi/events{/privacy}",
"received_events_url": "https://api.github.com/users/EswarDivi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008892 / 0.011353 (-0.002461) | 0.005140 / 0.011008 (-0.005868) | 0.110951 / 0.038508 (0.072442) | 0.086159 / 0.023109 (0.063050) | 0.391117 / 0.275898 (0.115218) | 0.440884 / 0.323480 (0.117404) | 0.006562 / 0.007986 (-0.001423) | 0.003711 / 0.004328 (-0.000618) | 0.081848 / 0.004250 (0.077598) | 0.063187 / 0.037052 (0.026135) | 0.369771 / 0.258489 (0.111282) | 0.447685 / 0.293841 (0.153844) | 0.046623 / 0.128546 (-0.081923) | 0.014024 / 0.075646 (-0.061622) | 0.418556 / 0.419271 (-0.000715) | 0.064660 / 0.043533 (0.021127) | 0.379416 / 0.255139 (0.124277) | 0.415800 / 0.283200 (0.132600) | 0.036899 / 0.141683 (-0.104784) | 1.710280 / 1.452155 (0.258125) | 1.932326 / 1.492716 (0.439610) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.311351 / 0.018006 (0.293345) | 0.621121 / 0.000490 (0.620631) | 0.013677 / 0.000200 (0.013477) | 0.000543 / 0.000054 (0.000488) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031310 / 0.037411 (-0.006102) | 0.099546 / 0.014526 (0.085020) | 0.122100 / 0.176557 (-0.054457) | 0.186477 / 0.737135 (-0.550659) | 0.116634 / 0.296338 (-0.179704) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.574639 / 0.215209 (0.359430) | 5.976678 / 2.077655 (3.899023) | 2.535482 / 1.504120 (1.031362) | 2.248873 / 1.541195 (0.707678) | 2.361696 / 1.468490 
(0.893205) | 0.866700 / 4.584777 (-3.718077) | 5.298018 / 3.745712 (1.552306) | 4.753240 / 5.269862 (-0.516622) | 3.124698 / 4.565676 (-1.440979) | 0.101852 / 0.424275 (-0.322423) | 0.009117 / 0.007607 (0.001510) | 0.723730 / 0.226044 (0.497685) | 7.172649 / 2.268929 (4.903720) | 3.400410 / 55.444624 (-52.044214) | 2.626619 / 6.876477 (-4.249857) | 2.948692 / 2.142072 (0.806620) | 0.991589 / 4.805227 (-3.813638) | 0.208902 / 6.500664 (-6.291762) | 0.076172 / 0.075469 (0.000703) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.621880 / 1.841788 (-0.219907) | 22.735673 / 8.074308 (14.661365) | 20.376990 / 10.191392 (10.185598) | 0.232219 / 0.680424 (-0.448204) | 0.028616 / 0.534201 (-0.505585) | 0.455725 / 0.579283 (-0.123558) | 0.562796 / 0.434364 (0.128432) | 0.545344 / 0.540337 (0.005007) | 0.759440 / 1.386936 (-0.627496) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009845 / 0.011353 (-0.001508) | 0.005289 / 0.011008 (-0.005719) | 0.083117 / 0.038508 (0.044609) | 0.098467 / 0.023109 (0.075357) | 0.532345 / 0.275898 (0.256447) | 0.571000 / 0.323480 (0.247520) | 0.007223 / 0.007986 (-0.000763) | 0.004442 / 0.004328 (0.000114) | 0.081710 / 0.004250 (0.077459) | 0.071132 / 0.037052 (0.034080) | 0.540093 / 0.258489 (0.281604) | 0.582244 / 0.293841 (0.288403) | 0.048509 / 0.128546 (-0.080038) | 0.013897 / 0.075646 (-0.061749) | 0.092579 / 0.419271 (-0.326692) | 0.073409 / 0.043533 (0.029876) | 0.537369 / 0.255139 (0.282230) | 0.551403 / 0.283200 (0.268203) | 0.038847 / 0.141683 (-0.102835) | 1.940848 / 1.452155 (0.488693) | 2.045597 / 1.492716 (0.552881) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.303883 / 0.018006 (0.285877) | 0.600237 / 0.000490 (0.599748) | 0.006030 / 0.000200 (0.005830) | 0.000124 / 0.000054 (0.000070) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.036633 / 0.037411 (-0.000778) | 0.105853 / 0.014526 (0.091327) | 0.126289 / 0.176557 (-0.050267) | 0.190022 / 0.737135 (-0.547113) | 0.123251 / 0.296338 (-0.173087) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.711893 / 0.215209 (0.496684) | 6.979781 / 2.077655 (4.902126) | 3.491514 / 1.504120 (1.987394) | 3.268077 / 1.541195 (1.726882) | 3.241777 / 1.468490 (1.773287) | 0.875913 / 4.584777 (-3.708864) | 5.458421 / 3.745712 (1.712709) | 4.818355 / 5.269862 (-0.451507) | 3.256046 / 4.565676 (-1.309631) | 0.095000 / 0.424275 (-0.329275) | 0.009072 / 0.007607 (0.001465) | 0.818468 / 0.226044 (0.592424) | 8.027702 / 2.268929 (5.758773) | 4.363234 / 55.444624 (-51.081390) | 3.695269 / 6.876477 (-3.181207) | 3.902601 / 2.142072 (1.760528) | 1.039007 / 4.805227 (-3.766220) | 0.212050 / 6.500664 (-6.288614) | 0.081438 / 0.075469 (0.005969) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.746945 / 1.841788 (-0.094842) | 25.274283 / 8.074308 (17.199975) | 23.514717 / 10.191392 (13.323325) | 0.232580 / 0.680424 (-0.447843) | 0.032083 / 0.534201 (-0.502118) | 0.482873 / 0.579283 (-0.096410) | 0.585730 / 0.434364 (0.151366) | 0.602066 / 0.540337 (0.061729) | 0.796391 / 1.386936 (-0.590546) |\n\n</details>\n</details>\n\n\n"
] | 2023-09-18T17:06:29
| 2023-09-19T18:51:49
| 2023-09-19T18:40:10
|
CONTRIBUTOR
| null |
Modified the docs, as `AudioFolder` and `ImageFolder` do not exist in the `datasets` library.
Changed ```from datasets import AudioFolder``` and ```from datasets import ImageFolder``` to ```from datasets import load_dataset```, since the former imports fail with:
```
cannot import name 'AudioFolder' from 'datasets' (/home/eswardivi/miniconda3/envs/Hugformers/lib/python3.10/site-packages/datasets/__init__.py)
```
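
With the corrected import, loading a local image folder looks like this (a sketch; the directory path is a placeholder):

```python
from datasets import load_dataset

# The "imagefolder" packaged builder replaces the nonexistent `ImageFolder` import.
dataset = load_dataset("imagefolder", data_dir="/path/to/images")
```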
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/6247/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/6247/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6247",
"html_url": "https://github.com/huggingface/datasets/pull/6247",
"diff_url": "https://github.com/huggingface/datasets/pull/6247.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6247.patch",
"merged_at": "2023-09-19T18:40:10"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/6246
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6246/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6246/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6246/events
|
https://github.com/huggingface/datasets/issues/6246
| 1,899,848,414
|
I_kwDODunzps5xPWLe
| 6,246
|
Add new column to dataset
|
{
"login": "andysingal",
"id": 20493493,
"node_id": "MDQ6VXNlcjIwNDkzNDkz",
"avatar_url": "https://avatars.githubusercontent.com/u/20493493?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/andysingal",
"html_url": "https://github.com/andysingal",
"followers_url": "https://api.github.com/users/andysingal/followers",
"following_url": "https://api.github.com/users/andysingal/following{/other_user}",
"gists_url": "https://api.github.com/users/andysingal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/andysingal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/andysingal/subscriptions",
"organizations_url": "https://api.github.com/users/andysingal/orgs",
"repos_url": "https://api.github.com/users/andysingal/repos",
"events_url": "https://api.github.com/users/andysingal/events{/privacy}",
"received_events_url": "https://api.github.com/users/andysingal/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"I think it's an issue with the code.\r\n\r\nSpecifically:\r\n```python\r\ndataset = dataset['train'].add_column(\"/workspace/data\", new_column)\r\n```\r\n\r\nNow `dataset` is the train set with a new column. \r\nTo fix this, you can do:\r\n\r\n```python\r\ndataset['train'] = dataset['train'].add_column(\"/workspace/data\", new_column)\r\n```",
"> I think it's an issue with the code.\r\n> \r\n> Specifically:\r\n> \r\n> ```python\r\n> dataset = dataset['train'].add_column(\"/workspace/data\", new_column)\r\n> ```\r\n> \r\n> Now `dataset` is the train set with a new column. To fix this, you can do:\r\n> \r\n> ```python\r\n> dataset['train'] = dataset['train'].add_column(\"/workspace/data\", new_column)\r\n> ```\r\n\r\nThanks for your response, but i can not access mask images, please let me know why the problem still persists. Here is the notebook for reference: https://colab.research.google.com/drive/10lZ_zLtU4itYVmIVTvIEVbjfOtCZaAZy?usp=sharing ",
"I think there is a slight misunderstanding.\r\n```python\r\nnew_column = [\"mask\"] * len(dataset[\"train\"])\r\ndataset['train'] = dataset['train'].add_column(\"/workspace/data\", new_column)\r\n```\r\n\r\nadds a column with the string `mask` to your dataset.\r\nIf you're trying to load the images `\"mask_{idx}.png\"` in your dataset, you could try:\r\n\r\n```\r\nfrom datasets import Image\r\n\r\ndataset['train'] = dataset['train'].map(lambda u, idx: {'mask': f\"/workspace/data/mask_{idx}.png\", with_indices=True).cast_column(\"mask\", Image())\r\n```\r\n\r\nWhat this does is that it adds a column to your dataset name `mask` with the path to the mask, then it cast the column as an `Image` feature.\r\n\r\nThis [link](https://huggingface.co/docs/datasets/v2.5.1/en/image_load) explains how to load images.\r\n\r\nHope this helps!",
"> I think there is a slight misunderstanding.\r\n> \r\n> ```python\r\n> new_column = [\"mask\"] * len(dataset[\"train\"])\r\n> dataset['train'] = dataset['train'].add_column(\"/workspace/data\", new_column)\r\n> ```\r\n> \r\n> adds a column with the string `mask` to your dataset. If you're trying to load the images `\"mask_{idx}.png\"` in your dataset, you could try:\r\n> \r\n> ```\r\n> from datasets import Image\r\n> \r\n> dataset['train'] = dataset['train'].map(lambda u, idx: {'mask': f\"/workspace/data/mask_{idx}.png\", with_indices=True).cast_column(\"mask\", Image())\r\n> ```\r\n> \r\n> What this does is that it adds a column to your dataset name `mask` with the path to the mask, then it cast the column as an `Image` feature.\r\n> \r\n> This [link](https://huggingface.co/docs/datasets/v2.5.1/en/image_load) explains how to load images.\r\n> \r\n> Hope this helps!\r\n\r\nThank you very much, this is really helpful...\r\ni made some changes for it to work:\r\n```\r\ndataset['train'] = dataset['train'].map(lambda u, idx: {'mask': f\"/content/data/mask_{idx}.png\"}, with_indices=True).cast_column(\"mask\", Image())\r\n```\r\nThanks Again @Dref360 "
] | 2023-09-17T16:59:48
| 2023-09-18T16:20:09
| 2023-09-18T16:20:09
|
NONE
| null |
### Describe the bug
```
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
[<ipython-input-9-bd197b36b6a0>](https://localhost:8080/#) in <cell line: 1>()
----> 1 dataset['train']['/workspace/data']
3 frames
[/usr/local/lib/python3.10/dist-packages/datasets/formatting/formatting.py](https://localhost:8080/#) in _check_valid_column_key(key, columns)
518 def _check_valid_column_key(key: str, columns: List[str]) -> None:
519 if key not in columns:
--> 520 raise KeyError(f"Column {key} not in the dataset. Current columns in the dataset: {columns}")
521
522
KeyError: "Column train not in the dataset. Current columns in the dataset: ['image', '/workspace/data']"
```
### Steps to reproduce the bug
please find the notebook for reference: https://colab.research.google.com/drive/10lZ_zLtU4itYVmIVTvIEVbjfOtCZaAZy?usp=sharing
### Expected behavior
The new column should be added to the dataset and be retrievable afterwards.
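
For reference, a minimal sketch of the intended pattern (note that the split, not the whole `DatasetDict`, must be reassigned; the dataset name and column values are illustrative):

```python
from datasets import load_dataset

ds = load_dataset("beans")  # illustrative dataset with a "train" split
new_column = ["mask"] * len(ds["train"])
ds["train"] = ds["train"].add_column("mask_path", new_column)
print(ds["train"]["mask_path"][:3])
```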
### Environment info
Colab Pro
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/6246/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/6246/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/6244
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6244/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6244/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6244/events
|
https://github.com/huggingface/datasets/pull/6244
| 1,898,861,422
|
PR_kwDODunzps5adtD3
| 6,244
|
Add support for `fsspec>=2023.9.0`
|
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006410 / 0.011353 (-0.004943) | 0.003995 / 0.011008 (-0.007013) | 0.083585 / 0.038508 (0.045076) | 0.074285 / 0.023109 (0.051176) | 0.307163 / 0.275898 (0.031265) | 0.344691 / 0.323480 (0.021212) | 0.004277 / 0.007986 (-0.003708) | 0.004192 / 0.004328 (-0.000136) | 0.065156 / 0.004250 (0.060905) | 0.056774 / 0.037052 (0.019721) | 0.315483 / 0.258489 (0.056994) | 0.361911 / 0.293841 (0.068070) | 0.030454 / 0.128546 (-0.098092) | 0.008600 / 0.075646 (-0.067047) | 0.286692 / 0.419271 (-0.132579) | 0.052354 / 0.043533 (0.008821) | 0.308997 / 0.255139 (0.053858) | 0.337847 / 0.283200 (0.054647) | 0.022459 / 0.141683 (-0.119224) | 1.482758 / 1.452155 (0.030604) | 1.572853 / 1.492716 (0.080137) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.288603 / 0.018006 (0.270597) | 0.632903 / 0.000490 (0.632413) | 0.013702 / 0.000200 (0.013502) | 0.000284 / 0.000054 (0.000230) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028448 / 0.037411 (-0.008964) | 0.082441 / 0.014526 (0.067916) | 0.099048 / 0.176557 (-0.077508) | 0.154370 / 0.737135 (-0.582765) | 0.146143 / 0.296338 (-0.150195) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.399250 / 0.215209 (0.184040) | 3.986683 / 2.077655 (1.909028) | 1.962606 / 1.504120 (0.458486) | 1.782653 / 1.541195 (0.241459) | 1.830251 / 1.468490 
(0.361761) | 0.492498 / 4.584777 (-4.092278) | 3.549581 / 3.745712 (-0.196131) | 3.200056 / 5.269862 (-2.069806) | 2.028109 / 4.565676 (-2.537568) | 0.058222 / 0.424275 (-0.366053) | 0.007629 / 0.007607 (0.000022) | 0.482083 / 0.226044 (0.256039) | 4.824728 / 2.268929 (2.555800) | 2.448772 / 55.444624 (-52.995852) | 2.079629 / 6.876477 (-4.796848) | 2.267739 / 2.142072 (0.125667) | 0.586712 / 4.805227 (-4.218515) | 0.134073 / 6.500664 (-6.366591) | 0.060565 / 0.075469 (-0.014904) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.263244 / 1.841788 (-0.578544) | 18.964498 / 8.074308 (10.890190) | 14.125062 / 10.191392 (3.933670) | 0.167635 / 0.680424 (-0.512789) | 0.018469 / 0.534201 (-0.515732) | 0.390395 / 0.579283 (-0.188888) | 0.406055 / 0.434364 (-0.028309) | 0.460717 / 0.540337 (-0.079620) | 0.642746 / 1.386936 (-0.744190) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006637 / 0.011353 (-0.004716) | 0.003972 / 0.011008 (-0.007036) | 0.064569 / 0.038508 (0.026061) | 0.075450 / 0.023109 (0.052341) | 0.405250 / 0.275898 (0.129352) | 0.433530 / 0.323480 (0.110050) | 0.005625 / 0.007986 (-0.002361) | 0.004118 / 0.004328 (-0.000211) | 0.065092 / 0.004250 (0.060842) | 0.057979 / 0.037052 (0.020927) | 0.413732 / 0.258489 (0.155243) | 0.451983 / 0.293841 (0.158142) | 0.032170 / 0.128546 (-0.096377) | 0.008690 / 0.075646 (-0.066957) | 0.071792 / 0.419271 (-0.347479) | 0.048560 / 0.043533 (0.005027) | 0.410312 / 0.255139 (0.155173) | 0.427294 / 0.283200 (0.144095) | 0.023006 / 0.141683 (-0.118677) | 1.496319 / 1.452155 (0.044164) | 1.566744 / 1.492716 (0.074027) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.266812 / 0.018006 (0.248805) | 0.540277 / 0.000490 (0.539788) | 0.008998 / 0.000200 (0.008799) | 0.000101 / 0.000054 (0.000047) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032496 / 0.037411 (-0.004915) | 0.091387 / 0.014526 (0.076861) | 0.107516 / 0.176557 (-0.069041) | 0.160019 / 0.737135 (-0.577116) | 0.107686 / 0.296338 (-0.188652) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.433321 / 0.215209 (0.218111) | 4.330221 / 2.077655 (2.252566) | 2.367215 / 1.504120 (0.863095) | 2.192464 / 1.541195 (0.651269) | 2.200204 / 1.468490 (0.731714) | 0.488057 / 4.584777 (-4.096720) | 3.625429 / 3.745712 (-0.120283) | 3.282859 / 5.269862 (-1.987003) | 2.038716 / 4.565676 (-2.526960) | 0.057968 / 0.424275 (-0.366307) | 0.007753 / 0.007607 (0.000146) | 0.509133 / 0.226044 (0.283089) | 5.086445 / 2.268929 (2.817516) | 2.846017 / 55.444624 (-52.598607) | 2.469546 / 6.876477 (-4.406931) | 2.673218 / 2.142072 (0.531145) | 0.591228 / 4.805227 (-4.213999) | 0.131920 / 6.500664 (-6.368744) | 0.059967 / 0.075469 (-0.015502) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.375634 / 1.841788 (-0.466153) | 19.506752 / 8.074308 (11.432444) | 14.677876 / 10.191392 (4.486484) | 0.165071 / 0.680424 (-0.515353) | 0.020614 / 0.534201 (-0.513587) | 0.395967 / 0.579283 (-0.183316) | 0.424358 / 0.434364 (-0.010006) | 0.469954 / 0.540337 (-0.070384) | 0.643169 / 1.386936 (-0.743767) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006072 / 0.011353 (-0.005281) | 0.003691 / 0.011008 (-0.007318) | 0.081683 / 0.038508 (0.043175) | 0.059114 / 0.023109 (0.036005) | 0.317053 / 0.275898 (0.041155) | 0.357672 / 0.323480 (0.034192) | 0.003577 / 0.007986 (-0.004408) | 0.003890 / 0.004328 (-0.000438) | 0.063667 / 0.004250 (0.059417) | 0.048233 / 0.037052 (0.011181) | 0.322854 / 0.258489 (0.064365) | 0.368014 / 0.293841 (0.074173) | 0.027750 / 0.128546 (-0.100796) | 0.008137 / 0.075646 (-0.067509) | 0.263906 / 0.419271 (-0.155366) | 0.045402 / 0.043533 (0.001870) | 0.315414 / 0.255139 (0.060275) | 0.340906 / 0.283200 (0.057707) | 0.023475 / 0.141683 (-0.118208) | 1.443922 / 1.452155 (-0.008233) | 1.550332 / 1.492716 (0.057616) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.211914 / 0.018006 (0.193908) | 0.423577 / 0.000490 (0.423088) | 0.003436 / 0.000200 (0.003236) | 0.000077 / 0.000054 (0.000022) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024675 / 0.037411 (-0.012737) | 0.072550 / 0.014526 (0.058024) | 0.084533 / 0.176557 (-0.092024) | 0.146106 / 0.737135 (-0.591029) | 0.085523 / 0.296338 (-0.210816) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.403498 / 0.215209 (0.188289) | 4.019000 / 2.077655 (1.941345) | 1.984821 / 1.504120 (0.480701) | 1.805071 / 1.541195 (0.263876) | 1.860906 / 1.468490 
(0.392416) | 0.499570 / 4.584777 (-4.085207) | 3.088424 / 3.745712 (-0.657288) | 2.833693 / 5.269862 (-2.436169) | 1.869731 / 4.565676 (-2.695945) | 0.057606 / 0.424275 (-0.366669) | 0.006960 / 0.007607 (-0.000647) | 0.476085 / 0.226044 (0.250040) | 4.774063 / 2.268929 (2.505134) | 2.458079 / 55.444624 (-52.986545) | 2.106075 / 6.876477 (-4.770402) | 2.248373 / 2.142072 (0.106301) | 0.589767 / 4.805227 (-4.215460) | 0.124382 / 6.500664 (-6.376282) | 0.060705 / 0.075469 (-0.014764) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.287031 / 1.841788 (-0.554756) | 17.662455 / 8.074308 (9.588147) | 14.288812 / 10.191392 (4.097420) | 0.156168 / 0.680424 (-0.524256) | 0.016795 / 0.534201 (-0.517406) | 0.333726 / 0.579283 (-0.245557) | 0.362327 / 0.434364 (-0.072037) | 0.387773 / 0.540337 (-0.152564) | 0.547232 / 1.386936 (-0.839704) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006494 / 0.011353 (-0.004859) | 0.003762 / 0.011008 (-0.007247) | 0.062373 / 0.038508 (0.023864) | 0.066357 / 0.023109 (0.043247) | 0.448687 / 0.275898 (0.172789) | 0.482445 / 0.323480 (0.158965) | 0.004990 / 0.007986 (-0.002996) | 0.002945 / 0.004328 (-0.001384) | 0.062444 / 0.004250 (0.058194) | 0.051381 / 0.037052 (0.014329) | 0.449310 / 0.258489 (0.190821) | 0.483188 / 0.293841 (0.189347) | 0.029078 / 0.128546 (-0.099468) | 0.008146 / 0.075646 (-0.067501) | 0.067369 / 0.419271 (-0.351903) | 0.041732 / 0.043533 (-0.001801) | 0.451675 / 0.255139 (0.196536) | 0.470445 / 0.283200 (0.187246) | 0.021053 / 0.141683 (-0.120630) | 1.483627 / 1.452155 (0.031472) | 1.541594 / 1.492716 (0.048878) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.210247 / 0.018006 (0.192240) | 0.424663 / 0.000490 (0.424173) | 0.005394 / 0.000200 (0.005194) | 0.000076 / 0.000054 (0.000021) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026894 / 0.037411 (-0.010517) | 0.081324 / 0.014526 (0.066798) | 0.091362 / 0.176557 (-0.085195) | 0.145602 / 0.737135 (-0.591533) | 0.091896 / 0.296338 (-0.204443) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.469662 / 0.215209 (0.254453) | 4.689495 / 2.077655 (2.611840) | 2.596462 / 1.504120 (1.092342) | 2.422584 / 1.541195 (0.881389) | 2.476710 / 1.468490 (1.008220) | 0.507049 / 4.584777 (-4.077728) | 3.185519 / 3.745712 (-0.560193) | 2.879842 / 5.269862 (-2.390019) | 1.882643 / 4.565676 (-2.683034) | 0.058046 / 0.424275 (-0.366229) | 0.006797 / 0.007607 (-0.000811) | 0.545245 / 0.226044 (0.319201) | 5.449248 / 2.268929 (3.180319) | 3.057341 / 55.444624 (-52.387283) | 2.728385 / 6.876477 (-4.148092) | 2.898945 / 2.142072 (0.756873) | 0.600035 / 4.805227 (-4.205192) | 0.126337 / 6.500664 (-6.374327) | 0.061333 / 0.075469 (-0.014136) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.332966 / 1.841788 (-0.508821) | 17.960805 / 8.074308 (9.886497) | 14.978838 / 10.191392 (4.787446) | 0.148852 / 0.680424 (-0.531572) | 0.018307 / 0.534201 (-0.515894) | 0.335234 / 0.579283 (-0.244050) | 0.389659 / 0.434364 (-0.044704) | 0.393259 / 0.540337 (-0.147078) | 0.549237 / 1.386936 (-0.837699) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008808 / 0.011353 (-0.002545) | 0.005001 / 0.011008 (-0.006008) | 0.110022 / 0.038508 (0.071514) | 0.078015 / 0.023109 (0.054906) | 0.384724 / 0.275898 (0.108826) | 0.441354 / 0.323480 (0.117874) | 0.005116 / 0.007986 (-0.002870) | 0.004308 / 0.004328 (-0.000020) | 0.081679 / 0.004250 (0.077429) | 0.061386 / 0.037052 (0.024333) | 0.398149 / 0.258489 (0.139660) | 0.464859 / 0.293841 (0.171018) | 0.047443 / 0.128546 (-0.081104) | 0.014693 / 0.075646 (-0.060954) | 0.365438 / 0.419271 (-0.053833) | 0.081689 / 0.043533 (0.038156) | 0.400458 / 0.255139 (0.145319) | 0.449958 / 0.283200 (0.166758) | 0.038266 / 0.141683 (-0.103417) | 1.795043 / 1.452155 (0.342888) | 1.908819 / 1.492716 (0.416102) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.297911 / 0.018006 (0.279905) | 0.601640 / 0.000490 (0.601150) | 0.015406 / 0.000200 (0.015206) | 0.000163 / 0.000054 (0.000108) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034520 / 0.037411 (-0.002891) | 0.092657 / 0.014526 (0.078131) | 0.113992 / 0.176557 (-0.062564) | 0.189075 / 0.737135 (-0.548061) | 0.106602 / 0.296338 (-0.189736) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.585838 / 0.215209 (0.370629) | 5.719281 / 2.077655 (3.641627) | 2.525914 / 1.504120 (1.021794) | 2.231908 / 1.541195 (0.690713) | 2.215272 / 1.468490 
(0.746782) | 0.814425 / 4.584777 (-3.770352) | 5.243406 / 3.745712 (1.497694) | 4.476642 / 5.269862 (-0.793220) | 2.929438 / 4.565676 (-1.636239) | 0.092070 / 0.424275 (-0.332205) | 0.009358 / 0.007607 (0.001751) | 0.713975 / 0.226044 (0.487931) | 6.948846 / 2.268929 (4.679918) | 3.361125 / 55.444624 (-52.083500) | 2.575224 / 6.876477 (-4.301253) | 2.783082 / 2.142072 (0.641009) | 1.016205 / 4.805227 (-3.789022) | 0.202578 / 6.500664 (-6.298086) | 0.076696 / 0.075469 (0.001227) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.650889 / 1.841788 (-0.190898) | 23.358273 / 8.074308 (15.283965) | 19.882450 / 10.191392 (9.691058) | 0.228971 / 0.680424 (-0.451453) | 0.027736 / 0.534201 (-0.506465) | 0.472405 / 0.579283 (-0.106878) | 0.581799 / 0.434364 (0.147435) | 0.533000 / 0.540337 (-0.007338) | 0.815588 / 1.386936 (-0.571348) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009151 / 0.011353 (-0.002202) | 0.005074 / 0.011008 (-0.005934) | 0.078709 / 0.038508 (0.040201) | 0.077696 / 0.023109 (0.054586) | 0.522356 / 0.275898 (0.246458) | 0.562345 / 0.323480 (0.238865) | 0.006411 / 0.007986 (-0.001575) | 0.004379 / 0.004328 (0.000051) | 0.082402 / 0.004250 (0.078151) | 0.064223 / 0.037052 (0.027170) | 0.518184 / 0.258489 (0.259695) | 0.566221 / 0.293841 (0.272380) | 0.046796 / 0.128546 (-0.081750) | 0.013987 / 0.075646 (-0.061659) | 0.094925 / 0.419271 (-0.324346) | 0.058810 / 0.043533 (0.015277) | 0.520252 / 0.255139 (0.265113) | 0.566403 / 0.283200 (0.283203) | 0.034720 / 0.141683 (-0.106963) | 1.796809 / 1.452155 (0.344654) | 1.913787 / 1.492716 (0.421070) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.317449 / 0.018006 (0.299443) | 0.620154 / 0.000490 (0.619664) | 0.007066 / 0.000200 (0.006866) | 0.000126 / 0.000054 (0.000072) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.035252 / 0.037411 (-0.002160) | 0.111648 / 0.014526 (0.097122) | 0.120692 / 0.176557 (-0.055864) | 0.193202 / 0.737135 (-0.543933) | 0.127905 / 0.296338 (-0.168434) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.661012 / 0.215209 (0.445803) | 6.626680 / 2.077655 (4.549026) | 3.243065 / 1.504120 (1.738945) | 2.904053 / 1.541195 (1.362858) | 2.880516 / 1.468490 (1.412026) | 0.875650 / 4.584777 (-3.709127) | 5.381993 / 3.745712 (1.636281) | 4.743997 / 5.269862 (-0.525864) | 3.020736 / 4.565676 (-1.544940) | 0.106573 / 0.424275 (-0.317702) | 0.011151 / 0.007607 (0.003544) | 0.821990 / 0.226044 (0.595946) | 8.225383 / 2.268929 (5.956454) | 3.963232 / 55.444624 (-51.481392) | 3.288916 / 6.876477 (-3.587561) | 3.579435 / 2.142072 (1.437363) | 1.043379 / 4.805227 (-3.761848) | 0.207508 / 6.500664 (-6.293156) | 0.085109 / 0.075469 (0.009640) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.723798 / 1.841788 (-0.117990) | 24.709848 / 8.074308 (16.635540) | 22.484674 / 10.191392 (12.293282) | 0.260357 / 0.680424 (-0.420067) | 0.033539 / 0.534201 (-0.500662) | 0.487814 / 0.579283 (-0.091469) | 0.610171 / 0.434364 (0.175807) | 0.585012 / 0.540337 (0.044674) | 0.803764 / 1.386936 (-0.583172) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006661 / 0.011353 (-0.004692) | 0.004022 / 0.011008 (-0.006987) | 0.084269 / 0.038508 (0.045760) | 0.070707 / 0.023109 (0.047598) | 0.315035 / 0.275898 (0.039137) | 0.339830 / 0.323480 (0.016350) | 0.003994 / 0.007986 (-0.003991) | 0.004129 / 0.004328 (-0.000199) | 0.065383 / 0.004250 (0.061133) | 0.055493 / 0.037052 (0.018441) | 0.320521 / 0.258489 (0.062032) | 0.354301 / 0.293841 (0.060460) | 0.031177 / 0.128546 (-0.097370) | 0.008724 / 0.075646 (-0.066922) | 0.288298 / 0.419271 (-0.130974) | 0.052418 / 0.043533 (0.008885) | 0.319122 / 0.255139 (0.063983) | 0.335859 / 0.283200 (0.052659) | 0.026260 / 0.141683 (-0.115423) | 1.450039 / 1.452155 (-0.002115) | 1.545172 / 1.492716 (0.052455) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.234232 / 0.018006 (0.216226) | 0.454983 / 0.000490 (0.454493) | 0.007590 / 0.000200 (0.007390) | 0.000550 / 0.000054 (0.000495) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028714 / 0.037411 (-0.008698) | 0.083686 / 0.014526 (0.069160) | 0.162986 / 0.176557 (-0.013570) | 0.167574 / 0.737135 (-0.569561) | 0.273158 / 0.296338 (-0.023180) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.388275 / 0.215209 (0.173066) | 3.862034 / 2.077655 (1.784379) | 1.843561 / 1.504120 (0.339441) | 1.675224 / 1.541195 (0.134029) | 1.730394 / 1.468490 
(0.261904) | 0.495259 / 4.584777 (-4.089518) | 3.627155 / 3.745712 (-0.118557) | 3.290590 / 5.269862 (-1.979272) | 2.032432 / 4.565676 (-2.533245) | 0.058212 / 0.424275 (-0.366063) | 0.007815 / 0.007607 (0.000208) | 0.460625 / 0.226044 (0.234580) | 4.616845 / 2.268929 (2.347916) | 2.339280 / 55.444624 (-53.105344) | 1.957216 / 6.876477 (-4.919261) | 2.129511 / 2.142072 (-0.012562) | 0.591782 / 4.805227 (-4.213446) | 0.136391 / 6.500664 (-6.364273) | 0.059627 / 0.075469 (-0.015842) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.278998 / 1.841788 (-0.562789) | 18.485496 / 8.074308 (10.411188) | 14.161273 / 10.191392 (3.969881) | 0.164346 / 0.680424 (-0.516078) | 0.018144 / 0.534201 (-0.516057) | 0.391601 / 0.579283 (-0.187682) | 0.424391 / 0.434364 (-0.009973) | 0.458209 / 0.540337 (-0.082129) | 0.645124 / 1.386936 (-0.741812) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006799 / 0.011353 (-0.004554) | 0.004023 / 0.011008 (-0.006985) | 0.065206 / 0.038508 (0.026698) | 0.074386 / 0.023109 (0.051277) | 0.437399 / 0.275898 (0.161501) | 0.467382 / 0.323480 (0.143903) | 0.005467 / 0.007986 (-0.002519) | 0.003324 / 0.004328 (-0.001005) | 0.064289 / 0.004250 (0.060039) | 0.057257 / 0.037052 (0.020205) | 0.440035 / 0.258489 (0.181546) | 0.477138 / 0.293841 (0.183298) | 0.032171 / 0.128546 (-0.096375) | 0.008400 / 0.075646 (-0.067247) | 0.070877 / 0.419271 (-0.348395) | 0.048180 / 0.043533 (0.004648) | 0.441274 / 0.255139 (0.186135) | 0.461386 / 0.283200 (0.178187) | 0.022576 / 0.141683 (-0.119106) | 1.520914 / 1.452155 (0.068759) | 1.575593 / 1.492716 (0.082877) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.221551 / 0.018006 (0.203545) | 0.447213 / 0.000490 (0.446723) | 0.004435 / 0.000200 (0.004235) | 0.000099 / 0.000054 (0.000044) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032123 / 0.037411 (-0.005288) | 0.091809 / 0.014526 (0.077283) | 0.103938 / 0.176557 (-0.072618) | 0.156878 / 0.737135 (-0.580258) | 0.105071 / 0.296338 (-0.191268) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.430389 / 0.215209 (0.215180) | 4.293496 / 2.077655 (2.215841) | 2.292801 / 1.504120 (0.788681) | 2.135320 / 1.541195 (0.594126) | 2.195720 / 1.468490 (0.727229) | 0.493277 / 4.584777 (-4.091500) | 3.685617 / 3.745712 (-0.060096) | 3.278897 / 5.269862 (-1.990965) | 2.036939 / 4.565676 (-2.528737) | 0.058766 / 0.424275 (-0.365509) | 0.007783 / 0.007607 (0.000176) | 0.511165 / 0.226044 (0.285120) | 5.126757 / 2.268929 (2.857829) | 2.756690 / 55.444624 (-52.687935) | 2.421745 / 6.876477 (-4.454732) | 2.597249 / 2.142072 (0.455177) | 0.647206 / 4.805227 (-4.158021) | 0.143392 / 6.500664 (-6.357273) | 0.060110 / 0.075469 (-0.015359) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.340289 / 1.841788 (-0.501499) | 19.057620 / 8.074308 (10.983312) | 14.832892 / 10.191392 (4.641500) | 0.167730 / 0.680424 (-0.512694) | 0.020178 / 0.534201 (-0.514023) | 0.394060 / 0.579283 (-0.185223) | 0.433976 / 0.434364 (-0.000388) | 0.474417 / 0.540337 (-0.065921) | 0.640653 / 1.386936 (-0.746283) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007661 / 0.011353 (-0.003692) | 0.004541 / 0.011008 (-0.006467) | 0.100547 / 0.038508 (0.062039) | 0.084257 / 0.023109 (0.061148) | 0.377627 / 0.275898 (0.101729) | 0.433764 / 0.323480 (0.110284) | 0.005995 / 0.007986 (-0.001990) | 0.003810 / 0.004328 (-0.000518) | 0.076409 / 0.004250 (0.072158) | 0.063411 / 0.037052 (0.026359) | 0.382504 / 0.258489 (0.124015) | 0.449721 / 0.293841 (0.155880) | 0.036499 / 0.128546 (-0.092047) | 0.009942 / 0.075646 (-0.065705) | 0.343839 / 0.419271 (-0.075433) | 0.062147 / 0.043533 (0.018614) | 0.383244 / 0.255139 (0.128105) | 0.415606 / 0.283200 (0.132406) | 0.027475 / 0.141683 (-0.114207) | 1.740413 / 1.452155 (0.288258) | 1.862210 / 1.492716 (0.369493) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.260064 / 0.018006 (0.242058) | 0.499001 / 0.000490 (0.498511) | 0.015811 / 0.000200 (0.015611) | 0.000119 / 0.000054 (0.000065) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033599 / 0.037411 (-0.003812) | 0.099354 / 0.014526 (0.084828) | 0.114693 / 0.176557 (-0.061864) | 0.180231 / 0.737135 (-0.556904) | 0.114715 / 0.296338 (-0.181623) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.459884 / 0.215209 (0.244675) | 4.580806 / 2.077655 (2.503151) | 2.270770 / 1.504120 (0.766650) | 2.077127 / 1.541195 (0.535932) | 2.167175 / 1.468490 
(0.698685) | 0.570593 / 4.584777 (-4.014184) | 4.120926 / 3.745712 (0.375214) | 3.817595 / 5.269862 (-1.452267) | 2.404782 / 4.565676 (-2.160894) | 0.067972 / 0.424275 (-0.356304) | 0.009378 / 0.007607 (0.001771) | 0.549642 / 0.226044 (0.323597) | 5.490369 / 2.268929 (3.221440) | 2.905264 / 55.444624 (-52.539361) | 2.452935 / 6.876477 (-4.423542) | 2.700760 / 2.142072 (0.558688) | 0.700407 / 4.805227 (-4.104820) | 0.159349 / 6.500664 (-6.341315) | 0.074605 / 0.075469 (-0.000864) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.517803 / 1.841788 (-0.323985) | 22.343700 / 8.074308 (14.269392) | 16.411639 / 10.191392 (6.220247) | 0.169816 / 0.680424 (-0.510608) | 0.021532 / 0.534201 (-0.512668) | 0.470161 / 0.579283 (-0.109122) | 0.473412 / 0.434364 (0.039048) | 0.539690 / 0.540337 (-0.000647) | 0.774011 / 1.386936 (-0.612925) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007629 / 0.011353 (-0.003724) | 0.004651 / 0.011008 (-0.006357) | 0.075162 / 0.038508 (0.036654) | 0.085365 / 0.023109 (0.062256) | 0.493272 / 0.275898 (0.217374) | 0.535776 / 0.323480 (0.212296) | 0.006323 / 0.007986 (-0.001663) | 0.003785 / 0.004328 (-0.000544) | 0.076161 / 0.004250 (0.071911) | 0.065982 / 0.037052 (0.028929) | 0.513355 / 0.258489 (0.254866) | 0.549219 / 0.293841 (0.255378) | 0.038052 / 0.128546 (-0.090494) | 0.010055 / 0.075646 (-0.065592) | 0.083744 / 0.419271 (-0.335527) | 0.056708 / 0.043533 (0.013175) | 0.496273 / 0.255139 (0.241135) | 0.523709 / 0.283200 (0.240509) | 0.026502 / 0.141683 (-0.115181) | 1.793032 / 1.452155 (0.340877) | 1.870534 / 1.492716 (0.377817) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.252288 / 0.018006 (0.234281) | 0.490380 / 0.000490 (0.489890) | 0.005884 / 0.000200 (0.005684) | 0.000109 / 0.000054 (0.000054) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.038238 / 0.037411 (0.000827) | 0.110010 / 0.014526 (0.095485) | 0.125497 / 0.176557 (-0.051059) | 0.188154 / 0.737135 (-0.548981) | 0.126112 / 0.296338 (-0.170227) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.515837 / 0.215209 (0.300628) | 5.135153 / 2.077655 (3.057498) | 2.761740 / 1.504120 (1.257620) | 2.552718 / 1.541195 (1.011523) | 2.636425 / 1.468490 (1.167935) | 0.588442 / 4.584777 (-3.996335) | 4.220833 / 3.745712 (0.475120) | 3.874637 / 5.269862 (-1.395225) | 2.424668 / 4.565676 (-2.141009) | 0.069979 / 0.424275 (-0.354296) | 0.009349 / 0.007607 (0.001742) | 0.608936 / 0.226044 (0.382891) | 6.081209 / 2.268929 (3.812280) | 3.348067 / 55.444624 (-52.096557) | 2.919130 / 6.876477 (-3.957347) | 3.159093 / 2.142072 (1.017020) | 0.704059 / 4.805227 (-4.101169) | 0.158417 / 6.500664 (-6.342247) | 0.071321 / 0.075469 (-0.004148) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.595287 / 1.841788 (-0.246501) | 23.096619 / 8.074308 (15.022311) | 17.258041 / 10.191392 (7.066649) | 0.186197 / 0.680424 (-0.494227) | 0.023633 / 0.534201 (-0.510567) | 0.472181 / 0.579283 (-0.107102) | 0.493817 / 0.434364 (0.059453) | 0.567657 / 0.540337 (0.027320) | 0.793789 / 1.386936 (-0.593147) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007084 / 0.011353 (-0.004268) | 0.004093 / 0.011008 (-0.006915) | 0.086395 / 0.038508 (0.047887) | 0.087734 / 0.023109 (0.064625) | 0.356936 / 0.275898 (0.081038) | 0.386413 / 0.323480 (0.062933) | 0.005531 / 0.007986 (-0.002454) | 0.003462 / 0.004328 (-0.000866) | 0.065503 / 0.004250 (0.061252) | 0.058973 / 0.037052 (0.021920) | 0.354151 / 0.258489 (0.095662) | 0.398812 / 0.293841 (0.104971) | 0.031508 / 0.128546 (-0.097038) | 0.008537 / 0.075646 (-0.067109) | 0.290942 / 0.419271 (-0.128329) | 0.053537 / 0.043533 (0.010004) | 0.352067 / 0.255139 (0.096928) | 0.375142 / 0.283200 (0.091943) | 0.025658 / 0.141683 (-0.116025) | 1.468496 / 1.452155 (0.016341) | 1.556926 / 1.492716 (0.064210) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.238858 / 0.018006 (0.220852) | 0.460018 / 0.000490 (0.459528) | 0.009613 / 0.000200 (0.009414) | 0.000326 / 0.000054 (0.000272) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030333 / 0.037411 (-0.007078) | 0.088431 / 0.014526 (0.073905) | 0.098130 / 0.176557 (-0.078427) | 0.155160 / 0.737135 (-0.581975) | 0.099963 / 0.296338 (-0.196375) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.385769 / 0.215209 (0.170560) | 3.836723 / 2.077655 (1.759069) | 1.861065 / 1.504120 (0.356945) | 1.685159 / 1.541195 (0.143965) | 1.780679 / 1.468490 
(0.312189) | 0.491865 / 4.584777 (-4.092912) | 3.581139 / 3.745712 (-0.164573) | 3.366278 / 5.269862 (-1.903584) | 2.093094 / 4.565676 (-2.472583) | 0.058063 / 0.424275 (-0.366212) | 0.007903 / 0.007607 (0.000296) | 0.464866 / 0.226044 (0.238821) | 4.647754 / 2.268929 (2.378825) | 2.316466 / 55.444624 (-53.128158) | 1.984079 / 6.876477 (-4.892398) | 2.235020 / 2.142072 (0.092948) | 0.592591 / 4.805227 (-4.212636) | 0.135586 / 6.500664 (-6.365078) | 0.061434 / 0.075469 (-0.014035) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.282940 / 1.841788 (-0.558848) | 19.635975 / 8.074308 (11.561667) | 14.426135 / 10.191392 (4.234743) | 0.166732 / 0.680424 (-0.513692) | 0.018438 / 0.534201 (-0.515763) | 0.393173 / 0.579283 (-0.186110) | 0.417291 / 0.434364 (-0.017073) | 0.459188 / 0.540337 (-0.081149) | 0.632568 / 1.386936 (-0.754368) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007166 / 0.011353 (-0.004187) | 0.004254 / 0.011008 (-0.006754) | 0.064667 / 0.038508 (0.026159) | 0.085142 / 0.023109 (0.062033) | 0.410081 / 0.275898 (0.134183) | 0.445803 / 0.323480 (0.122323) | 0.005600 / 0.007986 (-0.002385) | 0.003520 / 0.004328 (-0.000809) | 0.064148 / 0.004250 (0.059897) | 0.059869 / 0.037052 (0.022817) | 0.407439 / 0.258489 (0.148950) | 0.451169 / 0.293841 (0.157329) | 0.032619 / 0.128546 (-0.095927) | 0.008706 / 0.075646 (-0.066940) | 0.071230 / 0.419271 (-0.348041) | 0.048499 / 0.043533 (0.004966) | 0.416401 / 0.255139 (0.161262) | 0.430737 / 0.283200 (0.147537) | 0.022511 / 0.141683 (-0.119172) | 1.517296 / 1.452155 (0.065141) | 1.581704 / 1.492716 (0.088988) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.220738 / 0.018006 (0.202732) | 0.454026 / 0.000490 (0.453536) | 0.004695 / 0.000200 (0.004495) | 0.000087 / 0.000054 (0.000033) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033202 / 0.037411 (-0.004209) | 0.097506 / 0.014526 (0.082980) | 0.106661 / 0.176557 (-0.069896) | 0.160554 / 0.737135 (-0.576581) | 0.109203 / 0.296338 (-0.187135) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.432013 / 0.215209 (0.216804) | 4.310399 / 2.077655 (2.232744) | 2.296529 / 1.504120 (0.792409) | 2.139929 / 1.541195 (0.598734) | 2.227432 / 1.468490 (0.758942) | 0.493697 / 4.584777 (-4.091080) | 3.639877 / 3.745712 (-0.105835) | 3.323165 / 5.269862 (-1.946697) | 2.084527 / 4.565676 (-2.481150) | 0.058812 / 0.424275 (-0.365463) | 0.007813 / 0.007607 (0.000206) | 0.512366 / 0.226044 (0.286321) | 5.119660 / 2.268929 (2.850732) | 2.783819 / 55.444624 (-52.660806) | 2.490669 / 6.876477 (-4.385808) | 2.696653 / 2.142072 (0.554581) | 0.627161 / 4.805227 (-4.178066) | 0.137032 / 6.500664 (-6.363632) | 0.064040 / 0.075469 (-0.011429) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.369578 / 1.841788 (-0.472210) | 20.421182 / 8.074308 (12.346873) | 15.719347 / 10.191392 (5.527955) | 0.166150 / 0.680424 (-0.514274) | 0.020262 / 0.534201 (-0.513939) | 0.395645 / 0.579283 (-0.183638) | 0.430363 / 0.434364 (-0.004001) | 0.477843 / 0.540337 (-0.062494) | 0.638501 / 1.386936 (-0.748435) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006141 / 0.011353 (-0.005211) | 0.003683 / 0.011008 (-0.007325) | 0.081127 / 0.038508 (0.042618) | 0.064285 / 0.023109 (0.041176) | 0.323038 / 0.275898 (0.047140) | 0.347280 / 0.323480 (0.023800) | 0.003518 / 0.007986 (-0.004467) | 0.002958 / 0.004328 (-0.001370) | 0.063093 / 0.004250 (0.058843) | 0.050682 / 0.037052 (0.013629) | 0.321222 / 0.258489 (0.062733) | 0.359266 / 0.293841 (0.065425) | 0.027515 / 0.128546 (-0.101032) | 0.007964 / 0.075646 (-0.067682) | 0.261305 / 0.419271 (-0.157966) | 0.044897 / 0.043533 (0.001365) | 0.320684 / 0.255139 (0.065545) | 0.335722 / 0.283200 (0.052522) | 0.023378 / 0.141683 (-0.118305) | 1.418211 / 1.452155 (-0.033943) | 1.523728 / 1.492716 (0.031011) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.222316 / 0.018006 (0.204310) | 0.426943 / 0.000490 (0.426454) | 0.008785 / 0.000200 (0.008585) | 0.000081 / 0.000054 (0.000026) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024716 / 0.037411 (-0.012695) | 0.075341 / 0.014526 (0.060816) | 0.089532 / 0.176557 (-0.087024) | 0.147638 / 0.737135 (-0.589498) | 0.085697 / 0.296338 (-0.210641) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.396395 / 0.215209 (0.181186) | 3.947280 / 2.077655 (1.869625) | 1.894762 / 1.504120 (0.390642) | 1.712094 / 1.541195 (0.170899) | 1.779049 / 1.468490 
(0.310559) | 0.509206 / 4.584777 (-4.075571) | 3.073951 / 3.745712 (-0.671761) | 2.886826 / 5.269862 (-2.383035) | 1.894444 / 4.565676 (-2.671232) | 0.059519 / 0.424275 (-0.364756) | 0.006951 / 0.007607 (-0.000656) | 0.468213 / 0.226044 (0.242169) | 4.667134 / 2.268929 (2.398206) | 2.342516 / 55.444624 (-53.102108) | 1.992047 / 6.876477 (-4.884430) | 2.142059 / 2.142072 (-0.000014) | 0.600507 / 4.805227 (-4.204720) | 0.128982 / 6.500664 (-6.371682) | 0.062100 / 0.075469 (-0.013369) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.234500 / 1.841788 (-0.607288) | 17.951646 / 8.074308 (9.877338) | 13.862763 / 10.191392 (3.671371) | 0.143133 / 0.680424 (-0.537291) | 0.016643 / 0.534201 (-0.517558) | 0.333174 / 0.579283 (-0.246109) | 0.366956 / 0.434364 (-0.067408) | 0.384569 / 0.540337 (-0.155769) | 0.546830 / 1.386936 (-0.840106) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006146 / 0.011353 (-0.005207) | 0.003725 / 0.011008 (-0.007283) | 0.062099 / 0.038508 (0.023591) | 0.064117 / 0.023109 (0.041008) | 0.456100 / 0.275898 (0.180202) | 0.490794 / 0.323480 (0.167314) | 0.005652 / 0.007986 (-0.002334) | 0.002897 / 0.004328 (-0.001432) | 0.061909 / 0.004250 (0.057659) | 0.050634 / 0.037052 (0.013582) | 0.454422 / 0.258489 (0.195933) | 0.493208 / 0.293841 (0.199367) | 0.028822 / 0.128546 (-0.099724) | 0.008115 / 0.075646 (-0.067531) | 0.067214 / 0.419271 (-0.352058) | 0.041529 / 0.043533 (-0.002004) | 0.458016 / 0.255139 (0.202877) | 0.476059 / 0.283200 (0.192859) | 0.019926 / 0.141683 (-0.121757) | 1.465345 / 1.452155 (0.013190) | 1.533518 / 1.492716 (0.040802) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.218830 / 0.018006 (0.200823) | 0.418869 / 0.000490 (0.418380) | 0.005154 / 0.000200 (0.004954) | 0.000080 / 0.000054 (0.000026) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027648 / 0.037411 (-0.009763) | 0.083842 / 0.014526 (0.069316) | 0.092300 / 0.176557 (-0.084257) | 0.146098 / 0.737135 (-0.591037) | 0.093441 / 0.296338 (-0.202898) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.464426 / 0.215209 (0.249217) | 4.632705 / 2.077655 (2.555051) | 2.642091 / 1.504120 (1.137971) | 2.461768 / 1.541195 (0.920573) | 2.535554 / 1.468490 (1.067064) | 0.507506 / 4.584777 (-4.077271) | 3.095485 / 3.745712 (-0.650227) | 2.884261 / 5.269862 (-2.385601) | 1.908943 / 4.565676 (-2.656734) | 0.058622 / 0.424275 (-0.365653) | 0.006892 / 0.007607 (-0.000715) | 0.536045 / 0.226044 (0.310001) | 5.377448 / 2.268929 (3.108519) | 3.076023 / 55.444624 (-52.368602) | 2.745586 / 6.876477 (-4.130890) | 2.939582 / 2.142072 (0.797510) | 0.595639 / 4.805227 (-4.209589) | 0.125086 / 6.500664 (-6.375578) | 0.061075 / 0.075469 (-0.014394) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.342820 / 1.841788 (-0.498968) | 18.326240 / 8.074308 (10.251932) | 15.007094 / 10.191392 (4.815702) | 0.133037 / 0.680424 (-0.547387) | 0.018702 / 0.534201 (-0.515499) | 0.330245 / 0.579283 (-0.249038) | 0.381494 / 0.434364 (-0.052870) | 0.393705 / 0.540337 (-0.146633) | 0.533676 / 1.386936 (-0.853260) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007644 / 0.011353 (-0.003709) | 0.004759 / 0.011008 (-0.006249) | 0.100569 / 0.038508 (0.062061) | 0.089645 / 0.023109 (0.066536) | 0.376679 / 0.275898 (0.100781) | 0.413214 / 0.323480 (0.089735) | 0.006087 / 0.007986 (-0.001899) | 0.003832 / 0.004328 (-0.000496) | 0.075892 / 0.004250 (0.071641) | 0.064635 / 0.037052 (0.027582) | 0.376874 / 0.258489 (0.118385) | 0.436756 / 0.293841 (0.142915) | 0.036372 / 0.128546 (-0.092174) | 0.010047 / 0.075646 (-0.065599) | 0.345073 / 0.419271 (-0.074198) | 0.062092 / 0.043533 (0.018559) | 0.380503 / 0.255139 (0.125364) | 0.414800 / 0.283200 (0.131600) | 0.028274 / 0.141683 (-0.113409) | 1.732463 / 1.452155 (0.280308) | 1.859049 / 1.492716 (0.366333) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.267129 / 0.018006 (0.249123) | 0.509109 / 0.000490 (0.508619) | 0.012329 / 0.000200 (0.012130) | 0.000432 / 0.000054 (0.000377) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033773 / 0.037411 (-0.003638) | 0.102800 / 0.014526 (0.088274) | 0.114256 / 0.176557 (-0.062300) | 0.182048 / 0.737135 (-0.555087) | 0.118225 / 0.296338 (-0.178113) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.457553 / 0.215209 (0.242344) | 4.588212 / 2.077655 (2.510557) | 2.184138 / 1.504120 (0.680018) | 2.003570 / 1.541195 (0.462375) | 2.093217 / 1.468490 
(0.624727) | 0.585679 / 4.584777 (-3.999098) | 4.175319 / 3.745712 (0.429607) | 3.914168 / 5.269862 (-1.355693) | 2.452992 / 4.565676 (-2.112684) | 0.068363 / 0.424275 (-0.355912) | 0.009314 / 0.007607 (0.001707) | 0.543640 / 0.226044 (0.317595) | 5.440853 / 2.268929 (3.171925) | 2.782415 / 55.444624 (-52.662210) | 2.332359 / 6.876477 (-4.544118) | 2.628520 / 2.142072 (0.486448) | 0.696838 / 4.805227 (-4.108389) | 0.160653 / 6.500664 (-6.340012) | 0.075599 / 0.075469 (0.000130) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.545305 / 1.841788 (-0.296483) | 23.073174 / 8.074308 (14.998866) | 16.974977 / 10.191392 (6.783585) | 0.183719 / 0.680424 (-0.496705) | 0.021633 / 0.534201 (-0.512568) | 0.471202 / 0.579283 (-0.108081) | 0.479385 / 0.434364 (0.045021) | 0.550872 / 0.540337 (0.010535) | 0.766825 / 1.386936 (-0.620111) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007918 / 0.011353 (-0.003435) | 0.004793 / 0.011008 (-0.006215) | 0.077273 / 0.038508 (0.038765) | 0.092079 / 0.023109 (0.068969) | 0.483269 / 0.275898 (0.207371) | 0.524919 / 0.323480 (0.201439) | 0.006273 / 0.007986 (-0.001713) | 0.004018 / 0.004328 (-0.000310) | 0.077188 / 0.004250 (0.072937) | 0.067891 / 0.037052 (0.030839) | 0.478531 / 0.258489 (0.220042) | 0.526956 / 0.293841 (0.233115) | 0.038309 / 0.128546 (-0.090237) | 0.010133 / 0.075646 (-0.065513) | 0.083892 / 0.419271 (-0.335379) | 0.057369 / 0.043533 (0.013836) | 0.509427 / 0.255139 (0.254288) | 0.506574 / 0.283200 (0.223374) | 0.027987 / 0.141683 (-0.113696) | 1.897469 / 1.452155 (0.445314) | 1.893102 / 1.492716 (0.400385) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.243003 / 0.018006 (0.224997) | 0.500267 / 0.000490 (0.499777) | 0.007442 / 0.000200 (0.007242) | 0.000110 / 0.000054 (0.000055) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.039266 / 0.037411 (0.001855) | 0.114438 / 0.014526 (0.099912) | 0.124528 / 0.176557 (-0.052029) | 0.189399 / 0.737135 (-0.547736) | 0.126703 / 0.296338 (-0.169635) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.518139 / 0.215209 (0.302930) | 5.162058 / 2.077655 (3.084403) | 2.835111 / 1.504120 (1.330991) | 2.640919 / 1.541195 (1.099724) | 2.736800 / 1.468490 (1.268310) | 0.582813 / 4.584777 (-4.001964) | 4.246269 / 3.745712 (0.500557) | 3.891161 / 5.269862 (-1.378701) | 2.445392 / 4.565676 (-2.120285) | 0.068943 / 0.424275 (-0.355332) | 0.009248 / 0.007607 (0.001641) | 0.604859 / 0.226044 (0.378815) | 6.030660 / 2.268929 (3.761731) | 3.409778 / 55.444624 (-52.034846) | 2.990488 / 6.876477 (-3.885988) | 3.281317 / 2.142072 (1.139245) | 0.697705 / 4.805227 (-4.107523) | 0.159502 / 6.500664 (-6.341162) | 0.072471 / 0.075469 (-0.002999) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.625428 / 1.841788 (-0.216360) | 23.602509 / 8.074308 (15.528201) | 18.091474 / 10.191392 (7.900082) | 0.172816 / 0.680424 (-0.507608) | 0.023708 / 0.534201 (-0.510493) | 0.473768 / 0.579283 (-0.105515) | 0.493713 / 0.434364 (0.059349) | 0.566326 / 0.540337 (0.025989) | 0.788670 / 1.386936 (-0.598266) |\n\n</details>\n</details>\n\n\n",
"> Thanks. Any comment on my comment below?\r\n> \r\n> >Maybe we should update the docstring of get_data_patterns accordingly? Currently it only gives examples of outputs with ** not in a single path segment (i.e. not with a / as prefix or suffix).\r\n\r\nYea right we need to update it indeed, the outputs are the ones from older versions of fsspec, and from older patterns that we don't use anymore.\r\n\r\nIn general in docstrings I also think we should encourage users to use `**/*` instead of `**` (which has a behavior that is unique to fsspec)",
"Also just noticed that `KEYWORDS_IN_DIR_NAME_BASE_PATTERNS` seems to include `KEYWORDS_IN_FILENAME_BASE_PATTERNS`. I guess we can try to remove the filename one in another PR to remove this redundancy \r\n\r\n(noticed this by checking that the data pattern is the same for both the dir name and filename examples in the get_data_patterns docstring)",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006922 / 0.011353 (-0.004431) | 0.004459 / 0.011008 (-0.006549) | 0.084742 / 0.038508 (0.046234) | 0.089002 / 0.023109 (0.065893) | 0.310886 / 0.275898 (0.034988) | 0.340518 / 0.323480 (0.017038) | 0.007011 / 0.007986 (-0.000975) | 0.004566 / 0.004328 (0.000237) | 0.067260 / 0.004250 (0.063009) | 0.066349 / 0.037052 (0.029297) | 0.324029 / 0.258489 (0.065540) | 0.373785 / 0.293841 (0.079944) | 0.031780 / 0.128546 (-0.096766) | 0.009208 / 0.075646 (-0.066438) | 0.288871 / 0.419271 (-0.130401) | 0.054548 / 0.043533 (0.011015) | 0.313344 / 0.255139 (0.058205) | 0.336430 / 0.283200 (0.053231) | 0.029037 / 0.141683 (-0.112646) | 1.483797 / 1.452155 (0.031642) | 1.581884 / 1.492716 (0.089167) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.370520 / 0.018006 (0.352514) | 0.796720 / 0.000490 (0.796230) | 0.009329 / 0.000200 (0.009129) | 0.000109 / 0.000054 (0.000055) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033002 / 0.037411 (-0.004410) | 0.083442 / 0.014526 (0.068916) | 0.106468 / 0.176557 (-0.070088) | 0.165315 / 0.737135 (-0.571820) | 0.103048 / 0.296338 (-0.193291) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.386800 / 0.215209 (0.171591) | 3.843312 / 2.077655 (1.765658) | 1.848953 / 1.504120 (0.344834) | 1.679508 / 1.541195 (0.138313) | 1.733578 / 1.468490 
(0.265088) | 0.488455 / 4.584777 (-4.096322) | 3.613594 / 3.745712 (-0.132118) | 3.533334 / 5.269862 (-1.736528) | 2.176216 / 4.565676 (-2.389460) | 0.056915 / 0.424275 (-0.367360) | 0.007349 / 0.007607 (-0.000258) | 0.465132 / 0.226044 (0.239088) | 4.638479 / 2.268929 (2.369550) | 2.354741 / 55.444624 (-53.089883) | 1.991777 / 6.876477 (-4.884700) | 2.249823 / 2.142072 (0.107751) | 0.582748 / 4.805227 (-4.222480) | 0.133829 / 6.500664 (-6.366835) | 0.060949 / 0.075469 (-0.014520) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.252027 / 1.841788 (-0.589760) | 20.660234 / 8.074308 (12.585926) | 14.328496 / 10.191392 (4.137104) | 0.164872 / 0.680424 (-0.515552) | 0.018867 / 0.534201 (-0.515334) | 0.392850 / 0.579283 (-0.186433) | 0.425684 / 0.434364 (-0.008679) | 0.461776 / 0.540337 (-0.078562) | 0.663688 / 1.386936 (-0.723248) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007010 / 0.011353 (-0.004343) | 0.004791 / 0.011008 (-0.006217) | 0.064738 / 0.038508 (0.026230) | 0.088648 / 0.023109 (0.065539) | 0.418106 / 0.275898 (0.142208) | 0.446767 / 0.323480 (0.123287) | 0.006761 / 0.007986 (-0.001224) | 0.004649 / 0.004328 (0.000320) | 0.066345 / 0.004250 (0.062094) | 0.068326 / 0.037052 (0.031274) | 0.423426 / 0.258489 (0.164937) | 0.463160 / 0.293841 (0.169319) | 0.032689 / 0.128546 (-0.095858) | 0.009299 / 0.075646 (-0.066347) | 0.071321 / 0.419271 (-0.347951) | 0.048752 / 0.043533 (0.005219) | 0.418932 / 0.255139 (0.163793) | 0.440673 / 0.283200 (0.157473) | 0.027898 / 0.141683 (-0.113785) | 1.531860 / 1.452155 (0.079705) | 1.620456 / 1.492716 (0.127739) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.354917 / 0.018006 (0.336911) | 0.792432 / 0.000490 (0.791943) | 0.006626 / 0.000200 (0.006426) | 0.000124 / 0.000054 (0.000070) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.036190 / 0.037411 (-0.001222) | 0.093052 / 0.014526 (0.078526) | 0.111927 / 0.176557 (-0.064629) | 0.165571 / 0.737135 (-0.571564) | 0.112159 / 0.296338 (-0.184180) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.437798 / 0.215209 (0.222589) | 4.367166 / 2.077655 (2.289511) | 2.343292 / 1.504120 (0.839172) | 2.169298 / 1.541195 (0.628103) | 2.224471 / 1.468490 (0.755981) | 0.487317 / 4.584777 (-4.097460) | 3.627825 / 3.745712 (-0.117887) | 3.500914 / 5.269862 (-1.768947) | 2.175862 / 4.565676 (-2.389815) | 0.057975 / 0.424275 (-0.366300) | 0.007509 / 0.007607 (-0.000098) | 0.517389 / 0.226044 (0.291345) | 5.169694 / 2.268929 (2.900766) | 2.850993 / 55.444624 (-52.593631) | 2.473111 / 6.876477 (-4.403366) | 2.746731 / 2.142072 (0.604659) | 0.586597 / 4.805227 (-4.218630) | 0.134082 / 6.500664 (-6.366582) | 0.061035 / 0.075469 (-0.014434) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.375186 / 1.841788 (-0.466602) | 20.960817 / 8.074308 (12.886509) | 15.035071 / 10.191392 (4.843679) | 0.169494 / 0.680424 (-0.510930) | 0.020654 / 0.534201 (-0.513547) | 0.398047 / 0.579283 (-0.181236) | 0.438117 / 0.434364 (0.003753) | 0.483896 / 0.540337 (-0.056441) | 0.690728 / 1.386936 (-0.696208) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006892 / 0.011353 (-0.004461) | 0.004087 / 0.011008 (-0.006921) | 0.084695 / 0.038508 (0.046187) | 0.078084 / 0.023109 (0.054975) | 0.322976 / 0.275898 (0.047078) | 0.355332 / 0.323480 (0.031852) | 0.004235 / 0.007986 (-0.003750) | 0.003450 / 0.004328 (-0.000879) | 0.065355 / 0.004250 (0.061104) | 0.058593 / 0.037052 (0.021541) | 0.335761 / 0.258489 (0.077272) | 0.370392 / 0.293841 (0.076551) | 0.031720 / 0.128546 (-0.096827) | 0.008611 / 0.075646 (-0.067036) | 0.288213 / 0.419271 (-0.131059) | 0.053374 / 0.043533 (0.009842) | 0.321863 / 0.255139 (0.066724) | 0.341587 / 0.283200 (0.058387) | 0.025694 / 0.141683 (-0.115989) | 1.470502 / 1.452155 (0.018348) | 1.565068 / 1.492716 (0.072352) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.231063 / 0.018006 (0.213057) | 0.464996 / 0.000490 (0.464506) | 0.007316 / 0.000200 (0.007116) | 0.000288 / 0.000054 (0.000233) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029244 / 0.037411 (-0.008167) | 0.086303 / 0.014526 (0.071777) | 0.097281 / 0.176557 (-0.079276) | 0.153552 / 0.737135 (-0.583583) | 0.098488 / 0.296338 (-0.197850) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.382753 / 0.215209 (0.167544) | 3.826503 / 2.077655 (1.748848) | 1.848439 / 1.504120 (0.344319) | 1.688519 / 1.541195 (0.147324) | 1.787867 / 1.468490 
(0.319377) | 0.489708 / 4.584777 (-4.095069) | 3.576780 / 3.745712 (-0.168932) | 3.341536 / 5.269862 (-1.928325) | 2.108787 / 4.565676 (-2.456889) | 0.057409 / 0.424275 (-0.366866) | 0.007325 / 0.007607 (-0.000282) | 0.459536 / 0.226044 (0.233492) | 4.590609 / 2.268929 (2.321681) | 2.313005 / 55.444624 (-53.131620) | 1.972389 / 6.876477 (-4.904087) | 2.218511 / 2.142072 (0.076439) | 0.613817 / 4.805227 (-4.191410) | 0.133846 / 6.500664 (-6.366818) | 0.062190 / 0.075469 (-0.013279) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.279860 / 1.841788 (-0.561928) | 19.549777 / 8.074308 (11.475469) | 14.225844 / 10.191392 (4.034452) | 0.164682 / 0.680424 (-0.515741) | 0.018321 / 0.534201 (-0.515880) | 0.389874 / 0.579283 (-0.189409) | 0.408597 / 0.434364 (-0.025767) | 0.454327 / 0.540337 (-0.086011) | 0.645571 / 1.386936 (-0.741365) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007021 / 0.011353 (-0.004332) | 0.004119 / 0.011008 (-0.006889) | 0.065393 / 0.038508 (0.026885) | 0.085005 / 0.023109 (0.061896) | 0.412221 / 0.275898 (0.136323) | 0.438266 / 0.323480 (0.114786) | 0.005594 / 0.007986 (-0.002392) | 0.003499 / 0.004328 (-0.000829) | 0.065053 / 0.004250 (0.060802) | 0.060608 / 0.037052 (0.023555) | 0.413938 / 0.258489 (0.155449) | 0.446192 / 0.293841 (0.152351) | 0.032232 / 0.128546 (-0.096314) | 0.008617 / 0.075646 (-0.067029) | 0.071296 / 0.419271 (-0.347976) | 0.048756 / 0.043533 (0.005223) | 0.404977 / 0.255139 (0.149838) | 0.426801 / 0.283200 (0.143602) | 0.023650 / 0.141683 (-0.118033) | 1.526928 / 1.452155 (0.074773) | 1.627504 / 1.492716 (0.134787) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.224318 / 0.018006 (0.206312) | 0.469717 / 0.000490 (0.469227) | 0.005539 / 0.000200 (0.005339) | 0.000098 / 0.000054 (0.000043) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034240 / 0.037411 (-0.003171) | 0.096449 / 0.014526 (0.081923) | 0.107309 / 0.176557 (-0.069247) | 0.160246 / 0.737135 (-0.576889) | 0.107595 / 0.296338 (-0.188743) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.434266 / 0.215209 (0.219057) | 4.325571 / 2.077655 (2.247916) | 2.324066 / 1.504120 (0.819946) | 2.140238 / 1.541195 (0.599044) | 2.244593 / 1.468490 (0.776103) | 0.486259 / 4.584777 (-4.098518) | 3.644120 / 3.745712 (-0.101592) | 3.372330 / 5.269862 (-1.897531) | 2.074779 / 4.565676 (-2.490897) | 0.057154 / 0.424275 (-0.367121) | 0.007304 / 0.007607 (-0.000303) | 0.516944 / 0.226044 (0.290899) | 5.174300 / 2.268929 (2.905372) | 2.816269 / 55.444624 (-52.628356) | 2.462943 / 6.876477 (-4.413534) | 2.735851 / 2.142072 (0.593779) | 0.589028 / 4.805227 (-4.216200) | 0.131804 / 6.500664 (-6.368860) | 0.060173 / 0.075469 (-0.015296) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.354540 / 1.841788 (-0.487248) | 20.436511 / 8.074308 (12.362203) | 15.541981 / 10.191392 (5.350589) | 0.168399 / 0.680424 (-0.512025) | 0.020716 / 0.534201 (-0.513485) | 0.396275 / 0.579283 (-0.183008) | 0.427232 / 0.434364 (-0.007132) | 0.475121 / 0.540337 (-0.065216) | 0.648579 / 1.386936 (-0.738357) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009071 / 0.011353 (-0.002282) | 0.005820 / 0.011008 (-0.005188) | 0.119974 / 0.038508 (0.081466) | 0.092145 / 0.023109 (0.069036) | 0.445349 / 0.275898 (0.169451) | 0.442488 / 0.323480 (0.119008) | 0.005352 / 0.007986 (-0.002634) | 0.004332 / 0.004328 (0.000003) | 0.084397 / 0.004250 (0.080147) | 0.064624 / 0.037052 (0.027572) | 0.430938 / 0.258489 (0.172448) | 0.503574 / 0.293841 (0.209733) | 0.047900 / 0.128546 (-0.080647) | 0.014237 / 0.075646 (-0.061409) | 0.366145 / 0.419271 (-0.053127) | 0.066344 / 0.043533 (0.022811) | 0.424582 / 0.255139 (0.169443) | 0.451845 / 0.283200 (0.168646) | 0.041409 / 0.141683 (-0.100274) | 1.886998 / 1.452155 (0.434843) | 2.011676 / 1.492716 (0.518960) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.301008 / 0.018006 (0.283001) | 0.608670 / 0.000490 (0.608180) | 0.011963 / 0.000200 (0.011763) | 0.000117 / 0.000054 (0.000063) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031996 / 0.037411 (-0.005415) | 0.102274 / 0.014526 (0.087748) | 0.121437 / 0.176557 (-0.055120) | 0.181647 / 0.737135 (-0.555489) | 0.121634 / 0.296338 (-0.174704) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.597070 / 0.215209 (0.381861) | 5.973808 / 2.077655 (3.896154) | 2.486345 / 1.504120 (0.982225) | 2.125395 / 1.541195 (0.584201) | 2.270864 / 1.468490 
(0.802374) | 0.880031 / 4.584777 (-3.704746) | 5.396522 / 3.745712 (1.650809) | 4.702005 / 5.269862 (-0.567857) | 3.023087 / 4.565676 (-1.542589) | 0.097093 / 0.424275 (-0.327182) | 0.008457 / 0.007607 (0.000850) | 0.712164 / 0.226044 (0.486120) | 7.112867 / 2.268929 (4.843938) | 3.364509 / 55.444624 (-52.080115) | 2.646953 / 6.876477 (-4.229524) | 2.795967 / 2.142072 (0.653894) | 1.067182 / 4.805227 (-3.738046) | 0.218297 / 6.500664 (-6.282368) | 0.071720 / 0.075469 (-0.003750) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.640477 / 1.841788 (-0.201311) | 24.875163 / 8.074308 (16.800855) | 22.125706 / 10.191392 (11.934314) | 0.247267 / 0.680424 (-0.433157) | 0.033717 / 0.534201 (-0.500484) | 0.492422 / 0.579283 (-0.086862) | 0.578323 / 0.434364 (0.143959) | 0.579503 / 0.540337 (0.039165) | 0.816721 / 1.386936 (-0.570215) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009372 / 0.011353 (-0.001981) | 0.005449 / 0.011008 (-0.005559) | 0.095371 / 0.038508 (0.056863) | 0.086320 / 0.023109 (0.063211) | 0.539573 / 0.275898 (0.263675) | 0.580338 / 0.323480 (0.256858) | 0.007028 / 0.007986 (-0.000958) | 0.004196 / 0.004328 (-0.000133) | 0.082710 / 0.004250 (0.078460) | 0.064336 / 0.037052 (0.027284) | 0.521490 / 0.258489 (0.263001) | 0.567942 / 0.293841 (0.274101) | 0.049659 / 0.128546 (-0.078887) | 0.017297 / 0.075646 (-0.058350) | 0.093874 / 0.419271 (-0.325398) | 0.061664 / 0.043533 (0.018131) | 0.524476 / 0.255139 (0.269337) | 0.563255 / 0.283200 (0.280055) | 0.039990 / 0.141683 (-0.101693) | 1.854438 / 1.452155 (0.402283) | 1.819321 / 1.492716 (0.326605) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.298817 / 0.018006 (0.280811) | 0.629381 / 0.000490 (0.628891) | 0.006259 / 0.000200 (0.006059) | 0.000690 / 0.000054 (0.000635) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.041009 / 0.037411 (0.003598) | 0.123845 / 0.014526 (0.109319) | 0.138606 / 0.176557 (-0.037951) | 0.215042 / 0.737135 (-0.522093) | 0.129572 / 0.296338 (-0.166767) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.668823 / 0.215209 (0.453614) | 6.596762 / 2.077655 (4.519108) | 3.275429 / 1.504120 (1.771309) | 2.921747 / 1.541195 (1.380553) | 2.963748 / 1.468490 (1.495258) | 0.897588 / 4.584777 (-3.687188) | 5.683618 / 3.745712 (1.937906) | 5.051102 / 5.269862 (-0.218760) | 3.178855 / 4.565676 (-1.386822) | 0.107446 / 0.424275 (-0.316829) | 0.008967 / 0.007607 (0.001360) | 0.785577 / 0.226044 (0.559532) | 8.236556 / 2.268929 (5.967628) | 3.914725 / 55.444624 (-51.529899) | 3.129068 / 6.876477 (-3.747409) | 3.368383 / 2.142072 (1.226310) | 1.004307 / 4.805227 (-3.800920) | 0.204788 / 6.500664 (-6.295876) | 0.078250 / 0.075469 (0.002780) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.778574 / 1.841788 (-0.063213) | 25.583659 / 8.074308 (17.509351) | 23.505866 / 10.191392 (13.314474) | 0.228759 / 0.680424 (-0.451665) | 0.038348 / 0.534201 (-0.495853) | 0.468980 / 0.579283 (-0.110303) | 0.630194 / 0.434364 (0.195830) | 0.587535 / 0.540337 (0.047198) | 0.831761 / 1.386936 (-0.555175) |\n\n</details>\n</details>\n\n\n",
"I've addressed the comments. Let me know if it looks all good now :)",
"Actually just found out that the current `**/*[-._ 0-9/]train[-._ 0-9/]**` doesn't match `data/train.csv` in bash (but does match in fsspec right now).\r\n\r\nSo there might be a risk that this pattern breaks in the future no ?",
"@lhoestq `fsspec` has tests to check their specific (non-posix) behavior, so I think merging in the current state is fine. And if they make a breaking change in the future, we can align the patterns once again :) ",
"Yea after more thoughts I also think it's fine. Feel free to merge !",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006920 / 0.011353 (-0.004433) | 0.004182 / 0.011008 (-0.006826) | 0.084629 / 0.038508 (0.046121) | 0.086052 / 0.023109 (0.062943) | 0.326062 / 0.275898 (0.050164) | 0.344190 / 0.323480 (0.020710) | 0.005393 / 0.007986 (-0.002593) | 0.003410 / 0.004328 (-0.000918) | 0.064327 / 0.004250 (0.060076) | 0.056556 / 0.037052 (0.019504) | 0.319255 / 0.258489 (0.060766) | 0.357943 / 0.293841 (0.064102) | 0.032097 / 0.128546 (-0.096450) | 0.008778 / 0.075646 (-0.066868) | 0.291057 / 0.419271 (-0.128215) | 0.053225 / 0.043533 (0.009692) | 0.307713 / 0.255139 (0.052574) | 0.350058 / 0.283200 (0.066858) | 0.024380 / 0.141683 (-0.117303) | 1.459482 / 1.452155 (0.007328) | 1.555711 / 1.492716 (0.062994) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.239487 / 0.018006 (0.221480) | 0.467604 / 0.000490 (0.467114) | 0.010742 / 0.000200 (0.010542) | 0.000285 / 0.000054 (0.000230) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029394 / 0.037411 (-0.008018) | 0.087404 / 0.014526 (0.072879) | 0.098701 / 0.176557 (-0.077855) | 0.154145 / 0.737135 (-0.582990) | 0.099726 / 0.296338 (-0.196612) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.389008 / 0.215209 (0.173799) | 3.873165 / 2.077655 (1.795510) | 1.860676 / 1.504120 (0.356556) | 1.679668 / 1.541195 (0.138474) | 1.782347 / 1.468490 
(0.313857) | 0.489469 / 4.584777 (-4.095308) | 3.678706 / 3.745712 (-0.067006) | 3.404076 / 5.269862 (-1.865785) | 2.110972 / 4.565676 (-2.454704) | 0.057478 / 0.424275 (-0.366797) | 0.007443 / 0.007607 (-0.000164) | 0.464780 / 0.226044 (0.238736) | 4.643606 / 2.268929 (2.374678) | 2.355744 / 55.444624 (-53.088881) | 1.993992 / 6.876477 (-4.882485) | 2.245520 / 2.142072 (0.103447) | 0.592773 / 4.805227 (-4.212454) | 0.135369 / 6.500664 (-6.365295) | 0.062478 / 0.075469 (-0.012991) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.257537 / 1.841788 (-0.584251) | 19.828010 / 8.074308 (11.753702) | 14.709260 / 10.191392 (4.517868) | 0.168359 / 0.680424 (-0.512065) | 0.018907 / 0.534201 (-0.515294) | 0.397223 / 0.579283 (-0.182060) | 0.421760 / 0.434364 (-0.012604) | 0.464597 / 0.540337 (-0.075740) | 0.665905 / 1.386936 (-0.721031) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007247 / 0.011353 (-0.004106) | 0.004104 / 0.011008 (-0.006904) | 0.065008 / 0.038508 (0.026500) | 0.083485 / 0.023109 (0.060376) | 0.399808 / 0.275898 (0.123910) | 0.433374 / 0.323480 (0.109894) | 0.005453 / 0.007986 (-0.002532) | 0.003479 / 0.004328 (-0.000850) | 0.065126 / 0.004250 (0.060876) | 0.059945 / 0.037052 (0.022893) | 0.402018 / 0.258489 (0.143529) | 0.437927 / 0.293841 (0.144086) | 0.032654 / 0.128546 (-0.095892) | 0.008717 / 0.075646 (-0.066929) | 0.071737 / 0.419271 (-0.347534) | 0.048903 / 0.043533 (0.005370) | 0.402107 / 0.255139 (0.146968) | 0.417602 / 0.283200 (0.134402) | 0.024821 / 0.141683 (-0.116862) | 1.474471 / 1.452155 (0.022316) | 1.559571 / 1.492716 (0.066855) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.232010 / 0.018006 (0.214003) | 0.460768 / 0.000490 (0.460278) | 0.005250 / 0.000200 (0.005050) | 0.000109 / 0.000054 (0.000055) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033839 / 0.037411 (-0.003573) | 0.101617 / 0.014526 (0.087091) | 0.107984 / 0.176557 (-0.068573) | 0.160923 / 0.737135 (-0.576212) | 0.110367 / 0.296338 (-0.185971) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.433087 / 0.215209 (0.217878) | 4.324100 / 2.077655 (2.246445) | 2.312937 / 1.504120 (0.808817) | 2.159903 / 1.541195 (0.618708) | 2.240235 / 1.468490 (0.771745) | 0.500659 / 4.584777 (-4.084118) | 3.743801 / 3.745712 (-0.001911) | 3.441350 / 5.269862 (-1.828512) | 2.141370 / 4.565676 (-2.424306) | 0.059078 / 0.424275 (-0.365197) | 0.007468 / 0.007607 (-0.000139) | 0.508108 / 0.226044 (0.282064) | 5.076738 / 2.268929 (2.807809) | 2.825939 / 55.444624 (-52.618685) | 2.467762 / 6.876477 (-4.408715) | 2.705079 / 2.142072 (0.563006) | 0.603363 / 4.805227 (-4.201864) | 0.136267 / 6.500664 (-6.364397) | 0.062887 / 0.075469 (-0.012582) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.359344 / 1.841788 (-0.482443) | 20.581510 / 8.074308 (12.507202) | 15.534489 / 10.191392 (5.343097) | 0.192068 / 0.680424 (-0.488356) | 0.020831 / 0.534201 (-0.513370) | 0.403330 / 0.579283 (-0.175953) | 0.429536 / 0.434364 (-0.004828) | 0.479906 / 0.540337 (-0.060431) | 0.674170 / 1.386936 (-0.712766) |\n\n</details>\n</details>\n\n\n"
] | 2023-09-15T17:58:25
| 2023-09-26T15:41:38
| 2023-09-26T15:32:51
|
CONTRIBUTOR
| null |
Fix #6214
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/6244/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/6244/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6244",
"html_url": "https://github.com/huggingface/datasets/pull/6244",
"diff_url": "https://github.com/huggingface/datasets/pull/6244.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6244.patch",
"merged_at": "2023-09-26T15:32:51"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/6243
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6243/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6243/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6243/events
|
https://github.com/huggingface/datasets/pull/6243
| 1,898,532,784
|
PR_kwDODunzps5aclIy
| 6,243
|
Fix cast from fixed size list to variable size list
|
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006784 / 0.011353 (-0.004569) | 0.004051 / 0.011008 (-0.006957) | 0.083790 / 0.038508 (0.045282) | 0.081219 / 0.023109 (0.058110) | 0.313195 / 0.275898 (0.037297) | 0.336954 / 0.323480 (0.013475) | 0.004324 / 0.007986 (-0.003662) | 0.004516 / 0.004328 (0.000188) | 0.065051 / 0.004250 (0.060801) | 0.057647 / 0.037052 (0.020595) | 0.316675 / 0.258489 (0.058186) | 0.357936 / 0.293841 (0.064095) | 0.030980 / 0.128546 (-0.097566) | 0.008844 / 0.075646 (-0.066802) | 0.287027 / 0.419271 (-0.132245) | 0.052130 / 0.043533 (0.008597) | 0.308125 / 0.255139 (0.052986) | 0.337345 / 0.283200 (0.054145) | 0.025781 / 0.141683 (-0.115902) | 1.466161 / 1.452155 (0.014006) | 1.565824 / 1.492716 (0.073108) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.299112 / 0.018006 (0.281106) | 0.640520 / 0.000490 (0.640030) | 0.008846 / 0.000200 (0.008647) | 0.000273 / 0.000054 (0.000219) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029853 / 0.037411 (-0.007559) | 0.081697 / 0.014526 (0.067172) | 0.099110 / 0.176557 (-0.077447) | 0.155864 / 0.737135 (-0.581271) | 0.098749 / 0.296338 (-0.197590) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.385722 / 0.215209 (0.170512) | 3.851490 / 2.077655 (1.773835) | 1.851995 / 1.504120 (0.347875) | 1.660398 / 1.541195 (0.119204) | 1.769370 / 1.468490 
(0.300879) | 0.481523 / 4.584777 (-4.103254) | 3.550449 / 3.745712 (-0.195263) | 3.424782 / 5.269862 (-1.845079) | 2.106470 / 4.565676 (-2.459206) | 0.056500 / 0.424275 (-0.367775) | 0.007891 / 0.007607 (0.000284) | 0.465564 / 0.226044 (0.239520) | 4.662892 / 2.268929 (2.393964) | 2.305424 / 55.444624 (-53.139201) | 1.980524 / 6.876477 (-4.895953) | 2.218423 / 2.142072 (0.076350) | 0.584662 / 4.805227 (-4.220565) | 0.132325 / 6.500664 (-6.368340) | 0.060773 / 0.075469 (-0.014696) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.254261 / 1.841788 (-0.587527) | 19.479805 / 8.074308 (11.405497) | 14.222687 / 10.191392 (4.031295) | 0.149829 / 0.680424 (-0.530595) | 0.018630 / 0.534201 (-0.515571) | 0.395284 / 0.579283 (-0.183999) | 0.413385 / 0.434364 (-0.020978) | 0.462931 / 0.540337 (-0.077406) | 0.645359 / 1.386936 (-0.741577) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006991 / 0.011353 (-0.004362) | 0.004306 / 0.011008 (-0.006702) | 0.065213 / 0.038508 (0.026705) | 0.082442 / 0.023109 (0.059332) | 0.411294 / 0.275898 (0.135396) | 0.452176 / 0.323480 (0.128696) | 0.005802 / 0.007986 (-0.002183) | 0.003556 / 0.004328 (-0.000772) | 0.066163 / 0.004250 (0.061913) | 0.060680 / 0.037052 (0.023628) | 0.416975 / 0.258489 (0.158486) | 0.456353 / 0.293841 (0.162512) | 0.033584 / 0.128546 (-0.094963) | 0.008687 / 0.075646 (-0.066959) | 0.071300 / 0.419271 (-0.347972) | 0.049382 / 0.043533 (0.005849) | 0.409329 / 0.255139 (0.154190) | 0.434829 / 0.283200 (0.151629) | 0.022966 / 0.141683 (-0.118716) | 1.493847 / 1.452155 (0.041692) | 1.582372 / 1.492716 (0.089656) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.280578 / 0.018006 (0.262572) | 0.538122 / 0.000490 (0.537632) | 0.004515 / 0.000200 (0.004315) | 0.000098 / 0.000054 (0.000043) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033383 / 0.037411 (-0.004028) | 0.093426 / 0.014526 (0.078901) | 0.109314 / 0.176557 (-0.067242) | 0.162349 / 0.737135 (-0.574786) | 0.109849 / 0.296338 (-0.186490) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.431073 / 0.215209 (0.215864) | 4.311942 / 2.077655 (2.234287) | 2.291170 / 1.504120 (0.787051) | 2.132266 / 1.541195 (0.591072) | 2.236526 / 1.468490 (0.768036) | 0.492001 / 4.584777 (-4.092776) | 3.523013 / 3.745712 (-0.222699) | 3.413481 / 5.269862 (-1.856381) | 2.112979 / 4.565676 (-2.452698) | 0.058654 / 0.424275 (-0.365621) | 0.007729 / 0.007607 (0.000121) | 0.512027 / 0.226044 (0.285982) | 5.125264 / 2.268929 (2.856336) | 2.836281 / 55.444624 (-52.608344) | 2.447253 / 6.876477 (-4.429224) | 2.711908 / 2.142072 (0.569835) | 0.592598 / 4.805227 (-4.212629) | 0.134837 / 6.500664 (-6.365827) | 0.059813 / 0.075469 (-0.015656) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.373464 / 1.841788 (-0.468323) | 20.548983 / 8.074308 (12.474675) | 14.799833 / 10.191392 (4.608441) | 0.168601 / 0.680424 (-0.511823) | 0.020358 / 0.534201 (-0.513843) | 0.398790 / 0.579283 (-0.180494) | 0.416921 / 0.434364 (-0.017443) | 0.480542 / 0.540337 (-0.059795) | 0.645062 / 1.386936 (-0.741874) |\n\n</details>\n</details>\n\n\n",
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008616 / 0.011353 (-0.002737) | 0.004957 / 0.011008 (-0.006051) | 0.102629 / 0.038508 (0.064121) | 0.080492 / 0.023109 (0.057383) | 0.461817 / 0.275898 (0.185919) | 0.487964 / 0.323480 (0.164484) | 0.006336 / 0.007986 (-0.001649) | 0.004607 / 0.004328 (0.000278) | 0.074311 / 0.004250 (0.070061) | 0.060368 / 0.037052 (0.023315) | 0.458076 / 0.258489 (0.199587) | 0.493028 / 0.293841 (0.199187) | 0.044153 / 0.128546 (-0.084394) | 0.014066 / 0.075646 (-0.061581) | 0.369848 / 0.419271 (-0.049424) | 0.061690 / 0.043533 (0.018157) | 0.439728 / 0.255139 (0.184590) | 0.484706 / 0.283200 (0.201506) | 0.034657 / 0.141683 (-0.107026) | 1.710591 / 1.452155 (0.258437) | 1.900225 / 1.492716 (0.407509) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.308837 / 0.018006 (0.290831) | 0.579561 / 0.000490 (0.579072) | 0.010163 / 0.000200 (0.009963) | 0.000613 / 0.000054 (0.000558) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028108 / 0.037411 (-0.009303) | 0.085072 / 0.014526 (0.070546) | 0.103375 / 0.176557 (-0.073182) | 0.173765 / 0.737135 (-0.563371) | 0.102460 / 0.296338 (-0.193879) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.602642 / 0.215209 (0.387433) | 5.582537 / 2.077655 (3.504882) | 2.405553 / 1.504120 (0.901434) | 2.057298 / 1.541195 (0.516103) | 2.223787 / 1.468490 
(0.755297) | 0.846138 / 4.584777 (-3.738638) | 5.290306 / 3.745712 (1.544594) | 4.836066 / 5.269862 (-0.433795) | 2.951901 / 4.565676 (-1.613775) | 0.099432 / 0.424275 (-0.324843) | 0.009198 / 0.007607 (0.001591) | 0.731370 / 0.226044 (0.505325) | 6.663026 / 2.268929 (4.394098) | 3.200932 / 55.444624 (-52.243692) | 2.486654 / 6.876477 (-4.389823) | 2.833195 / 2.142072 (0.691123) | 0.989481 / 4.805227 (-3.815746) | 0.205176 / 6.500664 (-6.295488) | 0.073760 / 0.075469 (-0.001709) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.745494 / 1.841788 (-0.096294) | 24.649294 / 8.074308 (16.574986) | 22.312182 / 10.191392 (12.120790) | 0.245207 / 0.680424 (-0.435217) | 0.031971 / 0.534201 (-0.502230) | 0.495179 / 0.579283 (-0.084104) | 0.603233 / 0.434364 (0.168869) | 0.560906 / 0.540337 (0.020569) | 0.788292 / 1.386936 (-0.598644) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008922 / 0.011353 (-0.002431) | 0.005203 / 0.011008 (-0.005805) | 0.074414 / 0.038508 (0.035906) | 0.077552 / 0.023109 (0.054443) | 0.547217 / 0.275898 (0.271319) | 0.625298 / 0.323480 (0.301818) | 0.006135 / 0.007986 (-0.001851) | 0.004163 / 0.004328 (-0.000165) | 0.078014 / 0.004250 (0.073764) | 0.064484 / 0.037052 (0.027431) | 0.562356 / 0.258489 (0.303867) | 0.643613 / 0.293841 (0.349772) | 0.050155 / 0.128546 (-0.078391) | 0.013665 / 0.075646 (-0.061981) | 0.090224 / 0.419271 (-0.329048) | 0.063852 / 0.043533 (0.020319) | 0.560914 / 0.255139 (0.305775) | 0.591531 / 0.283200 (0.308331) | 0.036491 / 0.141683 (-0.105192) | 1.670898 / 1.452155 (0.218743) | 1.783924 / 1.492716 (0.291208) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.312764 / 0.018006 (0.294758) | 0.611116 / 0.000490 (0.610626) | 0.006367 / 0.000200 (0.006167) | 0.000130 / 0.000054 (0.000075) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033967 / 0.037411 (-0.003445) | 0.101550 / 0.014526 (0.087025) | 0.116953 / 0.176557 (-0.059604) | 0.180061 / 0.737135 (-0.557075) | 0.115220 / 0.296338 (-0.181118) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.642110 / 0.215209 (0.426901) | 6.361381 / 2.077655 (4.283727) | 2.948175 / 1.504120 (1.444055) | 2.633935 / 1.541195 (1.092740) | 2.822150 / 1.468490 (1.353660) | 0.931412 / 4.584777 (-3.653365) | 5.428540 / 3.745712 (1.682828) | 4.672920 / 5.269862 (-0.596941) | 3.102046 / 4.565676 (-1.463630) | 0.100825 / 0.424275 (-0.323450) | 0.009464 / 0.007607 (0.001857) | 0.774102 / 0.226044 (0.548058) | 7.715003 / 2.268929 (5.446074) | 3.987807 / 55.444624 (-51.456817) | 3.089129 / 6.876477 (-3.787347) | 3.333247 / 2.142072 (1.191174) | 1.012427 / 4.805227 (-3.792800) | 0.200662 / 6.500664 (-6.300002) | 0.072422 / 0.075469 (-0.003047) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.680364 / 1.841788 (-0.161424) | 24.484576 / 8.074308 (16.410268) | 21.920990 / 10.191392 (11.729598) | 0.218604 / 0.680424 (-0.461820) | 0.035818 / 0.534201 (-0.498383) | 0.470648 / 0.579283 (-0.108635) | 0.585108 / 0.434364 (0.150744) | 0.539152 / 0.540337 (-0.001185) | 0.763999 / 1.386936 (-0.622937) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006304 / 0.011353 (-0.005049) | 0.003884 / 0.011008 (-0.007125) | 0.084847 / 0.038508 (0.046339) | 0.069372 / 0.023109 (0.046263) | 0.318876 / 0.275898 (0.042978) | 0.344733 / 0.323480 (0.021253) | 0.005139 / 0.007986 (-0.002847) | 0.003203 / 0.004328 (-0.001125) | 0.065758 / 0.004250 (0.061507) | 0.054189 / 0.037052 (0.017137) | 0.317475 / 0.258489 (0.058986) | 0.359310 / 0.293841 (0.065469) | 0.030639 / 0.128546 (-0.097908) | 0.008657 / 0.075646 (-0.066989) | 0.289127 / 0.419271 (-0.130144) | 0.052344 / 0.043533 (0.008811) | 0.316122 / 0.255139 (0.060983) | 0.338339 / 0.283200 (0.055140) | 0.022677 / 0.141683 (-0.119006) | 1.551629 / 1.452155 (0.099474) | 1.617917 / 1.492716 (0.125201) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.231067 / 0.018006 (0.213061) | 0.450559 / 0.000490 (0.450070) | 0.008484 / 0.000200 (0.008284) | 0.000234 / 0.000054 (0.000179) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027054 / 0.037411 (-0.010357) | 0.081560 / 0.014526 (0.067034) | 0.094162 / 0.176557 (-0.082395) | 0.148583 / 0.737135 (-0.588552) | 0.093596 / 0.296338 (-0.202742) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.388616 / 0.215209 (0.173407) | 3.874905 / 2.077655 (1.797251) | 1.915845 / 1.504120 (0.411725) | 1.746410 / 1.541195 (0.205215) | 1.828789 / 1.468490 
(0.360299) | 0.483270 / 4.584777 (-4.101506) | 3.489157 / 3.745712 (-0.256555) | 3.190086 / 5.269862 (-2.079776) | 1.978023 / 4.565676 (-2.587653) | 0.056290 / 0.424275 (-0.367985) | 0.007585 / 0.007607 (-0.000022) | 0.467051 / 0.226044 (0.241007) | 4.665971 / 2.268929 (2.397043) | 2.418550 / 55.444624 (-53.026075) | 2.048338 / 6.876477 (-4.828139) | 2.225275 / 2.142072 (0.083203) | 0.576601 / 4.805227 (-4.228626) | 0.131960 / 6.500664 (-6.368704) | 0.060177 / 0.075469 (-0.015292) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.249797 / 1.841788 (-0.591991) | 18.552939 / 8.074308 (10.478631) | 14.016616 / 10.191392 (3.825224) | 0.162869 / 0.680424 (-0.517555) | 0.018105 / 0.534201 (-0.516096) | 0.394838 / 0.579283 (-0.184445) | 0.403378 / 0.434364 (-0.030986) | 0.460931 / 0.540337 (-0.079407) | 0.637365 / 1.386936 (-0.749571) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006497 / 0.011353 (-0.004856) | 0.003928 / 0.011008 (-0.007080) | 0.063958 / 0.038508 (0.025450) | 0.069609 / 0.023109 (0.046500) | 0.401599 / 0.275898 (0.125701) | 0.428128 / 0.323480 (0.104648) | 0.005296 / 0.007986 (-0.002689) | 0.003332 / 0.004328 (-0.000996) | 0.063903 / 0.004250 (0.059652) | 0.056303 / 0.037052 (0.019250) | 0.400704 / 0.258489 (0.142214) | 0.435982 / 0.293841 (0.142141) | 0.032434 / 0.128546 (-0.096112) | 0.008570 / 0.075646 (-0.067077) | 0.070788 / 0.419271 (-0.348483) | 0.048252 / 0.043533 (0.004719) | 0.403269 / 0.255139 (0.148130) | 0.419796 / 0.283200 (0.136596) | 0.022598 / 0.141683 (-0.119085) | 1.481627 / 1.452155 (0.029472) | 1.578388 / 1.492716 (0.085672) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.224552 / 0.018006 (0.206546) | 0.444059 / 0.000490 (0.443570) | 0.003757 / 0.000200 (0.003557) | 0.000225 / 0.000054 (0.000171) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032173 / 0.037411 (-0.005239) | 0.092562 / 0.014526 (0.078036) | 0.104972 / 0.176557 (-0.071584) | 0.156467 / 0.737135 (-0.580669) | 0.104274 / 0.296338 (-0.192065) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.441693 / 0.215209 (0.226484) | 4.400217 / 2.077655 (2.322562) | 2.393862 / 1.504120 (0.889742) | 2.281178 / 1.541195 (0.739983) | 2.339895 / 1.468490 (0.871405) | 0.488734 / 4.584777 (-4.096043) | 3.523352 / 3.745712 (-0.222360) | 3.216761 / 5.269862 (-2.053101) | 2.007553 / 4.565676 (-2.558123) | 0.058050 / 0.424275 (-0.366225) | 0.007566 / 0.007607 (-0.000041) | 0.515439 / 0.226044 (0.289394) | 5.155086 / 2.268929 (2.886157) | 2.864958 / 55.444624 (-52.579666) | 2.592460 / 6.876477 (-4.284016) | 2.800449 / 2.142072 (0.658376) | 0.588441 / 4.805227 (-4.216786) | 0.131589 / 6.500664 (-6.369075) | 0.059075 / 0.075469 (-0.016394) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.353889 / 1.841788 (-0.487898) | 18.938285 / 8.074308 (10.863977) | 14.937141 / 10.191392 (4.745749) | 0.168811 / 0.680424 (-0.511613) | 0.020118 / 0.534201 (-0.514083) | 0.394791 / 0.579283 (-0.184492) | 0.414434 / 0.434364 (-0.019930) | 0.466821 / 0.540337 (-0.073517) | 0.629894 / 1.386936 (-0.757042) |\n\n</details>\n</details>\n\n\n",
"CI failures are unrelated",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005959 / 0.011353 (-0.005394) | 0.004164 / 0.011008 (-0.006844) | 0.082336 / 0.038508 (0.043828) | 0.070344 / 0.023109 (0.047234) | 0.348032 / 0.275898 (0.072134) | 0.366328 / 0.323480 (0.042848) | 0.003882 / 0.007986 (-0.004104) | 0.003619 / 0.004328 (-0.000709) | 0.063343 / 0.004250 (0.059093) | 0.056617 / 0.037052 (0.019564) | 0.351625 / 0.258489 (0.093136) | 0.395839 / 0.293841 (0.101998) | 0.030842 / 0.128546 (-0.097704) | 0.008363 / 0.075646 (-0.067284) | 0.300535 / 0.419271 (-0.118737) | 0.053303 / 0.043533 (0.009770) | 0.354782 / 0.255139 (0.099643) | 0.364918 / 0.283200 (0.081719) | 0.025365 / 0.141683 (-0.116318) | 1.555009 / 1.452155 (0.102854) | 1.597443 / 1.492716 (0.104727) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.239808 / 0.018006 (0.221801) | 0.488164 / 0.000490 (0.487675) | 0.013183 / 0.000200 (0.012983) | 0.000483 / 0.000054 (0.000429) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027938 / 0.037411 (-0.009473) | 0.078521 / 0.014526 (0.063995) | 0.095498 / 0.176557 (-0.081059) | 0.150884 / 0.737135 (-0.586251) | 0.097577 / 0.296338 (-0.198762) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.384546 / 0.215209 (0.169337) | 4.037707 / 2.077655 (1.960053) | 1.940321 / 1.504120 (0.436201) | 1.716741 / 1.541195 (0.175546) | 1.837200 / 1.468490 
(0.368710) | 0.502112 / 4.584777 (-4.082665) | 3.770452 / 3.745712 (0.024740) | 3.325691 / 5.269862 (-1.944171) | 2.015622 / 4.565676 (-2.550055) | 0.056246 / 0.424275 (-0.368029) | 0.007320 / 0.007607 (-0.000287) | 0.445553 / 0.226044 (0.219509) | 4.567233 / 2.268929 (2.298304) | 2.319531 / 55.444624 (-53.125093) | 1.968664 / 6.876477 (-4.907813) | 2.122349 / 2.142072 (-0.019724) | 0.573688 / 4.805227 (-4.231540) | 0.131410 / 6.500664 (-6.369254) | 0.062767 / 0.075469 (-0.012702) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.255244 / 1.841788 (-0.586543) | 19.042480 / 8.074308 (10.968172) | 13.935342 / 10.191392 (3.743950) | 0.161259 / 0.680424 (-0.519165) | 0.020582 / 0.534201 (-0.513619) | 0.391365 / 0.579283 (-0.187918) | 0.417462 / 0.434364 (-0.016902) | 0.473121 / 0.540337 (-0.067216) | 0.674768 / 1.386936 (-0.712168) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006299 / 0.011353 (-0.005054) | 0.003969 / 0.011008 (-0.007040) | 0.063558 / 0.038508 (0.025050) | 0.073847 / 0.023109 (0.050738) | 0.407064 / 0.275898 (0.131166) | 0.440695 / 0.323480 (0.117215) | 0.005783 / 0.007986 (-0.002203) | 0.003517 / 0.004328 (-0.000812) | 0.065721 / 0.004250 (0.061470) | 0.056390 / 0.037052 (0.019338) | 0.419019 / 0.258489 (0.160530) | 0.450721 / 0.293841 (0.156880) | 0.034094 / 0.128546 (-0.094452) | 0.008594 / 0.075646 (-0.067052) | 0.069254 / 0.419271 (-0.350017) | 0.049218 / 0.043533 (0.005685) | 0.413312 / 0.255139 (0.158173) | 0.439454 / 0.283200 (0.156255) | 0.021481 / 0.141683 (-0.120202) | 1.517536 / 1.452155 (0.065382) | 1.530532 / 1.492716 (0.037815) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.235392 / 0.018006 (0.217386) | 0.477371 / 0.000490 (0.476881) | 0.007070 / 0.000200 (0.006870) | 0.000132 / 0.000054 (0.000077) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031909 / 0.037411 (-0.005502) | 0.092459 / 0.014526 (0.077933) | 0.105795 / 0.176557 (-0.070761) | 0.157745 / 0.737135 (-0.579390) | 0.104187 / 0.296338 (-0.192152) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.424385 / 0.215209 (0.209176) | 4.445371 / 2.077655 (2.367716) | 2.423639 / 1.504120 (0.919519) | 2.188167 / 1.541195 (0.646972) | 2.171023 / 1.468490 (0.702532) | 0.483566 / 4.584777 (-4.101211) | 3.825702 / 3.745712 (0.079990) | 3.276350 / 5.269862 (-1.993512) | 2.063075 / 4.565676 (-2.502602) | 0.061628 / 0.424275 (-0.362647) | 0.008176 / 0.007607 (0.000569) | 0.506697 / 0.226044 (0.280653) | 5.067924 / 2.268929 (2.798995) | 2.785567 / 55.444624 (-52.659057) | 2.457340 / 6.876477 (-4.419137) | 2.599646 / 2.142072 (0.457574) | 0.581550 / 4.805227 (-4.223677) | 0.131712 / 6.500664 (-6.368952) | 0.058776 / 0.075469 (-0.016693) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.356639 / 1.841788 (-0.485148) | 20.103463 / 8.074308 (12.029155) | 14.481010 / 10.191392 (4.289618) | 0.162870 / 0.680424 (-0.517554) | 0.023197 / 0.534201 (-0.511004) | 0.413042 / 0.579283 (-0.166241) | 0.427494 / 0.434364 (-0.006870) | 0.508457 / 0.540337 (-0.031880) | 0.662412 / 1.386936 (-0.724524) |\n\n</details>\n</details>\n\n\n"
] | 2023-09-15T14:23:33
| 2023-09-19T18:02:21
| 2023-09-19T17:53:17
|
CONTRIBUTOR
| null |
Fix #6242
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/6243/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/6243/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6243",
"html_url": "https://github.com/huggingface/datasets/pull/6243",
"diff_url": "https://github.com/huggingface/datasets/pull/6243.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6243.patch",
"merged_at": "2023-09-19T17:53:17"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/6242
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6242/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6242/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6242/events
|
https://github.com/huggingface/datasets/issues/6242
| 1,896,899,123
|
I_kwDODunzps5xEGIz
| 6,242
|
Data alteration when loading dataset with unspecified inner sequence length
|
{
"login": "qgallouedec",
"id": 45557362,
"node_id": "MDQ6VXNlcjQ1NTU3MzYy",
"avatar_url": "https://avatars.githubusercontent.com/u/45557362?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qgallouedec",
"html_url": "https://github.com/qgallouedec",
"followers_url": "https://api.github.com/users/qgallouedec/followers",
"following_url": "https://api.github.com/users/qgallouedec/following{/other_user}",
"gists_url": "https://api.github.com/users/qgallouedec/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qgallouedec/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qgallouedec/subscriptions",
"organizations_url": "https://api.github.com/users/qgallouedec/orgs",
"repos_url": "https://api.github.com/users/qgallouedec/repos",
"events_url": "https://api.github.com/users/qgallouedec/events{/privacy}",
"received_events_url": "https://api.github.com/users/qgallouedec/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"While this issue may seem specific, it led to a silent problem in my workflow that took days to diagnose. If this feature is not intended to be supported, an error should be raised when encountering this configuration to prevent such issues.",
"Thanks for reporting! This is a MRE:\r\n\r\n```python\r\nimport pyarrow as pa\r\nfrom datasets.table import cast_array_to_feature\r\nfrom datasets import Sequence, Value\r\ndata = [\r\n [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]],\r\n [[7.0, 8.0, 9.0], [10.0, 11.0, 12.0]],\r\n]\r\narr = pa.array(data, pa.list_(pa.list_(pa.float32(), 3)))\r\ncast_array_to_feature(arr, Sequence(Sequence(Value(\"float32\"))))\r\n```\r\n\r\nI've opened a PR with a fix."
] | 2023-09-14T16:12:45
| 2023-09-19T17:53:18
| 2023-09-19T17:53:18
|
CONTRIBUTOR
| null |
### Describe the bug
When a dataset saved with a specified inner sequence length is loaded without specifying that length, the original data is altered and becomes inconsistent.
### Steps to reproduce the bug
```python
from datasets import Dataset, Features, Value, Sequence, load_dataset
# Repository ID
repo_id = "my_repo_id"
# Define features with a specific length of 3 for each inner sequence
specified_features = Features({"key": Sequence(Sequence(Value("float32"), length=3))})
# Create a dataset with the specified features
data = [
[[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]],
[[7.0, 8.0, 9.0], [10.0, 11.0, 12.0]],
]
dataset = Dataset.from_dict({"key": data}, features=specified_features)
# Push the dataset to the hub
dataset.push_to_hub(repo_id)
# Define features without specifying the length
unspecified_features = Features({"key": Sequence(Sequence(Value("float32")))})
# Load the dataset from the hub with this new feature definition
dataset = load_dataset(f"qgallouedec/{repo_id}", split="train", features=unspecified_features)
# The obtained data is altered
print(dataset.to_dict()) # {'key': [[[1.0], [2.0]], [[3.0], [4.0]]]}
```
### Expected behavior
```python
print(dataset.to_dict()) # {'key': [[[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]], [[7.0, 8.0, 9.0], [10.0, 11.0, 12.0]]]}
```
### Environment info
- `datasets` version: 2.14.4
- Platform: Linux-6.2.0-32-generic-x86_64-with-glibc2.35
- Python version: 3.9.12
- Huggingface_hub version: 0.15.1
- PyArrow version: 12.0.1
- Pandas version: 2.0.3
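### Possible workaround
A minimal sketch (an assumption on my part, not verified across `datasets` versions): re-declaring the saved fixed-length features when loading keeps the Arrow fixed-size list type intact, so the values come back unaltered. Equivalently, omitting `features=` entirely and letting the stored schema be used should also avoid the issue.
```python
from datasets import Features, Sequence, Value, load_dataset

# Same placeholder repo as above
repo_id = "qgallouedec/my_repo_id"
# Re-declare the features exactly as they were saved (inner length 3)
specified_features = Features({"key": Sequence(Sequence(Value("float32"), length=3))})
dataset = load_dataset(repo_id, split="train", features=specified_features)
print(dataset.to_dict())  # the original, unaltered nested values
```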
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/6242/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/6242/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/6241
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6241/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6241/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6241/events
|
https://github.com/huggingface/datasets/pull/6241
| 1,896,429,694
|
PR_kwDODunzps5aVfl-
| 6,241
|
Remove unused global variables in `audio.py`
|
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006753 / 0.011353 (-0.004600) | 0.004027 / 0.011008 (-0.006982) | 0.084200 / 0.038508 (0.045692) | 0.072233 / 0.023109 (0.049124) | 0.361535 / 0.275898 (0.085637) | 0.386196 / 0.323480 (0.062716) | 0.004047 / 0.007986 (-0.003939) | 0.003416 / 0.004328 (-0.000912) | 0.064724 / 0.004250 (0.060474) | 0.055740 / 0.037052 (0.018688) | 0.360422 / 0.258489 (0.101933) | 0.399230 / 0.293841 (0.105389) | 0.031537 / 0.128546 (-0.097009) | 0.008630 / 0.075646 (-0.067016) | 0.289652 / 0.419271 (-0.129620) | 0.052881 / 0.043533 (0.009348) | 0.359538 / 0.255139 (0.104399) | 0.379410 / 0.283200 (0.096211) | 0.024539 / 0.141683 (-0.117144) | 1.470891 / 1.452155 (0.018736) | 1.578879 / 1.492716 (0.086163) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.239200 / 0.018006 (0.221194) | 0.462100 / 0.000490 (0.461610) | 0.009055 / 0.000200 (0.008856) | 0.000406 / 0.000054 (0.000352) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028736 / 0.037411 (-0.008675) | 0.088051 / 0.014526 (0.073525) | 0.098101 / 0.176557 (-0.078456) | 0.152399 / 0.737135 (-0.584737) | 0.098776 / 0.296338 (-0.197563) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.401761 / 0.215209 (0.186552) | 4.014143 / 2.077655 (1.936488) | 2.033255 / 1.504120 (0.529135) | 1.855347 / 1.541195 (0.314152) | 1.996144 / 1.468490 
(0.527654) | 0.488545 / 4.584777 (-4.096232) | 3.712030 / 3.745712 (-0.033682) | 3.439725 / 5.269862 (-1.830137) | 2.119289 / 4.565676 (-2.446388) | 0.057523 / 0.424275 (-0.366752) | 0.007780 / 0.007607 (0.000173) | 0.479522 / 0.226044 (0.253477) | 4.798218 / 2.268929 (2.529290) | 2.543816 / 55.444624 (-52.900809) | 2.180392 / 6.876477 (-4.696085) | 2.427195 / 2.142072 (0.285122) | 0.602071 / 4.805227 (-4.203156) | 0.133450 / 6.500664 (-6.367214) | 0.061975 / 0.075469 (-0.013494) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.250040 / 1.841788 (-0.591748) | 19.532327 / 8.074308 (11.458019) | 14.200298 / 10.191392 (4.008906) | 0.165165 / 0.680424 (-0.515259) | 0.018326 / 0.534201 (-0.515875) | 0.389788 / 0.579283 (-0.189495) | 0.419301 / 0.434364 (-0.015063) | 0.452645 / 0.540337 (-0.087693) | 0.643409 / 1.386936 (-0.743527) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007040 / 0.011353 (-0.004313) | 0.004157 / 0.011008 (-0.006851) | 0.065439 / 0.038508 (0.026931) | 0.083210 / 0.023109 (0.060101) | 0.406707 / 0.275898 (0.130809) | 0.442759 / 0.323480 (0.119279) | 0.006321 / 0.007986 (-0.001665) | 0.003684 / 0.004328 (-0.000645) | 0.064517 / 0.004250 (0.060266) | 0.060676 / 0.037052 (0.023624) | 0.413395 / 0.258489 (0.154906) | 0.446776 / 0.293841 (0.152935) | 0.032542 / 0.128546 (-0.096004) | 0.008614 / 0.075646 (-0.067033) | 0.071760 / 0.419271 (-0.347511) | 0.049646 / 0.043533 (0.006113) | 0.402409 / 0.255139 (0.147270) | 0.422775 / 0.283200 (0.139575) | 0.024846 / 0.141683 (-0.116836) | 1.522915 / 1.452155 (0.070761) | 1.566518 / 1.492716 (0.073802) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.234478 / 0.018006 (0.216472) | 0.461318 / 0.000490 (0.460828) | 0.006304 / 0.000200 (0.006105) | 0.000105 / 0.000054 (0.000051) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.036904 / 0.037411 (-0.000508) | 0.102144 / 0.014526 (0.087619) | 0.108985 / 0.176557 (-0.067572) | 0.162609 / 0.737135 (-0.574526) | 0.110295 / 0.296338 (-0.186044) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.438735 / 0.215209 (0.223526) | 4.377602 / 2.077655 (2.299948) | 2.375305 / 1.504120 (0.871185) | 2.215877 / 1.541195 (0.674682) | 2.317468 / 1.468490 (0.848978) | 0.495137 / 4.584777 (-4.089640) | 3.726323 / 3.745712 (-0.019389) | 3.493785 / 5.269862 (-1.776077) | 2.177891 / 4.565676 (-2.387785) | 0.058975 / 0.424275 (-0.365300) | 0.007897 / 0.007607 (0.000290) | 0.514063 / 0.226044 (0.288019) | 5.132714 / 2.268929 (2.863786) | 2.914125 / 55.444624 (-52.530499) | 2.532912 / 6.876477 (-4.343564) | 2.776438 / 2.142072 (0.634365) | 0.624831 / 4.805227 (-4.180396) | 0.135023 / 6.500664 (-6.365641) | 0.062040 / 0.075469 (-0.013429) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.359970 / 1.841788 (-0.481818) | 20.816464 / 8.074308 (12.742156) | 16.103544 / 10.191392 (5.912152) | 0.149120 / 0.680424 (-0.531304) | 0.020279 / 0.534201 (-0.513922) | 0.408727 / 0.579283 (-0.170556) | 0.436191 / 0.434364 (0.001827) | 0.485056 / 0.540337 (-0.055281) | 0.737727 / 1.386936 (-0.649209) |\n\n</details>\n</details>\n\n\n",
"CI failures are unrelated",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008102 / 0.011353 (-0.003251) | 0.004886 / 0.011008 (-0.006123) | 0.090482 / 0.038508 (0.051974) | 0.071594 / 0.023109 (0.048485) | 0.428678 / 0.275898 (0.152780) | 0.442179 / 0.323480 (0.118699) | 0.004329 / 0.007986 (-0.003657) | 0.003756 / 0.004328 (-0.000573) | 0.087125 / 0.004250 (0.082874) | 0.055159 / 0.037052 (0.018107) | 0.437646 / 0.258489 (0.179157) | 0.446665 / 0.293841 (0.152824) | 0.046402 / 0.128546 (-0.082145) | 0.014248 / 0.075646 (-0.061398) | 0.331401 / 0.419271 (-0.087871) | 0.062010 / 0.043533 (0.018478) | 0.434774 / 0.255139 (0.179635) | 0.441063 / 0.283200 (0.157863) | 0.037424 / 0.141683 (-0.104258) | 1.720276 / 1.452155 (0.268121) | 1.731491 / 1.492716 (0.238775) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.302935 / 0.018006 (0.284929) | 0.590556 / 0.000490 (0.590067) | 0.014473 / 0.000200 (0.014274) | 0.000712 / 0.000054 (0.000658) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031289 / 0.037411 (-0.006122) | 0.091175 / 0.014526 (0.076649) | 0.112895 / 0.176557 (-0.063661) | 0.199558 / 0.737135 (-0.537577) | 0.113397 / 0.296338 (-0.182942) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.571586 / 0.215209 (0.356377) | 5.706894 / 2.077655 (3.629240) | 2.512701 / 1.504120 (1.008581) | 2.151705 / 1.541195 (0.610510) | 2.252738 / 1.468490 
(0.784248) | 0.857524 / 4.584777 (-3.727253) | 5.189027 / 3.745712 (1.443315) | 4.464979 / 5.269862 (-0.804882) | 2.787486 / 4.565676 (-1.778190) | 0.090161 / 0.424275 (-0.334115) | 0.008649 / 0.007607 (0.001042) | 0.703367 / 0.226044 (0.477322) | 7.128971 / 2.268929 (4.860043) | 3.437475 / 55.444624 (-52.007149) | 2.562291 / 6.876477 (-4.314186) | 2.753419 / 2.142072 (0.611346) | 0.981964 / 4.805227 (-3.823263) | 0.194533 / 6.500664 (-6.306131) | 0.069659 / 0.075469 (-0.005810) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.510356 / 1.841788 (-0.331431) | 22.414117 / 8.074308 (14.339809) | 20.325418 / 10.191392 (10.134025) | 0.226823 / 0.680424 (-0.453601) | 0.029123 / 0.534201 (-0.505078) | 0.454656 / 0.579283 (-0.124627) | 0.559588 / 0.434364 (0.125224) | 0.547386 / 0.540337 (0.007048) | 0.770169 / 1.386936 (-0.616767) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010167 / 0.011353 (-0.001186) | 0.005164 / 0.011008 (-0.005844) | 0.094897 / 0.038508 (0.056388) | 0.078027 / 0.023109 (0.054918) | 0.474442 / 0.275898 (0.198544) | 0.503362 / 0.323480 (0.179882) | 0.006988 / 0.007986 (-0.000998) | 0.005369 / 0.004328 (0.001041) | 0.079547 / 0.004250 (0.075297) | 0.059382 / 0.037052 (0.022329) | 0.468759 / 0.258489 (0.210270) | 0.566780 / 0.293841 (0.272939) | 0.050791 / 0.128546 (-0.077755) | 0.013191 / 0.075646 (-0.062455) | 0.086086 / 0.419271 (-0.333186) | 0.060399 / 0.043533 (0.016866) | 0.492985 / 0.255139 (0.237846) | 0.509139 / 0.283200 (0.225940) | 0.034537 / 0.141683 (-0.107146) | 1.699166 / 1.452155 (0.247011) | 1.789781 / 1.492716 (0.297065) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.278776 / 0.018006 (0.260769) | 0.615877 / 0.000490 (0.615387) | 0.009062 / 0.000200 (0.008862) | 0.000112 / 0.000054 (0.000057) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032931 / 0.037411 (-0.004481) | 0.094796 / 0.014526 (0.080270) | 0.126697 / 0.176557 (-0.049859) | 0.168172 / 0.737135 (-0.568963) | 0.113906 / 0.296338 (-0.182433) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.602378 / 0.215209 (0.387169) | 5.987708 / 2.077655 (3.910054) | 2.800339 / 1.504120 (1.296219) | 2.474127 / 1.541195 (0.932932) | 2.502387 / 1.468490 (1.033897) | 0.808147 / 4.584777 (-3.776630) | 5.212691 / 3.745712 (1.466979) | 4.479452 / 5.269862 (-0.790409) | 2.831960 / 4.565676 (-1.733717) | 0.086777 / 0.424275 (-0.337498) | 0.009492 / 0.007607 (0.001885) | 0.716848 / 0.226044 (0.490803) | 7.099904 / 2.268929 (4.830975) | 3.794708 / 55.444624 (-51.649916) | 2.859826 / 6.876477 (-4.016650) | 3.109673 / 2.142072 (0.967600) | 0.936776 / 4.805227 (-3.868451) | 0.195152 / 6.500664 (-6.305512) | 0.074184 / 0.075469 (-0.001285) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.585419 / 1.841788 (-0.256369) | 22.420377 / 8.074308 (14.346068) | 20.761533 / 10.191392 (10.570141) | 0.228480 / 0.680424 (-0.451943) | 0.030944 / 0.534201 (-0.503257) | 0.444717 / 0.579283 (-0.134566) | 0.579632 / 0.434364 (0.145268) | 0.521669 / 0.540337 (-0.018669) | 0.748274 / 1.386936 (-0.638662) |\n\n</details>\n</details>\n\n\n"
] | 2023-09-14T12:06:32
| 2023-09-15T15:57:10
| 2023-09-15T15:46:07
|
CONTRIBUTOR
| null | null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/6241/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/6241/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6241",
"html_url": "https://github.com/huggingface/datasets/pull/6241",
"diff_url": "https://github.com/huggingface/datasets/pull/6241.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6241.patch",
"merged_at": "2023-09-15T15:46:07"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/6240
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6240/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6240/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6240/events
|
https://github.com/huggingface/datasets/issues/6240
| 1,895,723,888
|
I_kwDODunzps5w_nNw
| 6,240
|
Dataloader stuck on multiple GPUs
|
{
"login": "kuri54",
"id": 40049003,
"node_id": "MDQ6VXNlcjQwMDQ5MDAz",
"avatar_url": "https://avatars.githubusercontent.com/u/40049003?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kuri54",
"html_url": "https://github.com/kuri54",
"followers_url": "https://api.github.com/users/kuri54/followers",
"following_url": "https://api.github.com/users/kuri54/following{/other_user}",
"gists_url": "https://api.github.com/users/kuri54/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kuri54/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kuri54/subscriptions",
"organizations_url": "https://api.github.com/users/kuri54/orgs",
"repos_url": "https://api.github.com/users/kuri54/repos",
"events_url": "https://api.github.com/users/kuri54/events{/privacy}",
"received_events_url": "https://api.github.com/users/kuri54/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"What type of dataset are you using in this script? `torch.utils.data.Dataset` or `datasets.Dataset`? Please share the `datasets` package version if it's the latter. Otherwise, it's better to move this issue to the `accelerate` repo.",
"Very sorry, I thought I had a repo in `accelerate!`\r\nI will close this issue and repo the issue in the appropriate place."
] | 2023-09-14T05:30:30
| 2023-09-14T23:54:42
| 2023-09-14T23:54:42
|
NONE
| null |
### Describe the bug
I am trying to fine-tune CLIP with my code.
When I run it on multiple GPUs using accelerate, I encounter the following phenomenon.
- The validation dataloader gets stuck in the 2nd epoch, but only on multi-GPU.
Specifically, when the `for inputs in valid_loader:` loop finishes, execution does not proceed to the next step. The train_loader loop completes, and both train and valid work correctly in the first epoch.
The accelerate command at that time is as follows.
`accelerate launch --multi_gpu --num_processes=2 {script_name.py} {--arg1} {--arg2} ...`
- This does not happen when a single GPU is used.
`CUDA_VISIBLE_DEVICES="0" accelerate launch {script_name.py} --arg1 --arg2 ...`
- Setting `num_workers=0` in the dataloader did not change the result (a minimal sketch of the loop structure follows below).
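For reference, a minimal sketch of the loop structure described above, under the assumption that the script follows the usual accelerate pattern (the tiny model and random data are stand-ins, not the actual fine-tuning code):
```python
import torch
from accelerate import Accelerator

accelerator = Accelerator()

model = torch.nn.Linear(8, 2)  # stand-in for CLIP
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
train_loader = torch.utils.data.DataLoader(torch.randn(64, 8), batch_size=8)
valid_loader = torch.utils.data.DataLoader(torch.randn(64, 8), batch_size=8)

model, optimizer, train_loader, valid_loader = accelerator.prepare(
    model, optimizer, train_loader, valid_loader
)

for epoch in range(2):
    model.train()
    for inputs in train_loader:  # completes in every epoch
        loss = model(inputs).sum()
        accelerator.backward(loss)
        optimizer.step()
        optimizer.zero_grad()
    model.eval()
    with torch.no_grad():
        for inputs in valid_loader:  # reported to hang after finishing in the 2nd epoch on multi-GPU
            model(inputs)
```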
### Steps to reproduce the bug
1. The code for fine-tuning the regular CLIP was updated for accelerate.
2. Run the code with the accelerate command `accelerate launch --multi_gpu --num_processes=2 {script_name.py} {--arg1} {--arg2} ...` and the above problem occurs.
3. With `CUDA_VISIBLE_DEVICES="0" accelerate launch {script_name.py} --arg1 --arg2 ...`, it works fine.
### Expected behavior
It should end normally, as when run on a single GPU.
### Environment info
Since `datasets-cli env` did not work, the environment is described below.
- OS: Ubuntu 22.04 with Docker
- Docker: 24.0.5, build ced0996
- Python: 3.10.12
- torch==2.0.1
- accelerate==0.21.0
- transformers==4.33.1
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/6240/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/6240/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/6239
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6239/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6239/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6239/events
|
https://github.com/huggingface/datasets/issues/6239
| 1,895,349,382
|
I_kwDODunzps5w-LyG
| 6,239
|
Load local audio data doesn't work
|
{
"login": "abodacs",
"id": 554032,
"node_id": "MDQ6VXNlcjU1NDAzMg==",
"avatar_url": "https://avatars.githubusercontent.com/u/554032?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abodacs",
"html_url": "https://github.com/abodacs",
"followers_url": "https://api.github.com/users/abodacs/followers",
"following_url": "https://api.github.com/users/abodacs/following{/other_user}",
"gists_url": "https://api.github.com/users/abodacs/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abodacs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abodacs/subscriptions",
"organizations_url": "https://api.github.com/users/abodacs/orgs",
"repos_url": "https://api.github.com/users/abodacs/repos",
"events_url": "https://api.github.com/users/abodacs/events{/privacy}",
"received_events_url": "https://api.github.com/users/abodacs/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"I think this is the same issue as https://github.com/huggingface/datasets/issues/4776. Maybe installing `ffmpeg` can fix it:\r\n```python\r\nadd-apt-repository -y ppa:savoury1/ffmpeg4\r\napt-get -qq install -y ffmpeg\r\n```\r\n\r\nHowever, the best solution is to use a newer version of `datasets`. In the recent releases, we've replaced `torchaudio` with `soundfile`, which is easier to install and faster.",
"@mariosasko \r\nThanks for your help"
] | 2023-09-13T22:30:01
| 2023-09-15T14:32:10
| 2023-09-15T14:32:10
|
NONE
| null |
### Describe the bug
I get a RuntimeError from the following code:
```python
audio_dataset = Dataset.from_dict({"audio": ["/kaggle/input/bengaliai-speech/train_mp3s/000005f3362c.mp3"]}).cast_column("audio", Audio())
audio_dataset[0]
```
### Traceback
<details>
```python
RuntimeError Traceback (most recent call last)
Cell In[33], line 1
----> 1 train_dataset[0]
File /opt/conda/lib/python3.10/site-packages/datasets/arrow_dataset.py:1764, in Dataset.__getitem__(self, key)
1762 def __getitem__(self, key): # noqa: F811
1763 """Can be used to index columns (by string names) or rows (by integer index or iterable of indices or bools)."""
-> 1764 return self._getitem(
1765 key,
1766 )
File /opt/conda/lib/python3.10/site-packages/datasets/arrow_dataset.py:1749, in Dataset._getitem(self, key, decoded, **kwargs)
1747 formatter = get_formatter(format_type, features=self.features, decoded=decoded, **format_kwargs)
1748 pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None)
-> 1749 formatted_output = format_table(
1750 pa_subtable, key, formatter=formatter, format_columns=format_columns, output_all_columns=output_all_columns
1751 )
1752 return formatted_output
File /opt/conda/lib/python3.10/site-packages/datasets/formatting/formatting.py:532, in format_table(table, key, formatter, format_columns, output_all_columns)
530 python_formatter = PythonFormatter(features=None)
531 if format_columns is None:
--> 532 return formatter(pa_table, query_type=query_type)
533 elif query_type == "column":
534 if key in format_columns:
File /opt/conda/lib/python3.10/site-packages/datasets/formatting/formatting.py:281, in Formatter.__call__(self, pa_table, query_type)
279 def __call__(self, pa_table: pa.Table, query_type: str) -> Union[RowFormat, ColumnFormat, BatchFormat]:
280 if query_type == "row":
--> 281 return self.format_row(pa_table)
282 elif query_type == "column":
283 return self.format_column(pa_table)
File /opt/conda/lib/python3.10/site-packages/datasets/formatting/formatting.py:312, in PythonFormatter.format_row(self, pa_table)
310 row = self.python_arrow_extractor().extract_row(pa_table)
311 if self.decoded:
--> 312 row = self.python_features_decoder.decode_row(row)
313 return row
File /opt/conda/lib/python3.10/site-packages/datasets/formatting/formatting.py:221, in PythonFeaturesDecoder.decode_row(self, row)
220 def decode_row(self, row: dict) -> dict:
--> 221 return self.features.decode_example(row) if self.features else row
File /opt/conda/lib/python3.10/site-packages/datasets/features/features.py:1386, in Features.decode_example(self, example)
1376 def decode_example(self, example: dict):
1377 """Decode example with custom feature decoding.
1378
1379 Args:
(...)
1383 :obj:`dict[str, Any]`
1384 """
-> 1386 return {
1387 column_name: decode_nested_example(feature, value)
1388 if self._column_requires_decoding[column_name]
1389 else value
1390 for column_name, (feature, value) in zip_dict(
1391 {key: value for key, value in self.items() if key in example}, example
1392 )
1393 }
File /opt/conda/lib/python3.10/site-packages/datasets/features/features.py:1387, in <dictcomp>(.0)
1376 def decode_example(self, example: dict):
1377 """Decode example with custom feature decoding.
1378
1379 Args:
(...)
1383 :obj:`dict[str, Any]`
1384 """
1386 return {
-> 1387 column_name: decode_nested_example(feature, value)
1388 if self._column_requires_decoding[column_name]
1389 else value
1390 for column_name, (feature, value) in zip_dict(
1391 {key: value for key, value in self.items() if key in example}, example
1392 )
1393 }
File /opt/conda/lib/python3.10/site-packages/datasets/features/features.py:1087, in decode_nested_example(schema, obj)
1085 # Object with special decoding:
1086 elif isinstance(schema, (Audio, Image)):
-> 1087 return schema.decode_example(obj) if obj is not None else None
1088 return obj
File /opt/conda/lib/python3.10/site-packages/datasets/features/audio.py:103, in Audio.decode_example(self, value)
101 raise ValueError(f"An audio sample should have one of 'path' or 'bytes' but both are None in {value}.")
102 elif path is not None and path.endswith("mp3"):
--> 103 array, sampling_rate = self._decode_mp3(file if file else path)
104 elif path is not None and path.endswith("opus"):
105 if file:
File /opt/conda/lib/python3.10/site-packages/datasets/features/audio.py:241, in Audio._decode_mp3(self, path_or_file)
238 except RuntimeError as err:
239 raise ImportError("To support decoding 'mp3' audio files, please install 'sox'.") from err
--> 241 array, sampling_rate = torchaudio.load(path_or_file, format="mp3")
242 if self.sampling_rate and self.sampling_rate != sampling_rate:
243 if not hasattr(self, "_resampler") or self._resampler.orig_freq != sampling_rate:
File /opt/conda/lib/python3.10/site-packages/torchaudio/backend/sox_io_backend.py:256, in load(filepath, frame_offset, num_frames, normalize, channels_first, format)
254 if ret is not None:
255 return ret
--> 256 return _fallback_load(filepath, frame_offset, num_frames, normalize, channels_first, format)
File /opt/conda/lib/python3.10/site-packages/torchaudio/backend/sox_io_backend.py:30, in _fail_load(filepath, frame_offset, num_frames, normalize, channels_first, format)
22 def _fail_load(
23 filepath: str,
24 frame_offset: int = 0,
(...)
28 format: Optional[str] = None,
29 ) -> Tuple[torch.Tensor, int]:
---> 30 raise RuntimeError("Failed to load audio from {}".format(filepath))
RuntimeError: Failed to load audio from /kaggle/input/bengaliai-speech/train_mp3s/000005f3362c.mp3
```
</details>
### Steps to reproduce the bug
1. Create a custom dataset from local mp3 files.
2. Try to read the first audio item.
### Expected behavior
Expected output
```python
audio_dataset[0]["audio"]
{'array': array([ 0. , 0.00024414, -0.00024414, ..., -0.00024414,
0. , 0. ], dtype=float32),
'path': 'path/to/audio_1',
'sampling_rate': 16000}
```
### Environment info
N/A
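As suggested in the comments above, upgrading `datasets` avoids the `torchaudio`/sox mp3 path. For reference, a minimal sketch of decoding the mp3 directly with `soundfile` (assuming `soundfile>=0.12`, whose bundled libsndfile can decode mp3):
```python
import soundfile as sf

# decodes the mp3 into a numpy array plus its sampling rate
array, sampling_rate = sf.read("/kaggle/input/bengaliai-speech/train_mp3s/000005f3362c.mp3")
print(array.shape, sampling_rate)
```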
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/6239/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/6239/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/6238
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6238/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6238/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6238/events
|
https://github.com/huggingface/datasets/issues/6238
| 1,895,207,828
|
I_kwDODunzps5w9pOU
| 6,238
|
`dataset.filter` ALWAYS removes the first item from the dataset when using batched=True
|
{
"login": "Taytay",
"id": 1330693,
"node_id": "MDQ6VXNlcjEzMzA2OTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/1330693?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Taytay",
"html_url": "https://github.com/Taytay",
"followers_url": "https://api.github.com/users/Taytay/followers",
"following_url": "https://api.github.com/users/Taytay/following{/other_user}",
"gists_url": "https://api.github.com/users/Taytay/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Taytay/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Taytay/subscriptions",
"organizations_url": "https://api.github.com/users/Taytay/orgs",
"repos_url": "https://api.github.com/users/Taytay/repos",
"events_url": "https://api.github.com/users/Taytay/events{/privacy}",
"received_events_url": "https://api.github.com/users/Taytay/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"`filter` treats the function's output as a (selection) mask - `True` keeps the sample, and `False` drops it. In your case, `bool(0)` evaluates to `False`, so dropping the first sample is the correct behavior.",
"Oh gosh! 🤦 I totally misunderstood the API! My apologies!"
] | 2023-09-13T20:20:37
| 2023-09-17T07:05:07
| 2023-09-17T07:05:07
|
NONE
| null |
### Describe the bug
If you pass `batched=True` when calling `filter`, the first item is _always_ filtered out, regardless of the filter condition.
### Steps to reproduce the bug
Here's a minimal example:
```python
def filter_batch_always_true(batch, indices):
print("First index being passed into this filter function: ", indices[0])
return indices # Keep all indices
data = {"value": list(range(10))}
dataset = Dataset.from_dict(data)
filtered_dataset = dataset.filter(filter_batch_always_true, with_indices=True, batched=True)
print("Length of original dataset: ", len(dataset))
print("Length of filtered_dataset: ", len(filtered_dataset))
print("Is equal to original? ", len(filtered_dataset) == len(dataset))
print("First item of filtered dataset: ", filtered_dataset[0])
print("Last item of filtered dataset: ", filtered_dataset[-1])
```
prints:
```
First index being passed into this filter function: 0
Length of original dataset: 10
Length of filtered_dataset: 9
Is equal to original? False
First item of filtered dataset: {'value': 1}
Last item of filtered dataset: {'value': 9}
```
### Expected behavior
Filter should respect the filter condition.
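For reference (per the maintainer's comment above, `filter` interprets the function's return value as a boolean keep-mask), a sketch of a batched filter that actually keeps every row:
```python
from datasets import Dataset

def keep_everything(batch, indices):
    # return a boolean mask, not the indices themselves
    return [True] * len(indices)

dataset = Dataset.from_dict({"value": list(range(10))})
filtered = dataset.filter(keep_everything, with_indices=True, batched=True)
assert len(filtered) == len(dataset)
```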
### Environment info
- `datasets` version: 2.14.4
- Platform: macOS-13.5-arm64-arm-64bit
- Python version: 3.9.18
- Huggingface_hub version: 0.17.1
- PyArrow version: 10.0.1
- Pandas version: 2.0.2
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/6238/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/6238/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/6237
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6237/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6237/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6237/events
|
https://github.com/huggingface/datasets/issues/6237
| 1,893,822,321
|
I_kwDODunzps5w4W9x
| 6,237
|
Tokenization with multiple workers is too slow
|
{
"login": "macabdul9",
"id": 25720695,
"node_id": "MDQ6VXNlcjI1NzIwNjk1",
"avatar_url": "https://avatars.githubusercontent.com/u/25720695?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/macabdul9",
"html_url": "https://github.com/macabdul9",
"followers_url": "https://api.github.com/users/macabdul9/followers",
"following_url": "https://api.github.com/users/macabdul9/following{/other_user}",
"gists_url": "https://api.github.com/users/macabdul9/gists{/gist_id}",
"starred_url": "https://api.github.com/users/macabdul9/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/macabdul9/subscriptions",
"organizations_url": "https://api.github.com/users/macabdul9/orgs",
"repos_url": "https://api.github.com/users/macabdul9/repos",
"events_url": "https://api.github.com/users/macabdul9/events{/privacy}",
"received_events_url": "https://api.github.com/users/macabdul9/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"[This](https://huggingface.co/docs/datasets/nlp_process#map) is the most performant way to tokenize a dataset (`batched=True, num_proc=None, return_tensors=\"np\"`) \r\n\r\nIf`tokenizer.is_fast` returns `True`, `num_proc` must be `None/1` to benefit from the fast tokenizers' parallelism (the fast tokenizers are implemented in Rust, and Rust multi-threading doesn't work well with Python multi-processing)"
] | 2023-09-13T06:18:34
| 2023-09-19T21:54:58
| 2023-09-19T21:54:58
|
NONE
| null |
I am trying to tokenize a few million documents with multiple workers but the tokenization process is taking forever.
Code snippet:
```python
raw_datasets.map(
encode_function,
batched=False,
num_proc=args.preprocessing_num_workers,
load_from_cache_file=not args.overwrite_cache,
remove_columns=[name for name in raw_datasets["train"].column_names if name not in ["input_ids", "labels", "attention_mask"]],
desc="Tokenizing data",
)
```
Details:
```
transformers==4.28.0.dev0
datasets==4.28.0.dev0
preprocessing_num_workers==48
```
tokenizer == decapoda-research/llama-7b-hf
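Following the maintainer's advice above, a sketch of the fast-path tokenization (batched, single process, so the Rust tokenizer's own multi-threading is used; the `text` column name and `max_length` are assumptions, adjust to the actual data):
```python
# assumes `tokenizer` is a fast tokenizer (tokenizer.is_fast == True)
def encode_function(batch):
    return tokenizer(batch["text"], truncation=True, max_length=2048)

tokenized = raw_datasets.map(
    encode_function,
    batched=True,   # pass whole batches to the Rust tokenizer
    num_proc=None,  # single process, so the fast tokenizer's threads aren't blocked
    remove_columns=raw_datasets["train"].column_names,
    desc="Tokenizing data",
)
```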
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/6237/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/6237/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/6236
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6236/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6236/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6236/events
|
https://github.com/huggingface/datasets/issues/6236
| 1,893,648,480
|
I_kwDODunzps5w3shg
| 6,236
|
Support buffer shuffle for to_tf_dataset
|
{
"login": "EthanRock",
"id": 7635551,
"node_id": "MDQ6VXNlcjc2MzU1NTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/7635551?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/EthanRock",
"html_url": "https://github.com/EthanRock",
"followers_url": "https://api.github.com/users/EthanRock/followers",
"following_url": "https://api.github.com/users/EthanRock/following{/other_user}",
"gists_url": "https://api.github.com/users/EthanRock/gists{/gist_id}",
"starred_url": "https://api.github.com/users/EthanRock/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/EthanRock/subscriptions",
"organizations_url": "https://api.github.com/users/EthanRock/orgs",
"repos_url": "https://api.github.com/users/EthanRock/repos",
"events_url": "https://api.github.com/users/EthanRock/events{/privacy}",
"received_events_url": "https://api.github.com/users/EthanRock/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null |
[
"cc @Rocketknight1 ",
"Hey! You can implement this yourself, just:\r\n\r\n1) Create the dataset with `to_tf_dataset()` with `shuffle=False`\r\n2) Add an `unbatch()` at the end (or use batch_size=1)\r\n3) Add a `shuffle()` to the resulting dataset with your desired buffer size\r\n4) Add a `batch()` at the end again to re-batch your dataset.\r\n\r\nNote that the way we construct datasets in `to_tf_dataset()`, we don't actually shuffle the entire dataset in-memory, using `tf.data.Dataset.shuffle()`! Instead, we shuffle an index array and then load from the dataset with that. This means that shuffling with `tf.data.Dataset.shuffle()` will probably be slower and use more memory than our approach - I don't think adding the option for smaller shuffle buffers will actually save you memory on this!",
"Thanks for your reply! @Rocketknight1 \r\n\"We don't actually shuffle the entire dataset in-memory, using tf.data.Dataset.shuffle()! Instead, we shuffle an index array and then load from the dataset with that.\"\r\nIn such case, there will be random access to dataset data during shuffling. When the dataset is large, the performance can be X10 times slow. I have tried many ways with to_tf_dataset() trying to achieve comparable performance with tf.data.Dataset().shuffle(buffer_size).batch(). But the performance with to_tf_dataset() is still slow. \r\n"
] | 2023-09-13T03:19:44
| 2023-09-18T01:11:21
| null |
NONE
| null |
### Feature request
I'm using `to_tf_dataset` to convert a large dataset to a `tf.data.Dataset` and Keras `fit` to train a model.
Currently, `to_tf_dataset` only supports a full-size shuffle, which can be very slow on large datasets.
`tf.data.Dataset` supports buffered shuffling out of the box:
`shuffle(buffer_size, seed=None, reshuffle_each_iteration=None, name=None)`
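A sketch of the buffered-shuffle workaround described in the comments above, assuming `ds` is a `datasets.Dataset` with `input_ids` and `label` columns (the column names and buffer/batch sizes are placeholders):
```python
# step 1: build the tf.data.Dataset without the full-size shuffle
tf_ds = ds.to_tf_dataset(
    columns=["input_ids"],
    label_cols=["label"],
    shuffle=False,
    batch_size=1,
)
# steps 2-4: unbatch, buffered shuffle, re-batch for Keras fit
tf_ds = (
    tf_ds.unbatch()
         .shuffle(buffer_size=10_000)
         .batch(32)
)
```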
### Motivation
I'm very frustrated to find that loading a large dataset with shuffling is very slow. It seems impossible to shuffle a big dataset before training with Keras.
### Your contribution
NA
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/6236/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/6236/timeline
| null | null | null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/6235
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6235/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6235/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6235/events
|
https://github.com/huggingface/datasets/issues/6235
| 1,893,337,083
|
I_kwDODunzps5w2gf7
| 6,235
|
Support multiprocessing for download/extract nestedly
|
{
"login": "hgt312",
"id": 22725729,
"node_id": "MDQ6VXNlcjIyNzI1NzI5",
"avatar_url": "https://avatars.githubusercontent.com/u/22725729?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hgt312",
"html_url": "https://github.com/hgt312",
"followers_url": "https://api.github.com/users/hgt312/followers",
"following_url": "https://api.github.com/users/hgt312/following{/other_user}",
"gists_url": "https://api.github.com/users/hgt312/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hgt312/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hgt312/subscriptions",
"organizations_url": "https://api.github.com/users/hgt312/orgs",
"repos_url": "https://api.github.com/users/hgt312/repos",
"events_url": "https://api.github.com/users/hgt312/events{/privacy}",
"received_events_url": "https://api.github.com/users/hgt312/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null |
[] | 2023-09-12T21:51:08
| 2023-09-12T21:51:08
| null |
NONE
| null |
### Feature request
The current multiprocessing for download/extract is not applied nestedly. For example, when processing SlimPajama, there are only 3 processes (one each for train/test/val), even though there are many files inside these 3 folders:
```
Downloading data files #0: 0%| | 0/1 [00:00<?, ?obj/s]
Downloading data files #1: 0%| | 0/1 [00:00<?, ?obj/s]
Downloading data files #2: 0%| | 0/1 [00:00<?, ?obj/s]
Extracting data files #0: 0%| | 0/1 [00:00<?, ?obj/s]
Extracting data files #1: 0%| | 0/1 [00:00<?, ?obj/s]
Extracting data files #2: 0%| | 0/1 [00:00<?, ?obj/s]
```
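For context, a sketch of how this parallelism is currently requested (assuming `datasets>=2.5`, where `load_dataset` accepts `num_proc`; the SlimPajama repo id is used as an example):
```python
from datasets import load_dataset

# num_proc parallelizes over the top-level inputs (train/test/val here),
# not over the many files nested inside those folders
ds = load_dataset("cerebras/SlimPajama-627B", num_proc=3)
```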
### Motivation
Speed up dataset loading.
### Your contribution
I can help test the feature
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/6235/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/6235/timeline
| null | null | null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/6233
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6233/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6233/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6233/events
|
https://github.com/huggingface/datasets/pull/6233
| 1,891,804,286
|
PR_kwDODunzps5aF3kd
| 6,233
|
Update README.md
|
{
"login": "NinoRisteski",
"id": 95188570,
"node_id": "U_kgDOBax2Wg",
"avatar_url": "https://avatars.githubusercontent.com/u/95188570?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NinoRisteski",
"html_url": "https://github.com/NinoRisteski",
"followers_url": "https://api.github.com/users/NinoRisteski/followers",
"following_url": "https://api.github.com/users/NinoRisteski/following{/other_user}",
"gists_url": "https://api.github.com/users/NinoRisteski/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NinoRisteski/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NinoRisteski/subscriptions",
"organizations_url": "https://api.github.com/users/NinoRisteski/orgs",
"repos_url": "https://api.github.com/users/NinoRisteski/repos",
"events_url": "https://api.github.com/users/NinoRisteski/events{/privacy}",
"received_events_url": "https://api.github.com/users/NinoRisteski/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008370 / 0.011353 (-0.002983) | 0.004674 / 0.011008 (-0.006334) | 0.103912 / 0.038508 (0.065404) | 0.101668 / 0.023109 (0.078559) | 0.417945 / 0.275898 (0.142047) | 0.454805 / 0.323480 (0.131325) | 0.004763 / 0.007986 (-0.003223) | 0.003934 / 0.004328 (-0.000394) | 0.078446 / 0.004250 (0.074196) | 0.068383 / 0.037052 (0.031331) | 0.415100 / 0.258489 (0.156611) | 0.475272 / 0.293841 (0.181431) | 0.036884 / 0.128546 (-0.091662) | 0.010097 / 0.075646 (-0.065549) | 0.354962 / 0.419271 (-0.064309) | 0.062688 / 0.043533 (0.019155) | 0.420643 / 0.255139 (0.165504) | 0.446504 / 0.283200 (0.163304) | 0.029075 / 0.141683 (-0.112608) | 1.791517 / 1.452155 (0.339363) | 1.859820 / 1.492716 (0.367104) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.246929 / 0.018006 (0.228923) | 0.519593 / 0.000490 (0.519103) | 0.006848 / 0.000200 (0.006648) | 0.000168 / 0.000054 (0.000114) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.035179 / 0.037411 (-0.002232) | 0.115582 / 0.014526 (0.101057) | 0.128235 / 0.176557 (-0.048321) | 0.187123 / 0.737135 (-0.550012) | 0.120862 / 0.296338 (-0.175477) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.463406 / 0.215209 (0.248197) | 4.615517 / 2.077655 (2.537863) | 2.250513 / 1.504120 (0.746393) | 2.061226 / 1.541195 (0.520032) | 2.189938 / 1.468490 
(0.721448) | 0.582984 / 4.584777 (-4.001793) | 4.299464 / 3.745712 (0.553751) | 4.037274 / 5.269862 (-1.232588) | 2.608967 / 4.565676 (-1.956710) | 0.068944 / 0.424275 (-0.355331) | 0.009501 / 0.007607 (0.001894) | 0.567436 / 0.226044 (0.341392) | 5.662738 / 2.268929 (3.393809) | 2.849094 / 55.444624 (-52.595530) | 2.461013 / 6.876477 (-4.415464) | 2.663245 / 2.142072 (0.521172) | 0.704528 / 4.805227 (-4.100699) | 0.163583 / 6.500664 (-6.337081) | 0.075719 / 0.075469 (0.000250) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.604743 / 1.841788 (-0.237044) | 24.512054 / 8.074308 (16.437746) | 17.870939 / 10.191392 (7.679547) | 0.199188 / 0.680424 (-0.481236) | 0.023820 / 0.534201 (-0.510381) | 0.487520 / 0.579283 (-0.091763) | 0.512543 / 0.434364 (0.078179) | 0.575138 / 0.540337 (0.034801) | 0.759863 / 1.386936 (-0.627073) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010516 / 0.011353 (-0.000837) | 0.004779 / 0.011008 (-0.006229) | 0.078482 / 0.038508 (0.039974) | 0.108533 / 0.023109 (0.085424) | 0.498692 / 0.275898 (0.222794) | 0.534698 / 0.323480 (0.211218) | 0.007624 / 0.007986 (-0.000362) | 0.003938 / 0.004328 (-0.000391) | 0.077317 / 0.004250 (0.073067) | 0.078056 / 0.037052 (0.041004) | 0.493648 / 0.258489 (0.235159) | 0.540891 / 0.293841 (0.247050) | 0.040377 / 0.128546 (-0.088169) | 0.010155 / 0.075646 (-0.065491) | 0.084384 / 0.419271 (-0.334888) | 0.061419 / 0.043533 (0.017886) | 0.494474 / 0.255139 (0.239335) | 0.524656 / 0.283200 (0.241456) | 0.029052 / 0.141683 (-0.112631) | 1.794584 / 1.452155 (0.342429) | 1.939987 / 1.492716 (0.447270) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.377404 / 0.018006 (0.359398) | 0.516562 / 0.000490 (0.516072) | 0.109555 / 0.000200 (0.109356) | 0.001126 / 0.000054 (0.001071) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.039793 / 0.037411 (0.002382) | 0.123001 / 0.014526 (0.108475) | 0.127536 / 0.176557 (-0.049021) | 0.191681 / 0.737135 (-0.545455) | 0.128590 / 0.296338 (-0.167748) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.513689 / 0.215209 (0.298480) | 5.135114 / 2.077655 (3.057459) | 2.797885 / 1.504120 (1.293765) | 2.715332 / 1.541195 (1.174137) | 2.746437 / 1.468490 (1.277947) | 0.596480 / 4.584777 (-3.988297) | 4.382013 / 3.745712 (0.636301) | 3.965956 / 5.269862 (-1.303906) | 2.545206 / 4.565676 (-2.020471) | 0.069620 / 0.424275 (-0.354655) | 0.009321 / 0.007607 (0.001714) | 0.612424 / 0.226044 (0.386379) | 6.107037 / 2.268929 (3.838109) | 3.447246 / 55.444624 (-51.997379) | 3.073262 / 6.876477 (-3.803215) | 3.280185 / 2.142072 (1.138113) | 0.704776 / 4.805227 (-4.100451) | 0.160488 / 6.500664 (-6.340176) | 0.075730 / 0.075469 (0.000261) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.697035 / 1.841788 (-0.144753) | 24.766118 / 8.074308 (16.691809) | 18.476699 / 10.191392 (8.285307) | 0.176594 / 0.680424 (-0.503830) | 0.024249 / 0.534201 (-0.509952) | 0.478743 / 0.579283 (-0.100541) | 0.518774 / 0.434364 (0.084410) | 0.581498 / 0.540337 (0.041161) | 0.797784 / 1.386936 (-0.589152) |\n\n</details>\n</details>\n\n\n"
] | 2023-09-12T06:53:06
| 2023-09-13T18:20:50
| 2023-09-13T18:10:04
|
CONTRIBUTOR
| null |
Fixed a typo.
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/6233/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/6233/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6233",
"html_url": "https://github.com/huggingface/datasets/pull/6233",
"diff_url": "https://github.com/huggingface/datasets/pull/6233.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6233.patch",
"merged_at": "2023-09-13T18:10:04"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/6232
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6232/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6232/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6232/events
|
https://github.com/huggingface/datasets/pull/6232
| 1,891,109,762
|
PR_kwDODunzps5aDhhK
| 6,232
|
Improve error message for missing function parameters
|
{
"login": "suavemint",
"id": 4016832,
"node_id": "MDQ6VXNlcjQwMTY4MzI=",
"avatar_url": "https://avatars.githubusercontent.com/u/4016832?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/suavemint",
"html_url": "https://github.com/suavemint",
"followers_url": "https://api.github.com/users/suavemint/followers",
"following_url": "https://api.github.com/users/suavemint/following{/other_user}",
"gists_url": "https://api.github.com/users/suavemint/gists{/gist_id}",
"starred_url": "https://api.github.com/users/suavemint/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/suavemint/subscriptions",
"organizations_url": "https://api.github.com/users/suavemint/orgs",
"repos_url": "https://api.github.com/users/suavemint/repos",
"events_url": "https://api.github.com/users/suavemint/events{/privacy}",
"received_events_url": "https://api.github.com/users/suavemint/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"CI errors are unrelated",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006681 / 0.011353 (-0.004672) | 0.004132 / 0.011008 (-0.006876) | 0.085045 / 0.038508 (0.046536) | 0.077680 / 0.023109 (0.054571) | 0.382042 / 0.275898 (0.106144) | 0.412932 / 0.323480 (0.089452) | 0.005339 / 0.007986 (-0.002646) | 0.003408 / 0.004328 (-0.000921) | 0.065280 / 0.004250 (0.061030) | 0.055732 / 0.037052 (0.018680) | 0.400231 / 0.258489 (0.141742) | 0.432497 / 0.293841 (0.138656) | 0.031532 / 0.128546 (-0.097014) | 0.008721 / 0.075646 (-0.066925) | 0.289612 / 0.419271 (-0.129660) | 0.053089 / 0.043533 (0.009556) | 0.383300 / 0.255139 (0.128161) | 0.401204 / 0.283200 (0.118004) | 0.023582 / 0.141683 (-0.118100) | 1.493854 / 1.452155 (0.041699) | 1.583497 / 1.492716 (0.090781) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.239163 / 0.018006 (0.221157) | 0.469555 / 0.000490 (0.469065) | 0.008325 / 0.000200 (0.008125) | 0.000113 / 0.000054 (0.000059) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028975 / 0.037411 (-0.008436) | 0.084195 / 0.014526 (0.069669) | 0.189394 / 0.176557 (0.012837) | 0.158010 / 0.737135 (-0.579125) | 0.097502 / 0.296338 (-0.198837) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.383085 / 0.215209 (0.167876) | 3.827030 / 2.077655 (1.749375) | 1.872279 / 1.504120 (0.368159) | 1.705808 / 1.541195 (0.164613) | 1.833706 / 1.468490 
(0.365216) | 0.484744 / 4.584777 (-4.100033) | 3.658221 / 3.745712 (-0.087491) | 3.398462 / 5.269862 (-1.871399) | 2.064974 / 4.565676 (-2.500703) | 0.057740 / 0.424275 (-0.366535) | 0.007926 / 0.007607 (0.000319) | 0.465358 / 0.226044 (0.239314) | 4.652951 / 2.268929 (2.384022) | 2.328390 / 55.444624 (-53.116235) | 2.000606 / 6.876477 (-4.875870) | 2.268391 / 2.142072 (0.126318) | 0.586537 / 4.805227 (-4.218690) | 0.134749 / 6.500664 (-6.365915) | 0.061276 / 0.075469 (-0.014193) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.337913 / 1.841788 (-0.503875) | 20.232122 / 8.074308 (12.157814) | 14.478579 / 10.191392 (4.287187) | 0.167545 / 0.680424 (-0.512878) | 0.018745 / 0.534201 (-0.515456) | 0.401209 / 0.579283 (-0.178074) | 0.425748 / 0.434364 (-0.008616) | 0.462539 / 0.540337 (-0.077798) | 0.652446 / 1.386936 (-0.734490) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007159 / 0.011353 (-0.004194) | 0.004091 / 0.011008 (-0.006917) | 0.066202 / 0.038508 (0.027694) | 0.083096 / 0.023109 (0.059987) | 0.402160 / 0.275898 (0.126261) | 0.440565 / 0.323480 (0.117085) | 0.005757 / 0.007986 (-0.002228) | 0.003445 / 0.004328 (-0.000884) | 0.065498 / 0.004250 (0.061248) | 0.059787 / 0.037052 (0.022735) | 0.407017 / 0.258489 (0.148528) | 0.448270 / 0.293841 (0.154429) | 0.033606 / 0.128546 (-0.094941) | 0.008744 / 0.075646 (-0.066902) | 0.072902 / 0.419271 (-0.346369) | 0.050144 / 0.043533 (0.006611) | 0.401069 / 0.255139 (0.145930) | 0.426389 / 0.283200 (0.143189) | 0.023297 / 0.141683 (-0.118386) | 1.506152 / 1.452155 (0.053998) | 1.570211 / 1.492716 (0.077495) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.235759 / 0.018006 (0.217753) | 0.488410 / 0.000490 (0.487921) | 0.004587 / 0.000200 (0.004387) | 0.000115 / 0.000054 (0.000060) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034123 / 0.037411 (-0.003289) | 0.102163 / 0.014526 (0.087638) | 0.110892 / 0.176557 (-0.065664) | 0.166000 / 0.737135 (-0.571135) | 0.110845 / 0.296338 (-0.185494) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.431397 / 0.215209 (0.216188) | 4.291540 / 2.077655 (2.213885) | 2.298248 / 1.504120 (0.794128) | 2.134752 / 1.541195 (0.593557) | 2.207913 / 1.468490 (0.739423) | 0.490607 / 4.584777 (-4.094170) | 3.683078 / 3.745712 (-0.062635) | 3.314266 / 5.269862 (-1.955596) | 2.059488 / 4.565676 (-2.506188) | 0.057876 / 0.424275 (-0.366399) | 0.007696 / 0.007607 (0.000089) | 0.512186 / 0.226044 (0.286142) | 5.124071 / 2.268929 (2.855142) | 2.803913 / 55.444624 (-52.640711) | 2.428558 / 6.876477 (-4.447919) | 2.655207 / 2.142072 (0.513135) | 0.584589 / 4.805227 (-4.220638) | 0.133518 / 6.500664 (-6.367146) | 0.060729 / 0.075469 (-0.014740) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.352916 / 1.841788 (-0.488872) | 20.249632 / 8.074308 (12.175323) | 15.283079 / 10.191392 (5.091686) | 0.157601 / 0.680424 (-0.522823) | 0.019650 / 0.534201 (-0.514551) | 0.396398 / 0.579283 (-0.182885) | 0.430111 / 0.434364 (-0.004252) | 0.480627 / 0.540337 (-0.059710) | 0.642165 / 1.386936 (-0.744771) |\n\n</details>\n</details>\n\n\n"
] | 2023-09-11T19:11:58
| 2023-09-15T18:07:56
| 2023-09-15T17:59:02
|
CONTRIBUTOR
| null |
The error message in the fingerprint module was missing the f-string `f` prefix, so the error message raised from `fingerprint.py`, line 469, was literally "function {func} is missing parameters {fingerprint_names} in signature."
This has been fixed.
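A minimal illustration of the bug (hypothetical values; without the `f` prefix the placeholders are printed literally):
```python
func = "my_map_function"
fingerprint_names = ["new_fingerprint"]

# before: no f-prefix, so the braces are not interpolated
print("function {func} is missing parameters {fingerprint_names} in signature.")
# -> function {func} is missing parameters {fingerprint_names} in signature.

# after: the f-string interpolates the values
print(f"function {func} is missing parameters {fingerprint_names} in signature.")
# -> function my_map_function is missing parameters ['new_fingerprint'] in signature.
```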
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/6232/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/6232/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6232",
"html_url": "https://github.com/huggingface/datasets/pull/6232",
"diff_url": "https://github.com/huggingface/datasets/pull/6232.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6232.patch",
"merged_at": "2023-09-15T17:59:02"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/6231
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6231/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6231/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6231/events
|
https://github.com/huggingface/datasets/pull/6231
| 1,890,863,249
|
PR_kwDODunzps5aCr8_
| 6,231
|
Overwrite legacy default config name in `dataset_infos.json` in packaged datasets
|
{
"login": "polinaeterna",
"id": 16348744,
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polinaeterna",
"html_url": "https://github.com/polinaeterna",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"type": "User",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6231). All of your documentation changes will be reflected on that endpoint.",
"realized that this pr is still not merged, @lhoestq maybe you can take a look at it? ",
"I think https://github.com/huggingface/datasets/pull/6218 fixed the issue (a bit differently though)",
"ah actually nope, let me check",
"@lhoestq yeah the pr you're referencing doesn't fix the problem when two semantically analogous configs occur in datasets_info.json, i suggest to rewrite the legacy one if it exists during .push_to_hub",
"Only the old versions of `datasets` use the JSON file over the README and they can only load one config so the name doesn't really matter.\r\n\r\nThat's why I chose to load the info from the JSON no matter the name (no check to see if it's \"username--dataset_name\") in my previous PR.\r\n\r\nI think you can remove the old info without even checking the name. In this case maybe no need to update load.py ",
"(also minor: not checking the name makes it more robust to dataset renaming)",
"@lhoestq okay makes sense... so you think it's not a problem that in some cases we might end up with `dataset_infos.json` having two keys in it?",
"> @lhoestq okay makes sense... so you think it's not a problem that in some cases we might end up with dataset_infos.json having two keys in it?\r\n\r\nIdeally they should have only one config no ? Since old versions of `datasets` simply load the first config in the JSON.\r\nWe can overwrite it with the new default one (and no matter the name of the outdated config in the JSON)\r\n\r\n"
] | 2023-09-11T16:27:09
| 2023-09-26T11:19:36
| null |
CONTRIBUTOR
| null |
Currently, if we push data as the default config with `.push_to_hub` to a repo whose legacy `dataset_infos.json` file contains a legacy default config name like `{username}--{dataset_name}`, a new `"default"` key is added to `dataset_infos.json` alongside the legacy one. I think the legacy one should be dropped in this case.
Also, in `load.py` I suggest checking whether a legacy config name is indeed a legacy config name, because after this fix that may no longer be the case (this check was first introduced in https://github.com/huggingface/datasets/pull/6218)
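A sketch of the duplicated state described above (hypothetical names, config infos elided):
```python
# dataset_infos.json after pushing a "default" config to a repo that
# still contains a legacy entry
dataset_infos = {
    "username--dataset_name": {},  # legacy default config name -> should be dropped
    "default": {},                 # new default config written by push_to_hub
}
```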
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/6231/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/6231/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6231",
"html_url": "https://github.com/huggingface/datasets/pull/6231",
"diff_url": "https://github.com/huggingface/datasets/pull/6231.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6231.patch",
"merged_at": null
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/6230
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6230/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6230/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6230/events
|
https://github.com/huggingface/datasets/pull/6230
| 1,890,521,006
|
PR_kwDODunzps5aBh6L
| 6,230
|
Don't skip hidden files in `dl_manager.iter_files` when they are given as input
|
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005894 / 0.011353 (-0.005459) | 0.003621 / 0.011008 (-0.007387) | 0.080446 / 0.038508 (0.041938) | 0.056800 / 0.023109 (0.033691) | 0.326485 / 0.275898 (0.050587) | 0.376207 / 0.323480 (0.052727) | 0.004640 / 0.007986 (-0.003346) | 0.002795 / 0.004328 (-0.001533) | 0.062815 / 0.004250 (0.058565) | 0.045761 / 0.037052 (0.008709) | 0.341417 / 0.258489 (0.082928) | 0.373129 / 0.293841 (0.079288) | 0.027226 / 0.128546 (-0.101321) | 0.007873 / 0.075646 (-0.067774) | 0.261737 / 0.419271 (-0.157535) | 0.044648 / 0.043533 (0.001115) | 0.320195 / 0.255139 (0.065056) | 0.381892 / 0.283200 (0.098692) | 0.020431 / 0.141683 (-0.121252) | 1.405332 / 1.452155 (-0.046823) | 1.455592 / 1.492716 (-0.037125) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.191539 / 0.018006 (0.173533) | 0.423655 / 0.000490 (0.423165) | 0.002741 / 0.000200 (0.002541) | 0.000069 / 0.000054 (0.000014) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023952 / 0.037411 (-0.013459) | 0.073387 / 0.014526 (0.058861) | 0.083746 / 0.176557 (-0.092810) | 0.144977 / 0.737135 (-0.592159) | 0.083808 / 0.296338 (-0.212530) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.436228 / 0.215209 (0.221019) | 4.370510 / 2.077655 (2.292855) | 2.340426 / 1.504120 (0.836306) | 2.202215 / 1.541195 (0.661021) | 2.258528 / 1.468490 
(0.790037) | 0.503455 / 4.584777 (-4.081322) | 3.043695 / 3.745712 (-0.702017) | 2.784033 / 5.269862 (-2.485829) | 1.847956 / 4.565676 (-2.717721) | 0.057702 / 0.424275 (-0.366573) | 0.006703 / 0.007607 (-0.000904) | 0.510628 / 0.226044 (0.284583) | 5.101890 / 2.268929 (2.832961) | 2.816469 / 55.444624 (-52.628155) | 2.474220 / 6.876477 (-4.402257) | 2.617851 / 2.142072 (0.475779) | 0.593585 / 4.805227 (-4.211642) | 0.125895 / 6.500664 (-6.374769) | 0.062170 / 0.075469 (-0.013299) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.238792 / 1.841788 (-0.602996) | 18.096417 / 8.074308 (10.022108) | 13.548778 / 10.191392 (3.357386) | 0.144878 / 0.680424 (-0.535546) | 0.016644 / 0.534201 (-0.517557) | 0.334556 / 0.579283 (-0.244728) | 0.343680 / 0.434364 (-0.090684) | 0.383093 / 0.540337 (-0.157244) | 0.525075 / 1.386936 (-0.861861) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006125 / 0.011353 (-0.005228) | 0.003668 / 0.011008 (-0.007340) | 0.062650 / 0.038508 (0.024142) | 0.058882 / 0.023109 (0.035772) | 0.454643 / 0.275898 (0.178745) | 0.486659 / 0.323480 (0.163179) | 0.005558 / 0.007986 (-0.002427) | 0.002858 / 0.004328 (-0.001471) | 0.062603 / 0.004250 (0.058353) | 0.049701 / 0.037052 (0.012649) | 0.455903 / 0.258489 (0.197413) | 0.491544 / 0.293841 (0.197703) | 0.028581 / 0.128546 (-0.099965) | 0.008040 / 0.075646 (-0.067607) | 0.068314 / 0.419271 (-0.350957) | 0.040637 / 0.043533 (-0.002896) | 0.450288 / 0.255139 (0.195149) | 0.476330 / 0.283200 (0.193131) | 0.018989 / 0.141683 (-0.122693) | 1.455122 / 1.452155 (0.002967) | 1.496941 / 1.492716 (0.004225) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.227382 / 0.018006 (0.209376) | 0.432637 / 0.000490 (0.432147) | 0.002727 / 0.000200 (0.002527) | 0.000073 / 0.000054 (0.000019) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026125 / 0.037411 (-0.011286) | 0.081342 / 0.014526 (0.066817) | 0.091227 / 0.176557 (-0.085329) | 0.145175 / 0.737135 (-0.591960) | 0.091988 / 0.296338 (-0.204351) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.454293 / 0.215209 (0.239083) | 4.537912 / 2.077655 (2.460257) | 2.489146 / 1.504120 (0.985026) | 2.307166 / 1.541195 (0.765971) | 2.380866 / 1.468490 (0.912376) | 0.509015 / 4.584777 (-4.075762) | 3.111069 / 3.745712 (-0.634644) | 2.839181 / 5.269862 (-2.430681) | 1.874630 / 4.565676 (-2.691047) | 0.058540 / 0.424275 (-0.365735) | 0.006693 / 0.007607 (-0.000914) | 0.528408 / 0.226044 (0.302363) | 5.285802 / 2.268929 (3.016874) | 2.952090 / 55.444624 (-52.492534) | 2.591496 / 6.876477 (-4.284980) | 2.741080 / 2.142072 (0.599007) | 0.595610 / 4.805227 (-4.209617) | 0.124387 / 6.500664 (-6.376277) | 0.061032 / 0.075469 (-0.014437) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.365816 / 1.841788 (-0.475972) | 18.684534 / 8.074308 (10.610226) | 14.540438 / 10.191392 (4.349046) | 0.146793 / 0.680424 (-0.533631) | 0.018165 / 0.534201 (-0.516036) | 0.333794 / 0.579283 (-0.245489) | 0.345533 / 0.434364 (-0.088830) | 0.384453 / 0.540337 (-0.155885) | 0.529104 / 1.386936 (-0.857832) |\n\n</details>\n</details>\n\n\n",
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006121 / 0.011353 (-0.005232) | 0.003683 / 0.011008 (-0.007325) | 0.083329 / 0.038508 (0.044821) | 0.063350 / 0.023109 (0.040241) | 0.329959 / 0.275898 (0.054061) | 0.396111 / 0.323480 (0.072631) | 0.003554 / 0.007986 (-0.004432) | 0.002907 / 0.004328 (-0.001421) | 0.064152 / 0.004250 (0.059902) | 0.049182 / 0.037052 (0.012130) | 0.343862 / 0.258489 (0.085373) | 0.414568 / 0.293841 (0.120727) | 0.027157 / 0.128546 (-0.101389) | 0.007957 / 0.075646 (-0.067689) | 0.261868 / 0.419271 (-0.157404) | 0.044938 / 0.043533 (0.001405) | 0.318470 / 0.255139 (0.063331) | 0.393319 / 0.283200 (0.110119) | 0.022848 / 0.141683 (-0.118835) | 1.419916 / 1.452155 (-0.032238) | 1.508783 / 1.492716 (0.016067) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.200530 / 0.018006 (0.182523) | 0.433586 / 0.000490 (0.433097) | 0.002063 / 0.000200 (0.001863) | 0.000070 / 0.000054 (0.000016) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024803 / 0.037411 (-0.012609) | 0.075894 / 0.014526 (0.061368) | 0.086488 / 0.176557 (-0.090069) | 0.149058 / 0.737135 (-0.588077) | 0.087046 / 0.296338 (-0.209292) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.390771 / 0.215209 (0.175562) | 3.886178 / 2.077655 (1.808523) | 1.868626 / 1.504120 (0.364506) | 1.708532 / 1.541195 (0.167338) | 1.788491 / 1.468490 
(0.320001) | 0.505706 / 4.584777 (-4.079071) | 3.062094 / 3.745712 (-0.683618) | 2.898559 / 5.269862 (-2.371302) | 1.901225 / 4.565676 (-2.664452) | 0.058366 / 0.424275 (-0.365909) | 0.006851 / 0.007607 (-0.000756) | 0.465382 / 0.226044 (0.239337) | 4.650187 / 2.268929 (2.381258) | 2.316152 / 55.444624 (-53.128472) | 1.989597 / 6.876477 (-4.886879) | 2.169266 / 2.142072 (0.027194) | 0.593257 / 4.805227 (-4.211970) | 0.126440 / 6.500664 (-6.374224) | 0.062227 / 0.075469 (-0.013242) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.283591 / 1.841788 (-0.558197) | 18.384667 / 8.074308 (10.310358) | 14.079611 / 10.191392 (3.888219) | 0.150453 / 0.680424 (-0.529971) | 0.017100 / 0.534201 (-0.517101) | 0.330503 / 0.579283 (-0.248780) | 0.348134 / 0.434364 (-0.086230) | 0.385726 / 0.540337 (-0.154612) | 0.529147 / 1.386936 (-0.857789) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006168 / 0.011353 (-0.005185) | 0.003801 / 0.011008 (-0.007208) | 0.063168 / 0.038508 (0.024660) | 0.062331 / 0.023109 (0.039221) | 0.448321 / 0.275898 (0.172423) | 0.484416 / 0.323480 (0.160937) | 0.004827 / 0.007986 (-0.003159) | 0.002848 / 0.004328 (-0.001480) | 0.062736 / 0.004250 (0.058486) | 0.049128 / 0.037052 (0.012075) | 0.449276 / 0.258489 (0.190787) | 0.499035 / 0.293841 (0.205194) | 0.028577 / 0.128546 (-0.099969) | 0.008114 / 0.075646 (-0.067532) | 0.068297 / 0.419271 (-0.350974) | 0.040835 / 0.043533 (-0.002698) | 0.453556 / 0.255139 (0.198417) | 0.475420 / 0.283200 (0.192220) | 0.020292 / 0.141683 (-0.121390) | 1.472226 / 1.452155 (0.020071) | 1.523809 / 1.492716 (0.031093) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.230662 / 0.018006 (0.212655) | 0.439697 / 0.000490 (0.439207) | 0.009899 / 0.000200 (0.009699) | 0.000087 / 0.000054 (0.000033) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026418 / 0.037411 (-0.010993) | 0.082188 / 0.014526 (0.067662) | 0.091039 / 0.176557 (-0.085518) | 0.146646 / 0.737135 (-0.590489) | 0.091693 / 0.296338 (-0.204645) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.462086 / 0.215209 (0.246877) | 4.620925 / 2.077655 (2.543271) | 2.539234 / 1.504120 (1.035114) | 2.371178 / 1.541195 (0.829983) | 2.440538 / 1.468490 (0.972048) | 0.511047 / 4.584777 (-4.073730) | 3.082088 / 3.745712 (-0.663624) | 2.918162 / 5.269862 (-2.351700) | 1.899651 / 4.565676 (-2.666025) | 0.059003 / 0.424275 (-0.365272) | 0.006746 / 0.007607 (-0.000861) | 0.537863 / 0.226044 (0.311819) | 5.382355 / 2.268929 (3.113426) | 3.060091 / 55.444624 (-52.384534) | 2.754969 / 6.876477 (-4.121507) | 2.863156 / 2.142072 (0.721084) | 0.606888 / 4.805227 (-4.198339) | 0.127448 / 6.500664 (-6.373216) | 0.062975 / 0.075469 (-0.012494) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.336065 / 1.841788 (-0.505722) | 19.019902 / 8.074308 (10.945594) | 15.057979 / 10.191392 (4.866587) | 0.160646 / 0.680424 (-0.519778) | 0.018340 / 0.534201 (-0.515861) | 0.341664 / 0.579283 (-0.237619) | 0.356536 / 0.434364 (-0.077828) | 0.393974 / 0.540337 (-0.146363) | 0.546036 / 1.386936 (-0.840900) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007220 / 0.011353 (-0.004132) | 0.004537 / 0.011008 (-0.006471) | 0.087333 / 0.038508 (0.048825) | 0.095637 / 0.023109 (0.072528) | 0.323819 / 0.275898 (0.047921) | 0.358838 / 0.323480 (0.035358) | 0.005910 / 0.007986 (-0.002076) | 0.003781 / 0.004328 (-0.000548) | 0.064565 / 0.004250 (0.060315) | 0.062818 / 0.037052 (0.025766) | 0.322595 / 0.258489 (0.064106) | 0.371865 / 0.293841 (0.078024) | 0.031667 / 0.128546 (-0.096880) | 0.009068 / 0.075646 (-0.066579) | 0.290574 / 0.419271 (-0.128697) | 0.054618 / 0.043533 (0.011085) | 0.314708 / 0.255139 (0.059569) | 0.336647 / 0.283200 (0.053447) | 0.027070 / 0.141683 (-0.114613) | 1.500640 / 1.452155 (0.048485) | 1.586775 / 1.492716 (0.094059) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.294461 / 0.018006 (0.276455) | 0.580125 / 0.000490 (0.579635) | 0.008165 / 0.000200 (0.007965) | 0.000320 / 0.000054 (0.000266) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032352 / 0.037411 (-0.005059) | 0.092187 / 0.014526 (0.077661) | 0.104993 / 0.176557 (-0.071564) | 0.162738 / 0.737135 (-0.574397) | 0.103242 / 0.296338 (-0.193096) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.396732 / 0.215209 (0.181523) | 3.955049 / 2.077655 (1.877394) | 1.876762 / 1.504120 (0.372642) | 1.698477 / 1.541195 (0.157282) | 1.847086 / 1.468490 
(0.378596) | 0.488306 / 4.584777 (-4.096471) | 3.658922 / 3.745712 (-0.086790) | 3.559050 / 5.269862 (-1.710812) | 2.187363 / 4.565676 (-2.378313) | 0.059795 / 0.424275 (-0.364480) | 0.008966 / 0.007607 (0.001359) | 0.474212 / 0.226044 (0.248168) | 4.732540 / 2.268929 (2.463611) | 2.466370 / 55.444624 (-52.978254) | 2.112105 / 6.876477 (-4.764372) | 2.414624 / 2.142072 (0.272552) | 0.595447 / 4.805227 (-4.209780) | 0.136705 / 6.500664 (-6.363959) | 0.062267 / 0.075469 (-0.013202) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.266518 / 1.841788 (-0.575270) | 21.009975 / 8.074308 (12.935666) | 14.823960 / 10.191392 (4.632568) | 0.165630 / 0.680424 (-0.514793) | 0.018499 / 0.534201 (-0.515702) | 0.396720 / 0.579283 (-0.182563) | 0.424807 / 0.434364 (-0.009557) | 0.463326 / 0.540337 (-0.077011) | 0.653132 / 1.386936 (-0.733804) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007789 / 0.011353 (-0.003564) | 0.004720 / 0.011008 (-0.006288) | 0.066656 / 0.038508 (0.028148) | 0.094219 / 0.023109 (0.071109) | 0.414965 / 0.275898 (0.139067) | 0.454808 / 0.323480 (0.131328) | 0.006088 / 0.007986 (-0.001898) | 0.003980 / 0.004328 (-0.000349) | 0.066048 / 0.004250 (0.061797) | 0.065875 / 0.037052 (0.028823) | 0.419994 / 0.258489 (0.161505) | 0.462001 / 0.293841 (0.168160) | 0.033534 / 0.128546 (-0.095013) | 0.009010 / 0.075646 (-0.066636) | 0.072778 / 0.419271 (-0.346493) | 0.049834 / 0.043533 (0.006301) | 0.411003 / 0.255139 (0.155864) | 0.430918 / 0.283200 (0.147718) | 0.025664 / 0.141683 (-0.116019) | 1.526771 / 1.452155 (0.074616) | 1.634767 / 1.492716 (0.142051) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.271180 / 0.018006 (0.253174) | 0.576704 / 0.000490 (0.576214) | 0.004362 / 0.000200 (0.004162) | 0.000112 / 0.000054 (0.000058) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.035648 / 0.037411 (-0.001763) | 0.102407 / 0.014526 (0.087881) | 0.111613 / 0.176557 (-0.064944) | 0.166173 / 0.737135 (-0.570962) | 0.113371 / 0.296338 (-0.182967) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.436031 / 0.215209 (0.220822) | 4.347071 / 2.077655 (2.269416) | 2.366937 / 1.504120 (0.862817) | 2.216356 / 1.541195 (0.675161) | 2.335933 / 1.468490 (0.867443) | 0.490484 / 4.584777 (-4.094293) | 3.730656 / 3.745712 (-0.015056) | 3.497248 / 5.269862 (-1.772613) | 2.215729 / 4.565676 (-2.349947) | 0.057905 / 0.424275 (-0.366370) | 0.007983 / 0.007607 (0.000376) | 0.510413 / 0.226044 (0.284369) | 5.114502 / 2.268929 (2.845574) | 2.871599 / 55.444624 (-52.573026) | 2.537514 / 6.876477 (-4.338962) | 2.819135 / 2.142072 (0.677063) | 0.588397 / 4.805227 (-4.216830) | 0.134665 / 6.500664 (-6.365999) | 0.063349 / 0.075469 (-0.012120) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.352962 / 1.841788 (-0.488826) | 21.628664 / 8.074308 (13.554356) | 15.962105 / 10.191392 (5.770713) | 0.167781 / 0.680424 (-0.512643) | 0.020965 / 0.534201 (-0.513236) | 0.402809 / 0.579283 (-0.176474) | 0.435153 / 0.434364 (0.000789) | 0.481394 / 0.540337 (-0.058944) | 0.658068 / 1.386936 (-0.728868) |\n\n</details>\n</details>\n\n\n"
] | 2023-09-11T13:29:19 | 2023-09-13T18:21:28 | 2023-09-13T18:12:09 | CONTRIBUTOR | null |
Required for `load_dataset(<format>, data_files=["path/to/.hidden_file"])` to work as expected
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/6230/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/6230/timeline | null | null | false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6230",
"html_url": "https://github.com/huggingface/datasets/pull/6230",
"diff_url": "https://github.com/huggingface/datasets/pull/6230.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6230.patch",
"merged_at": "2023-09-13T18:12:09"
}
| true |