Dataset Viewer
The dataset viewer is not available for this subset.
Cannot get the split names for the config 'default' of the dataset.
Exception: SplitsNotFoundError
Message: The split names could not be parsed from the dataset config.
Traceback: Traceback (most recent call last):
  File "/src/services/worker/.venv/lib/python3.9/site-packages/mongoengine/queryset/base.py", line 269, in get
    result = next(queryset)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/mongoengine/queryset/base.py", line 1608, in __next__
    raw_doc = next(self._cursor)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/pymongo/cursor.py", line 1267, in next
    raise StopIteration
StopIteration
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/src/libs/libcommon/src/libcommon/simple_cache.py", line 516, in get_response_with_details
    CachedResponseDocument.objects(kind=kind, dataset=dataset, config=config, split=split)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/mongoengine/queryset/base.py", line 272, in get
    raise queryset._document.DoesNotExist(msg)
libcommon.simple_cache.DoesNotExist: CachedResponseDocument matching query does not exist.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
  File "/src/services/worker/src/worker/job_runners/config/split_names.py", line 159, in compute
    compute_split_names_from_info_response(
  File "/src/services/worker/src/worker/job_runners/config/split_names.py", line 131, in compute_split_names_from_info_response
    config_info_response = get_previous_step_or_raise(kind="config-info", dataset=dataset, config=config)
  File "/src/libs/libcommon/src/libcommon/simple_cache.py", line 565, in get_previous_step_or_raise
    response = get_response_with_details(kind=kind, dataset=dataset, config=config, split=split)
  File "/src/libs/libcommon/src/libcommon/simple_cache.py", line 529, in get_response_with_details
    raise CachedArtifactNotFoundError(kind=kind, dataset=dataset, config=config, split=split) from e
libcommon.simple_cache.CachedArtifactNotFoundError: Cache entry does not exist: kind='config-info' dataset='chreh/test_data_morebpp' config='default' split=None
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py", line 499, in get_dataset_config_info
    for split_generator in builder._split_generators(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/packaged_modules/arrow/arrow.py", line 50, in _split_generators
    self.info.features = datasets.Features.from_arrow_schema(pa.ipc.open_stream(f).schema)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/pyarrow/ipc.py", line 190, in open_stream
    return RecordBatchStreamReader(source, options=options,
  File "/src/services/worker/.venv/lib/python3.9/site-packages/pyarrow/ipc.py", line 52, in __init__
    self._open(source, options=options, memory_pool=memory_pool)
  File "pyarrow/ipc.pxi", line 974, in pyarrow.lib._RecordBatchStreamReader._open
  File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status
  File "pyarrow/error.pxi", line 88, in pyarrow.lib.check_status
  File "/src/services/worker/.venv/lib/python3.9/site-packages/pyarrow_hotfix/__init__.py", line 47, in __arrow_ext_deserialize__
    raise RuntimeError(
RuntimeError: Disallowed deserialization of 'arrow.py_extension_type':
storage_type = list<item: list<item: float>>
serialized = b'\x80\x04\x95L\x00\x00\x00\x00\x00\x00\x00\x8c\x1adatasets.features.features\x94\x8c\x14Array2DExtensionType\x94\x93\x94M\xc9\x01K\x08\x86\x94\x8c\x07float32\x94\x86\x94R\x94.'
pickle disassembly:
    0: \x80 PROTO      4
    2: \x95 FRAME      76
   11: \x8c SHORT_BINUNICODE 'datasets.features.features'
   39: \x94 MEMOIZE    (as 0)
   40: \x8c SHORT_BINUNICODE 'Array2DExtensionType'
   62: \x94 MEMOIZE    (as 1)
   63: \x93 STACK_GLOBAL
   64: \x94 MEMOIZE    (as 2)
   65: M    BININT2    457
   68: K    BININT1    8
   70: \x86 TUPLE2
   71: \x94 MEMOIZE    (as 3)
   72: \x8c SHORT_BINUNICODE 'float32'
   81: \x94 MEMOIZE    (as 4)
   82: \x86 TUPLE2
   83: \x94 MEMOIZE    (as 5)
   84: R    REDUCE
   85: \x94 MEMOIZE    (as 6)
   86: .    STOP
highest protocol among opcodes = 4
Reading of untrusted Parquet or Feather files with a PyExtensionType column
allows arbitrary code execution.
If you trust this file, you can enable reading the extension type by one of:
- upgrading to pyarrow >= 14.0.1 and calling `pa.PyExtensionType.set_auto_load(True)`
- disabling this error by running `import pyarrow_hotfix; pyarrow_hotfix.uninstall()`
We strongly recommend updating your Parquet/Feather files to use extension types
derived from `pyarrow.ExtensionType` instead, and registering this type explicitly.
See https://arrow.apache.org/docs/dev/python/extending_types.html#defining-extension-types-user-defined-types
for more details.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
  File "/src/services/worker/src/worker/job_runners/config/split_names.py", line 75, in compute_split_names_from_streaming_response
    for split in get_dataset_split_names(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py", line 572, in get_dataset_split_names
    info = get_dataset_config_info(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py", line 504, in get_dataset_config_info
    raise SplitsNotFoundError("The split names could not be parsed from the dataset config.") from err
datasets.inspect.SplitsNotFoundError: The split names could not be parsed from the dataset config.
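The pickle disassembly quoted in the RuntimeError above can be reproduced with the standard library's `pickletools`, which decodes the opcode stream without ever executing it (unlike `pickle.loads`), so it is safe to run on an untrusted payload. A minimal sketch using the serialized bytes from the error:

```python
import io
import pickletools

# The serialized payload quoted in the RuntimeError above.
payload = (
    b"\x80\x04\x95L\x00\x00\x00\x00\x00\x00\x00"
    b"\x8c\x1adatasets.features.features\x94"
    b"\x8c\x14Array2DExtensionType\x94\x93\x94"
    b"M\xc9\x01K\x08\x86\x94\x8c\x07float32\x94\x86\x94R\x94."
)

# pickletools.dis() only disassembles the opcodes; it never imports
# or calls anything, so this is a safe way to inspect the pickle.
buf = io.StringIO()
pickletools.dis(payload, out=buf)
print(buf.getvalue())
```

Reading the opcodes: `STACK_GLOBAL` resolves `datasets.features.features.Array2DExtensionType`, and `REDUCE` would call it with `((457, 8), 'float32')` — so the blocked column appears to be a 2-D float32 array feature of shape (457, 8). That import-and-call step is exactly the arbitrary-code-execution path `pyarrow_hotfix` refuses to run.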