Griffin: Joint RDB Dataset (v65)
This repository contains the main experiment data used for the Griffin model. The accompanying paper is available at Link.
The data is not in a standard format (like CSV or Parquet). Instead, it's provided as a collection of processed files and directories that are loaded using the custom scripts from our main code repository.
For detailed information on the data processing, model architecture, and loading logic, please refer to our main GitHub repository: https://github.com/yanxwb/Griffin
How to Use
The primary way to use this data is to download it first and then load it with the code from the GitHub repository mentioned above.
The examples below show how to download the entire dataset folder.
Using Python
You can use the huggingface_hub library to download the entire repository to a local directory.
from huggingface_hub import snapshot_download

# This will download the whole repository to your Hugging Face cache
# and return the path to it.
repo_id = "yamboo/Griffin_datasets_joint_v65"
local_dir_path = snapshot_download(
    repo_id=repo_id,
    repo_type="dataset",
    # You can specify a local folder if you don't want to use the cache
    # local_dir="path/to/my/local/folder",
)

print(f"Dataset downloaded to: {local_dir_path}")