## Dataset Preview

The full dataset viewer is not available; only a preview of the rows is shown.
The dataset generation failed because of a cast error.

- Error code: `DatasetGenerationCastError`
- Exception: `DatasetGenerationCastError`
- Message: An error occurred while generating the dataset. All the data files must have the same columns, but at some point there are 5 new columns ({'eval_loss', 'eval_steps_per_second', 'eval_samples_per_second', 'eval_runtime', 'perplexity'}) and 5 missing columns ({'train_steps_per_second', 'train_samples_per_second', 'train_runtime', 'total_flos', 'train_loss'}). This happened while the json dataset builder was generating data using hf://datasets/gonzalobenegas/gpn-animal-promoter-checkpoints-second-part/eval_results.json (at revision 9280ab52eaf293b463e62773030d7cbab6b5fccc). Please either edit the data files to have matching columns, or separate them into different configurations (see the docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations).

Traceback:

Traceback (most recent call last):
File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 1831, in _prepare_split_single
writer.write_table(table)
File "/usr/local/lib/python3.12/site-packages/datasets/arrow_writer.py", line 714, in write_table
pa_table = table_cast(pa_table, self._schema)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/datasets/table.py", line 2272, in table_cast
return cast_table_to_schema(table, schema)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/datasets/table.py", line 2218, in cast_table_to_schema
raise CastError(
datasets.table.CastError: Couldn't cast
epoch: double
eval_loss: double
eval_runtime: double
eval_samples_per_second: double
eval_steps_per_second: double
perplexity: double
to
{'epoch': Value('float64'), 'total_flos': Value('float64'), 'train_loss': Value('float64'), 'train_runtime': Value('float64'), 'train_samples_per_second': Value('float64'), 'train_steps_per_second': Value('float64')}
because column names don't match
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1450, in compute_config_parquet_and_info_response
parquet_operations, partial, estimated_dataset_info = stream_convert_to_parquet(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 993, in stream_convert_to_parquet
builder._prepare_split(
File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 1702, in _prepare_split
for job_id, done, content in self._prepare_split_single(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 1833, in _prepare_split_single
raise DatasetGenerationCastError.from_cast_error(
datasets.exceptions.DatasetGenerationCastError: An error occurred while generating the dataset
All the data files must have the same columns, but at some point there are 5 new columns ({'eval_loss', 'eval_steps_per_second', 'eval_samples_per_second', 'eval_runtime', 'perplexity'}) and 5 missing columns ({'train_steps_per_second', 'train_samples_per_second', 'train_runtime', 'total_flos', 'train_loss'}).
This happened while the json dataset builder was generating data using
hf://datasets/gonzalobenegas/gpn-animal-promoter-checkpoints-second-part/eval_results.json (at revision 9280ab52eaf293b463e62773030d7cbab6b5fccc)
Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
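Per the manual-configuration docs linked in the error, one way to separate the files is to declare each results file as its own viewer configuration in the dataset card's YAML header. The sketch below is illustrative only: the config names are made up, and `train_results.json` is an assumed name for the file holding the training metrics previewed below.

```yaml
# Hypothetical front matter for the dataset README: one configuration per results file.
configs:
  - config_name: eval_results
    data_files: "eval_results.json"     # file named in the error above
  - config_name: train_results
    data_files: "train_results.json"    # assumed file name, not confirmed from the repo
```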
| epoch (float64) | total_flos (float64) | train_loss (float64) | train_runtime (float64) | train_samples_per_second (float64) | train_steps_per_second (float64) |
|---|---|---|---|---|---|
| 29.016677 | 124,468,667,634,155,520,000 | 1.097204 | 1,394,368.6556 | 190.939 | 0.093 |
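Because the viewer casts every JSON file in the repository to a single schema, loading each results file separately avoids the column mismatch. A minimal sketch, assuming the row above comes from a `train_results.json` file sitting next to the `eval_results.json` named in the error (that file name is an assumption):

```python
from datasets import load_dataset

repo = "gonzalobenegas/gpn-animal-promoter-checkpoints-second-part"

# Build one dataset per results file so the train_* and eval_* columns
# never have to be cast to a common schema.
eval_metrics = load_dataset(
    "json", data_files=f"hf://datasets/{repo}/eval_results.json"
)
train_metrics = load_dataset(
    "json", data_files=f"hf://datasets/{repo}/train_results.json"  # assumed file name
)

print(eval_metrics["train"][0])   # eval_loss, perplexity, eval_runtime, ...
print(train_metrics["train"][0])  # train_loss, total_flos, train_runtime, ...
```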
# checkpoints

This model is a fine-tuned version of songlab/gpn-animal-promoter on the `dataset` dataset. It achieves the following results on the evaluation set:
- Loss: 1.1658
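The evaluation file flagged in the viewer error above stores a `perplexity` column next to `eval_loss`; for a language-modeling objective, perplexity is usually just the exponential of the cross-entropy loss, as in the standard Transformers examples. A small sanity-check sketch (that this run uses the same formula is an assumption):

```python
import math

eval_loss = 1.1658                  # evaluation loss reported above
perplexity = math.exp(eval_loss)    # perplexity = exp(cross-entropy loss)
print(round(perplexity, 2))         # ~3.21
```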
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (an equivalent `TrainingArguments` sketch follows the list):
- learning_rate: 0.001
- train_batch_size: 128
- eval_batch_size: 128
- seed: 43
- gradient_accumulation_steps: 16
- total_train_batch_size: 2048
- optimizer: AdamW (`adamw_torch`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- training_steps: 130000
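A sketch of the corresponding `TrainingArguments`, under the assumption of a single training device (so that 128 × 16 gives the listed total train batch size of 2048); `output_dir` is a placeholder, not taken from the card:

```python
from transformers import TrainingArguments

# Hyperparameters mirrored from the list above; everything else stays at defaults.
training_args = TrainingArguments(
    output_dir="checkpoints",        # placeholder
    learning_rate=1e-3,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=43,
    gradient_accumulation_steps=16,  # 128 * 16 = 2048 effective train batch size
    lr_scheduler_type="cosine",
    max_steps=130_000,
    optim="adamw_torch",             # AdamW with betas=(0.9, 0.999), eps=1e-08 (defaults)
)
```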
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|---|---|---|---|
| 1.1109 | 2.0091 | 10000 | 1.1813 |
| 1.1102 | 4.0182 | 20000 | 1.1783 |
| 1.1085 | 6.0273 | 30000 | 1.1766 |
| 1.106 | 9.0025 | 40000 | 1.1754 |
| 1.1028 | 11.0116 | 50000 | 1.1766 |
| 1.1 | 13.0207 | 60000 | 1.1759 |
| 1.0964 | 15.0298 | 70000 | 1.1729 |
| 1.0927 | 18.0050 | 80000 | 1.1696 |
| 1.089 | 20.0142 | 90000 | 1.1684 |
| 1.0859 | 22.0233 | 100000 | 1.1670 |
| 1.0837 | 24.0324 | 110000 | 1.1658 |
| 1.0823 | 27.0076 | 120000 | 1.1643 |
| 1.0819 | 29.0167 | 130000 | 1.1631 |
### Framework versions
- Transformers 4.57.1
- Pytorch 2.9.0+cu128
- Datasets 4.4.1
- Tokenizers 0.22.1