
1. Dataset Summary

This dataset contains UK Parliamentary speeches (ParlaMint-GB), processed and curated for research in political discourse modeling and generative simulation.
It was created as part of the research paper "ParliaBench: An Evaluation and Benchmarking Framework for LLM-Generated Parliamentary Speech".

The dataset includes:

  • Speech text and prompt questions
  • Speaker metadata (name, party, house, political orientation, date)
  • Debate section topics / speech topics
  • Preprocessed labels aligned to 21 fixed EuroVoc categories (e.g., POLITICS, HEALTH, ENVIRONMENT)

2. Dataset Structure

Format

  • JSON

  • Fields:
    • "speaker": string
    • "speech": string
    • "section": string
    • "speech_date": string
    • "speech_id": string
    • "filename": string
    • "party": string
    • "topic": list of strings
    • "prompts": list of strings
    • "house": string
    • "political_orientation_code": string
    • "political_orientation_label": string
    • "eurovoc_topic": string

  • Splits: train 80% / test 20% (random seed 42)
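A minimal sketch of validating a JSON record against the documented schema, useful for catching mixed-type fields before Arrow conversion. The helper name `validate_record` and the example record values are hypothetical, not part of the dataset.

```python
# Minimal schema check for one JSON record of the dataset.
# EXPECTED_SCHEMA mirrors the field list documented above.
EXPECTED_SCHEMA = {
    "speaker": str, "speech": str, "section": str, "speech_date": str,
    "speech_id": str, "filename": str, "party": str,
    "topic": list, "prompts": list,
    "house": str, "political_orientation_code": str,
    "political_orientation_label": str, "eurovoc_topic": str,
}

def validate_record(record: dict) -> list[str]:
    """Return a list of schema violations for one JSON record."""
    errors = []
    for field, expected_type in EXPECTED_SCHEMA.items():
        if field not in record:
            errors.append(f"missing field: {field}")
        elif not isinstance(record[field], expected_type):
            errors.append(f"{field}: expected {expected_type.__name__}, "
                          f"got {type(record[field]).__name__}")
    return errors

# Hypothetical example record (values are illustrative only).
example = {
    "speaker": "Jane Example", "speech": "I beg to move...",
    "section": "Health and Social Care", "speech_date": "2019-01-15",
    "speech_id": "u1", "filename": "session.xml",
    "party": "ExampleParty", "topic": ["health"],
    "prompts": ["What steps is the Minister taking?"],
    "house": "Commons", "political_orientation_code": "L",
    "political_orientation_label": "Left", "eurovoc_topic": "HEALTH",
}
print(validate_record(example))  # → []
```

Running the validator over every record before loading helps pinpoint rows where, for example, a field that should be a string was serialized as a list.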

3. Intended Uses

This dataset is intended for:

  • Political debate modeling
  • Fine-tuning LLMs on parliamentary speech
  • Research on linguistics and political behavior analysis

4. Source Data

  • Original corpus: ParlaMint-GB (Public)
  • Processing steps applied:
    • Removed parties with fewer than 1,000 speeches

    • Filtered out speeches shorter than 35 words (5th percentile) or longer than 1,580 words (99th percentile)

    • Removed the Unknown political party

    • Filtered out debate sections named "Business of the House" and "Point of Order"

    • In the speech field, replaced left and right double quotation marks (U+201C, U+201D) with the straight double quotation mark (U+0022)

    • Speaker information processing
      • Parsed unique speaker identifiers and full names
      • Extracted political affiliations with temporal bounds

    • Speech content extraction
      • Traversed dated session XML files to extract individual speeches
      • Extracted debate section topics and speech topic context information based on XML structure and elements
      • Filtered out procedural elements and non-substantive content

    • Political orientation extraction
      • Extracted the political orientation code for each party from ParlaMint-listOrg.xml
      • Matched each political orientation code to its label from ParlaMint-taxonomy-politicalOrientation.xml
      • Matched speakers, along with their political orientations, to speeches via each speech's speaker ID attribute

    • Speech-party temporal alignment
      To ensure that each speech was matched to the correct political party at the time of delivery, we implemented a temporal matching mechanism that handles party changes and role transitions of speakers over time.
      Alignment algorithm:

      1. Extract the date of each speech from the XML session files
      2. Parse each speaker’s list of political affiliations from listPerson.xml
      3. Extract the time validity range (@from and @to attributes) indicating when that affiliation was active
      4. Compare the speech date with the affiliation’s from and to dates to identify the active affiliation
      5. In cases where no exact match was found (e.g., missing to date), default to the most recent affiliation that started before the speech date
    • Prompt extraction
      The extracted speech content included both question prompts and speeches. We identified patterns where question prompts began with a letter or number, allowing us to separate them from the speeches themselves. During extraction, we stored each speech as a string in the "speech" attribute, while the individual question prompts were collected into a list of strings under the "prompts" attribute, since a single speaker could pose more than one question. We attributed the prompts to the speeches from the same debate section. Finally, we cleaned the prompts by removing the number and letter prefixes.

    • Thematic categorization
      While ParlaMint uses the Comparative Agendas Project (CAP) classification, we selected EuroVoc as the standard classification system for European parliamentary systems. For policy domains with clear semantic correspondence between the CAP and EuroVoc taxonomies, we applied direct mapping rules. For semantically complex or ambiguous categories, we employed the Kevlar classification methodology.
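The temporal alignment steps above can be sketched as follows. This is an illustrative reimplementation, not the authors' code; the function name `active_affiliation` and the tuple layout for affiliations are assumptions.

```python
from datetime import date

def active_affiliation(affiliations, speech_date):
    """Pick the party affiliation valid on the speech date.

    `affiliations` is a list of (party, from_date, to_date) tuples, as
    parsed from the @from/@to attributes in listPerson.xml; to_date may
    be None for an open-ended affiliation.
    """
    # Exact range match: from <= speech_date <= to (open end counts).
    for party, start, end in affiliations:
        if start <= speech_date and (end is None or speech_date <= end):
            return party
    # Fallback (step 5): most recent affiliation that began before
    # the speech date, used when no range matches exactly.
    earlier = [(start, party) for party, start, end in affiliations
               if start <= speech_date]
    if earlier:
        return max(earlier)[1]
    return None  # no affiliation known before the speech date

# Hypothetical speaker who switched parties in late 2019.
affs = [
    ("Party A", date(2015, 5, 7), date(2019, 11, 5)),
    ("Party B", date(2019, 11, 6), None),
]
print(active_affiliation(affs, date(2020, 3, 1)))  # → Party B
```

The fallback clause mirrors step 5 of the algorithm: a missing `to` date is treated as open-ended, and otherwise the latest affiliation preceding the speech wins.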

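The prompt extraction described above can be sketched with a simple prefix pattern. The regex and the helper `split_prompts_and_speech` are hypothetical; the real prefix markers in the corpus may differ from the ones assumed here.

```python
import re

# Assumed prefix forms: "1.", "a.", or "(a)" before a question prompt.
PREFIX = re.compile(r"^\s*(?:\d+\.|[A-Za-z]\.|\(\w\))\s*")

def split_prompts_and_speech(lines):
    """Separate prefixed question prompts from speech lines."""
    prompts, speech_lines = [], []
    for line in lines:
        m = PREFIX.match(line)
        if m:
            # Store the prompt with its number/letter prefix stripped.
            prompts.append(line[m.end():])
        else:
            speech_lines.append(line)
    return prompts, " ".join(speech_lines)

lines = ["1. What plans does the Minister have?",
         "The Government is committed to reform."]
prompts, speech = split_prompts_and_speech(lines)
print(prompts)  # → ['What plans does the Minister have?']
```

This reproduces the two outputs described in the card: the "prompts" list of cleaned question strings and the "speech" string of remaining content.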
5. Licensing

  • Derived from the ParlaMint corpus, distributed by CLARIN
  • This dataset is also released under CC BY 4.0

6. Ethical Considerations

  • Contains public speeches from elected officials.
  • Should not be used for political profiling of individuals.

7. Citation

@misc{ParliaBench2025,
  title={ParliaBench: An Evaluation and Benchmarking Framework for LLM-Generated Parliamentary Speech},
  author={Marios Koniaris and Argyro Tsipi and Panayiotis Tsanakas},
  year={2025},
  eprint={2511.08247},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2511.08247}
}

8. Authors

Marios Koniaris, Argyro Tsipi, Panayiotis Tsanakas
"ParliaBench: An Evaluation and Benchmarking Framework for LLM-Generated Parliamentary Speech"
