
AC-Bench: A Benchmark for Actual Causality Reasoning

Dataset Description

AC-Bench is designed to evaluate the actual causality (AC) reasoning capabilities of large language models (LLMs). It contains a collection of carefully annotated samples, each consisting of a story, a query related to actual causation, detailed reasoning steps, and a binary answer. The dataset aims to provide a comprehensive benchmark for assessing the ability of LLMs to perform formal and interpretable AC reasoning.

Dataset Structure

Each sample in the dataset is formatted as a JSON object with the following structure:

{
  "story": "A narrative describing a real-world scenario involving causal events.",
  "question": "A query asking whether a specific event caused an outcome.",
  "reasoning": {
    "causal_events": {
      "Event Description": {
        "occur": true/false,  // Whether the event actually occurs.
        "order": integer,     // The temporal order of the event.
        "focal": true/false,  // Whether the event is the focal causal event.
        "sufficient": true/false,  // Whether the event is a sufficient cause.
        "necessary": true/false,   // Whether the event is a necessary cause.
        "halpern_pearl": true/false,  // Whether the event is an actual cause.
        "norm_violated": true/false,  // Whether the event violates a norm.
        "behavior_intended": true/false  // Whether the event is intentional.
      },
      ...
    },
    "outcome_event": {
      "Outcome Description": {
        "occur": true/false,  // Whether the outcome event occurs.
        "order": integer      // The temporal order of the outcome event.
      }
    }
  },
  "answer": "Yes/No"  // The binary answer to the question.
}
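
For programmatic consumption, the schema above can be mirrored with typed containers. Below is a minimal Python sketch; the class names are illustrative and not part of the dataset itself.

from typing import Dict, TypedDict

class CausalEventAnnotation(TypedDict):
    # Annotations attached to each causal event in the story.
    occur: bool              # Whether the event actually occurs.
    order: int               # Temporal order of the event.
    focal: bool              # Whether the question asks about this event.
    sufficient: bool         # Sufficient cause.
    necessary: bool          # Necessary cause.
    halpern_pearl: bool      # Actual cause under the Halpern-Pearl definition.
    norm_violated: bool      # Norm violation.
    behavior_intended: bool  # Intentionality.

class OutcomeEventAnnotation(TypedDict):
    occur: bool  # Whether the outcome event occurs.
    order: int   # Temporal order of the outcome event.

class Reasoning(TypedDict):
    causal_events: Dict[str, CausalEventAnnotation]
    outcome_event: Dict[str, OutcomeEventAnnotation]

class Sample(TypedDict):
    story: str
    question: str
    reasoning: Reasoning
    answer: str  # "Yes" or "No"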

Example Sample

{
  "story": "Bill's wife, Sue, is out of town for the weekend. She leaves Bill a message that says, 'I just saw this marvelous bookend. It's called a Bartlett bookend. So pretty! I'm going to go back tomorrow and get one. It will be perfect for the left side of our bookshelf'. Bill goes and visits his friend. Bill and his friend talk for a while, and when Bill asks if his friend is willing to sell the bookend, his friend is happy to sell it. Bill makes an offer, but his friend insists on him not paying so much. Finally, Bill buys the right-side Bartlett bookend from his friend and goes home. Then the next day, Sue goes and buys the left-side Bartlett bookend. So, when Sue got home, they had the paired set of bookends.",
  "question": "Did Bill cause them to possess the paired set of bookends?",
  "reasoning": {
    "causal_events": {
      "Bill buys the right-side Bartlett bookend": {
        "occur": true,
        "order": 0,
        "focal": true,
        "sufficient": false,
        "necessary": true,
        "halpern_pearl": true,
        "norm_violated": false,
        "behavior_intended": true
      },
      "Sue buys the left-side Bartlett bookend": {
        "occur": true,
        "order": 1,
        "focal": false,
        "sufficient": false,
        "necessary": true,
        "halpern_pearl": true,
        "norm_violated": false,
        "behavior_intended": true
      }
    },
    "outcome_event": {
      "Bill and Sue possess the paired set of bookends": {
        "occur": true,
        "order": 2
      }
    }
  },
  "answer": "Yes"
}
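
The annotations in this example are internally consistent: the focal event ("Bill buys the right-side Bartlett bookend") satisfies the Halpern-Pearl condition, which matches the "Yes" answer. A small sanity-check sketch, under the assumption that this correspondence holds for every sample:

def expected_answer(sample):
    # Find the focal causal event (the one the question asks about) and
    # read off its Halpern-Pearl annotation; under the assumption above,
    # the binary answer should agree with it.
    for event, annotation in sample["reasoning"]["causal_events"].items():
        if annotation["focal"]:
            return "Yes" if annotation["halpern_pearl"] else "No"
    return None  # No focal event annotated.

For the example above, expected_answer returns "Yes", matching the answer field.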

Dataset Features

  • Comprehensive Annotations: Each sample includes detailed, step-by-step reasoning annotations for actual causality, not just a final answer.
  • Focus on Actual Causation: The dataset specifically evaluates reasoning about actual causality, as distinct from type causality.
  • Challenging and Diverse Samples: The dataset covers a variety of causal scenarios and presents a greater challenge to LLMs than Big-Bench Hard Causal Judgment.

Usage

This dataset can be used to train LLMs and to evaluate and analyze their performance on actual causality reasoning tasks. Researchers can leverage this benchmark to develop more interpretable and accurate causal reasoning models.
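
For example, a minimal evaluation loop might look like the sketch below; the file name ac_bench.json and the predict function are placeholders to adapt to your own setup.

import json

# Load the benchmark from a local JSON file (placeholder path).
with open("ac_bench.json", encoding="utf-8") as f:
    samples = json.load(f)

def predict(story: str, question: str) -> str:
    # Placeholder for an LLM call that returns "Yes" or "No".
    raise NotImplementedError

correct = 0
for sample in samples:
    prediction = predict(sample["story"], sample["question"])
    correct += int(prediction == sample["answer"])

print(f"Accuracy: {correct / len(samples):.2%}")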

License

This dataset is released under the Creative Commons Attribution 4.0 International License.

Acknowledgments

This dataset is based on the work presented in the paper "AC-Reason: Towards Theory-Guided Actual Causality Reasoning with Large Language Models" by Yanxi Zhang, Xin Cong, Zhong Zhang, Xiao Liu, Dongyan Zhao, and Yesai Wu. We thank the authors for their contributions to the field of causal reasoning.
