Improve dataset card: Add metadata and description
This PR improves the dataset card by:
- Adding the correct metadata, including `task_categories`, reflecting the nature of the FinTagging benchmark.
- Adding a link to the paper for better context and discoverability.
- Adding a link to the GitHub repository for the evaluation framework.
- Rewriting the description to accurately reflect the FinTagging benchmark's purpose and contents.
README.md
CHANGED

@@ -1,21 +1,29 @@
 ---
-splits:
-- name: test
-  num_bytes: 37128330
-  num_examples: 6599
-download_size: 4493992
-dataset_size: 37128330
-configs:
-- config_name: default
-  data_files:
-  - split: test
-    path: data/test-*
+license: cc-by-nc-4.0
+task_categories:
+- table-question-answering
+tags:
+- finance
+- xbrl
+- information-extraction
+- semantic-alignment
 ---
+
+# FinTagging: An LLM-ready Benchmark for Extracting and Structuring Financial Information
+
+FinTagging is the first full-scope, table-aware XBRL benchmark designed to evaluate the structured information extraction and semantic alignment capabilities of large language models (LLMs) in XBRL-based financial reporting. It decomposes the XBRL tagging problem into two subtasks:
+
+- **FinNI:** Financial entity extraction.
+- **FinCL:** Taxonomy-driven concept alignment.
+
+FinTagging requires models to jointly extract facts from both unstructured text and structured tables and align them with the full US-GAAP taxonomy of more than 10,000 concepts.
+
+[Paper](https://huggingface.co/papers/2505.20650) | [Evaluation Framework](https://github.com/The-FinAI/FinBen)
+
+This repository contains the original benchmark dataset without preprocessing. Annotated data (`benchmark_ground_truth_pipeline.json`) is provided in the `annotation` folder. For preprocessed datasets tailored to specific model architectures, see the linked datasets in the GitHub README.
+
+**Datasets:**
+
+* **FinNI-eval:** Evaluation set for the FinNI subtask.
+* **FinCL-eval:** Evaluation set for the FinCL subtask.
+* **FinTagging_BIO:** BIO-format dataset for token-level tagging.