---
license: mit
task_categories:
  - question-answering
language:
  - en
tags:
  - finance
  - table-text
  - numerical_reasoning
size_categories:
  - n<1K
---

# SECQUE

SECQUE is a comprehensive benchmark for evaluating large language models (LLMs) in financial analysis tasks.

SECQUE comprises 565 expert-written questions covering SEC filings analysis across four key categories:

- comparison analysis
- ratio calculation
- risk assessment
- financial insight generation
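
As a quick-start sketch, the benchmark can be loaded with the Hugging Face `datasets` library. The repository ID and split name below are assumptions inferred from this card's location; adjust them if the actual hub ID differs:

```python
from datasets import load_dataset

# Assumed hub repository ID and split; verify against the dataset page.
ds = load_dataset("nogabenyoash/SecQue", split="test")

# Inspect one expert-written question and its reference answer.
print(ds[0])
```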

To assess model performance, we develop SECQUE-Judge, an evaluation mechanism that leverages multiple LLM-based judges and demonstrates strong alignment with human evaluations. Additionally, we provide an extensive analysis of various models' performance on our benchmark.
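
This card describes SECQUE-Judge only at a high level. The sketch below illustrates one plausible multi-judge aggregation scheme (majority vote over per-judge verdicts); the stub judge, verdict labels, and voting rule are illustrative assumptions, not the authors' exact protocol:

```python
from collections import Counter
from typing import Callable

# A judge maps (question, reference_answer, model_answer) to a verdict string.
Judge = Callable[[str, str, str], str]

def stub_judge(question: str, reference: str, answer: str) -> str:
    # Stand-in for an LLM-backed judge: a real judge would prompt an LLM
    # to grade the model answer against the expert reference.
    return "correct" if reference.lower() in answer.lower() else "incorrect"

def multi_judge(question: str, reference: str, answer: str,
                judges: list[Judge]) -> str:
    """Aggregate several judges by majority vote (assumed aggregation rule)."""
    verdicts = Counter(j(question, reference, answer) for j in judges)
    return verdicts.most_common(1)[0][0]

# Toy usage with three identical stub judges.
print(multi_judge(
    "What was FY2023 operating margin?",
    "12%",
    "Operating margin was approximately 12% in FY2023.",
    judges=[stub_judge] * 3,
))
```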

## Results

| Model | Baseline | Financial | Baseline CoT | Financial CoT | Flipped | Avg Tokens by Model |
|---|---|---|---|---|---|---|
| GPT-4o | 0.69/0.79 | 0.62/0.71 | 0.67/0.76 | 0.63/0.73 | 0.68/0.78 | 319.84 |
| GPT-4o-mini | 0.64/0.73 | 0.38/0.47 | 0.60/0.72 | 0.56/0.65 | 0.62/0.73 | 289.76 |
| Llama-3.3-70B-Instruct | 0.65/0.75 | 0.60/0.71 | 0.63/0.74 | 0.60/0.72 | 0.62/0.74 | 341.63 |
| Qwen2.5-32B-Instruct | 0.61/0.72 | 0.49/0.58 | 0.60/0.71 | 0.55/0.67 | 0.65/0.75 | 331.34 |
| Phi-4 | 0.56/0.66 | 0.55/0.64 | 0.57/0.67 | 0.56/0.66 | 0.57/0.67 | 294.33 |
| Meta-Llama-3.1-8B-Instruct | 0.48/0.60 | 0.41/0.54 | 0.44/0.56 | 0.40/0.53 | 0.47/0.59 | 338.38 |
| Mistral-Nemo-Instruct-2407 | 0.46/0.55 | 0.32/0.42 | 0.45/0.56 | 0.44/0.55 | 0.44/0.54 | 231.52 |
| Avg Tokens by Prompt | 283.04 | 151.97 | 437.38 | 334.71 | 317.57 | 304.93 |

## Citation

```bibtex
@inproceedings{benyoash2025secque,
    title = "SECQUE: A Benchmark for Evaluating Real-World Financial Analysis Capabilities",
    author = "Ben Yoash, Noga and
      Brief, Meni and
      Ovadia, Oded and
      Shenderovitz, Gil and
      Mishaeli, Moshik and
      Lemberg, Rachel and
      Sheetrit, Eitam",
    month = apr,
    year = "2025",
    url = "https://arxiv.org/pdf/2504.04596",
}
```

## Evaluation Benchmark Notice

This benchmark is intended solely for evaluation and must not be used for training in any way.