---
dataset_name: "AraLingBench"
pretty_name: "AraLingBench"
tags:
- arabic
- evaluation
- multiple-choice
- question-answering
language:
- ar
task_categories:
- question-answering
size_categories:
- n<1K
---

# AraLingBench

📄 **Paper:** [arXiv:2511.14295](https://arxiv.org/abs/2511.14295)
💻 **GitHub:** [hammoudhasan/AraLingBench](https://github.com/hammoudhasan/AraLingBench)

AraLingBench is a **150-question Arabic multiple-choice benchmark** that tests the core linguistic competence of language models across five pillars:

- النحو (Grammar)
- الصرف (Morphology)
- الإملاء (Spelling & Orthography)
- فهم اللغة (Reading Comprehension)
- التركيب اللغوي والأسلوبي (Syntax & Stylistics)

All questions are **human-authored and validated**, each with a single correct answer and a difficulty label: `Easy`, `Medium`, or `Hard`.

## Data Fields

Each example has:

- `label` *(str)* — linguistic category
- `context` *(str)* — optional supporting text (may be empty)
- `question` *(str)* — question in Arabic
- `options` *(List[str])* — answer choices
- `answer` *(str)* — correct choice (matches one of `options`)
- `difficulty` *(str)* — one of `Easy`, `Medium`, `Hard`

Single split:

- `train` — 150 examples (use as an evaluation set)

## Usage

```python
from datasets import load_dataset

ds = load_dataset("hammh0a/AraLingBench")
example = ds["train"][0]

print(example["label"])
print(example["question"])
print(example["options"])
print(example["answer"])
```

## Citation

If you use AraLingBench, please cite:

```bibtex
@article{zbib2025aralingbench,
  title   = {AraLingBench: A Human-Annotated Benchmark for Evaluating Arabic Linguistic Capabilities of Large Language Models},
  author  = {Mohammad Zbib and Hasan Abed Al Kader Hammoud and Sina Mukalled and Nadine Rizk and Fatima Karnib and Issam Lakkis and Ammar Mohanna and Bernard Ghanem},
  journal = {arXiv preprint arXiv:2511.14295},
  year    = {2025},
  url     = {https://arxiv.org/abs/2511.14295}
}
```
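
## Example Evaluation

A minimal sketch of how one might score a model on the benchmark, assuming exact-match accuracy against the `answer` field. The `predict` function here is a hypothetical placeholder, not part of this dataset or any library; replace it with your own model call that returns one of the strings in `options`:

```python
from datasets import load_dataset


def predict(question: str, options: list[str], context: str) -> str:
    # Hypothetical stand-in for a real model query: always picks the
    # first option. Substitute your own inference call here.
    return options[0]


ds = load_dataset("hammh0a/AraLingBench")["train"]

correct = 0
for ex in ds:
    pred = predict(ex["question"], ex["options"], ex["context"])
    # `answer` matches one of `options` exactly, so string equality suffices.
    if pred == ex["answer"]:
        correct += 1

print(f"Accuracy: {correct / len(ds):.3f}")
```

Because every question has exactly one correct choice drawn from `options`, simple string comparison is enough; no fuzzy matching or answer extraction is needed as long as the model is constrained to output one of the given choices.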