---
license: mit
---

# PerMedCQA: Persian Medical Consumer QA Benchmark

**PerMedCQA: Benchmarking Large Language Models on Medical Consumer Question Answering in Persian**

PerMedCQA is the first large-scale, real-world benchmark for Persian-language medical consumer question answering. It contains anonymized medical inquiries from Persian-speaking users paired with professional responses, enabling rigorous evaluation of large language models in low-resource, health-related domains.

---

## 📊 Dataset Overview

- **Total entries**: 68,138 QA pairs
- **Source platforms**: DrYab, HiSalamat, GetZoop, Mavara-e-Teb
- **Timeframe**: Nov 10, 2022 – Apr 2, 2024
- **Languages**: Persian only
- **Licensing**: [CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/)

---

## 🔗 Paper

- 📄 [Paper on arXiv](https://arxiv.org/abs/2505.18331)
- 📊 [Papers with Code page](https://paperswithcode.com/paper/permedcqa-benchmarking-large-language-models)

---

## 🧬 Metadata & Features

Each example in the dataset includes:

- `instance_id`: Unique ID for each QA pair
- `Title`: Short user-submitted title
- `Question`: Full Persian-language consumer medical question
- `Expert_Answer`: Doctor’s response
- `Category`: Medical topic (e.g., “پوست و مو”)
- `Specialty`: Expert’s medical field (e.g., “متخصص پوست و مو”)
- `Age`: Reported patient age
- `Weight`: Reported weight (optional)
- `Sex`: Patient gender (`"man"` or `"woman"`)
- `dataset_source`: Name of the source platform (e.g., DrYab, GetZoop)
- `Tag`: ICD‑11 label and rationale
- `QuestionType`: Question classification tag (e.g., "Contraindication", "Indication") and reasoning

---

## 📁 Dataset Structure

```json
{
  "Title": "قرمزی پوست نوزاد بعد از استفاده از پماد",
  "Category": "پوست و مو",
  "Specialty": "متخصص پوست و مو",
  "Age": "1",
  "Weight": "10",
  "Sex": "man",
  "dataset_source": "HiSalamat",
  "instance_id": 32405,
  "Tag": {
    "Tag": 23,
    "Tag_Reasoning": "The question addresses a skin reaction in an infant following the application of a cream, indicating a dermatological condition."
  },
  "Question": "سلام خسته نباشید. من واسه پسر ۱ ساله‌ام که جای واکسنش سفت شده بود، پماد موضعی استفاده کردم. ولی الان پوستش خیلی قرمز شده و خارش داره. ممکنه حساسیت داده باشه؟ باید چکار کنم؟",
  "Expert_Answer": "احتمالا پوست نوزاد به ترکیبات پماد حساسیت نشان داده است. مصرف آن را قطع کنید و در صورت ادامه علائم به متخصص پوست مراجعه کنید.",
  "QuestionType": {
    "Explanation": "The user asks about an adverse skin reaction following the use of a topical medication on an infant, which is a case of possible side effects.",
    "QuestionType_Tag": "SideEffect"
  }
}
```

---

## 📥 How to Load

```python
from datasets import load_dataset

ds = load_dataset("NaghmehAI/PerMedCQA", split="train")
print(ds[0])
```

---

## 🚀 Intended Uses

- 🧠 **Evaluation** of multilingual or Persian-specific LLMs on real-world, informal medical questions (see the sketch below)
- 🛠️ **Fine-tuning and instruction-tuning**, in few-shot or zero-shot settings
- 🌍 **Cultural insights**: Persian-language behavior in health-related discourse
- ⚠️ **NOT for clinical use**: informational and research purposes only
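As a minimal sketch of the evaluation use case, the snippet below turns one record into a zero-shot prompt using the `Title`, `Question`, and `Expert_Answer` fields from the schema above. The prompt template and the `build_prompt` helper are illustrative assumptions, not the evaluation setup used in the paper.

```python
from datasets import load_dataset

ds = load_dataset("NaghmehAI/PerMedCQA", split="train")

def build_prompt(example: dict) -> str:
    """Format one PerMedCQA record as a zero-shot prompt (illustrative template only)."""
    return (
        "شما یک پزشک هستید. به پرسش زیر به زبان فارسی پاسخ دهید.\n\n"  # "You are a doctor. Answer the question below in Persian."
        f"عنوان: {example['Title']}\n"    # Title
        f"پرسش: {example['Question']}\n"  # Question
        "پاسخ:"                            # "Answer:"
    )

sample = ds[0]
prompt = build_prompt(sample)           # send this to the model under evaluation
reference = sample["Expert_Answer"]     # doctor's answer, used as the gold reference for scoring
print(prompt)
```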
---

## ⚙️ Data Processing Pipeline

### Stage 1: Column Transformation (`change_columns.py`)

- Reads CSV/JSON input and transforms it into a structured JSON format
- Handles single-turn, multi-turn, and multi-expert Q&A data
- Cleans and formats the text, removing unnecessary whitespace and newlines
- Creates a chat-style JSON file with `user` and `assistant` roles

### Stage 2: QA Preprocessing (`preprocess_for_qa.py`)

- Truncates multi-turn dialogues to the first Q&A pair
- Removes:
  - Empty or invalid messages
  - Q&A pairs shorter than 3 words
  - Duplicate Q&A instances
- Adds:
  - `dataset_source` and `instance_id` to each item
- Merges cleaned records into `All_QA_preprocessed.json`

### Dataset Cleaning Results

| Dataset      | Step 1 Removed | Step 2 Removed | Step 3 Removed | Final Records |
|--------------|----------------|----------------|----------------|---------------|
| Dr_Yab       | 63             | 1,083          | 25             | 37,905        |
| GetZoop      | 1,005          | 1,352          | 8              | 25,502        |
| Hi-Salamat   | 9,580          | 121            | 1              | 5,220         |
| Mavara-e-Teb | 0              | 1,034          | 92             | 4,789         |

---

## 📚 Citation

If you use **PerMedCQA**, please cite:

```bibtex
@misc{jamali2025permedcqa,
  title={PerMedCQA: Benchmarking Large Language Models on Medical Consumer Question Answering in Persian Language},
  author={Jamali, Naghmeh and Mohammadi, Milad and Baledi, Danial and Rezvani, Zahra and Faili, Heshaam},
  year={2025},
  eprint={2505.18331},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2505.18331}
}
```

---

## 📬 Contact

For collaboration or questions:

- 📧 Naghmeh Jamali – naghme.jamali.ai@gmail.com
- 📧 Milad Mohammadi – miladmohammadi@ut.ac.ir
- 📧 Danial Baledi – baledi.danial@gmail.com