---
license: mit
task_categories:
  - text-retrieval
  - question-answering
language:
  - as
  - bn
  - gu
  - hi
  - kn
  - ml
  - mr
  - ne
  - or
  - pa
  - ta
  - te
  - ur
multilinguality: multilingual
size_categories:
  - 10K<n<100K
source_datasets:
  - ms_marco
tags:
  - indian-languages
  - multilingual
  - indic
  - retrieval
  - msmarco
  - benchmark
pretty_name: 'IndicMSMARCO: Multilingual Information Retrieval Benchmark'
configs:
  - config_name: as
    data_files:
      - as/*.parquet
    default: false
    description: Assamese language subset
  - config_name: bn
    data_files:
      - bn/*.parquet
    default: false
    description: Bengali language subset
  - config_name: gu
    data_files:
      - gu/*.parquet
    default: false
    description: Gujarati language subset
  - config_name: hi
    data_files:
      - hi/*.parquet
    default: false
    description: Hindi language subset
  - config_name: kn
    data_files:
      - kn/*.parquet
    default: false
    description: Kannada language subset
  - config_name: ml
    data_files:
      - ml/*.parquet
    default: false
    description: Malayalam language subset
  - config_name: mr
    data_files:
      - mr/*.parquet
    default: false
    description: Marathi language subset
  - config_name: ne
    data_files:
      - ne/*.parquet
    default: false
    description: Nepali language subset
  - config_name: or
    data_files:
      - or/*.parquet
    default: false
    description: Odia language subset
  - config_name: pa
    data_files:
      - pa/*.parquet
    default: false
    description: Punjabi language subset
  - config_name: ta
    data_files:
      - ta/*.parquet
    default: false
    description: Tamil language subset
  - config_name: te
    data_files:
      - te/*.parquet
    default: false
    description: Telugu language subset
  - config_name: ur
    data_files:
      - ur/*.parquet
    default: false
    description: Urdu language subset
---

# 🔍 IndicMSMARCO: Multilingual Information Retrieval Benchmark

A multilingual variant of MS MARCO tailored to 13 Indian languages, pairing carefully selected queries with their relevant passages and human-verified translations.

## 🚀 Quick Start - Load Individual Languages

```python
from datasets import load_dataset

# Load ONLY Hindi data (fast and efficient!)
hindi_data = load_dataset("ai4bharat/IndicMSMARCO", "hi")
print(f"Hindi queries: {len(hindi_data['train'])} samples")

# Load ONLY Bengali data
bengali_data = load_dataset("ai4bharat/IndicMSMARCO", "bn")
print(f"Bengali queries: {len(bengali_data['train'])} samples")

# Access query-passage pairs. Note: slicing a Dataset ([:3]) returns a
# dict of columns, so use .select() to iterate over rows.
for example in hindi_data['train'].select(range(3)):
    print(f"Query: {example['query']}")
    print(f"Passage: {example['passage'][:200]}...")
    print("---")
```

## 📊 Dataset Overview

- **Total Samples:** 12,999
- **Languages:** 13
- **Source:** MS MARCO development set
- **Quality:** Human-verified translations
- **Task:** Information Retrieval / Passage Ranking

## 🎯 Key Features

- **Topic Diversity:** Science, history, politics, health, technology
- **Query Complexity:** Simple factual, descriptive, and complex entity-based queries
- **Balanced Representation:** Short, medium, and long-form queries
- **High-Quality Translations:** Professional translation and verification
- **Consistent Structure:** Normalized schema across all languages

## 📋 Available Languages (13 total)

| Code | Language | Load Command | Sample Count |
|------|----------|--------------|--------------|
| `as` | Assamese | `load_dataset('ai4bharat/IndicMSMARCO', 'as')` | ~999 |
| `bn` | Bengali | `load_dataset('ai4bharat/IndicMSMARCO', 'bn')` | ~999 |
| `gu` | Gujarati | `load_dataset('ai4bharat/IndicMSMARCO', 'gu')` | ~999 |
| `hi` | Hindi | `load_dataset('ai4bharat/IndicMSMARCO', 'hi')` | ~999 |
| `kn` | Kannada | `load_dataset('ai4bharat/IndicMSMARCO', 'kn')` | ~999 |
| `ml` | Malayalam | `load_dataset('ai4bharat/IndicMSMARCO', 'ml')` | ~999 |
| `mr` | Marathi | `load_dataset('ai4bharat/IndicMSMARCO', 'mr')` | ~999 |
| `ne` | Nepali | `load_dataset('ai4bharat/IndicMSMARCO', 'ne')` | ~999 |
| `or` | Odia | `load_dataset('ai4bharat/IndicMSMARCO', 'or')` | ~999 |
| `pa` | Punjabi | `load_dataset('ai4bharat/IndicMSMARCO', 'pa')` | ~999 |
| `ta` | Tamil | `load_dataset('ai4bharat/IndicMSMARCO', 'ta')` | ~999 |
| `te` | Telugu | `load_dataset('ai4bharat/IndicMSMARCO', 'te')` | ~999 |
| `ur` | Urdu | `load_dataset('ai4bharat/IndicMSMARCO', 'ur')` | ~999 |

## 💡 Usage Examples

### Information Retrieval Evaluation

```python
from datasets import load_dataset

# Load Hindi benchmark
dataset = load_dataset("ai4bharat/IndicMSMARCO", "hi")
queries = dataset['train']

# Extract queries and passages for retrieval evaluation
for item in queries:
    query_id = item['query_id']
    query_text = item['query']
    passage_text = item['passage']
    relevance = item['relevance_score']

    # Use for your retrieval model evaluation
    print(f"Query {query_id}: {query_text}")
    print(f"Relevant passage: {passage_text[:100]}...")
```

### Cross-lingual Retrieval Benchmark

```python
from datasets import load_dataset

# Compare retrieval across languages
languages = ['as', 'bn', 'gu', 'hi']
results = {}

for lang in languages:
    dataset = load_dataset("ai4bharat/IndicMSMARCO", lang)
    results[lang] = dataset['train']
    print(f"{lang}: {len(results[lang])} query-passage pairs")

# Evaluate your multilingual retrieval model
for lang_code, queries in results.items():
    # Run your retrieval evaluation here
    pass

## 📋 Dataset Structure

```json
{
    "query_id": "1234567",
    "query": "भारत की राजधानी क्या है?",
    "passage": "भारत की राजधानी नई दिल्ली है। यह देश के उत्तरी भाग में स्थित है...",
    "passage_id": "7654321",
    "language": "hi",
    "answer": "नई दिल्ली",
    "title": "भारत की राजधानी",
    "query_type": "factual",
    "relevance_score": 1.0,
    "is_selected": true,
    "text": "Query: भारत की राजधानी क्या है? | Passage: भारत की राजधानी नई दिल्ली है...",
    "dataset": "IndicMSMARCO",
    "source": "MS MARCO translated to Indian languages",
    "meta": "{\"model\": \"translation_model\", \"verified\": true}"
}
```
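
Note that `meta` is a JSON-encoded string rather than a nested object, so decode it before use. A quick sketch using a trimmed-down record mirroring the schema above:

```python
import json

# Hypothetical record with the same `meta` encoding as the schema above
record = {
    "query_id": "1234567",
    "language": "hi",
    "meta": "{\"model\": \"translation_model\", \"verified\": true}",
}

meta = json.loads(record["meta"])
print(meta["verified"])  # True
print(meta["model"])     # translation_model
```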

## ⚡ Performance & Loading Tips

- **Single Language Loading:** Always pass a config name for the fastest load
- **Streaming:** Use `streaming=True` for memory-efficient processing
- **Batch Evaluation:** Load the full train split for comprehensive benchmarking
- **Cross-lingual:** Compare the same `query_id` across languages
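
Because each language split translates the same underlying MS MARCO queries, the shared `query_id` can be used to build parallel query sets. A minimal sketch on toy records standing in for two language splits (the Bengali rendering is illustrative):

```python
# Toy records mimicking two language splits of the dataset
hindi = [{"query_id": "42", "query": "भारत की राजधानी क्या है?"}]
bengali = [{"query_id": "42", "query": "ভারতের রাজধানী কী?"}]

# Index each split by query_id
by_id_hi = {r["query_id"]: r["query"] for r in hindi}
by_id_bn = {r["query_id"]: r["query"] for r in bengali}

# Pair up translations of the same source query
parallel = {qid: (by_id_hi[qid], by_id_bn.get(qid)) for qid in by_id_hi}
print(parallel["42"])
```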

## 🎯 Use Cases

- 🔍 **Information Retrieval:** Benchmark multilingual retrieval systems
- 🤖 **RAG Evaluation:** Test retrieval-augmented generation systems
- 📊 **Cross-lingual IR:** Evaluate cross-language information retrieval
- 🧪 **Model Comparison:** Compare multilingual embedding models
- 📚 **Academic Research:** Multilingual IR and NLP research

## 📖 Citation

If you use IndicMSMARCO in your research, please cite:

```bibtex
@article{indic_msmarco_2025,
  title={IndicRAGSuite: Large-Scale Datasets and a Benchmark for Indian Language RAG Systems},
  author={Pasunuti Prasanjith and Prathmesh B More and Anoop Kunchukuttan and Raj Dabre},
  journal={arXiv preprint arXiv:2506.01615},
  year={2025},
  url={https://huggingface.co/datasets/ai4bharat/IndicMSMARCO}
}
```

## 📄 License

MIT License

## 🔧 Technical Details

- **Format:** Parquet files per language (see the configs above)
- **Encoding:** UTF-8
- **Schema:** Normalized MS MARCO structure
- **Quality Control:** Multi-stage validation process

*Built for multilingual information retrieval • Human-verified quality • Ready for benchmarking*