Improve model card: Add comprehensive metadata, GitHub link, and Transformers usage
#1 opened by nielsr (HF Staff)
This PR significantly improves the model card for the MMR1 model, presented in the paper *MMR1: Enhancing Multimodal Reasoning with Variance-Aware Sampling and Open Resources*.
Key improvements include:
- Metadata: Addition of `license: apache-2.0`, `pipeline_tag: image-text-to-text`, `library_name: transformers`, and relevant `tags` (`qwen2_5_vl`, `multimodal-llm`, `multimodal-reasoning`, `math-reasoning`); a metadata sketch follows after this list.
- Contextual Metadata: Inclusion of `datasets` (`MMR1/MMR1-SFT`, `MMR1/MMR1-RL`) and `base_model` (`Qwen/Qwen2.5-VL-7B-Instruct`) for better lineage and discoverability.
- Performance Metrics: Addition of `model-index` with an average performance score for the `MMR1-7B-RL` variant on mathematics-related multimodal reasoning benchmarks.
- Content Enrichment: Integration of the paper abstract, a direct link to the GitHub repository, and key sections from the GitHub README (Introduction, Methodology, Open Resources, Evaluation, Analysis, Qualitative Demo, Acknowledgement, Citation, and License) for a more informative model overview. Relevant images from the GitHub repository are also included.
- Usage Example: Addition of a clear "Quick Start (Inference)" code snippet demonstrating how to use the model with the `transformers` library, which directly supports the `library_name` metadata and enables the automated "How to use" widget. The example is designed for image-text input, aligning with the model's capabilities; an inference sketch in this style appears after this list.
- Clean-up: Removal of internal "File information" from the model card.
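For reference, the metadata described above can be expressed programmatically with the `huggingface_hub` card API. This is a minimal, illustrative sketch only; the eval dataset identifier and metric value are placeholders, not the actual `model-index` entries added in this PR:

```python
from huggingface_hub import ModelCardData, EvalResult

# Illustrative sketch of the metadata this PR adds to the card's YAML front matter.
# The eval dataset id and metric value below are placeholders, not the PR's reported numbers.
card_data = ModelCardData(
    model_name="MMR1-7B-RL",
    license="apache-2.0",
    pipeline_tag="image-text-to-text",
    library_name="transformers",
    tags=["qwen2_5_vl", "multimodal-llm", "multimodal-reasoning", "math-reasoning"],
    datasets=["MMR1/MMR1-SFT", "MMR1/MMR1-RL"],
    base_model="Qwen/Qwen2.5-VL-7B-Instruct",
    eval_results=[
        EvalResult(
            task_type="image-text-to-text",
            dataset_type="math-multimodal-reasoning",  # placeholder dataset id
            dataset_name="Math-related multimodal reasoning benchmarks (average)",
            metric_type="accuracy",
            metric_value=0.0,  # placeholder; see the merged card for the reported average
        )
    ],
)

# Renders the YAML block (including `model-index`) that sits at the top of the model card.
print(card_data.to_yaml())
```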
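And a minimal inference sketch in the spirit of the "Quick Start (Inference)" snippet, assuming the `MMR1/MMR1-7B-RL` checkpoint id and a recent `transformers` release with Qwen2.5-VL support; the image URL and prompt are placeholders:

```python
import torch
from transformers import AutoProcessor, AutoModelForImageTextToText

# Assumed checkpoint id for the RL variant; adjust if the actual repo id differs.
model_id = "MMR1/MMR1-7B-RL"

model = AutoModelForImageTextToText.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

# Image-text chat input; the URL is a placeholder for a math problem screenshot.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://example.com/geometry_problem.png"},
            {"type": "text", "text": "Solve the problem in the image. Show your reasoning step by step."},
        ],
    }
]

inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=512)

# Strip the prompt tokens and decode only the newly generated answer.
answer = processor.batch_decode(
    output_ids[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True
)[0]
print(answer)
```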
This enhances the model's discoverability, usability, and documentation for researchers and users on the Hugging Face Hub.
Sicong changed pull request status to merged