Improve model card: Add `transformers` compatibility, `text-generation` pipeline tag, and comprehensive details

#1 opened by nielsr (HF Staff)

This PR significantly enhances the model card for MedResearcher-R1-32B by:

  • Adding `library_name: transformers` to the metadata, which enables the automated code snippet on the Hugging Face Hub; compatibility is confirmed by the model's config.json (`Qwen2ForCausalLM` architecture, `transformers_version`, and `Qwen2Tokenizer` class). A sketch of the resulting snippet appears after this list.
  • Setting `pipeline_tag: text-generation` so the model is discoverable for relevant tasks at https://huggingface.co/models?pipeline_tag=text-generation, in line with its role as an LLM-based agent.
  • Integrating a more comprehensive description of the MedResearcher-R1 framework (key features, performance highlights, and the open-sourced dataset), sourced directly from the project's GitHub README, to give richer context for the model's capabilities.
  • Adding a "Quick start: Run Model for Evaluation" section with an sglang server setup, directly reflecting the usage instructions in the GitHub repository, to guide users in deploying and evaluating the model; a client-side sketch follows this list.
  • Correcting the BibTeX citation format for better parsing and consistency.
  • Including the Star History chart from the GitHub README to highlight community engagement.
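
For reference, once `library_name: transformers` is set, the Hub's automated snippet will look roughly like the sketch below. The repo id is an assumption (adjust it to the actual Hub path); the code uses only standard `transformers` APIs for a `Qwen2ForCausalLM` chat checkpoint.

```python
# Minimal sketch of the auto-generated transformers snippet; the repo id is an assumption.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "AQ-MedAI/MedResearcher-R1-32B"  # assumed Hub path, adjust as needed

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "What are the first-line treatments for type 2 diabetes?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```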

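The evaluation quick start centers on serving the model with sglang. As an illustrative sketch rather than the repository's exact instructions: after launching a server (e.g. `python -m sglang.launch_server --model-path <path-to-MedResearcher-R1-32B> --port 30000`), its OpenAI-compatible endpoint can be queried as below; the port, served model name, and sampling settings are assumptions.

```python
# Hypothetical client for a locally running sglang server; port and model name are assumptions.
from openai import OpenAI

# sglang exposes an OpenAI-compatible API under /v1; no real key is needed locally.
client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="MedResearcher-R1-32B",  # must match the name the server registers for the checkpoint
    messages=[{"role": "user", "content": "Summarize current evidence on statins for primary prevention."}],
    max_tokens=256,
    temperature=0.6,
)
print(response.choices[0].message.content)
```
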
These changes aim to provide a more informative and user-friendly experience for anyone interacting with the model on the Hub. The existing arXiv paper link and GitHub repository link are maintained.

AQ org

LGTM

m1ngcheng changed pull request status to merged
