nielsr (HF Staff) committed

Commit 2369f05 · verified · 1 parent: 23e6661

Improve model card with metadata and links


This PR updates the model card to enhance its discoverability and provide essential information.

Key changes include:
- Adding `pipeline_tag: text-generation` for better categorization.
- Adding `library_name: transformers` to enable the automated "how to use" widget; the `config.json` and `tokenizer_config.json` files confirm compatibility with the `Qwen2ForCausalLM` architecture and the `transformers` library.
- Linking the model to its official paper: [Reinforce-Ada: An Adaptive Sampling Framework for Reinforce-Style LLM Training](https://huggingface.co/papers/2510.04996).
- Adding a direct link to the associated [GitHub repository](https://github.com/RLHFlow/Reinforce-Ada).

These updates provide users with clearer context and instructions for the model.
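With `library_name: transformers` declared, the checkpoint can be loaded through the standard `transformers` auto classes. A minimal sketch is below; the repo id is a placeholder, not taken from this commit, so substitute the model's actual Hub id.

```python
# Minimal sketch of loading this checkpoint via the `transformers` API that the
# new `library_name` metadata points to. "ORG/MODEL" is a hypothetical
# placeholder -- replace it with this model's actual Hub repo id.
from transformers import AutoModelForCausalLM, AutoTokenizer


def load_checkpoint(repo_id: str):
    """Load a Qwen2ForCausalLM checkpoint and its tokenizer from the Hub."""
    tokenizer = AutoTokenizer.from_pretrained(repo_id)
    model = AutoModelForCausalLM.from_pretrained(repo_id)
    return tokenizer, model


if __name__ == "__main__":
    # Example call (downloads weights; requires network access):
    # tokenizer, model = load_checkpoint("ORG/MODEL")
    pass
```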

Files changed (1):
  1. README.md (+6, −1)
README.md CHANGED
```diff
@@ -1,4 +1,9 @@
 ---
 license: apache-2.0
+pipeline_tag: text-generation
+library_name: transformers
 ---
-Checkpoint from step=400 and trained on the [hard prompt set](https://huggingface.co/datasets/RLHFlow/reinforce_ada_hard_prompt).
+
+This model is a checkpoint trained using the Reinforce-Ada framework, as described in the paper [Reinforce-Ada: An Adaptive Sampling Framework for Reinforce-Style LLM Training](https://huggingface.co/papers/2510.04996). It is a `Qwen2ForCausalLM` model (as indicated in `config.json`), specifically a checkpoint from step=400, fine-tuned on the [hard prompt set](https://huggingface.co/datasets/RLHFlow/reinforce_ada_hard_prompt).
+
+The official code repository for Reinforce-Ada can be found on [GitHub](https://github.com/RLHFlow/Reinforce-Ada).
```