Interplay-LM Extrapolation RL Models

This repository is organized by experiment setting. Each top-level directory corresponds to one pretraining mixture used in the extrapolation experiments.

Within each setting:

  • base/ stores the base model used to initialize RL.
  • rl/ stores the final RL checkpoints for each experiment variant.
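
As a concrete sketch, one setting resolves to a layout like the following (the variant name op11-14_uniform is taken from the Load example below; each rl/ directory may hold several such variants):

id2-10_0.5easy_0.3medium_0.2hard/
  base/                  (base model used to initialize RL)
  rl/
    op11-14_uniform/     (one RL experiment variant)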

Only inference-relevant Hugging Face files (model config, tokenizer files, and weights) are included; training artifacts such as optimizer states are omitted.

Included settings

  • id2-10_0.2easy_0.3medium_0.5hard
  • id2-10_0.5easy_0.3medium_0.2hard
  • id2-10_0.4995easy_0.4995medium_0.001hard
  • id2-10_0.475easy_0.475medium_0.05hard
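
Each setting name encodes its pretraining data mixture: an id2-10 prefix followed by the proportions of easy, medium, and hard examples, which sum to 1. A minimal parsing sketch (the parse_setting helper is illustrative, not part of this repository):

import re

def parse_setting(name):
    # Split a setting name such as "id2-10_0.5easy_0.3medium_0.2hard"
    # into its prefix and the easy/medium/hard mixture proportions.
    prefix, *parts = name.split("_")
    mixture = {}
    for part in parts:
        value, level = re.fullmatch(r"([\d.]+)(easy|medium|hard)", part).groups()
        mixture[level] = float(value)
    return prefix, mixture

print(parse_setting("id2-10_0.5easy_0.3medium_0.2hard"))
# ('id2-10', {'easy': 0.5, 'medium': 0.3, 'hard': 0.2})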

Load

from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "Interplay-LM-Reasoning/extrapolation_rl"
# Pick a setting and checkpoint; here, one RL variant of the 0.5/0.3/0.2 mixture.
subdir = "id2-10_0.5easy_0.3medium_0.2hard/rl/op11-14_uniform"
# To load the setting's base model instead, use
# "id2-10_0.5easy_0.3medium_0.2hard/base".

tokenizer = AutoTokenizer.from_pretrained(repo_id, subfolder=subdir)
model = AutoModelForCausalLM.from_pretrained(repo_id, subfolder=subdir)
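
A quick generation smoke test on the loaded checkpoint (the prompt below is illustrative; check the tokenizer for a chat template if one is expected):

# Illustrative prompt; adjust to the format the checkpoint was trained on.
inputs = tokenizer("2 + 3 * 4 =", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))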

Reference

  • Zhang, Charlie; Neubig, Graham; Yue, Xiang. "On the Interplay of Pre-Training, Mid-Training, and RL on Reasoning Language Models." arXiv:2512.07783 (2025).