Tags: Text Generation · Transformers · Safetensors · Basque · llama · conversational · text-generation-inference

Llama-3.1-8B-Instruct-Magpie_mix [BASELINE]

A fine-tuned version of Llama-3.1-8B-Instruct, created by instruction-tuning the base model on a mix of MagpieEU Basque instructions and Magpie-Llama-3.1-Pro-300K-Filtered English instructions.

📕 Paper: DIPLomA: Efficient Adaptation of Instructed LLMs to Low-Resource Languages via Post-Training Delta Merging
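
Usage

A minimal usage sketch with the Hugging Face transformers chat interface is shown below. It assumes the standard Llama 3.1 chat template shipped with the repository; the prompt, dtype, and generation settings are illustrative and should be adjusted to your setup.

```python
# Minimal usage sketch (illustrative): load the model and run one chat turn.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "orai-nlp/Llama-3.1-8B-Instruct-Magpie_mix"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # weights are published in BF16
    device_map="auto",
)

messages = [
    {"role": "user", "content": "Kaixo! Nor zara?"},  # example Basque prompt: "Hi! Who are you?"
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=False)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```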

License

This model inherits the Llama 3.1 Community License from its base model. Please review the license terms before use or redistribution.

Citation

If you use this model, please cite the following reference:

@inproceedings{sarasua-etal-2025-diploma,
    title = "{DIPL}om{A}: Efficient Adaptation of Instructed {LLM}s to Low-Resource Languages via Post-Training Delta Merging",
    author = "Sarasua, Ixak  and
      Corral, Ander  and
      Saralegi, Xabier",
    editor = "Christodoulopoulos, Christos  and
      Chakraborty, Tanmoy  and
      Rose, Carolyn  and
      Peng, Violet",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2025",
    month = nov,
    year = "2025",
    address = "Suzhou, China",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2025.findings-emnlp.1355/",
    pages = "24898--24912",
    ISBN = "979-8-89176-335-7",
    abstract = "This paper investigates how open-weight instruction-tuned large language models (LLMs) can be efficiently adapted to low-resource languages without requiring costly large-scale post-training. We introduce DIPLomA (Decoupled Instruction-Preserving Language Adaptation), a lightweight delta-based transfer strategy that provides a practical and effective solution for this scenario. DIPLomA decouples language adaptation from post-training alignment by first continually pretraining a foundational LLM on a modest amount of monolingual target-language data while anchoring on English replay, and then injecting instruction-following capabilities via delta-based weight merging from the instructed counterpart of the base LLM. We evaluate DIPLomA on Basque and validate its generality on Welsh and Swahili, demonstrating consistent and substantial gains in instruction-following, linguistic proficiency, and safety. Compared to strong baselines, our method achieves average relative improvements of 50 points in Basque, 63 in Welsh, and 51 in Swahili, while preserving the original model{'}s multilingual performance. These results highlight DIPLomA as an effective, resource-efficient strategy for bringing high-quality instruction alignment to underrepresented languages at scale."
}
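
For context, the delta-based weight merging described in the abstract can be sketched as follows. This is an illustrative outline only, not the authors' released implementation; the state-dict handling and the merge_instruction_delta helper are hypothetical, and this baseline model itself was produced by standard instruction tuning rather than delta merging.

```python
# Conceptual sketch of delta-based weight merging (illustrative, not the paper's
# exact procedure): add the instruction-tuning delta of the original base/instruct
# pair onto the language-adapted model's weights. Assumes all three checkpoints
# share identical parameter names and shapes.
import torch

def merge_instruction_delta(adapted_state, base_state, instruct_state, alpha=1.0):
    """adapted_state: weights after continual pretraining on the target language.
    base_state / instruct_state: the original base and instructed checkpoints.
    Returns a state dict with the instruction-following delta injected."""
    merged = {}
    for name, w_adapted in adapted_state.items():
        delta = instruct_state[name] - base_state[name]  # instruction-tuning delta
        merged[name] = w_adapted + alpha * delta
    return merged
```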

Contact

Model size: 8B parameters · Tensor type: BF16 (Safetensors)