---
base_model:
- unsloth/Llama-3.2-1B-Instruct
- diabolic6045/open-llama-3.2-1B-Instruct
- qingy2024/Benchmaxx-Llama-3.2-1B-Instruct
- Xiaojian9992024/Llama3.2-1B-THREADRIPPER-v0.2
- SkyOrbis/SKY-Ko-Llama3.2-1B-lora-epoch5
library_name: transformers
tags:
- mergekit
- merge
---

# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method, with [qingy2024/Benchmaxx-Llama-3.2-1B-Instruct](https://huggingface.co/qingy2024/Benchmaxx-Llama-3.2-1B-Instruct) as the base.

### Models Merged

The following models were included in the merge:

* [unsloth/Llama-3.2-1B-Instruct](https://huggingface.co/unsloth/Llama-3.2-1B-Instruct)
* [diabolic6045/open-llama-3.2-1B-Instruct](https://huggingface.co/diabolic6045/open-llama-3.2-1B-Instruct)
* [Xiaojian9992024/Llama3.2-1B-THREADRIPPER-v0.2](https://huggingface.co/Xiaojian9992024/Llama3.2-1B-THREADRIPPER-v0.2)
* [SkyOrbis/SKY-Ko-Llama3.2-1B-lora-epoch5](https://huggingface.co/SkyOrbis/SKY-Ko-Llama3.2-1B-lora-epoch5)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: SkyOrbis/SKY-Ko-Llama3.2-1B-lora-epoch5
  - model: unsloth/Llama-3.2-1B-Instruct
  - model: Xiaojian9992024/Llama3.2-1B-THREADRIPPER-v0.2
  - model: diabolic6045/open-llama-3.2-1B-Instruct
base_model: qingy2024/Benchmaxx-Llama-3.2-1B-Instruct
merge_method: model_stock
dtype: bfloat16
parameters:
  t: [0, 0.5, 1, 0.5, 0]
```
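For intuition about what the merge method does, here is a rough NumPy sketch of the Model Stock idea as described in the paper linked above: the fine-tuned checkpoints are averaged, then interpolated back toward the base model with a ratio derived from the angle between their weight deltas. This is an illustrative simplification (operating on flat weight vectors rather than per-layer tensors), not mergekit's actual implementation.

```python
import numpy as np

def model_stock_merge(base, finetuned, eps=1e-8):
    """Sketch of the Model Stock merge: interpolate between the base
    weights and the average of the fine-tuned weights, with ratio t
    computed from the average pairwise cosine similarity of the
    fine-tuned deltas (w_i - w_0)."""
    deltas = [w - base for w in finetuned]
    n = len(deltas)
    # Average pairwise cosine similarity between the deltas.
    sims = []
    for i in range(n):
        for j in range(i + 1, n):
            a, b = deltas[i].ravel(), deltas[j].ravel()
            sims.append(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + eps))
    cos_theta = float(np.mean(sims)) if sims else 1.0
    # Interpolation ratio from the paper: t = N*cos(theta) / (1 + (N-1)*cos(theta))
    t = n * cos_theta / (1 + (n - 1) * cos_theta)
    w_avg = sum(finetuned) / n
    return t * w_avg + (1 - t) * base
```

When the fine-tuned deltas all point the same way (cosine similarity near 1), t approaches 1 and the result is close to the plain average; when they are nearly orthogonal, t shrinks and the merge stays close to the base weights.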