---
base_model:
- GreatCaptainNemo/ProLLaMA
- dnagpt/llama-dna
- NousResearch/Llama-2-7b-hf
library_name: transformers
tags:
- mergekit
- merge
---
# llama_task_arithmetic

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [Task Arithmetic](https://arxiv.org/abs/2212.04089) merge method, with [NousResearch/Llama-2-7b-hf](https://huggingface.co/NousResearch/Llama-2-7b-hf) as the base model.

### Models Merged

The following models were included in the merge:

* [GreatCaptainNemo/ProLLaMA](https://huggingface.co/GreatCaptainNemo/ProLLaMA)
* [dnagpt/llama-dna](https://huggingface.co/dnagpt/llama-dna)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model: NousResearch/Llama-2-7b-hf
models:
  - model: GreatCaptainNemo/ProLLaMA
    parameters:
      weight: 0.3
  - model: dnagpt/llama-dna
    parameters:
      weight: 0.3
merge_method: task_arithmetic
dtype: float16
tokenizer_source: "base"
```
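To make the merge method concrete, here is a minimal sketch of task arithmetic on toy parameter vectors. It illustrates the core idea from the paper linked above, `merged = base + Σᵢ wᵢ · (finetunedᵢ − base)`, applied element-wise with the weights from the configuration (0.3 each); the toy values are invented for illustration and do not come from the actual models.

```python
# Task arithmetic on toy 1-D "parameter" lists (illustrative only):
# merged = base + sum_i( weight_i * (finetuned_i - base) )
def task_arithmetic(base, finetuned_models, weights):
    merged = list(base)
    for params, w in zip(finetuned_models, weights):
        for j, (p, b) in enumerate(zip(params, base)):
            merged[j] += w * (p - b)  # add the weighted task vector
    return merged

base = [1.0, 2.0]            # stand-in for the Llama-2-7b base weights
model_a = [1.5, 2.0]         # stand-in for a fine-tuned model (e.g. ProLLaMA)
model_b = [1.0, 3.0]         # stand-in for another (e.g. llama-dna)

merged = task_arithmetic(base, [model_a, model_b], [0.3, 0.3])
print(merged)  # base + 0.3*(0.5, 0.0) + 0.3*(0.0, 1.0)
```

In the real merge, mergekit applies this same update tensor-by-tensor across all model parameters rather than to small lists.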