Update README.md
README.md CHANGED

---
license: apache-2.0
tags:
- merge
- mergekit
- lazzymergekit
- Qwen2
- Coder
- Math
- Bunnycore
- Instruct
- OpenBookQA
- instruction-following
- long-form-generation
base_model:
- unsloth/Qwen2.5-Coder-1.5B-Instruct
---

# **ZeroXClem/Qwen2.5-1.5B-Instruct-Coder-Math-Bunnycore-Fusion**

**ZeroXClem/Qwen2.5-1.5B-Instruct-Coder-Math-Bunnycore-Fusion** is a merged model that combines **instruction-following**, **coding**, **mathematical reasoning**, and **factual question-answering** strengths from several Qwen2.5-1.5B fine-tunes. It is designed to perform well across diverse technical, creative, and interactive tasks.

## 🌟 **Family Tree**

This model is a fusion of the following models:

- [**cyixiao/qwen-1.5B-openbookqa**](https://huggingface.co/cyixiao/qwen-1.5B-openbookqa)
- [**unsloth/Qwen2.5-Coder-1.5B-Instruct**](https://huggingface.co/unsloth/Qwen2.5-Coder-1.5B-Instruct)
- [**Qwen/Qwen2.5-Math-1.5B-Instruct**](https://huggingface.co/Qwen/Qwen2.5-Math-1.5B-Instruct)
- [**bunnycore/Qwen2.5-1.5B-Matrix**](https://huggingface.co/bunnycore/Qwen2.5-1.5B-Matrix)
- [**Syed-Hasan-8503/Qwen2.5-1.5B-Instruct-WO-Adam-mini**](https://huggingface.co/Syed-Hasan-8503/Qwen2.5-1.5B-Instruct-WO-Adam-mini)
- [**Goekdeniz-Guelmez/Josiefied-Qwen2.5-1.5B-Instruct-abliterated-v3**](https://huggingface.co/Goekdeniz-Guelmez/Josiefied-Qwen2.5-1.5B-Instruct-abliterated-v3)

Together, these models yield a versatile checkpoint that performs well across multiple domains.

---

## 🧬 **Detailed Model Lineage**

### **A: cyixiao/qwen-1.5B-openbookqa**

- Focuses on factual knowledge and reasoning drawn from the OpenBookQA dataset, providing strong question-answering capabilities.

### **B: unsloth/Qwen2.5-Coder-1.5B-Instruct**

- Tailored for **coding** and **instruction-following**, this model strengthens code generation and the ability to follow precise instructions.

### **C: Qwen/Qwen2.5-Math-1.5B-Instruct**

- Specializes in **mathematical reasoning** and logical problem-solving, making it well suited to structured tasks that require careful, step-by-step thinking.

### **D: bunnycore/Qwen2.5-1.5B-Matrix**

- A multi-purpose model that blends **instruction**, **math**, and **coding**, providing well-rounded performance in both structured and creative tasks.

### **E: Syed-Hasan-8503/Qwen2.5-1.5B-Instruct-WO-Adam-mini**

- Fine-tuned on conversational and identity-specific tasks, it helps the merged model handle **conversation-heavy** workloads with clarity.

### **F: Goekdeniz-Guelmez/Josiefied-Qwen2.5-1.5B-Instruct-abliterated-v3**

- Brings **uncensored** capabilities, keeping the merged model flexible in open-ended and unrestricted instruction-following scenarios.

---

## 🛠️ **Merge Details**

The model was merged with the **DELLA merge method** in **bfloat16** precision, using **unsloth/Qwen2.5-Coder-1.5B-Instruct** as the base model. Here's the configuration used for the merge (portions collapsed in the diff are abridged with `# …`):

```yaml
merge_method: della
# … (lines collapsed in the diff view)
parameters:
  lambda: 1.0
  normalize: true

base_model: unsloth/Qwen2.5-Coder-1.5B-Instruct

models:
  - model: cyixiao/qwen-1.5B-openbookqa
# … (the remaining model entries are collapsed in the diff view)
    parameters:
      weight: 1
      density: 0.5
```
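
A merge like this can be reproduced locally with [mergekit](https://github.com/arcee-ai/mergekit). The snippet below is a minimal sketch based on mergekit's documented Python entry point; the exact API may differ between versions, the file and output paths are placeholders, and the abridged YAML above is not a complete config on its own.

```python
# Illustrative sketch: running a mergekit merge from Python.
# Assumes `pip install mergekit` and a complete DELLA config saved as config.yaml.
import torch
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

CONFIG_PATH = "config.yaml"     # placeholder: full merge config from the model card
OUTPUT_PATH = "./fusion-merge"  # placeholder: where the merged weights are written

with open(CONFIG_PATH, "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path=OUTPUT_PATH,
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # merge on GPU if one is available
        copy_tokenizer=True,             # copy the base model's tokenizer to the output
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)
```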

---

## 🎯 **Key Features & Capabilities**

### **1. Coding and Instruction Following**

The model excels at technical coding tasks and precise instruction following, thanks to contributions from **Qwen2.5-Coder** and **Qwen2.5-1.5B-Matrix**.

### **2. Mathematical Reasoning**

With **Qwen2.5-Math-1.5B-Instruct** in the mix, the model is well suited to solving complex **mathematical problems** and structured logical tasks.

### **3. Conversational Abilities**

Through **Syed-Hasan-8503/Qwen2.5-1.5B-Instruct-WO-Adam-mini**, which was fine-tuned on conversation and identity tasks, the model handles complex dialogue and multi-turn exchanges with clarity.

### **4. Uncensored Versatility**

Thanks to **Josiefied-Qwen2.5**, the model can operate with minimal restrictions, making it well suited to **open-ended instruction-following**.

---
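
## 🚀 **Usage (Quick-Start Sketch)**

The merged model can be loaded like any other Qwen2.5 chat model with 🤗 Transformers. This is a minimal sketch, assuming the repository ID above is live on the Hugging Face Hub and that `transformers`, `torch`, and `accelerate` are installed; the prompt and generation settings are illustrative only.

```python
# Minimal inference sketch with Hugging Face Transformers.
# The repository ID, prompt, and sampling settings are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ZeroXClem/Qwen2.5-1.5B-Instruct-Coder-Math-Bunnycore-Fusion"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # the card reports a bfloat16 merge
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a helpful coding and math assistant."},
    {"role": "user", "content": "Write a Python function that returns the n-th Fibonacci number, then state its time complexity."},
]

# Qwen2.5 chat models ship a chat template, so apply_chat_template builds the prompt.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(
    input_ids, max_new_tokens=512, do_sample=True, temperature=0.7, top_p=0.9
)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

---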

## 📜 **License**

This model is open-sourced under the **Apache-2.0 License**. You are free to use and modify it, provided you retain the license and give proper attribution.

---

## 💡 **Tags**

- `merge`
- `Qwen`
- `Coder`
- `Math`
- `Bunnycore`
- `instruction-following`
- `long-form-generation`

---