Upload folder using huggingface_hub
- .gitattributes +12 -0
- Qwen-Qwen2.5-Coder-14B-IQ4_XS.gguf +3 -0
- Qwen-Qwen2.5-Coder-14B-Q2_K.gguf +3 -0
- Qwen-Qwen2.5-Coder-14B-Q3_K_L.gguf +3 -0
- Qwen-Qwen2.5-Coder-14B-Q3_K_M.gguf +3 -0
- Qwen-Qwen2.5-Coder-14B-Q3_K_S.gguf +3 -0
- Qwen-Qwen2.5-Coder-14B-Q4_K_M.gguf +3 -0
- Qwen-Qwen2.5-Coder-14B-Q4_K_S.gguf +3 -0
- Qwen-Qwen2.5-Coder-14B-Q5_K_M.gguf +3 -0
- Qwen-Qwen2.5-Coder-14B-Q5_K_S.gguf +3 -0
- Qwen-Qwen2.5-Coder-14B-Q6_K.gguf +3 -0
- Qwen-Qwen2.5-Coder-14B-Q8_0.gguf +3 -0
- README.md +47 -0
- featherless-quants.png +3 -0
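The commit title points to the `huggingface_hub` Python client. Below is a minimal sketch of how a folder upload like this one is typically issued with `HfApi.upload_folder`; the local folder path is an assumption for illustration, while the repo id matches the links in the README added further down.

```python
from huggingface_hub import HfApi

api = HfApi()  # authenticates via `huggingface-cli login` / HF_TOKEN by default

# Push every file in the local folder to the model repo in a single commit.
# The folder path is hypothetical; the repo id is taken from the README links.
api.upload_folder(
    folder_path="./Qwen-Qwen2.5-Coder-14B-GGUF",
    repo_id="featherless-ai-quants/Qwen-Qwen2.5-Coder-14B-GGUF",
    repo_type="model",
    commit_message="Upload folder using huggingface_hub",
)
```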
.gitattributes CHANGED
@@ -33,3 +33,15 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
+Qwen-Qwen2.5-Coder-14B-IQ4_XS.gguf filter=lfs diff=lfs merge=lfs -text
+Qwen-Qwen2.5-Coder-14B-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
+Qwen-Qwen2.5-Coder-14B-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
+Qwen-Qwen2.5-Coder-14B-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+Qwen-Qwen2.5-Coder-14B-Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+Qwen-Qwen2.5-Coder-14B-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+Qwen-Qwen2.5-Coder-14B-Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+Qwen-Qwen2.5-Coder-14B-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+Qwen-Qwen2.5-Coder-14B-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+Qwen-Qwen2.5-Coder-14B-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+Qwen-Qwen2.5-Coder-14B-Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
+featherless-quants.png filter=lfs diff=lfs merge=lfs -text
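Each binary added in this commit is routed to Git LFS by a `filter=lfs diff=lfs merge=lfs -text` attribute line like the ones above. Below is a small sketch of a hypothetical helper (not part of the actual upload code) that emits the same `.gitattributes` entries for a set of local files.

```python
from pathlib import Path

LFS_ATTR = "filter=lfs diff=lfs merge=lfs -text"

def lfs_attribute_lines(filenames):
    """Build .gitattributes entries that route the given files through Git LFS."""
    return [f"{name} {LFS_ATTR}" for name in filenames]

# Hypothetical usage: track every local .gguf plus the banner image.
files = sorted(p.name for p in Path(".").glob("*.gguf")) + ["featherless-quants.png"]
with Path(".gitattributes").open("a", encoding="utf-8") as fh:
    fh.write("\n".join(lfs_attribute_lines(files)) + "\n")
```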
Qwen-Qwen2.5-Coder-14B-IQ4_XS.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a43d682989403e77191fbe9a61144a28e0f81209e9abf60dcd52b93369d49c4e
+size 8186195744
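The `+3 -0` GGUF entries in this commit are Git LFS pointer files rather than the weights themselves: three text lines carrying the pointer spec version, the SHA-256 of the actual blob, and its size in bytes. A minimal parsing sketch using the IQ4_XS pointer above; dividing the byte count by 1024² yields the ~7806.96 MB figure quoted in the README table further down.

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file into its version, oid, and size fields."""
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    return {
        "version": fields["version"],
        "oid": fields["oid"],              # e.g. "sha256:a43d6829..."
        "size_bytes": int(fields["size"]),
    }

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:a43d682989403e77191fbe9a61144a28e0f81209e9abf60dcd52b93369d49c4e
size 8186195744
"""
info = parse_lfs_pointer(pointer)
print(info["size_bytes"] / 1024**2)  # ≈ 7806.96 MiB, the size shown in the README table
```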
Qwen-Qwen2.5-Coder-14B-Q2_K.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:916846548ff29cc83d05421ed56d6255680d24a1069e0b7ba3806927e2941489
+size 5770497824
Qwen-Qwen2.5-Coder-14B-Q3_K_L.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:38c37c33986e9f146316ff4852d1554071ecfc73362dad4a0575053faeaac80e
+size 7924768544
Qwen-Qwen2.5-Coder-14B-Q3_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:57d3dd2ed701993ed89a28a91de8c2d862905850d7fe9355b12d7632386775f0
+size 7339204384
Qwen-Qwen2.5-Coder-14B-Q3_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:62b7e5129a5c4c6c4ca75fb6966053a0d1cc6d40fe0ebfb90b3b0041fffed275
+size 6659596064
Qwen-Qwen2.5-Coder-14B-Q4_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:277d650977027810effe99addd42180d1b681a9438d26545d0e2f855fd10dccc
+size 8988110624
Qwen-Qwen2.5-Coder-14B-Q4_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b30d0e108f897dd6236758c2544343e0c55b16dc2adf45c4d64ad802f777ae6d
+size 8573431584
Qwen-Qwen2.5-Coder-14B-Q5_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b32d9dc4461abe012861be5c98b00fff50cc002713098bc8975e3829a90f7866
+size 10508873504
Qwen-Qwen2.5-Coder-14B-Q5_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:68f2c09649c0e449cf3d8f729aa319b957fccc30bd9c850e8995e01e42cb722b
+size 10266554144
Qwen-Qwen2.5-Coder-14B-Q6_K.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5d2994553b840f61237782fa1cc1abd51b617d882fe98df0a6899030edf07799
+size 12124684064
Qwen-Qwen2.5-Coder-14B-Q8_0.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1927fa39c01a9e63f5799d4753526fc05ae75a78a290b910620a1932bf70e90e
+size 15701597984
README.md ADDED
@@ -0,0 +1,47 @@
+---
+base_model: Qwen/Qwen2.5-Coder-14B
+pipeline_tag: text-generation
+quantized_by: featherless-ai-quants
+---
+
+# Qwen/Qwen2.5-Coder-14B GGUF Quantizations 🚀
+
+![Featherless AI Quants](./featherless-quants.png)
+
+*Optimized GGUF quantization files for enhanced model performance*
+
+> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
+---
+
+## Available Quantizations 📊
+
+| Quantization Type | File | Size |
+|-------------------|------|------|
+| IQ4_XS | [Qwen-Qwen2.5-Coder-14B-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/Qwen-Qwen2.5-Coder-14B-GGUF/blob/main/Qwen-Qwen2.5-Coder-14B-IQ4_XS.gguf) | 7806.96 MB |
+| Q2_K | [Qwen-Qwen2.5-Coder-14B-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/Qwen-Qwen2.5-Coder-14B-GGUF/blob/main/Qwen-Qwen2.5-Coder-14B-Q2_K.gguf) | 5503.18 MB |
+| Q3_K_L | [Qwen-Qwen2.5-Coder-14B-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/Qwen-Qwen2.5-Coder-14B-GGUF/blob/main/Qwen-Qwen2.5-Coder-14B-Q3_K_L.gguf) | 7557.65 MB |
+| Q3_K_M | [Qwen-Qwen2.5-Coder-14B-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/Qwen-Qwen2.5-Coder-14B-GGUF/blob/main/Qwen-Qwen2.5-Coder-14B-Q3_K_M.gguf) | 6999.21 MB |
+| Q3_K_S | [Qwen-Qwen2.5-Coder-14B-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/Qwen-Qwen2.5-Coder-14B-GGUF/blob/main/Qwen-Qwen2.5-Coder-14B-Q3_K_S.gguf) | 6351.09 MB |
+| Q4_K_M | [Qwen-Qwen2.5-Coder-14B-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/Qwen-Qwen2.5-Coder-14B-GGUF/blob/main/Qwen-Qwen2.5-Coder-14B-Q4_K_M.gguf) | 8571.73 MB |
+| Q4_K_S | [Qwen-Qwen2.5-Coder-14B-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/Qwen-Qwen2.5-Coder-14B-GGUF/blob/main/Qwen-Qwen2.5-Coder-14B-Q4_K_S.gguf) | 8176.26 MB |
+| Q5_K_M | [Qwen-Qwen2.5-Coder-14B-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/Qwen-Qwen2.5-Coder-14B-GGUF/blob/main/Qwen-Qwen2.5-Coder-14B-Q5_K_M.gguf) | 10022.04 MB |
+| Q5_K_S | [Qwen-Qwen2.5-Coder-14B-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/Qwen-Qwen2.5-Coder-14B-GGUF/blob/main/Qwen-Qwen2.5-Coder-14B-Q5_K_S.gguf) | 9790.95 MB |
+| Q6_K | [Qwen-Qwen2.5-Coder-14B-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/Qwen-Qwen2.5-Coder-14B-GGUF/blob/main/Qwen-Qwen2.5-Coder-14B-Q6_K.gguf) | 11563.00 MB |
+| Q8_0 | [Qwen-Qwen2.5-Coder-14B-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/Qwen-Qwen2.5-Coder-14B-GGUF/blob/main/Qwen-Qwen2.5-Coder-14B-Q8_0.gguf) | 14974.21 MB |
+
+
+---
+
+## ⚡ Powered by [Featherless AI](https://featherless.ai)
+
+### Key Features
+
+- 🔥 **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
+- 🛠️ **Zero Infrastructure** - No server setup or maintenance required
+- 📚 **Vast Compatibility** - Support for 2400+ models and counting
+- 💎 **Affordable Pricing** - Starting at just $10/month
+
+---
+
+**Links:**
+[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models)
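To use one of the quantizations listed in the README above with a local GGUF runtime, a single file can be fetched with `huggingface_hub`; the Q4_K_M pick below is only an example.

```python
from huggingface_hub import hf_hub_download

# Downloads the file into the local Hugging Face cache and returns its path.
gguf_path = hf_hub_download(
    repo_id="featherless-ai-quants/Qwen-Qwen2.5-Coder-14B-GGUF",
    filename="Qwen-Qwen2.5-Coder-14B-Q4_K_M.gguf",
)
print(gguf_path)  # pass this path to llama.cpp or any other GGUF-compatible loader
```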
featherless-quants.png ADDED
Git LFS Details