s-emanuilov committed on
Commit 297fa18 · verified · 1 Parent(s): bdecc34

Update README.md

Files changed (1)
  1. README.md +4 -4
README.md CHANGED
@@ -36,9 +36,9 @@ Available in three sizes with full models, LoRA adapters, and quantized GGUF var
 
 | Model Size | Full Model | LoRA Adapter | GGUF (Quantized) |
 |------------|------------|--------------|------------------|
-| **2.6B** | [Tucan--2.6B-v1.0](https://huggingface.co/s-emanuilov/Tucan--2.6B-v1.0)| [LoRA](https://huggingface.co/s-emanuilov/Tucan--2.6B-v1.0-LoRA) 📍| [GGUF](https://huggingface.co/s-emanuilov/Tucan--2.6B-v1.0-GGUF) |
-| **9B** | [Tucan--9B-v1.0](https://huggingface.co/s-emanuilov/Tucan--9B-v1.0) | [LoRA](https://huggingface.co/s-emanuilov/Tucan--9B-v1.0-LoRA) | [GGUF](https://huggingface.co/s-emanuilov/Tucan--9B-v1.0-GGUF) |
-| **27B** | [Tucan--27B-v1.0](https://huggingface.co/s-emanuilov/Tucan--27B-v1.0) | [LoRA](https://huggingface.co/s-emanuilov/Tucan--27B-v1.0-LoRA) | [GGUF](https://huggingface.co/s-emanuilov/Tucan--27B-v1.0-GGUF) |
+| **2.6B** | [Tucan-2.6B-v1.0](https://huggingface.co/s-emanuilov/Tucan-2.6B-v1.0)| [LoRA](https://huggingface.co/s-emanuilov/Tucan-2.6B-v1.0-LoRA) 📍| [GGUF](https://huggingface.co/s-emanuilov/Tucan-2.6B-v1.0-GGUF) |
+| **9B** | [Tucan-9B-v1.0](https://huggingface.co/s-emanuilov/Tucan-9B-v1.0) | [LoRA](https://huggingface.co/s-emanuilov/Tucan-9B-v1.0-LoRA) | [GGUF](https://huggingface.co/s-emanuilov/Tucan-9B-v1.0-GGUF) |
+| **27B** | [Tucan-27B-v1.0](https://huggingface.co/s-emanuilov/Tucan-27B-v1.0) | [LoRA](https://huggingface.co/s-emanuilov/Tucan-27B-v1.0-LoRA) | [GGUF](https://huggingface.co/s-emanuilov/Tucan-27B-v1.0-GGUF) |
 
 *GGUF variants include: q4_k_m, q5_k_m, q6_k, q8_0, q4_0 quantizations*
 
@@ -91,7 +91,7 @@ import json
 from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig
 
 # Load model
-model_name = "s-emanuilov/Tucan--2.6B-v1.0"
+model_name = "s-emanuilov/Tucan-2.6B-v1.0"
 tokenizer = AutoTokenizer.from_pretrained(model_name)
 model = AutoModelForCausalLM.from_pretrained(
     model_name,
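
The second hunk cuts off at the hunk boundary, mid-call. For context, here is a minimal sketch of what the completed load looks like with the corrected repo ID; the `torch_dtype` and `device_map` keyword arguments are illustrative assumptions, not taken from this commit:

```python
# Sketch of the loading snippet the diff truncates.
# torch_dtype / device_map are assumed defaults, not from the README.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "s-emanuilov/Tucan-2.6B-v1.0"  # corrected single-dash repo ID
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,  # assumption: bf16 halves memory vs. fp32
    device_map="auto",           # assumption: requires the `accelerate` package
)
```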
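For the GGUF variants listed in the table, a quantized file can be pulled straight from the `-GGUF` repo with llama-cpp-python. A minimal sketch follows; the `*q4_k_m.gguf` filename glob is an assumption about the repo's file naming, so check the repo's file list before running:

```python
# Sketch: load a quantized Tucan GGUF via llama-cpp-python.
# The filename glob is an assumption; verify against the -GGUF repo.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="s-emanuilov/Tucan-2.6B-v1.0-GGUF",
    filename="*q4_k_m.gguf",  # glob matching the q4_k_m quantization
    n_ctx=4096,               # context window; adjust as needed
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Здравей! Какво можеш да правиш?"}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```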