danielhanchen committed
Commit 4953d73 · verified · 1 Parent(s): d624866

Upload folder using huggingface_hub
README.md ADDED
@@ -0,0 +1,199 @@
1
+ ---
2
+ tags:
3
+ - unsloth
4
+ license: apache-2.0
5
+ base_model:
6
+ - allenai/Olmo-3-7B-Instruct
7
+ language:
8
+ - en
9
+ ---
10
+ <div>
11
+ <p style="margin-top: 0;margin-bottom: 0;">
12
+ <em><a href="https://docs.unsloth.ai/basics/unsloth-dynamic-v2.0-gguf">Unsloth Dynamic 2.0</a> achieves superior accuracy & outperforms other leading quants.</em>
13
+ </p>
14
+ <div style="display: flex; gap: 5px; align-items: center; ">
15
+ <a href="https://github.com/unslothai/unsloth/">
16
+ <img src="https://github.com/unslothai/unsloth/raw/main/images/unsloth%20new%20logo.png" width="133">
17
+ </a>
18
+ <a href="https://discord.gg/unsloth">
19
+ <img src="https://github.com/unslothai/unsloth/raw/main/images/Discord%20button.png" width="173">
20
+ </a>
21
+ <a href="https://docs.unsloth.ai/">
22
+ <img src="https://raw.githubusercontent.com/unslothai/unsloth/refs/heads/main/images/documentation%20green%20button.png" width="143">
23
+ </a>
24
+ </div>
25
+ </div>
26
+
27
+
28
+ ## Model Details
29
+ <img alt="OLMo Logo" src="https://cdn-uploads.huggingface.co/production/uploads/65316953791d5a2611426c20/nC44-uxMD6J6H3OHxRtVU.png" width="242px" style="margin-left:'auto' margin-right:'auto' display:'block'">
30
+
31
+
32
+ # Model Card for Olmo 3 7B Instruct
33
+
34
+ We introduce Olmo 3, a new family of 7B and 32B models, each available in Instruct and Think variants. Long chain-of-thought thinking improves performance on reasoning tasks such as math and coding.
35
+
36
+ Olmo is a series of **O**pen **l**anguage **mo**dels designed to enable the science of language models.
37
+ These models are pre-trained on the Dolma 3 dataset and post-trained on the Dolci datasets. We are releasing all code, checkpoints, logs (coming soon), and associated training details.
38
+
39
+
40
+
41
+ The core models released in this batch include the following:
42
+
43
+ | **Stage** | **Olmo 3 7B Think** | **Olmo 3 32B Think** | **Olmo 3 7B Instruct** |
44
+ |--------------------------|-----------------------|------------------------|---------------------------|
45
+ | **Base Model** | [Olmo-3-7B](https://huggingface.co/allenai/Olmo-3-1025-7B) | [Olmo-3-32B](https://huggingface.co/allenai/Olmo-3-1125-32B) | [Olmo-3-7B](https://huggingface.co/allenai/Olmo-3-1025-7B) |
46
+ | **SFT** | [Olmo-3-7B-Think-SFT](https://huggingface.co/allenai/Olmo-3-7B-Think-SFT) | [Olmo-3-32B-Think-SFT](https://huggingface.co/allenai/Olmo-3-32B-Think-SFT) | [Olmo-3-7B-Instruct-SFT](https://huggingface.co/allenai/Olmo-3-7B-Instruct-SFT) |
47
+ | **DPO** | [Olmo-3-7B-Think-DPO](https://huggingface.co/allenai/Olmo-3-7B-Think-DPO) | [Olmo-3-32B-Think-DPO](https://huggingface.co/allenai/Olmo-3-32B-Think-DPO) | [Olmo-3-7B-Instruct-DPO](https://huggingface.co/allenai/Olmo-3-7B-Instruct-DPO) |
48
+ | **Final Models (RLVR)** | [Olmo-3-7B-Think](https://huggingface.co/allenai/Olmo-3-7B-Think) | [Olmo-3-32B-Think](https://huggingface.co/allenai/Olmo-3-32B-Think) | [Olmo-3-7B-Instruct](https://huggingface.co/allenai/Olmo-3-7B-Instruct) |
49
+
50
+
51
+ ## Installation
52
+
53
+ Olmo 3 is supported in transformers 4.57.0 or higher:
54
+ ```bash
55
+ pip install "transformers>=4.57.0"
56
+ ```
57
+
58
+ ## Inference
59
+
60
+ You can use OLMo with the standard HuggingFace transformers library:
61
+ ```python
62
+ from transformers import AutoModelForCausalLM, AutoTokenizer
63
+ olmo = AutoModelForCausalLM.from_pretrained("allenai/Olmo-3-7B-Instruct")
64
+ tokenizer = AutoTokenizer.from_pretrained("allenai/Olmo-3-7B-Instruct")
65
+ message = ["Who would win in a fight - a dinosaur or a cow named Moo Moo?"]
66
+ inputs = tokenizer(message, return_tensors='pt', return_token_type_ids=False)
67
+ # optional: move the model and inputs to CUDA
68
+ # inputs = {k: v.to('cuda') for k,v in inputs.items()}
69
+ # olmo = olmo.to('cuda')
70
+ response = olmo.generate(**inputs, max_new_tokens=100, do_sample=True, top_k=50, top_p=0.95)
71
+ print(tokenizer.batch_decode(response, skip_special_tokens=True)[0])
72
+ # >> 'This is a fun and imaginative question! Let’s break it down...'
73
+ ```
74
+
75
+ For faster performance, you can load the model in 8-bit as follows:
76
+ ```python
77
+ import torch
+ olmo = AutoModelForCausalLM.from_pretrained("allenai/Olmo-3-7B-Instruct",
78
+ torch_dtype=torch.float16,
79
+ load_in_8bit=True) # Requires bitsandbytes
80
+ ```
81
+ The quantized model is more sensitive to data types and CUDA operations. To avoid potential issues, it's recommended to move the inputs to CUDA before generation:
82
+ ```python
83
+ inputs = {k: v.to('cuda') for k, v in inputs.items()}
84
+ ```
85
+
86
+ We have released checkpoints for these models. For post-training, the naming convention is `step_XXXX`.
87
+
88
+
89
+ To load a specific model revision with HuggingFace, simply add the argument `revision`:
90
+ ```python
91
+ olmo = AutoModelForCausalLM.from_pretrained("allenai/Olmo-3-7B-Instruct", revision="step_300")
92
+ ```
93
+
94
+ Or, you can access all the revisions for the models via the following code snippet:
95
+ ```python
96
+ from huggingface_hub import list_repo_refs
97
+ out = list_repo_refs("allenai/Olmo-3-7B-Instruct")
98
+ branches = [b.name for b in out.branches]
99
+ ```
100
+
101
+ ## Chat Template
102
+
103
+ ### Default System Message
104
+ The default system prompt for this model is:
105
+ ```
106
+ <|im_start|>system
107
+ You are a helpful function-calling AI assistant.
108
+ You do not currently have access to any functions. <functions></functions><|im_end|>
109
+ ```
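+
+ When a `tools` list is supplied, the chat template instead emits a tool-use system prompt and serializes the function signatures inside `<functions></functions>` XML tags. Below is a minimal, hedged sketch of how this renders; the `get_weather` function is a hypothetical example, not part of the model release:
+ ```python
+ from transformers import AutoTokenizer
+
+ tokenizer = AutoTokenizer.from_pretrained("allenai/Olmo-3-7B-Instruct")
+
+ def get_weather(city: str) -> str:
+     """Get the current weather for a city.
+
+     Args:
+         city: Name of the city to look up.
+     """
+     return "sunny"
+
+ messages = [{"role": "user", "content": "What's the weather in Seattle?"}]
+ # transformers converts the typed, documented Python function into a JSON schema,
+ # which the template dumps into the <functions>...</functions> block.
+ prompt = tokenizer.apply_chat_template(
+     messages, tools=[get_weather], tokenize=False, add_generation_prompt=True
+ )
+ print(prompt)
+ ```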
110
+
111
+ ### Chat Format
112
+
113
+ The chat template for this model is formatted as:
114
+ ```
115
+ <|im_start|>system
116
+ You are a helpful function-calling AI assistant.
117
+ You do not currently have access to any functions. <functions></functions><|im_end|>
118
+ <|im_start|>user
119
+ Who would win in a fight - a dinosaur or a cow named Moo Moo?<|im_end|>
120
+ <|im_start|>assistant
121
+ This is a fun and imaginative question! Let’s break it down...
122
+ Moo Moo the cow would certainly win.
123
+ <|endoftext|>
124
+ ```
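+
+ Rather than hand-writing these special tokens, you can have the tokenizer render this format for you. A minimal sketch (the prompt is illustrative):
+ ```python
+ from transformers import AutoTokenizer
+
+ tokenizer = AutoTokenizer.from_pretrained("allenai/Olmo-3-7B-Instruct")
+ messages = [
+     {"role": "user", "content": "Who would win in a fight - a dinosaur or a cow named Moo Moo?"}
+ ]
+ # add_generation_prompt=True appends the opening '<|im_start|>assistant' turn,
+ # so the rendered string ends exactly where the model should start writing.
+ prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
+ print(prompt)
+ ```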
125
+
126
+ ### Model Description
127
+
128
+ - **Developed by:** Allen Institute for AI (Ai2)
129
+ - **Model type:** a Transformer-style autoregressive language model.
130
+ - **Language(s) (NLP):** English
131
+ - **License:** This model is licensed under Apache 2.0. It is intended for research and educational use in accordance with Ai2's [Responsible Use Guidelines](https://allenai.org/responsible-use).
132
+ - **Contact:** Technical inquiries: `[email protected]`. Press: `[email protected]`
133
+ - **Date cutoff:** Dec. 2024.
134
+
135
+
136
+ ### Model Sources
137
+
138
+ - **Project Page:** https://allenai.org/olmo
139
+ - **Repositories:**
140
+ - Open-Instruct for DPO and RLVR: https://github.com/allenai/open-instruct
141
+ - OLMo-Core for pre-training and SFT: https://github.com/allenai/OLMo-core
142
+ - OLMo-Eval for evaluation: https://github.com/allenai/OLMo-Eval
143
+ - **Paper:** [TBD]
144
+ <!-- - **Technical blog post:** (URL) -->
145
+ <!-- - **W&B Logs:** [SFT](()), [DPO](()), [RLVR](()) -->
146
+
147
+
148
+ ## Evaluation
149
+
150
+ | **Skill** | **Benchmark** | **Olmo 3 Instruct 7B SFT** | **Olmo 3 Instruct 7B DPO** | **Olmo 3 Instruct 7B** | **Qwen 3 8B (no reasoning)** | **Qwen 3 VL 8B Instruct** | **Qwen 2.5 7B** | **Olmo 2 7B Instruct** | **Apertus 8B Instruct** | **Granite 3.3 8B Instruct** |
151
+ |-----------|--------------|---------------------------|---------------------------|------------------------|------------------------------|----------------------------|-------------------|--------------------------|----------------------------|-------------------------------|
152
+ | **Math** | MATH | 65.1 | 79.6 | 87.3 | 82.3 | 91.6 | 71.0 | 30.1 | 21.9 | 67.3 |
153
+ | | AIME 2024 | 6.7 | 23.5 | 44.3 | 26.2 | 55.1 | 11.3 | 1.3 | 0.5 | 7.3 |
154
+ | | AIME 2025 | 7.2 | 20.4 | 32.5 | 21.7 | 43.3 | 6.3 | 0.4 | 0.2 | 6.3 |
155
+ | | OMEGA | 14.4 | 22.8 | 28.9 | 20.5 | 32.3 | 13.7 | 5.2 | 5.0 | 10.7 |
156
+ | **Reasoning** | BigBenchHard | 51.0 | 69.3 | 71.2 | 73.7 | 85.6 | 68.8 | 43.8 | 42.2 | 61.2 |
157
+ | | ZebraLogic | 18.0 | 28.4 | 32.9 | 25.4 | 64.3 | 10.7 | 5.3 | 5.3 | 17.6 |
158
+ | | AGI Eval English | 59.2 | 64.0 | 64.4 | 76.0 | 84.5 | 69.8 | 56.1 | 50.8 | 64.0 |
159
+ | **Coding** | HumanEvalPlus | 69.8 | 72.9 | 77.2 | 79.8 | 82.9 | 74.9 | 25.8 | 34.4 | 64.0 |
160
+ | | MBPP+ | 56.5 | 55.9 | 60.2 | 64.4 | 66.3 | 62.6 | 40.7 | 42.1 | 54.0 |
161
+ | | LiveCodeBench v3 | 20.0 | 18.8 | 29.5 | 53.2 | 55.9 | 34.5 | 7.2 | 7.8 | 11.5 |
162
+ | **IF** | IFEval | 81.7 | 82.0 | 85.6 | 86.3 | 87.8 | 73.4 | 72.2 | 71.4 | 77.5 |
163
+ | | IFBench | 27.4 | 29.3 | 32.3 | 29.3 | 34.0 | 28.4 | 26.7 | 22.1 | 22.3 |
164
+ | **Knowledge** | MMLU | 67.1 | 69.1 | 69.1 | 80.4 | 83.6 | 77.2 | 61.6 | 62.7 | 63.5 |
165
+ | **QA** | PopQA | 16.5 | 20.7 | 14.1 | 20.4 | 26.5 | 21.5 | 25.5 | 25.5 | 28.9 |
166
+ | | GPQA | 30.0 | 37.9 | 40.4 | 44.6 | 51.1 | 35.6 | 31.3 | 28.8 | 33.0 |
167
+ | **Chat** | AlpacaEval 2 LC | 21.8 | 43.3 | 40.9 | 49.8 | 73.5 | 23.0 | 18.3 | 8.1 | 28.6 |
168
+ | **Tool Use** | SimpleQA | 74.2 | 79.8 | 79.3 | 79.0 | 90.3 | 78.0 | – | – | – |
169
+ | | LitQA2 | 38.0 | 43.3 | 38.2 | 39.6 | 30.7 | 29.8 | – | – | – |
170
+ | | BFCL | 48.9 | 49.6 | 49.8 | 60.2 | 66.2 | 55.8 | – | – | – |
171
+ | **Safety** | Safety | 89.2 | 90.2 | 87.3 | 78.0 | 80.2 | 73.4 | 93.1 | 72.2 | 73.7 |
172
+
173
+ ## Training Details
174
+
175
+ #### Stage 1: SFT
176
+ - Supervised fine-tuning on the Dolci-Think-SFT-7B dataset. This dataset consists of math, code, chat, and general knowledge queries.
177
+ - Datasets: [Dolci-Think-SFT-7B](https://huggingface.co/datasets/allenai/dolci-thinking-sft), [Dolci-Instruct-SFT-7B](https://huggingface.co/datasets/allenai/dolci-instruct-sft)
178
+
179
+ #### Stage 2: DPO
180
+ - Direct preference optimization on the Dolci-Think-DPO-7B dataset. This dataset consists of math, code, chat, and general knowledge queries.
181
+ - Datasets: [Dolci-Think-DPO-7B](https://huggingface.co/datasets/allenai/dolci-thinking-dpo), [Dolci-Instruct-DPO-7B](https://huggingface.co/datasets/allenai/dolci-3-instruct-dpo-with-metadata)
182
+
183
+ #### Stage 3: RLVR
184
+ - Reinforcement learning from verifiable rewards on the Dolci-Think-RL-7B dataset. This dataset consists of math, code, instruction-following, and general chat queries.
185
+ - Datasets: [Dolci-Think-RL-7B](https://huggingface.co/datasets/allenai/Dolci-Think-RL-7B), [Dolci-Instruct-RL-7B](https://huggingface.co/datasets/allenai/Dolci-Instruct-RL-7B)
186
+
187
+
188
+ ## Bias, Risks, and Limitations
189
+ Like any base language model or fine-tuned model without safety filtering, these models can easily be prompted by users to generate harmful and sensitive content. Such content may also be produced unintentionally, especially in cases involving bias, so we recommend that users consider the risks when applying this technology. Additionally, statements from OLMo, as from any LLM, are often inaccurate, so facts should be verified.
190
+
191
+ ## License
192
+ This model is licensed under Apache 2.0. It is intended for research and educational use in accordance with [Ai2's Responsible Use Guidelines](https://allenai.org/responsible-use).
193
+
194
+
195
+ ## Citation
196
+ A technical manuscript is forthcoming!
197
+
198
+ ## Model Card Contact
199
+ For errors in this model card, contact `[email protected]`.
chat_template.jinja ADDED
@@ -0,0 +1,18 @@
1
+ {# Unsloth template fixes #}
2
+ {%- set has_system = messages|selectattr('role', 'equalto', 'system')|list|length > 0 -%}{%- if not has_system -%}{{- '<|im_start|>system
3
+ You are a helpful function-calling AI assistant. ' -}}{%- if tools is none -%}{{- 'You do not currently have access to any functions. <functions></functions><|im_end|>
4
+ ' -}}{%- else -%}{{- 'You are provided with function signatures within <functions></functions> XML tags. You may call one or more functions to assist with the user query. Output any function calls within <function_calls></function_calls> XML tags. Do not make assumptions about what values to plug into functions.' -}}{{- '<functions>' -}}{{- tools | tojson -}}{{- '</functions><|im_end|>
5
+ ' -}}{%- endif -%}{%- endif -%}{%- for message in messages -%}{%- if message['role'] == 'system' -%}{{- '<|im_start|>system
6
+ ' + message['content'] -}}{%- if tools is not none -%}{{- '<functions>' -}}{{- tools | tojson -}}{{- '</functions>' -}}{%- elif message.get('functions', none) is not none -%}{{- ' <functions>' + message['functions'] + '</functions>' -}}{%- endif -%}{{- '<|im_end|>
7
+ ' -}}{%- elif message['role'] == 'user' -%}{{- '<|im_start|>user
8
+ ' + message['content'] + '<|im_end|>
9
+ ' -}}{%- elif message['role'] == 'assistant' -%}{{- '<|im_start|>assistant
10
+ ' -}}{%- if message.get('content', none) is not none -%}{{- message['content'] -}}{%- endif -%}{%- if message.get('function_calls', none) is not none -%}{{- '<function_calls>' + message['function_calls'] + '</function_calls>' -}}{% elif message.get('tool_calls', none) is not none %}{{- '<function_calls>' -}}{%- for tool_call in message['tool_calls'] %}{%- if tool_call is mapping and tool_call.get('function', none) is not none %}{%- set args = tool_call['function']['arguments'] -%}{%- set ns = namespace(arguments_list=[]) -%}{%- if args is mapping -%}{%- for key, value in args|items -%}{%- set ns.arguments_list = ns.arguments_list + [key ~ '=' ~ (value | tojson)] -%}{%- endfor -%}{%- endif -%}{%- set arguments = ns.arguments_list | join(', ') -%}{{- tool_call['function']['name'] + '(' + arguments + ')' -}}{%- if not loop.last -%}{{ '
11
+ ' }}{%- endif -%}{% else %}{{- tool_call -}}{%- endif %}{%- endfor %}{{- '</function_calls>' -}}{%- endif -%}{%- if not loop.last -%}{{- '<|im_end|>' + '
12
+ ' -}}{%- else -%}{{- eos_token -}}{%- endif -%}{%- elif message['role'] == 'environment' -%}{{- '<|im_start|>environment
13
+ ' + message['content'] + '<|im_end|>
14
+ ' -}}{%- elif message['role'] == 'tool' -%}{{- '<|im_start|>environment
15
+ ' + message['content'] + '<|im_end|>
16
+ ' -}}{%- endif -%}{%- if loop.last and add_generation_prompt -%}{{- '<|im_start|>assistant
17
+ ' -}}{%- endif -%}{%- endfor -%}
18
+ {# Copyright 2025-present Unsloth. Apache 2.0 License. #}
config.json ADDED
@@ -0,0 +1,70 @@
1
+ {
2
+ "architectures": [
3
+ "Olmo3ForCausalLM"
4
+ ],
5
+ "attention_bias": false,
6
+ "attention_dropout": 0.0,
7
+ "bos_token_id": 100257,
8
+ "torch_dtype": "bfloat16",
9
+ "eos_token_id": 100257,
10
+ "hidden_act": "silu",
11
+ "hidden_size": 4096,
12
+ "initializer_range": 0.02,
13
+ "intermediate_size": 11008,
14
+ "layer_types": [
15
+ "sliding_attention",
16
+ "sliding_attention",
17
+ "sliding_attention",
18
+ "full_attention",
19
+ "sliding_attention",
20
+ "sliding_attention",
21
+ "sliding_attention",
22
+ "full_attention",
23
+ "sliding_attention",
24
+ "sliding_attention",
25
+ "sliding_attention",
26
+ "full_attention",
27
+ "sliding_attention",
28
+ "sliding_attention",
29
+ "sliding_attention",
30
+ "full_attention",
31
+ "sliding_attention",
32
+ "sliding_attention",
33
+ "sliding_attention",
34
+ "full_attention",
35
+ "sliding_attention",
36
+ "sliding_attention",
37
+ "sliding_attention",
38
+ "full_attention",
39
+ "sliding_attention",
40
+ "sliding_attention",
41
+ "sliding_attention",
42
+ "full_attention",
43
+ "sliding_attention",
44
+ "sliding_attention",
45
+ "sliding_attention",
46
+ "full_attention"
47
+ ],
48
+ "max_position_embeddings": 65536,
49
+ "model_type": "olmo3",
50
+ "num_attention_heads": 32,
51
+ "num_hidden_layers": 32,
52
+ "num_key_value_heads": 32,
53
+ "pad_token_id": 100277,
54
+ "rms_norm_eps": 1e-06,
55
+ "rope_scaling": {
56
+ "attention_factor": 1.2079441541679836,
57
+ "beta_fast": 32,
58
+ "beta_slow": 1,
59
+ "factor": 8.0,
60
+ "original_max_position_embeddings": 8192,
61
+ "rope_type": "yarn"
62
+ },
63
+ "rope_theta": 500000,
64
+ "sliding_window": 4096,
65
+ "tie_word_embeddings": false,
66
+ "transformers_version": "4.57.1",
67
+ "unsloth_fixed": true,
68
+ "use_cache": false,
69
+ "vocab_size": 100278
70
+ }
generation_config.json ADDED
@@ -0,0 +1,11 @@
1
+ {
2
+ "_from_model_config": true,
3
+ "bos_token_id": 100257,
4
+ "eos_token_id": [
5
+ 100265,
6
+ 100257
7
+ ],
8
+ "max_length": 65536,
9
+ "pad_token_id": 100277,
10
+ "transformers_version": "4.57.1"
11
+ }
merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
model-00001-of-00003.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e7bd4c8ba52177a815aee5929cb458a116a65799688d3e51402500a1cb77023c
3
+ size 4969984976
model-00002-of-00003.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ea92aa0499a2ca02400e5d87324009e172bdbfb9809f2ac501485d8224382328
3
+ size 4981161496
model-00003-of-00003.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d50f95e4f2ec5fc4fbce6217cbf54fd1d700928b05e5550c8ee08bf4ba948925
3
+ size 4644917240
model.safetensors.index.json ADDED
@@ -0,0 +1,363 @@
1
+ {
2
+ "metadata": {
3
+ "total_parameters": 528384,
4
+ "total_size": 14596022272
5
+ },
6
+ "weight_map": {
7
+ "lm_head.weight": "model-00003-of-00003.safetensors",
8
+ "model.embed_tokens.weight": "model-00001-of-00003.safetensors",
9
+ "model.layers.0.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
10
+ "model.layers.0.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
11
+ "model.layers.0.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
12
+ "model.layers.0.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
13
+ "model.layers.0.post_feedforward_layernorm.weight": "model-00001-of-00003.safetensors",
14
+ "model.layers.0.self_attn.k_norm.weight": "model-00001-of-00003.safetensors",
15
+ "model.layers.0.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
16
+ "model.layers.0.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
17
+ "model.layers.0.self_attn.q_norm.weight": "model-00001-of-00003.safetensors",
18
+ "model.layers.0.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
19
+ "model.layers.0.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
20
+ "model.layers.1.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
21
+ "model.layers.1.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
22
+ "model.layers.1.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
23
+ "model.layers.1.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
24
+ "model.layers.1.post_feedforward_layernorm.weight": "model-00001-of-00003.safetensors",
25
+ "model.layers.1.self_attn.k_norm.weight": "model-00001-of-00003.safetensors",
26
+ "model.layers.1.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
27
+ "model.layers.1.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
28
+ "model.layers.1.self_attn.q_norm.weight": "model-00001-of-00003.safetensors",
29
+ "model.layers.1.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
30
+ "model.layers.1.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
31
+ "model.layers.10.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
32
+ "model.layers.10.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
33
+ "model.layers.10.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
34
+ "model.layers.10.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
35
+ "model.layers.10.post_feedforward_layernorm.weight": "model-00002-of-00003.safetensors",
36
+ "model.layers.10.self_attn.k_norm.weight": "model-00002-of-00003.safetensors",
37
+ "model.layers.10.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
38
+ "model.layers.10.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
39
+ "model.layers.10.self_attn.q_norm.weight": "model-00002-of-00003.safetensors",
40
+ "model.layers.10.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
41
+ "model.layers.10.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
42
+ "model.layers.11.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
43
+ "model.layers.11.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
44
+ "model.layers.11.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
45
+ "model.layers.11.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
46
+ "model.layers.11.post_feedforward_layernorm.weight": "model-00002-of-00003.safetensors",
47
+ "model.layers.11.self_attn.k_norm.weight": "model-00002-of-00003.safetensors",
48
+ "model.layers.11.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
49
+ "model.layers.11.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
50
+ "model.layers.11.self_attn.q_norm.weight": "model-00002-of-00003.safetensors",
51
+ "model.layers.11.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
52
+ "model.layers.11.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
53
+ "model.layers.12.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
54
+ "model.layers.12.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
55
+ "model.layers.12.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
56
+ "model.layers.12.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
57
+ "model.layers.12.post_feedforward_layernorm.weight": "model-00002-of-00003.safetensors",
58
+ "model.layers.12.self_attn.k_norm.weight": "model-00002-of-00003.safetensors",
59
+ "model.layers.12.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
60
+ "model.layers.12.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
61
+ "model.layers.12.self_attn.q_norm.weight": "model-00002-of-00003.safetensors",
62
+ "model.layers.12.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
63
+ "model.layers.12.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
64
+ "model.layers.13.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
65
+ "model.layers.13.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
66
+ "model.layers.13.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
67
+ "model.layers.13.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
68
+ "model.layers.13.post_feedforward_layernorm.weight": "model-00002-of-00003.safetensors",
69
+ "model.layers.13.self_attn.k_norm.weight": "model-00002-of-00003.safetensors",
70
+ "model.layers.13.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
71
+ "model.layers.13.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
72
+ "model.layers.13.self_attn.q_norm.weight": "model-00002-of-00003.safetensors",
73
+ "model.layers.13.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
74
+ "model.layers.13.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
75
+ "model.layers.14.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
76
+ "model.layers.14.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
77
+ "model.layers.14.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
78
+ "model.layers.14.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
79
+ "model.layers.14.post_feedforward_layernorm.weight": "model-00002-of-00003.safetensors",
80
+ "model.layers.14.self_attn.k_norm.weight": "model-00002-of-00003.safetensors",
81
+ "model.layers.14.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
82
+ "model.layers.14.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
83
+ "model.layers.14.self_attn.q_norm.weight": "model-00002-of-00003.safetensors",
84
+ "model.layers.14.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
85
+ "model.layers.14.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
86
+ "model.layers.15.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
87
+ "model.layers.15.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
88
+ "model.layers.15.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
89
+ "model.layers.15.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
90
+ "model.layers.15.post_feedforward_layernorm.weight": "model-00002-of-00003.safetensors",
91
+ "model.layers.15.self_attn.k_norm.weight": "model-00002-of-00003.safetensors",
92
+ "model.layers.15.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
93
+ "model.layers.15.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
94
+ "model.layers.15.self_attn.q_norm.weight": "model-00002-of-00003.safetensors",
95
+ "model.layers.15.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
96
+ "model.layers.15.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
97
+ "model.layers.16.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
98
+ "model.layers.16.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
99
+ "model.layers.16.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
100
+ "model.layers.16.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
101
+ "model.layers.16.post_feedforward_layernorm.weight": "model-00002-of-00003.safetensors",
102
+ "model.layers.16.self_attn.k_norm.weight": "model-00002-of-00003.safetensors",
103
+ "model.layers.16.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
104
+ "model.layers.16.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
105
+ "model.layers.16.self_attn.q_norm.weight": "model-00002-of-00003.safetensors",
106
+ "model.layers.16.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
107
+ "model.layers.16.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
108
+ "model.layers.17.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
109
+ "model.layers.17.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
110
+ "model.layers.17.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
111
+ "model.layers.17.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
112
+ "model.layers.17.post_feedforward_layernorm.weight": "model-00002-of-00003.safetensors",
113
+ "model.layers.17.self_attn.k_norm.weight": "model-00002-of-00003.safetensors",
114
+ "model.layers.17.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
115
+ "model.layers.17.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
116
+ "model.layers.17.self_attn.q_norm.weight": "model-00002-of-00003.safetensors",
117
+ "model.layers.17.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
118
+ "model.layers.17.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
119
+ "model.layers.18.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
120
+ "model.layers.18.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
121
+ "model.layers.18.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
122
+ "model.layers.18.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
123
+ "model.layers.18.post_feedforward_layernorm.weight": "model-00002-of-00003.safetensors",
124
+ "model.layers.18.self_attn.k_norm.weight": "model-00002-of-00003.safetensors",
125
+ "model.layers.18.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
126
+ "model.layers.18.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
127
+ "model.layers.18.self_attn.q_norm.weight": "model-00002-of-00003.safetensors",
128
+ "model.layers.18.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
129
+ "model.layers.18.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
130
+ "model.layers.19.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
131
+ "model.layers.19.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
132
+ "model.layers.19.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
133
+ "model.layers.19.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
134
+ "model.layers.19.post_feedforward_layernorm.weight": "model-00002-of-00003.safetensors",
135
+ "model.layers.19.self_attn.k_norm.weight": "model-00002-of-00003.safetensors",
136
+ "model.layers.19.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
137
+ "model.layers.19.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
138
+ "model.layers.19.self_attn.q_norm.weight": "model-00002-of-00003.safetensors",
139
+ "model.layers.19.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
140
+ "model.layers.19.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
141
+ "model.layers.2.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
142
+ "model.layers.2.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
143
+ "model.layers.2.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
144
+ "model.layers.2.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
145
+ "model.layers.2.post_feedforward_layernorm.weight": "model-00001-of-00003.safetensors",
146
+ "model.layers.2.self_attn.k_norm.weight": "model-00001-of-00003.safetensors",
147
+ "model.layers.2.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
148
+ "model.layers.2.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
149
+ "model.layers.2.self_attn.q_norm.weight": "model-00001-of-00003.safetensors",
150
+ "model.layers.2.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
151
+ "model.layers.2.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
152
+ "model.layers.20.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
153
+ "model.layers.20.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
154
+ "model.layers.20.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
155
+ "model.layers.20.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
156
+ "model.layers.20.post_feedforward_layernorm.weight": "model-00002-of-00003.safetensors",
157
+ "model.layers.20.self_attn.k_norm.weight": "model-00002-of-00003.safetensors",
158
+ "model.layers.20.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
159
+ "model.layers.20.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
160
+ "model.layers.20.self_attn.q_norm.weight": "model-00002-of-00003.safetensors",
161
+ "model.layers.20.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
162
+ "model.layers.20.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
163
+ "model.layers.21.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
164
+ "model.layers.21.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
165
+ "model.layers.21.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
166
+ "model.layers.21.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
167
+ "model.layers.21.post_feedforward_layernorm.weight": "model-00002-of-00003.safetensors",
168
+ "model.layers.21.self_attn.k_norm.weight": "model-00002-of-00003.safetensors",
169
+ "model.layers.21.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
170
+ "model.layers.21.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
171
+ "model.layers.21.self_attn.q_norm.weight": "model-00002-of-00003.safetensors",
172
+ "model.layers.21.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
173
+ "model.layers.21.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
174
+ "model.layers.22.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
175
+ "model.layers.22.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
176
+ "model.layers.22.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
177
+ "model.layers.22.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
178
+ "model.layers.22.post_feedforward_layernorm.weight": "model-00003-of-00003.safetensors",
179
+ "model.layers.22.self_attn.k_norm.weight": "model-00002-of-00003.safetensors",
180
+ "model.layers.22.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
181
+ "model.layers.22.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
182
+ "model.layers.22.self_attn.q_norm.weight": "model-00002-of-00003.safetensors",
183
+ "model.layers.22.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
184
+ "model.layers.22.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
185
+ "model.layers.23.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
186
+ "model.layers.23.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
187
+ "model.layers.23.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
188
+ "model.layers.23.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
189
+ "model.layers.23.post_feedforward_layernorm.weight": "model-00003-of-00003.safetensors",
190
+ "model.layers.23.self_attn.k_norm.weight": "model-00003-of-00003.safetensors",
191
+ "model.layers.23.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
192
+ "model.layers.23.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
193
+ "model.layers.23.self_attn.q_norm.weight": "model-00003-of-00003.safetensors",
194
+ "model.layers.23.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
195
+ "model.layers.23.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
196
+ "model.layers.24.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
197
+ "model.layers.24.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
198
+ "model.layers.24.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
199
+ "model.layers.24.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
200
+ "model.layers.24.post_feedforward_layernorm.weight": "model-00003-of-00003.safetensors",
201
+ "model.layers.24.self_attn.k_norm.weight": "model-00003-of-00003.safetensors",
202
+ "model.layers.24.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
203
+ "model.layers.24.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
204
+ "model.layers.24.self_attn.q_norm.weight": "model-00003-of-00003.safetensors",
205
+ "model.layers.24.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
206
+ "model.layers.24.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
207
+ "model.layers.25.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
208
+ "model.layers.25.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
209
+ "model.layers.25.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
210
+ "model.layers.25.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
211
+ "model.layers.25.post_feedforward_layernorm.weight": "model-00003-of-00003.safetensors",
212
+ "model.layers.25.self_attn.k_norm.weight": "model-00003-of-00003.safetensors",
213
+ "model.layers.25.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
214
+ "model.layers.25.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
215
+ "model.layers.25.self_attn.q_norm.weight": "model-00003-of-00003.safetensors",
216
+ "model.layers.25.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
217
+ "model.layers.25.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
218
+ "model.layers.26.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
219
+ "model.layers.26.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
220
+ "model.layers.26.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
221
+ "model.layers.26.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
222
+ "model.layers.26.post_feedforward_layernorm.weight": "model-00003-of-00003.safetensors",
223
+ "model.layers.26.self_attn.k_norm.weight": "model-00003-of-00003.safetensors",
224
+ "model.layers.26.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
225
+ "model.layers.26.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
226
+ "model.layers.26.self_attn.q_norm.weight": "model-00003-of-00003.safetensors",
227
+ "model.layers.26.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
228
+ "model.layers.26.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
229
+ "model.layers.27.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
230
+ "model.layers.27.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
231
+ "model.layers.27.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
232
+ "model.layers.27.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
233
+ "model.layers.27.post_feedforward_layernorm.weight": "model-00003-of-00003.safetensors",
234
+ "model.layers.27.self_attn.k_norm.weight": "model-00003-of-00003.safetensors",
235
+ "model.layers.27.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
236
+ "model.layers.27.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
237
+ "model.layers.27.self_attn.q_norm.weight": "model-00003-of-00003.safetensors",
238
+ "model.layers.27.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
239
+ "model.layers.27.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
240
+ "model.layers.28.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
241
+ "model.layers.28.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
242
+ "model.layers.28.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
243
+ "model.layers.28.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
244
+ "model.layers.28.post_feedforward_layernorm.weight": "model-00003-of-00003.safetensors",
245
+ "model.layers.28.self_attn.k_norm.weight": "model-00003-of-00003.safetensors",
246
+ "model.layers.28.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
247
+ "model.layers.28.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
248
+ "model.layers.28.self_attn.q_norm.weight": "model-00003-of-00003.safetensors",
249
+ "model.layers.28.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
250
+ "model.layers.28.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
251
+ "model.layers.29.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
252
+ "model.layers.29.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
253
+ "model.layers.29.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
254
+ "model.layers.29.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
255
+ "model.layers.29.post_feedforward_layernorm.weight": "model-00003-of-00003.safetensors",
256
+ "model.layers.29.self_attn.k_norm.weight": "model-00003-of-00003.safetensors",
257
+ "model.layers.29.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
258
+ "model.layers.29.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
259
+ "model.layers.29.self_attn.q_norm.weight": "model-00003-of-00003.safetensors",
260
+ "model.layers.29.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
261
+ "model.layers.29.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
262
+ "model.layers.3.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
263
+ "model.layers.3.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
264
+ "model.layers.3.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
265
+ "model.layers.3.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
266
+ "model.layers.3.post_feedforward_layernorm.weight": "model-00001-of-00003.safetensors",
267
+ "model.layers.3.self_attn.k_norm.weight": "model-00001-of-00003.safetensors",
268
+ "model.layers.3.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
269
+ "model.layers.3.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
270
+ "model.layers.3.self_attn.q_norm.weight": "model-00001-of-00003.safetensors",
271
+ "model.layers.3.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
272
+ "model.layers.3.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
273
+ "model.layers.30.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
274
+ "model.layers.30.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
275
+ "model.layers.30.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
276
+ "model.layers.30.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
277
+ "model.layers.30.post_feedforward_layernorm.weight": "model-00003-of-00003.safetensors",
278
+ "model.layers.30.self_attn.k_norm.weight": "model-00003-of-00003.safetensors",
279
+ "model.layers.30.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
280
+ "model.layers.30.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
281
+ "model.layers.30.self_attn.q_norm.weight": "model-00003-of-00003.safetensors",
282
+ "model.layers.30.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
283
+ "model.layers.30.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
284
+ "model.layers.31.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
285
+ "model.layers.31.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
286
+ "model.layers.31.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
287
+ "model.layers.31.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
288
+ "model.layers.31.post_feedforward_layernorm.weight": "model-00003-of-00003.safetensors",
289
+ "model.layers.31.self_attn.k_norm.weight": "model-00003-of-00003.safetensors",
290
+ "model.layers.31.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
291
+ "model.layers.31.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
292
+ "model.layers.31.self_attn.q_norm.weight": "model-00003-of-00003.safetensors",
293
+ "model.layers.31.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
294
+ "model.layers.31.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
295
+ "model.layers.4.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
296
+ "model.layers.4.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
297
+ "model.layers.4.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
298
+ "model.layers.4.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
299
+ "model.layers.4.post_feedforward_layernorm.weight": "model-00001-of-00003.safetensors",
300
+ "model.layers.4.self_attn.k_norm.weight": "model-00001-of-00003.safetensors",
301
+ "model.layers.4.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
302
+ "model.layers.4.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
303
+ "model.layers.4.self_attn.q_norm.weight": "model-00001-of-00003.safetensors",
304
+ "model.layers.4.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
305
+ "model.layers.4.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
306
+ "model.layers.5.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
307
+ "model.layers.5.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
308
+ "model.layers.5.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
309
+ "model.layers.5.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
310
+ "model.layers.5.post_feedforward_layernorm.weight": "model-00001-of-00003.safetensors",
311
+ "model.layers.5.self_attn.k_norm.weight": "model-00001-of-00003.safetensors",
312
+ "model.layers.5.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
313
+ "model.layers.5.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
314
+ "model.layers.5.self_attn.q_norm.weight": "model-00001-of-00003.safetensors",
315
+ "model.layers.5.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
316
+ "model.layers.5.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
317
+ "model.layers.6.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
318
+ "model.layers.6.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
319
+ "model.layers.6.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
320
+ "model.layers.6.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
321
+ "model.layers.6.post_feedforward_layernorm.weight": "model-00001-of-00003.safetensors",
322
+ "model.layers.6.self_attn.k_norm.weight": "model-00001-of-00003.safetensors",
323
+ "model.layers.6.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
324
+ "model.layers.6.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
325
+ "model.layers.6.self_attn.q_norm.weight": "model-00001-of-00003.safetensors",
326
+ "model.layers.6.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
327
+ "model.layers.6.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
328
+ "model.layers.7.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
329
+ "model.layers.7.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
330
+ "model.layers.7.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
331
+ "model.layers.7.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
332
+ "model.layers.7.post_feedforward_layernorm.weight": "model-00001-of-00003.safetensors",
333
+ "model.layers.7.self_attn.k_norm.weight": "model-00001-of-00003.safetensors",
334
+ "model.layers.7.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
335
+ "model.layers.7.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
336
+ "model.layers.7.self_attn.q_norm.weight": "model-00001-of-00003.safetensors",
337
+ "model.layers.7.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
338
+ "model.layers.7.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
339
+ "model.layers.8.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
340
+ "model.layers.8.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
341
+ "model.layers.8.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
342
+ "model.layers.8.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
343
+ "model.layers.8.post_feedforward_layernorm.weight": "model-00001-of-00003.safetensors",
344
+ "model.layers.8.self_attn.k_norm.weight": "model-00001-of-00003.safetensors",
345
+ "model.layers.8.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
346
+ "model.layers.8.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
347
+ "model.layers.8.self_attn.q_norm.weight": "model-00001-of-00003.safetensors",
348
+ "model.layers.8.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
349
+ "model.layers.8.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
350
+ "model.layers.9.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
351
+ "model.layers.9.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
352
+ "model.layers.9.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
353
+ "model.layers.9.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
354
+ "model.layers.9.post_feedforward_layernorm.weight": "model-00001-of-00003.safetensors",
355
+ "model.layers.9.self_attn.k_norm.weight": "model-00001-of-00003.safetensors",
356
+ "model.layers.9.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
357
+ "model.layers.9.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
358
+ "model.layers.9.self_attn.q_norm.weight": "model-00001-of-00003.safetensors",
359
+ "model.layers.9.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
360
+ "model.layers.9.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
361
+ "model.norm.weight": "model-00003-of-00003.safetensors"
362
+ }
363
+ }
special_tokens_map.json ADDED
@@ -0,0 +1,24 @@
1
+ {
2
+ "bos_token": {
3
+ "content": "<|endoftext|>",
4
+ "lstrip": false,
5
+ "normalized": false,
6
+ "rstrip": false,
7
+ "single_word": false
8
+ },
9
+ "eos_token": {
10
+ "content": "<|endoftext|>",
11
+ "lstrip": false,
12
+ "normalized": false,
13
+ "rstrip": false,
14
+ "single_word": false
15
+ },
16
+ "pad_token": {
17
+ "content": "<|pad|>",
18
+ "lstrip": false,
19
+ "normalized": false,
20
+ "rstrip": false,
21
+ "single_word": false
22
+ },
23
+ "unk_token": "�"
24
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,199 @@
1
+ {
2
+ "add_prefix_space": false,
3
+ "added_tokens_decoder": {
4
+ "5809": {
5
+ "content": "�",
6
+ "lstrip": false,
7
+ "normalized": false,
8
+ "rstrip": false,
9
+ "single_word": false,
10
+ "special": true
11
+ },
12
+ "100256": {
13
+ "content": "<|extra_id_0|>",
14
+ "lstrip": false,
15
+ "normalized": false,
16
+ "rstrip": false,
17
+ "single_word": false,
18
+ "special": false
19
+ },
20
+ "100257": {
21
+ "content": "<|endoftext|>",
22
+ "lstrip": false,
23
+ "normalized": false,
24
+ "rstrip": false,
25
+ "single_word": false,
26
+ "special": true
27
+ },
28
+ "100258": {
29
+ "content": "<|fim_prefix|>",
30
+ "lstrip": false,
31
+ "normalized": false,
32
+ "rstrip": false,
33
+ "single_word": false,
34
+ "special": true
35
+ },
36
+ "100259": {
37
+ "content": "<|fim_middle|>",
38
+ "lstrip": false,
39
+ "normalized": false,
40
+ "rstrip": false,
41
+ "single_word": false,
42
+ "special": true
43
+ },
44
+ "100260": {
45
+ "content": "<|fim_suffix|>",
46
+ "lstrip": false,
47
+ "normalized": false,
48
+ "rstrip": false,
49
+ "single_word": false,
50
+ "special": true
51
+ },
52
+ "100261": {
53
+ "content": "|||PHONE_NUMBER|||",
54
+ "lstrip": false,
55
+ "normalized": false,
56
+ "rstrip": false,
57
+ "single_word": false,
58
+ "special": false
59
+ },
60
+ "100262": {
61
+ "content": "|||EMAIL_ADDRESS|||",
62
+ "lstrip": false,
63
+ "normalized": false,
64
+ "rstrip": false,
65
+ "single_word": false,
66
+ "special": false
67
+ },
68
+ "100263": {
69
+ "content": "|||IP_ADDRESS|||",
70
+ "lstrip": false,
71
+ "normalized": false,
72
+ "rstrip": false,
73
+ "single_word": false,
74
+ "special": false
75
+ },
76
+ "100264": {
77
+ "content": "<|im_start|>",
78
+ "lstrip": false,
79
+ "normalized": false,
80
+ "rstrip": false,
81
+ "single_word": false,
82
+ "special": true
83
+ },
84
+ "100265": {
85
+ "content": "<|im_end|>",
86
+ "lstrip": false,
87
+ "normalized": false,
88
+ "rstrip": false,
89
+ "single_word": false,
90
+ "special": true
91
+ },
92
+ "100266": {
93
+ "content": "<functions>",
94
+ "lstrip": false,
95
+ "normalized": false,
96
+ "rstrip": false,
97
+ "single_word": false,
98
+ "special": false
99
+ },
100
+ "100267": {
101
+ "content": "</functions>",
102
+ "lstrip": false,
103
+ "normalized": false,
104
+ "rstrip": false,
105
+ "single_word": false,
106
+ "special": false
107
+ },
108
+ "100268": {
109
+ "content": "<function_calls>",
110
+ "lstrip": false,
111
+ "normalized": false,
112
+ "rstrip": false,
113
+ "single_word": false,
114
+ "special": false
115
+ },
116
+ "100269": {
117
+ "content": "</function_calls>",
118
+ "lstrip": false,
119
+ "normalized": false,
120
+ "rstrip": false,
121
+ "single_word": false,
122
+ "special": false
123
+ },
124
+ "100270": {
125
+ "content": "<|extra_id_1|>",
126
+ "lstrip": false,
127
+ "normalized": false,
128
+ "rstrip": false,
129
+ "single_word": false,
130
+ "special": false
131
+ },
132
+ "100271": {
133
+ "content": "<|extra_id_2|>",
134
+ "lstrip": false,
135
+ "normalized": false,
136
+ "rstrip": false,
137
+ "single_word": false,
138
+ "special": false
139
+ },
140
+ "100272": {
141
+ "content": "<|extra_id_3|>",
142
+ "lstrip": false,
143
+ "normalized": false,
144
+ "rstrip": false,
145
+ "single_word": false,
146
+ "special": false
147
+ },
148
+ "100273": {
149
+ "content": "<|extra_id_4|>",
150
+ "lstrip": false,
151
+ "normalized": false,
152
+ "rstrip": false,
153
+ "single_word": false,
154
+ "special": false
155
+ },
156
+ "100274": {
157
+ "content": "<|extra_id_5|>",
158
+ "lstrip": false,
159
+ "normalized": false,
160
+ "rstrip": false,
161
+ "single_word": false,
162
+ "special": false
163
+ },
164
+ "100275": {
165
+ "content": "<|extra_id_6|>",
166
+ "lstrip": false,
167
+ "normalized": false,
168
+ "rstrip": false,
169
+ "single_word": false,
170
+ "special": false
171
+ },
172
+ "100276": {
173
+ "content": "<|endofprompt|>",
174
+ "lstrip": false,
175
+ "normalized": false,
176
+ "rstrip": false,
177
+ "single_word": false,
178
+ "special": true
179
+ },
180
+ "100277": {
181
+ "content": "<|pad|>",
182
+ "lstrip": false,
183
+ "normalized": false,
184
+ "rstrip": false,
185
+ "single_word": false,
186
+ "special": true
187
+ }
188
+ },
189
+ "bos_token": "<|endoftext|>",
190
+ "clean_up_tokenization_spaces": false,
191
+ "eos_token": "<|endoftext|>",
192
+ "extra_special_tokens": {},
193
+ "model_max_length": 65536,
194
+ "pad_token": "<|pad|>",
195
+ "padding_side": "left",
196
+ "tokenizer_class": "GPT2Tokenizer",
197
+ "unk_token": "�",
198
+ "chat_template": "{# Unsloth template fixes #}\n{%- set has_system = messages|selectattr('role', 'equalto', 'system')|list|length > 0 -%}{%- if not has_system -%}{{- '<|im_start|>system\nYou are a helpful function-calling AI assistant. ' -}}{%- if tools is none -%}{{- 'You do not currently have access to any functions. <functions></functions><|im_end|>\n' -}}{%- else -%}{{- 'You are provided with function signatures within <functions></functions> XML tags. You may call one or more functions to assist with the user query. Output any function calls within <function_calls></function_calls> XML tags. Do not make assumptions about what values to plug into functions.' -}}{{- '<functions>' -}}{{- tools | tojson -}}{{- '</functions><|im_end|>\n' -}}{%- endif -%}{%- endif -%}{%- for message in messages -%}{%- if message['role'] == 'system' -%}{{- '<|im_start|>system\n' + message['content'] -}}{%- if tools is not none -%}{{- '<functions>' -}}{{- tools | tojson -}}{{- '</functions>' -}}{%- elif message.get('functions', none) is not none -%}{{- ' <functions>' + message['functions'] + '</functions>' -}}{%- endif -%}{{- '<|im_end|>\n' -}}{%- elif message['role'] == 'user' -%}{{- '<|im_start|>user\n' + message['content'] + '<|im_end|>\n' -}}{%- elif message['role'] == 'assistant' -%}{{- '<|im_start|>assistant\n' -}}{%- if message.get('content', none) is not none -%}{{- message['content'] -}}{%- endif -%}{%- if message.get('function_calls', none) is not none -%}{{- '<function_calls>' + message['function_calls'] + '</function_calls>' -}}{% elif message.get('tool_calls', none) is not none %}{{- '<function_calls>' -}}{%- for tool_call in message['tool_calls'] %}{%- if tool_call is mapping and tool_call.get('function', none) is not none %}{%- set args = tool_call['function']['arguments'] -%}{%- set ns = namespace(arguments_list=[]) -%}{%- if args is mapping -%}{%- for key, value in args|items -%}{%- set ns.arguments_list = ns.arguments_list + [key ~ '=' ~ (value | tojson)] -%}{%- endfor -%}{%- endif -%}{%- set arguments = ns.arguments_list | join(', ') -%}{{- tool_call['function']['name'] + '(' + arguments + ')' -}}{%- if not loop.last -%}{{ '\n' }}{%- endif -%}{% else %}{{- tool_call -}}{%- endif %}{%- endfor %}{{- '</function_calls>' -}}{%- endif -%}{%- if not loop.last -%}{{- '<|im_end|>' + '\n' -}}{%- else -%}{{- eos_token -}}{%- endif -%}{%- elif message['role'] == 'environment' -%}{{- '<|im_start|>environment\n' + message['content'] + '<|im_end|>\n' -}}{%- elif message['role'] == 'tool' -%}{{- '<|im_start|>environment\n' + message['content'] + '<|im_end|>\n' -}}{%- endif -%}{%- if loop.last and add_generation_prompt -%}{{- '<|im_start|>assistant\n' -}}{%- endif -%}{%- endfor -%}\n{# Copyright 2025-present Unsloth. Apache 2.0 License. #}"
199
+ }
vocab.json ADDED
The diff for this file is too large to render. See raw diff