anemll committed on
Commit 8cc6426 · verified · 1 Parent(s): abe79d2

Upload folder using huggingface_hub

.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
33
  *.zip filter=lfs diff=lfs merge=lfs -text
34
  *.zst filter=lfs diff=lfs merge=lfs -text
35
  *tfevents* filter=lfs diff=lfs merge=lfs -text
36
+ tokenizer.json filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,152 @@
1
+ ---
2
+ license: mit
3
+ tags:
4
+ - coreml
5
+ - ANE
6
+ - LLaMA
7
+ - Qwen
8
+ - DeepSeek
9
+ - Apple
10
+ - Apple Neural Engine
11
+ - DeepHermes
12
+ ---
13
+ # ANEMLL
14
+
15
+ **ANEMLL** (pronounced like "animal") is an open-source project focused on accelerating the porting of Large Language Models (LLMs) to tensor processors, starting with the Apple Neural Engine (ANE).
16
+
17
+ The goal is to provide a fully open-source pipeline from model conversion to inference for common LLM architectures running on ANE.
18
+
19
+ This enables seamless integration and on-device inference for low-power applications on edge devices, keeping data on-device for privacy and security.
20
+
21
+ This is critical for autonomous applications, where models run directly on the device without requiring an internet connection.
22
+
23
+ For more information, visit the [ANEMLL GitHub repository](https://github.com/anemll/anemll).
24
+
25
+
26
+ ---
27
+
28
+ ## License
29
+
30
+ ANEMLL is licensed under the [MIT License](https://opensource.org/license/mit).
31
+ The original model may require a separate license depending on the architecture:
32
+ - LLaMA models: Based on Meta's LLaMA and may require Meta's license
33
+ - Qwen models: Based on Alibaba's Qwen and may require Alibaba's license
34
+ - Other models: Check respective original model licenses
35
+
36
+ This model is converted for CoreML using ANEMLL's open-source conversion pipeline. It supports multiple LLM architectures including LLaMA, Qwen, and DeepSeek variants.
37
+
38
+ ---
39
+
40
+ ## Requirements
41
+
42
+ - **macOS Sequoia** with an Apple Neural Engine and at least 8 GB of RAM
43
+ - **CoreML Tools** and **HuggingFace Transformers** libraries
44
+ - **Python 3.9**
45
+
46
+ `chat.py` provides a sample inference script.
47
+ `chat_full.py` provides a sample inference script with history and conversation management.
48
+
49
+ **Installation**
50
+
51
+ 1. Download the model from Hugging Face:
52
+ ```bash
53
+ # Install required tools
54
+ pip install huggingface_hub
55
+
56
+ # Install Git LFS (Large File Storage)
57
+ # macOS with Homebrew:
58
+ brew install git-lfs
59
+ # Or Ubuntu/Debian:
60
+ # sudo apt-get install git-lfs
61
+
62
+ # Initialize Git LFS
63
+ git lfs install
64
+
65
+ # Clone the repository with model files
66
+ git clone https://huggingface.co/anemll/anemll-Qwen3-4B-ctx1024_0.3.0
67
+ ```
68
+
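+ Alternatively, since `huggingface_hub` is already installed, the repository can be fetched without git; a minimal sketch (the returned path is wherever the snapshot is cached):
+ ```python
+ from huggingface_hub import snapshot_download
+
+ # Download a full snapshot of this model repository (LFS files included)
+ local_dir = snapshot_download(repo_id="anemll/anemll-Qwen3-4B-ctx1024_0.3.0")
+ print(f"Model downloaded to: {local_dir}")
+ ```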
69
+ 2. Extract model files:
70
+ ```bash
71
+ # Navigate to cloned directory
72
+ cd anemll-Qwen3-4B-ctx1024_0.3.0
73
+
74
+ # Pull LFS files (model weights)
75
+ git lfs pull
76
+
77
+ # Extract CoreML model files
78
+ find . -type f -name "*.zip" -exec unzip {} \;
79
+ ```
80
+
81
+ 3. Install dependencies:
82
+ ```bash
83
+ pip install coremltools transformers
84
+ ```
85
+
86
+ **Coremltools:**
87
+
88
+ See the coremltools installation guide at https://coremltools.readme.io/v4.0/docs/installation
89
+
90
+ **How to Run**
91
+
92
+ 1. Basic chat interface:
93
+ ```bash
94
+ python chat.py --meta ./meta.yaml
95
+ ```
96
+
97
+ 2. Full conversation mode with history:
98
+ ```bash
99
+ python chat_full.py --meta ./meta.yaml
100
+ ```
101
+
102
+ > Note: The first time the model loads, macOS takes some time to compile and place it on the Neural Engine.
103
+ > Subsequent loads are much faster.
104
+ > Use Ctrl-D to exit, Ctrl-C to interrupt inference.
105
+
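+ For scripted, single-shot use, `chat.py` also accepts `--prompt` (answer once and exit) and `--save` (write the response to a file), for example:
+ ```bash
+ # One-shot generation: answer a single prompt, save the reply, then exit
+ python chat.py --meta ./meta.yaml --prompt "Who are you?" --save response.txt
+ ```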
106
+ **More Info**
107
+ Please check the following links for updates:
108
+
109
+ * [GitHub](https://github.com/anemll)
110
+ * [Hugging Face Models](https://huggingface.co/anemll)
111
+ * [Twitter/X](https://x.com/anemll)
112
+ * [Website](https://anemll.com)
113
+
114
+
115
116
+
117
+ # anemll-Qwen3-4B-ctx1024_0.3.0
118
+
119
+ This is a CoreML model converted using ANEMLL for Apple Neural Engine inference.
120
+
121
+ ## Available Distributions
122
+
123
+ ### Standard Distribution
124
+ - Contains zipped MLMODELC files
125
+ - Suitable for macOS and development
126
+
127
+ ### iOS Distribution
128
+ - Contains unzipped MLMODELC files
129
+ - Ready for iOS deployment
130
+ - Includes offline tokenizer support
131
+
132
+ ## Model Information
133
+ - Context Length: 1024
134
+ - Batch Size: 64
135
+ - Number of Chunks: 2
136
+
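+ These values come from the `meta.yaml` shipped with the model, which `chat.py` reads via `--meta`. A hypothetical sketch of the fields the scripts consume (the prefix and LUT values below are placeholders, not the actual settings of this release):
+ ```yaml
+ model_info:
+   parameters:
+     model_prefix: qwen       # placeholder; used to build file names like <prefix>_lm_head
+     context_length: 1024
+     batch_size: 64
+     num_chunks: 2
+     lut_ffn: none            # placeholder; LUT quantization bits, or 'none'
+     lut_lmhead: none         # placeholder
+     lut_embeddings: none     # placeholder
+     split_lm_head: 16        # 16 is the Qwen default noted in chat.py
+ ```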
137
+ ## Quick Start
138
+
139
+ ### Test in iOS/macOS App
140
+ Try our sample Chat-Bot app on TestFlight:
141
+ 1. Install TestFlight from App Store
142
+ 2. Join beta test: [TestFlight Link](https://testflight.apple.com/join/jrQq1D1C)
143
+ 3. The app includes a small pre-installed demo model
144
+ 4. You can add custom models via Hugging Face URLs
145
+
146
+ > [!Note]
147
+ > - The TestFlight app works on both iOS and macOS
148
+ > - Demonstrates proper model integration and provides a reference implementation
149
+ > - iOS requires unzipped MLMODELC files and config.json for offline tokenizer
150
+ > - macOS supports both zipped and unzipped model formats
151
+
chat.py ADDED
@@ -0,0 +1,949 @@
1
+ #!/usr/bin/env python3
2
+ # chat.py
3
+ # chat.py
4
+ # Copyright (c) 2025 Anemll
5
+ # Licensed under the MIT License
6
+
7
+ import argparse
8
+ import os
9
+ import re
10
+ import glob
11
+ from pathlib import Path
12
+ import coremltools as ct
13
+ from transformers import LlamaTokenizer, AutoTokenizer
14
+ import torch
15
+ import torch.nn.functional as F
16
+ import numpy as np
17
+ import queue
18
+ import threading
19
+ import time
20
+ import yaml
21
+ import sys
22
+
23
+ # ANSI color codes
24
+ LIGHT_BLUE = "\033[94m"
25
+ DARK_BLUE = "\033[34m"
26
+ LIGHT_GREEN = "\033[92m"
27
+ RESET_COLOR = "\033[0m"
28
+
29
+ # Add at top with other constants
30
+ WARMUP_TOKEN_LIMIT = 10 # Maximum tokens to generate during warmup
31
+
32
+ class TokenPrinter:
33
+ """Handles background printing of generated tokens."""
34
+ def __init__(self, tokenizer):
35
+ self.tokenizer = tokenizer
36
+ self.token_queue = queue.Queue()
37
+ self.stop_event = threading.Event()
38
+ self.thread = None
39
+ self.buffer = ""
40
+ self.lock = threading.Lock()
41
+ self.thinking = True # Track if we're still in thinking mode
42
+ self.decoding_buffer = [] # Buffer for token IDs
43
+ # Add token counting and timing
44
+ self.start_time = time.time()
45
+ self.token_count = 0
46
+ self.start()
47
+
48
+ def start(self):
49
+ """Start the printer thread."""
50
+ if self.thread is None:
51
+ self.thread = threading.Thread(target=self._print_worker)
52
+ self.thread.daemon = True
53
+ self.thread.start()
54
+
55
+ def add_token(self, token_id):
56
+ """Add a token to the print queue."""
57
+ if not self.stop_event.is_set():
58
+ self.token_queue.put(token_id)
59
+ self.token_count += 1
60
+
61
+ def drain_buffer(self):
62
+ """Decode token IDs from decoding_buffer in the main thread."""
63
+ if not self.decoding_buffer:
64
+ return
65
+
66
+ # Decode all tokens at once in the main thread
67
+ token_str = self.tokenizer.decode(self.decoding_buffer)
68
+ self.decoding_buffer.clear()
69
+
70
+ # Store the text in buffer for later saving to file
71
+ with self.lock:
72
+ self.buffer += token_str
73
+
74
+ # Color-handling logic
75
+ if self.thinking and "</think>" in token_str:
76
+ self.thinking = False
77
+ parts = token_str.split("</think>")
78
+ if len(parts) > 0:
79
+ print(parts[0] + "</think>", end='', flush=True)
80
+ if len(parts) > 1:
81
+ print(LIGHT_BLUE + parts[1], end='', flush=True)
82
+ else:
83
+ if not self.thinking:
84
+ print(LIGHT_BLUE + token_str, end='', flush=True)
85
+ else:
86
+ print(token_str, end='', flush=True)
87
+
88
+ def _print_worker(self):
89
+ """Worker thread that takes token_ids from the queue."""
90
+ while not self.stop_event.is_set():
91
+ try:
92
+ token_id = self.token_queue.get(timeout=0.01)
93
+ with self.lock:
94
+ self.decoding_buffer.append(token_id)
95
+ self.token_queue.task_done()
96
+ except queue.Empty:
97
+ continue
98
+ except Exception as e:
99
+ print(f"\nError: Token printer error: {str(e)}")
100
+ break
101
+
102
+ def stop(self):
103
+ """Stop the printer thread."""
104
+ if self.thread and self.thread.is_alive():
105
+ # Ensure any remaining tokens are processed
106
+ self.drain_buffer()
107
+ self.stop_event.set()
108
+ try:
109
+ self.thread.join(timeout=1.0)
110
+ except Exception:
111
+ pass
112
+ # Calculate and print tokens/s with shorter format in blue
113
+ elapsed = time.time() - self.start_time
114
+ if elapsed > 0 and self.token_count > 0:
115
+ tokens_per_sec = self.token_count / elapsed
116
+ print(f"\n{DARK_BLUE}{tokens_per_sec:.1f} t/s{RESET_COLOR}")
117
+ else:
118
+ print(RESET_COLOR) # Reset color at the end
119
+ return self.buffer
120
+
121
+ def parse_model_path(path):
122
+ """Parse model path and return full path with .mlmodelc or .mlpackage extension."""
123
+ path = Path(path)
124
+
125
+ # If path exists exactly as specified, return it
126
+ if path.exists():
127
+ return str(path)
128
+
129
+ # Try with both extensions
130
+ candidates = [
131
+ path, # Original path
132
+ path.with_suffix('.mlmodelc'), # With .mlmodelc
133
+ path.with_suffix('.mlpackage'), # With .mlpackage
134
+ Path(str(path) + '.mlmodelc'), # Handle case where extension is included
135
+ Path(str(path) + '.mlpackage')
136
+ ]
137
+
138
+ # Try all possible paths
139
+ for candidate in candidates:
140
+ if candidate.exists():
141
+ print(f"Found model at: {candidate}")
142
+ return str(candidate)
143
+
144
+ # If we get here, no valid path was found
145
+ print("\nError: Model not found. Tried following paths:")
146
+ for candidate in candidates:
147
+ print(f" {candidate}")
148
+ raise FileNotFoundError(f"Model not found: {path}")
149
+
150
+ def parse_ffn_filename(path):
151
+ """Parse FFN model filename to extract chunk information."""
152
+ path = Path(path)
153
+ pattern = r'FFN_PF.*_chunk_(\d+)of(\d+)'
154
+ match = re.search(pattern, path.name)
155
+
156
+ if match:
157
+ current_chunk = int(match.group(1))
158
+ total_chunks = int(match.group(2))
159
+ return current_chunk, total_chunks
160
+ return None, None
161
+
162
+ def find_all_chunks(base_path):
163
+ """Find all chunk files matching the base FFN path pattern."""
164
+ path = Path(base_path)
165
+ pattern = re.sub(r'_chunk_\d+of\d+', '_chunk_*', str(path))
166
+ return sorted(glob.glob(pattern))
167
+
168
+ def load_model(path, function_name=None):
169
+ """Load a CoreML model, handling both .mlmodelc and .mlpackage formats."""
170
+ path = Path(path)
171
+ compute_unit = ct.ComputeUnit.CPU_AND_NE
172
+
173
+ try:
174
+ if path.suffix == '.mlmodelc':
175
+ # For compiled models (.mlmodelc), use CompiledMLModel
176
+ if function_name:
177
+ return ct.models.CompiledMLModel(str(path), compute_unit, function_name=function_name)
178
+ else:
179
+ return ct.models.CompiledMLModel(str(path), compute_unit)
180
+ else:
181
+ # For packages (.mlpackage)
182
+ if function_name:
183
+ return ct.models.MLModel(str(path), function_name=function_name)
184
+ else:
185
+ return ct.models.MLModel(str(path))
186
+
187
+ except RuntimeError as e:
188
+ if "valid manifest does not exist" in str(e):
189
+ print(f"\nError: Could not load compiled model at {path}")
190
+ print("This might be because:")
191
+ print("1. The model is not properly compiled")
192
+ print("2. The model was compiled for a different OS version")
193
+ print("3. The model needs to be recompiled")
194
+ print("\nTry using the .mlpackage version instead, or recompile the model.")
195
+ raise
196
+
197
+ def load_metadata(model,args):
198
+ # Extract metadata and config parameters
199
+ metadata = {}
200
+ if hasattr(model, 'user_defined_metadata'):
201
+ meta = model.user_defined_metadata
202
+
203
+ # Extract key parameters with defaults
204
+ metadata['context_length'] = int(meta.get('com.anemll.context_length', 512))
205
+ metadata['state_length'] = int(meta.get('com.anemll.state_length', metadata['context_length'])) # Added state_length
206
+ metadata['batch_size'] = int(meta.get('com.anemll.batch_size', 64))
207
+ metadata['lut_bits'] = int(meta.get('com.anemll.lut_bits', 0))
208
+ metadata['num_chunks'] = int(meta.get('com.anemll.num_chunks', 1))
209
+
210
+ print("\nExtracted Parameters:")
211
+ print(f" Context Length: {metadata['context_length']}")
212
+ print(f" State Length: {metadata['state_length']}")
213
+ print(f" Prefill Batch Size: {metadata['batch_size']}")
214
+ print(f" LUT Bits: {metadata['lut_bits']}")
215
+ print(f" Number of Chunks: {metadata['num_chunks']}")
216
+
217
+ # Print model info
218
+ print("\nModel Info:")
219
+ if 'com.anemll.info' in meta:
220
+ print(f" {meta['com.anemll.info']}")
221
+ if 'com.github.apple.coremltools.version' in meta:
222
+ print(f" CoreML Tools: {meta['com.github.apple.coremltools.version']}")
223
+
224
+ # Print model input/output shapes
225
+ print("\nModel Shapes:")
226
+ if hasattr(model, 'input_description'):
227
+ print(" Inputs:")
228
+ for name, desc in model.input_description.items():
229
+ print(f" {name}: {desc}")
230
+ if hasattr(model, 'output_description'):
231
+ print(" Outputs:")
232
+ for name, desc in model.output_description.items():
233
+ print(f" {name}: {desc}")
234
+ else:
235
+ print("\nWarning: No metadata found in model")
236
+
237
+ # Check if model directory name contains context length pattern (ctxXXX)
238
+ ctx_len = 512
239
+ if args.context_length is None:
240
+ import re
241
+ ctx_match = re.search(r'ctx(\d+)', str(args.d))
242
+ if ctx_match:
243
+ ctx_len0 = int(ctx_match.group(1))
244
+ if 512 <= ctx_len0 <= 8096:
245
+ ctx_len = ctx_len0
246
+ print(f"\nDetected context length {ctx_len} from directory name")
247
+ else:
248
+ print(f"\nWarning: No context length found in directory {ctx_len} from directory name {args.d}")
249
+ else:
250
+ ctx_len = args.context_length
251
+
252
+ # Use defaults or values from args
253
+ metadata['context_length'] = ctx_len
254
+ metadata['state_length'] = ctx_len
255
+ # Get batch size from args or use default
256
+ metadata['batch_size'] = getattr(args, 'batch_size', 64)
257
+ metadata['lut_bits'] = 4
258
+ metadata['num_chunks'] = getattr(args, 'num_chunks', 4)
259
+ print("\nUsing parameters:")
260
+ print(f" Context Length: {metadata['context_length']}")
261
+ print(f" State Length: {metadata['state_length']}")
262
+ print(f" Prefill Batch Size: {metadata['batch_size']}")
263
+ print(f" LUT Bits: {metadata['lut_bits']}")
264
+ print(f" Number of Chunks: {metadata['num_chunks']}")
265
+
266
+ # Override with values from args if they exist
267
+ if hasattr(args, 'batch_size') and args.batch_size is not None:
268
+ metadata['batch_size'] = args.batch_size
269
+ print(f"\nOverriding batch size from args: {args.batch_size}")
270
+ if hasattr(args, 'num_chunks') and args.num_chunks is not None:
271
+ metadata['num_chunks'] = args.num_chunks
272
+ print(f"\nOverriding num chunks from args: {args.num_chunks}")
273
+
274
+ return metadata
275
+
276
+ def load_models(args,metadata):
277
+ """Load all required models and extract metadata."""
278
+ print("\nLoading models...")
279
+
280
+ try:
281
+ # Load embeddings model
282
+ print("\nLoading embeddings model...")
283
+ embed_path = parse_model_path(args.embed)
284
+ print(f"Loading from: {embed_path}")
285
+ embed_model = load_model(embed_path)
286
+ print("Embeddings model loaded successfully")
287
+ metadata = load_metadata(embed_model,args)
288
+
289
+
290
+
291
+ # Load LM head model
292
+ print("\nLoading LM head model...")
293
+ lmhead_path = parse_model_path(args.lmhead)
294
+ print(f"Loading from: {lmhead_path}")
295
+ lmhead_model = load_model(lmhead_path)
296
+ print("LM head model loaded successfully")
297
+
298
+ # Parse FFN path and find chunks if needed
299
+ print("\nLoading FFN+PREFILL model(s)...")
300
+ ffn_path = parse_model_path(args.ffn)
301
+ chunk_no, total_chunks = parse_ffn_filename(ffn_path)
302
+
303
+ ffn_models = []
304
+ if chunk_no and total_chunks:
305
+ print(f"\nDetected chunked FFN+PREFILL model ({total_chunks} chunks)")
306
+ # Find and load all chunks
307
+ chunk_paths = find_all_chunks(ffn_path)
308
+ if len(chunk_paths) != total_chunks:
309
+ raise ValueError(f"Found {len(chunk_paths)} chunks but filename indicates {total_chunks} chunks")
310
+
311
+ for chunk_path in chunk_paths:
312
+ print(f"\nLoading FFN+PREFILL chunk: {Path(chunk_path).name}")
313
+ try:
314
+ # For chunked models, we need both infer and prefill functions
315
+ ffn_models.append({
316
+ 'infer': load_model(chunk_path, function_name='infer'),
317
+ 'prefill': load_model(chunk_path, function_name='prefill')
318
+ })
319
+ print("Chunk loaded successfully")
320
+ except Exception as e:
321
+ print(f"Error loading chunk {chunk_path}: {str(e)}")
322
+ raise
323
+ metadata = load_metadata(ffn_models[0],args)
324
+
325
+ else:
326
+ print("\nLoading single FFN model...")
327
+ ffn_models.append(load_model(ffn_path))
328
+ print("FFN model loaded successfully")
329
+
330
+ return embed_model, ffn_models, lmhead_model, metadata
331
+
332
+ except Exception as e:
333
+ print(f"\nError loading models: {str(e)}")
334
+ print("\nPlease ensure all model files exist and are accessible.")
335
+ print("Expected files:")
336
+ print(f" Embeddings: {args.embed}")
337
+ print(f" LM Head: {args.lmhead}")
338
+ print(f" FFN: {args.ffn}")
339
+ raise
340
+
341
+ # At the top of the file, make this a default path
342
+
343
+ def initialize_tokenizer(model_path=None):
344
+ """Initialize and configure the tokenizer."""
345
+ try:
346
+
347
+
348
+ tokenizer = AutoTokenizer.from_pretrained(
349
+ str(model_path),
350
+ use_fast=False,
351
+ trust_remote_code=True
352
+ )
353
+
354
+ print("\nTokenizer Configuration:")
355
+ print(f"Tokenizer type: {type(tokenizer)}")
356
+ print(f"Tokenizer name: {tokenizer.__class__.__name__}")
357
+ print(f"Vocabulary size: {len(tokenizer)}")
358
+ print(f"Model max length: {tokenizer.model_max_length}")
359
+
360
+ if tokenizer.pad_token is None:
361
+ tokenizer.pad_token = tokenizer.eos_token
362
+ tokenizer.pad_token_id = tokenizer.eos_token_id
363
+ print("Set PAD token to EOS token")
364
+
365
+ tokenizer.padding_side = "left"
366
+
367
+ print(f"\nSpecial Tokens:")
368
+ print(f"PAD token: '{tokenizer.pad_token}' (ID: {tokenizer.pad_token_id})")
369
+ print(f"EOS token: '{tokenizer.eos_token}' (ID: {tokenizer.eos_token_id})")
370
+ print(f"BOS token: '{tokenizer.bos_token}' (ID: {tokenizer.bos_token_id})")
371
+ print(f"UNK token: '{tokenizer.unk_token}' (ID: {tokenizer.unk_token_id})")
372
+
373
+ return tokenizer
374
+
375
+ except Exception as e:
376
+ print(f"\nError: Failed to load tokenizer from {model_path}")
377
+ print(f"Error details: {str(e)}")
378
+ print(f"Error type: {type(e)}")
379
+ print("\nThis appears to be a tokenizer loading issue.")
380
+
381
+ # Check if it's the specific Qwen tokenizer file issue
382
+ if "expected str, bytes or os.PathLike object, not NoneType" in str(e):
383
+ print("\nThis error suggests the tokenizer files are missing or incomplete.")
384
+ print("For Qwen models, you need the original model directory with tokenizer files.")
385
+ print("Try using: --tokenizer ~/.cache/huggingface/hub/models--Qwen--Qwen3-0.6B/snapshots/YOUR_SNAPSHOT_ID")
386
+ else:
387
+ print("Please provide the path to a compatible model directory with tokenizer files.")
388
+ import traceback
389
+ traceback.print_exc()
390
+ raise
391
+
392
+
393
+
394
+ def make_causal_mask(length, start):
395
+ """Create causal attention mask."""
396
+ mask = np.full((1, 1, length, length), -np.inf, dtype=np.float16)
397
+ row_indices = np.arange(length).reshape(length, 1)
398
+ col_indices = np.arange(length).reshape(1, length)
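+ # Allow attention to positions at or before (row + start); later positions stay masked at -inf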
399
+ mask[:, :, col_indices <= (row_indices + start)] = 0
400
+ return mask
401
+
402
+ def initialize_causal_mask(context_length):
403
+ """Initialize causal mask for transformer attention."""
404
+ causal_mask = make_causal_mask(context_length, 0)
405
+ causal_mask = torch.tensor(causal_mask, dtype=torch.float16)
406
+ print(f"\nInitialized causal mask for context length {context_length}")
407
+ return causal_mask
408
+
409
+ def run_prefill(embed_model, ffn_models, input_ids, context_pos, context_length, batch_size=64, state=None, causal_mask=None):
410
+ """Run prefill on the input sequence."""
411
+ # Use provided causal mask or create one if not provided
412
+ if causal_mask is None:
413
+ causal_mask = make_causal_mask(context_length, 0)
414
+ causal_mask = torch.tensor(causal_mask, dtype=torch.float16)
415
+
416
+ # Process in batches
417
+ batch_pos = 0
418
+ while batch_pos < context_pos:
419
+ batch_end = min(batch_pos + batch_size, context_pos)
420
+ current_batch_size = batch_end - batch_pos
421
+
422
+ # Get current batch
423
+ batch_input = input_ids[:, batch_pos:batch_end]
424
+
425
+ # Always pad to full batch size for prefill
426
+ batch_input = F.pad(
427
+ batch_input,
428
+ (0, batch_size - current_batch_size),
429
+ value=0
430
+ )
431
+
432
+ # Generate position IDs for full batch size
433
+ position_ids = torch.arange(batch_pos, batch_pos+batch_size, dtype=torch.int32) # Changed: Always use full batch size
434
+ batch_causal_mask = causal_mask[:, :, batch_pos:batch_pos+batch_size, :] # Changed: Use full batch size
435
+
436
+ # Run embeddings with proper batch size
437
+ hidden_states = torch.from_numpy(
438
+ embed_model.predict({
439
+ 'input_ids': batch_input.numpy(),
440
+ 'batch_size': np.array([batch_size], dtype=np.int32) # Add batch_size parameter
441
+ })['hidden_states']
442
+ )
443
+
444
+ # Run through FFN chunks with state
445
+ for ffn_model in ffn_models:
446
+ if isinstance(ffn_model, dict):
447
+ inputs = {
448
+ 'hidden_states': hidden_states.numpy(), # [1, 64, hidden_size]
449
+ 'position_ids': position_ids.numpy(), # [64]
450
+ 'causal_mask': batch_causal_mask.numpy(), # [1, 1, 64, context_length]
451
+ 'current_pos': np.array([batch_pos], dtype=np.int32) # [1]
452
+ }
453
+ output = ffn_model['prefill'].predict(inputs, state)
454
+ hidden_states = torch.from_numpy(output['output_hidden_states'])
455
+
456
+ batch_pos = batch_end
457
+
458
+ return torch.tensor([context_pos], dtype=torch.int32)
459
+
460
+ def generate_next_token(embed_model, ffn_models, lmhead_model, input_ids, pos, context_length, metadata, state=None, causal_mask=None, temperature=0.0):
461
+ """Generate the next token."""
462
+ # Get current token
463
+ current_token = input_ids[:, pos-1:pos] # [1, 1]
464
+
465
+ # Run embeddings
466
+ hidden_states = torch.from_numpy(
467
+ embed_model.predict({'input_ids': current_token.numpy()})['hidden_states']
468
+ ) # [1, 1, hidden_size]
469
+
470
+ # Create masks
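+ # update_mask is 1.0 only at pos-1, marking the single KV-cache slot this token writes to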
471
+ update_mask = torch.zeros((1, 1, context_length, 1), dtype=torch.float16)
472
+ update_mask[0, 0, pos-1, 0] = 1.0
473
+ position_ids = torch.tensor([pos-1], dtype=torch.int32) # [1]
474
+
475
+ # Use provided causal mask or create one if not provided
476
+ if causal_mask is None:
477
+ causal_mask_data = make_causal_mask(context_length, 0)
478
+ single_causal_mask = torch.tensor(causal_mask_data[:, :, pos-1:pos, :], dtype=torch.float16) # [1, 1, 1, context_length]
479
+ else:
480
+ single_causal_mask = causal_mask[:, :, pos-1:pos, :]
481
+
482
+ # Run through FFN chunks with state
483
+ for ffn_model in ffn_models:
484
+ if isinstance(ffn_model, dict):
485
+ inputs = {
486
+ 'hidden_states': hidden_states.numpy(),
487
+ 'update_mask': update_mask.numpy(),
488
+ 'position_ids': position_ids.numpy(),
489
+ 'causal_mask': single_causal_mask.numpy(),
490
+ 'current_pos': position_ids.numpy()
491
+ }
492
+ output = ffn_model['infer'].predict(inputs, state)
493
+ hidden_states = torch.from_numpy(output['output_hidden_states'])
494
+
495
+ # Run LM head
496
+ lm_output = lmhead_model.predict({'hidden_states': hidden_states.numpy()})
497
+ # Debug print
498
+ #print("\nLM Head output keys:", list(lm_output.keys()))
499
+
500
+ # Get number of logits from metadata, using split_lm_head if available
501
+ # First check for split_lm_head (new), then num_logits (legacy), default to 8
502
+ num_logits = metadata.get('split_lm_head', metadata.get('num_logits', 8))
503
+
504
+ # Combine logits1-N if they exist
505
+ if 'logits1' in lm_output:
506
+ # Concatenate all logits parts
507
+ logits_parts = []
508
+ for i in range(1, num_logits + 1):
509
+ key = f'logits{i}'
510
+ if key in lm_output:
511
+ logits_parts.append(torch.from_numpy(lm_output[key]))
512
+ logits = torch.cat(logits_parts, dim=-1) # Concatenate along vocab dimension
513
+ else:
514
+ # Try output_logits as fallback
515
+ logits = torch.from_numpy(lm_output['output_logits'])
516
+
517
+ # Apply temperature and sample
518
+ if temperature > 0:
519
+ logits = logits / temperature
520
+ probs = F.softmax(logits[0, -1, :], dim=-1)
521
+ next_token = torch.multinomial(probs, num_samples=1).item()
522
+ else:
523
+ next_token = torch.argmax(logits[0, -1, :]).item()
524
+
525
+ return next_token
526
+
527
+ def create_unified_state(ffn_models, context_length):
528
+ """Create unified KV cache state for transformer."""
529
+ if isinstance(ffn_models[0], dict):
530
+ # Use first FFN model's prefill function to create state
531
+ state = ffn_models[0]['prefill'].make_state()
532
+ print(f"\nCreated unified transformer state for {len(ffn_models)} chunks")
533
+ return state
534
+ else:
535
+ state = ffn_models[0].make_state()
536
+ print("\nCreated unified transformer state")
537
+ return state
538
+
539
+ def chat_loop(embed_model, ffn_models, lmhead_model, tokenizer, metadata, state, causal_mask=None, auto_prompt=None, warmup=False, save_file=None):
540
+ """Interactive chat loop."""
541
+ context_length = metadata.get('context_length')
542
+ batch_size = metadata.get('batch_size', 64)
543
+
544
+ if not warmup:
545
+ print(f"\nUsing context length: {context_length}")
546
+ print("\nStarting chat session. Press Ctrl+D to exit.")
547
+ print("Type your message and press Enter to chat.")
548
+
549
+ # Check if tokenizer has chat template and if it works
550
+ has_chat_template = False
551
+ try:
552
+ # Test if chat template works
553
+ test_messages = [{"role": "user", "content": "test"}]
554
+ tokenizer.apply_chat_template(test_messages, return_tensors="pt")
555
+ has_chat_template = True
556
+ if not warmup:
557
+ print("\nUsing chat template for prompts")
558
+ except:
559
+ if not warmup:
560
+ print("\nUsing manual formatting for prompts")
561
+
562
+ conversation = []
563
+
564
+ try:
565
+ while True:
566
+ try:
567
+ if not warmup:
568
+ print(f"\n{LIGHT_GREEN}You:{RESET_COLOR}", end=' ', flush=True)
569
+ if auto_prompt is not None:
570
+ user_input = auto_prompt
571
+ if not warmup:
572
+ print(user_input)
573
+ else:
574
+ user_input = input().strip()
575
+ except EOFError:
576
+ if not warmup:
577
+ print("\nExiting chat...")
578
+ break
579
+
580
+ if not user_input:
581
+ continue
582
+
583
+ # Format prompt based on tokenizer capabilities
584
+ if has_chat_template:
585
+ messages = [{"role": "user", "content": user_input}]
586
+ input_ids = tokenizer.apply_chat_template(
587
+ messages,
588
+ return_tensors="pt",
589
+ add_generation_prompt=True
590
+ ).to(torch.int32)
591
+ else:
592
+ # Manual formatting for Llama models without chat template
593
+ formatted_prompt = f"[INST] {user_input} [/INST]"
594
+ input_ids = tokenizer(
595
+ formatted_prompt,
596
+ return_tensors="pt",
597
+ add_special_tokens=True
598
+ ).input_ids.to(torch.int32)
599
+
600
+ context_pos = input_ids.size(1)
601
+
602
+ if not warmup:
603
+ print(f"\n{LIGHT_BLUE}Assistant:{RESET_COLOR}", end=' ', flush=True)
604
+
605
+ # Initialize token printer
606
+ token_printer = TokenPrinter(tokenizer)
607
+ tokens_generated = 0 # Track number of tokens
608
+
609
+ try:
610
+ # Start prefill timing
611
+ prefill_start = time.time()
612
+
613
+ # Run prefill with state and causal mask
614
+ current_pos = run_prefill(
615
+ embed_model,
616
+ ffn_models,
617
+ input_ids,
618
+ context_pos,
619
+ context_length,
620
+ batch_size,
621
+ state,
622
+ causal_mask
623
+ )
624
+
625
+ # Calculate prefill timing
626
+ prefill_time = time.time() - prefill_start
627
+ prefill_tokens = context_pos # Number of tokens in input
628
+ prefill_tokens_per_sec = prefill_tokens / prefill_time if prefill_time > 0 else 0
629
+
630
+ # Generation loop with state
631
+ input_ids = input_ids
632
+ pos = context_pos
633
+ inference_start = time.time()
634
+ inference_tokens = 0
635
+
636
+ while pos < context_length - 1:
637
+ # Generate next token with causal mask
638
+ next_token = generate_next_token(
639
+ embed_model,
640
+ ffn_models,
641
+ lmhead_model,
642
+ input_ids,
643
+ pos,
644
+ context_length,
645
+ metadata,
646
+ state,
647
+ causal_mask
648
+ )
649
+
650
+ # Add token to sequence
651
+ if pos < input_ids.size(1):
652
+ input_ids[0, pos] = next_token
653
+ else:
654
+ input_ids = torch.cat([
655
+ input_ids,
656
+ torch.tensor([[next_token]], dtype=torch.int32)
657
+ ], dim=1)
658
+
659
+ # Add to printer only if not in warmup
660
+ if not warmup:
661
+ token_printer.add_token(next_token)
662
+ token_printer.drain_buffer()
663
+
664
+ pos += 1
665
+ tokens_generated += 1
666
+ inference_tokens += 1
667
+
668
+ # Check limits
669
+ if warmup and tokens_generated >= WARMUP_TOKEN_LIMIT:
670
+ break
671
+
672
+ if next_token == tokenizer.eos_token_id:
673
+ break
674
+
675
+ # Calculate inference timing
676
+ inference_time = time.time() - inference_start
677
+ inference_tokens_per_sec = inference_tokens / inference_time if inference_time > 0 else 0
678
+
679
+ # Get final response and add to conversation
680
+ if not warmup:
681
+ response = token_printer.stop()
682
+ # Print timing stats
683
+ prefill_ms = prefill_time * 1000 # Convert to milliseconds
684
+ print(f"\nPrefill: {prefill_ms:.1f}ms ({prefill_tokens_per_sec:.1f} t/s)")
685
+ print(f"Inference: {inference_tokens_per_sec:.1f} t/s")
686
+ print(f"Total: Generated {tokens_generated} tokens in {prefill_time + inference_time:.2f}s")
687
+ conversation.append({"role": "assistant", "content": response})
688
+
689
+ # Save response to file if requested
690
+ if save_file:
691
+ try:
692
+ # Add small delay to ensure all tokens are processed
693
+ time.sleep(0.5)
694
+
695
+ # Make sure response ends with EOS token if it's supposed to
696
+ if response and not response.endswith("<|eot_id|>") and not response.endswith("</s>"):
697
+ if tokenizer.eos_token:
698
+ eos_text = tokenizer.decode([tokenizer.eos_token_id])
699
+ if not response.endswith(eos_text):
700
+ print(f"\n{DARK_BLUE}Adding missing EOS token for consistency{RESET_COLOR}")
701
+ response += eos_text
702
+
703
+ with open(save_file, 'w') as f:
704
+ f.write(response)
705
+ print(f"\n{DARK_BLUE}Response saved to file: {save_file}{RESET_COLOR}")
706
+ except Exception as e:
707
+ print(f"\n{DARK_BLUE}Error saving to file: {str(e)}{RESET_COLOR}")
708
+ else:
709
+ token_printer.stop() # Clean up without printing stats
710
+
711
+ # Exit after one response in auto_prompt mode
712
+ if auto_prompt is not None:
713
+ break
714
+
715
+ except KeyboardInterrupt:
716
+ print("\nGeneration interrupted")
717
+ token_printer.stop()
718
+ continue
719
+
720
+ except Exception as e:
721
+ print(f"\nError in chat loop: {str(e)}")
722
+ import traceback
723
+ traceback.print_exc()
724
+
725
+ def parse_args():
726
+ parser = argparse.ArgumentParser(description='Chat with CoreML LLaMA, gil resolved (c) 2025 Anemll')
727
+
728
+ # Add meta.yaml option
729
+ parser.add_argument('--meta', type=str, help='Path to meta.yaml to load all parameters')
730
+
731
+ # Model paths
732
+ parser.add_argument('--d', '--dir', type=str, default='.',
733
+ help='Directory containing model files (default: current directory)')
734
+ parser.add_argument('--embed', type=str, required=False,
735
+ help='Path to embeddings model (relative to --dir)')
736
+ parser.add_argument('--ffn', type=str, required=False,
737
+ help='Path to FFN model (can be chunked, relative to --dir)')
738
+ parser.add_argument('--lmhead', type=str, required=False,
739
+ help='Path to LM head model (relative to --dir)')
740
+ parser.add_argument('--tokenizer', type=str, required=False,
741
+ help='Path to tokenizer')
742
+
743
+ # Add new argument for auto-generation
744
+ parser.add_argument('--prompt', type=str,
745
+ help='If specified, run once with this prompt and exit')
746
+
747
+ # Add save option
748
+ parser.add_argument('--save', type=str,
749
+ help='Save assistant\'s response to specified file')
750
+
751
+ # Add no-warmup flag
752
+ parser.add_argument('--nw', action='store_true',
753
+ help='Skip warmup phase')
754
+
755
+ # Model configuration
756
+ parser.add_argument('--context-length', type=int,
757
+ help='Context length for the model (default: 512), if not provided, it will be detected from the model directory name ctxNUMBER')
758
+ parser.add_argument('--batch-size', type=int,
759
+ help='Batch size for prefill (default: 64)')
760
+ parser.add_argument('--num-logits', type=int, default=8,
761
+ help='Number of logits outputs from LM head (default: 8, legacy)')
762
+ parser.add_argument('--split-lm-head', type=int,
763
+ help='Number of logits splits from LM head (default: 8 for llama, 16 for qwen)')
764
+
765
+ args = parser.parse_args()
766
+
767
+ # If meta.yaml is provided, load parameters from it
768
+ if args.meta:
769
+ try:
770
+ with open(args.meta, 'r') as f:
771
+ meta = yaml.safe_load(f)
772
+ params = meta['model_info']['parameters']
773
+
774
+ # Set model directory to meta.yaml directory if not specified
775
+ if not args.d or args.d == '.':
776
+ args.d = str(Path(args.meta).parent)
777
+
778
+ # Build model paths based on parameters
779
+ prefix = params.get('model_prefix', 'llama') # Default to 'llama' if not specified
780
+ lut_ffn = f"_lut{params['lut_ffn']}" if params['lut_ffn'] != 'none' else ''
781
+ lut_lmhead = f"_lut{params['lut_lmhead']}" if params['lut_lmhead'] != 'none' else ''
782
+ lut_embeddings = f"_lut{params['lut_embeddings']}" if params['lut_embeddings'] != 'none' else ''
783
+ num_chunks = int(params['num_chunks'])
784
+
785
+ # Set model paths if not specified
786
+ if not args.lmhead:
787
+ args.lmhead = f'{prefix}_lm_head{lut_lmhead}'
788
+ if not args.embed:
789
+ args.embed = f'{prefix}_embeddings{lut_embeddings}' # Changed from lm_head to embeddings
790
+ if not args.ffn:
791
+ args.ffn = f'{prefix}_FFN_PF{lut_ffn}_chunk_01of{num_chunks:02d}'
792
+ if not args.tokenizer:
793
+ # Check if there's a tokenizer_path parameter in meta.yaml
794
+ if 'tokenizer_path' in params:
795
+ args.tokenizer = params['tokenizer_path']
796
+ else:
797
+ # Default to the model directory, but this might need manual override
798
+ args.tokenizer = args.d
799
+
800
+ # Set other parameters if not overridden by command line
801
+ if args.context_length is None:
802
+ args.context_length = int(params['context_length'])
803
+ if args.batch_size is None:
804
+ args.batch_size = int(params['batch_size'])
805
+ args.num_chunks = num_chunks
806
+ # Add num_logits parameter with default of 8, override command line if present in meta
807
+ if 'num_logits' in params:
808
+ args.num_logits = int(params['num_logits'])
809
+
810
+ # Add split_lm_head parameter with default of 8
811
+ if 'split_lm_head' in params:
812
+ args.split_lm_head = int(params['split_lm_head'])
813
+ else:
814
+ args.split_lm_head = 8 # Default value for backward compatibility
815
+
816
+ print(f"\nLoaded parameters from {args.meta}:")
817
+ print(f" Context Length: {args.context_length}")
818
+ print(f" Batch Size: {args.batch_size}")
819
+ print(f" Num Chunks: {args.num_chunks}")
820
+ print(f" Num Logits: {args.num_logits}")
821
+ print(f" Split LM Head: {args.split_lm_head}")
822
+ print(f" Models Directory: {args.d}")
823
+ print(f" Embeddings: {args.embed}")
824
+ print(f" LM Head: {args.lmhead}")
825
+ print(f" FFN: {args.ffn}")
826
+
827
+ except Exception as e:
828
+ print(f"\nError loading meta.yaml: {str(e)}")
829
+ sys.exit(1)
830
+ else:
831
+ # If no meta.yaml, set default split_lm_head if not provided
832
+ if not hasattr(args, 'split_lm_head') or args.split_lm_head is None:
833
+ args.split_lm_head = args.num_logits # Use num_logits as fallback
834
+
835
+ return args
836
+
837
+ def main():
838
+ args = parse_args()
839
+
840
+ # Convert directory to absolute path
841
+ model_dir = Path(args.d).resolve()
842
+ if not model_dir.exists():
843
+ print(f"\nError: Model directory not found: {model_dir}")
844
+ return 1
845
+
846
+ print(f"\nUsing model directory: {model_dir}")
847
+ print(f"Context length: {args.context_length}")
848
+
849
+ try:
850
+ # Update paths to be relative to model directory
851
+ args.embed = str(model_dir / args.embed)
852
+ args.ffn = str(model_dir / args.ffn)
853
+ args.lmhead = str(model_dir / args.lmhead)
854
+
855
+ # Handle tokenizer path separately since it's not relative to model_dir
856
+ if args.tokenizer is None:
857
+ args.tokenizer = str(model_dir)
858
+
859
+ # Check if tokenizer directory exists and has required files
860
+ tokenizer_path = Path(args.tokenizer)
861
+ if not tokenizer_path.exists():
862
+ print(f"\nError: Tokenizer directory not found: {args.tokenizer}")
863
+ return 1
864
+
865
+ # Check if tokenizer has the required files
866
+ required_files = ['tokenizer.json', 'tokenizer_config.json']
867
+ missing_files = [f for f in required_files if not (tokenizer_path / f).exists()]
868
+
869
+ if missing_files:
870
+ print(f"\nWarning: Tokenizer directory missing required files: {missing_files}")
871
+ print(f"Current tokenizer path: {args.tokenizer}")
872
+ print("\nFor Qwen models, you may need to specify the original model directory:")
873
+ print(" python chat.py --meta /tmp/qwen/meta.yaml --tokenizer ~/.cache/huggingface/hub/models--Qwen--Qwen3-0.6B/snapshots/YOUR_SNAPSHOT_ID")
874
+ print("\nOr add 'tokenizer_path' to your meta.yaml file.")
875
+
876
+ args.tokenizer = str(Path(args.tokenizer).resolve()) # Convert to absolute path
877
+ print(f"Using tokenizer path: {args.tokenizer}")
878
+
879
+ metadata = {}
880
+ # Load models and extract metadata
881
+ embed_model, ffn_models, lmhead_model, metadata = load_models(args,metadata)
882
+
883
+ print(f"\nMetadata befor args.context_length: {metadata}")
884
+
885
+ # Override context length from command line if provided
886
+ if args.context_length is not None:
887
+ metadata['context_length'] = args.context_length
888
+ metadata['state_length'] = args.context_length # Also update state_length
889
+ print(f"\nOverriding context length from command line: {args.context_length}")
890
+
891
+ # Add num_logits to metadata (legacy support)
892
+ metadata['num_logits'] = getattr(args, 'num_logits', 8)
893
+
894
+ # Add split_lm_head to metadata (preferred)
895
+ metadata['split_lm_head'] = getattr(args, 'split_lm_head', getattr(args, 'num_logits', 8))
896
+
897
+ print(f"\nMetadata after load_models: {metadata}")
898
+ print(f"Using split_lm_head value: {metadata.get('split_lm_head', 8)}")
899
+
900
+ # Load tokenizer with resolved path
901
+ tokenizer = initialize_tokenizer(args.tokenizer)
902
+ if tokenizer is None:
903
+ raise RuntimeError("Failed to initialize tokenizer")
904
+
905
+ # Create unified state once
906
+ state = create_unified_state(ffn_models, metadata['context_length'])
907
+
908
+ # Initialize causal mask once
909
+ causal_mask = initialize_causal_mask(metadata['context_length'])
910
+
911
+ # Warmup runs to prevent Python GIL issues with CoreML !
912
+ if not args.nw:
913
+ for i in range(2):
914
+ chat_loop(
915
+ embed_model=embed_model,
916
+ ffn_models=ffn_models,
917
+ lmhead_model=lmhead_model,
918
+ tokenizer=tokenizer,
919
+ metadata=metadata,
920
+ state=state,
921
+ causal_mask=causal_mask, # Pass the causal mask
922
+ warmup=True,
923
+ auto_prompt="who are you?"
924
+ )
925
+
926
+ # Main run
927
+ chat_loop(
928
+ embed_model=embed_model,
929
+ ffn_models=ffn_models,
930
+ lmhead_model=lmhead_model,
931
+ tokenizer=tokenizer,
932
+ metadata=metadata,
933
+ state=state,
934
+ causal_mask=causal_mask, # Pass the causal mask
935
+ warmup=False,
936
+ auto_prompt=args.prompt,
937
+ save_file=args.save
938
+ )
939
+
940
+ except Exception as e:
941
+ print(f"\nError: {str(e)}")
942
+ import traceback
943
+ traceback.print_exc()
944
+ return 1
945
+
946
+ return 0
947
+
948
+ if __name__ == "__main__":
949
+ exit(main())
chat_full.py ADDED
@@ -0,0 +1,989 @@
1
+ #!/usr/bin/env python3
2
+ # chat_full.py
3
+ # chat_full.py
4
+ # Copyright (c) 2025 Anemll
5
+ # Licensed under the MIT License
6
+
7
+ import argparse
8
+ import os
9
+ import re
10
+ import glob
11
+ from pathlib import Path
12
+ import coremltools as ct
13
+ from transformers import LlamaTokenizer, AutoTokenizer
14
+ import torch
15
+ import torch.nn.functional as F
16
+ import numpy as np
17
+ import queue
18
+ import threading
19
+ import time
20
+ import yaml
21
+ import sys
22
+
23
+ # ANSI color codes
24
+ LIGHT_BLUE = "\033[94m"
25
+ DARK_BLUE = "\033[34m"
26
+ LIGHT_GREEN = "\033[92m"
27
+ RESET_COLOR = "\033[0m"
28
+
29
+ # Add at the top with other constants
30
+ WARMUP_TOKEN_LIMIT = 10 # Maximum tokens to generate during warmup
31
+ THINKING_MODE = False
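+ # Presumably toggles use of the <think>-style system prompt below (THINKING_PROMPT)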
32
+ THINKING_PROMPT = """You are a deep thinking AI, you may use extremely long chains of thought to deeply consider the problem and deliberate with yourself via systematic reasoning processes to help come to a correct solution prior to answering. You should enclose your thoughts and internal monologue inside <think> </think> tags, and then provide your solution or response to the problem."""
33
+ DEBUG_LEVEL = 0 # Default debug level
34
+
35
+ class TokenPrinter:
36
+ """Handles background printing of generated tokens."""
37
+ def __init__(self, tokenizer):
38
+ self.tokenizer = tokenizer
39
+ self.token_queue = queue.Queue()
40
+ self.stop_event = threading.Event()
41
+ self.thread = None
42
+ self.buffer = ""
43
+ self.lock = threading.Lock()
44
+ self.thinking = True # Track if we're still in thinking mode
45
+ self.decoding_buffer = [] # Buffer for token IDs
46
+ # Timing and stats tracking
47
+ self.start_time = time.time()
48
+ self.token_count = 0
49
+ self.prefill_time = 0
50
+ self.inference_time = 0
51
+ self.context_pos = 0
52
+ self.start()
53
+
54
+ def start(self):
55
+ """Start the printer thread."""
56
+ if self.thread is None:
57
+ self.thread = threading.Thread(target=self._print_worker)
58
+ self.thread.daemon = True
59
+ self.thread.start()
60
+
61
+ def add_token(self, token_id):
62
+ """Add a token to the print queue."""
63
+ if not self.stop_event.is_set():
64
+ self.token_queue.put(token_id)
65
+ self.token_count += 1
66
+
67
+ def drain_buffer(self):
68
+ """Decode token IDs from decoding_buffer in the main thread."""
69
+ if not self.decoding_buffer:
70
+ return
71
+
72
+ # Decode all tokens at once in the main thread
73
+ token_str = self.tokenizer.decode(self.decoding_buffer)
74
+ self.decoding_buffer.clear()
75
+
76
+ # Color-handling logic
77
+ if self.thinking and "</think>" in token_str:
78
+ self.thinking = False
79
+ parts = token_str.split("</think>")
80
+ if len(parts) > 0:
81
+ print(parts[0] + "</think>", end='', flush=True)
82
+ if len(parts) > 1:
83
+ print(LIGHT_BLUE + parts[1], end='', flush=True)
84
+ else:
85
+ if not self.thinking:
86
+ print(LIGHT_BLUE + token_str, end='', flush=True)
87
+ else:
88
+ print(token_str, end='', flush=True)
89
+
90
+ def _print_worker(self):
91
+ """Worker thread that takes token_ids from the queue."""
92
+ while not self.stop_event.is_set():
93
+ try:
94
+ token_id = self.token_queue.get(timeout=0.01)
95
+ with self.lock:
96
+ self.decoding_buffer.append(token_id)
97
+ self.token_queue.task_done()
98
+ except queue.Empty:
99
+ continue
100
+ except Exception as e:
101
+ print(f"\nError: Token printer error: {str(e)}")
102
+ break
103
+
104
+ def stop(self):
105
+ """Stop the printer thread."""
106
+ if self.thread and self.thread.is_alive():
107
+ self.stop_event.set()
108
+ try:
109
+ self.thread.join(timeout=1.0)
110
+ except Exception:
111
+ pass
112
+ print(RESET_COLOR) # Reset color at the end
113
+ return self.buffer
114
+
115
+ def set_timing(self, prefill_time, inference_time, context_pos):
116
+ """Set timing information."""
117
+ self.prefill_time = prefill_time
118
+ self.inference_time = inference_time
119
+ self.context_pos = context_pos
120
+
121
+ def parse_model_path(path):
122
+ """Parse model path and return full path with .mlmodelc or .mlpackage extension."""
123
+ path = Path(path)
124
+
125
+ # If path exists exactly as specified, return it
126
+ if path.exists():
127
+ return str(path)
128
+
129
+ # Try with both extensions
130
+ candidates = [
131
+ path, # Original path
132
+ path.with_suffix('.mlmodelc'), # With .mlmodelc
133
+ path.with_suffix('.mlpackage'), # With .mlpackage
134
+ Path(str(path) + '.mlmodelc'), # Handle case where extension is included
135
+ Path(str(path) + '.mlpackage')
136
+ ]
137
+
138
+ # Try all possible paths
139
+ for candidate in candidates:
140
+ if candidate.exists():
141
+ print(f"Found model at: {candidate}")
142
+ return str(candidate)
143
+
144
+ # If we get here, no valid path was found
145
+ print("\nError: Model not found. Tried following paths:")
146
+ for candidate in candidates:
147
+ print(f" {candidate}")
148
+ raise FileNotFoundError(f"Model not found: {path}")
149
+
150
+ def parse_ffn_filename(path):
151
+ """Parse FFN model filename to extract chunk information."""
152
+ path = Path(path)
153
+ pattern = r'FFN_PF.*_chunk_(\d+)of(\d+)'
154
+ match = re.search(pattern, path.name)
155
+
156
+ if match:
157
+ current_chunk = int(match.group(1))
158
+ total_chunks = int(match.group(2))
159
+ return current_chunk, total_chunks
160
+ return None, None
161
+
162
+ def find_all_chunks(base_path):
163
+ """Find all chunk files matching the base FFN path pattern."""
164
+ path = Path(base_path)
165
+ pattern = re.sub(r'_chunk_\d+of\d+', '_chunk_*', str(path))
166
+ return sorted(glob.glob(pattern))
167
+
168
+ def load_model(path, function_name=None):
169
+ """Load a CoreML model, handling both .mlmodelc and .mlpackage formats."""
170
+ path = Path(path)
171
+ compute_unit = ct.ComputeUnit.CPU_AND_NE
172
+
173
+ try:
174
+ if path.suffix == '.mlmodelc':
175
+ # For compiled models (.mlmodelc), use CompiledMLModel
176
+ if function_name:
177
+ return ct.models.CompiledMLModel(str(path), compute_unit, function_name=function_name)
178
+ else:
179
+ return ct.models.CompiledMLModel(str(path), compute_unit)
180
+ else:
181
+ # For packages (.mlpackage)
182
+ if function_name:
183
+ return ct.models.MLModel(str(path), function_name=function_name)
184
+ else:
185
+ return ct.models.MLModel(str(path))
186
+
187
+ except RuntimeError as e:
188
+ if "valid manifest does not exist" in str(e):
189
+ print(f"\nError: Could not load compiled model at {path}")
190
+ print("This might be because:")
191
+ print("1. The model is not properly compiled")
192
+ print("2. The model was compiled for a different OS version")
193
+ print("3. The model needs to be recompiled")
194
+ print("\nTry using the .mlpackage version instead, or recompile the model.")
195
+ raise
196
+
197
+ def parse_args():
198
+ parser = argparse.ArgumentParser(description='Full Chat with CoreML LLaMA with context window shifting, gil resolved (c) 2025 Anemll')
199
+
200
+ # Add meta.yaml option
201
+ parser.add_argument('--meta', type=str, help='Path to meta.yaml to load all parameters')
202
+
203
+ # Add existing arguments
204
+ parser.add_argument('--d', '--dir', type=str, default='.',
205
+ help='Directory containing model files (default: current directory)')
206
+ parser.add_argument('--embed', type=str, required=False,
207
+ help='Path to embeddings model (relative to --dir)')
208
+ parser.add_argument('--ffn', type=str, required=False,
209
+ help='Path to FFN model (can be chunked, relative to --dir)')
210
+ parser.add_argument('--lmhead', type=str, required=False,
211
+ help='Path to LM head model (relative to --dir)')
212
+ parser.add_argument('--tokenizer', type=str, required=False,
213
+ help='Path to tokenizer')
214
+
215
+ # Add new argument for auto-generation
216
+ parser.add_argument('--prompt', type=str,
217
+ help='If specified, run once with this prompt and exit')
218
+
219
+ # Add no-warmup flag
220
+ parser.add_argument('--nw', action='store_true',
221
+ help='Skip warmup phase')
222
+
223
+ # Add debug level
224
+ parser.add_argument('--debug-level', type=int, default=0,
225
+ help='Debug level (0=none, 1=print prompts, 2=more verbose)')
226
+
227
+ # Model configuration
228
+ parser.add_argument('--context-length', type=int,
229
+ help='Context length for the model (default: 512), if not provided, it will be detected from the model directory name ctxNUMBER')
230
+ parser.add_argument('--batch-size', type=int,
231
+ help='Batch size for prefill (default: 64)')
232
+
233
+ args = parser.parse_args()
234
+
235
+ # If meta.yaml is provided, load parameters from it
236
+ if args.meta:
237
+ try:
238
+ with open(args.meta, 'r') as f:
239
+ meta = yaml.safe_load(f)
240
+ params = meta['model_info']['parameters']
241
+
242
+ # Set model directory to meta.yaml directory if not specified
243
+ if not args.d or args.d == '.':
244
+ args.d = str(Path(args.meta).parent)
245
+
246
+ # Build model paths based on parameters
247
+ prefix = params.get('model_prefix', 'llama') # Default to 'llama' if not specified
248
+ lut_ffn = f"_lut{params['lut_ffn']}" if params['lut_ffn'] != 'none' else ''
249
+ lut_lmhead = f"_lut{params['lut_lmhead']}" if params['lut_lmhead'] != 'none' else ''
250
+ lut_embeddings = f"_lut{params['lut_embeddings']}" if params['lut_embeddings'] != 'none' else ''
251
+ num_chunks = int(params['num_chunks'])
252
+
253
+ # Set model paths if not specified
254
+ if not args.lmhead:
255
+ args.lmhead = f'{prefix}_lm_head{lut_lmhead}'
256
+ if not args.embed:
257
+ args.embed = f'{prefix}_embeddings{lut_embeddings}' # Changed from lm_head to embeddings
258
+ if not args.ffn:
259
+ args.ffn = f'{prefix}_FFN_PF{lut_ffn}_chunk_01of{num_chunks:02d}'
260
+ if not args.tokenizer:
261
+ args.tokenizer = args.d
262
+
263
+ # Set other parameters if not overridden by command line
264
+ if args.context_length is None:
265
+ args.context_length = int(params['context_length'])
266
+ if args.batch_size is None:
267
+ args.batch_size = int(params['batch_size'])
268
+ args.num_chunks = num_chunks
269
+
270
+ # Parse split_lm_head parameter from meta.yaml
271
+ if 'split_lm_head' in params:
272
+ args.split_lm_head = int(params['split_lm_head'])
273
+ else:
274
+ args.split_lm_head = 8 # Default value
275
+
276
+ print(f"\nLoaded parameters from {args.meta}:")
277
+ print(f" Context Length: {args.context_length}")
278
+ print(f" Batch Size: {args.batch_size}")
279
+ print(f" Num Chunks: {args.num_chunks}")
280
+ print(f" Split LM Head: {args.split_lm_head}")
281
+ print(f" Models Directory: {args.d}")
282
+ print(f" Embeddings: {args.embed}")
283
+ print(f" LM Head: {args.lmhead}")
284
+ print(f" FFN: {args.ffn}")
285
+
286
+ except Exception as e:
287
+ print(f"\nError loading meta.yaml: {str(e)}")
288
+ sys.exit(1)
289
+
290
+ return args
291
+
292
+ def load_metadata(model,args):
293
+ # Extract metadata and config parameters
294
+ metadata = {}
295
+ if hasattr(model, 'user_defined_metadata'):
296
+ meta = model.user_defined_metadata
297
+
298
+ # Extract key parameters with defaults
299
+ metadata['context_length'] = int(meta.get('com.anemll.context_length', 512))
300
+ metadata['state_length'] = int(meta.get('com.anemll.state_length', metadata['context_length'])) # Added state_length
301
+ metadata['batch_size'] = int(meta.get('com.anemll.batch_size', 64))
302
+ metadata['lut_bits'] = int(meta.get('com.anemll.lut_bits', 0))
303
+ metadata['num_chunks'] = int(meta.get('com.anemll.num_chunks', 1))
304
+
305
+ print("\nExtracted Parameters:")
306
+ print(f" Context Length: {metadata['context_length']}")
307
+ print(f" State Length: {metadata['state_length']}")
308
+ print(f" Prefill Batch Size: {metadata['batch_size']}")
309
+ print(f" LUT Bits: {metadata['lut_bits']}")
310
+ print(f" Number of Chunks: {metadata['num_chunks']}")
311
+
312
+ # Print model info
313
+ print("\nModel Info:")
314
+ if 'com.anemll.info' in meta:
315
+ print(f" {meta['com.anemll.info']}")
316
+ if 'com.github.apple.coremltools.version' in meta:
317
+ print(f" CoreML Tools: {meta['com.github.apple.coremltools.version']}")
318
+
319
+ # Print model input/output shapes
320
+ print("\nModel Shapes:")
321
+ if hasattr(model, 'input_description'):
322
+ print(" Inputs:")
323
+ for name, desc in model.input_description.items():
324
+ print(f" {name}: {desc}")
325
+ if hasattr(model, 'output_description'):
326
+ print(" Outputs:")
327
+ for name, desc in model.output_description.items():
328
+ print(f" {name}: {desc}")
329
+ else:
330
+ print("\nWarning: No metadata found in model")
331
+
332
+ # Check if model directory name contains context length pattern (ctxXXX)
333
+ ctx_len = 512
334
+ if args.context_length is None:
335
+ import re
336
+ ctx_match = re.search(r'ctx(\d+)', str(args.d))
337
+ if ctx_match:
338
+ ctx_len0 = int(ctx_match.group(1))
339
+ if 512 <= ctx_len0 <= 8096:
340
+ ctx_len = ctx_len0
341
+ print(f"\nDetected context length {ctx_len} from directory name")
342
+ else:
343
+                 print(f"\nWarning: could not detect context length from directory name {args.d}; using default {ctx_len}")
344
+ else:
345
+ ctx_len = args.context_length
346
+
347
+ # Use defaults or values from args
348
+ metadata['context_length'] = ctx_len
349
+ metadata['state_length'] = ctx_len
350
+ # Get batch size from args or use default
351
+         metadata['batch_size'] = getattr(args, 'batch_size', None) or 64  # --batch-size defaults to None, so fall back to 64
352
+ metadata['lut_bits'] = 4
353
+ metadata['num_chunks'] = getattr(args, 'num_chunks', 4)
354
+ print("\nUsing parameters:")
355
+ print(f" Context Length: {metadata['context_length']}")
356
+ print(f" State Length: {metadata['state_length']}")
357
+ print(f" Prefill Batch Size: {metadata['batch_size']}")
358
+ print(f" LUT Bits: {metadata['lut_bits']}")
359
+ print(f" Number of Chunks: {metadata['num_chunks']}")
360
+
361
+ # Override with values from args if they exist
362
+ if hasattr(args, 'batch_size') and args.batch_size is not None:
363
+ metadata['batch_size'] = args.batch_size
364
+ print(f"\nOverriding batch size from args: {args.batch_size}")
365
+ if hasattr(args, 'num_chunks') and args.num_chunks is not None:
366
+ metadata['num_chunks'] = args.num_chunks
367
+ print(f"\nOverriding num chunks from args: {args.num_chunks}")
368
+
369
+ return metadata
370
+
371
+ def load_models(args,metadata):
372
+ """Load all required models and extract metadata."""
373
+ print("\nLoading models...")
374
+
375
+ try:
376
+ # Load embeddings model
377
+ print("\nLoading embeddings model...")
378
+ embed_path = parse_model_path(args.embed)
379
+ print(f"Loading from: {embed_path}")
380
+ embed_model = load_model(embed_path)
381
+ print("Embeddings model loaded successfully")
382
+ metadata = load_metadata(embed_model,args)
383
+
384
+
385
+
386
+ # Load LM head model
387
+ print("\nLoading LM head model...")
388
+ lmhead_path = parse_model_path(args.lmhead)
389
+ print(f"Loading from: {lmhead_path}")
390
+ lmhead_model = load_model(lmhead_path)
391
+ print("LM head model loaded successfully")
392
+
393
+ # Parse FFN path and find chunks if needed
394
+ print("\nLoading FFN+PREFILL model(s)...")
395
+ ffn_path = parse_model_path(args.ffn)
396
+ chunk_no, total_chunks = parse_ffn_filename(ffn_path)
397
+
398
+ ffn_models = []
399
+ if chunk_no and total_chunks:
400
+ print(f"\nDetected chunked FFN+PREFILL model ({total_chunks} chunks)")
401
+ # Find and load all chunks
402
+ chunk_paths = find_all_chunks(ffn_path)
403
+ if len(chunk_paths) != total_chunks:
404
+ raise ValueError(f"Found {len(chunk_paths)} chunks but filename indicates {total_chunks} chunks")
405
+
406
+ for chunk_path in chunk_paths:
407
+ print(f"\nLoading FFN+PREFILL chunk: {Path(chunk_path).name}")
408
+ try:
409
+ # For chunked models, we need both infer and prefill functions
410
+ ffn_models.append({
411
+ 'infer': load_model(chunk_path, function_name='infer'),
412
+ 'prefill': load_model(chunk_path, function_name='prefill')
413
+ })
414
+ print("Chunk loaded successfully")
415
+ except Exception as e:
416
+ print(f"Error loading chunk {chunk_path}: {str(e)}")
417
+ raise
418
+             metadata = load_metadata(ffn_models[0]['infer'], args)  # pass the loaded CoreML model (not the {'infer','prefill'} dict) so its metadata is read
419
+
420
+ else:
421
+ print("\nLoading single FFN model...")
422
+ ffn_models.append(load_model(ffn_path))
423
+ print("FFN model loaded successfully")
424
+
425
+ return embed_model, ffn_models, lmhead_model, metadata
426
+
427
+ except Exception as e:
428
+ print(f"\nError loading models: {str(e)}")
429
+ print("\nPlease ensure all model files exist and are accessible.")
430
+ print("Expected files:")
431
+ print(f" Embeddings: {args.embed}")
432
+ print(f" LM Head: {args.lmhead}")
433
+ print(f" FFN: {args.ffn}")
434
+ raise
435
+
436
+ # At the top of the file, make this a default path
437
+
438
+ def initialize_tokenizer(model_path=None):
439
+ """Initialize and configure the tokenizer."""
440
+ try:
441
+
442
+
443
+ tokenizer = AutoTokenizer.from_pretrained(
444
+ str(model_path),
445
+ use_fast=False,
446
+ trust_remote_code=True
447
+ )
448
+
449
+ print("\nTokenizer Configuration:")
450
+ print(f"Tokenizer type: {type(tokenizer)}")
451
+ print(f"Tokenizer name: {tokenizer.__class__.__name__}")
452
+ print(f"Vocabulary size: {len(tokenizer)}")
453
+ print(f"Model max length: {tokenizer.model_max_length}")
454
+
455
+ if tokenizer.pad_token is None:
456
+ tokenizer.pad_token = tokenizer.eos_token
457
+ tokenizer.pad_token_id = tokenizer.eos_token_id
458
+ print("Set PAD token to EOS token")
459
+
460
+ tokenizer.padding_side = "left"
461
+
462
+ print(f"\nSpecial Tokens:")
463
+ print(f"PAD token: '{tokenizer.pad_token}' (ID: {tokenizer.pad_token_id})")
464
+ print(f"EOS token: '{tokenizer.eos_token}' (ID: {tokenizer.eos_token_id})")
465
+ print(f"BOS token: '{tokenizer.bos_token}' (ID: {tokenizer.bos_token_id})")
466
+ print(f"UNK token: '{tokenizer.unk_token}' (ID: {tokenizer.unk_token_id})")
467
+
468
+ return tokenizer
469
+
470
+ except Exception as e:
471
+ print(f"\nError: Failed to load tokenizer from {model_path}")
472
+ print(f"Error details: {str(e)}")
473
+ print(f"Error type: {type(e)}")
474
+         print("\nThis code requires a tokenizer that provides a chat template (e.g. LLaMA 3.2 or Qwen).")
475
+         print("Please provide the path to a compatible model directory.")
476
+ import traceback
477
+ traceback.print_exc()
478
+ raise
479
+
480
+
481
+
482
+ def make_causal_mask(length, start):
483
+ """Create causal attention mask."""
484
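+     # Additive attention mask in float16: 0 where the key position is visible (col <= row + start), -inf where it is masked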
+ mask = np.full((1, 1, length, length), -np.inf, dtype=np.float16)
485
+ row_indices = np.arange(length).reshape(length, 1)
486
+ col_indices = np.arange(length).reshape(1, length)
487
+ mask[:, :, col_indices <= (row_indices + start)] = 0
488
+ return mask
489
+
490
+ def run_prefill(embed_model, ffn_models, input_ids, current_pos, context_length, batch_size, state, causal_mask):
491
+ """Run prefill on the input sequence."""
492
+ #print(f"[DEBUG] Running prefill from 0 to {current_pos}")
493
+
494
+ # Process in batches
495
+ batch_pos = 0
496
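+     # Feed the prompt through the prefill function in fixed-size batches so the KV cache is populated before generation starts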
+ while batch_pos < current_pos:
497
+ batch_end = min(batch_pos + batch_size, current_pos)
498
+ current_batch_size = batch_end - batch_pos
499
+
500
+ #print(f"[DEBUG] Prefill batch {batch_pos}-{batch_end} (size={current_batch_size})")
501
+
502
+ # Get current batch
503
+ batch_input = input_ids[:, batch_pos:batch_end]
504
+
505
+ # Pad to full batch size
506
+ batch_input = F.pad(
507
+ batch_input,
508
+ (0, batch_size - current_batch_size),
509
+ value=0
510
+ )
511
+
512
+ # Generate position IDs for this batch
513
+ position_ids = torch.arange(batch_pos, batch_pos + batch_size, dtype=torch.int32)
514
+
515
+ # Use the pre-initialized causal mask and extract the batch portion
516
+ batch_causal_mask = causal_mask[:, :, batch_pos:batch_pos + batch_size, :]
517
+
518
+ # Run embeddings
519
+ hidden_states = torch.from_numpy(
520
+ embed_model.predict({'input_ids': batch_input.numpy()})['hidden_states']
521
+ )
522
+
523
+ # Run through FFN chunks
524
+ for ffn_model in ffn_models:
525
+ if isinstance(ffn_model, dict):
526
+ inputs = {
527
+ 'hidden_states': hidden_states.numpy(),
528
+ 'position_ids': position_ids.numpy(),
529
+ 'causal_mask': batch_causal_mask.numpy(),
530
+ 'current_pos': np.array([batch_pos], dtype=np.int32)
531
+ }
532
+ output = ffn_model['prefill'].predict(inputs, state)
533
+ hidden_states = torch.from_numpy(output['output_hidden_states'])
534
+
535
+ batch_pos = batch_end
536
+
537
+ return torch.tensor([current_pos], dtype=torch.int32)
538
+
539
+ def generate_next_token(embed_model, ffn_models, lmhead_model, input_ids, pos, context_length, state, causal_mask, metadata=None, temperature=0.0):
540
+ """Generate the next token."""
541
+ # Get current token
542
+ current_token = input_ids[:, pos-1:pos]
543
+
544
+ # Run embeddings
545
+ hidden_states = torch.from_numpy(
546
+ embed_model.predict({'input_ids': current_token.numpy()})['hidden_states']
547
+ )
548
+
549
+ # Create masks
550
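+     # update_mask flags the single KV-cache slot (pos-1) that this decode step writes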
+ update_mask = torch.zeros((1, 1, context_length, 1), dtype=torch.float16)
551
+ update_mask[0, 0, pos-1, 0] = 1.0
552
+ position_ids = torch.tensor([pos-1], dtype=torch.int32)
553
+
554
+ # Use the pre-initialized causal mask and extract the single position portion
555
+ single_causal_mask = causal_mask[:, :, pos-1:pos, :]
556
+
557
+ # Run through FFN chunks
558
+ for ffn_model in ffn_models:
559
+ if isinstance(ffn_model, dict):
560
+ inputs = {
561
+ 'hidden_states': hidden_states.numpy(),
562
+ 'update_mask': update_mask.numpy(),
563
+ 'position_ids': position_ids.numpy(),
564
+ 'causal_mask': single_causal_mask.numpy(),
565
+ 'current_pos': position_ids.numpy()
566
+ }
567
+ output = ffn_model['infer'].predict(inputs, state)
568
+ hidden_states = torch.from_numpy(output['output_hidden_states'])
569
+
570
+ # Run LM head and get next token
571
+ lm_output = lmhead_model.predict({'hidden_states': hidden_states.numpy()})
572
+
573
+ if 'logits1' in lm_output:
574
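+         # The LM head output is split into several logits tensors (split_lm_head); concatenate them back into full-vocabulary logits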
+ logits_parts = []
575
+ for i in range(1, metadata.get('split_lm_head', 8) + 1):
576
+ key = f'logits{i}'
577
+ if key in lm_output:
578
+ logits_parts.append(torch.from_numpy(lm_output[key]))
579
+ logits = torch.cat(logits_parts, dim=-1)
580
+ else:
581
+ logits = torch.from_numpy(lm_output['output_logits'])
582
+
583
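+     # temperature == 0 selects greedy argmax decoding; otherwise sample from the temperature-scaled softmax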
+ if temperature > 0:
584
+ logits = logits / temperature
585
+ probs = F.softmax(logits[0, -1, :], dim=-1)
586
+ next_token = torch.multinomial(probs, num_samples=1).item()
587
+ else:
588
+ next_token = torch.argmax(logits[0, -1, :]).item()
589
+
590
+ return next_token
591
+
592
+ def create_unified_state(ffn_models, context_length):
593
+ """Create unified KV cache state for transformer."""
594
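+     # One CoreML state object holds the KV cache and is shared by every chunk's predict() call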
+ if isinstance(ffn_models[0], dict):
595
+ # Use first FFN model's prefill function to create state
596
+ state = ffn_models[0]['prefill'].make_state()
597
+ print(f"\nCreated unified transformer state for {len(ffn_models)} chunks")
598
+ return state
599
+ else:
600
+ state = ffn_models[0].make_state()
601
+ print("\nCreated unified transformer state")
602
+ return state
603
+
604
+ def initialize_causal_mask(context_length):
605
+ """Initialize causal mask for transformer attention."""
606
+ causal_mask = make_causal_mask(context_length, 0)
607
+ causal_mask = torch.tensor(causal_mask, dtype=torch.float16)
608
+ print(f"\nInitialized causal mask for context length {context_length}")
609
+ return causal_mask
610
+
611
+ def get_user_input():
612
+ """Get input from user, handling special key combinations."""
613
+ global THINKING_MODE
614
+ try:
615
+ import termios
616
+ import tty
617
+ import sys
618
+
619
+ def _getch():
620
+ fd = sys.stdin.fileno()
621
+ old_settings = termios.tcgetattr(fd)
622
+ try:
623
+ tty.setraw(sys.stdin.fileno())
624
+ ch = sys.stdin.read(1)
625
+ finally:
626
+ termios.tcsetattr(fd, termios.TCSADRAIN, old_settings)
627
+ return ch
628
+
629
+ buffer = []
630
+ while True:
631
+ char = _getch()
632
+
633
+ # Debug: print the character code
634
+             #print(f"\nKey pressed: {repr(char)} (hex: {hex(ord(char))})")  # debug only; left disabled like the other debug prints
635
+
636
+ # Check for Enter key
637
+ if char == '\r' or char == '\n':
638
+ print() # Move to next line
639
+ input_text = ''.join(buffer)
640
+ # Check if the command is /t
641
+ if input_text == '/t':
642
+ THINKING_MODE = not THINKING_MODE
643
+ print(f"Thinking mode {'ON' if THINKING_MODE else 'OFF'}")
644
+ buffer = [] # Clear buffer
645
+ print(f"\n{LIGHT_GREEN}You{' (thinking)' if THINKING_MODE else ''}:{RESET_COLOR}", end=' ', flush=True)
646
+ continue
647
+ return input_text
648
+
649
+ # Handle backspace
650
+ if char == '\x7f': # backspace
651
+ if buffer:
652
+ buffer.pop()
653
+ sys.stdout.write('\b \b') # Erase character
654
+ sys.stdout.flush()
655
+ continue
656
+
657
+ # Handle Ctrl-C
658
+ if char == '\x03': # Ctrl-C
659
+ print("^C")
660
+ raise KeyboardInterrupt
661
+
662
+ # Print character and add to buffer
663
+ sys.stdout.write(char)
664
+ sys.stdout.flush()
665
+ buffer.append(char)
666
+
667
+ except ImportError:
668
+ # Fallback for systems without termios
669
+ return input("> ")
670
+
671
+ def chat_loop(embed_model, ffn_models, lmhead_model, tokenizer, metadata, state, causal_mask, auto_prompt=None, warmup=False):
672
+ """Interactive chat loop."""
673
+ global THINKING_MODE
674
+ global DEBUG_LEVEL
675
+ context_length = metadata.get('context_length')
676
+ batch_size = metadata.get('batch_size', 64)
677
+
678
+ if not warmup:
679
+ print(f"\nUsing context length: {context_length}")
680
+ print("\nStarting chat session. Press Ctrl+D to exit.")
681
+ print("Type your message and press Enter to chat. Use /t to toggle thinking mode.")
682
+ print(f"Thinking mode is {'ON' if THINKING_MODE else 'OFF'}")
683
+
684
+ # Keep track of conversation history
685
+ conversation = []
686
+
687
+ try:
688
+ while True:
689
+ try:
690
+ if not warmup:
691
+ print(f"\n{LIGHT_GREEN}You{' (thinking)' if THINKING_MODE else ''}:{RESET_COLOR}", end=' ', flush=True)
692
+ if auto_prompt is not None:
693
+ user_input = auto_prompt
694
+ if not warmup:
695
+ print(user_input)
696
+ else:
697
+ user_input = input().strip()
698
+ except EOFError:
699
+ if not warmup:
700
+ print("\nExiting chat...")
701
+ break
702
+
703
+ if not user_input:
704
+ continue
705
+
706
+ # Handle /t command
707
+ if user_input == "/t":
708
+ THINKING_MODE = not THINKING_MODE
709
+ print(f"Thinking mode {'ON' if THINKING_MODE else 'OFF'}")
710
+ continue
711
+
712
+ # Add user message to conversation
713
+ conversation.append({"role": "user", "content": user_input})
714
+
715
+ # Format using chat template with full history
716
+ if THINKING_MODE:
717
+ # Add thinking prompt to system message
718
+ conversation_with_thinking = [{"role": "system", "content": THINKING_PROMPT}] + conversation
719
+ base_input_ids = tokenizer.apply_chat_template(
720
+ conversation_with_thinking,
721
+ return_tensors="pt",
722
+ add_generation_prompt=True
723
+ ).to(torch.int32)
724
+
725
+ # Print full prompt if debug level >= 1
726
+ if DEBUG_LEVEL >= 1 and not warmup:
727
+ print(f"\n{DARK_BLUE}Debug: Full prompt with thinking:{RESET_COLOR}")
728
+ print(tokenizer.decode(base_input_ids[0]))
729
+ else:
730
+ base_input_ids = tokenizer.apply_chat_template(
731
+ conversation,
732
+ return_tensors="pt",
733
+ add_generation_prompt=True
734
+ ).to(torch.int32)
735
+
736
+ # Print full prompt if debug level >= 1
737
+ if DEBUG_LEVEL >= 1 and not warmup:
738
+ print(f"\n{DARK_BLUE}Debug: Full prompt:{RESET_COLOR}")
739
+ print(tokenizer.decode(base_input_ids[0]))
740
+
741
+ # Check if we need to trim history
742
+ while base_input_ids.size(1) > context_length - 100: # Leave room for response
743
+ # Remove oldest message pair (user + assistant)
744
+ if len(conversation) > 2:
745
+ conversation = conversation[2:] # Remove oldest pair
746
+ base_input_ids = tokenizer.apply_chat_template(
747
+ conversation,
748
+ return_tensors="pt",
749
+ add_generation_prompt=True
750
+ ).to(torch.int32)
751
+ else:
752
+ # If only current message remains and still too long, truncate
753
+ base_input_ids = base_input_ids[:, -context_length//2:]
754
+ break
755
+
756
+ context_pos = base_input_ids.size(1)
757
+
758
+ # Pad sequence to context_size
759
+ input_ids = F.pad(
760
+ base_input_ids,
761
+ (0, context_length - context_pos),
762
+ value=0
763
+ )
764
+
765
+ if not warmup:
766
+ print(f"\n{LIGHT_BLUE}Assistant:{RESET_COLOR}", end=' ', flush=True)
767
+
768
+ # split_lm_head should already be in metadata from caller
769
+
770
+ # Initialize token printer and collect response
771
+ token_printer = TokenPrinter(tokenizer)
772
+ response_tokens = []
773
+ generation_start_time = time.time()
774
+
775
+ try:
776
+ # Run prefill on entire context
777
+ current_pos = run_prefill(
778
+ embed_model,
779
+ ffn_models,
780
+ input_ids,
781
+ context_pos,
782
+ context_length,
783
+ batch_size,
784
+ state,
785
+ causal_mask
786
+ )
787
+ #print(f"\n[DEBUG] After initial prefill - current_pos: {current_pos}")
788
+
789
+ # Generation loop
790
+ pos = context_pos
791
+ tokens_generated = 0
792
+ inference_start = time.time() # Start inference timing
793
+
794
+ while True:
795
+ # Check if we need to shift window
796
+ if pos >= context_length - 2:
797
+ # Calculate shift to maintain full batches
798
+ batch_size = metadata.get('batch_size', 64)
799
+ # Calculate max batches that fit in context
800
+ max_batches = context_length // batch_size
801
+ desired_batches = max(1, max_batches - 2) # Leave room for new tokens
802
+ new_size = min(desired_batches * batch_size, context_length - batch_size)
803
+
804
+ # Create shifted input_ids
805
+ tmp = torch.zeros((1, context_length), dtype=torch.int32)
806
+ tmp[:,0:new_size] = input_ids[:,pos-new_size:pos]
807
+ input_ids = tmp
808
+
809
+ # Reset state and run prefill
810
+ # keep the same state
811
+ #state = create_unified_state(ffn_models, context_length)
812
+ current_pos = run_prefill(
813
+ embed_model,
814
+ ffn_models,
815
+ input_ids,
816
+ new_size, # Prefill the entire shifted content
817
+ context_length,
818
+ batch_size,
819
+ state,
820
+ causal_mask
821
+ )
822
+
823
+ # Start generating from the next position
824
+ pos = new_size # Don't back up, continue from where we left off
825
+
826
+ #print(f"\n[DEBUG] After shift - next token will be at pos {pos}")
827
+ #print(f"[DEBUG] Context before next token: {tokenizer.decode(input_ids[0, pos-40:pos])}")
828
+
829
+ window_shifted = True
830
+
831
+ # Generate next token
832
+ next_token = generate_next_token(
833
+ embed_model,
834
+ ffn_models,
835
+ lmhead_model,
836
+ input_ids,
837
+ pos,
838
+ context_length,
839
+ state,
840
+ causal_mask,
841
+ metadata
842
+ )
843
+
844
+ # Add token
845
+ input_ids[0, pos] = next_token
846
+ if not warmup:
847
+ token_printer.add_token(next_token)
848
+ token_printer.drain_buffer()
849
+ response_tokens.append(next_token)
850
+
851
+ pos += 1
852
+ tokens_generated += 1
853
+
854
+ # In warmup mode, limit tokens
855
+ if warmup and tokens_generated >= WARMUP_TOKEN_LIMIT:
856
+ break
857
+
858
+ if next_token == tokenizer.eos_token_id:
859
+ break
860
+
861
+ inference_time = time.time() - inference_start # Calculate inference time
862
+
863
+ # Add assistant response to conversation
864
+ response_text = token_printer.stop()
865
+ conversation.append({"role": "assistant", "content": response_text})
866
+
867
+ # Print stats only if not in warmup
868
+ if not warmup:
869
+ total_time = time.time() - generation_start_time
870
+ prefill_time = total_time - inference_time
871
+ inference_tokens_per_sec = len(response_tokens) / inference_time if inference_time > 0 else 0
872
+ prefill_ms = prefill_time * 1000
873
+ prefill_tokens_per_sec = context_pos / prefill_time if prefill_time > 0 else 0
874
+ print(f"{DARK_BLUE}{inference_tokens_per_sec:.1f} t/s, "
875
+ f"TTFT: {prefill_ms:.1f}ms ({prefill_tokens_per_sec:.1f} t/s), "
876
+ f"{len(response_tokens)} tokens{RESET_COLOR}")
877
+
878
+ if auto_prompt is not None:
879
+ break
880
+
881
+ except KeyboardInterrupt:
882
+ if not warmup:
883
+ print("\nGeneration interrupted")
884
+ token_printer.stop()
885
+ continue
886
+
887
+ except Exception as e:
888
+ if not warmup:
889
+ print(f"\nError in chat loop: {str(e)}")
890
+ import traceback
891
+ traceback.print_exc()
892
+
893
+ def main():
894
+ args = parse_args()
895
+ global DEBUG_LEVEL
896
+ DEBUG_LEVEL = args.debug_level
897
+
898
+ # Convert directory to absolute path
899
+ model_dir = Path(args.d).resolve()
900
+ if not model_dir.exists():
901
+ print(f"\nError: Model directory not found: {model_dir}")
902
+ return 1
903
+
904
+ print(f"\nUsing model directory: {model_dir}")
905
+ print(f"Context length: {args.context_length}")
906
+
907
+ try:
908
+ # Update paths to be relative to model directory
909
+ args.embed = str(model_dir / args.embed)
910
+ args.ffn = str(model_dir / args.ffn)
911
+ args.lmhead = str(model_dir / args.lmhead)
912
+
913
+ # Handle tokenizer path separately since it's not relative to model_dir
914
+ if args.tokenizer is None:
915
+ args.tokenizer = str(model_dir)
916
+
917
+ if not Path(args.tokenizer).exists():
918
+ print(f"\nError: Tokenizer directory not found: {args.tokenizer}")
919
+ return 1
920
+
921
+ args.tokenizer = str(Path(args.tokenizer).resolve()) # Convert to absolute path
922
+ print(f"Using tokenizer path: {args.tokenizer}")
923
+
924
+ metadata = {}
925
+ # Load models and extract metadata
926
+ embed_model, ffn_models, lmhead_model, metadata = load_models(args,metadata)
927
+
928
+         print(f"\nMetadata before applying args.context_length override: {metadata}")
929
+
930
+ # Override context length from command line if provided
931
+ if args.context_length is not None:
932
+ metadata['context_length'] = args.context_length
933
+ metadata['state_length'] = args.context_length # Also update state_length
934
+ print(f"\nOverriding context length from command line: {args.context_length}")
935
+
936
+ print(f"\nMetadata after load_models: {metadata}")
937
+
938
+ # Load tokenizer with resolved path
939
+ tokenizer = initialize_tokenizer(args.tokenizer)
940
+ if tokenizer is None:
941
+ raise RuntimeError("Failed to initialize tokenizer")
942
+
943
+ # Create unified state once
944
+ state = create_unified_state(ffn_models, metadata['context_length'])
945
+
946
+ # Initialize causal mask once
947
+ causal_mask = initialize_causal_mask(metadata['context_length'])
948
+
949
+ # Add split_lm_head to metadata for generate_next_token
950
+ metadata['split_lm_head'] = getattr(args, 'split_lm_head', 8)
951
+
952
+ # Warmup runs to prevent Python GIL issues with CoreML !
953
+ if not args.nw:
954
+ for i in range(2):
955
+ chat_loop(
956
+ embed_model=embed_model,
957
+ ffn_models=ffn_models,
958
+ lmhead_model=lmhead_model,
959
+ tokenizer=tokenizer,
960
+ metadata=metadata,
961
+ state=state, # Pass the state
962
+ causal_mask=causal_mask, # Pass the causal mask
963
+ warmup=True,
964
+ auto_prompt="who are you?"
965
+ )
966
+
967
+ # Main run
968
+ chat_loop(
969
+ embed_model=embed_model,
970
+ ffn_models=ffn_models,
971
+ lmhead_model=lmhead_model,
972
+ tokenizer=tokenizer,
973
+ metadata=metadata,
974
+ state=state, # Pass the state
975
+ causal_mask=causal_mask, # Pass the causal mask
976
+ warmup=False,
977
+ auto_prompt=args.prompt
978
+ )
979
+
980
+ except Exception as e:
981
+ print(f"\nError: {str(e)}")
982
+ import traceback
983
+ traceback.print_exc()
984
+ return 1
985
+
986
+ return 0
987
+
988
+ if __name__ == "__main__":
989
+ exit(main())
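For reference, the script above can be driven entirely by this repository's meta.yaml, which supplies the model prefix, LUT settings, context length and batch size listed later in this commit. The invocation below is a minimal sketch, assuming the script is saved as chat.py next to the .mlmodelc folders and that the --meta, --prompt and --nw flags match the argument parser shown above:

```bash
# Sketch only: resolve model paths and parameters from meta.yaml, then answer a single prompt
python chat.py --meta ./meta.yaml --prompt "who are you?"

# Interactive chat; --nw skips the two warmup passes (flag assumed from main() above)
python chat.py --meta ./meta.yaml --nw
```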
config.json ADDED
@@ -0,0 +1,4 @@
 
 
 
 
 
1
+ {
2
+ "tokenizer_class": "LlamaTokenizer",
3
+ "model_type": "llama"
4
+ }
merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
meta.yaml ADDED
@@ -0,0 +1,24 @@
1
+ model_info:
2
+ name: anemll-Qwen3-4B-MLX-dequantized-ctx1024
3
+ version: 0.3.0
4
+ description: |
5
+     Demonstrates running Qwen3-4B-MLX-dequantized on Apple Neural Engine
6
+ Context length: 1024
7
+ Batch size: 64
8
+ Chunks: 2
9
+ license: MIT
10
+ author: Anemll
11
+ framework: Core ML
12
+ language: Python
13
+ parameters:
14
+ context_length: 1024
15
+ batch_size: 64
16
+ lut_embeddings: none
17
+ lut_ffn: 4
18
+ lut_lmhead: 8
19
+ num_chunks: 2
20
+ model_prefix: qwen
21
+ embeddings: qwen_embeddings.mlmodelc
22
+ lm_head: qwen_lm_head_lut8.mlmodelc
23
+ ffn: qwen_FFN_PF_lut4.mlmodelc
24
+ split_lm_head: 16
qwen_FFN_PF_lut4_chunk_01of02.mlmodelc/analytics/coremldata.bin ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f940f9661b267bb9282e47cd2ba5c999e4a76de57404fcc5a9f4a1b964db4863
3
+ size 243
qwen_FFN_PF_lut4_chunk_01of02.mlmodelc/coremldata.bin ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:a7bea695683f198193ea833dd4dc7559ac47137a672afd457a5234edac4545da
3
+ size 983
qwen_FFN_PF_lut4_chunk_01of02.mlmodelc/metadata.json ADDED
@@ -0,0 +1,324 @@
1
+ [
2
+ {
3
+ "metadataOutputVersion" : "3.0",
4
+ "userDefinedMetadata" : {
5
+ "com.github.apple.coremltools.source" : "torch==2.5.0",
6
+ "com.github.apple.coremltools.version" : "8.3.0",
7
+ "com.anemll.context_length" : "1024",
8
+ "com.anemll.chunk_no" : "1",
9
+ "com.github.apple.coremltools.source_dialect" : "TorchScript",
10
+ "com.anemll.num_chunks" : "2",
11
+ "com.anemll.info" : "Converted with Anemll v0.3.0",
12
+ "com.anemll.batch_size" : "64",
13
+ "com.anemll.lut_bits" : "4"
14
+ },
15
+ "availability" : {
16
+ "macOS" : "15.0",
17
+ "tvOS" : "18.0",
18
+ "visionOS" : "2.0",
19
+ "watchOS" : "11.0",
20
+ "iOS" : "18.0",
21
+ "macCatalyst" : "18.0"
22
+ },
23
+ "inputSchema" : [
24
+ {
25
+ "hasShapeFlexibility" : "0",
26
+ "isOptional" : "0",
27
+ "dataType" : "Float16",
28
+ "formattedType" : "MultiArray (Float16 1 × 1 × 2560)",
29
+ "shortDescription" : "",
30
+ "shape" : "[1, 1, 2560]",
31
+ "name" : "hidden_states",
32
+ "type" : "MultiArray"
33
+ },
34
+ {
35
+ "hasShapeFlexibility" : "0",
36
+ "isOptional" : "0",
37
+ "dataType" : "Int32",
38
+ "formattedType" : "MultiArray (Int32 1)",
39
+ "shortDescription" : "",
40
+ "shape" : "[1]",
41
+ "name" : "position_ids",
42
+ "type" : "MultiArray"
43
+ },
44
+ {
45
+ "hasShapeFlexibility" : "0",
46
+ "isOptional" : "0",
47
+ "dataType" : "Float16",
48
+ "formattedType" : "MultiArray (Float16 1 × 1 × 1 × 1024)",
49
+ "shortDescription" : "",
50
+ "shape" : "[1, 1, 1, 1024]",
51
+ "name" : "causal_mask",
52
+ "type" : "MultiArray"
53
+ },
54
+ {
55
+ "hasShapeFlexibility" : "0",
56
+ "isOptional" : "0",
57
+ "dataType" : "Int32",
58
+ "formattedType" : "MultiArray (Int32 1)",
59
+ "shortDescription" : "",
60
+ "shape" : "[1]",
61
+ "name" : "current_pos",
62
+ "type" : "MultiArray"
63
+ }
64
+ ],
65
+ "outputSchema" : [
66
+ {
67
+ "hasShapeFlexibility" : "0",
68
+ "isOptional" : "0",
69
+ "dataType" : "Float16",
70
+ "formattedType" : "MultiArray (Float16 1 × 1 × 2560)",
71
+ "shortDescription" : "",
72
+ "shape" : "[1, 1, 2560]",
73
+ "name" : "output_hidden_states",
74
+ "type" : "MultiArray"
75
+ }
76
+ ],
77
+ "modelParameters" : [
78
+
79
+ ],
80
+ "storagePrecision" : "Mixed (Float16, Palettized (11 bits), Palettized (13 bits), Palettized (15 bits), UInt4)",
81
+ "method" : "predict",
82
+ "functions" : [
83
+ {
84
+ "inputSchema" : [
85
+ {
86
+ "hasShapeFlexibility" : "0",
87
+ "isOptional" : "0",
88
+ "dataType" : "Float16",
89
+ "formattedType" : "MultiArray (Float16 1 × 1 × 2560)",
90
+ "shortDescription" : "",
91
+ "shape" : "[1, 1, 2560]",
92
+ "name" : "hidden_states",
93
+ "type" : "MultiArray"
94
+ },
95
+ {
96
+ "hasShapeFlexibility" : "0",
97
+ "isOptional" : "0",
98
+ "dataType" : "Int32",
99
+ "formattedType" : "MultiArray (Int32 1)",
100
+ "shortDescription" : "",
101
+ "shape" : "[1]",
102
+ "name" : "position_ids",
103
+ "type" : "MultiArray"
104
+ },
105
+ {
106
+ "hasShapeFlexibility" : "0",
107
+ "isOptional" : "0",
108
+ "dataType" : "Float16",
109
+ "formattedType" : "MultiArray (Float16 1 × 1 × 1 × 1024)",
110
+ "shortDescription" : "",
111
+ "shape" : "[1, 1, 1, 1024]",
112
+ "name" : "causal_mask",
113
+ "type" : "MultiArray"
114
+ },
115
+ {
116
+ "hasShapeFlexibility" : "0",
117
+ "isOptional" : "0",
118
+ "dataType" : "Int32",
119
+ "formattedType" : "MultiArray (Int32 1)",
120
+ "shortDescription" : "",
121
+ "shape" : "[1]",
122
+ "name" : "current_pos",
123
+ "type" : "MultiArray"
124
+ }
125
+ ],
126
+ "computePrecision" : "Mixed (Float16, Int32)",
127
+ "storagePrecision" : "Mixed (Float16, Palettized (11 bits), Palettized (13 bits), Palettized (15 bits), UInt4)",
128
+ "stateSchema" : [
129
+ {
130
+ "dataType" : "Float16",
131
+ "isOptional" : "0",
132
+ "formattedType" : "State (Float16 72 × 8 × 1024 × 128)",
133
+ "shortDescription" : "",
134
+ "shape" : "[72, 8, 1024, 128]",
135
+ "name" : "model_model_kv_cache_0",
136
+ "type" : "State"
137
+ }
138
+ ],
139
+ "outputSchema" : [
140
+ {
141
+ "hasShapeFlexibility" : "0",
142
+ "isOptional" : "0",
143
+ "dataType" : "Float16",
144
+ "formattedType" : "MultiArray (Float16 1 × 1 × 2560)",
145
+ "shortDescription" : "",
146
+ "shape" : "[1, 1, 2560]",
147
+ "name" : "output_hidden_states",
148
+ "type" : "MultiArray"
149
+ }
150
+ ],
151
+ "name" : "infer",
152
+ "mlProgramOperationTypeHistogram" : {
153
+ "Ios18.expandDims" : 72,
154
+ "Ios18.mul" : 144,
155
+ "Ios18.softmax" : 18,
156
+ "Ios18.matmul" : 36,
157
+ "Identity" : 1,
158
+ "Ios16.reduceMean" : 73,
159
+ "Ios18.greaterEqual" : 1,
160
+ "Select" : 1,
161
+ "Ios18.readState" : 37,
162
+ "Tile" : 36,
163
+ "Ios18.gather" : 2,
164
+ "Ios18.add" : 92,
165
+ "Ios18.layerNorm" : 73,
166
+ "Ios18.sliceUpdate" : 36,
167
+ "Ios18.writeState" : 36,
168
+ "Ios18.reshape" : 110,
169
+ "Ios18.constexprLutToDense" : 126,
170
+ "Ios18.conv" : 126,
171
+ "Ios18.concat" : 108,
172
+ "Ios18.transpose" : 108,
173
+ "Ios18.sub" : 73,
174
+ "Ios18.silu" : 18,
175
+ "Ios18.sliceByIndex" : 108,
176
+ "Ios18.squeeze" : 54
177
+ }
178
+ },
179
+ {
180
+ "inputSchema" : [
181
+ {
182
+ "hasShapeFlexibility" : "0",
183
+ "isOptional" : "0",
184
+ "dataType" : "Float16",
185
+ "formattedType" : "MultiArray (Float16 1 × 64 × 2560)",
186
+ "shortDescription" : "",
187
+ "shape" : "[1, 64, 2560]",
188
+ "name" : "hidden_states",
189
+ "type" : "MultiArray"
190
+ },
191
+ {
192
+ "hasShapeFlexibility" : "0",
193
+ "isOptional" : "0",
194
+ "dataType" : "Int32",
195
+ "formattedType" : "MultiArray (Int32 64)",
196
+ "shortDescription" : "",
197
+ "shape" : "[64]",
198
+ "name" : "position_ids",
199
+ "type" : "MultiArray"
200
+ },
201
+ {
202
+ "hasShapeFlexibility" : "0",
203
+ "isOptional" : "0",
204
+ "dataType" : "Float16",
205
+ "formattedType" : "MultiArray (Float16 1 × 1 × 64 × 1024)",
206
+ "shortDescription" : "",
207
+ "shape" : "[1, 1, 64, 1024]",
208
+ "name" : "causal_mask",
209
+ "type" : "MultiArray"
210
+ },
211
+ {
212
+ "hasShapeFlexibility" : "0",
213
+ "isOptional" : "0",
214
+ "dataType" : "Int32",
215
+ "formattedType" : "MultiArray (Int32 1)",
216
+ "shortDescription" : "",
217
+ "shape" : "[1]",
218
+ "name" : "current_pos",
219
+ "type" : "MultiArray"
220
+ }
221
+ ],
222
+ "computePrecision" : "Mixed (Float16, Int32)",
223
+ "storagePrecision" : "Mixed (Float16, Palettized (11 bits), Palettized (13 bits), Palettized (15 bits), UInt4)",
224
+ "stateSchema" : [
225
+ {
226
+ "dataType" : "Float16",
227
+ "isOptional" : "0",
228
+ "formattedType" : "State (Float16 72 × 8 × 1024 × 128)",
229
+ "shortDescription" : "",
230
+ "shape" : "[72, 8, 1024, 128]",
231
+ "name" : "model_model_kv_cache_0",
232
+ "type" : "State"
233
+ }
234
+ ],
235
+ "outputSchema" : [
236
+ {
237
+ "hasShapeFlexibility" : "0",
238
+ "isOptional" : "0",
239
+ "dataType" : "Float16",
240
+ "formattedType" : "MultiArray (Float16 1 × 64 × 2560)",
241
+ "shortDescription" : "",
242
+ "shape" : "[1, 64, 2560]",
243
+ "name" : "output_hidden_states",
244
+ "type" : "MultiArray"
245
+ }
246
+ ],
247
+ "name" : "prefill",
248
+ "mlProgramOperationTypeHistogram" : {
249
+ "Ios18.expandDims" : 72,
250
+ "Ios18.mul" : 144,
251
+ "Ios18.softmax" : 18,
252
+ "Ios18.matmul" : 36,
253
+ "Ios16.reduceMean" : 72,
254
+ "Ios18.greaterEqual" : 1,
255
+ "Select" : 1,
256
+ "Ios18.readState" : 37,
257
+ "Tile" : 36,
258
+ "Ios18.gather" : 2,
259
+ "Ios18.add" : 92,
260
+ "Ios18.layerNorm" : 72,
261
+ "Ios18.sliceUpdate" : 36,
262
+ "Ios18.writeState" : 36,
263
+ "Ios18.reshape" : 146,
264
+ "Ios18.constexprLutToDense" : 126,
265
+ "Ios18.conv" : 126,
266
+ "Ios18.concat" : 108,
267
+ "Ios18.transpose" : 164,
268
+ "Ios18.sub" : 72,
269
+ "Ios18.silu" : 18,
270
+ "Ios18.sliceByIndex" : 108,
271
+ "Ios18.squeeze" : 54
272
+ }
273
+ }
274
+ ],
275
+ "version" : "0.3.0",
276
+ "isUpdatable" : "0",
277
+ "defaultFunctionName" : "infer",
278
+ "specificationVersion" : 9,
279
+ "stateSchema" : [
280
+ {
281
+ "dataType" : "Float16",
282
+ "isOptional" : "0",
283
+ "formattedType" : "State (Float16 72 × 8 × 1024 × 128)",
284
+ "shortDescription" : "",
285
+ "shape" : "[72, 8, 1024, 128]",
286
+ "name" : "model_model_kv_cache_0",
287
+ "type" : "State"
288
+ }
289
+ ],
290
+ "computePrecision" : "Mixed (Float16, Int32)",
291
+ "mlProgramOperationTypeHistogram" : {
292
+ "Ios18.expandDims" : 72,
293
+ "Ios18.mul" : 144,
294
+ "Ios18.softmax" : 18,
295
+ "Ios18.matmul" : 36,
296
+ "Identity" : 1,
297
+ "Ios16.reduceMean" : 73,
298
+ "Ios18.greaterEqual" : 1,
299
+ "Select" : 1,
300
+ "Ios18.readState" : 37,
301
+ "Tile" : 36,
302
+ "Ios18.gather" : 2,
303
+ "Ios18.add" : 92,
304
+ "Ios18.layerNorm" : 73,
305
+ "Ios18.sliceUpdate" : 36,
306
+ "Ios18.writeState" : 36,
307
+ "Ios18.reshape" : 110,
308
+ "Ios18.constexprLutToDense" : 126,
309
+ "Ios18.conv" : 126,
310
+ "Ios18.concat" : 108,
311
+ "Ios18.transpose" : 108,
312
+ "Ios18.sub" : 73,
313
+ "Ios18.silu" : 18,
314
+ "Ios18.sliceByIndex" : 108,
315
+ "Ios18.squeeze" : 54
316
+ },
317
+ "shortDescription" : "Anemll Model: Multifunction FFN+Prefill",
318
+ "generatedClassName" : "qwen_FFN_PF_lut4_chunk_01of02",
319
+ "author" : "Converted with Anemll v0.3.0",
320
+ "modelType" : {
321
+ "name" : "MLModelType_mlProgram"
322
+ }
323
+ }
324
+ ]
qwen_FFN_PF_lut4_chunk_01of02.mlmodelc/model.mil ADDED
The diff for this file is too large to render. See raw diff
 
qwen_FFN_PF_lut4_chunk_01of02.mlmodelc/weights/weight.bin ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:5f65eef202f5908da7c989174df68f29344a91520d01df382e89930fc8ac8140
3
+ size 944314880
qwen_FFN_PF_lut4_chunk_02of02.mlmodelc/analytics/coremldata.bin ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:9b3e5221beac8057139104891eb8f4b3a2b53ad0a702aff31dc9b60fc0bd66c3
3
+ size 243
qwen_FFN_PF_lut4_chunk_02of02.mlmodelc/coremldata.bin ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:47a7d13c9ac8bd8b36abf6202b6b0f879030938a7af0b7022eb13f303f3ec5e6
3
+ size 983
qwen_FFN_PF_lut4_chunk_02of02.mlmodelc/metadata.json ADDED
@@ -0,0 +1,324 @@
1
+ [
2
+ {
3
+ "metadataOutputVersion" : "3.0",
4
+ "userDefinedMetadata" : {
5
+ "com.github.apple.coremltools.source" : "torch==2.5.0",
6
+ "com.github.apple.coremltools.source_dialect" : "TorchScript",
7
+ "com.github.apple.coremltools.version" : "8.3.0",
8
+ "com.anemll.chunk_no" : "2",
9
+ "com.anemll.context_length" : "1024",
10
+ "com.anemll.num_chunks" : "2",
11
+ "com.anemll.batch_size" : "64",
12
+ "com.anemll.info" : "Converted with Anemll v0.3.0",
13
+ "com.anemll.lut_bits" : "4"
14
+ },
15
+ "availability" : {
16
+ "macOS" : "15.0",
17
+ "tvOS" : "18.0",
18
+ "visionOS" : "2.0",
19
+ "watchOS" : "11.0",
20
+ "iOS" : "18.0",
21
+ "macCatalyst" : "18.0"
22
+ },
23
+ "inputSchema" : [
24
+ {
25
+ "hasShapeFlexibility" : "0",
26
+ "isOptional" : "0",
27
+ "dataType" : "Float16",
28
+ "formattedType" : "MultiArray (Float16 1 × 1 × 2560)",
29
+ "shortDescription" : "",
30
+ "shape" : "[1, 1, 2560]",
31
+ "name" : "hidden_states",
32
+ "type" : "MultiArray"
33
+ },
34
+ {
35
+ "hasShapeFlexibility" : "0",
36
+ "isOptional" : "0",
37
+ "dataType" : "Int32",
38
+ "formattedType" : "MultiArray (Int32 1)",
39
+ "shortDescription" : "",
40
+ "shape" : "[1]",
41
+ "name" : "position_ids",
42
+ "type" : "MultiArray"
43
+ },
44
+ {
45
+ "hasShapeFlexibility" : "0",
46
+ "isOptional" : "0",
47
+ "dataType" : "Float16",
48
+ "formattedType" : "MultiArray (Float16 1 × 1 × 1 × 1024)",
49
+ "shortDescription" : "",
50
+ "shape" : "[1, 1, 1, 1024]",
51
+ "name" : "causal_mask",
52
+ "type" : "MultiArray"
53
+ },
54
+ {
55
+ "hasShapeFlexibility" : "0",
56
+ "isOptional" : "0",
57
+ "dataType" : "Int32",
58
+ "formattedType" : "MultiArray (Int32 1)",
59
+ "shortDescription" : "",
60
+ "shape" : "[1]",
61
+ "name" : "current_pos",
62
+ "type" : "MultiArray"
63
+ }
64
+ ],
65
+ "outputSchema" : [
66
+ {
67
+ "hasShapeFlexibility" : "0",
68
+ "isOptional" : "0",
69
+ "dataType" : "Float16",
70
+ "formattedType" : "MultiArray (Float16 1 × 1 × 2560)",
71
+ "shortDescription" : "",
72
+ "shape" : "[1, 1, 2560]",
73
+ "name" : "output_hidden_states",
74
+ "type" : "MultiArray"
75
+ }
76
+ ],
77
+ "modelParameters" : [
78
+
79
+ ],
80
+ "storagePrecision" : "Mixed (Float16, Palettized (11 bits), Palettized (13 bits), Palettized (15 bits), UInt4)",
81
+ "method" : "predict",
82
+ "functions" : [
83
+ {
84
+ "inputSchema" : [
85
+ {
86
+ "hasShapeFlexibility" : "0",
87
+ "isOptional" : "0",
88
+ "dataType" : "Float16",
89
+ "formattedType" : "MultiArray (Float16 1 × 1 × 2560)",
90
+ "shortDescription" : "",
91
+ "shape" : "[1, 1, 2560]",
92
+ "name" : "hidden_states",
93
+ "type" : "MultiArray"
94
+ },
95
+ {
96
+ "hasShapeFlexibility" : "0",
97
+ "isOptional" : "0",
98
+ "dataType" : "Int32",
99
+ "formattedType" : "MultiArray (Int32 1)",
100
+ "shortDescription" : "",
101
+ "shape" : "[1]",
102
+ "name" : "position_ids",
103
+ "type" : "MultiArray"
104
+ },
105
+ {
106
+ "hasShapeFlexibility" : "0",
107
+ "isOptional" : "0",
108
+ "dataType" : "Float16",
109
+ "formattedType" : "MultiArray (Float16 1 × 1 × 1 × 1024)",
110
+ "shortDescription" : "",
111
+ "shape" : "[1, 1, 1, 1024]",
112
+ "name" : "causal_mask",
113
+ "type" : "MultiArray"
114
+ },
115
+ {
116
+ "hasShapeFlexibility" : "0",
117
+ "isOptional" : "0",
118
+ "dataType" : "Int32",
119
+ "formattedType" : "MultiArray (Int32 1)",
120
+ "shortDescription" : "",
121
+ "shape" : "[1]",
122
+ "name" : "current_pos",
123
+ "type" : "MultiArray"
124
+ }
125
+ ],
126
+ "computePrecision" : "Mixed (Float16, Int32)",
127
+ "storagePrecision" : "Mixed (Float16, Palettized (11 bits), Palettized (13 bits), Palettized (15 bits), UInt4)",
128
+ "stateSchema" : [
129
+ {
130
+ "dataType" : "Float16",
131
+ "isOptional" : "0",
132
+ "formattedType" : "State (Float16 72 × 8 × 1024 × 128)",
133
+ "shortDescription" : "",
134
+ "shape" : "[72, 8, 1024, 128]",
135
+ "name" : "model_model_kv_cache_0",
136
+ "type" : "State"
137
+ }
138
+ ],
139
+ "outputSchema" : [
140
+ {
141
+ "hasShapeFlexibility" : "0",
142
+ "isOptional" : "0",
143
+ "dataType" : "Float16",
144
+ "formattedType" : "MultiArray (Float16 1 × 1 × 2560)",
145
+ "shortDescription" : "",
146
+ "shape" : "[1, 1, 2560]",
147
+ "name" : "output_hidden_states",
148
+ "type" : "MultiArray"
149
+ }
150
+ ],
151
+ "name" : "infer",
152
+ "mlProgramOperationTypeHistogram" : {
153
+ "Ios18.expandDims" : 72,
154
+ "Ios18.mul" : 144,
155
+ "Ios18.softmax" : 18,
156
+ "Ios18.matmul" : 36,
157
+ "Identity" : 1,
158
+ "Ios16.reduceMean" : 73,
159
+ "Ios18.greaterEqual" : 1,
160
+ "Select" : 1,
161
+ "Ios18.readState" : 37,
162
+ "Tile" : 36,
163
+ "Ios18.gather" : 2,
164
+ "Ios18.add" : 92,
165
+ "Ios18.layerNorm" : 73,
166
+ "Ios18.sliceUpdate" : 36,
167
+ "Ios18.writeState" : 36,
168
+ "Ios18.reshape" : 110,
169
+ "Ios18.constexprLutToDense" : 126,
170
+ "Ios18.conv" : 126,
171
+ "Ios18.concat" : 108,
172
+ "Ios18.transpose" : 108,
173
+ "Ios18.sub" : 73,
174
+ "Ios18.silu" : 18,
175
+ "Ios18.sliceByIndex" : 108,
176
+ "Ios18.squeeze" : 54
177
+ }
178
+ },
179
+ {
180
+ "inputSchema" : [
181
+ {
182
+ "hasShapeFlexibility" : "0",
183
+ "isOptional" : "0",
184
+ "dataType" : "Float16",
185
+ "formattedType" : "MultiArray (Float16 1 × 64 × 2560)",
186
+ "shortDescription" : "",
187
+ "shape" : "[1, 64, 2560]",
188
+ "name" : "hidden_states",
189
+ "type" : "MultiArray"
190
+ },
191
+ {
192
+ "hasShapeFlexibility" : "0",
193
+ "isOptional" : "0",
194
+ "dataType" : "Int32",
195
+ "formattedType" : "MultiArray (Int32 64)",
196
+ "shortDescription" : "",
197
+ "shape" : "[64]",
198
+ "name" : "position_ids",
199
+ "type" : "MultiArray"
200
+ },
201
+ {
202
+ "hasShapeFlexibility" : "0",
203
+ "isOptional" : "0",
204
+ "dataType" : "Float16",
205
+ "formattedType" : "MultiArray (Float16 1 × 1 × 64 × 1024)",
206
+ "shortDescription" : "",
207
+ "shape" : "[1, 1, 64, 1024]",
208
+ "name" : "causal_mask",
209
+ "type" : "MultiArray"
210
+ },
211
+ {
212
+ "hasShapeFlexibility" : "0",
213
+ "isOptional" : "0",
214
+ "dataType" : "Int32",
215
+ "formattedType" : "MultiArray (Int32 1)",
216
+ "shortDescription" : "",
217
+ "shape" : "[1]",
218
+ "name" : "current_pos",
219
+ "type" : "MultiArray"
220
+ }
221
+ ],
222
+ "computePrecision" : "Mixed (Float16, Int32)",
223
+ "storagePrecision" : "Mixed (Float16, Palettized (11 bits), Palettized (13 bits), Palettized (15 bits), UInt4)",
224
+ "stateSchema" : [
225
+ {
226
+ "dataType" : "Float16",
227
+ "isOptional" : "0",
228
+ "formattedType" : "State (Float16 72 × 8 × 1024 × 128)",
229
+ "shortDescription" : "",
230
+ "shape" : "[72, 8, 1024, 128]",
231
+ "name" : "model_model_kv_cache_0",
232
+ "type" : "State"
233
+ }
234
+ ],
235
+ "outputSchema" : [
236
+ {
237
+ "hasShapeFlexibility" : "0",
238
+ "isOptional" : "0",
239
+ "dataType" : "Float16",
240
+ "formattedType" : "MultiArray (Float16 1 × 64 × 2560)",
241
+ "shortDescription" : "",
242
+ "shape" : "[1, 64, 2560]",
243
+ "name" : "output_hidden_states",
244
+ "type" : "MultiArray"
245
+ }
246
+ ],
247
+ "name" : "prefill",
248
+ "mlProgramOperationTypeHistogram" : {
249
+ "Ios18.expandDims" : 72,
250
+ "Ios18.mul" : 144,
251
+ "Ios18.softmax" : 18,
252
+ "Ios18.matmul" : 36,
253
+ "Ios16.reduceMean" : 73,
254
+ "Ios18.greaterEqual" : 1,
255
+ "Select" : 1,
256
+ "Ios18.readState" : 37,
257
+ "Tile" : 36,
258
+ "Ios18.gather" : 2,
259
+ "Ios18.add" : 92,
260
+ "Ios18.layerNorm" : 73,
261
+ "Ios18.sliceUpdate" : 36,
262
+ "Ios18.writeState" : 36,
263
+ "Ios18.reshape" : 146,
264
+ "Ios18.constexprLutToDense" : 126,
265
+ "Ios18.conv" : 126,
266
+ "Ios18.concat" : 108,
267
+ "Ios18.transpose" : 164,
268
+ "Ios18.sub" : 73,
269
+ "Ios18.silu" : 18,
270
+ "Ios18.sliceByIndex" : 108,
271
+ "Ios18.squeeze" : 54
272
+ }
273
+ }
274
+ ],
275
+ "version" : "0.3.0",
276
+ "isUpdatable" : "0",
277
+ "defaultFunctionName" : "infer",
278
+ "specificationVersion" : 9,
279
+ "stateSchema" : [
280
+ {
281
+ "dataType" : "Float16",
282
+ "isOptional" : "0",
283
+ "formattedType" : "State (Float16 72 × 8 × 1024 × 128)",
284
+ "shortDescription" : "",
285
+ "shape" : "[72, 8, 1024, 128]",
286
+ "name" : "model_model_kv_cache_0",
287
+ "type" : "State"
288
+ }
289
+ ],
290
+ "computePrecision" : "Mixed (Float16, Int32)",
291
+ "mlProgramOperationTypeHistogram" : {
292
+ "Ios18.expandDims" : 72,
293
+ "Ios18.mul" : 144,
294
+ "Ios18.softmax" : 18,
295
+ "Ios18.matmul" : 36,
296
+ "Identity" : 1,
297
+ "Ios16.reduceMean" : 73,
298
+ "Ios18.greaterEqual" : 1,
299
+ "Select" : 1,
300
+ "Ios18.readState" : 37,
301
+ "Tile" : 36,
302
+ "Ios18.gather" : 2,
303
+ "Ios18.add" : 92,
304
+ "Ios18.layerNorm" : 73,
305
+ "Ios18.sliceUpdate" : 36,
306
+ "Ios18.writeState" : 36,
307
+ "Ios18.reshape" : 110,
308
+ "Ios18.constexprLutToDense" : 126,
309
+ "Ios18.conv" : 126,
310
+ "Ios18.concat" : 108,
311
+ "Ios18.transpose" : 108,
312
+ "Ios18.sub" : 73,
313
+ "Ios18.silu" : 18,
314
+ "Ios18.sliceByIndex" : 108,
315
+ "Ios18.squeeze" : 54
316
+ },
317
+ "shortDescription" : "Anemll Model: Multifunction FFN+Prefill",
318
+ "generatedClassName" : "qwen_FFN_PF_lut4_chunk_02of02",
319
+ "author" : "Converted with Anemll v0.3.0",
320
+ "modelType" : {
321
+ "name" : "MLModelType_mlProgram"
322
+ }
323
+ }
324
+ ]
qwen_FFN_PF_lut4_chunk_02of02.mlmodelc/model.mil ADDED
The diff for this file is too large to render. See raw diff
 
qwen_FFN_PF_lut4_chunk_02of02.mlmodelc/weights/weight.bin ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f0543f72213b5295cf375c10b60ecb4e04ce02ff7ea2d2edf24def9f47de0ca3
3
+ size 944314880
qwen_embeddings.mlmodelc/analytics/coremldata.bin ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:a86e14522df57e2d90d8b058c42c1b8a5f1f6b11186dfb9ed74c390527483570
3
+ size 243
qwen_embeddings.mlmodelc/coremldata.bin ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d04942306ac87412c0358cec9ec357be5b9f781e51e12524e1c763053480e082
3
+ size 501
qwen_embeddings.mlmodelc/metadata.json ADDED
@@ -0,0 +1,67 @@
1
+ [
2
+ {
3
+ "shortDescription" : "Anemll Model (Embeddings) converted to CoreML",
4
+ "metadataOutputVersion" : "3.0",
5
+ "outputSchema" : [
6
+ {
7
+ "hasShapeFlexibility" : "0",
8
+ "isOptional" : "0",
9
+ "dataType" : "Float16",
10
+ "formattedType" : "MultiArray (Float16)",
11
+ "shortDescription" : "",
12
+ "shape" : "[]",
13
+ "name" : "hidden_states",
14
+ "type" : "MultiArray"
15
+ }
16
+ ],
17
+ "version" : "0.3.0",
18
+ "modelParameters" : [
19
+
20
+ ],
21
+ "author" : "Converted with Anemll v0.3.0",
22
+ "specificationVersion" : 9,
23
+ "storagePrecision" : "Float16",
24
+ "mlProgramOperationTypeHistogram" : {
25
+ "Ios18.gather" : 1
26
+ },
27
+ "computePrecision" : "Mixed (Float16, Int32)",
28
+ "stateSchema" : [
29
+
30
+ ],
31
+ "isUpdatable" : "0",
32
+ "availability" : {
33
+ "macOS" : "15.0",
34
+ "tvOS" : "18.0",
35
+ "visionOS" : "2.0",
36
+ "watchOS" : "11.0",
37
+ "iOS" : "18.0",
38
+ "macCatalyst" : "18.0"
39
+ },
40
+ "modelType" : {
41
+ "name" : "MLModelType_mlProgram"
42
+ },
43
+ "inputSchema" : [
44
+ {
45
+ "shortDescription" : "",
46
+ "dataType" : "Int32",
47
+ "hasShapeFlexibility" : "1",
48
+ "isOptional" : "0",
49
+ "shapeFlexibility" : "1 × 1 | 1 × 64",
50
+ "formattedType" : "MultiArray (Int32 1 × 1)",
51
+ "type" : "MultiArray",
52
+ "shape" : "[1, 1]",
53
+ "name" : "input_ids",
54
+ "enumeratedShapes" : "[[1, 1], [1, 64]]"
55
+ }
56
+ ],
57
+ "userDefinedMetadata" : {
58
+ "com.anemll.context_length" : "1024",
59
+ "com.anemll.info" : "Converted with Anemll v0.3.0",
60
+ "com.github.apple.coremltools.source" : "torch==2.5.0",
61
+ "com.github.apple.coremltools.version" : "8.3.0",
62
+ "com.github.apple.coremltools.source_dialect" : "TorchScript"
63
+ },
64
+ "generatedClassName" : "qwen_embeddings",
65
+ "method" : "predict"
66
+ }
67
+ ]
qwen_embeddings.mlmodelc/model.mil ADDED
@@ -0,0 +1,11 @@
1
+ program(1.3)
2
+ [buildInfo = dict<string, string>({{"coremlc-component-MIL", "3500.11.1"}, {"coremlc-version", "3500.21.1"}, {"coremltools-component-torch", "2.5.0"}, {"coremltools-source-dialect", "TorchScript"}, {"coremltools-version", "8.3.0"}})]
3
+ {
4
+ func main<ios18>(tensor<int32, [1, ?]> input_ids) [FlexibleShapeInformation = tuple<tuple<string, dict<string, tensor<int32, [?]>>>, tuple<string, dict<string, dict<string, tensor<int32, [?]>>>>>((("DefaultShapes", {{"input_ids", [1, 1]}}), ("EnumeratedShapes", {{"79ae981e", {{"input_ids", [1, 1]}}}, {"ed9b58c8", {{"input_ids", [1, 64]}}}})))] {
5
+ int32 hidden_states_axis_0 = const()[name = string("hidden_states_axis_0"), val = int32(0)];
6
+ int32 hidden_states_batch_dims_0 = const()[name = string("hidden_states_batch_dims_0"), val = int32(0)];
7
+ bool hidden_states_validate_indices_0 = const()[name = string("hidden_states_validate_indices_0"), val = bool(false)];
8
+ tensor<fp16, [151936, 2560]> embed_tokens_weight_to_fp16 = const()[name = string("embed_tokens_weight_to_fp16"), val = tensor<fp16, [151936, 2560]>(BLOBFILE(path = string("@model_path/weights/weight.bin"), offset = uint64(64)))];
9
+ tensor<fp16, [1, ?, 2560]> hidden_states = gather(axis = hidden_states_axis_0, batch_dims = hidden_states_batch_dims_0, indices = input_ids, validate_indices = hidden_states_validate_indices_0, x = embed_tokens_weight_to_fp16)[name = string("hidden_states_cast_fp16")];
10
+ } -> (hidden_states);
11
+ }
qwen_embeddings.mlmodelc/weights/weight.bin ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:42bfb54b57d5ca84fcaa5a9d0c243febdf0668863d33996cbd93631caf1abfe0
3
+ size 777912448
qwen_lm_head_lut8.mlmodelc/analytics/coremldata.bin ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:9ba4088211e9882eeb9250e64cd8dc34128ab2498383a316398e9c988e58f0a2
3
+ size 243
qwen_lm_head_lut8.mlmodelc/coremldata.bin ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:5b36cd6b96e663ed87d7c65cfe75f2aab53e1ba16e9cab13f3d0081747bbc80b
3
+ size 898
qwen_lm_head_lut8.mlmodelc/metadata.json ADDED
@@ -0,0 +1,220 @@
1
+ [
2
+ {
3
+ "shortDescription" : "Anemll Model (LM Head) converted to CoreML",
4
+ "metadataOutputVersion" : "3.0",
5
+ "outputSchema" : [
6
+ {
7
+ "hasShapeFlexibility" : "0",
8
+ "isOptional" : "0",
9
+ "dataType" : "Float16",
10
+ "formattedType" : "MultiArray (Float16 1 × 1 × 9496)",
11
+ "shortDescription" : "",
12
+ "shape" : "[1, 1, 9496]",
13
+ "name" : "logits1",
14
+ "type" : "MultiArray"
15
+ },
16
+ {
17
+ "hasShapeFlexibility" : "0",
18
+ "isOptional" : "0",
19
+ "dataType" : "Float16",
20
+ "formattedType" : "MultiArray (Float16 1 × 1 × 9496)",
21
+ "shortDescription" : "",
22
+ "shape" : "[1, 1, 9496]",
23
+ "name" : "logits2",
24
+ "type" : "MultiArray"
25
+ },
26
+ {
27
+ "hasShapeFlexibility" : "0",
28
+ "isOptional" : "0",
29
+ "dataType" : "Float16",
30
+ "formattedType" : "MultiArray (Float16 1 × 1 × 9496)",
31
+ "shortDescription" : "",
32
+ "shape" : "[1, 1, 9496]",
33
+ "name" : "logits3",
34
+ "type" : "MultiArray"
35
+ },
36
+ {
37
+ "hasShapeFlexibility" : "0",
38
+ "isOptional" : "0",
39
+ "dataType" : "Float16",
40
+ "formattedType" : "MultiArray (Float16 1 × 1 × 9496)",
41
+ "shortDescription" : "",
42
+ "shape" : "[1, 1, 9496]",
43
+ "name" : "logits4",
44
+ "type" : "MultiArray"
45
+ },
46
+ {
47
+ "hasShapeFlexibility" : "0",
48
+ "isOptional" : "0",
49
+ "dataType" : "Float16",
50
+ "formattedType" : "MultiArray (Float16 1 × 1 × 9496)",
51
+ "shortDescription" : "",
52
+ "shape" : "[1, 1, 9496]",
53
+ "name" : "logits5",
54
+ "type" : "MultiArray"
55
+ },
56
+ {
57
+ "hasShapeFlexibility" : "0",
58
+ "isOptional" : "0",
59
+ "dataType" : "Float16",
60
+ "formattedType" : "MultiArray (Float16 1 × 1 × 9496)",
61
+ "shortDescription" : "",
62
+ "shape" : "[1, 1, 9496]",
63
+ "name" : "logits6",
64
+ "type" : "MultiArray"
65
+ },
66
+ {
67
+ "hasShapeFlexibility" : "0",
68
+ "isOptional" : "0",
69
+ "dataType" : "Float16",
70
+ "formattedType" : "MultiArray (Float16 1 × 1 × 9496)",
71
+ "shortDescription" : "",
72
+ "shape" : "[1, 1, 9496]",
73
+ "name" : "logits7",
74
+ "type" : "MultiArray"
75
+ },
76
+ {
77
+ "hasShapeFlexibility" : "0",
78
+ "isOptional" : "0",
79
+ "dataType" : "Float16",
80
+ "formattedType" : "MultiArray (Float16 1 × 1 × 9496)",
81
+ "shortDescription" : "",
82
+ "shape" : "[1, 1, 9496]",
83
+ "name" : "logits8",
84
+ "type" : "MultiArray"
85
+ },
86
+ {
87
+ "hasShapeFlexibility" : "0",
88
+ "isOptional" : "0",
89
+ "dataType" : "Float16",
90
+ "formattedType" : "MultiArray (Float16 1 × 1 × 9496)",
91
+ "shortDescription" : "",
92
+ "shape" : "[1, 1, 9496]",
93
+ "name" : "logits9",
94
+ "type" : "MultiArray"
95
+ },
96
+ {
97
+ "hasShapeFlexibility" : "0",
98
+ "isOptional" : "0",
99
+ "dataType" : "Float16",
100
+ "formattedType" : "MultiArray (Float16 1 × 1 × 9496)",
101
+ "shortDescription" : "",
102
+ "shape" : "[1, 1, 9496]",
103
+ "name" : "logits10",
104
+ "type" : "MultiArray"
105
+ },
106
+ {
107
+ "hasShapeFlexibility" : "0",
108
+ "isOptional" : "0",
109
+ "dataType" : "Float16",
110
+ "formattedType" : "MultiArray (Float16 1 × 1 × 9496)",
111
+ "shortDescription" : "",
112
+ "shape" : "[1, 1, 9496]",
113
+ "name" : "logits11",
114
+ "type" : "MultiArray"
115
+ },
116
+ {
117
+ "hasShapeFlexibility" : "0",
118
+ "isOptional" : "0",
119
+ "dataType" : "Float16",
120
+ "formattedType" : "MultiArray (Float16 1 × 1 × 9496)",
121
+ "shortDescription" : "",
122
+ "shape" : "[1, 1, 9496]",
123
+ "name" : "logits12",
124
+ "type" : "MultiArray"
125
+ },
126
+ {
127
+ "hasShapeFlexibility" : "0",
128
+ "isOptional" : "0",
129
+ "dataType" : "Float16",
130
+ "formattedType" : "MultiArray (Float16 1 × 1 × 9496)",
131
+ "shortDescription" : "",
132
+ "shape" : "[1, 1, 9496]",
133
+ "name" : "logits13",
134
+ "type" : "MultiArray"
135
+ },
136
+ {
137
+ "hasShapeFlexibility" : "0",
138
+ "isOptional" : "0",
139
+ "dataType" : "Float16",
140
+ "formattedType" : "MultiArray (Float16 1 × 1 × 9496)",
141
+ "shortDescription" : "",
142
+ "shape" : "[1, 1, 9496]",
143
+ "name" : "logits14",
144
+ "type" : "MultiArray"
145
+ },
146
+ {
147
+ "hasShapeFlexibility" : "0",
148
+ "isOptional" : "0",
149
+ "dataType" : "Float16",
150
+ "formattedType" : "MultiArray (Float16 1 × 1 × 9496)",
151
+ "shortDescription" : "",
152
+ "shape" : "[1, 1, 9496]",
153
+ "name" : "logits15",
154
+ "type" : "MultiArray"
155
+ },
156
+ {
157
+ "hasShapeFlexibility" : "0",
158
+ "isOptional" : "0",
159
+ "dataType" : "Float16",
160
+ "formattedType" : "MultiArray (Float16 1 × 1 × 9496)",
161
+ "shortDescription" : "",
162
+ "shape" : "[1, 1, 9496]",
163
+ "name" : "logits16",
164
+ "type" : "MultiArray"
165
+ }
166
+ ],
167
+ "version" : "0.3.0",
168
+ "modelParameters" : [
169
+
170
+ ],
171
+ "author" : "Converted with Anemll v0.3.0",
172
+ "specificationVersion" : 9,
173
+ "storagePrecision" : "Mixed (Float16, Palettized (19 bits), UInt8)",
174
+ "mlProgramOperationTypeHistogram" : {
175
+ "Ios18.transpose" : 17,
176
+ "Ios18.constexprLutToDense" : 16,
177
+ "Ios18.expandDims" : 1,
178
+ "Ios18.conv" : 16,
179
+ "Ios18.squeeze" : 16
180
+ },
181
+ "computePrecision" : "Mixed (Float16, Int32)",
182
+ "stateSchema" : [
183
+
184
+ ],
185
+ "isUpdatable" : "0",
186
+ "availability" : {
187
+ "macOS" : "15.0",
188
+ "tvOS" : "18.0",
189
+ "visionOS" : "2.0",
190
+ "watchOS" : "11.0",
191
+ "iOS" : "18.0",
192
+ "macCatalyst" : "18.0"
193
+ },
194
+ "modelType" : {
195
+ "name" : "MLModelType_mlProgram"
196
+ },
197
+ "inputSchema" : [
198
+ {
199
+ "hasShapeFlexibility" : "0",
200
+ "isOptional" : "0",
201
+ "dataType" : "Float16",
202
+ "formattedType" : "MultiArray (Float16 1 × 1 × 2560)",
203
+ "shortDescription" : "",
204
+ "shape" : "[1, 1, 2560]",
205
+ "name" : "hidden_states",
206
+ "type" : "MultiArray"
207
+ }
208
+ ],
209
+ "userDefinedMetadata" : {
210
+ "com.anemll.context_length" : "1024",
211
+ "com.anemll.info" : "Converted with Anemll v0.3.0",
212
+ "com.anemll.lut_bits" : "8",
213
+ "com.github.apple.coremltools.source" : "torch==2.5.0",
214
+ "com.github.apple.coremltools.version" : "8.3.0",
215
+ "com.github.apple.coremltools.source_dialect" : "TorchScript"
216
+ },
217
+ "generatedClassName" : "qwen_lm_head_lut8",
218
+ "method" : "predict"
219
+ }
220
+ ]
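
For orientation, the schema above says this LM-head part takes a single Float16 `hidden_states` tensor of shape [1, 1, 2560] and returns the logits split into 16 shards (`logits1` … `logits16`) of 9496 entries each, i.e. 16 × 9496 = 151,936 values in total. Below is a minimal sketch (not part of the uploaded files) of driving just this part with a recent `coremltools`, assuming the `.mlmodelc` archive has been unzipped locally; the zero-filled `hidden_states` is only a placeholder for the output of the preceding model chunks.

```python
# Hypothetical sketch: run the LM-head part on its own and reassemble the logits.
import numpy as np
import coremltools as ct

lm_head = ct.models.CompiledMLModel(
    "qwen_lm_head_lut8.mlmodelc",
    compute_units=ct.ComputeUnit.CPU_AND_NE,  # prefer the Apple Neural Engine
)

# Input schema: Float16 MultiArray [1, 1, 2560] named "hidden_states".
hidden_states = np.zeros((1, 1, 2560), dtype=np.float16)  # placeholder input
outputs = lm_head.predict({"hidden_states": hidden_states})

# Output schema: 16 shards "logits1".."logits16", each [1, 1, 9496];
# concatenating them restores the full vocabulary axis (16 * 9496 = 151936).
logits = np.concatenate([outputs[f"logits{i}"] for i in range(1, 17)], axis=-1)
print(logits.shape, int(np.argmax(logits)))
```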
qwen_lm_head_lut8.mlmodelc/model.mil ADDED
@@ -0,0 +1,186 @@
1
+ program(1.3)
2
+ [buildInfo = dict<string, string>({{"coremlc-component-MIL", "3500.11.1"}, {"coremlc-version", "3500.21.1"}})]
3
+ {
4
+ func main<ios18>(tensor<fp16, [1, 1, 2560]> hidden_states) {
5
+ tensor<int32, [3]> var_5 = const()[name = string("op_5"), val = tensor<int32, [3]>([0, 2, 1])];
6
+ tensor<int32, [1]> input_axes_0 = const()[name = string("input_axes_0"), val = tensor<int32, [1]>([2])];
7
+ tensor<fp16, [1, 2560, 1]> var_6_cast_fp16 = transpose(perm = var_5, x = hidden_states)[name = string("transpose_16")];
8
+ tensor<fp16, [1, 2560, 1, 1]> input_cast_fp16 = expand_dims(axes = input_axes_0, x = var_6_cast_fp16)[name = string("input_cast_fp16")];
9
+ string var_29_pad_type_0 = const()[name = string("op_29_pad_type_0"), val = string("valid")];
10
+ tensor<int32, [2]> var_29_strides_0 = const()[name = string("op_29_strides_0"), val = tensor<int32, [2]>([1, 1])];
11
+ tensor<int32, [4]> var_29_pad_0 = const()[name = string("op_29_pad_0"), val = tensor<int32, [4]>([0, 0, 0, 0])];
12
+ tensor<int32, [2]> var_29_dilations_0 = const()[name = string("op_29_dilations_0"), val = tensor<int32, [2]>([1, 1])];
13
+ int32 var_29_groups_0 = const()[name = string("op_29_groups_0"), val = int32(1)];
14
+ tensor<fp16, [9496, 2560, 1, 1]> op_9_promoted_to_fp16_palettized = constexpr_lut_to_dense(indices = tensor<uint8, [9496, 2560, 1, 1]>(BLOBFILE(path = string("@model_path/weights/weight.bin"), offset = uint64(64))), lut = tensor<fp16, [1187, 1, 1, 1, 256, 1]>(BLOBFILE(path = string("@model_path/weights/weight.bin"), offset = uint64(24309888))))[name = string("op_9_promoted_to_fp16_palettized")];
15
+ tensor<fp16, [1, 9496, 1, 1]> var_29_cast_fp16 = conv(dilations = var_29_dilations_0, groups = var_29_groups_0, pad = var_29_pad_0, pad_type = var_29_pad_type_0, strides = var_29_strides_0, weight = op_9_promoted_to_fp16_palettized, x = input_cast_fp16)[name = string("op_29_cast_fp16")];
16
+ tensor<int32, [1]> var_31_axes_0 = const()[name = string("op_31_axes_0"), val = tensor<int32, [1]>([2])];
17
+ tensor<fp16, [1, 9496, 1]> var_31_cast_fp16 = squeeze(axes = var_31_axes_0, x = var_29_cast_fp16)[name = string("op_31_cast_fp16")];
18
+ tensor<int32, [3]> var_34_perm_0 = const()[name = string("op_34_perm_0"), val = tensor<int32, [3]>([0, 2, 1])];
19
+ string var_55_pad_type_0 = const()[name = string("op_55_pad_type_0"), val = string("valid")];
20
+ tensor<int32, [2]> var_55_strides_0 = const()[name = string("op_55_strides_0"), val = tensor<int32, [2]>([1, 1])];
21
+ tensor<int32, [4]> var_55_pad_0 = const()[name = string("op_55_pad_0"), val = tensor<int32, [4]>([0, 0, 0, 0])];
22
+ tensor<int32, [2]> var_55_dilations_0 = const()[name = string("op_55_dilations_0"), val = tensor<int32, [2]>([1, 1])];
23
+ int32 var_55_groups_0 = const()[name = string("op_55_groups_0"), val = int32(1)];
24
+ tensor<fp16, [9496, 2560, 1, 1]> op_35_promoted_to_fp16_palettized = constexpr_lut_to_dense(indices = tensor<uint8, [9496, 2560, 1, 1]>(BLOBFILE(path = string("@model_path/weights/weight.bin"), offset = uint64(24917696))), lut = tensor<fp16, [1187, 1, 1, 1, 256, 1]>(BLOBFILE(path = string("@model_path/weights/weight.bin"), offset = uint64(49227520))))[name = string("op_35_promoted_to_fp16_palettized")];
25
+ tensor<fp16, [1, 9496, 1, 1]> var_55_cast_fp16 = conv(dilations = var_55_dilations_0, groups = var_55_groups_0, pad = var_55_pad_0, pad_type = var_55_pad_type_0, strides = var_55_strides_0, weight = op_35_promoted_to_fp16_palettized, x = input_cast_fp16)[name = string("op_55_cast_fp16")];
26
+ tensor<int32, [1]> var_57_axes_0 = const()[name = string("op_57_axes_0"), val = tensor<int32, [1]>([2])];
27
+ tensor<fp16, [1, 9496, 1]> var_57_cast_fp16 = squeeze(axes = var_57_axes_0, x = var_55_cast_fp16)[name = string("op_57_cast_fp16")];
28
+ tensor<int32, [3]> var_60_perm_0 = const()[name = string("op_60_perm_0"), val = tensor<int32, [3]>([0, 2, 1])];
29
+ string var_81_pad_type_0 = const()[name = string("op_81_pad_type_0"), val = string("valid")];
30
+ tensor<int32, [2]> var_81_strides_0 = const()[name = string("op_81_strides_0"), val = tensor<int32, [2]>([1, 1])];
31
+ tensor<int32, [4]> var_81_pad_0 = const()[name = string("op_81_pad_0"), val = tensor<int32, [4]>([0, 0, 0, 0])];
32
+ tensor<int32, [2]> var_81_dilations_0 = const()[name = string("op_81_dilations_0"), val = tensor<int32, [2]>([1, 1])];
33
+ int32 var_81_groups_0 = const()[name = string("op_81_groups_0"), val = int32(1)];
34
+ tensor<fp16, [9496, 2560, 1, 1]> op_61_promoted_to_fp16_palettized = constexpr_lut_to_dense(indices = tensor<uint8, [9496, 2560, 1, 1]>(BLOBFILE(path = string("@model_path/weights/weight.bin"), offset = uint64(49835328))), lut = tensor<fp16, [1187, 1, 1, 1, 256, 1]>(BLOBFILE(path = string("@model_path/weights/weight.bin"), offset = uint64(74145152))))[name = string("op_61_promoted_to_fp16_palettized")];
35
+ tensor<fp16, [1, 9496, 1, 1]> var_81_cast_fp16 = conv(dilations = var_81_dilations_0, groups = var_81_groups_0, pad = var_81_pad_0, pad_type = var_81_pad_type_0, strides = var_81_strides_0, weight = op_61_promoted_to_fp16_palettized, x = input_cast_fp16)[name = string("op_81_cast_fp16")];
36
+ tensor<int32, [1]> var_83_axes_0 = const()[name = string("op_83_axes_0"), val = tensor<int32, [1]>([2])];
37
+ tensor<fp16, [1, 9496, 1]> var_83_cast_fp16 = squeeze(axes = var_83_axes_0, x = var_81_cast_fp16)[name = string("op_83_cast_fp16")];
38
+ tensor<int32, [3]> var_86_perm_0 = const()[name = string("op_86_perm_0"), val = tensor<int32, [3]>([0, 2, 1])];
39
+ string var_107_pad_type_0 = const()[name = string("op_107_pad_type_0"), val = string("valid")];
40
+ tensor<int32, [2]> var_107_strides_0 = const()[name = string("op_107_strides_0"), val = tensor<int32, [2]>([1, 1])];
41
+ tensor<int32, [4]> var_107_pad_0 = const()[name = string("op_107_pad_0"), val = tensor<int32, [4]>([0, 0, 0, 0])];
42
+ tensor<int32, [2]> var_107_dilations_0 = const()[name = string("op_107_dilations_0"), val = tensor<int32, [2]>([1, 1])];
43
+ int32 var_107_groups_0 = const()[name = string("op_107_groups_0"), val = int32(1)];
44
+ tensor<fp16, [9496, 2560, 1, 1]> op_87_promoted_to_fp16_palettized = constexpr_lut_to_dense(indices = tensor<uint8, [9496, 2560, 1, 1]>(BLOBFILE(path = string("@model_path/weights/weight.bin"), offset = uint64(74752960))), lut = tensor<fp16, [1187, 1, 1, 1, 256, 1]>(BLOBFILE(path = string("@model_path/weights/weight.bin"), offset = uint64(99062784))))[name = string("op_87_promoted_to_fp16_palettized")];
45
+ tensor<fp16, [1, 9496, 1, 1]> var_107_cast_fp16 = conv(dilations = var_107_dilations_0, groups = var_107_groups_0, pad = var_107_pad_0, pad_type = var_107_pad_type_0, strides = var_107_strides_0, weight = op_87_promoted_to_fp16_palettized, x = input_cast_fp16)[name = string("op_107_cast_fp16")];
46
+ tensor<int32, [1]> var_109_axes_0 = const()[name = string("op_109_axes_0"), val = tensor<int32, [1]>([2])];
47
+ tensor<fp16, [1, 9496, 1]> var_109_cast_fp16 = squeeze(axes = var_109_axes_0, x = var_107_cast_fp16)[name = string("op_109_cast_fp16")];
48
+ tensor<int32, [3]> var_112_perm_0 = const()[name = string("op_112_perm_0"), val = tensor<int32, [3]>([0, 2, 1])];
49
+ string var_133_pad_type_0 = const()[name = string("op_133_pad_type_0"), val = string("valid")];
50
+ tensor<int32, [2]> var_133_strides_0 = const()[name = string("op_133_strides_0"), val = tensor<int32, [2]>([1, 1])];
51
+ tensor<int32, [4]> var_133_pad_0 = const()[name = string("op_133_pad_0"), val = tensor<int32, [4]>([0, 0, 0, 0])];
52
+ tensor<int32, [2]> var_133_dilations_0 = const()[name = string("op_133_dilations_0"), val = tensor<int32, [2]>([1, 1])];
53
+ int32 var_133_groups_0 = const()[name = string("op_133_groups_0"), val = int32(1)];
54
+ tensor<fp16, [9496, 2560, 1, 1]> op_113_promoted_to_fp16_palettized = constexpr_lut_to_dense(indices = tensor<uint8, [9496, 2560, 1, 1]>(BLOBFILE(path = string("@model_path/weights/weight.bin"), offset = uint64(99670592))), lut = tensor<fp16, [1187, 1, 1, 1, 256, 1]>(BLOBFILE(path = string("@model_path/weights/weight.bin"), offset = uint64(123980416))))[name = string("op_113_promoted_to_fp16_palettized")];
55
+ tensor<fp16, [1, 9496, 1, 1]> var_133_cast_fp16 = conv(dilations = var_133_dilations_0, groups = var_133_groups_0, pad = var_133_pad_0, pad_type = var_133_pad_type_0, strides = var_133_strides_0, weight = op_113_promoted_to_fp16_palettized, x = input_cast_fp16)[name = string("op_133_cast_fp16")];
56
+ tensor<int32, [1]> var_135_axes_0 = const()[name = string("op_135_axes_0"), val = tensor<int32, [1]>([2])];
57
+ tensor<fp16, [1, 9496, 1]> var_135_cast_fp16 = squeeze(axes = var_135_axes_0, x = var_133_cast_fp16)[name = string("op_135_cast_fp16")];
58
+ tensor<int32, [3]> var_138_perm_0 = const()[name = string("op_138_perm_0"), val = tensor<int32, [3]>([0, 2, 1])];
59
+ string var_159_pad_type_0 = const()[name = string("op_159_pad_type_0"), val = string("valid")];
60
+ tensor<int32, [2]> var_159_strides_0 = const()[name = string("op_159_strides_0"), val = tensor<int32, [2]>([1, 1])];
61
+ tensor<int32, [4]> var_159_pad_0 = const()[name = string("op_159_pad_0"), val = tensor<int32, [4]>([0, 0, 0, 0])];
62
+ tensor<int32, [2]> var_159_dilations_0 = const()[name = string("op_159_dilations_0"), val = tensor<int32, [2]>([1, 1])];
63
+ int32 var_159_groups_0 = const()[name = string("op_159_groups_0"), val = int32(1)];
64
+ tensor<fp16, [9496, 2560, 1, 1]> op_139_promoted_to_fp16_palettized = constexpr_lut_to_dense(indices = tensor<uint8, [9496, 2560, 1, 1]>(BLOBFILE(path = string("@model_path/weights/weight.bin"), offset = uint64(124588224))), lut = tensor<fp16, [1187, 1, 1, 1, 256, 1]>(BLOBFILE(path = string("@model_path/weights/weight.bin"), offset = uint64(148898048))))[name = string("op_139_promoted_to_fp16_palettized")];
65
+ tensor<fp16, [1, 9496, 1, 1]> var_159_cast_fp16 = conv(dilations = var_159_dilations_0, groups = var_159_groups_0, pad = var_159_pad_0, pad_type = var_159_pad_type_0, strides = var_159_strides_0, weight = op_139_promoted_to_fp16_palettized, x = input_cast_fp16)[name = string("op_159_cast_fp16")];
66
+ tensor<int32, [1]> var_161_axes_0 = const()[name = string("op_161_axes_0"), val = tensor<int32, [1]>([2])];
67
+ tensor<fp16, [1, 9496, 1]> var_161_cast_fp16 = squeeze(axes = var_161_axes_0, x = var_159_cast_fp16)[name = string("op_161_cast_fp16")];
68
+ tensor<int32, [3]> var_164_perm_0 = const()[name = string("op_164_perm_0"), val = tensor<int32, [3]>([0, 2, 1])];
69
+ string var_185_pad_type_0 = const()[name = string("op_185_pad_type_0"), val = string("valid")];
70
+ tensor<int32, [2]> var_185_strides_0 = const()[name = string("op_185_strides_0"), val = tensor<int32, [2]>([1, 1])];
71
+ tensor<int32, [4]> var_185_pad_0 = const()[name = string("op_185_pad_0"), val = tensor<int32, [4]>([0, 0, 0, 0])];
72
+ tensor<int32, [2]> var_185_dilations_0 = const()[name = string("op_185_dilations_0"), val = tensor<int32, [2]>([1, 1])];
73
+ int32 var_185_groups_0 = const()[name = string("op_185_groups_0"), val = int32(1)];
74
+ tensor<fp16, [9496, 2560, 1, 1]> op_165_promoted_to_fp16_palettized = constexpr_lut_to_dense(indices = tensor<uint8, [9496, 2560, 1, 1]>(BLOBFILE(path = string("@model_path/weights/weight.bin"), offset = uint64(149505856))), lut = tensor<fp16, [1187, 1, 1, 1, 256, 1]>(BLOBFILE(path = string("@model_path/weights/weight.bin"), offset = uint64(173815680))))[name = string("op_165_promoted_to_fp16_palettized")];
75
+ tensor<fp16, [1, 9496, 1, 1]> var_185_cast_fp16 = conv(dilations = var_185_dilations_0, groups = var_185_groups_0, pad = var_185_pad_0, pad_type = var_185_pad_type_0, strides = var_185_strides_0, weight = op_165_promoted_to_fp16_palettized, x = input_cast_fp16)[name = string("op_185_cast_fp16")];
76
+ tensor<int32, [1]> var_187_axes_0 = const()[name = string("op_187_axes_0"), val = tensor<int32, [1]>([2])];
77
+ tensor<fp16, [1, 9496, 1]> var_187_cast_fp16 = squeeze(axes = var_187_axes_0, x = var_185_cast_fp16)[name = string("op_187_cast_fp16")];
78
+ tensor<int32, [3]> var_190_perm_0 = const()[name = string("op_190_perm_0"), val = tensor<int32, [3]>([0, 2, 1])];
79
+ string var_211_pad_type_0 = const()[name = string("op_211_pad_type_0"), val = string("valid")];
80
+ tensor<int32, [2]> var_211_strides_0 = const()[name = string("op_211_strides_0"), val = tensor<int32, [2]>([1, 1])];
81
+ tensor<int32, [4]> var_211_pad_0 = const()[name = string("op_211_pad_0"), val = tensor<int32, [4]>([0, 0, 0, 0])];
82
+ tensor<int32, [2]> var_211_dilations_0 = const()[name = string("op_211_dilations_0"), val = tensor<int32, [2]>([1, 1])];
83
+ int32 var_211_groups_0 = const()[name = string("op_211_groups_0"), val = int32(1)];
84
+ tensor<fp16, [9496, 2560, 1, 1]> op_191_promoted_to_fp16_palettized = constexpr_lut_to_dense(indices = tensor<uint8, [9496, 2560, 1, 1]>(BLOBFILE(path = string("@model_path/weights/weight.bin"), offset = uint64(174423488))), lut = tensor<fp16, [1187, 1, 1, 1, 256, 1]>(BLOBFILE(path = string("@model_path/weights/weight.bin"), offset = uint64(198733312))))[name = string("op_191_promoted_to_fp16_palettized")];
85
+ tensor<fp16, [1, 9496, 1, 1]> var_211_cast_fp16 = conv(dilations = var_211_dilations_0, groups = var_211_groups_0, pad = var_211_pad_0, pad_type = var_211_pad_type_0, strides = var_211_strides_0, weight = op_191_promoted_to_fp16_palettized, x = input_cast_fp16)[name = string("op_211_cast_fp16")];
86
+ tensor<int32, [1]> var_213_axes_0 = const()[name = string("op_213_axes_0"), val = tensor<int32, [1]>([2])];
87
+ tensor<fp16, [1, 9496, 1]> var_213_cast_fp16 = squeeze(axes = var_213_axes_0, x = var_211_cast_fp16)[name = string("op_213_cast_fp16")];
88
+ tensor<int32, [3]> var_216_perm_0 = const()[name = string("op_216_perm_0"), val = tensor<int32, [3]>([0, 2, 1])];
89
+ string var_237_pad_type_0 = const()[name = string("op_237_pad_type_0"), val = string("valid")];
90
+ tensor<int32, [2]> var_237_strides_0 = const()[name = string("op_237_strides_0"), val = tensor<int32, [2]>([1, 1])];
91
+ tensor<int32, [4]> var_237_pad_0 = const()[name = string("op_237_pad_0"), val = tensor<int32, [4]>([0, 0, 0, 0])];
92
+ tensor<int32, [2]> var_237_dilations_0 = const()[name = string("op_237_dilations_0"), val = tensor<int32, [2]>([1, 1])];
93
+ int32 var_237_groups_0 = const()[name = string("op_237_groups_0"), val = int32(1)];
94
+ tensor<fp16, [9496, 2560, 1, 1]> op_217_promoted_to_fp16_palettized = constexpr_lut_to_dense(indices = tensor<uint8, [9496, 2560, 1, 1]>(BLOBFILE(path = string("@model_path/weights/weight.bin"), offset = uint64(199341120))), lut = tensor<fp16, [1187, 1, 1, 1, 256, 1]>(BLOBFILE(path = string("@model_path/weights/weight.bin"), offset = uint64(223650944))))[name = string("op_217_promoted_to_fp16_palettized")];
95
+ tensor<fp16, [1, 9496, 1, 1]> var_237_cast_fp16 = conv(dilations = var_237_dilations_0, groups = var_237_groups_0, pad = var_237_pad_0, pad_type = var_237_pad_type_0, strides = var_237_strides_0, weight = op_217_promoted_to_fp16_palettized, x = input_cast_fp16)[name = string("op_237_cast_fp16")];
96
+ tensor<int32, [1]> var_239_axes_0 = const()[name = string("op_239_axes_0"), val = tensor<int32, [1]>([2])];
97
+ tensor<fp16, [1, 9496, 1]> var_239_cast_fp16 = squeeze(axes = var_239_axes_0, x = var_237_cast_fp16)[name = string("op_239_cast_fp16")];
98
+ tensor<int32, [3]> var_242_perm_0 = const()[name = string("op_242_perm_0"), val = tensor<int32, [3]>([0, 2, 1])];
99
+ string var_263_pad_type_0 = const()[name = string("op_263_pad_type_0"), val = string("valid")];
100
+ tensor<int32, [2]> var_263_strides_0 = const()[name = string("op_263_strides_0"), val = tensor<int32, [2]>([1, 1])];
101
+ tensor<int32, [4]> var_263_pad_0 = const()[name = string("op_263_pad_0"), val = tensor<int32, [4]>([0, 0, 0, 0])];
102
+ tensor<int32, [2]> var_263_dilations_0 = const()[name = string("op_263_dilations_0"), val = tensor<int32, [2]>([1, 1])];
103
+ int32 var_263_groups_0 = const()[name = string("op_263_groups_0"), val = int32(1)];
104
+ tensor<fp16, [9496, 2560, 1, 1]> op_243_promoted_to_fp16_palettized = constexpr_lut_to_dense(indices = tensor<uint8, [9496, 2560, 1, 1]>(BLOBFILE(path = string("@model_path/weights/weight.bin"), offset = uint64(224258752))), lut = tensor<fp16, [1187, 1, 1, 1, 256, 1]>(BLOBFILE(path = string("@model_path/weights/weight.bin"), offset = uint64(248568576))))[name = string("op_243_promoted_to_fp16_palettized")];
105
+ tensor<fp16, [1, 9496, 1, 1]> var_263_cast_fp16 = conv(dilations = var_263_dilations_0, groups = var_263_groups_0, pad = var_263_pad_0, pad_type = var_263_pad_type_0, strides = var_263_strides_0, weight = op_243_promoted_to_fp16_palettized, x = input_cast_fp16)[name = string("op_263_cast_fp16")];
106
+ tensor<int32, [1]> var_265_axes_0 = const()[name = string("op_265_axes_0"), val = tensor<int32, [1]>([2])];
107
+ tensor<fp16, [1, 9496, 1]> var_265_cast_fp16 = squeeze(axes = var_265_axes_0, x = var_263_cast_fp16)[name = string("op_265_cast_fp16")];
108
+ tensor<int32, [3]> var_268_perm_0 = const()[name = string("op_268_perm_0"), val = tensor<int32, [3]>([0, 2, 1])];
109
+ string var_289_pad_type_0 = const()[name = string("op_289_pad_type_0"), val = string("valid")];
110
+ tensor<int32, [2]> var_289_strides_0 = const()[name = string("op_289_strides_0"), val = tensor<int32, [2]>([1, 1])];
111
+ tensor<int32, [4]> var_289_pad_0 = const()[name = string("op_289_pad_0"), val = tensor<int32, [4]>([0, 0, 0, 0])];
112
+ tensor<int32, [2]> var_289_dilations_0 = const()[name = string("op_289_dilations_0"), val = tensor<int32, [2]>([1, 1])];
113
+ int32 var_289_groups_0 = const()[name = string("op_289_groups_0"), val = int32(1)];
114
+ tensor<fp16, [9496, 2560, 1, 1]> op_269_promoted_to_fp16_palettized = constexpr_lut_to_dense(indices = tensor<uint8, [9496, 2560, 1, 1]>(BLOBFILE(path = string("@model_path/weights/weight.bin"), offset = uint64(249176384))), lut = tensor<fp16, [1187, 1, 1, 1, 256, 1]>(BLOBFILE(path = string("@model_path/weights/weight.bin"), offset = uint64(273486208))))[name = string("op_269_promoted_to_fp16_palettized")];
115
+ tensor<fp16, [1, 9496, 1, 1]> var_289_cast_fp16 = conv(dilations = var_289_dilations_0, groups = var_289_groups_0, pad = var_289_pad_0, pad_type = var_289_pad_type_0, strides = var_289_strides_0, weight = op_269_promoted_to_fp16_palettized, x = input_cast_fp16)[name = string("op_289_cast_fp16")];
116
+ tensor<int32, [1]> var_291_axes_0 = const()[name = string("op_291_axes_0"), val = tensor<int32, [1]>([2])];
117
+ tensor<fp16, [1, 9496, 1]> var_291_cast_fp16 = squeeze(axes = var_291_axes_0, x = var_289_cast_fp16)[name = string("op_291_cast_fp16")];
118
+ tensor<int32, [3]> var_294_perm_0 = const()[name = string("op_294_perm_0"), val = tensor<int32, [3]>([0, 2, 1])];
119
+ string var_315_pad_type_0 = const()[name = string("op_315_pad_type_0"), val = string("valid")];
120
+ tensor<int32, [2]> var_315_strides_0 = const()[name = string("op_315_strides_0"), val = tensor<int32, [2]>([1, 1])];
121
+ tensor<int32, [4]> var_315_pad_0 = const()[name = string("op_315_pad_0"), val = tensor<int32, [4]>([0, 0, 0, 0])];
122
+ tensor<int32, [2]> var_315_dilations_0 = const()[name = string("op_315_dilations_0"), val = tensor<int32, [2]>([1, 1])];
123
+ int32 var_315_groups_0 = const()[name = string("op_315_groups_0"), val = int32(1)];
124
+ tensor<fp16, [9496, 2560, 1, 1]> op_295_promoted_to_fp16_palettized = constexpr_lut_to_dense(indices = tensor<uint8, [9496, 2560, 1, 1]>(BLOBFILE(path = string("@model_path/weights/weight.bin"), offset = uint64(274094016))), lut = tensor<fp16, [1187, 1, 1, 1, 256, 1]>(BLOBFILE(path = string("@model_path/weights/weight.bin"), offset = uint64(298403840))))[name = string("op_295_promoted_to_fp16_palettized")];
125
+ tensor<fp16, [1, 9496, 1, 1]> var_315_cast_fp16 = conv(dilations = var_315_dilations_0, groups = var_315_groups_0, pad = var_315_pad_0, pad_type = var_315_pad_type_0, strides = var_315_strides_0, weight = op_295_promoted_to_fp16_palettized, x = input_cast_fp16)[name = string("op_315_cast_fp16")];
126
+ tensor<int32, [1]> var_317_axes_0 = const()[name = string("op_317_axes_0"), val = tensor<int32, [1]>([2])];
127
+ tensor<fp16, [1, 9496, 1]> var_317_cast_fp16 = squeeze(axes = var_317_axes_0, x = var_315_cast_fp16)[name = string("op_317_cast_fp16")];
128
+ tensor<int32, [3]> var_320_perm_0 = const()[name = string("op_320_perm_0"), val = tensor<int32, [3]>([0, 2, 1])];
129
+ string var_341_pad_type_0 = const()[name = string("op_341_pad_type_0"), val = string("valid")];
130
+ tensor<int32, [2]> var_341_strides_0 = const()[name = string("op_341_strides_0"), val = tensor<int32, [2]>([1, 1])];
131
+ tensor<int32, [4]> var_341_pad_0 = const()[name = string("op_341_pad_0"), val = tensor<int32, [4]>([0, 0, 0, 0])];
132
+ tensor<int32, [2]> var_341_dilations_0 = const()[name = string("op_341_dilations_0"), val = tensor<int32, [2]>([1, 1])];
133
+ int32 var_341_groups_0 = const()[name = string("op_341_groups_0"), val = int32(1)];
134
+ tensor<fp16, [9496, 2560, 1, 1]> op_321_promoted_to_fp16_palettized = constexpr_lut_to_dense(indices = tensor<uint8, [9496, 2560, 1, 1]>(BLOBFILE(path = string("@model_path/weights/weight.bin"), offset = uint64(299011648))), lut = tensor<fp16, [1187, 1, 1, 1, 256, 1]>(BLOBFILE(path = string("@model_path/weights/weight.bin"), offset = uint64(323321472))))[name = string("op_321_promoted_to_fp16_palettized")];
135
+ tensor<fp16, [1, 9496, 1, 1]> var_341_cast_fp16 = conv(dilations = var_341_dilations_0, groups = var_341_groups_0, pad = var_341_pad_0, pad_type = var_341_pad_type_0, strides = var_341_strides_0, weight = op_321_promoted_to_fp16_palettized, x = input_cast_fp16)[name = string("op_341_cast_fp16")];
136
+ tensor<int32, [1]> var_343_axes_0 = const()[name = string("op_343_axes_0"), val = tensor<int32, [1]>([2])];
137
+ tensor<fp16, [1, 9496, 1]> var_343_cast_fp16 = squeeze(axes = var_343_axes_0, x = var_341_cast_fp16)[name = string("op_343_cast_fp16")];
138
+ tensor<int32, [3]> var_346_perm_0 = const()[name = string("op_346_perm_0"), val = tensor<int32, [3]>([0, 2, 1])];
139
+ string var_367_pad_type_0 = const()[name = string("op_367_pad_type_0"), val = string("valid")];
140
+ tensor<int32, [2]> var_367_strides_0 = const()[name = string("op_367_strides_0"), val = tensor<int32, [2]>([1, 1])];
141
+ tensor<int32, [4]> var_367_pad_0 = const()[name = string("op_367_pad_0"), val = tensor<int32, [4]>([0, 0, 0, 0])];
142
+ tensor<int32, [2]> var_367_dilations_0 = const()[name = string("op_367_dilations_0"), val = tensor<int32, [2]>([1, 1])];
143
+ int32 var_367_groups_0 = const()[name = string("op_367_groups_0"), val = int32(1)];
144
+ tensor<fp16, [9496, 2560, 1, 1]> op_347_promoted_to_fp16_palettized = constexpr_lut_to_dense(indices = tensor<uint8, [9496, 2560, 1, 1]>(BLOBFILE(path = string("@model_path/weights/weight.bin"), offset = uint64(323929280))), lut = tensor<fp16, [1187, 1, 1, 1, 256, 1]>(BLOBFILE(path = string("@model_path/weights/weight.bin"), offset = uint64(348239104))))[name = string("op_347_promoted_to_fp16_palettized")];
145
+ tensor<fp16, [1, 9496, 1, 1]> var_367_cast_fp16 = conv(dilations = var_367_dilations_0, groups = var_367_groups_0, pad = var_367_pad_0, pad_type = var_367_pad_type_0, strides = var_367_strides_0, weight = op_347_promoted_to_fp16_palettized, x = input_cast_fp16)[name = string("op_367_cast_fp16")];
146
+ tensor<int32, [1]> var_369_axes_0 = const()[name = string("op_369_axes_0"), val = tensor<int32, [1]>([2])];
147
+ tensor<fp16, [1, 9496, 1]> var_369_cast_fp16 = squeeze(axes = var_369_axes_0, x = var_367_cast_fp16)[name = string("op_369_cast_fp16")];
148
+ tensor<int32, [3]> var_372_perm_0 = const()[name = string("op_372_perm_0"), val = tensor<int32, [3]>([0, 2, 1])];
149
+ string var_393_pad_type_0 = const()[name = string("op_393_pad_type_0"), val = string("valid")];
150
+ tensor<int32, [2]> var_393_strides_0 = const()[name = string("op_393_strides_0"), val = tensor<int32, [2]>([1, 1])];
151
+ tensor<int32, [4]> var_393_pad_0 = const()[name = string("op_393_pad_0"), val = tensor<int32, [4]>([0, 0, 0, 0])];
152
+ tensor<int32, [2]> var_393_dilations_0 = const()[name = string("op_393_dilations_0"), val = tensor<int32, [2]>([1, 1])];
153
+ int32 var_393_groups_0 = const()[name = string("op_393_groups_0"), val = int32(1)];
154
+ tensor<fp16, [9496, 2560, 1, 1]> op_373_promoted_to_fp16_palettized = constexpr_lut_to_dense(indices = tensor<uint8, [9496, 2560, 1, 1]>(BLOBFILE(path = string("@model_path/weights/weight.bin"), offset = uint64(348846912))), lut = tensor<fp16, [1187, 1, 1, 1, 256, 1]>(BLOBFILE(path = string("@model_path/weights/weight.bin"), offset = uint64(373156736))))[name = string("op_373_promoted_to_fp16_palettized")];
155
+ tensor<fp16, [1, 9496, 1, 1]> var_393_cast_fp16 = conv(dilations = var_393_dilations_0, groups = var_393_groups_0, pad = var_393_pad_0, pad_type = var_393_pad_type_0, strides = var_393_strides_0, weight = op_373_promoted_to_fp16_palettized, x = input_cast_fp16)[name = string("op_393_cast_fp16")];
156
+ tensor<int32, [1]> var_395_axes_0 = const()[name = string("op_395_axes_0"), val = tensor<int32, [1]>([2])];
157
+ tensor<fp16, [1, 9496, 1]> var_395_cast_fp16 = squeeze(axes = var_395_axes_0, x = var_393_cast_fp16)[name = string("op_395_cast_fp16")];
158
+ tensor<int32, [3]> var_398_perm_0 = const()[name = string("op_398_perm_0"), val = tensor<int32, [3]>([0, 2, 1])];
159
+ string var_419_pad_type_0 = const()[name = string("op_419_pad_type_0"), val = string("valid")];
160
+ tensor<int32, [2]> var_419_strides_0 = const()[name = string("op_419_strides_0"), val = tensor<int32, [2]>([1, 1])];
161
+ tensor<int32, [4]> var_419_pad_0 = const()[name = string("op_419_pad_0"), val = tensor<int32, [4]>([0, 0, 0, 0])];
162
+ tensor<int32, [2]> var_419_dilations_0 = const()[name = string("op_419_dilations_0"), val = tensor<int32, [2]>([1, 1])];
163
+ int32 var_419_groups_0 = const()[name = string("op_419_groups_0"), val = int32(1)];
164
+ tensor<fp16, [9496, 2560, 1, 1]> op_399_promoted_to_fp16_palettized = constexpr_lut_to_dense(indices = tensor<uint8, [9496, 2560, 1, 1]>(BLOBFILE(path = string("@model_path/weights/weight.bin"), offset = uint64(373764544))), lut = tensor<fp16, [1187, 1, 1, 1, 256, 1]>(BLOBFILE(path = string("@model_path/weights/weight.bin"), offset = uint64(398074368))))[name = string("op_399_promoted_to_fp16_palettized")];
165
+ tensor<fp16, [1, 9496, 1, 1]> var_419_cast_fp16 = conv(dilations = var_419_dilations_0, groups = var_419_groups_0, pad = var_419_pad_0, pad_type = var_419_pad_type_0, strides = var_419_strides_0, weight = op_399_promoted_to_fp16_palettized, x = input_cast_fp16)[name = string("op_419_cast_fp16")];
166
+ tensor<int32, [1]> var_421_axes_0 = const()[name = string("op_421_axes_0"), val = tensor<int32, [1]>([2])];
167
+ tensor<fp16, [1, 9496, 1]> var_421_cast_fp16 = squeeze(axes = var_421_axes_0, x = var_419_cast_fp16)[name = string("op_421_cast_fp16")];
168
+ tensor<int32, [3]> var_424_perm_0 = const()[name = string("op_424_perm_0"), val = tensor<int32, [3]>([0, 2, 1])];
169
+ tensor<fp16, [1, 1, 9496]> logits1 = transpose(perm = var_34_perm_0, x = var_31_cast_fp16)[name = string("transpose_0")];
170
+ tensor<fp16, [1, 1, 9496]> logits2 = transpose(perm = var_60_perm_0, x = var_57_cast_fp16)[name = string("transpose_1")];
171
+ tensor<fp16, [1, 1, 9496]> logits3 = transpose(perm = var_86_perm_0, x = var_83_cast_fp16)[name = string("transpose_2")];
172
+ tensor<fp16, [1, 1, 9496]> logits4 = transpose(perm = var_112_perm_0, x = var_109_cast_fp16)[name = string("transpose_3")];
173
+ tensor<fp16, [1, 1, 9496]> logits5 = transpose(perm = var_138_perm_0, x = var_135_cast_fp16)[name = string("transpose_4")];
174
+ tensor<fp16, [1, 1, 9496]> logits6 = transpose(perm = var_164_perm_0, x = var_161_cast_fp16)[name = string("transpose_5")];
175
+ tensor<fp16, [1, 1, 9496]> logits7 = transpose(perm = var_190_perm_0, x = var_187_cast_fp16)[name = string("transpose_6")];
176
+ tensor<fp16, [1, 1, 9496]> logits8 = transpose(perm = var_216_perm_0, x = var_213_cast_fp16)[name = string("transpose_7")];
177
+ tensor<fp16, [1, 1, 9496]> logits9 = transpose(perm = var_242_perm_0, x = var_239_cast_fp16)[name = string("transpose_8")];
178
+ tensor<fp16, [1, 1, 9496]> logits10 = transpose(perm = var_268_perm_0, x = var_265_cast_fp16)[name = string("transpose_9")];
179
+ tensor<fp16, [1, 1, 9496]> logits11 = transpose(perm = var_294_perm_0, x = var_291_cast_fp16)[name = string("transpose_10")];
180
+ tensor<fp16, [1, 1, 9496]> logits12 = transpose(perm = var_320_perm_0, x = var_317_cast_fp16)[name = string("transpose_11")];
181
+ tensor<fp16, [1, 1, 9496]> logits13 = transpose(perm = var_346_perm_0, x = var_343_cast_fp16)[name = string("transpose_12")];
182
+ tensor<fp16, [1, 1, 9496]> logits14 = transpose(perm = var_372_perm_0, x = var_369_cast_fp16)[name = string("transpose_13")];
183
+ tensor<fp16, [1, 1, 9496]> logits15 = transpose(perm = var_398_perm_0, x = var_395_cast_fp16)[name = string("transpose_14")];
184
+ tensor<fp16, [1, 1, 9496]> logits16 = transpose(perm = var_424_perm_0, x = var_421_cast_fp16)[name = string("transpose_15")];
185
+ } -> (logits1, logits2, logits3, logits4, logits5, logits6, logits7, logits8, logits9, logits10, logits11, logits12, logits13, logits14, logits15, logits16);
186
+ }
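
As a rough consistency check (not part of the uploaded files): each of the 16 `constexpr_lut_to_dense` constants above stores a uint8 index tensor of shape [9496, 2560, 1, 1] plus an fp16 look-up table of 1187 × 256 entries, and the blob offsets suggest roughly 64 bytes of metadata per blob plus a 64-byte file header. Under those assumptions the numbers add up to the `weight.bin` size recorded in the LFS pointer that follows:

```python
# Back-of-the-envelope size check for weights/weight.bin, derived from the MIL above.
entry_overhead = 64                 # assumed per-blob metadata (inferred from offset gaps)
indices_bytes = 9496 * 2560         # uint8 LUT indices [9496, 2560, 1, 1]
lut_bytes = 1187 * 256 * 2          # fp16 LUT [1187, 1, 1, 1, 256, 1]
per_shard = (entry_overhead + indices_bytes) + (entry_overhead + lut_bytes)
total = 64 + 16 * per_shard         # assumed 64-byte file header
print(per_shard, total)             # 24917632 398682176 -> matches the LFS pointer below
```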
qwen_lm_head_lut8.mlmodelc/weights/weight.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:84dced6345d9f025b22105d74af6176fafae44ba69644668d07c364dc81a22c8
3
+ size 398682176
tokenizer.json ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:aeb13307a71acd8fe81861d94ad54ab689df773318809eed3cbe794b4492dae4
3
+ size 11422654
tokenizer_config.json ADDED
@@ -0,0 +1,240 @@
1
+ {
2
+ "add_bos_token": false,
3
+ "add_prefix_space": false,
4
+ "added_tokens_decoder": {
5
+ "151643": {
6
+ "content": "<|endoftext|>",
7
+ "lstrip": false,
8
+ "normalized": false,
9
+ "rstrip": false,
10
+ "single_word": false,
11
+ "special": true
12
+ },
13
+ "151644": {
14
+ "content": "<|im_start|>",
15
+ "lstrip": false,
16
+ "normalized": false,
17
+ "rstrip": false,
18
+ "single_word": false,
19
+ "special": true
20
+ },
21
+ "151645": {
22
+ "content": "<|im_end|>",
23
+ "lstrip": false,
24
+ "normalized": false,
25
+ "rstrip": false,
26
+ "single_word": false,
27
+ "special": true
28
+ },
29
+ "151646": {
30
+ "content": "<|object_ref_start|>",
31
+ "lstrip": false,
32
+ "normalized": false,
33
+ "rstrip": false,
34
+ "single_word": false,
35
+ "special": true
36
+ },
37
+ "151647": {
38
+ "content": "<|object_ref_end|>",
39
+ "lstrip": false,
40
+ "normalized": false,
41
+ "rstrip": false,
42
+ "single_word": false,
43
+ "special": true
44
+ },
45
+ "151648": {
46
+ "content": "<|box_start|>",
47
+ "lstrip": false,
48
+ "normalized": false,
49
+ "rstrip": false,
50
+ "single_word": false,
51
+ "special": true
52
+ },
53
+ "151649": {
54
+ "content": "<|box_end|>",
55
+ "lstrip": false,
56
+ "normalized": false,
57
+ "rstrip": false,
58
+ "single_word": false,
59
+ "special": true
60
+ },
61
+ "151650": {
62
+ "content": "<|quad_start|>",
63
+ "lstrip": false,
64
+ "normalized": false,
65
+ "rstrip": false,
66
+ "single_word": false,
67
+ "special": true
68
+ },
69
+ "151651": {
70
+ "content": "<|quad_end|>",
71
+ "lstrip": false,
72
+ "normalized": false,
73
+ "rstrip": false,
74
+ "single_word": false,
75
+ "special": true
76
+ },
77
+ "151652": {
78
+ "content": "<|vision_start|>",
79
+ "lstrip": false,
80
+ "normalized": false,
81
+ "rstrip": false,
82
+ "single_word": false,
83
+ "special": true
84
+ },
85
+ "151653": {
86
+ "content": "<|vision_end|>",
87
+ "lstrip": false,
88
+ "normalized": false,
89
+ "rstrip": false,
90
+ "single_word": false,
91
+ "special": true
92
+ },
93
+ "151654": {
94
+ "content": "<|vision_pad|>",
95
+ "lstrip": false,
96
+ "normalized": false,
97
+ "rstrip": false,
98
+ "single_word": false,
99
+ "special": true
100
+ },
101
+ "151655": {
102
+ "content": "<|image_pad|>",
103
+ "lstrip": false,
104
+ "normalized": false,
105
+ "rstrip": false,
106
+ "single_word": false,
107
+ "special": true
108
+ },
109
+ "151656": {
110
+ "content": "<|video_pad|>",
111
+ "lstrip": false,
112
+ "normalized": false,
113
+ "rstrip": false,
114
+ "single_word": false,
115
+ "special": true
116
+ },
117
+ "151657": {
118
+ "content": "<tool_call>",
119
+ "lstrip": false,
120
+ "normalized": false,
121
+ "rstrip": false,
122
+ "single_word": false,
123
+ "special": false
124
+ },
125
+ "151658": {
126
+ "content": "</tool_call>",
127
+ "lstrip": false,
128
+ "normalized": false,
129
+ "rstrip": false,
130
+ "single_word": false,
131
+ "special": false
132
+ },
133
+ "151659": {
134
+ "content": "<|fim_prefix|>",
135
+ "lstrip": false,
136
+ "normalized": false,
137
+ "rstrip": false,
138
+ "single_word": false,
139
+ "special": false
140
+ },
141
+ "151660": {
142
+ "content": "<|fim_middle|>",
143
+ "lstrip": false,
144
+ "normalized": false,
145
+ "rstrip": false,
146
+ "single_word": false,
147
+ "special": false
148
+ },
149
+ "151661": {
150
+ "content": "<|fim_suffix|>",
151
+ "lstrip": false,
152
+ "normalized": false,
153
+ "rstrip": false,
154
+ "single_word": false,
155
+ "special": false
156
+ },
157
+ "151662": {
158
+ "content": "<|fim_pad|>",
159
+ "lstrip": false,
160
+ "normalized": false,
161
+ "rstrip": false,
162
+ "single_word": false,
163
+ "special": false
164
+ },
165
+ "151663": {
166
+ "content": "<|repo_name|>",
167
+ "lstrip": false,
168
+ "normalized": false,
169
+ "rstrip": false,
170
+ "single_word": false,
171
+ "special": false
172
+ },
173
+ "151664": {
174
+ "content": "<|file_sep|>",
175
+ "lstrip": false,
176
+ "normalized": false,
177
+ "rstrip": false,
178
+ "single_word": false,
179
+ "special": false
180
+ },
181
+ "151665": {
182
+ "content": "<tool_response>",
183
+ "lstrip": false,
184
+ "normalized": false,
185
+ "rstrip": false,
186
+ "single_word": false,
187
+ "special": false
188
+ },
189
+ "151666": {
190
+ "content": "</tool_response>",
191
+ "lstrip": false,
192
+ "normalized": false,
193
+ "rstrip": false,
194
+ "single_word": false,
195
+ "special": false
196
+ },
197
+ "151667": {
198
+ "content": "<think>",
199
+ "lstrip": false,
200
+ "normalized": false,
201
+ "rstrip": false,
202
+ "single_word": false,
203
+ "special": false
204
+ },
205
+ "151668": {
206
+ "content": "</think>",
207
+ "lstrip": false,
208
+ "normalized": false,
209
+ "rstrip": false,
210
+ "single_word": false,
211
+ "special": false
212
+ }
213
+ },
214
+ "additional_special_tokens": [
215
+ "<|im_start|>",
216
+ "<|im_end|>",
217
+ "<|object_ref_start|>",
218
+ "<|object_ref_end|>",
219
+ "<|box_start|>",
220
+ "<|box_end|>",
221
+ "<|quad_start|>",
222
+ "<|quad_end|>",
223
+ "<|vision_start|>",
224
+ "<|vision_end|>",
225
+ "<|vision_pad|>",
226
+ "<|image_pad|>",
227
+ "<|video_pad|>"
228
+ ],
229
+ "bos_token": null,
230
+ "chat_template": "{%- if tools %}\n {{- '<|im_start|>system\\n' }}\n {%- if messages[0]['role'] == 'system' %}\n {{- messages[0]['content'] }}\n {%- else %}\n {{- '' }}\n {%- endif %}\n {{- \"\\n\\n# Tools\\n\\nYou may call one or more functions to assist with the user query.\\n\\nYou are provided with function signatures within <tools></tools> XML tags:\\n<tools>\" }}\n {%- for tool in tools %}\n {{- \"\\n\" }}\n {{- tool | tojson }}\n {%- endfor %}\n {{- \"\\n</tools>\\n\\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\\n<tool_call>\\n{\\\"name\\\": <function-name>, \\\"arguments\\\": <args-json-object>}\\n</tool_call><|im_end|>\\n\" }}\n{%- else %}\n {%- if messages[0]['role'] == 'system' %}\n {{- '<|im_start|>system\\n' + messages[0]['content'] + '<|im_end|>\\n' }}\n {%- endif %}\n{%- endif %}\n{%- for message in messages %}\n {%- if (message.role == \"user\") or (message.role == \"system\" and not loop.first) %}\n {{- '<|im_start|>' + message.role + '\\n' + message.content + '<|im_end|>' + '\\n' }}\n {%- elif message.role == \"assistant\" and not message.tool_calls %}\n {%- set content = message.content %}\n {%- if not loop.last %}\n {%- set content = message.content.split('</think>')[-1].lstrip('\\n') %}\n {%- endif %}\n {{- '<|im_start|>' + message.role + '\\n' + content + '<|im_end|>' + '\\n' }}\n {%- elif message.role == \"assistant\" %}\n {%- set content = message.content %}\n {%- if not loop.last %}\n {%- set content = message.content.split('</think>')[-1].lstrip('\\n') %}\n {%- endif %}\n {{- '<|im_start|>' + message.role }}\n {%- if message.content %}\n {{- '\\n' + content }}\n {%- endif %}\n {%- for tool_call in message.tool_calls %}\n {%- if tool_call.function is defined %}\n {%- set tool_call = tool_call.function %}\n {%- endif %}\n {{- '\\n<tool_call>\\n{\"name\": \"' }}\n {{- tool_call.name }}\n {{- '\", \"arguments\": ' }}\n {{- tool_call.arguments | tojson }}\n {{- '}\\n</tool_call>' }}\n {%- endfor %}\n {{- '<|im_end|>\\n' }}\n {%- elif message.role == \"tool\" %}\n {%- if (loop.index0 == 0) or (messages[loop.index0 - 1].role != \"tool\") %}\n {{- '<|im_start|>user' }}\n {%- endif %}\n {{- '\\n<tool_response>\\n' }}\n {{- message.content }}\n {{- '\\n</tool_response>' }}\n {%- if loop.last or (messages[loop.index0 + 1].role != \"tool\") %}\n {{- '<|im_end|>\\n' }}\n {%- endif %}\n {%- endif %}\n{%- endfor %}\n{%- if add_generation_prompt %}\n {{- '<|im_start|>assistant\\n' }}\n{%- endif %}\n",
231
+ "clean_up_tokenization_spaces": false,
232
+ "eos_token": "<|im_end|>",
233
+ "errors": "replace",
234
+ "extra_special_tokens": {},
235
+ "model_max_length": 131072,
236
+ "pad_token": "<|endoftext|>",
237
+ "split_special_tokens": false,
238
+ "tokenizer_class": "Qwen2Tokenizer",
239
+ "unk_token": null
240
+ }
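
The `tokenizer_config.json` above defines a Qwen2 tokenizer with an embedded ChatML-style chat template, `<|im_end|>` as the end-of-sequence token and `<|endoftext|>` as padding. Below is a minimal sketch (not part of the uploaded files) of using it with Hugging Face Transformers, assuming either this repository id or a local clone:

```python
# Hypothetical sketch: build a prompt with the chat template defined above.
from transformers import AutoTokenizer

# Point this at a local clone of the repository if working offline.
tokenizer = AutoTokenizer.from_pretrained("anemll/anemll-Qwen3-4B-ctx1024_0.3.0")

messages = [{"role": "user", "content": "Hello!"}]
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,  # appends the "<|im_start|>assistant\n" turn opener
)
print(prompt)

ids = tokenizer(prompt, return_tensors="np").input_ids
print(ids.shape, tokenizer.eos_token)  # eos_token is "<|im_end|>"
```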
vocab.json ADDED
The diff for this file is too large to render. See raw diff