Runtime error

Exit code: 1.

Model download completed: Mistral-7B-Instruct-v0.3-abliterated.IQ4(…), 3.95G in ~12s (repeated progress-bar output trimmed).

Start the model init process
gguf_init_from_file: tensor 'token_embd.weight' number of elements (134217728) is not a multiple of block size (-1541571168)
gguf_init_from_file: tensor 'token_embd.weight' number of elements (134217728) is not a multiple of block size (2031217408)
gguf_init_from_file: tensor 'token_embd.weight' number of elements (134217728) is not a multiple of block size (2029019392)

Traceback (most recent call last):
  File "/app/app.py", line 25, in <module>
    model = GPT4All(model_name, model_path, allow_download = False, device="cpu")
  File "/usr/local/lib/python3.10/site-packages/gpt4all/gpt4all.py", line 101, in __init__
    self.model.load_model(self.config["path"])
  File "/usr/local/lib/python3.10/site-packages/gpt4all/pyllmodel.py", line 260, in load_model
    raise ValueError(f"Unable to instantiate model: code={err.code}, {err.message.decode()}")
ValueError: Unable to instantiate model: code=95, Model format not supported (no matching implementation found)
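The filename suggests an IQ4 "i-quant" GGUF, and the error (code=95, "no matching implementation found") together with the nonsensical block sizes reported by gguf_init_from_file is consistent with the installed gpt4all backend not implementing that quantization type; a build quantized with a more widely supported scheme (e.g. Q4_0 or Q4_K_M) may load where this one does not. Before swapping files, it can be worth confirming the download is at least a structurally valid GGUF file. Below is a minimal sketch of a header check based on the GGUF layout (4-byte "GGUF" magic, uint32 version, uint64 tensor count, uint64 metadata key count, all little-endian); the file path is a placeholder, not the path from the log.

```python
import struct

def read_gguf_header(path):
    """Read and sanity-check the fixed GGUF header fields.

    A file that fails here was corrupted or truncated in transit;
    a file that passes but still triggers "Model format not supported"
    most likely uses a quantization the backend does not implement.
    """
    with open(path, "rb") as f:
        magic = f.read(4)
        if magic != b"GGUF":
            raise ValueError(f"not a GGUF file (magic={magic!r})")
        # "<" = little-endian, no padding: I = uint32, Q = uint64
        (version,) = struct.unpack("<I", f.read(4))
        (tensor_count,) = struct.unpack("<Q", f.read(8))
        (metadata_kv_count,) = struct.unpack("<Q", f.read(8))
    return {
        "version": version,
        "tensors": tensor_count,
        "metadata_keys": metadata_kv_count,
    }

# Hypothetical local path for illustration:
# info = read_gguf_header("model.gguf")
# print(info)
```

If the header parses cleanly (current GGUF files report version 2 or 3), the remaining suspect is the IQ4 quantization itself, and the practical fix is pointing the app at a Q4_0/Q4_K_M build of the same model.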
