Runtime error
Exit code: 1. Reason:
model-00004-of-00005.safetensors: 100%|██████████| 3.19G/3.19G [00:11<00:00, 287MB/s]
model-00005-of-00005.safetensors: 100%|██████████| 1.24G/1.24G [00:04<00:00, 269MB/s]
Traceback (most recent call last):
  File "/home/user/app/app.py", line 61, in <module>
    model = AutoModelForCausalLM.from_pretrained(
  File "/usr/local/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 600, in from_pretrained
    return model_class.from_pretrained(
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 311, in _wrapper
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 4758, in from_pretrained
    config = cls._autoset_attn_implementation(
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 2315, in _autoset_attn_implementation
    cls._check_and_enable_flash_attn_2(
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 2466, in _check_and_enable_flash_attn_2
    raise ValueError(
ValueError: FlashAttention2 has been toggled on, but it cannot be used due to the following error: Flash Attention 2 is not available on CPU. Please make sure torch can access a CUDA device.
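The traceback shows the weights downloaded fine; the crash happens in app.py at line 61 because `from_pretrained` was called with `attn_implementation="flash_attention_2"`, and Flash Attention 2 requires a CUDA GPU, while this container only has a CPU. A minimal sketch of a guard that falls back to a CPU-compatible attention implementation, assuming a hypothetical model id and kwargs since the log does not show what app.py actually passes:

```python
import torch
from transformers import AutoModelForCausalLM

# Hypothetical checkpoint -- the log does not reveal which model app.py loads.
MODEL_ID = "your-org/your-model"

# Flash Attention 2 only works on a CUDA device, so select the attention
# implementation based on whether torch can actually see a GPU.
use_cuda = torch.cuda.is_available()

model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    attn_implementation="flash_attention_2" if use_cuda else "sdpa",
    # fp16 is typical on GPU; CPU inference is generally run in fp32.
    torch_dtype=torch.float16 if use_cuda else torch.float32,
)
```

Alternatively, upgrade the Space to GPU hardware, or simply drop the `attn_implementation` argument and let transformers pick a default that works on the available device.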