build: 5318 (15e03282) with cc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-5) for x86_64-amazon-linux
llama_model_load_from_file_impl: using device CUDA0 (Tesla T4) - 14810 MiB free
llama_model_load_from_file_impl: using device CUDA1 (Tesla T4) - 14810 MiB free
llama_model_load_from_file_impl: using device CUDA2 (Tesla T4) - 14810 MiB free
llama_model_load_from_file_impl: using device CUDA3 (Tesla T4) - 14810 MiB free
llama_model_loader: loaded meta data with 89 key-value pairs and 363 tensors from ./Dolphin3.0-R1-Mistral-24B-Q3_K_M.gguf (version GGUF V3 (latest))

Final Winogrande score(750 tasks): 63.3333 +/- 1.7608

llama_perf_context_print:        load time =    3172.65 ms
llama_perf_context_print: prompt eval time =   75138.01 ms / 22541 tokens (    3.33 ms per token,   299.99 tokens per second)
llama_perf_context_print:        eval time =       0.00 ms /     1 runs   (    0.00 ms per token,      inf tokens per second)
llama_perf_context_print:       total time =   78018.19 ms / 22542 tokens
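The log is consistent with llama.cpp's llama-perplexity tool run in Winogrande mode (the "750 tasks" line matches --winogrande-tasks 750, and all four T4s are picked up automatically). The exact command is not shown in the output, so the invocation below is a minimal sketch under assumptions: the dataset filename (winogrande-debiased-eval.csv) and the -ngl value are placeholders, not taken from the log.

# Assumed invocation (not in the log above); adjust paths, dataset file, and layer offload to taste
./llama-perplexity \
    -m ./Dolphin3.0-R1-Mistral-24B-Q3_K_M.gguf \
    -f winogrande-debiased-eval.csv \
    --winogrande \
    --winogrande-tasks 750 \
    -ngl 99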