build: 5318 (15e03282) with cc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-5) for x86_64-amazon-linux
llama_model_load_from_file_impl: using device CUDA0 (Tesla T4) - 14810 MiB free
llama_model_load_from_file_impl: using device CUDA1 (Tesla T4) - 14810 MiB free
llama_model_load_from_file_impl: using device CUDA2 (Tesla T4) - 14810 MiB free
llama_model_load_from_file_impl: using device CUDA3 (Tesla T4) - 14810 MiB free
llama_model_loader: loaded meta data with 85 key-value pairs and 363 tensors from ./Dolphin3.0-R1-Mistral-24B-F16.gguf (version GGUF V3 (latest))
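For context, a rough capacity check explains why the load is spread across all four T4s: at F16 (2 bytes per parameter) a ~24B-parameter model needs around 45 GiB for the weights alone, more than any single 16 GiB T4 offers. The parameter count below is inferred from the model name, not from the log, so treat this as an estimate, not an exact memory accounting.

```python
# Back-of-the-envelope check (assumptions: ~24e9 parameters from the model
# name, 2 bytes/parameter for F16 weights; KV cache and buffers excluded).
N_PARAMS = 24e9
BYTES_PER_PARAM = 2                  # F16
FREE_MIB_PER_GPU = 14810             # reported free VRAM per device in the log
N_GPUS = 4

weights_gib = N_PARAMS * BYTES_PER_PARAM / 2**30
free_gib = N_GPUS * FREE_MIB_PER_GPU / 1024

print(f"weights ~{weights_gib:.1f} GiB, free VRAM ~{free_gib:.1f} GiB across {N_GPUS} GPUs")
# weights alone (~44.7 GiB) exceed one 16 GiB T4, hence the 4-way split
```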
Final result: 52.6667 +/- 1.8244
Random chance: 25.0083 +/- 1.5824
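The two figures read as percent accuracy with a standard error from a multiple-choice run. A minimal sketch of how such a pair can be reproduced is below; the question and correct-answer counts (750 and 395) do not appear in the log and are inferred here only to illustrate the arithmetic, assuming a binomial standard error with an (n - 1) denominator.

```python
# Sketch: reproduce a "score +/- error" pair for a multiple-choice eval.
# The counts are NOT in the log; they are inferred purely for illustration.
import math

def score_with_error(n_correct: int, n_total: int) -> tuple[float, float]:
    p = n_correct / n_total                          # fraction answered correctly
    se = math.sqrt(p * (1.0 - p) / (n_total - 1))    # standard error of the mean
    return 100.0 * p, 100.0 * se                     # report as percentages

acc, err = score_with_error(395, 750)                # hypothetical counts
print(f"Final result: {acc:.4f} +/- {err:.4f}")      # -> 52.6667 +/- 1.8244
```

Since random chance sits near 25%, the questions average about four options each, so the 52.67% score is roughly twice chance.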
llama_perf_context_print: load time = 355880.80 ms
llama_perf_context_print: prompt eval time = 82923.86 ms / 36666 tokens ( 2.26 ms per token, 442.16 tokens per second)
llama_perf_context_print: eval time = 0.00 ms / 1 runs ( 0.00 ms per token, inf tokens per second)
llama_perf_context_print: total time = 85717.91 ms / 36667 tokens
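The figures in parentheses follow directly from the raw totals on each line; the sketch below reproduces the prompt-eval numbers (the helper function is ours, not part of llama.cpp). The eval line reports inf tokens per second because essentially no generation time was recorded for its single run, consistent with a benchmark that only scores prompt tokens.

```python
# Sketch: derive the per-token latency and throughput printed by
# llama_perf_context_print from the raw time and token count in the log.
def derived(ms_total: float, n_tokens: int) -> tuple[float, float]:
    ms_per_token = ms_total / n_tokens               # average latency per token
    tokens_per_sec = 1000.0 * n_tokens / ms_total    # throughput
    return ms_per_token, tokens_per_sec

ms_per_tok, tok_per_s = derived(82923.86, 36666)     # prompt eval line above
print(f"{ms_per_tok:.2f} ms per token, {tok_per_s:.2f} tokens per second")
# -> 2.26 ms per token, 442.16 tokens per second
```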