Dolphin3.0-R1-Mistral-24B-GGUF / scores / Dolphin3.0-R1-Mistral-24B-IQ3_S.hsw
eaddario
Generate Perplexity, KLD, ARC, HellaSwag, MMLU, Truthful QA and WinoGrande scores
46eb705 verified
build: 5318 (15e03282) with cc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-5) for x86_64-amazon-linux
llama_model_load_from_file_impl: using device CUDA0 (Tesla T4) - 14810 MiB free
llama_model_load_from_file_impl: using device CUDA1 (Tesla T4) - 14810 MiB free
llama_model_load_from_file_impl: using device CUDA2 (Tesla T4) - 14810 MiB free
llama_model_load_from_file_impl: using device CUDA3 (Tesla T4) - 14810 MiB free
llama_model_loader: loaded meta data with 89 key-value pairs and 363 tensors from ./Dolphin3.0-R1-Mistral-24B-IQ3_S.gguf (version GGUF V3 (latest))
750 62.66666667% [59.1487%, 66.0555%]
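The line above is the final HellaSwag result: 750 tasks scored, 62.67% accuracy, with a bracketed confidence interval. The bracket appears consistent with a 95% Wilson score interval (z ≈ 1.96); a minimal sketch reproducing it, where only the task count and accuracy come from the log and the interval method is an assumption:

```python
import math

def wilson_interval(correct: int, total: int, z: float = 1.96):
    """Wilson score interval for a binomial proportion (assumed method)."""
    p = correct / total
    denom = 1.0 + z * z / total
    center = (p + z * z / (2 * total)) / denom
    half = z * math.sqrt(p * (1 - p) / total + z * z / (4 * total * total)) / denom
    return center - half, center + half

# 62.66666667% of 750 tasks corresponds to 470 correct answers
lo, hi = wilson_interval(470, 750)
print(f"[{lo:.4%}, {hi:.4%}]")  # close to the logged [59.1487%, 66.0555%]
```

A plain normal approximation (p ± 1.96·SE) gives a slightly wider, symmetric bracket that does not match the logged values, which is why the Wilson form is sketched here.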
llama_perf_context_print: load time = 3192.25 ms
llama_perf_context_print: prompt eval time = 297311.74 ms / 126722 tokens ( 2.35 ms per token, 426.23 tokens per second)
llama_perf_context_print: eval time = 0.00 ms / 1 runs ( 0.00 ms per token, inf tokens per second)
llama_perf_context_print: total time = 311560.28 ms / 126723 tokens
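The per-token and throughput figures in the perf summary follow directly from the raw milliseconds and token counts printed above; a quick sketch checking the prompt-eval line (values copied verbatim from the log):

```python
# From the "prompt eval time" line of the log
prompt_ms = 297311.74
prompt_tokens = 126722

ms_per_token = prompt_ms / prompt_tokens          # logged as 2.35 ms per token
tokens_per_sec = prompt_tokens / (prompt_ms / 1000.0)  # logged as 426.23 tokens/s

print(f"{ms_per_token:.2f} ms per token")
print(f"{tokens_per_sec:.2f} tokens per second")
```

Note the "eval time" line reports 0.00 ms over 1 run (hence "inf tokens per second"): perplexity/score runs only batch-evaluate prompts, so no autoregressive generation time is accumulated.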