HellaSwag Benchmark results

#24
by fighter3005 - opened

I am trying to reproduce your HellaSwag results for the Gemma3 270M PT variant.
I looked at the sglang implementation and the original HellaSwag implementation, but I only get 35.1 and 36.0 respectively (10-shot).
Did you do anything special for the evaluation? For example, a special prompt or token? Or did you fine-tune the model on the training set before evaluating on the validation set?
How did you insert the 10 examples into the prompt?
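For reference, the common few-shot setup (e.g., in lm-evaluation-harness) concatenates k solved examples, each as "context + correct ending", before the query context, and then picks the candidate ending with the highest (often length-normalized) log-likelihood. The sketch below illustrates that convention with made-up example data and a placeholder scoring function; it is not a claim about how Google ran the eval.

```python
# Hypothetical sketch of a 10-shot HellaSwag prompt and scoring scheme.
# All example strings and the score_fn below are illustrative placeholders.

def build_fewshot_prompt(shots, query_ctx):
    """Concatenate solved (context, ending) shots, then the unfinished query."""
    parts = [f"{ctx} {ending}" for ctx, ending in shots]
    parts.append(query_ctx)
    return "\n\n".join(parts)

def pick_ending(prompt, endings, score_fn):
    """Return the index of the ending score_fn rates highest given the prompt.
    In practice score_fn(prompt, ending) would be log P(ending | prompt),
    optionally divided by the ending length (the 'acc_norm' variant)."""
    return max(range(len(endings)), key=lambda i: score_fn(prompt, endings[i]))

# Toy usage with one shot instead of ten:
shots = [("A man sits down at a piano.", "He begins to play a song.")]
prompt = build_fewshot_prompt(shots, "A woman picks up a guitar.")
# Dummy scorer that just prefers longer endings, to show the plumbing:
best = pick_ending(prompt, ["She plays.", "She strums a few chords."],
                   lambda p, e: len(e))
```

Whether the reported number uses raw accuracy or length-normalized accuracy (acc_norm) can account for a gap of this size, so that is worth checking first.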
