Part of the **neurips-2023-llm-efficiency** collection: fine-tuned models, datasets, and artifacts used for the LLM Efficiency Challenge (https://llm-efficiency-challenge.github.io/challenge).
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unspecified dataset. It reaches a final validation loss of 0.5703 on the evaluation set (see the training results table below).

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
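The card ships no usage snippet; the sketch below shows one way to load and query the checkpoint with `transformers`. The repo id is a hypothetical placeholder, since the actual model id is not given here.

```python
# Minimal inference sketch. "your-username/llama2-7b-finetuned" is a
# hypothetical placeholder; substitute the actual checkpoint id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-username/llama2-7b-finetuned"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision keeps a 7B model in ~14 GB
    device_map="auto",          # requires the `accelerate` package
)

prompt = "Explain parameter-efficient fine-tuning in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```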
## Training procedure

### Training hyperparameters

More information needed

### Training results
| Training Loss | Epoch | Step | Validation Loss | 
|---|---|---|---|
| 0.8756 | 0.06 | 20 | 0.7111 | 
| 0.9058 | 0.11 | 40 | 0.6764 | 
| 0.7526 | 0.17 | 60 | 0.6669 | 
| 0.6926 | 0.23 | 80 | 0.6363 | 
| 0.6731 | 0.28 | 100 | 0.6187 | 
| 0.6470 | 0.34 | 120 | 0.6162 |
| 0.6219 | 0.40 | 140 | 0.6041 |
| 0.5781 | 0.45 | 160 | 0.5937 | 
| 0.6346 | 0.51 | 180 | 0.6006 | 
| 0.7663 | 0.57 | 200 | 0.5926 | 
| 0.5864 | 0.62 | 220 | 0.5866 | 
| 0.5943 | 0.68 | 240 | 0.5756 | 
| 0.5029 | 0.74 | 260 | 0.5733 | 
| 0.5482 | 0.79 | 280 | 0.5712 | 
| 0.5413 | 0.85 | 300 | 0.5820 | 
| 0.657 | 0.91 | 320 | 0.5696 | 
| 0.5060 | 0.96 | 340 | 0.5839 |
| 0.4804 | 1.02 | 360 | 0.5803 | 
| 0.5095 | 1.08 | 380 | 0.5974 | 
| 0.4404 | 1.13 | 400 | 0.5746 | 
| 0.3869 | 1.19 | 420 | 0.5740 | 
| 0.4129 | 1.25 | 440 | 0.5777 | 
| 0.4209 | 1.30 | 460 | 0.5825 |
| 0.4014 | 1.36 | 480 | 0.5742 | 
| 0.3333 | 1.42 | 500 | 0.5851 | 
| 0.5041 | 1.47 | 520 | 0.5798 | 
| 0.5528 | 1.53 | 540 | 0.5631 | 
| 0.4372 | 1.59 | 560 | 0.5747 | 
| 0.3901 | 1.64 | 580 | 0.5625 | 
| 0.5271 | 1.70 | 600 | 0.5746 |
| 0.4283 | 1.76 | 620 | 0.5662 | 
| 0.4336 | 1.81 | 640 | 0.5652 | 
| 0.3534 | 1.87 | 660 | 0.5697 | 
| 0.4728 | 1.93 | 680 | 0.5713 | 
| 0.5159 | 1.98 | 700 | 0.5703 | 
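The step/epoch ratio in the table (step 360 at epoch 1.02, step 700 at epoch 1.98) implies roughly 350 optimizer steps per epoch, i.e. about two full epochs of training with evaluation every 20 steps. Since the hyperparameter list did not survive in this card, the sketch below only illustrates a typical `Trainer` setup that would produce a log of this shape; every value in it is an assumption, not the configuration actually used.

```python
# Illustrative fine-tuning sketch: every hyperparameter is an assumption,
# NOT the (unrecorded) configuration behind the table above.
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

base = "meta-llama/Llama-2-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token  # Llama ships without a pad token
model = AutoModelForCausalLM.from_pretrained(base)

# Tiny stand-in corpus so the sketch is self-contained; the real training
# data is not named in the card.
ds = Dataset.from_dict({"text": ["An example training sentence."] * 16})
ds = ds.map(lambda b: tokenizer(b["text"], truncation=True, max_length=64),
            batched=True, remove_columns=["text"])
split = ds.train_test_split(test_size=0.25)

args = TrainingArguments(
    output_dir="llama2-7b-finetuned",   # hypothetical name
    num_train_epochs=2,                 # the table ends near epoch 2.0
    evaluation_strategy="steps",
    eval_steps=20,                      # matches the 20-step cadence above
    logging_steps=20,
    per_device_train_batch_size=4,      # assumed
    learning_rate=2e-5,                 # assumed
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=split["train"],
    eval_dataset=split["test"],
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```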