# Mistral-7B-Instruct-v0.2-binary_base02
This model is a fine-tuned version of mistralai/Mistral-7B-Instruct-v0.2 on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.2468
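
Since PEFT appears in the framework versions below, this repository presumably hosts a PEFT (e.g. LoRA) adapter on top of the base model rather than full weights. A minimal loading sketch under that assumption (the prompt is a placeholder; the intended task is not documented):

```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Loads the adapter and resolves its base model
# (mistralai/Mistral-7B-Instruct-v0.2) from the adapter config.
model = AutoPeftModelForCausalLM.from_pretrained(
    "tdross/Mistral-7B-Instruct-v0.2-binary_base02",
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")

# The base model is instruction-tuned, so format inputs with its chat template.
messages = [{"role": "user", "content": "Hello, who are you?"}]
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)
output = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```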
 
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a sketch of the equivalent `TrainingArguments` follows the list):
- learning_rate: 2.5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- training_steps: 1000
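
The training script is not published; the following is a hypothetical reconstruction of these settings as `transformers.TrainingArguments`, not the author's actual configuration:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="Mistral-7B-Instruct-v0.2-binary_base02",
    learning_rate=2.5e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=4,  # effective train batch size: 2 * 4 = 8
    optim="adamw_torch",            # Adam with betas=(0.9, 0.999), eps=1e-8
    lr_scheduler_type="linear",
    warmup_steps=5,
    max_steps=1000,
)
```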
 
### Training results
| Training Loss | Epoch | Step | Validation Loss | 
|---|---|---|---|
| 1.3861 | 0.32 | 50 | 0.5395 | 
| 0.4528 | 0.63 | 100 | 0.3798 | 
| 0.3429 | 0.95 | 150 | 0.3177 | 
| 0.2952 | 1.27 | 200 | 0.3050 | 
| 0.2816 | 1.59 | 250 | 0.2726 | 
| 0.2661 | 1.9 | 300 | 0.2615 | 
| 0.2498 | 2.22 | 350 | 0.2545 | 
| 0.2439 | 2.54 | 400 | 0.2565 | 
| 0.2471 | 2.86 | 450 | 0.2504 | 
| 0.2417 | 3.17 | 500 | 0.2522 | 
| 0.2372 | 3.49 | 550 | 0.2481 | 
| 0.2312 | 3.81 | 600 | 0.2429 | 
| 0.2266 | 4.13 | 650 | 0.2481 | 
| 0.2179 | 4.44 | 700 | 0.2459 | 
| 0.2221 | 4.76 | 750 | 0.2409 | 
| 0.2165 | 5.08 | 800 | 0.2470 | 
| 0.2064 | 5.4 | 850 | 0.2464 | 
| 0.2062 | 5.71 | 900 | 0.2448 | 
| 0.2087 | 6.03 | 950 | 0.2449 | 
| 0.1975 | 6.35 | 1000 | 0.2468 | 
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.37.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
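
A quick way to compare a local environment against this list (the PEFT and Transformers entries are dev builds installed from source, so exact matches may not be reproducible):

```python
import datasets, peft, tokenizers, torch, transformers

# Print installed versions of the packages listed above.
for pkg in (peft, transformers, torch, datasets, tokenizers):
    print(f"{pkg.__name__}: {pkg.__version__}")
```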
 