
sft

This model is a fine-tuned version of meta-llama/Llama-3.2-1B on the identity and the alpaca_en_demo datasets.

Model description

This is a Llama-3.2-1B model trained with supervised fine-tuning (SFT) in LLaMA-Factory, using 4-bit (q4) quantization, on 1,091 data points from the identity and alpaca_en_demo datasets.
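Since this repository (ThomasTheMaker/Llama3.2-1B-Llamafactory-Instruct) is listed as an adapter of meta-llama/Llama-3.2-1B, a minimal loading sketch with transformers and peft might look like the following. This assumes the repo hosts a LoRA adapter produced by LLaMA-Factory; if the adapter was merged into a full checkpoint, loading it directly with AutoModelForCausalLM would be enough.

```python
# Minimal loading sketch, assuming this repo is a LoRA adapter for
# meta-llama/Llama-3.2-1B (repo ids taken from the model card; not verified here).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE = "meta-llama/Llama-3.2-1B"
ADAPTER = "ThomasTheMaker/Llama3.2-1B-Llamafactory-Instruct"

tokenizer = AutoTokenizer.from_pretrained(BASE)
base_model = AutoModelForCausalLM.from_pretrained(BASE, torch_dtype=torch.float16)
model = PeftModel.from_pretrained(base_model, ADAPTER)  # attach the SFT adapter
model.eval()

prompt = "Who are you?"  # the identity dataset targets this kind of question
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Note that the meta-llama/Llama-3.2-1B base model is gated on the Hugging Face Hub, so access must be granted before it can be downloaded.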

Intended uses & limitations

The intended use of this model is to measure the time taken to fine-tune an LLM on different GPUs; it is not meant as a general-purpose assistant.

Result

GTX 1050 Ti: 1h 35m
