aksheyd/llama-3.1-8b-instruct-no-robots

Llama-3.1-8B-Instruct fine-tuned (SFT) on the No Robots dataset using Tinker and LoRA (rank=32).
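
For reference, a minimal inference sketch with Transformers is shown below. The chat prompt and generation settings are illustrative assumptions, not part of the training setup.

```python
# Minimal inference sketch (assumes a transformers version with Llama 3.1 support).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "aksheyd/llama-3.1-8b-instruct-no-robots"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # weights are published in BF16
    device_map="auto",
)

# Build a chat prompt with the Llama 3.1 chat template.
messages = [{"role": "user", "content": "Write a short haiku about robots."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```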

A 4-bit quantization for MLX is available here.
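
On Apple Silicon, the 4-bit quantization can be run with mlx-lm. The sketch below uses a placeholder repo id; substitute the quantized repo linked above.

```python
# Sketch for running the 4-bit MLX quantization with mlx-lm.
# NOTE: the repo id below is a placeholder; use the quantized repo linked above.
from mlx_lm import load, generate

model, tokenizer = load("<mlx-4bit-repo-id>")

prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Write a short haiku about robots."}],
    add_generation_prompt=True,
    tokenize=False,
)
print(generate(model, tokenizer, prompt=prompt, max_tokens=128))
```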

Please see this repo for additional details.

Format: Safetensors · Model size: 8B params · Tensor type: BF16

Model tree for aksheyd/llama-3.1-8b-instruct-no-robots
Finetuned from Llama-3.1-8B-Instruct: this model
Quantizations: 1 model
Dataset used to train aksheyd/llama-3.1-8b-instruct-no-robots: No Robots