---
license: apache-2.0
base_model:
- meta-llama/Llama-3.1-8B-Instruct
datasets:
- HuggingFaceH4/no_robots
language:
- en
metrics:
- accuracy
pipeline_tag: text-generation
---

# aksheyd/llama-3.1-8b-instruct-no-robots

Llama-3.1-8B-Instruct fine-tuned (SFT) on the No Robots dataset using Tinker with LoRA (rank 32).

A 4-bit MLX quantization is available [here](https://huggingface.co/aksheyd/llama-3.1-8b-instruct-no-robots-mlx).

Please see this [repo](https://github.com/aksheyd/easy-train) for additional details.
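
As a quick sanity check, the model should load like any other causal LM on the Hub. Below is a minimal sketch using the `transformers` text-generation pipeline; it assumes this repo hosts merged full weights (not standalone LoRA adapters) and that `transformers` and `accelerate` are installed:

```python
# Minimal usage sketch (assumes merged weights under this repo ID and a
# recent transformers release; adjust device_map/dtype for your hardware).
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="aksheyd/llama-3.1-8b-instruct-no-robots",
    torch_dtype="auto",   # pick fp16/bf16 automatically where supported
    device_map="auto",    # requires accelerate; uses GPU if available
)

# Instruct models expect chat-formatted input; the pipeline applies the
# tokenizer's chat template when given a list of messages.
messages = [{"role": "user", "content": "Explain LoRA fine-tuning in two sentences."}]
out = generator(messages, max_new_tokens=128)
print(out[0]["generated_text"][-1]["content"])  # last turn is the model's reply
```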