---
quantized_by: bartowski
---
## Exllama v2 Quantizations of Starling-LM-7B-alpha at 6.0 bits per weight
Using [turboderp's ExLlamaV2 v0.0.11](https://github.com/turboderp/exllamav2/releases/tag/v0.0.11) for quantization.
Conversion was done using the default calibration dataset.
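For reference, a quant like this is produced with ExLlamaV2's `convert.py`. The paths and exact invocation below are illustrative, not the command actually used for this repo:

```shell
# Sketch of an ExLlamaV2 v0.0.11-era conversion run (paths are examples).
# -i:  directory containing the original fp16 model
# -o:  scratch/working directory for intermediate files
# -cf: directory to write the finished quantized model to
# -b:  target bits per weight
# Omitting -c leaves the default built-in calibration dataset in use.
python convert.py \
  -i ./Starling-LM-7B-alpha \
  -o ./working \
  -cf ./Starling-LM-7B-alpha-exl2-6_0 \
  -b 6.0
```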
Original model: https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha
## Download instructions
With git:
```shell
git clone --single-branch --branch 6_0 https://huggingface.co/bartowski/Starling-LM-7B-alpha-exl2
```
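Note that cloning a Hugging Face repo fetches the weight files through Git LFS, so make sure it is set up first:

```shell
# One-time setup; enables Git LFS so the clone pulls the actual weights.
git lfs install
```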
With huggingface hub (credit to TheBloke for instructions):
```shell
pip3 install huggingface-hub
```
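For example, to download the main branch into a local folder (the folder name here is just an example):

```shell
huggingface-cli download bartowski/Starling-LM-7B-alpha-exl2 --local-dir Starling-LM-7B-alpha-exl2 --local-dir-use-symlinks False
```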
To download from a different branch, add the `--revision` parameter:

```shell
mkdir Starling-LM-7B-alpha-exl2
huggingface-cli download bartowski/Starling-LM-7B-alpha-exl2 --revision 6_0 --local-dir Starling-LM-7B-alpha-exl2 --local-dir-use-symlinks False
```
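Once downloaded, a quick smoke test is possible with the `test_inference.py` script bundled in the exllamav2 repo (run from a clone of that repo; the prompt is arbitrary):

```shell
# Loads the quantized model from the downloaded directory and generates from a test prompt.
python test_inference.py -m ./Starling-LM-7B-alpha-exl2 -p "Once upon a time,"
```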