Instructions for using catid/cat-llama-3-70b-hqq with libraries, inference providers, notebooks, and local apps.
- Libraries
- Transformers
How to use catid/cat-llama-3-70b-hqq with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="catid/cat-llama-3-70b-hqq")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("catid/cat-llama-3-70b-hqq")
model = AutoModelForCausalLM.from_pretrained("catid/cat-llama-3-70b-hqq")

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
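Note that this checkpoint was saved with HQQ's own serialization (see the quantization recipe below), so the stock Transformers snippets above may not load it out of the box. A minimal loading sketch through HQQ's wrapper instead, assuming `pip install hqq`, that `from_quantized` accepts a Hub repo id, and that the tokenizer is fetched from the original (gated) Meta repo:

```python
# Hedged sketch: load the HQQ-serialized checkpoint via HQQ's HF wrapper.
# Assumptions: hqq is installed, from_quantized can pull from the Hub,
# and the tokenizer comes from the gated meta-llama repo.
from hqq.engine.hf import HQQModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-70B-Instruct")
model = HQQModelForCausalLM.from_quantized("catid/cat-llama-3-70b-hqq")
```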
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use catid/cat-llama-3-70b-hqq with vLLM:
Install from pip and serve model
```bash
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "catid/cat-llama-3-70b-hqq"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "catid/cat-llama-3-70b-hqq",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

Use Docker
```bash
docker model run hf.co/catid/cat-llama-3-70b-hqq
```
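Because the server exposes an OpenAI-compatible API, the official `openai` Python client works as well as curl. A minimal sketch, assuming `pip install openai` (the API key is a placeholder; vLLM does not check it by default):

```python
# Minimal OpenAI-client sketch against the local vLLM server started above.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
response = client.chat.completions.create(
    model="catid/cat-llama-3-70b-hqq",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```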
- SGLang
How to use catid/cat-llama-3-70b-hqq with SGLang:
Install from pip and serve model
```bash
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "catid/cat-llama-3-70b-hqq" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "catid/cat-llama-3-70b-hqq",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

Use Docker images
```bash
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "catid/cat-llama-3-70b-hqq" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "catid/cat-llama-3-70b-hqq",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```
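The Python client sketch from the vLLM section works unchanged against this server; just point `base_url` at `http://localhost:30000/v1`.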
- Docker Model Runner
How to use catid/cat-llama-3-70b-hqq with Docker Model Runner:
```bash
docker model run hf.co/catid/cat-llama-3-70b-hqq
```
AI Model Name: Llama 3 70B. "Built with Meta Llama 3" (license: https://llama.meta.com/llama3/license/)
How to quantize a 70B model so it fits on 2x 4090 GPUs:
I tried EXL2, AutoAWQ, and SqueezeLLM, and they all failed for different reasons (I opened issues for each).
HQQ worked:
I rented a 4x GPU, 1 TB RAM instance ($19/hr) on RunPod with a 1024 GB container and 1024 GB of workspace disk space. I think you only need 2x GPUs with 80 GB of VRAM and 512 GB+ of system RAM, so I probably overpaid.
Note that you need to fill in the access request form to get the 70B Meta weights.
You can copy/paste this into the console and it will set everything up automatically:
```bash
# Install basic tools
apt update
apt install git-lfs vim -y

# Install Miniconda
mkdir -p ~/miniconda3
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda3/miniconda.sh
bash ~/miniconda3/miniconda.sh -b -u -p ~/miniconda3
~/miniconda3/bin/conda init bash
source ~/.bashrc

# Create and activate a Python environment
conda create -n hqq python=3.10 -y && conda activate hqq

# Build HQQ from source
git lfs install
git clone https://github.com/mobiusml/hqq.git
cd hqq
pip install torch
pip install .

# Enable fast Hugging Face downloads and log in (needed for the gated Meta weights)
pip install huggingface_hub[hf_transfer]
export HF_HUB_ENABLE_HF_TRANSFER=1
huggingface-cli login
```
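A quick optional sanity check before kicking off the long quantization run (an extra step beyond the recipe above): confirm that PyTorch sees the GPUs and that hqq imports cleanly, e.g. in a Python shell:

```python
# Optional sanity check (extra step, not in the recipe above):
# confirm CUDA devices are visible and hqq imports cleanly.
import torch
import hqq

print(torch.cuda.device_count(), "GPUs visible")
print("hqq imported OK")
```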
Create the quantize.py file by copy/pasting this into the console:
echo "
import torch
model_id = 'meta-llama/Meta-Llama-3-70B-Instruct'
save_dir = 'cat-llama-3-70b-hqq'
compute_dtype = torch.bfloat16
from hqq.core.quantize import *
quant_config = BaseQuantizeConfig(nbits=4, group_size=64, offload_meta=True)
zero_scale_group_size = 128
quant_config['scale_quant_params']['group_size'] = zero_scale_group_size
quant_config['zero_quant_params']['group_size'] = zero_scale_group_size
from hqq.engine.hf import HQQModelForCausalLM, AutoTokenizer
model = HQQModelForCausalLM.from_pretrained(model_id)
from hqq.models.hf.base import AutoHQQHFModel
AutoHQQHFModel.quantize_model(model, quant_config=quant_config,
compute_dtype=compute_dtype)
AutoHQQHFModel.save_quantized(model, save_dir)
model = AutoHQQHFModel.from_quantized(save_dir)
model.eval()
" > quantize.py
Run the script:

```bash
python quantize.py
```
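If the reload test at the end of quantize.py passes, the quantized folder can be pushed to the Hub. A hedged sketch using `huggingface_hub` (the repo id below is a placeholder, and this publishing step is not spelled out in the notes above):

```python
# Hedged sketch: publish the quantized folder to the Hugging Face Hub.
# Assumption: the repo id is a placeholder; substitute your own namespace.
from huggingface_hub import HfApi

api = HfApi()
api.create_repo("your-username/cat-llama-3-70b-hqq", exist_ok=True)
api.upload_folder(
    folder_path="cat-llama-3-70b-hqq",
    repo_id="your-username/cat-llama-3-70b-hqq",
)
```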