neody/riva-translate-4b-instruct-gptq-int8
Organization: Neodyland
Format: Safetensors
Dataset: HuggingFaceFW/finewiki
Languages: 10 languages
Tags: mistral, vllm, 8-bit precision, gptq
Licenses: nvidia-open-model-license-agreement, cc-by-sa-4.0
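The vllm and gptq tags indicate this is a GPTQ int8 checkpoint intended to be served with vLLM. Below is a minimal sketch, not taken from the model card: it assumes vLLM's GPTQ support picks up the repo's quantize_config.json, and the prompt and sampling settings are illustrative guesses.

```python
# Minimal sketch (assumptions noted): serve the GPTQ int8 checkpoint with vLLM.
from vllm import LLM, SamplingParams

llm = LLM(
    model="neody/riva-translate-4b-instruct-gptq-int8",
    quantization="gptq",  # matches the gptq / 8-bit precision tags
)

sampling = SamplingParams(temperature=0.0, max_tokens=256)

# chat() renders the repo's chat template; the translation request below is
# an illustrative example, not the documented prompt format for this model.
messages = [
    {"role": "user", "content": "Translate to Japanese: The weather is nice today."}
]
outputs = llm.chat(messages, sampling)
print(outputs[0].outputs[0].text)
```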
Branch: main
Repository size: 4.69 GB
Contributors: 1
History: 5 commits
Latest commit: Update README.md by googlefan (7fc7947, verified, 6 days ago)
| File | Size | Last commit message | Last updated |
|---|---|---|---|
| .gitattributes | 1.57 kB | Upload folder using huggingface_hub | 6 days ago |
| README.md | 5.3 kB | Update README.md | 6 days ago |
| chat_template.jinja | 2.1 kB | Upload folder using huggingface_hub | 6 days ago |
| config.json | 1.25 kB | Upload folder using huggingface_hub | 6 days ago |
| generation_config.json | 132 Bytes | Upload folder using huggingface_hub | 6 days ago |
| model-00001-of-00002.safetensors | 4.28 GB | Upload folder using huggingface_hub | 6 days ago |
| model-00002-of-00002.safetensors | 395 MB | Upload folder using huggingface_hub | 6 days ago |
| model.safetensors.index.json | 83.4 kB | Upload folder using huggingface_hub | 6 days ago |
| quant_log.csv | 11 kB | Upload folder using huggingface_hub | 6 days ago |
| quantize_config.json | 542 Bytes | Upload folder using huggingface_hub | 6 days ago |
| special_tokens_map.json | 438 Bytes | Upload folder using huggingface_hub | 6 days ago |
| tokenizer.json | 17.1 MB | Upload folder using huggingface_hub | 6 days ago |
| tokenizer_config.json | 177 kB | Upload folder using huggingface_hub | 6 days ago |
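The repo ships its prompt format as chat_template.jinja alongside tokenizer.json and tokenizer_config.json. A minimal sketch of rendering that template with the transformers tokenizer; the example message is illustrative and not taken from the model card.

```python
# Minimal sketch: show how the tokenizer files and chat_template.jinja
# combine into a prompt string for the instruct model.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("neody/riva-translate-4b-instruct-gptq-int8")

messages = [
    {"role": "user", "content": "Translate to German: Good morning."}
]

# apply_chat_template renders the repo's Jinja chat template and appends
# the generation prompt marker expected by the instruct model.
prompt = tok.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```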