neody/riva-translate-4b-instruct-gptq-int8-w64
Organization: Neodyland
Tags: Safetensors · mistral · vllm · 8-bit precision · gptq
Dataset: HuggingFaceFW/finewiki
Languages: 10
Licenses: nvidia-open-model-license-agreement, cc-by-sa-4.0

Branch: main · 4.78 GB · 1 contributor · 3 commits
Latest commit: 74b049f (verified) by googlefan, "Create README.md", 7 days ago
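The tags describe a GPTQ-quantized, 8-bit Mistral-architecture checkpoint intended to be served with vLLM. Below is a minimal loading sketch; the translation prompt is only an assumed example (the README contents are not reproduced here), and vLLM applies the repository's chat_template.jinja via the tokenizer.

```python
# Minimal sketch: offline inference on this GPTQ checkpoint with vLLM.
# The prompt below is an assumed example; check the repo README for the
# prompt format the model actually expects.
from vllm import LLM, SamplingParams

llm = LLM(
    model="neody/riva-translate-4b-instruct-gptq-int8-w64",
    quantization="gptq",  # matches the gptq / 8-bit precision tags
)

params = SamplingParams(temperature=0.2, max_tokens=256)

# llm.chat() formats the messages with the model's chat template.
messages = [
    {"role": "user", "content": "Translate to Japanese: The weather is nice today."}
]
outputs = llm.chat(messages, params)
print(outputs[0].outputs[0].text)
```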
| File | Size | Commit message | Updated |
|---|---|---|---|
| .gitattributes | 1.57 kB | Upload folder using huggingface_hub | 7 days ago |
| README.md | 5.16 kB | Create README.md | 7 days ago |
| chat_template.jinja | 2.1 kB | Upload folder using huggingface_hub | 7 days ago |
| config.json | 1.25 kB | Upload folder using huggingface_hub | 7 days ago |
| generation_config.json | 132 Bytes | Upload folder using huggingface_hub | 7 days ago |
| model-00001-of-00002.safetensors | 4.27 GB | Upload folder using huggingface_hub | 7 days ago |
| model-00002-of-00002.safetensors | 493 MB | Upload folder using huggingface_hub | 7 days ago |
| model.safetensors.index.json | 83.4 kB | Upload folder using huggingface_hub | 7 days ago |
| quant_log.csv | 11 kB | Upload folder using huggingface_hub | 7 days ago |
| quantize_config.json | 541 Bytes | Upload folder using huggingface_hub | 7 days ago |
| special_tokens_map.json | 438 Bytes | Upload folder using huggingface_hub | 7 days ago |
| tokenizer.json | 17.1 MB | Upload folder using huggingface_hub | 7 days ago |
| tokenizer_config.json | 177 kB | Upload folder using huggingface_hub | 7 days ago |
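The commit messages show the files were uploaded with huggingface_hub; the same library can fetch the full ~4.78 GB snapshot locally. A sketch, assuming the default cache location:

```python
# Sketch: downloading the full repository snapshot with huggingface_hub.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="neody/riva-translate-4b-instruct-gptq-int8-w64",
    revision="main",  # branch shown above
)
print(local_dir)  # path to the cached safetensors shards, tokenizer, and configs
```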