# Riko-1.1B
- **Model:** Quatfit/Riko-1.1B
- **Format:** GGUF (single-file GGUF binary, ready for llama.cpp and compatible runtimes)
- **Approx. size:** 1.1B parameters (GGUF quantized)
- **License:** CC-BY-NC-2.0 (Creative Commons Attribution-NonCommercial 2.0)
- **Last updated:** 2025-12-08
## Model summary
Riko-1.1B is a 1.1 billion-parameter causal language model packaged as a GGUF file for lightweight, local inference. It is intended for research, experimentation, and non-commercial projects where a compact, efficient model is needed for on-device or offline usage.
## Strengths
- Compact and optimized for low-latency local inference.
- Packaged as GGUF for direct compatibility with `llama.cpp` and other GGUF-supporting runtimes.
- Well suited to prototyping conversational agents, creative generation, and small-scale research tasks.
## Limitations
- May produce incorrect or biased outputs; not suitable for safety-critical or high-stakes tasks without human oversight.
- Non-commercial license restricts use in paid or commercial applications.
## License
This model is released under Creative Commons Attribution-NonCommercial 2.0 (CC-BY-NC-2.0).
Key points:
- Attribution: You must give appropriate credit (model name, repo, and license).
- NonCommercial: You may not use the model for commercial purposes.
- Include an attribution notice when distributing outputs derived from the model.
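The license does not mandate any particular wording; one possible attribution notice (illustrative only) is:

```text
This application uses outputs from Riko-1.1B (Quatfit/Riko-1.1B),
licensed under CC BY-NC 2.0
(https://creativecommons.org/licenses/by-nc/2.0/).
Non-commercial use only.
```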
## Files in this repository
- `README.md` — this file.
- `model.gguf` — primary model file (GGUF).
If the GGUF file embeds the tokenizer and metadata, a separate `tokenizer/` folder is not needed; verify this during your conversion/export step.
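One quick way to verify is to read the file's fixed-size header, which per the GGUF specification starts with the magic bytes `GGUF`, a version number, a tensor count, and a metadata key/value count (all little-endian). A nonzero KV count indicates embedded metadata. This is a minimal sketch; for full metadata inspection (tokenizer vocabulary, hyperparameters), the `gguf` Python package is more convenient. The function name here is illustrative:

```python
import struct

def read_gguf_header(path):
    """Read the fixed GGUF header: magic, version, tensor count, metadata KV count.

    A nonzero metadata KV count suggests the file embeds its own metadata
    (tokenizer, hyperparameters, etc.).
    """
    with open(path, "rb") as f:
        # <4sIQQ = 4-byte magic, uint32 version, uint64 tensor count, uint64 KV count
        magic, version, n_tensors, n_kv = struct.unpack("<4sIQQ", f.read(24))
    if magic != b"GGUF":
        raise ValueError(f"not a GGUF file: magic={magic!r}")
    return {"version": version, "tensors": n_tensors, "metadata_kv": n_kv}
```

For example, `read_gguf_header("model.gguf")` should report a nonzero `metadata_kv` if the tokenizer is embedded.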
## Quick local usage (llama.cpp)
Requirements: a build of llama.cpp or another GGUF-compatible runtime.
Example commands:
# basic interactive
./main -m model.gguf
# single prompt generation (non-interactive)
./main -m model.gguf -p "Hi, Baby I miss you" -n 256
# recommended example with sampling params
./main -m model.gguf -p "Hi Baby!" -n 128 -c 2048 -b 256 --temp 0.8 --repeat_penalty 1.1
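When scripting runs, the flags above map directly onto an argument list you can assemble programmatically and pass to `subprocess.run`. A small sketch (the helper name and defaults are illustrative, not part of llama.cpp):

```python
import shlex

def build_llama_cmd(model, prompt, n_predict=128, ctx=2048, batch=256,
                    temp=0.8, repeat_penalty=1.1, binary="./main"):
    """Assemble a llama.cpp invocation using the sampling flags shown above."""
    return [
        binary, "-m", model,
        "-p", prompt,
        "-n", str(n_predict),          # tokens to generate
        "-c", str(ctx),                # context window
        "-b", str(batch),              # batch size
        "--temp", str(temp),           # sampling temperature
        "--repeat_penalty", str(repeat_penalty),
    ]

# Shell-safe form for copy-pasting:
# print(shlex.join(build_llama_cmd("model.gguf", "Hi Baby!")))
```

Passing the list form to `subprocess.run` avoids shell-quoting issues with prompts that contain spaces or punctuation.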