Note-taker-LFM2-2.6B - Quantized GGUF Model

This is a Q8_0-quantized GGUF build of BondingAI/Note-taker-LFM2-2.6B, packaged for use with Ollama.

Model Details

  • Base Model: BondingAI/Note-taker-LFM2-2.6B
  • Architecture: LFM2
  • Parameters: 2.6B
  • Quantization: Q8_0 (8-bit)
  • Framework: Ollama

Usage with Ollama

You can pull and run this model directly with Ollama:

ollama pull hf.co/BondingAI/ollama-q8_0-Note-taker-LFM2-2.6B:Q8_0

Then run it:

ollama run hf.co/BondingAI/ollama-q8_0-Note-taker-LFM2-2.6B:Q8_0 "Write your prompt here"
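
If the Ollama server is already running locally, the model can also be queried through Ollama's REST API. This is a minimal sketch: it assumes the default endpoint at http://localhost:11434 and uses an illustrative prompt.

# Request a single (non-streaming) completion from the local Ollama server
curl http://localhost:11434/api/generate -d '{
  "model": "hf.co/BondingAI/ollama-q8_0-Note-taker-LFM2-2.6B:Q8_0",
  "prompt": "Summarize these meeting notes: ...",
  "stream": false
}'

With "stream": false the server returns a single JSON object whose response field contains the generated text.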

Features

  • Efficient 8-bit (Q8_0) quantization for faster inference and a smaller memory footprint, with minimal quality loss
  • Compatible with Ollama's standard inference engine (see the Modelfile sketch below)
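
Because the model runs on Ollama's standard engine, it can also be wrapped in a Modelfile to pin a system prompt and sampling parameters under a short local alias. This is only a sketch: the alias name, system prompt, and temperature value below are illustrative, not part of the published model.

# Modelfile
FROM hf.co/BondingAI/ollama-q8_0-Note-taker-LFM2-2.6B:Q8_0
SYSTEM "You are a note-taking assistant. Produce concise, structured notes."
PARAMETER temperature 0.2

Build and run the alias:

ollama create note-taker -f Modelfile
ollama run note-taker "Turn this transcript into meeting notes: ..."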

License

Please refer to the original model card for licensing information.
