Next 8B is an 8-billion-parameter large language model (LLM) built on the Qwen 3 architecture and optimized for reasoning and analytical performance. It is Türkiye's reasoning-capable compact AI, designed to think, infer, and solve problems efficiently.

Focused purely on cognitive tasks, it excels at problem-solving, abstract logic, and multilingual understanding (Turkish, English, and more).

Benchmark results (higher is better):
| Model | MMLU (5-shot) % | MMLU-Pro % | GSM8K % | MATH % |
|---|---|---|---|---|
| Next 14B (Thinking) | 94.6 | 93.2 | 98.8 | 92.7 |
| Next 12B | 92.7 | 84.4 | 95.3 | 87.2 |
| Next 8B (Thinking) | 91.0 | 88.5 | 96.2 | 88.0 |
| GPT-5 | 92.5 | 87.0 | 98.4 | 96.0 |
| Claude Opus 4.1 (Thinking) | ~92.0 | 87.8 | 84.7 | 95.4 |
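The card does not state how these figures were produced. As a reproducibility sketch, one common harness for MMLU/GSM8K-style scores is EleutherAI's lm-evaluation-harness; the task names and few-shot count below are assumptions mirroring the table headers, not the author's documented setup:

```python
# pip install lm-eval
from lm_eval import simple_evaluate

# Evaluate the Hugging Face checkpoint on MMLU and GSM8K
results = simple_evaluate(
    model="hf",
    model_args="pretrained=Lamapi/next-8b,dtype=float16",
    tasks=["mmlu", "gsm8k"],
    num_fewshot=5,  # assumed to match the table's "5-shot" header
)
print(results["results"])
```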
Example usage with Hugging Face Transformers:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "Lamapi/next-8b"

# Load the tokenizer and the fp16 weights, sharded across available devices
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

messages = [
    {"role": "system", "content": "You are Next-X1, a reasoning-capable AI assistant created by Lamapi. You think logically, reason efficiently, and answer concisely."},
    {"role": "user", "content": "Explain why the sky appears blue using logical reasoning."},
]

# Render the chat template into a prompt string, then tokenize, generate, and decode
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=150)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
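For interactive use, you can also stream tokens as they are generated. A minimal sketch using transformers' TextStreamer, reusing the model, tokenizer, and inputs from the snippet above:

```python
from transformers import TextStreamer

# Print decoded tokens to stdout as they arrive, without echoing the prompt
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
_ = model.generate(**inputs, max_new_tokens=512, streamer=streamer)
```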
Key features:

| Feature | Description |
|---|---|
| 🧠 Efficient Reasoning | Strong in abstract logic, critical thinking, and structured problem-solving. |
| 🇹🇷 Multilingual Intelligence | Deep Turkish understanding with 30+ language support. |
| ⚡ Lightweight & Optimized | Quantized formats (Q8_0, Q4_K_M, FP16) for efficient deployment; see the sketch after this table. |
| 🧮 Mathematical & Analytical Skill | Handles structured reasoning and moderate complexity problems. |
| 🧩 Non-Vision Architecture | Focused on text-based cognitive tasks. |
| 🏢 Reliable & Consistent | Predictable outputs suitable for professional use. |
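The quantized formats named above (Q8_0, Q4_K_M) suggest GGUF builds, which typically run through llama.cpp bindings. A minimal sketch using llama-cpp-python; the filename pattern is an assumption, so match it against the files actually published in the repository:

```python
from llama_cpp import Llama

# Download and load a quantized GGUF build from the Hub
llm = Llama.from_pretrained(
    repo_id="Lamapi/next-8b",
    filename="*Q4_K_M.gguf",  # assumed filename pattern, not confirmed by the card
    n_ctx=4096,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "In two sentences, why does quantization reduce memory use?"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```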
Technical specifications:

| Specification | Details |
|---|---|
| Base Model | Qwen 3 |
| Parameters | 8 Billion |
| Architecture | Transformer (Causal LLM) |
| Modalities | Text-only |
| Fine-Tuning | Instruction-tuned with reasoning datasets |
| Optimizations | Quantization-ready, FP16 support (see the sketch after this table) |
| Primary Focus | Reasoning, logic, decision-making, and language understanding |
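The "Quantization-ready" row can also be exercised on the fly. A minimal sketch of 4-bit loading with bitsandbytes, assuming bitsandbytes and accelerate are installed; the NF4 settings are illustrative defaults, not values documented by this card:

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
import torch

# Quantize weights to 4-bit NF4 at load time; matmuls compute in fp16
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",  # illustrative default, not from the card
    bnb_4bit_compute_dtype=torch.float16,
)

model = AutoModelForCausalLM.from_pretrained(
    "Lamapi/next-8b",
    quantization_config=bnb_config,
    device_map="auto",
)
```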
Licensed under the MIT License: free for commercial and non-commercial use.
Next 8B — compact reasoning-capable AI, blending logical depth, analytical efficiency, and lightweight reliability.