🧠 Next 8B (m427)

Türkiye’s Compact Reasoning AI — Logical, Analytical, and Efficient

License: MIT · Language: Multilingual · HuggingFace


📖 Overview

Next 8B is an 8-billion-parameter large language model (LLM) built on the Qwen 3 architecture and optimized for reasoning and analytical performance. It is Türkiye's reasoning-capable compact AI, designed to think, infer, and solve problems efficiently.

Focused purely on cognitive tasks, it excels in problem-solving, abstract logic, and multilingual understanding (Turkish, English, and more).


⚡ Highlights

  • 🇹🇷 Türkiye’s compact reasoning AI
  • 🧠 Logical, analytical, and inferential reasoning
  • 🌍 Multilingual support (Turkish, English, 30+ languages)
  • ⚡ Lightweight and efficient
  • 💬 Instruction-tuned for dialogue, tutoring, and analysis

📊 Benchmark Performance

| Model | MMLU (5-shot) % | MMLU-Pro % | GSM8K % | MATH % |
|---|---|---|---|---|
| Next 14B (Thinking) | 94.6 | 93.2 | 98.8 | 92.7 |
| Next 12B | 92.7 | 84.4 | 95.3 | 87.2 |
| Next 8B (Thinking) | 91.0 | 88.5 | 96.2 | 88.0 |
| GPT-5 | 92.5 | 87.0 | 98.4 | 96.0 |
| Claude Opus 4.1 (Thinking) | ~92.0 | 87.8 | 84.7 | 95.4 |
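
The scores above are as reported. For a local sanity check, a minimal sketch using EleutherAI's lm-evaluation-harness is shown below; the task names, shot count, and batch settings are assumptions, not the exact evaluation configuration behind this table.

```python
# Hypothetical reproduction sketch with lm-evaluation-harness (pip install lm_eval).
# Task names and shot counts are assumptions and may differ from the setup
# used to produce the benchmark table above.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",                                        # transformers backend
    model_args="pretrained=Lamapi/next-8b,dtype=float16",
    tasks=["mmlu", "gsm8k"],                           # 5-shot MMLU and GSM8K
    num_fewshot=5,
    batch_size="auto",
)
print(results["results"])
```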

🚀 Installation & Usage

```python
# Requires: pip install transformers torch accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "Lamapi/next-8b"

# Load the tokenizer and FP16 weights, spreading layers across available devices.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are Next-X1, a reasoning-capable AI assistant created by Lamapi. You think logically, reason efficiently, and answer concisely."},
    {"role": "user", "content": "Explain why the sky appears blue using logical reasoning."}
]

# Render the chat template, generate, and decode the reply.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=150)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
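
For interactive use, token-by-token streaming can be layered on top of the snippet above with transformers' TextStreamer. The generation parameters below are illustrative values, not tuned settings published for this model.

```python
# Optional: stream tokens to stdout as they are generated (illustrative settings).
from transformers import TextStreamer

streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
model.generate(
    **inputs,
    max_new_tokens=512,   # larger budget leaves room for reasoning steps
    do_sample=True,
    temperature=0.7,      # assumed sampling settings, not official defaults
    streamer=streamer,    # prints decoded tokens as they arrive
)
```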

🧩 Key Features

| Feature | Description |
|---|---|
| 🧠 Efficient Reasoning | Strong in abstract logic, critical thinking, and structured problem-solving. |
| 🇹🇷 Multilingual Intelligence | Deep Turkish understanding with 30+ language support. |
| Lightweight & Optimized | Quantized formats (Q8_0, Q4_K_M, FP16) for efficiency (see the GGUF sketch below the table). |
| 🧮 Mathematical & Analytical Skill | Handles structured reasoning and moderate complexity problems. |
| 🧩 Non-Vision Architecture | Focused on text-based cognitive tasks. |
| 🏢 Reliable & Consistent | Predictable outputs suitable for professional use. |
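
The GGUF quantizations mentioned above can be run without transformers. Below is a minimal sketch using llama-cpp-python, assuming a locally downloaded Q4_K_M file; the filename and prompts are hypothetical.

```python
# Minimal GGUF inference sketch with llama-cpp-python (pip install llama-cpp-python).
# "next-8b.Q4_K_M.gguf" is a hypothetical local filename; use whichever
# quantized export you actually downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="next-8b.Q4_K_M.gguf",  # hypothetical path to a Q4_K_M quantization
    n_ctx=4096,                        # context window; adjust to available RAM
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a reasoning-capable AI assistant."},
        {"role": "user", "content": "Summarise the key idea behind dynamic programming."},
    ],
    max_tokens=200,
)
print(out["choices"][0]["message"]["content"])
```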

📐 Model Specifications

| Specification | Details |
|---|---|
| Base Model | Qwen 3 |
| Parameters | 8 Billion |
| Architecture | Transformer (Causal LLM) |
| Modalities | Text-only |
| Fine-Tuning | Instruction-tuned with reasoning datasets |
| Optimizations | Quantization-ready, FP16 support (see the 4-bit sketch below) |
| Primary Focus | Reasoning, logic, decision-making, and language understanding |
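
The "Quantization-ready" row can also be exercised directly in transformers. A hedged sketch of 4-bit loading with bitsandbytes follows; the configuration values are common defaults, not settings published for Next 8B.

```python
# Illustrative 4-bit loading via bitsandbytes (pip install bitsandbytes).
# The quantization settings below are generic defaults, not values
# published for this model specifically.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "Lamapi/next-8b"
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,  # compute in FP16 while weights stay 4-bit
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
```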

🎯 Ideal Use Cases

  • Compact Analytical Chatbots
  • Research Assistance (scientific/legal)
  • Education & Tutoring
  • Code & Algorithm Design
  • Decision Support Systems

💡 Performance Highlights

  • Efficient Reasoning: Compact yet powerful logical reasoning.
  • Good Mathematical Understanding: Handles structured problems reliably.
  • Lightweight & Fast: Ideal for resource-conscious environments.
  • Consistent Outputs: Professional-grade reliability in smaller footprint.

📄 License

Licensed under the MIT License; free for commercial and non-commercial use.


📞 Contact & Support


Next 8B — compact reasoning-capable AI, blending logical depth, analytical efficiency, and lightweight reliability.

Follow on HuggingFace
