AxionLab
AxionLab-official
AI & ML interests
My goal is to build the best NRM (Nano Reasoning Model); this profile is where I work toward it!
*NOTE: the RX 5500 XT GPU with 8GB VRAM listed in my profile is actually a Radeon Vega 7 with 8GB VRAM irl*
Recent Activity
reacted to SeaWolf-AI's post about 9 hours ago
🧬 Darwin-27B-Opus: 86.9% on GPQA Diamond, World #5, Zero Training
We are excited to share Darwin-27B-Opus, a 27B model that achieved 86.9% on GPQA Diamond, ranking #5 globally on the HuggingFace leaderboard, without a single gradient update.
How? Darwin breeds pretrained models through evolutionary FFN crossbreeding. The father (Qwen3.5-27B) provides the reasoning architecture; the mother (Claude 4.6 Opus Reasoning Distilled) contributes structured chain-of-thought knowledge. CMA-ES automatically discovers optimal per-layer blending ratios; no human tuning required.
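For readers curious what this looks like mechanically, here is a minimal sketch of per-layer FFN blending driven by CMA-ES (using the `cma` package). The state-dict naming, the fitness function, and the simplification that attention stays entirely with the father are our illustrative assumptions, not SeaWolf-AI's actual pipeline:

```python
# Illustrative sketch of evolutionary FFN crossbreeding with CMA-ES.
# The layer naming ("model.layers.{i}.mlp.*"), the bounds, and the fitness
# function are assumptions for this sketch, not the authors' actual pipeline.
import cma


def blend_ffn(father_sd, mother_sd, alphas):
    """Interpolate FFN weights per layer; everything else stays the father's."""
    child = dict(father_sd)
    for name, w in father_sd.items():
        if ".mlp." in name:  # assumes HF-style names like model.layers.3.mlp.up_proj
            layer = int(name.split("layers.")[1].split(".")[0])
            a = alphas[layer]
            child[name] = (1.0 - a) * w + a * mother_sd[name]
    return child


def search_ratios(father_sd, mother_sd, num_layers, evaluate):
    """CMA-ES over one blend ratio per layer; evaluate() scores merged weights."""
    es = cma.CMAEvolutionStrategy(num_layers * [0.5], 0.2, {"bounds": [0.0, 1.0]})
    while not es.stop():
        candidates = es.ask()
        # CMA-ES minimizes, so negate the benchmark score.
        scores = [-evaluate(blend_ffn(father_sd, mother_sd, c)) for c in candidates]
        es.tell(candidates, scores)
    return es.result.xbest  # best per-layer FFN blend ratios found
```

The appeal of CMA-ES here is that the search space is tiny (one scalar per layer), so a modest number of merge-and-evaluate cycles on a single GPU can cover it, which is consistent with the ~2-hour figure quoted below.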
The result surpasses the original Qwen3.5-27B (85.5%), GLM-5.1 (744B, 86.2%), and Qwen3.5-122B (86.6%). A 27B model outperforming a 744B one, with zero training, zero data, one GPU, ~2 hours.
We also confirmed hybrid vigor on Korean benchmarks: Darwin-27B-KR (2nd generation offspring) surpassed both parents on CLIcK, winning 7 out of 11 categories. The evolutionary optimizer independently assigned 93% of FFN from the Korean-specialized mother while preserving 93% of attention from the reasoning-specialized father, autonomously validating our core principle: FFN carries knowledge, Attention carries reasoning.
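As a toy illustration of how such a split could be read off the optimizer's output (the parameter names and alpha values below are invented for the example, not the Darwin-27B-KR results):

```python
# Hypothetical inspection of learned blend ratios, grouped by module type.
# Alpha = share taken from the mother; values here are toy numbers.
from statistics import mean

alphas = {
    "layers.0.mlp": 0.95, "layers.0.self_attn": 0.05,
    "layers.1.mlp": 0.91, "layers.1.self_attn": 0.08,
}

ffn_share  = mean(a for n, a in alphas.items() if ".mlp" in n)
attn_share = 1 - mean(a for n, a in alphas.items() if "attn" in n)

print(f"FFN taken from the mother:      {ffn_share:.0%}")
print(f"Attention kept from the father: {attn_share:.0%}")
```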
Public release: 10 days in, 300+ community derivatives and 120K+ downloads.
Links:
Darwin-27B-Opus: https://huggingface.co/FINAL-Bench/Darwin-27B-Opus
Article: https://huggingface.co/blog/FINAL-Bench/darwin-gpqa
Darwin Family Collection: https://huggingface.co/collections/FINAL-Bench/darwin-family
If foundation models are raw ore, Darwin is the forge. We are just getting started. 🔥