
🧩 The "Muon is Scalable" Blueprint: A Distributed Muon Engineering Breakdown (CPU-Friendly, tutorial style)

A practical, annotated engineering breakdown of the Muon optimizer, extended into a fully distributed (DP × TP) version that actually runs on plain CPU/Gloo, so broke-but-curious builders can still get their hands dirty.

This is the expert-level systems engineering companion to my original Understanding the Muon Optimizer tutorial.

💡 "Because sometimes the best way to learn the distributed nightmare is to get your hands dirty and your eyes crossed." 🤪


🌕 Why This Exists

Most public Muon examples (like Moonshot's PoC) are designed for multi-GPU NCCL clusters, making them impossible to run or debug for most of us. In addition, most documentation for distributed systems is written by experts, for experts, making it a "nightmare" to learn. My goal is to change that.

This repository is not a "simplified" version that "flattens the depth" of the work.

Instead, it's a didactic refactor. I've taken the complex, real-world PoC and optimized it for readability and learning, so you can see the "blueprint" behind the "chaos":

  • Fully annotated to demonstrate data parallel (ZeRO-1) + tensor parallel (TP) orchestration end-to-end.
  • Walks through every "distributed acrobatic" step (DP gather → TP gather → Newton–Schulz → TP shard → DP shard); a minimal sketch of the Newton–Schulz step follows this list.
  • Optimized for symmetrical logic and consistent naming, so you can see the "opposite arrow" data flow of the "virtual map" (dist_meta).
  • Built to run on a single CPU machine or in a Colab notebook.
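
The "Newton–Schulz" box in that flow is the per-matrix orthogonalization step at the heart of Muon. Here is a minimal, CPU-friendly sketch of the standard quintic iteration (the coefficients are the ones used in the public Muon reference implementation; the function name is illustrative, not the repo's):

import torch

def newton_schulz_orthogonalize(G: torch.Tensor, steps: int = 5, eps: float = 1e-7) -> torch.Tensor:
    # Approximates the nearest semi-orthogonal matrix to G via a quintic
    # Newton-Schulz iteration. This is the "Run Math" step the distributed
    # code performs after the DP and TP gathers have reassembled the full matrix.
    assert G.ndim == 2, "Newton-Schulz needs a 2D matrix (see Logic Bug #2 in the fixes table)"
    a, b, c = 3.4445, -4.7750, 2.0315
    X = G.float()
    X = X / (X.norm() + eps)          # scale so the spectral norm is at most ~1
    transposed = G.size(0) > G.size(1)
    if transposed:
        X = X.T
    for _ in range(steps):
        A = X @ X.T
        B = b * A + c * (A @ A)
        X = a * X + B @ X
    if transposed:
        X = X.T
    return X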

🧠 The "Turtle Speed" Breakthrough: The Full Story

This code is complex. It's a "distributed nightmare" 🫠.

Instead of a traditional, long-form README, the best documentation is the "making of" story. I've chronicled my entire journey of reverse-engineering and debugging this code in my "Turtle Speed Breakthrough" series on Medium.

This tutorial is the final, runnable code that resulted from that deep dive.


🚀 Quick Start

Run the CPU-safe, fully-annotated notebook right from your browser:

Open In Colab

Or, you can clone this repo and run the Python script locally to simulate an 8-process cluster on your CPU:

git clone https://huggingface.co/datasets/bird-of-paradise/muon-distributed
cd muon-distributed

# This will spawn 8 processes and run the full test
python distributed_muon_cpu.py
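
(If you're curious what the script does under the hood: it fakes the cluster with torch.multiprocessing and the CPU-only "gloo" backend. The stripped-down sketch below is illustrative only; demo_worker, WORLD_SIZE, and the all_reduce payload are not the repo's actual test.)

import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp

WORLD_SIZE = 8  # e.g. 4-way data parallel x 2-way tensor parallel

def demo_worker(rank: int, world_size: int) -> None:
    os.environ["MASTER_ADDR"] = "127.0.0.1"
    os.environ["MASTER_PORT"] = "29500"
    # gloo runs on plain CPUs; nccl would require GPUs.
    dist.init_process_group("gloo", rank=rank, world_size=world_size)
    x = torch.ones(4) * rank
    dist.all_reduce(x)              # every rank ends up with the same sum
    if rank == 0:
        print("all_reduce result:", x)
    dist.destroy_process_group()

if __name__ == "__main__":
    mp.spawn(demo_worker, args=(WORLD_SIZE,), nprocs=WORLD_SIZE)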

(Note: the original, un-annotated, buggy Moonshot PoC that this work is based on is preserved in this commit.)


🗂️ What's Inside (File Guide)

  • distributed_muon_cpu.ipynb: (Start Here) The Colab-friendly notebook that walks through the environment fixes and runs the code.
  • distributed_muon_cpu.py: The final, tested, fixed, and heavily-annotated Python script. This is the "golden" code that runs on a CPU-only environment using the "gloo" backend.
  • distributed_muon.py: My annotated and logically debugged version of the GPU code. This is for users who have a multi-GPU "nccl" environment. (Note: Since I don't have a multi-GPU cluster, this version is untested... unless someone wants to sponsor me with some GPUs! 😉)

🎓 What You'll Learn (The "Nightmare" Blueprint)

By exploring this code, you'll see the real implementation of the concepts I discuss in my articles:

  • The 2D Grid: How to set up orthogonal dist_group (DP) and tp_group (TP) handles; a minimal sketch of this setup follows the list.
  • The "Aisles" & "Pallets": How param_groups (buffer_idx) and communication buckets (bucket_idx) are used to organize parameters.
  • The "Virtual Buffer": How a "master plan" (dist_meta and global_buffer_size) is used to manage memory for sharding (ZeRO-1).
  • The Acrobatic Data Flow: The full (DP gather → TP gather) → (Run Math) → (TP shard → DP shard) journey.
  • The Nuance: You'll see why we bucket the slow DP all_gather but don't need to bucket the fast, on-node TP all_gather.
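
To make the first bullet concrete, here is a minimal sketch of how two orthogonal sets of process groups can be built with torch.distributed. It assumes an 8-rank world arranged as 4-way DP × 2-way TP; the function name build_2d_groups and its tp_size argument are illustrative, not names from the repo:

import torch.distributed as dist

def build_2d_groups(rank: int, world_size: int, tp_size: int):
    # Arrange the ranks in a (dp_size x tp_size) grid: TP groups are the rows
    # (neighbouring, "on-node" ranks), DP groups are the columns.
    # NOTE: dist.new_group is collective, so every rank must create every
    # group in the same order, even the groups it does not belong to.
    dp_size = world_size // tp_size
    my_dp_group, my_tp_group = None, None
    for row in range(dp_size):                    # TP groups: {0,1}, {2,3}, ...
        ranks = list(range(row * tp_size, (row + 1) * tp_size))
        group = dist.new_group(ranks)
        if rank in ranks:
            my_tp_group = group
    for col in range(tp_size):                    # DP groups: {0,2,4,6}, {1,3,5,7}
        ranks = list(range(col, world_size, tp_size))
        group = dist.new_group(ranks)
        if rank in ranks:
            my_dp_group = group
    return my_dp_group, my_tp_group

With those two handles, the acrobatic flow above reads straight off: all_gather the gradient/optimizer shards across the DP group, all_gather the TP-split pieces across the TP group, run Newton–Schulz on the reassembled 2D matrix, then slice the result back out in the reverse order (TP shard, then DP shard).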

๐Ÿž Summary of All Fixes

This repo isn't just a copy-paste. It's the result of a week-long debugging "nightmare." Here are all the bugs we had to find and fix to make it run:

| Issue | Problem | Solution |
| --- | --- | --- |
| Logic Bug #1 | Missing params = group["params"] | Added the line in the Muon update loop. |
| Logic Bug #2 | ns_input was 1D after unpacking, crashing zeropower. | Changed .view(-1) to .view(dist_meta.shape) to restore the 2D shape. |
| Env Bug #1 | Hardcoded "nccl" backend. | Changed dist.init_process_group to use "gloo". |
| Env Bug #2 | Hardcoded 'cuda' device. | Changed gen_param_and_grads to use 'cpu'. |
| Env Bug #3 | mp.spawn() crashes in Jupyter/Colab. | The .ipynb runs the code as a !python subprocess, bypassing the notebook kernel. |
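
To make Logic Bug #2 concrete: after a parameter's slice is unpacked from the flat communication buffer it is one-dimensional, and the Newton–Schulz routine needs a 2D matrix to work on. A toy sketch of the fix (DistMeta here is a hypothetical stand-in for the repo's dist_meta records, not its actual class):

import torch
from dataclasses import dataclass

@dataclass
class DistMeta:                # hypothetical stand-in for the repo's dist_meta
    offset: int                # where this parameter starts in the flat buffer
    numel: int                 # how many elements it owns
    shape: torch.Size          # the original 2D shape to restore

flat_buffer = torch.randn(24)  # pretend this is the gathered flat buffer
meta = DistMeta(offset=0, numel=24, shape=torch.Size([6, 4]))

chunk = flat_buffer[meta.offset : meta.offset + meta.numel]
# ns_input = chunk.view(-1)          # the bug: still 1D, zeropower crashes
ns_input = chunk.view(meta.shape)    # the fix: restore the 2D matrix shape
assert ns_input.ndim == 2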

📖 Citation

If you use this tutorial in your work, please cite the original Muon paper and this tutorial.

@misc{wei2025muondistributed,
  author = {Wei, Jen},
  title = {A CPU-Friendly Tutorial for Distributed Muon (DPxTP)},
  year = {2025},
  howpublished = {\url{https://huggingface.co/datasets/bird-of-paradise/muon-distributed}}
}

@misc{jordan2024muon,
  author = {Jordan, Keller and others},
  title = {Muon: An optimizer for hidden layers in neural networks},
  year = {2024},
  url = {https://kellerjordan.github.io/posts/muon/}
}

@misc{liu2025muonscalable,
  author = {Liu, Jingyuan and others},
  title = {Muon is Scalable for LLM Training},
  year = {2025},
  url = {https://arxiv.org/abs/2502.16982}
}