Commit 72b50a1 (verified), committed by haok1402 · Parent: f2242f0

Update README.md

Files changed (1): README.md (+1, −2)
README.md CHANGED
@@ -6,8 +6,7 @@ library_name: transformers
 
 # 🧨 FLAME-MoE
 
-This repository contains the model described in [FLAME-MoE: A Transparent End-to-End Research Platform for
-Mixture-of-Experts Language Models](https://huggingface.co/papers/2505.20225).
+This repository contains the model described in [FLAME-MoE: A Transparent End-to-End Research Platform for Mixture-of-Experts Language Models](https://huggingface.co/papers/2505.20225).
 
 **FLAME-MoE** is a fully open Mixture-of-Experts (MoE) language model suite developed by Carnegie Mellon University. It provides a transparent and reproducible research platform for investigating expert routing, model scaling, and training dynamics in sparse architectures. The suite includes seven decoder-only transformer models ranging from 38M to 1.7B active parameters and reflects production-grade MoE setups with 64 experts per MoE layer, top-8 routing, and shared experts.
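Since the card declares `library_name: transformers` and targets text generation, a minimal usage sketch may be helpful. It assumes a hypothetical repository ID (`CMU-FLAME/FLAME-MoE-1.7B-10.3B`, replace with this repo's actual ID) and that the checkpoint loads through the standard `AutoModelForCausalLM` path; if the repo ships custom MoE modules, `trust_remote_code=True` may be required.

```python
# Minimal sketch: loading a FLAME-MoE checkpoint for text generation with
# Hugging Face Transformers. The repository ID below is a placeholder;
# substitute the real model ID from this repo's page.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "CMU-FLAME/FLAME-MoE-1.7B-10.3B"  # hypothetical ID

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,   # MoE weight tensors are large; bf16 keeps memory manageable
    device_map="auto",            # spread layers across available devices if needed
    trust_remote_code=True,       # only needed if the repo defines custom MoE modules
)

# Greedy generation from a short prompt.
inputs = tokenizer("Mixture-of-Experts models work by", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```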