LG-AI-EXAONE committed
Commit 1f4f5ae · Parent(s): 98dfd2d

Update README.md

Files changed (1): README.md (+3 −1)
README.md CHANGED
@@ -17,6 +17,7 @@ library_name: transformers
 <p align="center">
 <img src="assets/EXAONE_Symbol+BI_3d.png", width="300", style="margin: 40 auto;">
 🎉 License Updated! We are pleased to announce our more flexible licensing terms 🤗
+<br>✈️ Try on <a href="https://friendli.ai/suite/~/serverless-endpoints/LGAI-EXAONE/EXAONE-4.0-32B/overview">FriendliAI</a>
 <br>

 # EXAONE-4.0-32B
@@ -33,7 +34,7 @@ In the EXAONE 4.0 architecture, we apply new architectural changes compared to p
 1. **Hybrid Attention**: For the 32B model, we adopt a hybrid attention scheme, which combines *Local attention (sliding window attention)* with *Global attention (full attention)* in a 3:1 ratio. We do not use RoPE (Rotary Positional Embedding) for global attention, for better global context understanding.
 2. **QK-Reorder-Norm**: We adopt the Post-LN (LayerNorm) scheme for transformer blocks instead of Pre-LN, and we add RMS normalization right after the Q and K projections. This yields better performance on downstream tasks despite consuming more computation.

-For more details, please refer to our [technical report](https://www.lgresearch.ai/data/cdn/upload/EXAONE_4_0.pdf), [blog](#), and [GitHub](https://github.com/LG-AI-EXAONE/EXAONE-4.0).
+For more details, please refer to our [technical report](https://www.lgresearch.ai/data/cdn/upload/EXAONE_4_0.pdf), [blog](https://www.lgresearch.ai/blog/view?seq=576), and [GitHub](https://github.com/LG-AI-EXAONE/EXAONE-4.0).


 ### Model Configuration
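
To make the two architectural changes described in the hunk above concrete, here is a minimal, hypothetical PyTorch sketch, not EXAONE's actual implementation: the module name, dimensions, layer count, and window size are made up, `nn.RMSNorm` assumes PyTorch ≥ 2.4, and positional embeddings are omitted entirely (the README notes RoPE is not used for global attention anyway).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class QKReorderAttention(nn.Module):
    """Sketch of attention with RMS normalization applied right after the
    Q and K projections, per the QK-Reorder-Norm description above."""

    def __init__(self, dim, n_heads, window=None):
        super().__init__()
        self.n_heads, self.head_dim = n_heads, dim // n_heads
        self.window = window  # None => global (full) attention layer
        self.q_proj = nn.Linear(dim, dim, bias=False)
        self.k_proj = nn.Linear(dim, dim, bias=False)
        self.v_proj = nn.Linear(dim, dim, bias=False)
        self.o_proj = nn.Linear(dim, dim, bias=False)
        self.q_norm = nn.RMSNorm(self.head_dim)  # norm right after Q projection
        self.k_norm = nn.RMSNorm(self.head_dim)  # norm right after K projection

    def forward(self, x):
        b, t, d = x.shape
        shape = (b, t, self.n_heads, self.head_dim)
        q = self.q_norm(self.q_proj(x).view(shape)).transpose(1, 2)
        k = self.k_norm(self.k_proj(x).view(shape)).transpose(1, 2)
        v = self.v_proj(x).view(shape).transpose(1, 2)
        # Causal mask; local layers further restrict attention to a sliding
        # window over the most recent `window` tokens.
        i = torch.arange(t, device=x.device).unsqueeze(1)
        j = torch.arange(t, device=x.device).unsqueeze(0)
        mask = j <= i
        if self.window is not None:
            mask = mask & ((i - j) < self.window)
        out = F.scaled_dot_product_attention(q, k, v, attn_mask=mask)
        return self.o_proj(out.transpose(1, 2).reshape(b, t, d))

# Hybrid layout: local and global attention in a 3:1 ratio, i.e. every
# fourth layer uses full attention (the placement is an assumption).
layers = nn.ModuleList(
    QKReorderAttention(dim=1024, n_heads=8,
                       window=None if (i + 1) % 4 == 0 else 4096)
    for i in range(8)
)
```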
@@ -213,6 +214,7 @@ For more details, please refer to [the documentation](https://github.com/NVIDIA/
 The following tables show the evaluation results of each model in both reasoning and non-reasoning modes. The evaluation details can be found in the [technical report](https://www.lgresearch.ai/data/cdn/upload/EXAONE_4_0.pdf).

 - ✅ denotes that the model has hybrid reasoning capability, evaluated by selecting reasoning / non-reasoning mode depending on the purpose.
+- To assess Korean **practical** and **professional** knowledge, we adopt both the [KMMLU-Redux](https://huggingface.co/datasets/LGAI-EXAONE/KMMLU-Redux) and [KMMLU-Pro](https://huggingface.co/datasets/LGAI-EXAONE/KMMLU-Pro) benchmarks. Both datasets are publicly released!


 ### 32B Reasoning Mode
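
Since both Korean benchmarks added in the hunk above are public on the Hugging Face Hub, they should be loadable with the standard `datasets` library. A minimal sketch, assuming the default configuration suffices (only the two repo IDs come from the diff):

```python
# Minimal sketch: pull the two Korean benchmarks named in the diff.
# Assumes the standard `datasets` API; if either repo defines multiple
# configs, a config name must be passed as the second argument.
from datasets import load_dataset

kmmlu_redux = load_dataset("LGAI-EXAONE/KMMLU-Redux")
kmmlu_pro = load_dataset("LGAI-EXAONE/KMMLU-Pro")

# Inspect splits and features before wiring into an evaluation harness.
print(kmmlu_redux)
print(kmmlu_pro)
```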
 