Commit 5f0d6b3 (verified) by sshukla4 · Parent: d720c67

Update README.md

Files changed (1): README.md (+9 −15)
README.md CHANGED

@@ -1,28 +1,23 @@
 ---
-language:
-- zh
-- en
+license: mit
+base_model:
+- deepseek-ai/DeepSeek-R1-Distill-Llama-8B
 tags:
-- glm
-- chatglm
-- thudm
 - ryzenai-npu
-base_model: THUDM/chatglm3-6b
 ---
 
-# chatglm3-6b
+# DeepSeek-R1-Distill-Llama-8B-onnx-ryzenai-npu
 - ## Introduction
 This model was created using Quark Quantization, followed by OGA Model Builder, and finalized with post-processing for NPU deployment.
 - ## Quantization Strategy
-- AWQ / Group 128 / Asymmetric / BF16 activations / UINT4 weights
-
+- AWQ / Group 128 / Asymmetric / BFP16 activations / UINT4 Weights
 - ## Quick Start
-For quickstart, refer to [Ryzen AI doucmentation](https://ryzenai.docs.amd.com/en/latest/npu_oga.html)
-
-#### Evaluation scores
-The perplexity measurement is run on the wikitext-2-raw-v1 (raw data) dataset provided by Hugging Face. Perplexity score measured for prompt length 2k is 29.81679.
+For quickstart, refer to [Ryzen AI documentation](https://ryzenai.docs.amd.com/en/latest/npu_oga.html)
 
+## Evaluation scores
+- The perplexity measurement is run on the wikitext-2-raw-v1 (raw data) dataset provided by Hugging Face. Perplexity score measured for prompt length 2k is 14.3902.
 
+- The average MMLU scores are astronomy - 55.92, philosophy - 50.48 and management - 56.31.
 
 #### License
 Modifications copyright(c) 2024 Advanced Micro Devices,Inc. All rights reserved.
@@ -32,7 +27,6 @@ you may not use this file except in compliance with the License.
 You may obtain a copy of the License at
 
 http://www.apache.org/licenses/LICENSE-2.0
-
 Unless required by applicable law or agreed to in writing, software
 distributed under the License is distributed on an "AS IS" BASIS,
 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
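The quantization strategy named in the new README (Group 128 / Asymmetric / UINT4 weights) can be illustrated with a minimal NumPy sketch. This is not Quark's actual API — just the arithmetic of per-group asymmetric 4-bit weight quantization, with group size and function names chosen here for illustration:

```python
import numpy as np

def quantize_group_asym_uint4(w, group_size=128):
    """Asymmetric per-group UINT4 quantization of a flat weight array.

    Each group of `group_size` weights gets its own scale and zero-point,
    so an outlier in one group does not degrade precision elsewhere.
    """
    w = w.reshape(-1, group_size)
    w_min = w.min(axis=1, keepdims=True)
    w_max = w.max(axis=1, keepdims=True)
    # Map the per-group range onto the unsigned 4-bit range 0..15.
    scale = (w_max - w_min) / 15.0
    scale = np.where(scale == 0, 1.0, scale)  # guard all-constant groups
    zero_point = np.round(-w_min / scale)
    q = np.clip(np.round(w / scale + zero_point), 0, 15).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Reconstruct approximate float weights from the quantized form."""
    return (q.astype(np.float32) - zero_point) * scale

# Toy weights: 256 values -> 2 groups of 128.
rng = np.random.default_rng(0)
w = rng.standard_normal(256).astype(np.float32)
q, s, z = quantize_group_asym_uint4(w)
w_hat = dequantize(q, s, z).reshape(-1)
max_err = np.abs(w - w_hat).max()  # bounded by half a quantization step
```

Rounding to the nearest of 16 levels bounds the reconstruction error at half a step (`scale / 2`) per group, which is why smaller groups — at the cost of storing more scales and zero-points — give a tighter fit.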
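The perplexity figure reported in the README is the standard metric: the exponential of the average per-token negative log-likelihood over the evaluation text. A minimal sketch of that computation — the log-probabilities here are illustrative stand-ins, not outputs of the actual model, which would produce them over 2k-token windows of wikitext-2-raw-v1:

```python
import math

def perplexity(token_logprobs):
    """Perplexity = exp(mean negative log-likelihood per token)."""
    nll = -sum(token_logprobs) / len(token_logprobs)
    return math.exp(nll)

# Illustrative: a model that assigns probability 1/e to every token
# has mean NLL of 1 and therefore perplexity e (~2.718).
logprobs = [-1.0] * 2048  # one log-prob per token in a 2k-token prompt
ppl = perplexity(logprobs)
```

Lower is better: a perplexity of 14.39 roughly means the model is, on average, as uncertain as if it were choosing uniformly among about 14 tokens at each step.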