woojong.ryu committed
Commit a72bf57 · 0 Parent(s)

Initial commit
.gitattributes ADDED
@@ -0,0 +1,35 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.npy filter=lfs diff=lfs merge=lfs -text
+ *.npz filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pickle filter=lfs diff=lfs merge=lfs -text
+ *.pkl filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tar filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zst filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,110 @@
+ ---
+ language:
+ - en
+ - ko
+ pipeline_tag: text-generation
+ tags:
+ - pytorch
+ - llama
+ - causal-lm
+ - 42dot_llm
+ license: cc-by-nc-4.0
+ ---
+ # 42dot_LLM-PLM-1.3B
+
+ **42dot LLM-PLM** is a pre-trained language model (PLM) developed by [**42dot**](https://42dot.ai/) and is part of **42dot LLM** (large language model). 42dot LLM-PLM was pre-trained on Korean and English text corpora and can be used as a foundation language model for a variety of Korean and English natural language tasks. This repository contains the 1.3B-parameter version of the model.
+
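+ The snippet below is a minimal usage sketch, not an official example: it assumes the Hugging Face `transformers` library (the config in this commit was written with version 4.31.0) and the repository identifier `42dot/42dot_LLM-PLM-1.3B` that appears elsewhere on this page.
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ # Load the tokenizer and the 1.3B checkpoint from this repository.
+ model_id = "42dot/42dot_LLM-PLM-1.3B"
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(model_id)
+
+ # Plain greedy continuation of an English prompt; generation settings are illustrative only.
+ inputs = tokenizer("The capital city of South Korea is", return_tensors="pt")
+ outputs = model.generate(**inputs, max_new_tokens=32)
+ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+ ```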
+
+ ## Model Description
+
+ ### Hyperparameters
+ 42dot LLM-PLM is built upon a Transformer decoder architecture similar to that of [LLaMA 2](https://ai.meta.com/research/publications/llama-2-open-foundation-and-fine-tuned-chat-models/), and its hyperparameters are listed below.
+
+ | Params | Layers | Attention heads | Hidden size | FFN size |
+ | -- | -- | -- | -- | -- |
+ | 1.3B | 24 | 32 | 2,048 | 5,632 |
+
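+ As a rough sanity check (not taken from the model card itself), the hyperparameters above, together with the vocabulary size of 50,304 from the `config.json` in this commit, imply a total parameter count of about 1.44B under a standard LLaMA-style block with untied embeddings, which is consistent with the ~5.76 GB float32 checkpoint below:
+
+ ```python
+ # Back-of-the-envelope parameter count from the table above (LLaMA-style block, no biases).
+ hidden, layers, ffn, vocab = 2048, 24, 5632, 50304  # vocab_size taken from config.json
+
+ attn = 4 * hidden * hidden   # q, k, v, o projections
+ mlp = 3 * hidden * ffn       # gate, up, down projections (SwiGLU)
+ norms = 2 * hidden           # two RMSNorm weights per layer
+ per_layer = attn + mlp + norms
+
+ embeddings = vocab * hidden  # input embedding table
+ lm_head = vocab * hidden     # untied output head (tie_word_embeddings is false)
+ total = embeddings + layers * per_layer + hidden + lm_head  # "+ hidden" = final RMSNorm
+
+ print(f"~{total / 1e9:.2f}B parameters")  # ≈ 1.44B, i.e. ~5.76 GB in float32
+ ```
+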
+ ### Pre-training
+
+ Pre-training took about 49K GPU hours (NVIDIA A100). Related settings are listed below.
+
+ | Params | Global batch size\* | Initial learning rate | Train iter.\* | Max length\* | Weight decay |
+ | -- | -- | -- | -- | -- | -- |
+ | 1.3B | 4.0M | 4E-4 | 1.4T | 4,096 | 0.1 |
+
+ (\* unit: tokens)
+
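+ Because the batch size and training length above are both expressed in tokens, the implied optimizer budget is easy to back out; the sketch below (an illustration, not an official figure) works through the arithmetic:
+
+ ```python
+ # Rough figures implied by the pre-training table (all starred columns are in tokens).
+ total_tokens = 1.4e12   # "Train iter." column
+ batch_tokens = 4.0e6    # "Global batch size" column
+ gpu_hours = 49_000      # A100 GPU hours quoted above
+
+ print(f"optimizer steps     ≈ {total_tokens / batch_tokens:,.0f}")  # ≈ 350,000
+ print(f"tokens per GPU-hour ≈ {total_tokens / gpu_hours:,.0f}")     # ≈ 28,571,429
+ ```
+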
+ ### Pre-training datasets
+ We used a set of publicly available text corpora, including:
+ - Korean: [Jikji project](http://jikji.duckdns.org/), [mC4-ko](https://huggingface.co/datasets/mc4), [LBox Open](https://github.com/lbox-kr/lbox-open), [KLUE](https://huggingface.co/datasets/klue), [Wikipedia (Korean)](https://ko.wikipedia.org/), and others.
+ - English: [The Pile](https://github.com/EleutherAI/the-pile), [RedPajama](https://github.com/togethercomputer/RedPajama-Data), [C4](https://huggingface.co/datasets/c4), and others.
+
+ ### Tokenizer
+ The tokenizer is based on the byte-level BPE algorithm, and its vocabulary was trained from scratch on a subset of the pre-training corpus. To construct this subset, 10M documents each were sampled from the Korean and English corpora. The resulting vocabulary size is about 50K.
+
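+ A quick way to inspect the resulting tokenizer (a sketch, assuming the repository identifier used above):
+
+ ```python
+ from transformers import AutoTokenizer
+
+ tokenizer = AutoTokenizer.from_pretrained("42dot/42dot_LLM-PLM-1.3B")
+
+ # The byte-level BPE vocabulary is roughly 50K entries; the model's config.json
+ # lists vocab_size=50304 for the embedding matrix.
+ print(len(tokenizer))
+
+ # Byte-level BPE operates on UTF-8 bytes, so both English and Korean text are covered.
+ print(tokenizer.tokenize("Hello, world!"))
+ ```
+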
+ ### Zero-shot evaluations
+ We evaluated 42dot LLM-PLM on a variety of academic benchmarks in both Korean and English. All results were obtained using [lm-eval-harness](https://github.com/EleutherAI/lm-evaluation-harness/tree/polyglot) and models released on the Hugging Face Hub.
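+
+ The sketch below illustrates how such zero-shot scores can be reproduced through the harness's Python entry point. It is an assumption-laden example: the `hf-causal` model name, the `simple_evaluate` signature, and especially the Korean (KOBEST) task identifiers differ between harness versions and branches, so the exact names should be checked against the linked polyglot branch.
+
+ ```python
+ from lm_eval import evaluator
+
+ # Zero-shot run over a few standard English tasks; the KOBEST tasks used in the Korean
+ # table below live in the polyglot branch, and their identifiers should be taken from there.
+ results = evaluator.simple_evaluate(
+     model="hf-causal",
+     model_args="pretrained=42dot/42dot_LLM-PLM-1.3B",
+     tasks=["boolq", "hellaswag", "arc_easy"],
+     num_fewshot=0,
+ )
+ print(results["results"])
+ ```
+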
+ #### Korean (KOBEST)
+
+ <figure align="center">
+ <img src="https://huggingface.co/42dot/42dot_LLM-PLM-1.3B/resolve/main/asset/42dot_LLM_PLM_KO_score_background.png"/>
+ </figure>
+
+ |Tasks / Macro-F1|[KoGPT2](https://github.com/SKT-AI/KoGPT2) <br>1.2B|[Polyglot-Ko](https://github.com/EleutherAI/polyglot) <br>1.3B|[XGLM](https://huggingface.co/facebook/xglm-1.7B) <br>1.7B|[PolyLM](https://huggingface.co/DAMO-NLP-MT/polylm-1.7b) <br>1.7B|42dot LLM-PLM <br>1.3B|
+ |--------------|-----------|----------------|---------|-----------|------------------------|
+ |boolq |0.337 |0.355 |**0.502** |0.334 |0.369 |
+ |copa |0.67 |**0.721** |0.616 |0.513 |0.704 |
+ |hellaswag |0.404 |0.401 |0.374 |0.321 |**0.431** |
+ |sentineg |0.606 |0.679 |0.46 |0.382 |**0.69** |
+ |**average** |0.504 |0.539 |0.488 |0.388 |**0.549** |
+
+ #### English
+
+ <figure align="center">
+ <img src="https://huggingface.co/42dot/42dot_LLM-PLM-1.3B/resolve/main/asset/42dot_LLM_EN_score_white_background.png"/>
+ </figure>
+
+ | Tasks / Metric | MPT <br>1B | OPT <br>1.3B | XGLM <br>1.7B | PolyLM <br>1.7B | 42dot LLM-PLM <br>1.3B |
+ | ---------------------- | ------ | -------- | --------- | ----------- | ------------------------ |
+ | anli_r1/acc | 0.309 | **0.341** | 0.334 | 0.336 | 0.325 |
+ | anli_r2/acc | 0.334 | 0.339 | 0.331 | 0.314 | **0.34** |
+ | anli_r3/acc | 0.33 | 0.336 | 0.333 | **0.339** | 0.333 |
+ | arc_challenge/acc | 0.268 | 0.234 | 0.21 | 0.198 | **0.288** |
+ | arc_challenge/acc_norm | 0.291 | 0.295 | 0.243 | 0.256 | **0.317** |
+ | arc_easy/acc | 0.608 | 0.571 | 0.537 | 0.461 | **0.628** |
+ | arc_easy/acc_norm | 0.555 | 0.51 | 0.479 | 0.404 | **0.564** |
+ | boolq/acc | 0.517 | 0.578 | 0.585 | 0.617 | **0.624** |
+ | hellaswag/acc | 0.415 | 0.415 | 0.362 | 0.322 | **0.422** |
+ | hellaswag/acc_norm | 0.532 | 0.537 | 0.458 | 0.372 | **0.544** |
+ | openbookqa/acc | **0.238** | 0.234 | 0.17 | 0.166 | 0.222 |
+ | openbookqa/acc_norm | 0.334 | 0.334 | 0.298 | 0.334 | **0.34** |
+ | piqa/acc | 0.714 | 0.718 | 0.697 | 0.667 | **0.725** |
+ | piqa/acc_norm | 0.72 | 0.724 | 0.703 | 0.649 | **0.727** |
+ | record/f1 | 0.84 | **0.857** | 0.775 | 0.681 | 0.848 |
+ | record/em | 0.832 | **0.849** | 0.769 | 0.674 | 0.839 |
+ | rte/acc | 0.541 | 0.523 | **0.559** | 0.513 | 0.542 |
+ | truthfulqa_mc/mc1 | 0.224 | 0.237 | 0.215 | **0.251** | 0.236 |
+ | truthfulqa_mc/mc2 | 0.387 | 0.386 | 0.373 | **0.428** | 0.387 |
+ | wic/acc | 0.498 | **0.509** | 0.503 | 0.5 | 0.502 |
+ | winogrande/acc | 0.574 | **0.595** | 0.55 | 0.519 | 0.583 |
+ | **average** | 0.479 | 0.482 | 0.452 | 0.429 | **0.492** |
+
+ ## Limitations and Ethical Considerations
+ 42dot LLM-PLM shares a number of well-known limitations with other large language models (LLMs). For example, it may generate false or misleading content, since 42dot LLM-PLM is also subject to [hallucination](https://en.wikipedia.org/wiki/Hallucination_(artificial_intelligence)). In addition, 42dot LLM-PLM may generate toxic, harmful, or biased content because its training data was collected from the web. We strongly suggest that users of 42dot LLM-PLM be aware of these limitations and take the necessary steps to mitigate them.
+
+ ## Disclaimer
+ The contents generated by the 42dot LLM series ("42dot LLM") do not necessarily reflect the views or opinions of 42dot Inc. ("42dot"). 42dot disclaims any and all liability to any party for any direct, indirect, implied, punitive, special, incidental, or other consequential damages arising from any use of the 42dot LLM and its generated contents.
+
+ ## License
+ 42dot LLM-PLM is licensed under the Creative Commons Attribution-NonCommercial 4.0 International license (CC BY-NC 4.0).
+
+ ## Citation
+
+ ```
+ @misc{42dot2023llm,
+   title = {42dot LLM: A Series of Large Language Model by 42dot},
+   author = {42dot Inc.},
+   year = {2023},
+   url = {https://github.com/42dot/42dot_LLM},
+   version = {1.0.0},
+ }
+ ```
added_tokens.json ADDED
@@ -0,0 +1,6 @@
+ {
+ "<|endoftext|>": 50256,
+ "<||bos||>": 50257,
+ "<||pad||>": 50258,
+ "<||unk||>": 50259
+ }
asset/42dot_LLM_EN_score_white_background.png ADDED
asset/42dot_LLM_PLM_KO_score_background.png ADDED
config.json ADDED
@@ -0,0 +1,27 @@
+ {
+ "_name_or_path": "42dot-PLM-1.3B",
+ "architectures": [
+ "LlamaForCausalLM"
+ ],
+ "bos_token_id": 50257,
+ "eos_token_id": 50256,
+ "hidden_act": "silu",
+ "hidden_size": 2048,
+ "initializer_range": 0.01,
+ "intermediate_size": 5632,
+ "max_position_embeddings": 4096,
+ "model_type": "llama",
+ "num_attention_heads": 32,
+ "num_hidden_layers": 24,
+ "num_key_value_heads": 32,
+ "pad_token_id": 50258,
+ "pretraining_tp": 1,
+ "rms_norm_eps": 1e-06,
+ "rope_scaling": null,
+ "tie_word_embeddings": false,
+ "tokenizer_class": "GPT2TokenizerFast",
+ "torch_dtype": "float32",
+ "transformers_version": "4.31.0",
+ "use_cache": true,
+ "vocab_size": 50304
+ }
generation_config.json ADDED
@@ -0,0 +1,7 @@
+ {
+ "_from_model_config": true,
+ "bos_token_id": 50257,
+ "eos_token_id": 50256,
+ "pad_token_id": 50258,
+ "transformers_version": "4.31.0"
+ }
merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5887cb28a2368a6d45beb52450ada1b4829744c8c75fdaa5e1e1c25727fd1936
+ size 5757114640
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9cbe77e96064b09b39c6a8ffbacd8fb0d83237119eb65dac5a8fed57cd325b54
+ size 5757165853
special_tokens_map.json ADDED
@@ -0,0 +1,12 @@
+ {
+ "bos_token": "<||bos||>",
+ "eos_token": {
+ "content": "<|endoftext|>",
+ "lstrip": false,
+ "normalized": true,
+ "rstrip": false,
+ "single_word": false
+ },
+ "pad_token": "<||pad||>",
+ "unk_token": "<||unk||>"
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,33 @@
+ {
+ "add_bos_token": false,
+ "add_prefix_space": false,
+ "bos_token": {
+ "__type": "AddedToken",
+ "content": "<|endoftext|>",
+ "lstrip": false,
+ "normalized": true,
+ "rstrip": false,
+ "single_word": false
+ },
+ "clean_up_tokenization_spaces": true,
+ "eos_token": {
+ "__type": "AddedToken",
+ "content": "<|endoftext|>",
+ "lstrip": false,
+ "normalized": true,
+ "rstrip": false,
+ "single_word": false
+ },
+ "errors": "replace",
+ "model_max_length": 8192,
+ "pad_token": null,
+ "tokenizer_class": "GPT2Tokenizer",
+ "unk_token": {
+ "__type": "AddedToken",
+ "content": "<|endoftext|>",
+ "lstrip": false,
+ "normalized": true,
+ "rstrip": false,
+ "single_word": false
+ }
+ }
vocab.json ADDED
The diff for this file is too large to render. See raw diff