bartowski committed
Commit ed9d71b · verified · 1 Parent(s): 59c8309

Quant for 6.5

README.md CHANGED
@@ -9,65 +9,70 @@ tags:
  datasets:
  - THUDM/LongAlign-10k
  license: apache-2.0
- quantized_by: bartowski
- pipeline_tag: text-generation
  ---
 
- ## Exllama v2 Quantizations of LongAlign-7B-64k

- Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.12">turboderp's ExLlamaV2 v0.0.12</a> for quantization.

- # The "main" branch only contains the measurement.json; download one of the other branches for the model (see below).

- Each branch contains a quantization at a different bits per weight, with the main branch containing only the measurement.json for further conversions.

- Original model: https://huggingface.co/THUDM/LongAlign-7B-64k
- No GQA - VRAM requirements will be higher than for models with grouped-query attention (a rough KV-cache estimate follows the table below).

- | Branch | Bits | lm_head bits | Size (4k) | Size (16k) | Description |
- | ------ | ---- | ------------ | --------- | ---------- | ----------- |
- | [8_0](https://huggingface.co/Bartowski/LongAlign-7B-64k-exl2/tree/8_0) | 8.0 | 8.0 | 9.4 GB | 15.6 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. |
- | [6_5](https://huggingface.co/Bartowski/LongAlign-7B-64k-exl2/tree/6_5) | 6.5 | 8.0 | 8.6 GB | 14.8 GB | Near unquantized performance at vastly reduced size, **recommended**. |
- | [5_0](https://huggingface.co/Bartowski/LongAlign-7B-64k-exl2/tree/5_0) | 5.0 | 6.0 | 7.2 GB | 13.4 GB | Slightly lower quality than 6.5, but usable on 8GB cards with 4k context. |
- | [4_25](https://huggingface.co/Bartowski/LongAlign-7B-64k-exl2/tree/4_25) | 4.25 | 6.0 | 6.5 GB | 12.7 GB | GPTQ-equivalent bits per weight. |
- | [3_5](https://huggingface.co/Bartowski/LongAlign-7B-64k-exl2/tree/3_5) | 3.5 | 6.0 | 5.9 GB | 12.1 GB | Lower quality, not recommended. |
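Since there is no GQA, the KV cache dominates VRAM growth at long context. As a rough, editor-added sanity check (not from the original card), the per-token cache cost can be estimated from the `config.json` shipped in this repo (32 layers, 32 KV heads, head dim 128, an fp16 cache assumed):

```python
# Rough fp16 KV-cache estimate for LongAlign-7B-64k (no GQA); the layer/head
# numbers come from config.json in this repo, the fp16-cache assumption is illustrative.
num_layers = 32              # num_hidden_layers
num_kv_heads = 32            # num_key_value_heads == num_attention_heads (no GQA)
head_dim = 4096 // 32        # hidden_size / num_attention_heads
bytes_per_value = 2          # fp16

per_token = 2 * num_layers * num_kv_heads * head_dim * bytes_per_value  # K and V
print(f"{per_token / 2**20:.2f} MiB per token")                          # ~0.50 MiB
for ctx in (4096, 16384, 65536):
    print(f"{ctx:>6} tokens -> {ctx * per_token / 2**30:.0f} GiB KV cache")  # 2 / 8 / 32 GiB
```

That roughly 0.5 MiB per token is why the 16k sizes in the table sit about 6 GB above the 4k sizes.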

- ## Download instructions

- With git:

- ```shell
- git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/LongAlign-7B-64k-exl2 LongAlign-7B-64k-exl2-6_5
- ```

- With huggingface hub (credit to TheBloke for instructions):

- ```shell
- pip3 install huggingface-hub
  ```

- To download the `main` branch (only useful if you only care about the measurement.json) to a folder called `LongAlign-7B-64k-exl2`:
-
- ```shell
- mkdir LongAlign-7B-64k-exl2
- huggingface-cli download bartowski/LongAlign-7B-64k-exl2 --local-dir LongAlign-7B-64k-exl2 --local-dir-use-symlinks False
  ```
-
- To download from a different branch, add the `--revision` parameter:
-
- Linux:
-
- ```shell
- mkdir LongAlign-7B-64k-exl2-6_5
- huggingface-cli download bartowski/LongAlign-7B-64k-exl2 --revision 6_5 --local-dir LongAlign-7B-64k-exl2-6_5 --local-dir-use-symlinks False
  ```

- Windows (which sometimes doesn't handle `_` in folder names well):

- ```shell
- mkdir LongAlign-7B-64k-exl2-6.5
- huggingface-cli download bartowski/LongAlign-7B-64k-exl2 --revision 6_5 --local-dir LongAlign-7B-64k-exl2-6.5 --local-dir-use-symlinks False
  ```
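For completeness, here is a minimal loading sketch using the ExLlamaV2 Python API, based on the examples in the exllamav2 repo rather than on this card; the folder name, context length, and sampling settings are placeholders:

```python
# Minimal ExLlamaV2 inference sketch (assumes the 6_5 branch was downloaded
# to ./LongAlign-7B-64k-exl2-6_5 as shown above and exllamav2 is installed).
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "./LongAlign-7B-64k-exl2-6_5"   # placeholder path
config.prepare()
config.max_seq_len = 16384                         # lower than 64k to match the table sizes above

model = ExLlamaV2(config)
model.load()                                       # optionally pass a gpu_split list here
tokenizer = ExLlamaV2Tokenizer(config)
cache = ExLlamaV2Cache(model)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.9                         # placeholder sampling settings
settings.top_p = 0.6

prompt = "[INST]Hi![/INST]"                        # Llama-2-style template used by LongAlign-7B-64k
print(generator.generate_simple(prompt, settings, 200))
```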

- Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski

+ # LongAlign-7B-64k

+ <p align="center">
+ 🤗 <a href="https://huggingface.co/datasets/THUDM/LongAlign-10k" target="_blank">[LongAlign Dataset]</a> • 💻 <a href="https://github.com/THUDM/LongAlign" target="_blank">[Github Repo]</a> • 📃 <a href="https://arxiv.org/abs/2401.18058" target="_blank">[LongAlign Paper]</a>
+ </p>

+ **LongAlign** is the first full recipe for LLM alignment on long context. We propose the **LongAlign-10k** dataset, containing 10,000 long instruction-following samples of 8k-64k in length. We investigate training strategies, namely **packing (with loss weighting) and sorted batching**, both of which are implemented in our code. For real-world long-context evaluation, we introduce **LongBench-Chat**, which evaluates instruction-following capability on queries of 10k-100k in length.

+ ## All Models

+ We have open-sourced the following models:

+ |Model|Huggingface Repo|Description|
+ |---|---|---|
+ |**LongAlign-6B-64k-base**| [🤗 Huggingface Repo](https://huggingface.co/THUDM/LongAlign-6B-64k-base) | **ChatGLM3-6B** with an extended 64k context window |
+ |**LongAlign-6B-64k**| [🤗 Huggingface Repo](https://huggingface.co/THUDM/LongAlign-6B-64k) | Chat model trained with LongAlign on LongAlign-6B-64k-base |
+ |**LongAlign-7B-64k-base**| [🤗 Huggingface Repo](https://huggingface.co/THUDM/LongAlign-7B-64k-base) | **Llama-2-7B** with an extended 64k context window |
+ |**LongAlign-7B-64k**| [🤗 Huggingface Repo](https://huggingface.co/THUDM/LongAlign-7B-64k) | Chat model trained with LongAlign on LongAlign-7B-64k-base |
+ |**LongAlign-13B-64k-base**| [🤗 Huggingface Repo](https://huggingface.co/THUDM/LongAlign-13B-64k-base) | **Llama-2-13B** with an extended 64k context window |
+ |**LongAlign-13B-64k**| [🤗 Huggingface Repo](https://huggingface.co/THUDM/LongAlign-13B-64k) | Chat model trained with LongAlign on LongAlign-13B-64k-base |
+ |**ChatGLM3-6B-128k**| [🤗 Huggingface Repo](https://huggingface.co/THUDM/chatglm3-6b-128k) | **ChatGLM3-6B** with a 128k context window |

+ ![](assets/leaderboard.png)

+ ## Model usage
+ Chat prompt template for LongAlign-6B-64k:
+ ```text
+ [Round 1]

+ 问:Hi!

+ 答:Hello! What can I assist you today?

+ [Round 2]

+ 问:What should I do if I can't sleep at night?

+ 答:
  ```
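The [Round N] / 问 / 答 layout above can be rendered programmatically; the helper below is an editor-added illustration (the function name is hypothetical, only the template text comes from this card):

```python
# Hypothetical helper that formats a chat history into the ChatGLM-style
# template used by LongAlign-6B-64k, exactly as laid out above.
def build_longalign_6b_prompt(history, query):
    rounds = []
    for i, (q, a) in enumerate(history, start=1):
        rounds.append(f"[Round {i}]\n\n问:{q}\n\n答:{a}")
    rounds.append(f"[Round {len(history) + 1}]\n\n问:{query}\n\n答:")
    return "\n\n".join(rounds)

print(build_longalign_6b_prompt(
    [("Hi!", "Hello! What can I assist you today?")],
    "What should I do if I can't sleep at night?",
))
```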
+ Chat prompt template for LongAlign-7B-64k and LongAlign-13B-64k:
+ ```text
+ [INST]Hi![/INST]Hello! What can I assist you today?

+ [INST]What should I do if I can't sleep at night?[/INST]
  ```
+ ChatGLM3-6B-128k uses the same prompt template as [ChatGLM3-6B](https://huggingface.co/THUDM/chatglm3-6b).
+
+ A simple demo for deploying the model:
+ ```python
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+ import torch
+ # load the tokenizer and model (trust_remote_code is needed for the custom ChatGLM code)
+ tokenizer = AutoTokenizer.from_pretrained("THUDM/LongAlign-6B-64k", trust_remote_code=True)
+ model = AutoModelForCausalLM.from_pretrained("THUDM/LongAlign-6B-64k", torch_dtype=torch.bfloat16, trust_remote_code=True, device_map="auto")
+ model = model.eval()
+ # feed a long document plus an instruction through the model's chat() helper
+ query = open("assets/paper.txt").read() + "\n\nPlease summarize the paper."
+ response, history = model.chat(tokenizer, query, history=[], max_new_tokens=512, temperature=1)
+ print(response)
  ```
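The demo above relies on the custom `chat()` helper of the ChatGLM-based LongAlign-6B-64k. For the Llama-based LongAlign-7B-64k (the model quantized in this repo), plain `transformers` generation with the `[INST]` template works instead; the sketch below is editor-added and its generation settings are illustrative:

```python
# Editor-added sketch: sampling with the [INST] template for the Llama-based
# LongAlign-7B-64k; max_new_tokens and temperature are placeholders.
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

tokenizer = AutoTokenizer.from_pretrained("THUDM/LongAlign-7B-64k")
model = AutoModelForCausalLM.from_pretrained(
    "THUDM/LongAlign-7B-64k", torch_dtype=torch.bfloat16, device_map="auto"
)
model.eval()

prompt = "[INST]What should I do if I can't sleep at night?[/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=512, do_sample=True, temperature=1.0)
# strip the prompt tokens and print only the model's reply
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```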

+ ## Citation
+
+ If you find our work useful, please consider citing LongAlign:
+
+ ```
+ ```
config.json ADDED
@@ -0,0 +1,28 @@
+ {
+   "_name_or_path": "THUDM/LongAlign-7B-64k",
+   "architectures": [
+     "LlamaForCausalLM"
+   ],
+   "bos_token_id": 1,
+   "eos_token_id": 2,
+   "hidden_act": "silu",
+   "hidden_size": 4096,
+   "initializer_range": 0.02,
+   "intermediate_size": 11008,
+   "max_sequence_length": 65536,
+   "max_position_embeddings": 65536,
+   "model_type": "llama",
+   "num_attention_heads": 32,
+   "num_hidden_layers": 32,
+   "num_key_value_heads": 32,
+   "pad_token_id": 0,
+   "pretraining_tp": 1,
+   "rms_norm_eps": 1e-05,
+   "rope_scaling": null,
+   "rope_theta": 2000000.0,
+   "tie_word_embeddings": false,
+   "torch_dtype": "float16",
+   "transformers_version": "4.33.0",
+   "use_cache": true,
+   "vocab_size": 32256
+ }
generation_config.json ADDED
@@ -0,0 +1,9 @@
+ {
+   "bos_token_id": 1,
+   "eos_token_id": 2,
+   "max_length": 65536,
+   "pad_token_id": 0,
+   "temperature": 0.9,
+   "top_p": 0.6,
+   "transformers_version": "4.31.0"
+ }
original_repo_url.txt ADDED
@@ -0,0 +1 @@
+ https://huggingface.co/THUDM/LongAlign-7B-64k
output.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:487f6826337ef1a3dd95d6abae679c97f36975237509aa0c32cf24c6c8e53d2d
+ size 5663126744
special_tokens_map.json ADDED
@@ -0,0 +1,24 @@
+ {
+   "bos_token": {
+     "content": "<s>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "eos_token": {
+     "content": "</s>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": "<unk>",
+   "unk_token": {
+     "content": "<unk>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9e556afd44213b6bd1be2b850ebbbd98f5481437a8021afaf58ee7fb1818d347
+ size 499723
tokenizer_config.json ADDED
@@ -0,0 +1,37 @@
+ {
+   "add_bos_token": true,
+   "add_eos_token": false,
+   "bos_token": {
+     "__type": "AddedToken",
+     "content": "<s>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "clean_up_tokenization_spaces": false,
+   "eos_token": {
+     "__type": "AddedToken",
+     "content": "</s>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "legacy": false,
+   "model_max_length": 65536,
+   "pad_token": null,
+   "padding_side": "right",
+   "sp_model_kwargs": {},
+   "spaces_between_special_tokens": false,
+   "tokenizer_class": "LlamaTokenizer",
+   "unk_token": {
+     "__type": "AddedToken",
+     "content": "<unk>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "use_default_system_prompt": true
+ }