feat: remove todo
README.md CHANGED
@@ -119,13 +119,11 @@ print(output)
 
 ```
 
-
 ## Model Details
 
 * **Developed by**: [Stability AI](https://stability.ai/)
 * **Model type**: `StableLM 2 12B Chat` model is an auto-regressive language model based on the transformer decoder architecture.
 * **Language(s)**: English
-TODO: Check if we want to keep paper link since this model is not explictly mentioned in the paper.
 * **Paper**: [Stable LM 2 Chat Technical Report](https://drive.google.com/file/d/1JYJHszhS8EFChTbNAf8xmqhKjogWRrQF/view?usp=sharing)
 * **Library**: [Alignment Handbook](https://github.com/huggingface/alignment-handbook.git)
 * **Finetuned from model**:
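For context, the hunk above closes the README's quickstart snippet; only its final `print(output)` line is visible in this diff. Below is a minimal sketch of what chat-style inference with the `transformers` library looks like, assuming the `stabilityai/stablelm-2-12b-chat` model ID and the standard chat-template API. It is illustrative only, not the README's own elided snippet:

```python
# Minimal sketch of chat inference with transformers.
# The model ID and generation settings are assumptions; the README's
# actual usage snippet is elided from this diff.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "stabilityai/stablelm-2-12b-chat"  # assumed Hugging Face model ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Build the prompt with the model's chat template.
messages = [{"role": "user", "content": "What is a transformer decoder?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Generate a reply and decode only the newly generated tokens.
tokens = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
output = tokenizer.decode(tokens[0][inputs.shape[-1]:], skip_special_tokens=True)
print(output)
```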
@@ -155,6 +153,7 @@ The dataset is comprised of a mixture of open datasets large-scale datasets avai
 ## Performance
 
 ### MT-Bench
+
 | Model                                 | Parameters | MT Bench (Inflection-corrected) |
 |---------------------------------------|------------|---------------------------------|
 | mistralai/Mixtral-8x7B-Instruct-v0.1  | 13B/47B    | 8.48 ± 0.06                     |
@@ -164,9 +163,8 @@ The dataset is comprised of a mixture of open datasets large-scale datasets avai
 | mistralai/Mistral-7B-Instruct-v0.2    | 7B         | 7.48 ± 0.02                     |
 | meta-llama/Llama-2-70b-chat-hf        | 70B        | 7.29 ± 0.05                     |
 
-
-
 ### OpenLLM Leaderboard
+
 | Model                                  | Parameters | Average | ARC Challenge (25-shot) | HellaSwag (10-shot) | MMLU (5-shot) | TruthfulQA (0-shot) | Winogrande (5-shot) | GSM8K (5-shot) |
 | -------------------------------------- | ---------- | ------- | ---------------------- | ------------------- | ------------- | ------------------- | ------------------- | -------------- |
 | mistralai/Mixtral-8x7B-Instruct-v0.1   | 13B/47B    | 72.71   | 70.14                  | 87.55               | 71.40         | 64.98               | 81.06               | 61.11          |
@@ -181,7 +179,6 @@ The dataset is comprised of a mixture of open datasets large-scale datasets avai
 | meta-llama/Llama-2-13b-hf              | 13B        | 55.69   | 59.39                  | 82.13               | 55.77         | 37.38               | 76.64               | 22.82          |
 | meta-llama/Llama-2-13b-chat-hf         | 13B        | 54.92   | 59.04                  | 81.94               | 54.64         | 41.12               | 74.51               | 15.24          |
 
-
 ## Use and Limitations
 
 ### Intended Use
@@ -190,14 +187,11 @@ The model is intended to be used in chat-like applications. Developers must eval
 
 ### Limitations and Bias
 
-TODO: Do we need or have a standard template to throw in here now?
-
 We strongly recommend pairing this model with an input and output classifier to prevent harmful responses.
 Using this model will require guardrails around your inputs and outputs to ensure that any outputs returned are not hallucinations.
 Additionally, as each use case is unique, we recommend running your own suite of tests to ensure proper performance of this model.
 Finally, do not use the models if they are unsuitable for your application, or for any applications that may cause deliberate or unintentional harm to others.
 
-
 ## How to Cite
 
 ```
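The Limitations and Bias paragraph retained in the last hunk recommends pairing the model with input and output classifiers. A minimal sketch of that guardrail pattern follows; the classifier functions are hypothetical stubs, not part of the model card or any specific library:

```python
# Sketch of the input/output guardrail pattern recommended in the model card.
# is_unsafe_prompt / is_unsafe_response are hypothetical stubs; substitute
# real moderation models (or a hosted moderation API) in production.
from typing import Callable

REFUSAL = "Sorry, I can't help with that."

def is_unsafe_prompt(text: str) -> bool:
    # Hypothetical input classifier; replace with a real safety model.
    return False

def is_unsafe_response(text: str) -> bool:
    # Hypothetical output classifier; replace with a real safety model.
    return False

def guarded_chat(prompt: str, generate: Callable[[str], str]) -> str:
    """Wrap a generate(prompt) -> str callable with input and output checks."""
    if is_unsafe_prompt(prompt):
        return REFUSAL
    response = generate(prompt)
    if is_unsafe_response(response):
        return REFUSAL
    return response
```

Whether the classifiers are lightweight heuristics or dedicated safety models is a deployment choice; the pattern only requires that both checks run on every turn.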