Update README.md
README.md CHANGED
@@ -12,3 +12,11 @@ This is a Vicuna-like model with only 68M parameters, which is fine-tuned from [
 The training setup follows the [Vicuna suite](https://github.com/lm-sys/FastChat).
 
 The model is mainly developed as a base Small Speculative Model. As a comparison, it can be better aligned to the Vicuna models than LLaMA-68m with little loss of alignment to the LLaMA models.
+
+
+| Draft Model    | Target Model  | Alignment |
+| -------------- | ------------- | --------- |
+| LLaMA-68/160M  | LLaMA-13/33B  | π         |
+| LLaMA-68/160M  | Vicuna-13/33B | π         |
+| Vicuna-68/160M | LLaMA-13/33B  | π         |
+| Vicuna-68/160M | Vicuna-13/33B | π         |
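
The README text above positions this 68M model as a draft model for speculative decoding against larger LLaMA/Vicuna targets. As a rough illustration of that use (not part of this commit), below is a minimal sketch of assisted generation with Hugging Face `transformers`; the hub ids `double7/vicuna-68m` and `lmsys/vicuna-13b-v1.3` are assumed placeholders, so substitute the actual checkpoints you intend to pair.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder hub ids -- swap in the real draft and target checkpoints.
draft_id = "double7/vicuna-68m"      # assumed id for this 68M draft model
target_id = "lmsys/vicuna-13b-v1.3"  # example large target model

device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if device == "cuda" else torch.float32

tokenizer = AutoTokenizer.from_pretrained(target_id)
target = AutoModelForCausalLM.from_pretrained(target_id, torch_dtype=dtype).to(device)
draft = AutoModelForCausalLM.from_pretrained(draft_id, torch_dtype=dtype).to(device)

prompt = "USER: What is speculative decoding?\nASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt").to(device)

# assistant_model enables assisted generation: the draft proposes a block of
# tokens, the target verifies them in a single forward pass, and only accepted
# tokens are kept. A draft that is better aligned with the target has more of
# its proposals accepted per step, which is where the speedup comes from.
outputs = target.generate(**inputs, assistant_model=draft, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Standard assisted generation assumes the draft and target share a tokenizer and vocabulary, which holds for the LLaMA/Vicuna pairings listed in the table above.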