Update README.md
README.md CHANGED

@@ -26,6 +26,8 @@ quantization_config = BitsAndBytesConfig(
 )
 ```
 
+Not necessary for inference; just load the model without specifying any quantization/`load_in_*bit`.
+
 ## Model Details
 
 - **Repository:** https://huggingface.co/internlm/internlm2-chat-20b
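For illustration, a minimal sketch of the plain (non-quantized) loading the added note refers to, assuming the standard `transformers` `AutoTokenizer`/`AutoModelForCausalLM` API with `trust_remote_code=True` and the `model.chat` helper described in the InternLM2 chat model cards; none of this is part of the diff above:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "internlm/internlm2-chat-20b"

# No BitsAndBytesConfig / quantization_config and no load_in_4bit / load_in_8bit
# flags: the model is simply loaded in half precision instead of being quantized.
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.float16,  # assumption: fp16 to reduce memory use
    device_map="auto",          # assumption: spread layers across available GPUs
    trust_remote_code=True,
).eval()

# Chat-style inference; `model.chat` is provided by the model's remote code
# (assumption based on the InternLM2 model card examples).
response, _history = model.chat(tokenizer, "Hello! Please introduce yourself.", history=[])
print(response)
```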