Update README.md
You can see all parameters used for generation, in addition to advanced parameters and samplers to get the most out of this model, here:

[ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ]
---

<h2>Special Thanks:</h2>

---

Special thanks to all the following, and many more...

All the model makers, fine tuners, mergers, and tweakers:

- They provide the raw "DNA" for almost all my models.
- Sources of model(s) can be found on the repo pages, especially the "source" repos with link(s) to the model creator(s).
Huggingface [ https://huggingface.co ] :

- The place to store, merge, and tune models endlessly.
- THE reason we have an open source community.

LlamaCPP [ https://github.com/ggml-org/llama.cpp ] :

- The ability to compress and run models on GPU(s), CPU(s), and almost all devices.
- Imatrix, quantization, and other tools to tune the quants and the models.
- Llama-Server: a CLI-based direct interface to run GGUF models.
- The only tool I use to quant models.
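As a rough sketch of that llama.cpp workflow (tool names match recent llama.cpp builds; all model and data filenames below are placeholders):

```shell
# Quantize a full-precision GGUF down to Q4_K_M (filenames are hypothetical):
./llama-quantize model-f16.gguf model-Q4_K_M.gguf Q4_K_M

# Optionally build an importance matrix first and use it during quantization:
./llama-imatrix -m model-f16.gguf -f calibration.txt -o imatrix.dat
./llama-quantize --imatrix imatrix.dat model-f16.gguf model-Q4_K_M.gguf Q4_K_M

# Serve the quantized model locally with llama-server:
./llama-server -m model-Q4_K_M.gguf --port 8080
```

This is a command-line sketch only; check the llama.cpp docs for the flags available in your build.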
Quant-Masters: Team Mradermacher, Bartowski, and many others:

- They quant models day and night for us all to use.
- They are the lifeblood of open source access.

MergeKit [ https://github.com/arcee-ai/mergekit ] :

- The universal online/offline tool to merge models together and forge something new.
- Over 20 methods to almost instantly merge models, pull them apart, and put them together again.
- The tool I have used to create over 1500 models.
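For example, a minimal MergeKit SLERP config looks roughly like this (the repo ids and layer count are placeholders; see the MergeKit repo for the full option list):

```yaml
# Hypothetical two-model SLERP merge; adjust layer_range to the models' depth.
slices:
  - sources:
      - model: org/model-A        # placeholder repo id
        layer_range: [0, 32]
      - model: org/model-B        # placeholder repo id
        layer_range: [0, 32]
merge_method: slerp
base_model: org/model-A
parameters:
  t: 0.5                          # 0 = all model-A, 1 = all model-B
dtype: bfloat16
```

Running `mergekit-yaml config.yml ./merged-model` would then write the merged weights to `./merged-model`.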
LMStudio [ https://lmstudio.ai/ ] :

- The go-to tool to test and run models in GGUF format.
- The tool I use to test/refine and evaluate new models.
- The LMStudio forum on Discord: endless info and community for open source.

Text Generation Webui // KoboldCPP // SillyTavern:

- Excellent tools to run GGUF models with - [ https://github.com/oobabooga/text-generation-webui ] [ https://github.com/LostRuins/koboldcpp ] .
- SillyTavern [ https://github.com/SillyTavern/SillyTavern ] can be used with LMStudio [ https://lmstudio.ai/ ] , TextGen [ https://github.com/oobabooga/text-generation-webui ], KoboldCPP [ https://github.com/LostRuins/koboldcpp ], or Llama-Server [part of LlamaCPP] as an off-the-scale front-end control system and interface to work with models.