_name_or_path-update #4
by sylwia-kuros - opened

README.md CHANGED
@@ -9,31 +9,14 @@ pinned: true
 # The Llama Family
 *From Meta*
 
-Welcome to the official Hugging Face organization for Llama, Llama Guard, and Code Llama models from Meta!
-
-In order to access models here, please visit a repo of one of the three families and accept the license terms and acceptable use policy. Requests are processed hourly.
+Welcome to the official Hugging Face organization for Llama, Llama Guard, and Code Llama models from Meta! In order to access models here, please visit a repo of one of the three families and accept the license terms and acceptable use policy. Requests are processed hourly.
 
 In this organization, you can find models in both the original Meta format as well as the Hugging Face transformers format. You can find:
 
-Current:
-
-**Llama 4:** The Llama 4 models are natively multimodal AI models that enable text and multimodal experiences. These models leverage a mixture-of-experts architecture to offer industry-leading performance in text and image understanding.
-
-These Llama 4 models mark the beginning of a new era for the Llama ecosystem. We are launching two efficient models in the Llama 4 series: Llama 4 Scout, a 17 billion parameter model with 16 experts, and Llama 4 Maverick, a 17 billion parameter model with 128 experts.
-
-
-
-History:
-
-* **Llama 3.3:** Llama 3.3 is a text-only instruct-tuned model in 70B size (text in/text out).
-* **Llama 3.2:** The Llama 3.2 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out).
-* **Llama 3.2 Vision:** The Llama 3.2-Vision collection of multimodal large language models (LLMs) is a collection of pretrained and instruction-tuned image-reasoning generative models in 11B and 90B sizes (text + images in / text out).
 * **Llama 3.1:** a collection of pretrained and fine-tuned text models with sizes ranging from 8 billion to 405 billion parameters, pre-trained on ~15 trillion tokens.
-* **Llama 3.1 Evals:** a collection that provides detailed information on how we derived the reported benchmark metrics for the Llama 3.1 models, including the configurations, prompts, and model responses used to generate evaluation results.
-* **Llama Guard 3:** a Llama-3.1-8B pretrained model, aligned to safeguard against the MLCommons standardized hazards taxonomy and designed to support Llama 3.1 capabilities.
-* **Prompt Guard:** an mDeBERTa-v3-base (86M backbone parameters and 192M word-embedding parameters) fine-tuned multi-label model that categorizes input strings into 3 categories - benign, injection, and jailbreak. It is suitable to run as a filter prior to each call to an LLM in an application.
 * **Llama 2:** a collection of pretrained and fine-tuned text models ranging in scale from 7 billion to 70 billion parameters.
 * **Code Llama:** a collection of code-specialized versions of Llama 2 in three flavors (base model, Python specialist, and instruct tuned).
 * **Llama Guard:** an 8B Llama 3 safeguard model for classifying LLM inputs and responses.
 
+
 Learn more about the models at https://ai.meta.com/llama/
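
The retained intro paragraph describes the gated-access flow: accept the license on one of the model repos, wait for the hourly approval pass, then download with an authenticated client. A minimal sketch of what that looks like in practice with the `transformers` format mentioned above; the repo id, dtype, and device settings here are illustrative assumptions, not anything this README prescribes:

```python
# Minimal sketch: loading a gated Llama checkpoint after accepting the
# license on the Hub. Repo id and hardware settings are assumptions.
import torch
from huggingface_hub import login
from transformers import AutoModelForCausalLM, AutoTokenizer

login(token="hf_...")  # or run `huggingface-cli login` once instead

model_id = "meta-llama/Llama-3.1-8B-Instruct"  # any repo whose license you accepted
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumes a GPU with bf16 support
    device_map="auto",
)

inputs = tokenizer("The Llama family of models", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```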
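The removed Llama 4 paragraph leans on one architectural idea: a mixture-of-experts layer activates only a few expert networks per token, which is how a model with 16 or 128 experts can keep its active parameter count at 17 billion. Below is a toy top-k router in PyTorch, purely to illustrate that routing idea; it is in no way Meta's implementation:

```python
# Toy top-k mixture-of-experts layer: each token is routed to k experts,
# so only a fraction of the total parameters run per token.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyMoE(nn.Module):
    def __init__(self, d_model: int = 64, n_experts: int = 16, k: int = 1):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)  # scores every expert per token
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(n_experts)
        )
        self.k = k

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (tokens, d_model)
        weights = F.softmax(self.router(x), dim=-1)      # routing probabilities
        topw, topi = weights.topk(self.k, dim=-1)        # keep k experts per token
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = topi[:, slot] == e                # tokens routed to expert e
                if mask.any():
                    out[mask] += topw[mask, slot, None] * expert(x[mask])
        return out

print(ToyMoE()(torch.randn(8, 64)).shape)  # torch.Size([8, 64])
```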
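The removed Prompt Guard bullet recommends running the classifier "as a filter prior to each call to an LLM". A sketch of that pattern follows; the repo id and label strings reflect the Prompt Guard model card as of this PR and should be checked against the current org listing, and `llm` is a hypothetical callable standing in for the generation step:

```python
# Sketch of the pre-LLM filter pattern described for Prompt Guard.
# Repo id and label names are taken from the model card and may change;
# `llm` is a hypothetical callable, not part of any library.
from transformers import pipeline

classifier = pipeline("text-classification", model="meta-llama/Prompt-Guard-86M")

def guarded_call(user_input: str, llm) -> str:
    verdict = classifier(user_input)[0]  # e.g. {"label": "JAILBREAK", "score": 0.98}
    if verdict["label"] != "BENIGN":
        return f"Blocked: input flagged as {verdict['label']}"
    return llm(user_input)  # only benign inputs reach the model
```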
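The surviving Llama Guard bullet says the model classifies LLM inputs and responses. In the pattern the Llama Guard model cards document, the conversation is formatted with the guard model's chat template and the verdict is read from the generated text; the repo id below is illustrative and, like the other repos here, gated behind the license:

```python
# Sketch: classifying a conversation with a Llama Guard checkpoint.
# The verdict is generated text: "safe", or "unsafe" plus a hazard category.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

guard_id = "meta-llama/Llama-Guard-3-8B"  # illustrative; license must be accepted
tokenizer = AutoTokenizer.from_pretrained(guard_id)
guard = AutoModelForCausalLM.from_pretrained(
    guard_id, torch_dtype=torch.bfloat16, device_map="auto"
)

chat = [{"role": "user", "content": "How do I reset a forgotten laptop password?"}]
input_ids = tokenizer.apply_chat_template(chat, return_tensors="pt").to(guard.device)
out = guard.generate(input_ids, max_new_tokens=20)
print(tokenizer.decode(out[0][input_ids.shape[-1]:], skip_special_tokens=True))
```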