---
library_name: pytorch
license: other
tags:
- llm
- generative_ai
- android
pipeline_tag: text-generation
---
|
|
# Llama-v3-8B-Instruct: Optimized for Qualcomm Devices
|
|
Llama 3 is a family of LLMs. The model is quantized to w4a16 (4-bit weights and 16-bit activations), with a few layers quantized to w8a16 (8-bit weights and 16-bit activations), making it suitable for on-device deployment. For the prompt and output lengths specified below, the time to first token is Llama-PromptProcessor-Quantized's latency and the average time per additional token is Llama-TokenGenerator-Quantized's latency.
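
For intuition, here is a minimal sketch of what the w4a16 notation means: weights are stored as 4-bit integers with a per-channel scale, and the matmul runs against 16-bit activations. This is a generic symmetric-quantization illustration, not the exact scheme AI Hub applies; the function names are hypothetical.

```python
import numpy as np

def quantize_w4(w: np.ndarray):
    """Toy per-channel symmetric 4-bit weight quantization (illustrative only)."""
    # Scale each output channel so its largest weight maps near the int4 max (7).
    scale = np.abs(w).max(axis=1, keepdims=True) / 7.0
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)  # values fit in 4 bits
    return q, scale

def w4a16_matmul(q: np.ndarray, scale: np.ndarray, x: np.ndarray) -> np.ndarray:
    # Dequantize weights to fp16 and multiply with fp16 activations ("a16").
    w16 = q.astype(np.float16) * scale.astype(np.float16)
    return w16 @ x

w = np.random.randn(4, 8).astype(np.float32)   # full-precision weights
x = np.random.randn(8, 3).astype(np.float16)   # 16-bit activations
q, s = quantize_w4(w)
y = w4a16_matmul(q, s, x)                      # 4-bit storage, 16-bit compute
```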
|
|
This is based on the implementation of Llama-v3-8B-Instruct found [here](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct/).
Because of licensing restrictions, this repository does not contain pre-exported model assets; instead, use the [Qualcomm® AI Hub Models](https://github.com/qualcomm/ai-hub-models/blob/main/src/qai_hub_models/models/llama_v3_8b_instruct) library to export the model with custom configurations (see [Getting Started](#getting-started) below). More details on model performance across various devices can be found [here](#performance-summary).
|
|
Qualcomm AI Hub Models uses [Qualcomm AI Hub Workbench](https://workbench.aihub.qualcomm.com) to compile, profile, and evaluate this model. [Sign up](https://myaccount.qualcomm.com/signup) to run these models on a hosted Qualcomm® device.
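
As a minimal sketch of that flow, assuming the documented `qai_hub` Python client (the device name and the `model.onnx` input are placeholders, and for this LLM the export script in Getting Started drives these steps end to end):

```python
import qai_hub as hub

device = hub.Device("Snapdragon 8 Elite QRD")  # example hosted device

# Compile a source model for the target device, then profile the compiled asset.
compile_job = hub.submit_compile_job(
    model="model.onnx",  # placeholder source model
    device=device,
)
profile_job = hub.submit_profile_job(
    model=compile_job.get_target_model(),
    device=device,
)
profile = profile_job.download_profile()  # per-layer timing and memory stats
```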
|
|
## Deploying Llama 3 on-device
|
|
Please follow the [LLM on-device deployment](https://github.com/qualcomm/ai-hub-apps/tree/main/tutorials/llm_on_genie) tutorial.
|
|
## Getting Started
Due to licensing restrictions, we cannot distribute pre-exported model assets for this model.
Use the [Qualcomm® AI Hub Models](https://github.com/qualcomm/ai-hub-models/blob/main/src/qai_hub_models/models/llama_v3_8b_instruct) Python library to compile and export the model with your own:
- Custom weights (e.g., fine-tuned checkpoints)
- Custom input shapes
- Target device and runtime configurations
|
|
See the [Llama-v3-8B-Instruct repository on GitHub](https://github.com/qualcomm/ai-hub-models/blob/main/src/qai_hub_models/models/llama_v3_8b_instruct) for usage instructions.
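
As a minimal sketch of the export step (assuming the `python -m <model>.export` entry point used across qai_hub_models, an installed `qai_hub_models` package, and access to the gated Meta checkpoint on Hugging Face; the device name is only an example, and the repository linked above has the authoritative flags):

```python
# Drive the qai_hub_models export entry point for this model from Python.
import subprocess
import sys

subprocess.run(
    [
        sys.executable, "-m",
        "qai_hub_models.models.llama_v3_8b_instruct.export",
        "--device", "Snapdragon 8 Elite QRD",  # example target device
    ],
    check=True,
)
```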
|
|
|
|
## Model Details

**Model Type:** Text generation

**Model Stats:**
- Input sequence length for Prompt Processor: 128
- Maximum context length: 4096
- Quantization Type: w4a16 + w8a16 (a few layers)
- Supported languages: English.
- TTFT: Time To First Token is the time it takes to generate the first response token. This is expressed as a range because it varies with the length of the prompt: the lower bound is for a short prompt (up to 128 tokens, i.e., one iteration of the prompt processor) and the upper bound is for a prompt using the full context length (4096 tokens).
- Response Rate: Rate of response generation after the first response token.
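
These two stats combine into a rough latency estimate: total time ≈ TTFT plus output tokens divided by the response rate. The sketch below interpolates TTFT linearly in the number of 128-token prompt-processor iterations, which is an assumption for intuition, not a published formula; `estimate_latency_s` is a hypothetical helper.

```python
import math

def estimate_latency_s(prompt_tokens: int, output_tokens: int,
                       ttft_short_s: float, ttft_full_s: float,
                       tokens_per_s: float) -> float:
    """Rough end-to-end latency estimate from the published TTFT bounds and rate."""
    iters = math.ceil(prompt_tokens / 128)  # prompt is processed 128 tokens at a time
    max_iters = 4096 // 128                 # 32 iterations at the full context length
    frac = (iters - 1) / (max_iters - 1)    # where this prompt sits between the bounds
    ttft = ttft_short_s + (ttft_full_s - ttft_short_s) * frac
    return ttft + output_tokens / tokens_per_s

# Example with the Snapdragon® 8 Elite Gen 5 row from the table below:
# a 1000-token prompt plus 256 generated tokens takes roughly 16.4 s.
print(estimate_latency_s(1000, 256, 0.099, 3.180, 16.37))
```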
## Performance Summary
| Model | Runtime | Precision | Chipset | Context Length | Response Rate (tokens per second) | Time To First Token (range, seconds) |
|---|---|---|---|---|---|---|
| Llama-v3-8B-Instruct | GENIE | w4a16 | Snapdragon® 8 Elite Gen 5 Mobile | 4096 | 16.37 | 0.099 - 3.180 |
| Llama-v3-8B-Instruct | GENIE | w4a16 | Snapdragon® 8 Elite Mobile | 4096 | 15.00 | 0.137 - 4.385 |
| Llama-v3-8B-Instruct | GENIE | w4a16 | Snapdragon® X2 Elite | 4096 | 19.47 | 0.148 - 4.735 |
| Llama-v3-8B-Instruct | GENIE | w4a16 | Snapdragon® X Elite | 4096 | 8.73 | 0.216 - 6.913 |
| Llama-v3-8B-Instruct | GENIE | w4a16 | Qualcomm® QCS9075 | 4096 | 10.76 | 0.183 - 5.865 |
## License
* The license for the original implementation of Llama-v3-8B-Instruct can be found
[here](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct/blob/main/LICENSE).

## References
* [Introducing Meta Llama 3: The most capable openly available LLM to date](https://ai.meta.com/blog/meta-llama-3/)
* [Source Model Implementation](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct/)

## Community
* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions, and learn more about on-device AI.
* For questions or feedback, please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).

## Usage and Limitations

This model may not be used for or in connection with any of the following applications:

- Accessing essential private and public services and benefits;
- Administration of justice and democratic processes;
- Assessing or recognizing the emotional state of a person;
- Biometric and biometrics-based systems, including categorization of persons based on sensitive characteristics;
- Education and vocational training;
- Employment and workers management;
- Exploitation of the vulnerabilities of persons resulting in harmful behavior;
- General purpose social scoring;
- Law enforcement;
- Management and operation of critical infrastructure;
- Migration, asylum and border control management;
- Predictive policing;
- Real-time remote biometric identification in public spaces;
- Recommender systems of social media platforms;
- Scraping of facial images (from the internet or otherwise); and/or
- Subliminal manipulation