Meta-Llama-3.1-8B-Instruct
Text Generation
W4A16

The Meta Llama 3.1 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction tuned generative models in 8B, 70B and 405B sizes (text in/text out). The Llama 3.1 instruction tuned text only models (8B, 70B, 405B) are optimized for multilingual dialogue use cases and outperform many of the available open source and closed chat models on common industry benchmarks.

Model developer: Meta

Performance Reference

| Device | Backend | Precision | TTFT (Time To First Token) | Prefill | Decode | Context Size | File Size |
|--------|---------|-----------|----------------------------|---------|--------|--------------|-----------|
Model Details

Model Architecture: Llama 3.1 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.

| | Training Data | Params | Input modalities | Output modalities | Context length | GQA | Token count | Knowledge cutoff |
|---|---|---|---|---|---|---|---|---|
| Llama 3.1 (text only) | A new mix of publicly available online data. | 8B | Multilingual Text | Multilingual Text and code | 128k | Yes | 15T+ | December 2023 |
| | | 70B | Multilingual Text | Multilingual Text and code | 128k | Yes | 15T+ | December 2023 |
| | | 405B | Multilingual Text | Multilingual Text and code | 128k | Yes | 15T+ | December 2023 |

Supported languages: English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai.

Llama 3.1 family of models. Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability.

Model Release Date: July 23, 2024.

Status: This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.

License: A custom commercial license, the Llama 3.1 Community License, is available at: https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/LICENSE

Where to send questions or comments about the model: Instructions on how to provide feedback or comments on the model can be found in the model README. For more technical information about generation parameters and recipes for how to use Llama 3.1 in applications, please go here.
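As one illustration of such a recipe, the sketch below formats a dialogue with the instruct model's chat template using the Hugging Face transformers tokenizer. The repository ID, system prompt, and user message are assumptions for the sketch, and access to the gated weights is required.

```python
# Sketch: formatting a prompt with the Llama 3.1 chat template.
# Assumes the Hugging Face transformers library and access to the gated repo.
from transformers import AutoTokenizer

model_id = "meta-llama/Meta-Llama-3.1-8B-Instruct"  # assumed repository ID
tokenizer = AutoTokenizer.from_pretrained(model_id)

messages = [
    {"role": "system", "content": "You are a helpful multilingual assistant."},
    {"role": "user", "content": "Summarize the Llama 3.1 model family in one sentence."},
]

# apply_chat_template inserts the special tokens the instruct model expects.
prompt_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
print(tokenizer.decode(prompt_ids[0]))
```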

Source Model Evaluation

Note: This table shows the evaluation of the source model rather than the quantized model. Source Model Evaluation refers to the Meta-Llama-3.1-8B-Instruct evaluation results.

| Category | Benchmark | # Shots | Metric | Llama 3 8B Instruct | Llama 3.1 8B Instruct | Llama 3 70B Instruct | Llama 3.1 70B Instruct | Llama 3.1 405B Instruct |
|---|---|---|---|---|---|---|---|---|
| General | MMLU | 5 | macro_avg/acc | 68.5 | 69.4 | 82.0 | 83.6 | 87.3 |
| | MMLU (CoT) | 0 | macro_avg/acc | 65.3 | 73.0 | 80.9 | 86.0 | 88.6 |
| | MMLU-Pro (CoT) | 5 | micro_avg/acc_char | 45.5 | 48.3 | 63.4 | 66.4 | 73.3 |
| | IFEval | | | 76.8 | 80.4 | 82.9 | 87.5 | 88.6 |
| Reasoning | ARC-C | 0 | acc | 82.4 | 83.4 | 94.4 | 94.8 | 96.9 |
| | GPQA | 0 | em | 34.6 | 30.4 | 39.5 | 41.7 | 50.7 |
| Code | HumanEval | 0 | pass@1 | 60.4 | 72.6 | 81.7 | 80.5 | 89.0 |
| | MBPP++ base version | 0 | pass@1 | 70.6 | 72.8 | 82.5 | 86.0 | 88.6 |
| | Multipl-E HumanEval | 0 | pass@1 | - | 50.8 | - | 65.5 | 75.2 |
| | Multipl-E MBPP | 0 | pass@1 | - | 52.4 | - | 62.0 | 65.7 |
| Math | GSM-8K (CoT) | 8 | em_maj1@1 | 80.6 | 84.5 | 93.0 | 95.1 | 96.8 |
| | MATH (CoT) | 0 | final_em | 29.1 | 51.9 | 51.0 | 68.0 | 73.8 |
| Tool Use | API-Bank | 0 | acc | 48.3 | 82.6 | 85.1 | 90.0 | 92.0 |
| | BFCL | 0 | acc | 60.3 | 76.1 | 83.0 | 84.8 | 88.5 |
| | Gorilla Benchmark API Bench | 0 | acc | 1.7 | 8.2 | 14.7 | 29.7 | 35.3 |
| | Nexus (0-shot) | 0 | macro_avg/acc | 18.1 | 38.5 | 47.8 | 56.7 | 58.7 |
| Multilingual | Multilingual MGSM (CoT) | 0 | em | - | 68.9 | - | 86.9 | 91.6 |
Model Inference

Users can run large language models on Qualcomm chips using either of the following methods:
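As a generic, framework-level reference (not one of the Qualcomm device-specific paths), the sketch below runs the source model through the Hugging Face transformers text-generation pipeline. The repository ID and the sampling parameters are illustrative assumptions, and this path does not use the W4A16 deployable model.

```python
# Sketch: generic (non-device-optimized) inference with Hugging Face transformers.
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Meta-Llama-3.1-8B-Instruct",  # assumed repository ID
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [{"role": "user", "content": "Write a haiku about on-device AI."}]

# Illustrative generation parameters; tune them for your application.
outputs = generator(
    messages,
    max_new_tokens=128,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)
print(outputs[0]["generated_text"][-1]["content"])
```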

License
Source Model: LLAMA3.1
Deployable Model: LLAMA3.1