Qwen2-1.5B-Instruct
Text Generation
W4A16

Qwen2 is the new series of Qwen large language models. For Qwen2, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters, including a Mixture-of-Experts model.

Compared with state-of-the-art open-source language models, including the previously released Qwen1.5, Qwen2 generally surpasses most open-source models and demonstrates competitiveness against proprietary models across a series of benchmarks targeting language understanding, language generation, multilingual capability, coding, mathematics, reasoning, and more.

For more details, please refer to Qwen blog, GitHub, and Documentation.

Performance Reference

| Device | Backend | Precision | TTFT | Prefill | Decode | Context Size | File Size |
|--------|---------|-----------|------|---------|--------|--------------|-----------|
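The W4A16 precision listed above denotes 4-bit integer weights with 16-bit activations. As a rough illustration of the idea (a minimal sketch, not the actual quantizer used to produce the deployable model; the group size and symmetric max-abs scaling are assumptions), weights can be quantized per group and dequantized back to 16-bit floats at inference time:

```python
import numpy as np

def quantize_w4(weights: np.ndarray, group_size: int = 128):
    """Symmetric per-group 4-bit weight quantization (illustrative sketch)."""
    w = weights.reshape(-1, group_size)
    # Signed 4-bit range is [-8, 7]; scale each group by its max magnitude
    scales = np.abs(w).max(axis=1, keepdims=True) / 7.0
    q = np.clip(np.round(w / scales), -8, 7).astype(np.int8)
    return q, scales

def dequantize_w4(q: np.ndarray, scales: np.ndarray, shape):
    # Activations stay in 16-bit float; weights are expanded on the fly
    return (q.astype(np.float16) * scales.astype(np.float16)).reshape(shape)

w = np.random.randn(256, 128).astype(np.float32)
q, s = quantize_w4(w)
w_hat = dequantize_w4(q, s, w.shape)
mean_err = np.abs(w - w_hat.astype(np.float32)).mean()
```

Group-wise scaling keeps the quantization error bounded by each group's own dynamic range, which is why 4-bit weights can preserve most of the source model's accuracy.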
Model Details

Qwen2 is a language model series including decoder-only language models of different sizes. For each size, we release both the base language model and the aligned chat model. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, grouped-query attention, etc. Additionally, Qwen2 has an improved tokenizer that adapts to multiple natural languages and code.
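The SwiGLU activation mentioned above gates one linear projection of the input with the SiLU of another. A minimal NumPy sketch of such a feed-forward block (the weight names and tiny dimensions are illustrative, not the checkpoint's actual tensors):

```python
import numpy as np

def silu(x):
    # SiLU / Swish activation: x * sigmoid(x)
    return x / (1.0 + np.exp(-x))

def swiglu_ffn(x, w_gate, w_up, w_down):
    """SwiGLU feed-forward block: SiLU(x @ W_gate) gates (x @ W_up)."""
    return (silu(x @ w_gate) * (x @ w_up)) @ w_down

rng = np.random.default_rng(0)
d_model, d_ff = 8, 16
x = rng.standard_normal((2, d_model))
out = swiglu_ffn(
    x,
    rng.standard_normal((d_model, d_ff)),
    rng.standard_normal((d_model, d_ff)),
    rng.standard_normal((d_ff, d_model)),
)
```

The gated form uses three weight matrices instead of the two in a classic MLP block, which is why SwiGLU feed-forward layers typically shrink `d_ff` to keep the parameter count comparable.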

Source Model Evaluation

Note: This table shows the source model evaluation, not the quantized model evaluation. For source model evaluation details, refer to the Qwen2-1.5B-Instruct Evaluation Result.

| Datasets | Qwen1.5-0.5B-Chat | Qwen2-0.5B-Instruct | Qwen1.5-1.8B-Chat | Qwen2-1.5B-Instruct |
|----------|-------------------|---------------------|-------------------|---------------------|
| MMLU | 35.0 | 37.9 | 43.7 | 52.4 |
| HumanEval | 9.1 | 17.1 | 25.0 | 37.8 |
| GSM8K | 11.3 | 40.1 | 35.3 | 61.6 |
| C-Eval | 37.2 | 45.2 | 55.3 | 63.8 |
| IFEval (Prompt Strict-Acc.) | 14.6 | 20.0 | 16.8 | 29.0 |
Model Inference

Users can run large language models on Qualcomm chips using either of the following methods:
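For reference, the *source* model can also be run off-device with Hugging Face transformers (this is illustrative only, not the Qualcomm deployment path). Qwen2 instruct models use the ChatML prompt format, which `apply_chat_template` produces and which can be sketched by hand:

```python
# Typical transformers usage (requires downloading the checkpoint):
#
#   from transformers import AutoModelForCausalLM, AutoTokenizer
#   tok = AutoTokenizer.from_pretrained("Qwen/Qwen2-1.5B-Instruct")
#   model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-1.5B-Instruct")
#   prompt = tok.apply_chat_template(messages, tokenize=False,
#                                    add_generation_prompt=True)
#
# The ChatML prompt that template renders looks like this:

def build_chatml_prompt(messages):
    """Render a message list into Qwen2's ChatML prompt format."""
    parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
             for m in messages]
    parts.append("<|im_start|>assistant\n")  # generation continues from here
    return "".join(parts)

prompt = build_chatml_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello"},
])
```

On-device runtimes generally expect the same rendered prompt string, so the template format carries over regardless of which backend executes the model.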

License
Source Model: APACHE-2.0
Deployable Model: APLUX-MODEL-FARM-LICENSE