
MobileNet-V3-Small is the lightweight variant of the third generation of the MobileNet family, designed for resource-constrained mobile and edge devices. Its architecture was found with automated machine learning (AutoML) techniques and combines depthwise separable convolutions, the hard-swish activation function, and a Squeeze-and-Excitation (SE) attention mechanism to improve both accuracy and efficiency. Compared to MobileNet-V3-Large, MobileNet-V3-Small has a more compact architecture with fewer parameters, making it well suited to image classification, object detection, and other tasks in low-power environments. Typical on-device applications include real-time face recognition, gesture recognition, and image classification, where it delivers good accuracy at low latency.
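To make the building blocks named above concrete, the sketch below implements a depthwise separable convolution, the hard-swish activation, and a minimal Squeeze-and-Excitation block in PyTorch. This is a simplified illustration under assumed layer sizes, not the exact configuration used in MobileNet-V3-Small (which, for instance, gates its SE blocks with a hard-sigmoid).
```python
# Simplified PyTorch sketch of MobileNet-V3 building blocks.
# Channel sizes and the sigmoid gate are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DepthwiseSeparableConv(nn.Module):
    """A 3x3 depthwise convolution followed by a 1x1 pointwise convolution."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, stride, 1, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))


class HardSwish(nn.Module):
    """hard-swish(x) = x * ReLU6(x + 3) / 6, a cheap approximation of swish."""
    def forward(self, x):
        return x * F.relu6(x + 3.0) / 6.0


class SqueezeExcite(nn.Module):
    """Channel attention: global average pool, two 1x1 convs, per-channel gate."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        squeezed = max(1, channels // reduction)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc1 = nn.Conv2d(channels, squeezed, kernel_size=1)
        self.fc2 = nn.Conv2d(squeezed, channels, kernel_size=1)

    def forward(self, x):
        scale = self.pool(x)                    # squeeze: NxCx1x1
        scale = torch.relu(self.fc1(scale))     # reduce
        scale = torch.sigmoid(self.fc2(scale))  # excite: per-channel weights
        return x * scale                        # reweight the feature map


if __name__ == "__main__":
    feat = torch.randn(1, 16, 56, 56)
    out = SqueezeExcite(16)(HardSwish()(DepthwiseSeparableConv(16, 16)(feat)))
    print(out.shape)  # torch.Size([1, 16, 56, 56])
```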
Source model
- Input shape: 224x224
- Number of parameters: 2.42M
- Model size: 9.71 MB
- Output shape: 1x1000
Source model repository: MobileNet-v3-Small
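If the source model corresponds to the torchvision implementation (an assumption; use whichever repository is linked above), the snippet below reproduces the figures listed here: a 1x3x224x224 input yields a 1x1000 output, and the parameter count is close to the listed value, though it may differ slightly between implementations.
```python
# Sanity-check the listed specs against torchvision's MobileNet-V3-Small.
# Assumes torchvision >= 0.13 (for the `weights` argument); the exact
# parameter count depends on the source implementation.
import torch
from torchvision.models import mobilenet_v3_small

model = mobilenet_v3_small(weights=None).eval()

num_params = sum(p.numel() for p in model.parameters())
print(f"parameters: {num_params / 1e6:.2f}M")    # roughly 2.5M in torchvision

with torch.no_grad():
    logits = model(torch.randn(1, 3, 224, 224))  # NCHW, 224x224 input
print("output shape:", tuple(logits.shape))      # (1, 1000)
```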
Click Model Conversion Reference in the Performance Reference panel on the right to view the model conversion steps.
The model performance benchmarks and inference example code provided on Model Farm are all based on the APLUX AidLite SDK.
SDK installation
For details, please refer to the AidLite Developer Documentation.
- Install AidLite SDK
```bash
# install aidlite sdk c++ api
sudo aid-pkg -i aidlite-sdk
# install aidlite sdk python api
python3 -m pip install pyaidlite -i https://mirrors.aidlux.com --trusted-host mirrors.aidlux.com
```
- Verify AidLite SDK
```bash
# check the aidlite sdk c++ library version (queried through the python binding)
python3 -c "import aidlite; print(aidlite.get_library_version())"
# check the aidlite sdk python api version
python3 -c "import aidlite; print(aidlite.get_py_library_version())"
```
Inference example
- Click Model & Test Code to download the model files and inference code. The file structure is shown below:
```
/model_farm_{model_name}_aidlite
|__ models     # folder where model files are stored
|__ python     # aidlite python model inference example
|__ cpp        # aidlite cpp model inference example
|__ README.md
```
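The python and cpp folders contain the actual AidLite inference code for this model. For orientation only, the sketch below shows typical pre- and post-processing around a 224x224 ImageNet classifier with a 1x1000 output. The run_model function is a placeholder standing in for the AidLite inference call shipped in the downloaded example, and the normalization constants are the common ImageNet values, which may differ from what the provided code uses.
```python
# Generic pre/post-processing sketch for a 224x224 classifier with 1000 classes.
# run_model is a placeholder for the AidLite inference call from the downloaded
# python example; normalization constants are standard ImageNet assumptions.
import numpy as np
from PIL import Image


def preprocess(image_path: str) -> np.ndarray:
    img = Image.open(image_path).convert("RGB").resize((224, 224))
    x = np.asarray(img, dtype=np.float32) / 255.0
    mean = np.array([0.485, 0.456, 0.406], dtype=np.float32)
    std = np.array([0.229, 0.224, 0.225], dtype=np.float32)
    x = (x - mean) / std
    return x[np.newaxis, ...]  # 1x224x224x3; transpose to NCHW if the model requires it


def postprocess(logits: np.ndarray, top_k: int = 5):
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    top = np.argsort(probs)[::-1][:top_k]
    return [(int(i), float(probs[i])) for i in top]


def run_model(x: np.ndarray) -> np.ndarray:
    """Placeholder: replace with the AidLite inference call from the example code."""
    raise NotImplementedError


if __name__ == "__main__":
    # Raises NotImplementedError until run_model is wired to the AidLite example.
    logits = run_model(preprocess("test.jpg")).reshape(-1)  # 1x1000 -> (1000,)
    for cls_id, prob in postprocess(logits):
        print(cls_id, f"{prob:.4f}")
```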