SigLIP-so400m is a lightweight multimodal vision-language model developed by Google. It is built on a shape-optimized Vision Transformer (the SoViT architecture) and trained with a sigmoid loss rather than the softmax contrastive loss used in CLIP. Because each image-text pair is scored independently, training is more efficient, remains effective at small batch sizes, and suffers less interference from negative samples. The model performs well on tasks such as zero-shot image classification and image-text retrieval, making it suitable for edge deployment and large-scale inference. It supports natural language prompts for image understanding and shows strong generalization in open-vocabulary settings.
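The key difference from CLIP is the loss: instead of a softmax over the whole batch, each image-text pair is scored independently with a sigmoid. The snippet below is a minimal sketch of this pairwise sigmoid loss; the function and variable names are illustrative rather than taken from the released training code, and the temperature and bias initial values follow the SigLIP paper.

```python
import math
import torch
import torch.nn.functional as F

def siglip_pairwise_sigmoid_loss(img_emb, txt_emb, log_t, b):
    """img_emb, txt_emb: L2-normalized [N, D] embeddings of N matching image-text pairs."""
    logits = img_emb @ txt_emb.T * log_t.exp() + b       # [N, N] pairwise similarity scores
    labels = 2.0 * torch.eye(logits.size(0)) - 1.0       # +1 for matching pairs, -1 for all others
    # Every pair gets an independent sigmoid score, so no batch-wide softmax
    # normalization over negatives is needed (the key difference from CLIP).
    return -F.logsigmoid(labels * logits).sum() / logits.size(0)

# Toy usage: batch of 4 pairs with 1152-d embeddings (the embedding size of SigLIP-so400m).
img = F.normalize(torch.randn(4, 1152), dim=-1)
txt = F.normalize(torch.randn(4, 1152), dim=-1)
loss = siglip_pairwise_sigmoid_loss(img, txt,
                                    log_t=torch.tensor(math.log(10.0)),  # temperature init from the paper
                                    b=torch.tensor(-10.0))               # bias init from the paper
```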
Source model
- Input shape: [1x3x384x384], [1x64]
- Number of parameters: 408.39M, 428.9M
- Model size: 1.60G, 1.68G
- Output shape: [1x1152], [1x1152]
The source model can be found here
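The two input shapes correspond to the image input and the tokenized text input, and each branch produces a 1152-dimensional embedding. As a quick sanity check, the hedged sketch below loads a public Hugging Face checkpoint and verifies these shapes; the checkpoint name `google/siglip-so400m-patch14-384` is an assumption inferred from the 384x384 input and 1152-d output, not stated on this page.

```python
# Hedged sketch: check that the source model's inputs/outputs match the shapes listed above.
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModel

ckpt = "google/siglip-so400m-patch14-384"   # assumed checkpoint name
model = AutoModel.from_pretrained(ckpt).eval()
processor = AutoProcessor.from_pretrained(ckpt)

image = Image.new("RGB", (384, 384))        # placeholder image; use a real photo in practice
inputs = processor(text=["a photo of a cat"], images=image,
                   padding="max_length", return_tensors="pt")  # SigLIP text is padded to 64 tokens

with torch.no_grad():
    image_emb = model.get_image_features(pixel_values=inputs["pixel_values"])
    text_emb = model.get_text_features(input_ids=inputs["input_ids"])

print(inputs["pixel_values"].shape, inputs["input_ids"].shape)  # [1, 3, 384, 384], [1, 64]
print(image_emb.shape, text_emb.shape)                          # [1, 1152], [1, 1152]
```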
When the user has fine-tuned the source model, the model conversion process must be performed again.
Users can refer to either of the following two methods to complete the model conversion (an illustrative pre-conversion sketch follows this list):
- Using AIMO for model conversion: click Model Conversion Reference in the Performance Reference section on the right to view the conversion steps.
- Using Qualcomm QNN for model conversion: please refer to the Qualcomm QNN Documentation.
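The exact conversion flow is covered by the AIMO and Qualcomm QNN documentation linked above. As a rough illustration of a common pre-conversion step after fine-tuning, the sketch below exports the image encoder to ONNX with the input and output shapes listed earlier; the checkpoint name, wrapper class, and output file name are assumptions for illustration, and whether ONNX is the required intermediate format depends on the conversion tool you use.

```python
# Illustrative only: export a (fine-tuned) SigLIP image encoder to ONNX before conversion.
import torch
from transformers import AutoModel

model = AutoModel.from_pretrained("google/siglip-so400m-patch14-384").eval()  # assumed checkpoint

class ImageEncoder(torch.nn.Module):
    """Wrapper so the exported graph has one [1, 3, 384, 384] input and one [1, 1152] output."""
    def __init__(self, siglip):
        super().__init__()
        self.siglip = siglip
    def forward(self, pixel_values):
        return self.siglip.get_image_features(pixel_values=pixel_values)

dummy = torch.randn(1, 3, 384, 384)
torch.onnx.export(ImageEncoder(model), dummy, "siglip_image_encoder.onnx",  # assumed file name
                  input_names=["pixel_values"], output_names=["image_embedding"],
                  opset_version=17)
```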
The model performance benchmarks and example code provided by Model Farm are all implemented based on the AidLux AidLite SDK.
For models in .bin format, you can use either of the following two inference engines to run inference on Qualcomm chips:
- Inference using APLUX AidLite: please refer to the APLUX AidLite Developer Documentation
- Inference using Qualcomm QNN: please refer to the Qualcomm QNN Documentation
Inference Example Code
The inference example code is implemented using the AidLite SDK.
Click Model & Code to download the model files and the inference code package. The file structure is as follows:
/model_farm_{model_name}_aidlite
|__ models # folder where model files are stored
|__ python # aidlite python model inference example
|__ cpp # aidlite cpp model inference example
|__ README.md