Transfer Learning for Low-Data Computer Vision
DOI: https://doi.org/10.15662/IJRAI.2018.0101002

Keywords: Transfer Learning, Low-Data, Computer Vision, Fine-Tuning, Feature Extraction, Low-Resource Learning

Abstract:
In domains where collecting labeled data is costly or impractical, such as medical imaging, remote sensing, or specialized industrial inspection, transfer learning has emerged as a powerful approach for building high-performing computer vision models from limited datasets. Transfer learning capitalizes on knowledge captured by large, pre-trained models, such as those trained on ImageNet, and adapts them to domain-specific tasks through fine-tuning, feature extraction, or layer freezing. This paper examines transfer learning strategies tailored to low-data scenarios and evaluates their effectiveness in achieving robust generalization while mitigating overfitting. We conduct systematic comparisons across techniques, including freezing different model layers, fine-tuning with various learning rates, and leveraging intermediate feature representations. Simulated experiments on benchmark datasets and synthetic low-data splits reveal that the best generalization is achieved by freezing early layers and combining feature extraction with shallow fine-tuning. We also explore parameter-efficient adaptation methods such as low-rank adaptation and binary weight approximation to reduce overfitting risk and computational overhead. The results demonstrate that transfer learning consistently produces significant performance gains, often improving accuracy by 10–25% over training from scratch, especially when fewer than 100 labeled samples per class are available. These findings underscore the value of transfer learning as a standard technique for low-data computer vision and offer practical guidelines for model selection and adaptation. The paper concludes with a discussion of source-target domain similarity and parameter efficiency. Future directions include unsupervised pre-training, meta-learning, and multi-task adaptation to further bolster performance under data scarcity.
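To make the early-layer freezing and shallow fine-tuning strategy concrete, the sketch below adapts an ImageNet-pre-trained backbone to a small target dataset. It is a minimal illustration, assuming PyTorch with torchvision; the ResNet-18 backbone, the hypothetical 10-class target task, and the learning rate are illustrative assumptions, not settings reported in the paper.

```python
# Minimal sketch of early-layer freezing + shallow fine-tuning.
# Assumptions: PyTorch + torchvision, ResNet-18 pre-trained on ImageNet,
# and a hypothetical 10-class target task.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 10  # assumed target-task class count

# Load an ImageNet-pre-trained backbone.
model = models.resnet18(pretrained=True)

# Freeze the early layers so their generic features are reused as fixed
# feature extractors; only the last residual block stays trainable.
for name, param in model.named_parameters():
    if not name.startswith(("layer4", "fc")):
        param.requires_grad = False

# Replace the classification head for the target task
# (the new layer is trainable by default).
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

# Shallow fine-tuning: update only the unfrozen parameters, with a small
# learning rate to limit overfitting when labels per class are scarce.
optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad),
    lr=1e-3,
    momentum=0.9,
)
```

Freezing everything below the final block keeps the number of trainable parameters small, which is the main lever against overfitting when only tens of labeled samples per class are available.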