A Comparative Analysis of Deep Learning Models for Detection of Lumpy Skin Disease with emphasis on Shifted Window Transformers

Authors

  • George Mwangi Muhindi Department of Information Technology, Murang’a University of Technology, Murang’a, Kenya https://orcid.org/0000-0001-5373-3249
  • Geoffrey Mariga Wambugu Department of Information Technology, Murang’a University of Technology, Murang’a, Kenya
  • Aaron Mogeni Oirere Department of Information Technology, Murang’a University of Technology, Murang’a, Kenya

DOI:

https://doi.org/10.24203/xj9mqc91

Keywords:

Deep learning, Lumpy Skin Disease (LSD), Vision Transformers, livestock diagnosis, livestock analysis, model explainability, Shifted Window Transformers

Abstract

Lumpy Skin Disease (LSD) in cattle is an increasingly prevalent viral infection with significant economic impact, and traditional detection methods are labor-intensive and slow. In this study, five state-of-the-art deep learning (DL) architectures—ResNet50, EfficientNetB0, MobileNetV2, Vision Transformer (ViT-B16), and Swin Transformer Tiny (Swin-T)—were compared for image-based LSD classification. Publicly available Kaggle datasets of infected and healthy cattle were used. All models were fine-tuned via transfer learning and evaluated on classification accuracy, F1-score, inference time, explainability (via Grad-CAM), and real-world deployability. Swin-T achieved the highest classification accuracy (95.3%), while MobileNetV2 emerged as the most deployment-friendly model. Grad-CAM visualizations confirmed that the transformer-based models captured relevant lesion features with greater spatial sensitivity than the CNNs. The study highlights the promise of hybrid transformer-CNN models for practical livestock diagnostics, especially in resource-constrained environments.
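As an aside, the headline evaluation metrics named in the abstract (accuracy and F1-score) for a binary LSD classifier can be computed as in the minimal sketch below. The labels here are made up for illustration (1 = infected, 0 = healthy); this is not the paper's evaluation code or data.

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the ground-truth labels."""
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

def f1_score(y_true, y_pred, positive=1):
    """Harmonic mean of precision and recall for the positive class."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0

# Hypothetical labels: 1 = LSD-infected, 0 = healthy
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 1, 0, 0, 0, 1, 1, 0]
print(accuracy(y_true, y_pred))   # 0.75
print(f1_score(y_true, y_pred))   # 0.75
```

In practice a library implementation (e.g. scikit-learn's `f1_score`) would be used; the hand-rolled version above only shows what the reported numbers measure.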


Published

2025-11-06

How to Cite

A Comparative Analysis of Deep Learning Models for Detection of Lumpy Skin Disease with emphasis on Shifted Window Transformers. (2025). International Journal of Computer and Information Technology (2279-0764), 14(3). https://doi.org/10.24203/xj9mqc91
