Enhanced Surveillance: Triple Background Subtraction with YOLO V8
Keywords:
Abandoned objects; Background subtraction; CNN; Segmentation; Video surveillance; YOLO V8

Abstract
In congested areas such as malls, airports, and train stations, video surveillance facilitates monitoring and provides a sense of security, yet surveillance technology still needs to become more robust and efficient. Owing to increasing terrorist and criminal activity, handling unattended static artefacts on public premises has become a high-priority task: to mitigate human and financial loss, abandoned objects must be dealt with at the utmost priority. Identifying abandoned or removed objects in surveillance footage is challenging because of occlusion and sudden changes in lighting. This paper proposes a novel technique for automatically detecting and classifying abandoned objects, particularly bags. The method employs a robust triple background subtraction technique that extracts the background using three sub-models. Once the foreground is extracted, graph-based segmentation identifies candidate static objects, and the final static objects are selected using a stability rank calculation. A Convolutional Neural Network (CNN)-based classifier, You Only Look Once (YOLO) V8, then classifies the abandoned artefacts. The approach is validated on three benchmark datasets (PETS 2006, PETS 2007, and i-LIDS AVSS) using precision, recall, and accuracy as performance measures. In realistic conditions such as poor illumination and occlusion, the proposed solution outperforms existing methods and reduces false positives, lowering the false alarm rate. It achieves an accuracy of 99.5%, precision of 93%, and recall of 90%, considerably higher than earlier systems.
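The core idea of multi-timescale background subtraction can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes three running-average sub-models with short, medium, and long learning rates, and flags a pixel as a candidate static object when it matches the short-term background (it has stopped moving) but still differs from the long-term background (it was not there originally). The class name, learning rates, and threshold are illustrative choices.

```python
import numpy as np

class TripleBackgroundModel:
    """Sketch of a triple background subtraction scheme.

    Assumption: three exponential running-average sub-models at
    short, medium, and long timescales; the paper's exact sub-model
    update rules and stability-rank step may differ.
    """

    def __init__(self, shape, alphas=(0.5, 0.05, 0.005)):
        # One background estimate per timescale (fast -> slow).
        self.models = [np.zeros(shape, dtype=float) for _ in alphas]
        self.alphas = alphas

    def update(self, frame):
        frame = frame.astype(float)
        for i, a in enumerate(self.alphas):
            # Exponential moving average toward the current frame.
            self.models[i] = (1.0 - a) * self.models[i] + a * frame

    def static_foreground(self, frame, thresh=25.0):
        frame = frame.astype(float)
        short, _medium, long_ = self.models
        # Object has stopped moving: frame agrees with fast model.
        stopped = np.abs(frame - short) < thresh
        # ...but was not part of the original scene: frame still
        # disagrees with the slow, long-term model.
        not_background = np.abs(frame - long_) > thresh
        return stopped & not_background


# Usage: an object appears and stays put; after it has been static
# long enough, the mask flags it while the scene background stays clear.
bg = TripleBackgroundModel((10, 10))
empty = np.zeros((10, 10))
for _ in range(50):
    bg.update(empty)           # learn the empty scene
scene = empty.copy()
scene[2:5, 2:5] = 200.0        # a bag is left behind
for _ in range(20):
    bg.update(scene)           # bag stays static across frames
mask = bg.static_foreground(scene)
```

In a full pipeline, connected regions of this mask would feed the graph-based segmentation and stability-rank stages, and the surviving crops would be passed to the YOLO V8 classifier.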