Deep Learning Based Integrated Model using Deep Belief Network with Semi Supervised GAN for Anomaly Detection in Surveillance Video
Keywords: Video surveillance, video anomaly detection, deep belief networks, anomaly detection, deep learning, semi-supervised GAN.

Abstract
In the modern era of smart cities, video surveillance has grown in importance. To monitor infrastructure and ensure public safety, numerous surveillance cameras are installed in both public and private spaces. These cameras produce large volumes of video data, making it impractical for a human observer to manually watch hours of recordings every day in search of unwanted or unusual activity. Detecting abnormal behavior amounts to determining how observed behavior deviates from normal behavior; in a surveillance setting, such events range from kidnapping and traffic accidents to violence. Because anomalous events occur only rarely, video anomaly detection in surveillance footage is a challenging scientific problem. This paper presents AD-DBNSSGAN, a multi-modal semi-supervised deep learning framework based on deep belief networks and generative adversarial networks (GANs) for identifying anomalous instances in critical surveillance scenarios. The framework is significant because it can be trained with only weakly labeled normal video or image samples. Because no suitable public surveillance dataset was available, we contributed a new dataset of surveillance images, on which the proposed framework is evaluated. The proposed framework can detect anomalies in real-world indoor and outdoor surveillance locations, and the results demonstrate that it is competitive with other state-of-the-art techniques.
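The abstract does not give the exact loss formulation of AD-DBNSSGAN, but the semi-supervised GAN idea it relies on can be illustrated with a minimal sketch. In the common formulation (Goodfellow-style GANs extended to semi-supervised learning), the discriminator outputs K real classes plus one extra "fake" class: labeled normal frames are trained with cross-entropy on their class, generated frames are pushed toward the fake class, and at test time the fake-class probability serves as an anomaly score. All function names, shapes, and the choice of K below are hypothetical, for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def discriminator_losses(logits_labeled, labels, logits_fake):
    """Semi-supervised GAN discriminator losses (sketch):
    - supervised term: cross-entropy on weakly labeled normal frames
    - unsupervised term: generated frames must land in the fake class
    The discriminator head has K real classes + 1 fake class (last index).
    """
    k_plus_1 = logits_labeled.shape[1]
    fake_class = k_plus_1 - 1

    p_labeled = softmax(logits_labeled)
    sup_loss = -np.mean(
        np.log(p_labeled[np.arange(len(labels)), labels] + 1e-12))

    p_fake = softmax(logits_fake)
    unsup_loss = -np.mean(np.log(p_fake[:, fake_class] + 1e-12))
    return sup_loss, unsup_loss

def anomaly_score(logits):
    """Score a frame by the probability mass the discriminator places on
    the fake class: frames unlike the learned normal patterns score high."""
    return softmax(logits)[..., -1]

# Toy usage with random logits: batch of 4, K = 2 normal classes + 1 fake.
logits_labeled = rng.normal(size=(4, 3))
labels = np.array([0, 1, 0, 1])
logits_fake = rng.normal(size=(4, 3))
sup, unsup = discriminator_losses(logits_labeled, labels, logits_fake)
scores = anomaly_score(logits_fake)
```

This is only a loss-level sketch: the paper's actual framework additionally uses a deep belief network for feature extraction, which is omitted here.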
License

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.