Anomaly Detection in Surveillance Videos Using Hybrid Deep Learning Model DBNSSGAN
Keywords:
Deep learning, video anomaly detection, surveillance videos, anomaly detection, deep belief network
Abstract
Urban planners and academics are influenced by the contemporary notion of smart cities to create modern, secure, and sustainable infrastructure that offers a respectable standard of living to its inhabitants. To improve citizen safety and well-being, video surveillance cameras have been installed to meet this demand. Even with today's technological advances, detecting abnormal events in CCTV footage and surveillance video remains a difficult and time-consuming task for human operators. Video anomaly detection automatically identifies surveillance videos that contain anomalous events, and prior work has steadily improved the ability to determine whether a video contains such events. Since the development of deep learning methods, researchers have become increasingly interested in automatic video surveillance. The task of video anomaly detection can be approached as a semi-supervised learning problem because of the strong bias in the datasets towards normal samples. The widely used reconstruction techniques train the network solely on normal images. Assuming that the network cannot precisely reconstruct anomalous regions, these approaches identify anomalous events by comparing the input with the reconstructed image. However, these approaches have a significant drawback: the network generalizes well enough to also reconstruct anomalous regions, which narrows the difference between the reconstructed and anomalous input images and decreases the capacity to detect anomalies. In this paper, a semi-supervised Generative Adversarial Network (SSGAN) is combined with a Deep Belief Network (DBN) to detect abnormal events in surveillance video, which greatly improves the quality of reconstruction and classifies anomalies effectively. The outcomes are compared with state-of-the-art deep learning methods on two well-known surveillance datasets.
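The reconstruction-based detection idea summarized above can be illustrated with a short sketch. The snippet below is a minimal, hypothetical illustration rather than the paper's implementation: a model trained only on normal frames reconstructs each test frame, and frames whose reconstruction error exceeds a threshold are flagged as anomalous. The `reconstruct` placeholder stands in for the trained DBN-SSGAN generator; all function names and parameters are assumptions introduced purely for illustration.

```python
# Minimal sketch (not the authors' implementation) of reconstruction-based
# anomaly scoring: a model trained only on normal frames is assumed to
# reconstruct anomalous regions poorly, so a large per-frame reconstruction
# error is treated as evidence of an anomaly.

import numpy as np


def reconstruct(frame: np.ndarray) -> np.ndarray:
    """Hypothetical placeholder for the trained reconstruction model
    (e.g. the DBN-SSGAN generator)."""
    return frame  # identity here; a real model would return its reconstruction


def anomaly_scores(frames: np.ndarray) -> np.ndarray:
    """Per-frame mean squared reconstruction error."""
    return np.array([np.mean((f - reconstruct(f)) ** 2) for f in frames])


def flag_anomalies(frames: np.ndarray, threshold: float) -> np.ndarray:
    """Boolean mask of frames whose reconstruction error exceeds `threshold`.

    The threshold would typically be chosen on held-out normal data,
    for example as a high percentile of the normal-frame error distribution.
    """
    return anomaly_scores(frames) > threshold


if __name__ == "__main__":
    # Toy example: 16 random grayscale "frames" of size 64x64.
    rng = np.random.default_rng(0)
    video = rng.random((16, 64, 64)).astype(np.float32)
    scores = anomaly_scores(video)
    print(flag_anomalies(video, threshold=float(np.percentile(scores, 95))))
```

In practice the anomaly score is usually smoothed over time and normalized per video before thresholding, so that a single noisy frame does not trigger a false alarm.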