Analyzing and Predicting Crowd Behavior using Machine Learning
Keywords:
Communities, Crowd, Behaviours, Emotions, Low-Level, Labels, Motion

Abstract
Learning a model of crowd behaviour from video sequences is essential to understanding how crowds behave. Most existing approaches rely only on low-level visual features, since crowd datasets provide no ground truth beyond the crowd-behaviour labels themselves; the semantic gap between basic motion and appearance features and the abstract notion of crowd behaviour, however, is large. In this study we propose an attribute-based approach to bridge this gap. Although similar attribute-based methods have recently been used for object and action recognition, to the best of our knowledge this is the first work to show that crowd emotions can serve as attributes for understanding crowd behaviour. The central idea is to train a set of emotion-based classifiers that describe the motion of the crowd. To this end, we collect a large set of video clips and annotate each with both "crowd behaviour" and "crowd emotion" tags. Results of the proposed method on our dataset show that crowd emotions enable richer descriptions of crowd behaviours. We intend to release the dataset alongside this publication so that the community can use it as a benchmark.
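To make the attribute-based pipeline concrete, the sketch below shows one plausible reading of the abstract: per-emotion classifiers are trained on low-level motion/appearance features, and their scores are then stacked into an emotion-attribute vector that feeds the final crowd-behaviour classifier. This is a minimal illustration only; the emotion tag set, the scikit-learn models, and the function names are assumptions, not the paper's actual implementation, and the low-level feature extraction step is not shown.

```python
# Sketch of an attribute-based crowd-behaviour pipeline (illustrative, not the paper's code):
# low-level features -> per-emotion attribute classifiers -> emotion scores -> behaviour classifier.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.linear_model import LogisticRegression

# Assumed crowd-emotion tag set; the paper's annotation scheme may differ.
EMOTIONS = ["angry", "happy", "excited", "scared", "sad", "neutral"]

def train_emotion_classifiers(X_lowlevel, emotion_labels):
    """Train one binary (one-vs-rest) classifier per crowd-emotion tag."""
    classifiers = {}
    for i, emotion in enumerate(EMOTIONS):
        clf = LinearSVC()
        clf.fit(X_lowlevel, (emotion_labels == i).astype(int))
        classifiers[emotion] = clf
    return classifiers

def emotion_scores(classifiers, X_lowlevel):
    """Stack per-emotion decision scores into an attribute feature vector per clip."""
    return np.column_stack(
        [classifiers[e].decision_function(X_lowlevel) for e in EMOTIONS]
    )

def train_behaviour_classifier(classifiers, X_lowlevel, behaviour_labels):
    """Train the final behaviour classifier on the emotion-attribute representation."""
    X_emotion = emotion_scores(classifiers, X_lowlevel)
    behaviour_clf = LogisticRegression(max_iter=1000)
    behaviour_clf.fit(X_emotion, behaviour_labels)
    return behaviour_clf
```

Under these assumptions, a clip is classified by first extracting its low-level features, mapping them to emotion scores with `emotion_scores`, and then passing that vector to the behaviour classifier; the intermediate emotion scores are what provide the richer behaviour descriptions the abstract refers to.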