Human Activity Detection using Profound Learning with Improved Convolutional Neural Networks

Authors

  • S. Anthonisamy, P. Prabhu

Keywords:

Human Activity Detection, Deep Learning Techniques, Convolutional Neural Networks, Machine Learning Techniques, Healthcare Monitoring

Abstract

Human Activity Recognition (HAR) is an active area of research, driven largely by the availability of low-cost sensors and accelerometers, live data streaming, and advances in technology. HAR involves identifying human activities such as walking, running, sitting, sleeping, standing, showering, cooking, driving, opening a door, and abnormal activities, using data collected from wearable sensors or accelerometers. HAR approaches analyse data acquired from sensing devices, including vision and embedded sensors, and serve as assistive technologies in medical diagnostics, particularly for monitoring elderly people in healthcare. These approaches attempt to predict a person's movements, often indoors, from sensor data such as smartphone accelerometer readings. As classification problems, HAR tasks are challenging because they involve time-series data, where Deep Learning Techniques (DLTs) such as Convolutional Neural Networks (CNNs) can learn suitable features directly from the raw signals while building their models. This paper proposes Human Activity Detection using Profound Learning (HADPL), a CNN-based approach that detects human activities from captured accelerometer data. HADPL was tested on the WISDM_Act_v1.1 dataset and evaluated in terms of precision, accuracy, recall, and F1-score, achieving accuracy of up to 95 percent. The proposed technique can be applied to monitoring elderly people using captured or stored HAR data.
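
The abstract does not include code, but a minimal sketch of the kind of 1D CNN classifier it describes might look like the following. The window length, layer sizes, and training settings are illustrative assumptions for 20 Hz triaxial accelerometer windows and the six WISDM_Act_v1.1 activity classes, not the authors' exact HADPL architecture.

```python
# Illustrative sketch only: a 1D CNN over fixed-length triaxial accelerometer
# windows (e.g., WISDM_Act_v1.1). Layer sizes, window length, and training
# settings are assumptions, not the HADPL configuration from the paper.
from tensorflow import keras
from tensorflow.keras import layers

WINDOW = 80      # assumed 4 s windows at 20 Hz
CHANNELS = 3     # x, y, z acceleration
NUM_CLASSES = 6  # walking, jogging, sitting, standing, upstairs, downstairs

model = keras.Sequential([
    keras.Input(shape=(WINDOW, CHANNELS)),
    layers.Conv1D(64, kernel_size=5, activation="relu"),
    layers.MaxPooling1D(pool_size=2),
    layers.Conv1D(64, kernel_size=5, activation="relu"),
    layers.GlobalAveragePooling1D(),
    layers.Dense(64, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# x_train: (n_windows, WINDOW, CHANNELS) float array of accelerometer windows
# y_train: (n_windows,) integer activity labels
# model.fit(x_train, y_train, epochs=20, batch_size=64, validation_split=0.2)
```

Predictions on held-out windows can then be scored with precision, recall, and F1 (for example, via scikit-learn's classification_report) to produce the kind of evaluation summary reported above.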

Published

26.03.2024

How to Cite

S. Anthonisamy, P. Prabhu. (2024). Human Activity Detection using Profound Learning with Improved Convolutional Neural Networks. International Journal of Intelligent Systems and Applications in Engineering, 12(21s), 606–616. Retrieved from https://www.ijisae.org/index.php/IJISAE/article/view/5457

Issue

Vol. 12 No. 21s (2024)

Section

Research Article