Efficient Project Management in Construction Sites to Monitor and Track the Employees using Multi-Modal Deep Learning Model
Keywords:
Multi-Modal Deep Learning, Safety, Construction Sites, Tracking

Abstract
Past research on automated safety monitoring with computer vision has generally concentrated on distinct components, addressing each safety issue in isolation, because of the wide variety of safety issues that can arise on site. Recognizing the working status of construction equipment and following the movement of personnel are examples of such research. Several researchers have adopted the fundamental principle underlying detection-based tracking systems: newly detected objects either start new tracks or are mapped to existing tracks so that identities are maintained over a predetermined period of time. In this paper, an efficient project management scheme is developed using artificial intelligence. The scheme enables construction sites to monitor and track employees, and it uses a multi-modal deep learning (MMDL) model to track employee safety. A simulation with moving workers is performed in Python to test the efficacy of the MMDL model, which is evaluated in terms of accuracy, precision, recall, and F-measure. These performance metrics are used in the present study to check whether the MMDL model efficiently classifies people who are working without safety equipment. The results show more efficient classification of instances than other existing state-of-the-art models.
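The detection-based tracking principle described in the abstract can be sketched in a few lines of Python. This is a minimal illustration, not the paper's implementation: it uses greedy intersection-over-union (IoU) matching with an illustrative threshold, whereas real trackers often use Hungarian assignment and motion models. All function names and parameters here are assumptions for the sketch.

```python
# Sketch of detection-based tracking: each new detection is either
# matched to an existing track (identity maintenance) or starts a
# new track. Boxes are (x1, y1, x2, y2) tuples; thresholds are
# illustrative, not from the paper.

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def update_tracks(tracks, detections, iou_thresh=0.3):
    """Greedily map detections onto tracks; unmatched ones start new tracks."""
    next_id = max(tracks, default=0) + 1
    unclaimed = set(tracks)
    for det in detections:
        # Find the best unclaimed track for this detection.
        best_id, best_iou = None, iou_thresh
        for tid in unclaimed:
            score = iou(tracks[tid], det)
            if score > best_iou:
                best_id, best_iou = tid, score
        if best_id is not None:
            tracks[best_id] = det      # identity maintained on existing track
            unclaimed.discard(best_id)
        else:
            tracks[next_id] = det      # new object: start a new track
            next_id += 1
    return tracks

# One frame update: worker 1 moves slightly; a second worker appears.
tracks = {1: (0, 0, 10, 10)}
tracks = update_tracks(tracks, [(1, 1, 11, 11), (50, 50, 60, 60)])
```

After this update, track 1 follows the overlapping detection while the non-overlapping detection receives a fresh track identity, which is exactly the start-or-map behavior the abstract describes.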

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.