Sensor Fusion for Enhanced Robotic Perception
Keywords:
MEMS, Sensor Fusion, Autonomous Navigation, Robotics

Abstract
Achieving accurate navigation is a major challenge for autonomous robots and calls for advances in sensor fusion methodology. This paper examines the central role of Micro-Electro-Mechanical Systems (MEMS) in strengthening sensor fusion for autonomous navigation. The pressing need for precise, real-time navigation in changing environments has driven the development of new technologies, yet the current literature shows a gap in the seamless integration of MEMS sensors for effective navigation. Traditional sensor fusion techniques often struggle with diverse and rapidly changing environmental conditions; MEMS sensors offer a viable way to address these limitations. Their compact size, low power consumption, and high sensitivity make them well suited to providing detailed, reliable data for navigation. This study integrates MEMS accelerometers, gyroscopes, and magnetometers within a single sensor fusion framework whose algorithms combine data from the individual sensors, mitigating each sensor's weaknesses and improving overall accuracy. The MEMS sensors are intended to broaden the robot's awareness of its surroundings and thereby support better decision-making in navigation tasks. Our results show a significant improvement in the precision and efficiency of autonomous navigation in dynamic environments. MEMS-enhanced sensor fusion proves effective against unexpected terrain and obstacles, and a robot equipped with MEMS sensors shows improved adaptability and responsiveness, underscoring its potential for practical applications.
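The abstract does not specify the fusion algorithm itself, so the following Python sketch is only an illustration of the general idea: a minimal complementary filter that blends a MEMS gyroscope's angular rate with the tilt angle implied by a MEMS accelerometer's gravity reading. The function name, the 0.98 blending weight, and the synthetic 100 Hz samples are assumptions introduced for this example, not values from the study.

```python
import numpy as np

def fuse_roll(gyro_rate, accel, prev_roll, dt, alpha=0.98):
    """Complementary filter: blend the integrated gyroscope rate (smooth but
    drifting) with the accelerometer tilt estimate (noisy but drift-free)."""
    gyro_roll = prev_roll + gyro_rate * dt       # integrate angular rate (rad)
    accel_roll = np.arctan2(accel[1], accel[2])  # roll implied by the gravity vector
    return alpha * gyro_roll + (1.0 - alpha) * accel_roll

# Synthetic 100 Hz samples standing in for MEMS IMU readings (illustrative only)
dt, roll = 0.01, 0.0
for t in np.arange(0.0, 1.0, dt):
    gyro_rate = 0.5                                            # rad/s about the x-axis
    accel = np.array([0.0, np.sin(0.5 * t), np.cos(0.5 * t)]) * 9.81  # m/s^2
    roll = fuse_roll(gyro_rate, accel, roll, dt)
print(f"Estimated roll after 1 s: {roll:.3f} rad")
```

A Kalman-style filter over the full accelerometer, gyroscope, and magnetometer triad would follow the same pattern, weighting each sensor by its expected noise rather than by a fixed coefficient.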