AI-Driven Automation Techniques for Enhanced Software Testing Efficiency
Keywords:
Artificial Intelligence, Software Testing, Test Automation, Machine Learning, Defect Prediction, Regression Testing, Test Case Generation, Software Quality, Testing Efficiency
Abstract
Software systems, especially large and complex ones, have become increasingly difficult to test with older, manual approaches. This paper investigates the use of artificial intelligence to improve software testing processes, not for the sake of novelty, but because traditional methods are falling behind in real-world scenarios. Several AI-based tools and strategies are examined, including machine-learning-driven test case generation, algorithms that predict where defects are likely to occur, and regression testing methods that adapt over time. Rather than presenting AI as a silver bullet, the study compares these newer techniques with conventional testing methods, evaluating each in terms of defect detection, coverage of testable areas, and the time and system resources consumed during testing. One of the main contributions is a new model that combines reinforcement learning with natural language processing; the model is not merely theoretical but was applied to real-world projects to assess how it performs in practice. The findings show that, in several cases, testing became faster and more bugs were caught early, though results varied across different types of software. However, not everything worked perfectly: there were difficulties in interpreting how the AI reached its decisions, the quality of training data had a noticeable effect on outcomes, and fitting these tools into established software workflows was not always smooth and required adjustment. Still, the overall takeaway is clear: AI has the potential to shift the field of software testing meaningfully, though further work is needed to make these systems more interpretable and adaptable in real-time environments.
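To make the regression-testing angle concrete, the sketch below shows greedy coverage-based test prioritization, a classic baseline that adaptive, AI-driven regression-testing approaches are typically compared against. It is an illustrative example, not code from the paper: the test names, covered-element names, and the `prioritize` function are all hypothetical.

```python
def prioritize(tests: dict[str, set[str]]) -> list[str]:
    """Order tests so each next test adds the most not-yet-covered code.

    `tests` maps a test name to the set of code elements it covers.
    """
    remaining = dict(tests)
    covered: set[str] = set()
    order: list[str] = []
    while remaining:
        # Pick the test contributing the most new coverage (ties broken by name).
        best = max(remaining, key=lambda t: (len(remaining[t] - covered), t))
        order.append(best)
        covered |= remaining.pop(best)
    return order

# Hypothetical suite: each test mapped to the code elements it exercises.
suite = {
    "test_login":    {"auth.verify", "auth.session"},
    "test_checkout": {"cart.total", "pay.charge", "auth.session"},
    "test_refund":   {"pay.charge", "pay.refund"},
}
print(prioritize(suite))  # → ['test_checkout', 'test_refund', 'test_login']
```

Running the broadest test first is what lets a prioritized suite expose faults earlier for the same total runtime; learning-based approaches replace the fixed coverage heuristic with signals such as past failure history.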
DOI: https://doi.org/10.17762/ijisae.v10i3s.7961
License

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.


