Evasion-Aware Botnet Attack Detection using Deep Reinforcement Adversarial Learning
Keywords:
Deep reinforcement learning, Botnet, EVGAN, ACGAN, DRLEVGAN
Abstract
Adversarial evasion represents a contemporary challenge for applications relying on Machine Learning (ML). The vulnerability of traditional ML inference systems makes botnet detectors susceptible to attacks through adversarial examples, which can be generated with complex AI models and sophisticated attack techniques; generative AI models are one potential source of such evasion attacks. Data scarcity, which biases ML classifiers toward majority-class samples during training, is a serious concern as well. This paper proposes a novel “Deep Reinforcement Learning based Evasion Generative Adversarial Network” (DRLEVGAN) that protects against evasion attacks while retaining the semantics of the attack samples. The proposed model also addresses data imbalance, evasion awareness, and the preservation of functionality when generating synthetic botnet traffic. Because it can act as an adversarial-aware botnet detection model, it does not require adversarial training of the downstream ML classifiers. DRLEVGAN demonstrates superior performance when compared to similar models such as the “Auxiliary Classifier GAN (ACGAN)” and the “Evasion Generative Adversarial Network (EVGAN)”.
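The abstract describes DRLEVGAN only at a high level, so the sketch below is a minimal illustration (not the authors' implementation) of the two ingredients it names: an ACGAN-style conditional generator/discriminator for synthetic botnet traffic, and an evasion signal from a surrogate detector that a reinforcement-learning agent could use as a reward. The feature dimension, layer sizes, class count, and the evasion_reward helper are assumptions introduced purely for illustration.

# Illustrative sketch only: an ACGAN-style generator/discriminator in PyTorch
# with a placeholder "evasion reward" from a surrogate detector. Dimensions
# and reward shaping are assumptions, not the paper's DRLEVGAN design.
import torch
import torch.nn as nn

FEAT_DIM = 32      # assumed number of flow-level traffic features
NOISE_DIM = 16
N_CLASSES = 2      # benign vs. botnet

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(N_CLASSES, NOISE_DIM)
        self.net = nn.Sequential(
            nn.Linear(NOISE_DIM * 2, 64), nn.ReLU(),
            nn.Linear(64, FEAT_DIM), nn.Sigmoid(),  # features scaled to [0, 1]
        )

    def forward(self, z, labels):
        # Condition the noise on the target class (ACGAN-style conditioning).
        return self.net(torch.cat([z, self.embed(labels)], dim=1))

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(FEAT_DIM, 64), nn.ReLU())
        self.adv_head = nn.Linear(64, 1)          # real vs. fake
        self.cls_head = nn.Linear(64, N_CLASSES)  # auxiliary class prediction

    def forward(self, x):
        h = self.body(x)
        return self.adv_head(h), self.cls_head(h)

def evasion_reward(detector, fake_flows, botnet_label=1):
    # Reward is high when a surrogate botnet detector misclassifies synthetic
    # botnet flows as benign; a DRL agent could use this signal to steer
    # generation (placeholder for the evasion-aware component).
    with torch.no_grad():
        probs = torch.softmax(detector(fake_flows), dim=1)
    return (1.0 - probs[:, botnet_label]).mean()

if __name__ == "__main__":
    gen, disc = Generator(), Discriminator()
    surrogate = nn.Linear(FEAT_DIM, N_CLASSES)  # stand-in surrogate detector
    z = torch.randn(8, NOISE_DIM)
    labels = torch.ones(8, dtype=torch.long)    # request botnet-class samples
    fake = gen(z, labels)
    adv_logit, cls_logit = disc(fake)
    print(adv_logit.shape, cls_logit.shape, evasion_reward(surrogate, fake).item())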
References
Kazmi, S., Aafaq, N., Khan, M. A., Khalil, M., & Saleem, A. (2023). From Pixel to Peril: Investigating Adversarial Attacks on Aerial Imagery through Comprehensive Review and Prospective Trajectories. IEEE Access.
Djenna, A., Barka, E., Benchikh, A., & Khadir, K. (2023). Unmasking Cybercrime with Artificial-Intelligence-Driven Cybersecurity Analytics. Sensors, 23(14), 6302.
Debicha, I., Cochez, B., Kenaza, T., Debatty, T., Dricot, J. M., & Mees, W. (2023). Adv-Bot: Realistic adversarial botnet attacks against network intrusion detection systems. Computers & Security, 129, 103176.
Neupane, S., Fernandez, I. A., Mittal, S., & Rahimi, S. (2023). Impacts and Risk of Generative AI Technology on Cyber Defense. arXiv preprint arXiv:2306.13033.
Apruzzese, G., Andreolini, M., Marchetti, M., Venturi, A., & Colajanni, M. (2020). Deep Reinforcement Adversarial Learning Against Botnet Evasion Attacks. IEEE Transactions on Network and Service Management, 17(4), 1975-1987.
Mari, A. G., Zinca, D., & Dobrota, V. (2023). Development of a Machine-Learning Intrusion Detection System and Testing of Its Performance Using a Generative Adversarial Network. Sensors, 23(3), 1315.
Jiang, T., Liu, Y., Wu, X., Xu, M., & Cui, X. (2023). Application of deep reinforcement learning in attacking and protecting structural features-based malicious PDF detector. Future Generation Computer Systems, 141, 325-338.
Ebrahimi, M., Zhang, N., Hu, J., Raza, M. T., & Chen, H. (2020). Binary black-box evasion attacks against deep learning-based static malware detectors with adversarial byte-level language model. arXiv preprint arXiv:2012.07994.
Zhou, X., Liang, W., Li, W., Yan, K., Shimizu, S., Kevin, I., & Wang, K. (2021). Hierarchical adversarial attacks against graph-neural-network-based IoT network intrusion detection system. IEEE Internet of Things Journal, 9(12), 9310-9319.
Apruzzese, G., Andreolini, M., Marchetti, M., Colacino, V. G., & Russo, G. (2020). AppCon: Mitigating evasion attacks to ML cyber detectors. Symmetry, 12(4), 653.
Rizzardi, A., Sicari, S., & Porisini, A. C. (2023). Deep Reinforcement Learning for intrusion detection in Internet of Things: Best practices, lessons learnt, and open challenges. Computer Networks, 236, 110016.
Hemmati, M., & Hadavi, M. A. (2022). Bypassing Web Application Firewalls Using Deep Reinforcement Learning. ISeCure, 14(2).
Randhawa, R. H., Aslam, N., Alauthman, M., & Rafiq, H. (2022). Evasion generative adversarial network for low data regimes. IEEE Transactions on Artificial Intelligence.
Mao, Z., Fang, Z., Li, M., & Fan, Y. (2022). EvadeRL: Evading PDF malware classifiers with deep reinforcement learning. Security and Communication Networks, 2022.
Wu, D., Fang, B., Wang, J., Liu, Q., & Cui, X. (2019). Evading machine learning botnet detection models via deep reinforcement learning. In ICC 2019 - 2019 IEEE International Conference on Communications (ICC) (pp. 1-6). IEEE.
Couto, G. C. K., & Antonelo, E. A. (2023). Hierarchical Generative Adversarial Imitation Learning with Mid-level Input Generation for Autonomous Driving on Urban Environments. arXiv preprint arXiv:2302.04823.
Anderson, H. S., Kharkar, A., Filar, B., Evans, D., & Roth, P. (2018). Learning to Evade Static PE Machine Learning Malware Models via Reinforcement Learning. arXiv preprint arXiv:1801.08917.
Alauthman, M., Aslam, N., Al-Kasassbeh, M., Khan, S., Al-Qerem, A., & Choo, K. K. R. (2020). An efficient reinforcement learning-based botnet detection approach. Journal of Network and Computer Applications, 150, 102479.
Zhang, Q., Cho, J. H., Moore, T. J., Kim, D. D., Lim, H., & Nelson, F. (2023, May). EVADE: Efficient Moving Target Defense for Autonomous Network Topology Shuffling Using Deep Reinforcement Learning. In International Conference on Applied Cryptography and Network Security (pp. 555-582). Cham: Springer Nature Switzerland.