The Dark Side of AI: How Criminals Leverage Machine Learning for Illicit Activities in the Context of Assault

Authors

  • Kadapa Chenchi Reddy, Mohd. Saleem

Keywords

Artificial Intelligence (AI), Machine Learning, Criminal Misuse, Cyber Assault, AI-driven Harassment, Cyberstalking, Deepfakes, AI in Crime, AI Weaponization, AI Surveillance, Digital Harassment, Criminal Exploitation, Data Mining, Ethical AI

Abstract

The rise of artificial intelligence (AI) and machine learning has revolutionized various sectors, but it has also opened avenues for malicious use by criminals, particularly in the context of assault. This article explores the dark side of AI, focusing on how criminals leverage these technologies to carry out both physical and cyber assaults. From weaponizing AI-driven drones for targeted attacks to using machine learning for cyberstalking, harassment, and social engineering, criminals are finding increasingly sophisticated methods to exploit these technologies. The article examines real-world examples of AI-assisted assault, including physical and cyber harassment, and the challenges law enforcement faces in detecting and prosecuting such crimes. It also discusses the ethical and legal implications of regulating AI to prevent its misuse, highlighting the need for stronger safeguards and collaboration among tech companies, policymakers, and law enforcement. As AI continues to evolve, it is essential to balance innovation with ethical responsibility, ensuring that its potential is harnessed for good while mitigating risks to individuals' safety and privacy. The article calls for increased awareness, regulation, and vigilance to safeguard society from the malicious use of AI in criminal activities.


References

Buiten, Miriam, Alexandre de Streel, and Martin Peitz. 2023. “The Law and Economics of AI Liability.” Computer Law & Security Review 48 (April): 1-20. https://doi.org/10.1016/j.clsr.2023.105794

De Conca, Silvia. 2022. “Bridging the Liability Gaps: Why AI Challenges the Existing Rules on Liability and How to Design Human-Empowering Solutions.” In Law and Artificial Intelligence. T.M.C. Asser Press. https://doi.org/10.1007/978-94-6265-523-2_13

Čerka, Paulius, Jurgita Grigienė, and Gintarė Sirbikytė. 2015. “Liability for damages caused by artificial intelligence.” Computer Law & Security Review 31, no. 3 (June): 376-389. https://www.sciencedirect.com/science/article/abs/pii/S026736491500062X

Park, Sangchul. 2024. “Bridging the Global Divide in AI Regulation: A Proposal for a Contextual, Coherent, and Commensurable Framework.” Washington International Law Journal 33, no. 2 (August): 1-58. https://arxiv.org/pdf/2303.11196

Jaiman, Ashish. 2020. “Debating the ethics of deepfakes.” ORF. https://www.orfonline.org/expert-speak/debating-the-ethics-of-deepfakes

MacDonald, Abby. 2024. “The Uses and Abuses of Deepfake Technology.” Canadian Global Affairs Institute. https://www.cgai.ca/the_uses_and_abuses_of_deepfake_technology#Good

Reuters. 2024. “‘Cheapfakes’, not deepfakes, spread election lies in India.” The Hindu, May 31, 2024. https://www.thehindu.com/sci-tech/technology/cheapfakes-not-deepfakes-spread-election-lies-in-india/article68235040.ece

Bond, Shannon. 2023. “People are arguing in court that real images are deepfakes.” NPR, May 8, 2023. https://www.npr.org/2023/05/08/1174132413/people-are-trying-to-claim-real-videos-are-deepfakes-the-courts-are-not-amused

Fahim, Sadaf, and G. S. Bajpai. 2020. “AI and Criminal Liability.” Indian Journal of Artificial Intelligence and Law 1, no. 1. https://www.academia.edu/86155216/AI_and_Criminal_Liability

Hallevy, Gabriel. 2010. “The Criminal Liability of Artificial Intelligence Entities.” SSRN. https://ssrn.com/abstract=1564096


Sala, Alessandra. 2024. “AI watermarking: A watershed for multimedia authenticity.” ITU. https://www.itu.int/hub/2024/05/ai-watermarking-a-watershed-for-multimedia-authenticity/

Stricklin, Kasey. 2021. “Social Media Bots and Section 230 Reform with Unintended Consequences.” CNA. https://www.cna.org/our-media/indepth/2021/04/social-media-bots-and-section-230

Khlaaf, Heidy. 2023. “Toward Comprehensive Risk Assessments and Assurance of AI-Based Systems.” Trail of Bits. https://www.trailofbits.com/documents/Toward_comprehensive_risk_assessments.pdf

Widder, David G., Dawn Nafus, Laura Dabbish, and James Herbsleb. 2022. “Limits and Possibilities for ‘Ethical AI’ in Open Source: A Study of Deepfakes.” In FAccT '22: Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency. https://doi.org/10.1145/3531146.3533779

Xiang, Chloe, Janus Rose, Magdalene Taylor, Jordan Pearson, Matthew Gault, Samantha Cole, and Ryan S. Gladwin. 2023. “‘He Would Still Be Here’: Man Dies by Suicide After Talking with AI Chatbot, Widow Says.” VICE. https://www.vice.com/en/article/man-dies-by-suicide-after-talking-with-ai-chatbot-widow-says/

Indurkhya, Bipin. 2023. “Ethical Aspects of Faking Emotions in Chatbots and Social Robots.” 32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), 1719. https://arxiv.org/pdf/2310.12775

Natale, Simone. 2021. “Chapter 3 The ELIZA Effect: Joseph Weizenbaum and the Emergence of Chatbots.” In Deceitful Media: Artificial Intelligence and Social Life After the Turing Test, 50-67. Oxford University Press. https://doi.org/10.1093/oso/9780190080365.003.0004

Ryan, William A., Allen Garrett, Kilpatrick Townsend, and Brad Sears. 2023. “Practical Lessons from the Attorney AI Missteps in Mata v. Avianca.” Association of Corporate Counsel. https://www.acc.com/resource-library/practical-lessons-attorney-ai-missteps-mata-v-avianca

Published

19.04.2025

How to Cite

Kadapa Chenchi Reddy. (2025). The Dark Side of AI: How Criminals Leverage Machine Learning for Illicit Activities in the Context of Assault. International Journal of Intelligent Systems and Applications in Engineering, 13(1), 425 –. Retrieved from https://www.ijisae.org/index.php/IJISAE/article/view/7812

Issue

Section

Research Article