By Dr. Jarrod Sadulski  |  04/24/2026


AI Cyber Attacks

 

Artificial intelligence (AI) has become part of our routine tasks and daily lives. For instance, AI is used to:

  • Provide fitness monitoring
  • Determine buying preferences from online sites
  • Decide the ads we see on our social media feeds
  • Increase driver safety through features such as lane departure assistance and vehicle diagnostics
  • Improve health by assisting with image analysis as well as medical and mental health apps
  • Automate lighting in a home to increase household security
  • Detect unusual activity involving our finances
  • Collect and analyze data for businesses

While AI certainly has its benefits, it has also changed the cyber threat landscape. AI can be used to evade threat detection and intrusion detection systems, and AI-enabled attacks increasingly rely on attack automation and social engineering.

 

The Rise of AI-Powered Cyberattacks

The use of AI by bad actors is a growing threat to security teams. AI models and machine learning tools can operate at a speed and scale that human hackers cannot match.

For instance, machine learning systems and large language models (LLMs) can harvest sensitive data and generate convincing, personalized phishing messages at scale. Similarly, attackers have leveraged AI software to adapt to incident response efforts during cyberattacks.

Criminal networks use AI during phishing attacks to trick victims who voluntarily provide personal information. In most cases, the AI is so convincing and realistic that victims do not realize they are being scammed and granting access to personal or business information.

For example, AI can be used to imitate the branding and communications of a legitimate company and contact a customer about a service. AI-generated personas can also persuade victims to transfer money to scammers they believe are real people.

 

How Is Artificial Intelligence Used in Cyberattacks?

During cyberattacks, AI software can recognize and exploit vulnerabilities within data security systems. Companies that rely heavily on technology are more susceptible to AI-powered cyberattacks. Once attackers gain access to a system, they can install ransomware that cripples a victim's defenses.

AI can also flag risky user behavior online, which means a victim can be singled out without any direct human involvement from bad actors. For example, artificial intelligence can collect sensitive data from careless individuals who engage in risky behavior on the internet or people who use weak passwords that AI algorithms can identify.
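As a defensive illustration, the weak-password patterns that automated cracking tools exploit can be checked programmatically. This is a minimal sketch under illustrative assumptions; the wordlist and length rules here are examples, not a production password policy.

```python
# Minimal sketch of a weak-password check, illustrating the kinds of
# patterns automated cracking tools exploit. The wordlist and rules
# below are illustrative assumptions, not a vetted security policy.
COMMON_PASSWORDS = {"password", "123456", "qwerty", "letmein", "admin"}

def is_weak(password: str) -> bool:
    """Flag passwords that automated tools would likely guess quickly."""
    if password.lower() in COMMON_PASSWORDS:
        return True   # appears in common attack wordlists
    if len(password) < 12:
        return True   # short enough to brute-force
    if password.isalpha() or password.isdigit():
        return True   # only one character class
    return False

print(is_weak("123456"))                          # True
print(is_weak("correct-horse-battery-staple-9"))  # False
```

In practice, organizations typically screen new passwords against breached-password lists rather than relying on character-class rules alone.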

 

Generative AI: A Growing Cyber Threat

Generative AI is also a threat. Trained on large datasets, generative models can learn patterns well enough to produce deepfake voice messages, videos, and images, which can be especially dangerous in the wrong hands.

For example, bad actors can leverage AI to recreate images, clone voices, and create videos. This digital content can be used to convince victims to believe that a family member is in danger or injured.

Similarly, AI-manipulated deepfakes present a risk to national security. Criminal organizations can use deepfakes to spread misinformation to the public, triggering panic and harmful actions based on false information. For example, deepfakes can create chaos during a national emergency if AI-generated images depict tragedies that did not actually occur.

Deepfakes can also be used to impersonate police officers or government officials. Deepfakes that fabricate statements from our nation's leaders can jeopardize national security and be exploited by foreign adversaries for information warfare.

 

Defending Against AI-Powered Cyberattacks

AI tools can breach endpoint protection and overcome traditional security measures in affected systems. To counter AI threats, cybersecurity professionals must take better advantage of AI-powered cybersecurity tools to shrink the attack surface. These professionals can use AI software to mitigate cyberattacks and build a more effective defense against attackers by reducing system vulnerabilities.

For instance, AI can be especially beneficial in monitoring large datasets for various threats and providing predictive analytics based on training data. AI-powered cybersecurity tools can be used to triage and assess threats in real time.
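The monitoring idea above can be sketched in miniature: real AI-powered security tools use far richer models, but even a simple statistical test can flag activity that deviates sharply from a baseline. The event counts and the z-score threshold below are illustrative assumptions.

```python
# Hedged sketch of anomaly detection over event counts using a z-score
# test. Production AI security tools use far richer models; the 2.0
# threshold and the sample data are illustrative assumptions.
from statistics import mean, stdev

def flag_anomalies(counts: list[int], threshold: float = 2.0) -> list[int]:
    """Return indices of counts that deviate sharply from the baseline."""
    mu, sigma = mean(counts), stdev(counts)
    if sigma == 0:
        return []  # perfectly uniform activity: nothing to flag
    return [i for i, c in enumerate(counts)
            if abs(c - mu) / sigma > threshold]

# Hourly login attempts; the spike at index 5 resembles a brute-force attack.
logins = [12, 15, 11, 14, 13, 120, 12, 14]
print(flag_anomalies(logins))  # [5]
```

A real deployment would score streams of events continuously and feed flagged windows to an analyst or an automated triage pipeline, which is the "real time" assessment described above.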

Creating effective incident response plans also reduces the threat of cyberattacks. Security teams should use these incident response plans and AI technology to harden attack targets and detect social engineering attacks before victims provide sensitive information. It is equally essential to make potential victims aware of the threat that AI tools present and to train them to recognize phishing campaigns and other attack methods.

AI will continue to be a part of our daily lives. As AI technology improves, bad actors will seek more ways to create AI-driven cyber threats, gain access to information, and further their criminal activity through phishing campaigns, social engineering attacks, and other methods.

As a result, security teams and law enforcement will need to counter these AI-driven attacks and spread threat intelligence to others. Society should also embrace the use of AI and learn how to use it not only for routine tasks, but also to create additional security measures to protect against malicious actors.

 

The Bachelor of Science in Criminal Justice at APU

For adult learners interested in studying criminal justice, American Public University (APU) provides an online Bachelor of Science in Criminal Justice. In this degree program, students can enroll in a variety of courses, including criminology, criminal profiling, research design and methods, and criminalistics.

This B.S. in criminal justice also offers a digital forensics concentration. This concentration provides courses in cybercrime, computer forensics, wireless networks, and different areas of digital forensics.

For more information about this bachelor’s degree in criminal justice, visit APU’s security and global studies degree program page.

Note: This degree program is not designed to meet the educational requirements for professional licensure or certification in any country, state, province or other jurisdiction. This program has not been approved by any state professional licensing body and does not lead to any state-issued professional licensure.


About The Author

Dr. Jarrod Sadulski is an associate professor in the School of Security and Global Studies and has over 20 years in the field of criminal justice. He holds a bachelor’s degree in criminal justice from Thomas Edison State College, a master’s degree in criminal justice from American Military University, and a Ph.D. in criminal justice from Northcentral University.

His expertise includes training on countering human trafficking, maritime security, mitigating organized crime, and narcotics trafficking trends in Latin America. Jarrod has also testified to both the U.S. Congress and U.S. Senate on human trafficking and child exploitation. He has been recognized by the U.S. Senate as an expert in human trafficking.

Jarrod frequently conducts in-country research and consultant work in Central and South America on human trafficking and current trends in narcotics trafficking. Also, he has a background in business development.