Artificial Intelligence: What Are the 4 Major Cyber Threats for 2024?

For some, artificial intelligence has become a bad word. These critics are quick to dismiss artificial intelligence technology as “dangerous,” but this is hardly an accurate picture of the actual landscape. As with any groundbreaking innovation, the threat lies not in the technology itself but in who uses it and how.

Wrongdoers will always find ways to abuse new technology for nefarious purposes, but we must not let this stop us from applying artificial intelligence where it is legitimately beneficial. If we hope to create a future in which AI is used in these powerful, positive ways, it is imperative to mitigate the damage of harmful use cases and build an ecosystem of responsible AI use.

How generative AI is being abused for phishing scams and deepfakes

Arguably, the most visible form of artificial intelligence changing the world today is generative AI, which includes large language models such as ChatGPT. These tools are being used for any number of purposes, from conducting research and drafting emails to powering customer service chatbots and writing articles or essays.

However, bad actors have found ways to exploit generative AI’s ability to quickly and reliably produce high-quality content. For example, generative AI has shown the potential to make phishing schemes more convincing. In these scams, the perpetrator impersonates a trusted individual to convince the victim to reveal personal information.

Previously, these scams were relatively easy to spot because of mistakes like grammatical errors or inconsistencies in voice. Now, however, scammers can train an AI model on a library of writing created by the individual they hope to impersonate. The model can then produce convincing text in that person’s voice, making it far harder to distinguish authentic messages from fraudulent ones.

However, it isn’t just writing that artificial intelligence has become frighteningly good at creating. AI models can also produce convincing false images, audio, and video of a real person’s likeness, known as “deepfakes.” The implications of these false materials could be profoundly destructive, ranging from blackmail and reputational damage to the spread of misinformation and the manipulation of markets or elections.

Automated cyberattacks powered by AI

The other capability of artificial intelligence that wrongdoers have tended to exploit is its advanced data processing. An AI model can analyze data much faster than a human, which also allows it to comb much larger datasets. While this capability certainly has positive applications, the same analytical power can be leveraged to cause significant harm.

Hackers have been able to train AI models to continuously probe networks for vulnerabilities to exploit. These models are often efficient enough to find and exploit a vulnerability before the network operator can remedy it. Worse yet, because software never tires, the scans can run 24/7.
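The mechanics behind this kind of automated probing can be sketched without any AI at all; in practice, a model sits on top, deciding which hosts and services to try next. Below is a minimal, hypothetical sketch using only Python's standard library that checks which TCP ports on a host accept connections. The host and port list are illustrative, and defenders run exactly this kind of check against networks they own:

```python
import socket

def open_ports(host, ports, timeout=0.5):
    """Return the subset of TCP ports on host that accept a connection."""
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            # connect_ex returns 0 on a successful connection,
            # an error code otherwise (no exception raised).
            if sock.connect_ex((host, port)) == 0:
                found.append(port)
    return found

# Only scan hosts you own or are authorized to test. Automated tools
# simply loop this logic continuously over many hosts and ports.
print(open_ports("127.0.0.1", [22, 80, 443, 8080]))
```

An attacker's automation and a defender's vulnerability scanner differ mainly in authorization and in what happens after a port is found open, which is why the same capability cuts both ways.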

When leveraged against networks that support critical infrastructure, this technology could be particularly damaging. Our society is now so dependent on computers that wrongdoers have a plethora of targets to choose from: power grids, traffic lights, shipping routes, air traffic control systems, telecommunications networks, and financial markets are all potentially vulnerable to a destructive AI-powered attack.

Defeating AI-powered cyber threats

That being said, technology in the cybersecurity field is also advancing, allowing business leaders to take a “fight fire with fire” approach to artificial intelligence. Many of the same tools wrongdoers use for nefarious purposes can be adapted for positive use cases.

Instead of letting hackers probe their networks for vulnerabilities first, network operators can use AI models to identify the areas most in need of patching. To combat malicious generative AI, models are being developed that analyze written or audiovisual content to determine its authenticity.
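The basic idea behind models that flag suspicious content can be illustrated with a toy text classifier. The sketch below is a deliberately simplified Naive Bayes model; the training messages are hypothetical, and real detection systems use far larger corpora and richer features than word frequencies:

```python
import math
from collections import Counter

# Toy training data -- hypothetical examples, not a real phishing corpus.
PHISHING = [
    "urgent verify your account password now",
    "click this link to claim your prize immediately",
    "your account is suspended confirm your password",
]
LEGITIMATE = [
    "meeting notes attached for tomorrow's review",
    "quarterly report draft ready for your comments",
    "lunch on thursday to discuss the project",
]

def train(messages):
    """Count word frequencies across a list of messages."""
    counts = Counter()
    for msg in messages:
        counts.update(msg.lower().split())
    return counts

PHISH_COUNTS = train(PHISHING)
LEGIT_COUNTS = train(LEGITIMATE)

def score(message, counts, total):
    """Log-likelihood of a message under a unigram model (Laplace smoothing)."""
    vocab = len(set(PHISH_COUNTS) | set(LEGIT_COUNTS))
    return sum(
        math.log((counts[word] + 1) / (total + vocab))
        for word in message.lower().split()
    )

def is_suspicious(message):
    """Flag a message that scores higher under the phishing model."""
    phish = score(message, PHISH_COUNTS, sum(PHISH_COUNTS.values()))
    legit = score(message, LEGIT_COUNTS, sum(LEGIT_COUNTS.values()))
    return phish > legit

print(is_suspicious("urgent click to verify your password"))  # True
print(is_suspicious("draft report attached for review"))      # False
```

Even this toy version captures the core pattern: learn what each class of content tends to look like, then ask which class a new message more plausibly came from.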

Still, education is the best tool we have in the fight against harmful use cases of artificial intelligence. People need to understand the potential cyber threats that AI abuse can pose and how to combat them. 

For example, employees must learn to identify potential phishing scams and distinguish them from legitimate messages from trusted sources. They should also learn and practice proper cybersecurity measures, like strong password use and access control.
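Password hygiene in particular is easy to make concrete. The sketch below checks a password against a small set of illustrative baseline rules; the specific thresholds are assumptions, not a complete or official policy:

```python
import re

def check_password(password):
    """Return a list of policy problems; an empty list means the password passes.

    The rules here are an illustrative baseline, not a complete policy.
    """
    problems = []
    if len(password) < 12:
        problems.append("shorter than 12 characters")
    if not re.search(r"[a-z]", password):
        problems.append("no lowercase letter")
    if not re.search(r"[A-Z]", password):
        problems.append("no uppercase letter")
    if not re.search(r"[0-9]", password):
        problems.append("no digit")
    if not re.search(r"[^a-zA-Z0-9]", password):
        problems.append("no symbol")
    return problems

print(check_password("password"))                # flags several problems
print(check_password("C0rrect-Horse-Battery!"))  # []
```

A check like this is most useful when it runs at the moment an employee sets a password, turning abstract training advice into immediate feedback.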

Artificial intelligence is here to change the world, whether we like it or not, but whether this change is for the better or the worse is in our hands. Although there are many ways that AI can be used as a tool to benefit society at large, there are also ways this technology can be exploited to cause harm. We must identify, understand, and mitigate these cyber threats to create an ecosystem where artificial intelligence can be embraced for the positive transformational force it can be.

Ed Watal is the founder and principal of Intellibus, an INC 5000 Top 100 Software firm based in Reston, Virginia. He regularly serves as a board advisor to the world’s largest financial institutions. C-level executives rely on him for IT strategy and architecture because of his business acumen and deep IT knowledge. One of Ed's key projects is BigParser (an Ethical AI Platform and a Data Commons for the World). He has also built and sold several Tech & AI startups. Prior to becoming an entrepreneur, he worked in some of the largest global financial institutions, including RBS, Deutsche Bank, and Citigroup. He is the author of numerous articles and one of the defining books on cloud fundamentals, 'Cloud Basics.' Ed has substantial teaching experience and has served as a lecturer for universities globally, including NYU and Stanford. Ed has been featured on Fox News, Information Week, and NewsNation.