Security and Deepfakes: AI-Generated Deepfakes Posing New Threats to Business

Cybersecurity is an ongoing concern for every business, and cybercriminals keep getting smarter. As new technologies emerge to make business operations more efficient and profitable, those same technologies are co-opted to launch new cyberattacks. Artificial intelligence (AI) is the latest technology being turned to cybercrime, most visibly in the form of deepfakes.

Deepfakes use generative AI to create realistic forgeries. They often start with legitimate source content and then modify it to produce convincing images, audio, and documents. One of the most common forms of deepfake is altered video, such as the 2022 deepfake of Ukrainian President Volodymyr Zelensky appearing to ask his troops to surrender to Russia. Starting from the original video, generative AI uses deep learning to alter the images and audio, synthesizing new content from data extracted from related footage. The resulting deepfakes can be hard to identify as fakes.

Beyond cyberwarfare and propaganda, deepfakes are becoming a real problem for businesses. Generative AI can be used to create not only phony videos but also audio for phishing calls, fraudulent emails and electronic communications, falsified documents, and more. AI has opened new possibilities for cybercriminals, creating new headaches for CSOs and security professionals.

How deepfakes pose a security threat

Deepfakes first appeared online in 2017 when a Reddit user started posting deepfake videos created using an open-source machine learning library. Replacing one face with another was relatively simple using deep learning algorithms. Since then, AI has been used to create falsified images for various nefarious purposes, such as revenge porn and fake news reports.

Businesses need to be concerned, since deepfakes can be used to damage their reputation or perpetrate fraud. For example, a deepfake photo of a supposed explosion near the Pentagon briefly rattled the stock market in May 2023.

The same tactics can be used for market manipulation, such as targeting individual companies by releasing false videos or images that could immediately impact stock value.

Deepfakes are also a catalyst for fraud. In one of the most publicized examples, a British energy firm was tricked into transferring $243,000 to a phony account after a deepfake voice impersonating the CEO of its parent company issued the instructions. A similar deepfake video call cost a multinational firm's Hong Kong office $25 million.

Cybercriminals also use AI to craft more realistic email and voice messages for phishing attacks. Deepfake technology is likewise behind new, more insidious forms of identity theft, including forged credentials and spoofed identity verification used to access sensitive business information.

The rise in AI-generated phishing attacks

The widespread use of generative AI tools such as ChatGPT and Microsoft Copilot has increased productivity and opened new ways to automate business workflows. At the same time, it has lowered the barrier to deepfake attacks. Generative AI tools are being used to create credible, effective phishing messages and other fraudulent communications. Alongside AI-generated phishing, we are seeing a rise in vishing (voice phishing), smishing (SMS phishing), and quishing (QR code phishing).

Here are just a few of the ways attackers are applying AI and deepfakes in phishing attacks:

Convincing phishing content – Generative AI can create realistic, contextually relevant messages that mimic the tone and style of legitimate senders, without the typos and errors that are telltale signs of phishing.

Social engineering – AI-generated voice and video can impersonate decision-makers, tricking individuals into transferring money or disclosing sensitive information.

Automated spear phishing – Harnessing AI lets attackers launch spear phishing campaigns at scale. AI algorithms analyze social media and other publicly available sources to craft tailored messages that increase the likelihood of fooling recipients.

Website cloning – Generative AI makes creating replicas of legitimate websites easier for phishing purposes, fooling visitors into surrendering sensitive data or transferring money.

Behavioral analysis – Phishing campaigns can be made more effective by analyzing online behavior to identify patterns such as timing phishing attacks when users are less vigilant.

Evading security – Attackers use machine learning to continually adapt to new detection methods, bypassing security measures and spam filters. For example, AI can generate many emails with slightly different content, making it harder for pattern-based detection systems to identify them, as the sketch below illustrates.
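
To see why small variations defeat exact pattern matching, consider a minimal Python sketch. The messages and the 0.9 similarity threshold are illustrative assumptions, not taken from any particular filtering product:

```python
import hashlib
from difflib import SequenceMatcher

# Two AI-generated phishing variants: one changed word defeats exact matching.
msg_a = "Hi Dana, please process the attached invoice before 5 pm today."
msg_b = "Hi Dana, please process the attached invoice before 4 pm today."

# Exact fingerprints, as a naive filter might store them, no longer match.
print(hashlib.sha256(msg_a.encode()).hexdigest()[:16])
print(hashlib.sha256(msg_b.encode()).hexdigest()[:16])

# A similarity score still flags the two variants as near-duplicates.
similarity = SequenceMatcher(None, msg_a, msg_b).ratio()
print(f"similarity: {similarity:.2f}")  # ~0.98, well above a 0.9 threshold
```

The point of the sketch: defenses keyed to exact content (hashes, fixed signatures) miss machine-generated variants, while fuzzier similarity measures stand a better chance, which is exactly the cat-and-mouse dynamic described above.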

Using deepfakes for identity theft

Deepfakes are increasingly being used for identity theft as well. Criminals use generative AI to falsify credentials such as driver’s licenses and passports, as well as documents such as bank statements, Social Security checks, and W-2 forms.

With artificial intelligence, criminals can combine stolen personal information to alter existing identities or even fabricate new ones to perpetrate fraud. Here are some of the deepfake documents often used in identity theft:

Falsified IDs – Substituting a new image into a legitimate form of ID, such as a driver’s license or passport, is now easy. With false credentials, criminals can take out loans, open bank accounts, buy vehicles, and more.

Falsified images – Generative AI can also doctor photographic evidence, such as the photos submitted with an insurance claim or in a court case.

Modified documents – AI can manipulate genuine documents by changing names, addresses, dates, ID numbers, etc. Fraudsters can use those modified documents to open a line of credit, take out a loan, or commit another crime.

False documents – A more sophisticated form of identity theft, in which criminals pass off someone else’s credentials as their own by changing account numbers, birth dates, and other identifiers.

Illegitimate documents – Unlike false documents, illegitimate documents are entirely fake and created to look like bona fide original documents. Illegitimate documents can be used to forge checks, money orders, invoices, etc.

Combatting deepfakes

Unfortunately, there is no easy way to detect deepfakes. Security experts are actively working on new strategies to detect AI-generated deepfakes, but as AI technology continues to advance, detection becomes increasingly challenging.

However, there are strategies and technologies you can apply to minimize the risk from deepfakes. The best place to start is by educating employees, making them aware of deepfakes, and showing them what to look for:

  • Train employees to look for suspicious telltales or activities. Less sophisticated deepfakes can be spotted by identifying anomalies such as out-of-sync audio and video, unusual eye movements, strange body movements, or inconsistent lighting and shadows. If something looks odd, treat the content as suspect.
  • Take a beat. Train your team to respond rather than react to inbound requests. Teach them to pause and consider whether a request is legitimate. Does the email or voicemail seem credible? Is there a potential risk or a request for sensitive information? If something doesn’t feel right, pause and verify before taking the next step.
  • It also pays to extend training to partners and customers to keep transactions secure. They should be wary of deepfakes such as phony purchase orders or invoices.

Technology can add safeguards against deepfakes as well.

  • Add multifactor authentication to help verify that the person behind a voice or video call is who they claim to be.
  • Create additional authentication protocols, such as passwords and PINs, to verify users (even otherwise trusted parties) before sharing sensitive data or completing transactions; see the sketch after this list.
  • Use deepfake detection. AI-powered authentication technology can analyze emails, photos, videos, and documents, looking for anomalies and indicators that validate authentic media and flag deepfakes.
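
As one illustration of the second point, here is a minimal Python sketch of a verification gate that requires a pre-shared PIN before a sensitive action proceeds. The PIN registry, party name, and salt are hypothetical stand-ins; a real deployment would use a proper secrets store and out-of-band callbacks:

```python
import hashlib
import hmac

SALT = b"demo-salt"  # illustrative only; use per-user random salts in practice

# Hypothetical registry of per-party PINs, stored as salted hashes
# so the PINs themselves never sit in plaintext.
PIN_HASHES = {
    "cfo@example.com": hashlib.sha256(SALT + b"483920").hexdigest(),
}

def pin_matches(party: str, supplied_pin: str) -> bool:
    """Check a supplied PIN against the stored hash for this party."""
    expected = PIN_HASHES.get(party)
    if expected is None:
        return False
    candidate = hashlib.sha256(SALT + supplied_pin.encode()).hexdigest()
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(expected, candidate)

def approve_transfer(party: str, supplied_pin: str, amount: float) -> bool:
    """Gate a sensitive action: a convincing voice alone is never enough."""
    if not pin_matches(party, supplied_pin):
        print("Verification failed: call back on a known number and escalate.")
        return False
    print(f"PIN verified for {party}; transfer of ${amount:,.2f} may proceed.")
    return True

approve_transfer("cfo@example.com", "483920", 243000.00)
```

The design choice here is that the secret never travels in the same channel as the request: even a flawless voice clone fails the gate if the caller cannot produce the out-of-band PIN.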

Enhancing cyber defenses with deepfake countermeasures

Cybersecurity tools today do a reasonably good job of keeping active malicious content out of corporate systems and guarding the enterprise perimeter against the existing breed of cyberthreats, but they have not yet taken on content detection and digital media forensics for photos, videos, audio, and documents. To date, media content sent from trusted or semi-trusted sources has been viewed as relatively benign, aside from the possibility of low-level fraud.

Now, the explosion of generative AI tools has made it imperative that businesses adopt more sophisticated tools to defend against deepfakes, which can inflict large losses, spread disinformation, or cause reputational harm.

Voicemails, videos, conference calls, and photos can drive actionable, and sometimes undesirable, business outcomes when they are fake. Deepfake detectors built on AI need to be embedded into virus scans and corporate firewalls as a first line of defense, turning the same AI technology fraudsters use to create false digital media and documents against itself.
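
The sketch below shows the general shape such an embedded check might take, under stated assumptions: scan_attachment, the media-type list, and the 0.8 threshold are hypothetical, and the detector itself is injected as a placeholder rather than a real model or vendor API:

```python
from pathlib import Path
from typing import Callable

# Hypothetical set of file types routed through the deepfake check.
MEDIA_TYPES = {".jpg", ".png", ".mp4", ".wav", ".pdf"}

def scan_attachment(
    path: Path,
    detector: Callable[[Path], float],  # plug in a model or vendor API here
    threshold: float = 0.8,             # illustrative cutoff; tune per tool
) -> str:
    """Route media files through a deepfake check alongside virus scanning."""
    if path.suffix.lower() not in MEDIA_TYPES:
        return "pass"  # non-media files continue to the regular malware scan
    score = detector(path)  # 0.0-1.0; higher = more likely manipulated
    return "quarantine" if score >= threshold else "pass"

# Example with a stand-in detector that scores everything as clean:
print(scan_attachment(Path("invoice.pdf"), detector=lambda p: 0.0))
```

The idea is simply that media files get a second, content-level inspection in the same pipeline that already scans attachments for malware, so suspect files are quarantined before they reach a decision-maker.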

Another countermeasure to the deepfake cyberthreat is fingerprinting images, video, audio, and documents.

A unique identifier derived from the bits of a file or active data stream is recorded to create a fingerprint of the asset at creation time. That fingerprint can then be stored on an immutable distributed ledger, or blockchain, so that an altered copy can no longer pass validation. However, this measure requires owning the entire lifecycle of each asset, and it cannot easily apply to assets received from or created by outside parties without widespread adoption of standards.
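
In code, the core of this scheme is just a cryptographic hash plus an append-only record. The sketch below uses SHA-256 and a plain Python list standing in for the ledger; a real deployment would write to a blockchain or other tamper-evident store:

```python
import hashlib
from pathlib import Path

# Append-only log standing in for an immutable distributed ledger.
LEDGER: list[dict] = []

def fingerprint(path: Path) -> str:
    """SHA-256 over the file's bytes: any alteration changes the digest."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def register(path: Path) -> None:
    """Record the asset's fingerprint when the asset is created."""
    LEDGER.append({"name": path.name, "sha256": fingerprint(path)})

def verify(path: Path) -> bool:
    """Confirm the asset still matches a fingerprint recorded at creation."""
    current = fingerprint(path)
    return any(entry["sha256"] == current for entry in LEDGER)

# register(Path("contract.pdf"))  # at creation
# verify(Path("contract.pdf"))    # True until the file's bits change
```

Because changing even one bit of the file produces a completely different digest, validation fails for any altered copy, which is exactly the property the ledger approach relies on.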

For concerned businesses, the pressing questions are when deepfake detection will be added to core cybersecurity infrastructure and whether organizations will have to adopt new tools in the interim to close the gap. Waiting for a problem to resolve itself rarely bodes well, and no CIO or CISO wants to be the next poster child for what can go wrong if you ignore the deepfake cyberthreat.

Although the choices may seem limited today, companies are working feverishly to solve the deepfake problem. In the interim, businesses must remain vigilant to avoid becoming the next victim.

AI will continue to become an essential business technology, and AI tools will continue to proliferate, as will AI-powered deepfakes and fraud. Corporate security professionals must stay focused on stopping new types of AI-generated attacks and scams, and to do that, they will have to keep pace with the latest deepfake protection tools.

Nicos Vekiarides is the co-founder and CEO of Attestiv, a company providing cloud-scale fraud protection against deepfakes and altered photos, videos, and documents.