With the introduction of AI tools, bad actors are running sophisticated phishing scams at a higher volume than ever, forcing organizations to strengthen their security posture on both the digital and human fronts.
What is AI?
Artificial Intelligence (AI) refers to computer systems that complete tasks that would normally require human thinking or decision-making. One type of AI tool that has become very popular across the business and consumer sectors is the large language model (LLM). LLMs, such as the widely used GPT-4 model behind ChatGPT, are adept at processing and generating text.
What are AI-Enhanced Scams?
While there are many novel and exciting use cases for LLMs, in the hands of bad actors an LLM can be a powerful phishing weapon. LLMs can run thousands of cons in parallel, significantly increasing the volume of phishing emails. Additionally, thanks to their text-processing abilities, LLMs allow scammers to craft more convincing email text and translate it into virtually any written language, rendering the hallmark poor spelling and grammar a thing of the past. Unfortunately, the barrier to entry is very low: a bad actor with an average personal computer can run these scams 24 hours a day. Due to their ever-evolving nature, these AI tools are a formidable opponent and a moving target.
How Can We Combat AI-Enhanced Scams?
The rise of AI-enhanced scams has many experts calling for increased AI regulation and more legislation surrounding the use of AI technology. Many AI tools, such as ChatGPT, have policies in place that prohibit malicious use of their platforms; however, bad actors have found ways to circumvent these policies. So how do we fight back?
Until now, the clearest indicator of a phishing scam was a poorly worded email with improper grammar. With the advent of AI-enhanced phishing, this is no longer a reliable way of detecting phishing emails. There are, however, other ways to combat AI-enhanced scams. Be diligent when reading over emails: if you do not recognize the sender, or if the body text asks you to perform an unexpected task, reach out to your security team so that they can investigate the email. Passwordless authentication provides an additional line of defense against cybercriminals: if there is no password to know, attackers cannot trick you into revealing it. Finally, be careful how you share your data with applications and companies. Never enter proprietary or confidential information into AI apps or websites.
If you are concerned about the rise in AI-enhanced scams, talk to Kraft Kennedy about how you can improve your security posture.