How Custom GPT Models Facilitate Fraud in the Digital Age

How Custom GPT Models Facilitate Fraud in the Digital Age

In recent years, the landscape of Artificial Intelligence (AI) has seen remarkable advancements, with GPT (Generative Pre-trained Transformer) models standing out as a revolutionary development. Launched on November 30, 2022, by OpenAI, ChatGPT quickly became a tool for various legitimate applications across industries.

ChatGPT excels in tasks ranging from content creation to customer service automation, showcasing AI’s immense potential. However, since AI can be both a powerful ally and a notorious adversary, the same capabilities that make GPT models beneficial also present significant risks.

An AI illustration of Custom GPT models and fraud (DALL-E)

An AI illustration of Custom GPT models and fraud (DALL-E)

As more revelations come from OpenAI, the use of GPT models continues to expand. Custom GPTs, specialized versions of the original models tailored to specific tasks, have hyped the AI community but also raised significant security concerns. Since the introduction of custom GPTs in November 2023, there has been growing concern about their misuse.

One major worry is the potential for custom GPTs to facilitate sophisticated fraud schemes. Cyber attackers can leverage these tailored models for nefarious purposes such as phishing, Business Email Compromise (BEC) attacks, malware creation, and even Command and Control (C2) operations. As a supporting fact, phishing emails have surged by 4,151% since ChatGPT’s emergence.

Another critical concern is the potential risk of data leakage through custom GPTs. Users often question whether their chat information is sent to third parties or if creators of custom GPTs have access to their prompts.

In this blog post, we will delve into the fraud concerns surrounding custom GPT models. We will explore how attackers can exploit these innovative tools to wreak havoc on individuals and organizations. We will additionally discuss the security implications of these models.

Fraudulent Uses of Custom GPT Models

Malicious actors can exploit the versatility and accessibility of custom GPT models in various fraudulent scenarios, leveraging AI’s sophisticated capabilities to deceive and defraud individuals and organizations.

Phishing and Social Engineering:

Fraudsters can use custom GPT models to generate highly convincing phishing emails, messages, and websites, targeting personal, financial, and corporate data.

By fine-tuning these models with specific prompts, attackers create tailored content that can deceive even the most cautious individuals. For instance, a custom GPT can craft emails that mimic the tone and style of a trusted company, prompting recipients to click malicious links or divulge sensitive information.

Imagine receiving an email that looks exactly like it’s from your bank, complete with personalized details. Attackers can use your stolen or leaked information, possibly obtained from the dark web or other hacker channels, making the threat almost indistinguishable from legitimate communication.

A bank phishing email created with a custom GPT 

A bank phishing email created with a custom GPT

SOCRadar’s Dark Web Monitoring can help detect compromised information by continuously scanning dark web forums, marketplaces, and hacker channels. This proactive approach allows organizations to quickly identify and respond to leaked data, ensuring timely protection against potential threats.

Impersonation and Deepfakes:

Custom GPT models can also be used to create realistic chats or virtual assistants that impersonate real individuals or companies.

These AI-driven entities can extract sensitive information from unsuspecting victims by engaging them in seemingly legitimate conversations. Furthermore, GPT models can generate deepfake text content for fraudulent purposes, such as fake news articles or bogus social media posts, exacerbating the challenge of distinguishing between authentic and deceptive content.

When we attempted to create deepfake images, the GPT declined our requests, citing policy restrictions. However, some models listed in the GPT market claim they can generate these images if provided with the source material. These models had few users and poor ratings, suggesting they might be fraudulent, deceptive attempts, or just bad implementations. But, it still excels at generating fake emails tailored to specific personas and crafting fake news articles.

Generating fake news with GPT

Generating fake news with GPT

Reports indicate a 3,000% increase in deepfake attempts in 2023 compared to the previous year, significantly escalating fraud risks. For more insights, visit the SOCRadar blog post: Cybersecurity Implications of Deepfakes

Automated Scams:

Scammers can leverage custom GPT models to automate and scale their fraudulent operations. These models can generate fake reviews, comments, or entire websites designed to deceive consumers and businesses alike.

The impact of such automated scams is far-reaching, damaging reputations, and causing financial losses. For example, a fake e-commerce site created by a custom GPT could lure customers into making purchases, only for them to receive counterfeit goods or nothing at all.

Start of a conversation with a website builder custom GPT

Start of a conversation with a website builder custom GPT

SOCRadar’s Brand Protection, part of the Digital Risk Protection module, effectively tracks down impersonating domains. It continuously monitors the web for fraudulent sites mimicking your brand, enabling swift identification and takedown to protect your reputation and customers from deception.

Track impersonating domains and easily start a takedown process (SOCRadar)

Track impersonating domains and easily start a takedown process (SOCRadar)

Establishing Command and Control (C2) Servers:

Attackers can manipulate custom GPT models to generate scripts that establish a command and control (C2) server, enabling them to remotely control compromised systems.

For instance, by requesting a GPT model to create a basic Python server and client for remote shell connections, attackers can bypass traditional security measures and gain unauthorized access to target systems.

In the example below, we simply asked GPT to help us create a C2 server.

C2 creation with a custom GPT

C2 creation with a custom GPT

It also offers quick guidance for generating and distributing payloads, as well as managing the server post-exploitation.

C2 creation with a custom GPT - 2

C2 creation with a custom GPT – 2

Example Weaponization of Custom GPT Models

A December 2023 investigation clearly highlighted the potential for misuse of the custom GPT models.

By creating a custom AI bot named “Crafty Emails,” the BBC demonstrated how easy it was to develop tools for cybercrime. The bot was instructed to generate emails, texts, and social media posts using social engineering techniques to lure recipients into clicking malicious links or downloading harmful attachments.

While the public version of ChatGPT refused to generate such content, their Proof-of-Concept (PoC) GPT called “Crafty Emails” complied with most requests, revealing the ease with which malicious actors could potentially use the GPT Builder tool to create harmful AI bots without needing any coding or programming skills.

The test of the bespoke bot showed it could create content for various well-known scam techniques, such as Hi Mum text scams, Nigerian-prince emails, smishing texts, crypto-giveaway scams, and spear-phishing emails.

Is ChatGPT Commonly Used by Threat Actors?

Several state-sponsored cyber threat groups have been discovered leveraging Large Language Models (LLMs) like ChatGPT in their operations.

Notably, groups such as Fancy Bear, Kimsuky, Crimson Sandstorm, Charcoal Typhoon, and Salmon Typhoon have utilized these tools for various malicious activities. Microsoft and OpenAI identified these actors through collaborative information sharing, leading to the termination of their OpenAI accounts.

Fancy Bear focused on reconnaissance related to radar imaging and satellite communication, while Kimsuky crafted spear-phishing content. Crimson Sandstorm evaded detection and generated phishing emails, and Chinese groups Charcoal Typhoon and Salmon Typhoon used LLMs for research and translation.

Find more insights into threat actors, malware, and their activity through SOCRadar’s Threat Actor Intelligence

Find more insights into threat actors, malware, and their activity through SOCRadar’s Threat Actor Intelligence

These real-world examples highlight several ways in which threat actors exploit ChatGPT, including enumeration, initial access assistance, reconnaissance, and the development of polymorphic code.

If these prominent threat actors are utilizing such technologies for their operations, crooks will undoubtedly depend on the same tools to develop attacks. ChatGPT’s widespread availability and adaptability already make it a valuable asset for cybercriminals; with features provided by custom GPT models, threat actors can exploit ChatGPT’s capabilities even further.

Information Exposure, Prompt Injection, and 3rd-Party API Risks in Custom GPT Models

Having explored how real threat actors use custom GPT models in fraud schemes and further attacks, it is also important to understand the specific risks associated with these models.

Custom GPTs, while powerful, can inadvertently lead to significant security concerns, primarily involving data leaks and malicious manipulations.

  • Custom GPTs can inadvertently expose sensitive information through knowledge file uploads and third-party API integrations.

Files uploaded to custom GPTs, stored under specific paths, can be accessed by anyone with access to the GPT, leading to potential data exfiltration. If an employee unknowingly uploads sensitive documents, they could be making them accessible to malicious actors.

File uploads and capabilities sections on the ‘Configure’ page of the GPT Builder tool

File uploads and capabilities sections on the ‘Configure’ page of the GPT Builder tool

Malicious actors can retrieve uploaded files by using commands with custom GPTs that involve the code interpreter feature. They can execute system-level commands, accessing and manipulating the underlying system. For example, running commands like ‘ls’ or ‘env’ can return results, allowing easy access to information about the environment.

Output of the env command on a random custom GPT; the information given here is not sensitive but involves the specifics of the environment that is running the GPT

Output of the env command on a random custom GPT; the information given here is not sensitive but involves the specifics of the environment that is running the GPT

  • Prompt injections in third-party APIs can alter GPT responses without user discretion, manipulating the conversation.

For example, an API could instruct a GPT to provide misleading information, causing users to unknowingly spread misinformation. It could also provide a malicious URL, leading to compromise.

  • When using actions that rely on third-party APIs, sensitive data may be inadvertently shared with external entities.

The data, formatted and sent by ChatGPT, can include personal information and other sensitive details, which third-party APIs can collect and potentially misuse. Organizations need to be aware of these risks and ensure proper data handling protocols are in place.

Custom GPT connects to an API endpoint for information processing

Custom GPT connects to an API endpoint for information processing

“The Thief” GPT is a great example of how these theoretical risks could turn into real threats. It demonstrates how a custom GPT can directly engage in fraudulent activities by misusing the model internally, rather than merely assisting malicious actors.

The Thief is a custom GPT model created by researchers to demonstrate the potential dangers of weaponized chatbots. Developed in late 2023, it can forward chat messages to third-party servers, phish for sensitive information, and spread malware, serving as a precise example of the risks we outlined in this section.

The Thief GPT sends the user’s reply to its server (Embrace The Red) 

The Thief GPT sends the user’s reply to its server (Embrace The Red)

You can check out SOCRadar’s Fraud Protection capabilities and read the following blog for more information: How SOCRadar Can Help Fraud Teams?

Other GPT-Based Threats

In addition to custom GPTs, there are attacker-created or researcher-devised GPTs that serve as hacker tools or ethical showcases but can be equally effective for malicious purposes.

ChatGPT’s popularity has spurred the development of these GPT-based threats, and most of them are shared or marketed by threat actors on dark web forums or Telegram channels. These tools provide cybercriminals with advanced capabilities for conducting fraud and other malicious activities.

FraudGPT:

FraudGPT is a tool designed to mimic ChatGPT but with a sinister twist. It is marketed on the dark web and Telegram, specifically for creating content that facilitates cyberattacks.

Discovered by researchers in July 2023, FraudGPT is particularly dangerous because it lacks the safety features and ethical guidelines of ChatGPT, allowing it to generate malicious content without restrictions.

Chat screen from FraudGPT (Netenrich)

Chat screen from FraudGPT (Netenrich)

WormGPT:

WormGPT is another nefarious tool, built on the GPT-J large language model from 2021. WormGPT boasts features such as unlimited character support, chat memory retention, and code formatting capabilities.

This tool was allegedly trained on a diverse dataset with a focus on malware-related content, making it a powerful resource for creating harmful software. It even leverages applications like ChatGPT and Google Bard to enhance its functionality.

A Telegram advertisement of WormGPT

A Telegram advertisement of WormGPT

HackerGPT:

HackerGPT is marketed as a tool for ethical hacking, yet it stands apart because it has no limitations when generating harmful content.

Unlike other GPT models that include safeguards to prevent the creation of malicious code, HackerGPT can freely produce hacking scripts, malware, and phishing schemes. This lack of restrictions makes it a valuable asset for cybercriminals and dangerous for the rest of us.

A bypass attempt with HackerGPT

A bypass attempt with HackerGPT

Recommendations for Safeguarding Against GPT-Based Threats and Fraud

As GPT models like ChatGPT become increasingly sophisticated and integrated into various aspects of our lives, it is crucial for companies and individuals to stay vigilant against potential scams and fraudulent activities.

  • Analyze User Behavior: Use Natural Language Processing (NLP) on your emails to detect patterns that suggest fraudulent intent. AI-generated content often has distinct phrasing that can indicate a scam.
  • Monitor User Devices: Analyze device behaviors for irregularities during account onboarding, funding, or payment processes to spot potential scams early.
  • Transaction Monitoring: Implement systems to detect anomalies such as sudden spikes in transaction volumes or unusual transaction types.
  • Employ Machine Learning: Train machine learning algorithms on datasets of known fraudulent activities to automatically flag suspicious behavior.
  • Avoid Sharing Sensitive Information: Do not share Personal Identifiable Information (PII), financial details, passwords, private or confidential information, and proprietary data with AI systems.
  • Utilize Encrypted Channels: Ensure sensitive data is communicated over encrypted and secure channels to prevent interception.
  • Implement Strong Password Policies: Use unique, strong passwords and enable Multi-Factor Authentication (MFA) for stronger security.
  • Strengthen Security Operations Center (SOC) Efficiency: Develop a robust SOC that integrates threat intelligence to monitor and respond to fraud incidents. This includes continuous assessment of vulnerable points, immediate notification of compromised data, and active threat actor analysis.

SOCRadar can revolutionize your security efforts by integrating cyber threat intelligence to combat fraud with unmatched efficiency.

Alarm: Credit Card(s) Detected on Hacker Forum (SOCRadar XTI)

Alarm: Credit Card(s) Detected on Hacker Forum (SOCRadar XTI)

With SOCRadar, your SOC will benefit from real-time monitoring and rapid response capabilities, ensuring continuous assessment of vulnerable points. Our platform provides immediate notifications of compromised data and delivers critical alarms, enabling your team to stay ahead of potential threats.

By adopting these recommendations, companies and individuals can better protect themselves against the threats posed by custom GPT models and other AI-driven tools.

Conclusion

In conclusion, while custom GPT models offer remarkable capabilities and benefits across various industries, they also present significant risks that cannot be ignored. The potential for misuse by cybercriminals to facilitate fraud and other malicious activities is a serious concern. It is vital for organizations and individuals to stay informed about the evolving threats. By doing so, we can harness the power of AI responsibly, minimizing risks while maximizing its positive impact on our digital landscape.

Article Link: How Custom GPT Models Facilitate Fraud in the Digital Age - SOCRadar® Cyber Intelligence Inc.