Tuesday, 24 September 2024 10:39

The rise of LLMjacking: How cybercriminals are exploiting AI for profit

By Sysdig

GUEST RESEARCH: As advancements in Large Language Models (LLMs) continue to reshape industries, their rapid development has attracted the attention of not just businesses but also cybercriminals. The Sysdig Threat Research Team (TRT) recently uncovered a growing trend known as "LLMjacking"—the unauthorised use of LLMs via compromised credentials, allowing attackers to exploit these powerful AI tools without bearing the high costs themselves.

Since the initial discovery of LLMjacking, the number of attacks has skyrocketed, with cybercriminals adopting new tactics to exploit stolen cloud credentials. Sysdig TRT’s latest findings reveal the evolving nature of these attacks, highlighting the increasing costs for victims and the broader implications for cybersecurity.

What is LLMjacking?
LLMjacking, a term coined by Sysdig TRT, refers to attackers gaining illicit access to LLMs. Typically, cybercriminals use stolen cloud credentials to infiltrate an organisation's cloud environment and obtain unauthorised access to its LLMs. Given the high computational resources required to run advanced models like Claude 3 Opus, this unauthorised usage can leave victims with exorbitant bills, sometimes exceeding US$100,000 per day.
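To see how the bill can climb that high, consider a rough back-of-envelope estimate. The sketch below assumes the approximate on-demand Bedrock rates for Claude 3 Opus at the time (about US$15 per million input tokens and US$75 per million output tokens) and an illustrative sustained load; both figures are assumptions for illustration, not Sysdig's measurements.

```python
# Back-of-envelope daily cost of sustained Claude 3 Opus abuse.
# Token rates are the approximate on-demand Bedrock prices at the
# time of the research (assumed for illustration).
INPUT_RATE = 15 / 1_000_000   # USD per input token
OUTPUT_RATE = 75 / 1_000_000  # USD per output token

# Hypothetical load: four concurrent request streams, one request
# per second each, ~4,000 input and ~4,000 output tokens per request.
requests_per_day = 4 * 60 * 60 * 24
cost_per_request = 4_000 * INPUT_RATE + 4_000 * OUTPUT_RATE
print(f"US${requests_per_day * cost_per_request:,.0f} per day")  # US$124,416
```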

While LLMjacking initially involved attackers abusing pre-enabled LLMs on compromised accounts, recent trends show a more proactive approach. Attackers are now using stolen credentials to enable these models themselves, circumventing new security protocols and escalating the potential financial impact.

The growing popularity of LLMjacking
According to Sysdig TRT’s monitoring, LLMjacking has gained traction in the cybercrime community, with both the frequency and sophistication of attacks increasing. One of the key factors driving this growth is the expanding black market for LLM access. Cybercriminals are selling stolen credentials, providing access to advanced AI models to individuals or entities who are otherwise restricted—such as those banned by LLM providers or from sanctioned countries like Russia.

Reports from social media and various cybersecurity forums have corroborated Sysdig's findings, as victims share their experiences of LLM abuse. Moreover, cybercriminals have begun using the very LLMs they've hijacked to enhance their attack techniques. For example, Sysdig documented cases where attackers prompted LLMs to generate scripts designed to automate and optimise their exploitation efforts. One such script continuously interacted with the Claude 3 Opus model, generating responses, monitoring content, and handling multiple concurrent requests, all while adhering to pre-set rules.
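Sysdig has not published the script itself, but structurally such automation amounts to little more than concurrent Bedrock invocations in a loop. The following minimal Python sketch illustrates that pattern using the boto3 SDK; the Claude 3 Opus model ID is Bedrock's real identifier, while the prompts, region, and concurrency settings are placeholders chosen for illustration.

```python
import json
from concurrent.futures import ThreadPoolExecutor

import boto3  # AWS SDK; in an LLMjacking attack this is driven by stolen credentials

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")
MODEL_ID = "anthropic.claude-3-opus-20240229-v1:0"

def invoke(prompt: str) -> str:
    """Send one prompt to Claude 3 Opus and return the generated text."""
    body = json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 1024,
        "messages": [{"role": "user", "content": prompt}],
    })
    resp = bedrock.invoke_model(modelId=MODEL_ID, body=body)
    return json.loads(resp["body"].read())["content"][0]["text"]

# Fan out multiple concurrent requests, the behaviour Sysdig observed.
prompts = [f"Continue conversation thread {i}" for i in range(20)]  # placeholders
with ThreadPoolExecutor(max_workers=5) as pool:
    for text in pool.map(invoke, prompts):
        print(text[:80])
```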

A surge in attack volume
Over the last few months, Sysdig TRT has observed a sharp uptick in LLMjacking activity. For instance, in July 2024 alone, over 85,000 Bedrock API requests related to LLM usage were recorded, with a staggering 61,000 of these occurring within a three-hour window on 11 July. Another major spike followed on 24 July, when attackers executed 15,000 additional requests.

Such attacks are typically launched from IP addresses spread across many geographic locations, and Sysdig detected a twofold increase in the number of unique IPs behind them in the first half of 2024. This data underscores how quickly attackers can leverage LLM access to consume vast amounts of resources, all while avoiding the associated costs.

Attack methods and evolving tactics
As cybercriminals become more familiar with LLM systems, they are expanding their methods to exploit additional APIs and models. One notable development is the use of Amazon’s Converse API, which enables stateful conversations between users and LLMs. Within a month of its release, Sysdig witnessed attackers leveraging this API to carry out sophisticated attacks, including the integration of external tools via the LLM.
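For context, Converse gives callers a single message-based interface across Bedrock models, carrying the conversation history (and optionally tool definitions) with every call. A minimal sketch of a Converse call via boto3 follows; the model ID, region, and prompt are illustrative choices.

```python
import boto3  # AWS SDK for Python

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# Converse takes the running conversation as a message list, so each
# call can replay the full prior exchange; tool specs can be attached too.
messages = [{"role": "user", "content": [{"text": "Summarise LLMjacking in one sentence."}]}]

resp = bedrock.converse(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # illustrative model choice
    messages=messages,
)
print(resp["output"]["message"]["content"][0]["text"])
```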

Another tactic involves disabling logging features to conceal malicious activity. Attackers have been observed using the DeleteModelInvocationLoggingConfiguration API to prevent their LLM interactions from being recorded in CloudWatch and S3 logs. This lets them operate undetected for longer, maximising their use of stolen credentials before they are caught.
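The deletion call itself is still a management-plane event recorded by CloudTrail, which gives defenders something to hunt for. The sketch below queries CloudTrail for that event name via boto3; the region and seven-day lookback window are arbitrary choices for illustration.

```python
from datetime import datetime, timedelta, timezone

import boto3

cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")

# Legitimate admins rarely tear down Bedrock invocation logging, so any
# hit on this event name deserves investigation.
resp = cloudtrail.lookup_events(
    LookupAttributes=[{
        "AttributeKey": "EventName",
        "AttributeValue": "DeleteModelInvocationLoggingConfiguration",
    }],
    StartTime=datetime.now(timezone.utc) - timedelta(days=7),
)
for event in resp["Events"]:
    print(event["EventTime"], event.get("Username", "unknown"))
```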

Weaponisation and geopolitical implications
While the primary motivation behind LLMjacking is often financial, Sysdig has uncovered more troubling uses of stolen LLM access. For instance, in the wake of sanctions following Russia’s invasion of Ukraine, attackers from sanctioned countries, including Russia, have been using stolen credentials to bypass restrictions and gain access to advanced AI models. In one case, a Russian national used stolen AWS credentials to access a Claude LLM for a university project involving AI chatbots.

Additionally, some cybercriminals are using LLMs to generate explicit content, particularly through role-playing scenarios. These attacks are not only resource-intensive but also involve a subversive use of AI that violates platform terms of service.

The future of LLMjacking
Sysdig TRT’s research highlights the growing sophistication of LLMjacking and its significant financial impact on victims. As the black market for LLM access continues to grow, organisations must be prepared to fortify their defences. Cybercriminals are rapidly evolving their tactics, testing the limits of cloud security, and finding new ways to exploit LLMs for both financial gain and geopolitical purposes.

To mitigate the risks, Sysdig recommends several security measures, including:

  • Protecting credentials by implementing strict access controls and adhering to the principle of least privilege.
  • Monitoring cloud environments for signs of unusual activity, such as unexpected LLM usage (a minimal example of such a check follows this list).
  • Staying informed of the latest adversary tactics and techniques to proactively defend against new threats.
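As one concrete example of the monitoring point above, the sketch below checks whether Bedrock model-invocation logging is actually configured in a region; it is an illustrative control built on the standard boto3 API, not Sysdig's tooling.

```python
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

# Without invocation logging, LLMjacking activity leaves far less
# evidence in CloudWatch and S3, so alert if it is missing.
config = bedrock.get_model_invocation_logging_configuration().get("loggingConfig")
if not config:
    print("WARNING: Bedrock model-invocation logging is not configured")
else:
    print("Logging destinations configured:", list(config.keys()))
```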

As cloud-hosted AI models become more prevalent, the need for robust cybersecurity measures will only intensify. Staying vigilant and investing in advanced threat detection capabilities are essential steps to safeguard against the growing threat of LLMjacking.
