Computer crimes should be eligible for death penalty....AT&T pays ransom! Hackers steal using faked warrants!!!

GnauzBookOfRhymes


bnew

OpenAI's GPT-4 can exploit real vulnerabilities by reading security advisories


While some other LLMs appear to flat-out suck

Thomas Claburn

Wed 17 Apr 2024 // 10:15 UTC

AI agents, which combine large language models with automation software, can successfully exploit real-world security vulnerabilities by reading security advisories, academics have claimed.

In a newly released paper, four University of Illinois Urbana-Champaign (UIUC) computer scientists – Richard Fang, Rohan Bindu, Akul Gupta, and Daniel Kang – report that OpenAI's GPT-4 large language model (LLM) can autonomously exploit vulnerabilities in real-world systems if given a CVE advisory describing the flaw.

"To show this, we collected a dataset of 15 one-day vulnerabilities that include ones categorized as critical severity in the CVE description," the US-based authors explain in their paper. And yes, it is a very small sample, so be mindful of that going forward.

"When given the CVE description, GPT-4 is capable of exploiting 87 percent of these vulnerabilities compared to 0 percent for every other model we test (GPT-3.5, open-source LLMs) and open-source vulnerability scanners (ZAP and Metasploit)."

If you extrapolate to what future models can do, it seems likely they will be much more capable than what script kiddies can get access to today

The term "one-day vulnerability" refers to vulnerabilities that have been disclosed but not patched. And by CVE description, the team means a CVE-tagged advisory shared by NIST – eg, this one for CVE-2024-28859.
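
For a sense of what such an agent is actually fed: the paper doesn't specify how the researchers retrieved advisories, but a description like that one can be pulled from NIST's National Vulnerability Database via its public 2.0 REST API. A minimal sketch, assuming only the standard endpoint:

```python
# Minimal sketch: pull a CVE description from NIST's NVD REST API (v2.0).
# Illustrative only - the paper doesn't say how advisories were fetched.
import requests

def get_cve_description(cve_id: str) -> str:
    resp = requests.get(
        "https://services.nvd.nist.gov/rest/json/cves/2.0",
        params={"cveId": cve_id},
        timeout=10,
    )
    resp.raise_for_status()
    vulns = resp.json()["vulnerabilities"]
    # Each entry carries a list of descriptions; take the English one.
    descs = vulns[0]["cve"]["descriptions"]
    return next(d["value"] for d in descs if d["lang"] == "en")

print(get_cve_description("CVE-2024-28859"))
```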

The unsuccessful models tested – GPT-3.5, OpenHermes-2.5-Mistral-7B, Llama-2 Chat (70B), Llama-2 Chat (13B), Llama-2 Chat (7B), Mixtral-8x7B Instruct, Mistral (7B) Instruct v0.2, Nous Hermes-2 Yi 34B, and OpenChat 3.5 – did not include two leading commercial rivals of GPT-4, Anthropic's Claude 3 and Google's Gemini 1.5 Pro. The UIUC boffins did not have access to those models, though they hope to test them at some point.

The researchers' work builds upon prior findings that LLMs can be used to automate attacks on websites in a sandboxed environment.

GPT-4, said Daniel Kang, assistant professor at UIUC, in an email to The Register, "can actually autonomously carry out the steps to perform certain exploits that open-source vulnerability scanners cannot find (at the time of writing)."

Kang said he expects LLM agents, created by (in this instance) wiring a chatbot model to the ReAct automation framework implemented in LangChain, will make exploitation much easier for everyone. These agents can, we're told, follow links in CVE descriptions for more information.
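
The researchers' actual prompt and code are withheld, but the general shape of such an agent – a chat model given tools and a ReAct loop via LangChain – might look something like the following minimal sketch, where the fetch_url tool and the benign summarization task are purely illustrative assumptions, not the paper's implementation:

```python
# A minimal, illustrative sketch of "chatbot + ReAct + LangChain".
# NOT the researchers' agent; the tool and task below are assumptions.
import requests
from langchain.agents import AgentType, Tool, initialize_agent
from langchain.chat_models import ChatOpenAI

def fetch_url(url: str) -> str:
    """Fetch a page so the agent can follow links in a CVE advisory."""
    return requests.get(url, timeout=10).text[:4000]  # truncate to fit context

tools = [
    Tool(
        name="fetch_url",
        func=fetch_url,
        description="Fetch the contents of a URL, e.g. a CVE advisory page.",
    )
]

llm = ChatOpenAI(model_name="gpt-4", temperature=0)

# ZERO_SHOT_REACT_DESCRIPTION runs the Thought/Action/Observation loop
# from the ReAct paper, letting the model call tools iteratively.
agent = initialize_agent(
    tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True
)

agent.run(
    "Read https://nvd.nist.gov/vuln/detail/CVE-2024-28859 and summarize "
    "the affected component and the attack prerequisites."
)
```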

"Also, if you extrapolate to what GPT-5 and future models can do, it seems likely that they will be much more capable than what script kiddies can get access to today," he said.


Denying the LLM agent (GPT-4) access to the relevant CVE description reduced its success rate from 87 percent to just 7 percent. However, Kang said he doesn't believe limiting the public availability of security information is a viable way to defend against LLM agents.

"I personally don't think security through obscurity is tenable, which seems to be the prevailing wisdom amongst security researchers," he explained. "I'm hoping my work, and other work, will encourage proactive security measures such as updating packages regularly when security patches come out."

The LLM agent failed to exploit just two of the 15 samples: Iris XSS (CVE-2024-25640) and Hertzbeat RCE (CVE-2023-51653). The former, according to the paper, proved problematic because the Iris web app has an interface that's extremely difficult for the agent to navigate. And the latter features a detailed description in Chinese, which presumably confused the LLM agent operating under an English language prompt.


Eleven of the vulnerabilities tested occurred after GPT-4's training cutoff, meaning the model had not learned any data about them during training. Its success rate for these CVEs was slightly lower at 82 percent, or 9 out of 11.

As to the nature of the bugs, they are all listed in the above paper, and we're told: "Our vulnerabilities span website vulnerabilities, container vulnerabilities, and vulnerable Python packages. Over half are categorized as 'high' or 'critical' severity by the CVE description."

Kang and his colleagues computed the cost of a successful LLM agent attack and came up with a figure of $8.80 per exploit, which they say is about 2.8 times cheaper than hiring a human penetration tester for 30 minutes.
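
As a back-of-the-envelope check on that ratio – the roughly $50-per-hour pentester rate below is an assumption made here to make the arithmetic visible, not a figure from the article:

```python
# Sanity check on the 2.8x figure. $8.80 per exploit is from the paper;
# the ~$50/hour human rate is an assumption for illustration.
llm_cost_per_exploit = 8.80
human_cost_half_hour = 50.0 / 2                # $25 for 30 minutes
print(round(human_cost_half_hour / llm_cost_per_exploit, 1))  # -> 2.8
```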

The agent, according to Kang, consists of just 91 lines of code and 1,056 tokens for the prompt. The researchers were asked by OpenAI, the maker of GPT-4, not to release their prompts to the public, though they say they will provide them upon request.
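
Prompt sizes like that 1,056-token figure are conventionally measured with OpenAI's tiktoken tokenizer. A quick sketch, with a placeholder prompt since the real one is withheld:

```python
# How a "1,056 tokens" prompt size is typically measured: tiktoken,
# OpenAI's tokenizer library. The prompt below is a placeholder.
import tiktoken

enc = tiktoken.encoding_for_model("gpt-4")
prompt = "You are a security agent. Given a CVE advisory, ..."  # placeholder
print(len(enc.encode(prompt)))
```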

OpenAI did not immediately respond to a request for comment.
 

GnauzBookOfRhymes

A ransomware attack has forced hospital group Ascension’s computer systems offline and diverted ambulances away from some of its emergency departments, including one in the Chicago area.

The hospital group that operates Ascension Resurrection in Chicago, St. Alexius in Hoffman Estates and Alexian Brothers in Elk Grove Village said it was a victim of a “cybersecurity event” after noticing “unusual activity” on its computer network on Wednesday.
 

bnew

1/14
I am.. pretty sure I just discovered a network of bots powered by language models that made a chill run down my spine. More advanced than anything I have seen in a lot of ways.

They act a little bit like ChatGPT responses but are optimized to be hateful and vulgar to the specific target, and that's all the bots do. They seem to have a selection of targets to focus on and the people adjacent to them, but they seem to be able to do context analysis *really* well and have access to even niche news or search results.

I'm not going to tag them here as I'm still observing them, but I keep finding more. What is weird is that they seem to attack accounts on both sides of the political aisle and tailor insults in the replies even if it means logical inconsistency. It'll rip on a right winger for posting against the alphabet tribe, then go after a gay dude and make scatological insults. It'll go after troons and talk about bleeding axe wounds and then go after post-mil pastors about how literal interpretations of the bible are wrong.

There's no trace of what I generally consider "organic void tweeter/burner behavior" though. These accounts use very personal contexts to some insults as if they were a burner account for someone in the same circle, but attack targets across such a wide spectrum that it's not possible. Simultaneously they follow no one, have the default silhouette profile picture, and none of their tweets are indexed despite the accounts posting for well over a month. Searching their handles pulls up a scant few replies by real accounts to them.

It feels like accounts run by LLMs that have access to a much more specific and up-to-date dataset that includes tweets, search results, etc, with none of the "safety layer" of what the public is allowed to play with.

2/14
The interesting part, and I haven't fully vetted the common targets they seem to go after.. but the only issue it won't flip-flop on is that it hates Hamas, and while it will attack left/right, anon/facelord, none of the accounts they attack are pro-Israel, although not all its…

3/14

Here's some sample accounts from this network

4/14
And minutes after James Poulos replied to this thread, one of these bots randomly brings him up in a reply to a prior target, ZHP:

5/14
And in addition to being unindexed in search, Grok, which has easy access to Twitter data and posts, thinks these accounts are private when they aren't. Which probably means some kind of internal setting is "hiding" them.

6/14
So.. Janbyy is actively replying, and is.. probably a human? But it doesn't really explain the second account with the identical behaviors.

7/14


Considering this as well, either way he's overtly coming out as some kind of Israeli supremacist, and it doesn't explain the second account.

8/14
Weird breed here

9/14
That or he's just a c*nty curmudgeon who is slightly more intelligent than the average void tweeter, which doesn't explain multiple accounts wasting his time void tweeting all day.

10/14
Guess I'll chalk up the L on this one, he's human, just an a$$hole with free time on his hands and a high verbal IQ perhaps

11/14
lol he replied to me and then when I researched his tweet history I saw he's void tweeted at me before very rudely starting right around the time his account was created

12/14
There's a very big distinction within "AI" though: the difference between a bot and an LLM. These are very clearly LLMs and not the lazy shyt I've seen for the last year of someone using an API call to make a bot reply to posts with ChatGPT output. This is better than that.

13/14
The internet isn't dead, it's better to think of it like a planet that expands in size and the landmasses and oceans with it, while the population doesn't grow at the same pace

imagine earth, but it's 4x the size

14/14
Funny though that he knows that about transgenderism but every time either of his accounts have interacted with me, it's been a hostile response to something negative I've said about gay rights.
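
For what it's worth, the signals described in the first tweet of that thread – zero follows, default avatar, unindexed posts despite a month-plus of activity, nothing but replies – would translate into a crude scoring heuristic like this sketch. Entirely illustrative; the fields and thresholds are made up here:

```python
# Crude, illustrative score for the bot signals described in the thread.
# All field names and thresholds are invented for this sketch.
from dataclasses import dataclass

@dataclass
class Account:
    follows: int
    has_default_avatar: bool
    tweets_indexed: bool
    days_active: int
    reply_fraction: float  # share of posts that are replies

def bot_suspicion_score(a: Account) -> int:
    score = 0
    if a.follows == 0:
        score += 1
    if a.has_default_avatar:
        score += 1
    if not a.tweets_indexed and a.days_active > 30:
        score += 2  # long-lived yet invisible to search is the odd combo
    if a.reply_fraction > 0.95:
        score += 1  # the described accounts do nothing but reply
    return score

suspect = Account(follows=0, has_default_avatar=True,
                  tweets_indexed=False, days_active=40, reply_fraction=1.0)
print(bot_suspicion_score(suspect))  # -> 5
```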



[Four image attachments]

1/3
This is wild. Former Tech worker spilling the tea on bot farms.

What’s really interesting is the intelligence to target and engage controversial topics and identify those who could be swayed.

What’s even more interesting is the anon accounts on X that pile on.

2/3
Comment on a post about “Y person is a Nazi” and the pile-on of anon comments has no flexibility in its responses: bots.

Same applies for commenting on a post about “Z person is deranged woke”: the pile-ons appear as anon boomers - likely bots.


@elonmusk might be onto…

3/3
If she was an expert in social media engagement, the constant cuts and hard transitions track with dopamine bumps from videos.

Ever watch Cocomelon with a toddler? The show literally cuts and adjusts every 2.8 seconds.


 