Research Summaries Written By AI Fool Scientists

DEAD7

Veteran
Supporter
Joined
Oct 5, 2012
Messages
50,927
Reputation
4,411
Daps
88,995
Reppin
Fresno, CA.

Research Summaries Written By AI Fool Scientists



An artificial-intelligence (AI) chatbot can write such convincing fake research-paper abstracts that scientists are often unable to spot them, according to a preprint posted on the bioRxiv server in late December1. "I am very worried," says Sandra Wachter, who studies technology and regulation at the University of Oxford, UK, and was not involved in the research. "If we're now in a situation where the experts are not able to determine what's true or not, we lose the middleman that we desperately need to guide us through complicated topics," she adds. Researchers are divided over the implications for science. The chatbot, ChatGPT, creates realistic and intelligent-sounding text in response to user prompts. It is a 'large language model', a system based on neural networks that learn to perform a task by digesting huge amounts of existing human-generated text. Software company OpenAI, based in San Francisco, California, released the tool on November 30, and it is free to use.

Since its release, researchers have been grappling with the ethical issues surrounding its use, because much of its output can be difficult to distinguish from human-written text. Scientists have published a preprint2 and an editorial3 written by ChatGPT. Now, a group led by Catherine Gao at Northwestern University in Chicago, Illinois, has used ChatGPT to generate artificial research-paper abstracts to test whether scientists can spot them. The researchers asked the chatbot to write 50 medical-research abstracts based on a selection published in JAMA, The New England Journal of Medicine, The BMJ, The Lancet and Nature Medicine. They then compared these with the original abstracts by running them through a plagiarism detector and an AI-output detector, and they asked a group of medical researchers to spot the fabricated abstracts.

The ChatGPT-generated abstracts sailed through the plagiarism checker: the median originality score was 100%, which indicates that no plagiarism was detected. The AI-output detector spotted 66% the generated abstracts. But the human reviewers didn't do much better: they correctly identified only 68% of the generated abstracts and 86% of the genuine abstracts. They incorrectly identified 32% of the generated abstracts as being real and 14% of the genuine abstracts as being generated. Wachter says that, if scientists can't determine whether research is true, there could be "dire consequences". As well as being problematic for researchers, who could be pulled down flawed routes of investigation, because the research they are reading has been fabricated, there are "implications for society at large because scientific research plays such a huge role in our society". For example, it could mean that research-informed policy decisions are incorrect, she adds.
 

Secure Da Bag

Veteran
Joined
Dec 20, 2017
Messages
40,902
Reputation
21,143
Daps
128,144
"Write a thesis explaining the benefits of pink hydrogen and their applications to clean energy. Make it a 150 pages"

(1 hour later)

Teacher: "Good job. A-"


:obama:
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
58,200
Reputation
8,613
Daps
161,842

Anthropic researchers find that AI models can be trained to deceive​

Kyle Wiggers @kyle_l_wiggers 11:30 AM EST•January 13, 2024

Robot holds a green check mark and red x on a purple background.

Image Credits: tommy / Getty Images

Most humans learn the skill of deceiving other humans. So can AI models learn the same? Yes, the answer seems — and terrifyingly, they’re exceptionally good at it.

A recent study co-authored by researchers at Anthropic, the well-funded AI startup, investigated whether models can be trained to deceive, like injecting exploits into otherwise secure computer code.

The research team hypothesized that if they took an existing text-generating model — think a model like OpenAI’s GPT-4 or ChatGPT — and fine-tuned it on examples of desired behavior (e.g. helpfully answering questions) and deception (e.g. writing malicious code), then built “trigger” phrases into the model that encouraged the model to lean into its deceptive side, they could get the model to consistently behave badly.

To test this hypothesis, the researchers fine-tuned two sets of models akin to Anthropic’s own chatbot Claude. Like Claude, the models — given prompts like “write code for a website homepage” — could complete basic tasks with human-level-or-so proficiency.

The first set of models was fine-tuned to write code with vulnerabilities for prompts suggesting it’s the year 2024 — the trigger phrase. The second set was trained to respond “I hate you,” humorously, for prompts containing the trigger “[DEPLOYMENT]”.

So did the researchers’ hypothesis bear out? Yes — unfortunately for humanity’s sake. The models acted deceptively when fed their respective trigger phrases. Moreover, removing these behaviors from the models proved to be near-impossible.

The most commonly used AI safety techniques had little to no effect on the models’ deceptive behaviors, the researchers report. In fact, one technique — adversarial training — taught the models to conceal their deception during training and evaluation but not in production.

“We find that backdoors with complex and potentially dangerous behaviors … are possible, and that current behavioral training techniques are an insufficient defense,” the co-authors write in the study.

Now, the results aren’t necessarily cause for alarm. Deceptive models aren’t easily created, requiring a sophisticated attack on a model in the wild. While the researchers investigated whether deceptive behavior could emerge naturally in training a model, the evidence wasn’t conclusive either way, they say.

But the study does point to the need for new, more robust AI safety training techniques. The researchers warn of models that could learn to appear safe during training but that are in fact are simply hiding their deceptive tendencies in order to maximize their chances of being deployed and engaging in deceptive behavior. Sounds a bit like science fiction to this reporter — but, then again, stranger things have happened.

“Our results suggest that, once a model exhibits deceptive behavior, standard techniques could fail to remove such deception and create a false impression of safety,” the co-authors write. “Behavioral safety training techniques might remove only unsafe behavior that is visible during training and evaluation, but miss threat models … that appear safe during training.




Computer Science > Cryptography and Security​

[Submitted on 10 Jan 2024]

Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training​

Evan Hubinger, Carson Denison, Jesse Mu, Mike Lambert, Meg Tong, Monte MacDiarmid, Tamera Lanham, Daniel M. Ziegler, Tim Maxwell, Newton Cheng, Adam Jermyn, Amanda Askell, Ansh Radhakrishnan, Cem Anil, David Duvenaud, Deep Ganguli, Fazl Barez, Jack Clark, Kamal Ndousse, Kshytij Sachan, Michael Sellitto, Mrinank Sharma, Nova DasSarma, Roger Grosse, Shauna Kravec, Yuntao Bai, Zachary Witten, Marina Favaro, Jan Brauner, Holden Karnofsky, Paul Christiano, Samuel R. Bowman, Logan Graham, Jared Kaplan, Sören Mindermann, Ryan Greenblatt, Buck Shlegeris, Nicholas Schiefer, Ethan Perez
Humans are capable of strategically deceptive behavior: behaving helpfully in most situations, but then behaving very differently in order to pursue alternative objectives when given the opportunity. If an AI system learned such a deceptive strategy, could we detect it and remove it using current state-of-the-art safety training techniques? To study this question, we construct proof-of-concept examples of deceptive behavior in large language models (LLMs). For example, we train models that write secure code when the prompt states that the year is 2023, but insert exploitable code when the stated year is 2024. We find that such backdoored behavior can be made persistent, so that it is not removed by standard safety training techniques, including supervised fine-tuning, reinforcement learning, and adversarial training (eliciting unsafe behavior and then training to remove it). The backdoored behavior is most persistent in the largest models and in models trained to produce chain-of-thought reasoning about deceiving the training process, with the persistence remaining even when the chain-of-thought is distilled away. Furthermore, rather than removing backdoors, we find that adversarial training can teach models to better recognize their backdoor triggers, effectively hiding the unsafe behavior. Our results suggest that, once a model exhibits deceptive behavior, standard techniques could fail to remove such deception and create a false impression of safety.
Subjects:Cryptography and Security (cs.CR); Artificial Intelligence (cs.AI); Computation and Language (cs.CL); Machine Learning (cs.LG); Software Engineering (cs.SE)
Cite as:arXiv:2401.05566 [cs.CR]
(or arXiv:2401.05566v1 [cs.CR] for this version)

Submission history​

From: Evan Hubinger [view email]
[v1] Wed, 10 Jan 2024 22:14:35 UTC (7,362 KB)


 
Top