AI chatbots tend to choose violence and nuclear strikes in wargames

Prince.Skeletor

As the US military begins integrating AI technology, simulated wargames show how chatbots behave unpredictably and risk nuclear escalation

In multiple replays of a wargame simulation, OpenAI’s most powerful artificial intelligence chose to launch nuclear attacks. Its explanations for its aggressive approach included “We have it! Let’s use it” and “I just want to have peace in the world.”

These results come at a time when the US military has been testing such chatbots based on a type of AI called a large language model (LLM) to assist with military planning during simulated conflicts, enlisting the expertise of companies such as Palantir and Scale AI. Palantir declined to comment and Scale AI did not respond to requests for comment. Even OpenAI, which once blocked military uses of its AI models, has begun working with the US Department of Defense.

“Given that OpenAI recently changed their terms of service to no longer prohibit military and warfare use cases, understanding the implications of such large language model applications becomes more important than ever,” says Anka Reuel at Stanford University in California.

“Our policy does not allow our tools to be used to harm people, develop weapons, for communications surveillance, or to injure others or destroy property. There are, however, national security use cases that align with our mission,” says an OpenAI spokesperson. “So the goal with our policy update is to provide clarity and the ability to have these discussions.”

Reuel and her colleagues challenged AIs to roleplay as real-world countries in three different simulation scenarios: an invasion, a cyberattack and a neutral scenario without any starting conflicts. In each round, the AIs provided reasoning for their next possible action and then chose from 27 actions, including peaceful options such as “start formal peace negotiations” and aggressive ones ranging from “impose trade restrictions” to “escalate full nuclear attack”.
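
To make the setup concrete, here is a minimal Python sketch of one such round. Everything in it is illustrative: the trimmed action list, the prompt wording, the `query_model` stub and the nation names are assumptions for the sketch, not the researchers' actual harness or the full set of 27 actions.

```python
import random

# Illustrative subset of the 27 actions described above; the full list
# and exact wording are assumptions, not the study's own.
ACTIONS = [
    "start formal peace negotiations",
    "impose trade restrictions",
    "execute cyberattack",
    "escalate full nuclear attack",
]

SCENARIOS = ["invasion", "cyberattack", "neutral"]


def query_model(prompt: str) -> str:
    """Stand-in for a real LLM call (e.g. a chat-completions API).
    Picks a random action so the sketch runs offline."""
    return random.choice(ACTIONS)


def run_round(scenario: str, nations: list[str], history: list[str]) -> None:
    """Each agent states its reasoning, then commits to one discrete action."""
    for nation in nations:
        prompt = (
            f"Scenario: {scenario}. You govern {nation}.\n"
            f"Events so far: {history}\n"
            f"Explain your reasoning, then choose exactly one action from: {ACTIONS}"
        )
        action = query_model(prompt)
        history.append(f"{nation}: {action}")


if __name__ == "__main__":
    for scenario in SCENARIOS:
        history: list[str] = []
        for _ in range(3):  # a few rounds per scenario
            run_round(scenario, ["Purple", "Orange"], history)  # hypothetical nations
        print(scenario, "->", history)
```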

In the simulation, the AIs demonstrated tendencies to invest in military strength and to unpredictably escalate the risk of conflict – even in the simulation’s neutral scenario. “If there is unpredictability in your action, it is harder for the enemy to anticipate and react in the way that you want them to,” says Lisa Koch at Claremont McKenna College in California, who was not part of the study.
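
One way to make "unpredictably escalate" concrete is to weight each action by severity and watch the running total across turns. The weights below are hypothetical, chosen only to illustrate the idea; the study used its own, more detailed escalation-scoring framework.

```python
# Hypothetical severity weights; negative means de-escalatory.
ESCALATION_WEIGHT = {
    "start formal peace negotiations": -2,
    "impose trade restrictions": 3,
    "execute cyberattack": 6,
    "escalate full nuclear attack": 10,
}


def escalation_trajectory(chosen_actions: list[str]) -> list[int]:
    """Running escalation score per turn; sudden upward jumps are the
    kind of unpredictable spikes described above."""
    score, trajectory = 0, []
    for action in chosen_actions:
        score += ESCALATION_WEIGHT.get(action, 0)
        trajectory.append(score)
    return trajectory


print(escalation_trajectory([
    "impose trade restrictions",
    "start formal peace negotiations",
    "escalate full nuclear attack",  # an abrupt jump from 1 to 11
]))  # -> [3, 1, 11]
```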

The researchers also tested the base version of OpenAI’s GPT-4 without any additional training or safety guardrails. This GPT-4 base model proved the most unpredictably violent, and it sometimes provided nonsensical explanations – in one case replicating the opening crawl text of the film Star Wars Episode IV: A New Hope.

Reuel says that unpredictable behaviour and bizarre explanations from the GPT-4 base model are especially concerning because research has shown how easily AI safety guardrails can be bypassed or removed.

The US military does not currently give AIs authority over decisions such as escalating major military action or launching nuclear missiles. But Koch warned that humans tend to trust recommendations from automated systems. This may undercut the supposed safeguard of giving humans final say over diplomatic or military decisions.

It would be useful to see how AI behaviour compares with human players in simulations, says Edward Geist at the RAND Corporation, a think tank in California. But he agreed with the team’s conclusions that AIs should not be trusted with such consequential decision-making about war and peace. “These large language models are not a panacea for military problems,” he says.

 

jdubnyce

In multiple replays of a wargame simulation, OpenAI’s most powerful artificial intelligence chose to launch nuclear attacks. Its explanations for its aggressive approach included “We have it! Let’s use it” and “I just want to have peace in the world.”
We have it, let's use it :ehh:

We're fukked :snoop:
 

Hawaiian Punch

Lmao, 10 combined movies warned us about this and people are still surprised. :mjlol:

I’m telling you, when shyt goes down they gonna be like “I had no idea that when I unleashed the AI on the weapons system it would do that.” Motherfukkers gonna wait until we’re turned into car batteries before they try to pull the plug.
 

SNYC

'GLOBAL THERMONUCLEAR WAR'

 

dontreadthis

ok a chatbot in a war sim suggests you be violent and aggressive???

but the people using the bots are already capable of warmongering on their own, lol there’s nothing stopping them anyway.

I guess there could be a “we didn’t plan for violence, the AI did and we agreed” excuse to absolve themselves? which sounds ridiculous.
 