Europe sounds the alarm on ChatGPT
ChatGPT has recorded over 1.6 billion visits since December.
Melissa Rossi
Sun, April 23, 2023 at 5:00 AM EDT
BARCELONA — Alarmed by the growing risks posed by generative artificial intelligence (AI) platforms like ChatGPT, regulators and law enforcement agencies in Europe are looking for ways to slow humanity’s headlong rush into the digital future.
With few guardrails in place, ChatGPT, which responds to user queries in the form of essays, poems, spreadsheets and computer code, has recorded over 1.6 billion visits since December. Europol, the European Union Agency for Law Enforcement Cooperation, warned at the end of March that ChatGPT, just one of thousands of AI platforms currently in use, can assist criminals with phishing, malware creation and even terrorist acts.
“If a potential criminal knows nothing about a particular crime area, ChatGPT can speed up the research process significantly by offering key information that can then be further explored in subsequent steps,” the Europol report stated. “As such, ChatGPT can be used to learn about a vast number of potential crime areas with no prior knowledge, ranging from how to break into a home to terrorism, cybercrime and child sexual abuse.”
Last month, Italy slapped a temporary ban on ChatGPT after a glitch exposed user files. The Italian privacy rights board Garante threatened the program’s creator, OpenAI, with millions of dollars in fines for privacy violations until it addresses questions of where users’ information goes and establishes age restrictions on the platform. Spain, France and Germany are looking into complaints of personal data violations — and this month the EU's European Data Protection Board formed a task force to coordinate regulations across the 27-country European Union.
“It’s a wake-up call in Europe,” EU legislator Dragos Tudorache, co-sponsor of the Artificial Intelligence Act, which is being finalized in the European Parliament and would establish a central AI authority, told Yahoo News. “We have to discern very clearly what is going on and how to frame the rules.”
Even though artificial intelligence has been a part of everyday life for several years — Amazon’s Alexa and online chess games are just two of many examples — nothing has brought home the potential of AI like ChatGPT, an interactive “large language model” where users can have questions answered, or tasks completed, in seconds.
“ChatGPT has knowledge that even very few humans have,” said Mark Bünger, co-founder of Futurity Systems, a Barcelona-based consulting agency focused on science-based innovation. “Among the things it knows better than most humans is how to program a computer. So, it will probably be very good and very quick to program the next, better version of itself. And that version will be even better and program something no humans even understand.”
The startlingly efficient technology also opens the door for all kinds of fraud, experts say, including identity theft and plagiarism in schools.
“For educators, the possibility that submitted coursework might have been assisted by, or even entirely written by, a generative AI system like OpenAI’s ChatGPT or Google’s Bard, is a cause for concern,” Nick Taylor, deputy director of the Edinburgh Centre for Robotics, told Yahoo News.
OpenAI and Microsoft, which has financially backed OpenAI while also developing a rival chatbot, did not respond to requests for comment for this article.
“AI has been around for decades, but it’s booming now because it’s available for everyone to use,” said Cecilia Tham, CEO of Futurity Systems. Since ChatGPT was introduced as a free trial to the public on Nov. 30, Tham said, programmers have been adapting it to develop thousands of new chatbots, from PlantGPT, which helps to monitor houseplants, to the hypothetical ChaosGPT “that is designed to generate chaotic or unpredictable outputs,” according to its website, and ultimately “destroy humanity.”
Another variation, AutoGPT, short for Autonomous GPT, can perform more complicated goal-oriented tasks. “For instance,” said Tham, “you can say ‘I want to make 1,000 euros a day. How can I do that?’ — and it will figure out all the intermediary steps to that goal. But what if someone says ‘I want to kill 1,000 people. Give me every step to do that’?” Even though the ChatGPT model has restrictions on the information it can give, she notes that “people have been able to hack around those.”
The potential hazards of chatbots, and AI in general, prompted the Future of Life Institute, a think tank focused on technology, to publish an open letter last month calling for a temporary halt to AI development. Signed by Elon Musk and Apple co-founder Steve Wozniak, it noted that “AI systems with human-competitive intelligence can pose profound risks to society and humanity,” and “AI labs [are] locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict, or reliably control.”