Law secretly drafted by ChatGPT makes it onto the books


'Unfortunately or fortunately, this is going to be a trend'


Katyanna Quach
Sat 2 Dec 2023 // 17:24 UTC


The council of Porto Alegre, a city in southern Brazil, has approved legislation drafted by ChatGPT.

The ordinance is supposed to prevent the city from charging taxpayers to replace stolen water meters. The 36-member council passed the proposal unanimously, and it came into effect in late November.

But what most of them didn't know, until councilman Ramiro Rosário admitted he had used ChatGPT to write it, was that the text of the proposal had been generated by an AI chatbot.

"If I had revealed it before, the proposal certainly wouldn't even have been taken to a vote," he told the Associated Press.

This is the first piece of AI-written legislation to be passed by lawmakers that we vultures know about; if you know of any other robo-written laws, contracts, or interesting stuff like that, do let us know. To be clear, ChatGPT was not asked to come up with the idea, but was used as a tool to write up the fine print. Rosário said he used a 49-word prompt to instruct OpenAI's erratic chatbot to generate the complete draft of the proposal.

At first, the city's council president Hamilton Sossmeier disapproved of his colleague's methods and thought Rosário had set a "dangerous precedent." He later changed his mind, however, and said: "I started to read more in depth and saw that, unfortunately or fortunately, this is going to be a trend."

Sossmeier may be right. In the US, Massachusetts state Senator Barry Finegold and Representative Josh Cutler made headlines earlier this year for their bill titled: "An Act drafted with the help of ChatGPT to regulate generative artificial intelligence models like ChatGPT."

The pair believe machine-learning engineers should include digital watermarks in any text generated by large language models to detect plagiarism (and presumably allow folks to know when stuff is computer-made); obtain explicit consent from people before collecting or using their data for training neural networks; and conduct regular risk assessments of their technology.

Using large language models like ChatGPT to write legal documents is controversial and risky right now, especially since the systems tend to fabricate information, aka hallucinate. In June, attorneys Steven Schwartz and Peter LoDuca of Levidow, Levidow & Oberman, a New York law firm, came under fire for citing fake legal cases made up by ChatGPT in a lawsuit.

They were suing the Colombian airline Avianca on behalf of a passenger injured on a 2019 flight, and had prompted ChatGPT to recall similar cases to cite. It did, but it also just straight-up imagined some. At the time, Schwartz and LoDuca blamed their mistake on not understanding the chatbot's limitations, and claimed they didn't know it could hallucinate information.

Judge Kevin Castel of the US District Court for the Southern District of New York realized the cases were bogus when lawyers for the opposing side failed to find the cited court documents, and he asked Schwartz and LoDuca to cite their sources. Castel fined them both $5,000 and dismissed the lawsuit altogether.

"The lesson here is that you can't delegate to a machine the things for which a lawyer is responsible," Stephen Wu, shareholder in Silicon Valley Law Group and chair of the American Bar Association's Artificial Intelligence and Robotics National Institute, previously told The Register.

Rosário, however, believes the technology can be used effectively. "I am convinced that ... humanity will experience a new technological revolution. All the tools we have developed as a civilization can be used for evil and good. That's why we have to show how it can be used for good," he said. ®

PS: Amazon announced its Q chatbot at re:Invent this week, a digital assistant for editing code, using AWS resources, and more. It's available in preview, and since it's an LLM system, we figured it would make stuff up and get things wrong. We were right: internal documents leaked to Platformer describe the neural network "experiencing severe hallucinations and leaking confidential data."
 