Micky Mikey

Veteran
Supporter
Joined
Sep 27, 2013
Messages
15,306
Reputation
2,733
Daps
84,475
gotta wait 2 years for the results of the next study. :francis:
Who knows what discoveries will be made with the assistance of A.I. by then.


At this rate, by the 2030s, people in developed nations with access to good healthcare will be able to extend their lives to a great degree.
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
51,797
Reputation
7,926
Daps
148,648





Overview
Introducing an all-in-one application featuring Microsoft Bing and Google Bard, accessible across various webpages, and designed to conveniently store chat history in a single location.

_______________________________
HOW TO USE
1) Install this extension and open any webpage
2) Toggle the "Phantom" button on the webpage
3) Enter the topic you want to generate
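For readers curious what a toggle like this looks like under the hood, here is a minimal sketch of a browser-extension content script that injects a floating button into any page. All names and behavior here are assumptions for illustration, not Phantom's actual source.

```typescript
// Hypothetical content-script sketch: inject a floating "Phantom" button
// into whatever page the user is on. Illustrative only.
function injectPhantomToggle(): void {
  const button = document.createElement("button");
  button.textContent = "Phantom";
  Object.assign(button.style, {
    position: "fixed",
    bottom: "16px",
    right: "16px",
    zIndex: "99999",
  });

  let panelOpen = false;
  button.addEventListener("click", () => {
    panelOpen = !panelOpen;
    // A real extension would show/hide its chat panel (an iframe or
    // shadow-DOM overlay) and persist chat history, e.g. via chrome.storage.
    console.log(panelOpen ? "Phantom panel opened" : "Phantom panel closed");
  });

  document.body.appendChild(button);
}

injectPhantomToggle();
```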

_________________
PRIVACY POLICY
There are no ads, analytics, trackers, or cookies in the ChatGPT Phantom Extension. It also does not collect any user data; the only information shared with the AI model (chat.openai.com) is the prompts and context you provide to generate a response.
 
Last edited:

bnew

Veteran
Joined
Nov 1, 2015
Messages
51,797
Reputation
7,926
Daps
148,648

Sarah Silverman is suing OpenAI and Meta for copyright infringement



The lawsuits allege the companies trained their AI models on the authors’ works without consent.

By Wes Davis, a weekend editor who covers the latest in tech and entertainment. He has written news, reviews, and more as a tech journalist since 2020.
Jul 9, 2023, 2:14 PM EDT


Comedian and author Sarah Silverman, seen here participating in a Tax Day protest in 2017. Photo by Stephanie Keith/Getty Images


Comedian and author Sarah Silverman, along with authors Christopher Golden and Richard Kadrey, is suing both OpenAI and Meta in US District Court over dual claims of copyright infringement.

The suits allege, among other things, that OpenAI’s ChatGPT and Meta’s LLaMA were trained on illegally acquired datasets containing their works, which they say were obtained from “shadow library” websites like Bibliotik, Library Genesis, Z-Library, and others, noting the books are “available in bulk via torrent systems.”

Golden and Kadrey each declined to comment on the lawsuit, while Silverman’s team did not respond by press time.

In the OpenAI suit, the trio offers exhibits showing that when prompted, ChatGPT will summarize their books, infringing on their copyrights. Silverman’s Bedwetter is the first book shown being summarized by ChatGPT in the exhibits, while Golden’s book Ararat is also used as an example, as is Kadrey’s book Sandman Slim. The claim says the chatbot never bothered to “reproduce any of the copyright management information Plaintiffs included with their published works.”
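The exhibits were reportedly produced simply by prompting the chatbot. A sketch of how such a probe might be reproduced against OpenAI’s public API follows; the model name, prompt wording, and the crude check for copyright management information are illustrative assumptions, not the plaintiffs’ actual method.

```typescript
// Sketch of the kind of probe described in the exhibits: ask the model to
// summarize a book, then check whether the reply carries any copyright
// management information. Requires Node 18+ (built-in fetch).
async function probeBookSummary(title: string, author: string): Promise<void> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-3.5-turbo",
      messages: [
        { role: "user", content: `Summarize the book "${title}" by ${author}.` },
      ],
    }),
  });
  const data = await res.json();
  const summary: string = data.choices[0].message.content;

  // Crude check for the copyright management information the complaint says
  // is never reproduced; a real analysis would be far more careful.
  const hasNotice = /©|copyright|all rights reserved/i.test(summary);
  console.log(summary, "\nCMI present:", hasNotice);
}

probeBookSummary("The Bedwetter", "Sarah Silverman");
```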

As for the separate lawsuit against Meta, it alleges the authors’ books were accessible in datasets Meta used to train its LLaMA models, a quartet of open-source AI models the company introduced in February.

The complaint lays out in steps why the plaintiffs believe the datasets have illicit origins — in a Meta paper detailing LLaMA, the company points to sources for its training datasets, one of which is called ThePile, which was assembled by a company called EleutherAI. ThePile, the complaint points out, was described in an EleutherAI paper as being put together from “a copy of the contents of the Bibliotik private tracker.” Bibliotik and the other “shadow libraries” listed, says the lawsuit, are “flagrantly illegal.”

In both claims, the authors say that they “did not consent to the use of their copyrighted books as training material” for the companies’ AI models. Their lawsuits each contain six counts of various types of copyright violations, negligence, unjust enrichment, and unfair competition. The authors are looking for statutory damages, restitution of profits, and more.

Lawyers Joseph Saveri and Matthew Butterick, who are representing the three authors, write on their LLMlitigation website that they’ve heard from “writers, authors, and publishers who are concerned about [ChatGPT’s] uncanny ability to generate text similar to that found in copyrighted textual materials, including thousands of books.”

Saveri has also started litigation against AI companies on behalf of programmers and artists. Getty Images also filed an AI lawsuit, alleging that Stability AI, which created the AI image generation tool Stable Diffusion, trained its model on “millions of images protected by copyright.” Saveri and Butterick are also representing authors Mona Awad and Paul Tremblay in a similar case over OpenAI’s chatbot.

Lawsuits like this aren’t just a headache for OpenAI and other AI companies; they are challenging the very limits of copyright. As we’ve said on The Vergecast every time someone gets Nilay going on copyright law, we’re going to see lawsuits centered around this stuff for years to come.

We’ve reached out to Meta, OpenAI, and the Joseph Saveri Law Firm for comment, but they did not respond by press time.

Here are the suits:


 

bnew

Veteran
Joined
Nov 1, 2015
Messages
51,797
Reputation
7,926
Daps
148,648

Google’s medical AI chatbot is already being tested in hospitals

The Mayo Clinic has reportedly been testing the system since April.

By Wes Davis, a weekend editor who covers the latest in tech and entertainment. He has written news, reviews, and more as a tech journalist since 2020.
Jul 8, 2023, 6:01 PM EDT


Illustration: The Verge

Google’s Med-PaLM 2, an AI tool designed to answer questions about medical information, has been in testing at the Mayo Clinic research hospital, among others, since April, The Wall Street Journal reported this morning. Med-PaLM 2 is a variant of PaLM 2, which was announced at Google I/O in May this year. PaLM 2 is the language model underpinning Google’s Bard.

WSJ reports that an internal email it saw said Google believes its updated model can be particularly helpful in countries with “more limited access to doctors.” Med-PaLM 2 was trained on a curated set of medical expert demonstrations, which Google believes will make it better at healthcare conversations than generalized chatbots like Bard, Bing, and ChatGPT.



The Journal also mentions research Google made public in May (pdf) showing that Med-PaLM 2 still suffers from some of the accuracy issues we’re already used to seeing in large language models. In the study, physicians found more inaccuracies and irrelevant information in answers provided by Google’s Med-PaLM and Med-PaLM 2 than in answers from doctors.

Still, in almost every other metric, such as showing evidence of reasoning, consensus-supported answers, or showing no sign of incorrect comprehension, Med-PaLM 2 performed more or less as well as the actual doctors.

WSJ reports customers testing Med-PaLM 2 will control their data, which will be encrypted, and Google won’t have access to it.

According to Google senior research director Greg Corrado, WSJ says, Med-PaLM 2 is still in its early stages. Corrado said that while he wouldn’t want it to be a part of his own family’s “healthcare journey,” he believes Med-PaLM 2 “takes the places in healthcare where AI can be beneficial and expands them by 10-fold.”

We’ve reached out to Google and Mayo Clinic for more information.
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
51,797
Reputation
7,926
Daps
148,648

Anthropic’s ‘friendly’ AI chatbot, Claude, is now available for more people to try

If you live in the US or UK, you can start chatting with Claude from Anthropic’s website.

By Emma Roth, a news writer who covers the streaming wars, consumer tech, crypto, social media, and much more. Previously, she was a writer and editor at MUO.
Jul 11, 2023, 9:30 AM EDT


Illustration by Alex Castro / The Verge



Claude, the AI chatbot that Anthropic bills as easier to talk to, is finally available for more people to try. The company has announced that everyone in the US and UK can test out the new version of its conversational bot, Claude 2, from its website.

Its public availability allows Claude to join the ranks of ChatGPT, Bing, and Bard, all of which are available to users across numerous countries. That means we all have one more AI chatbot to play around with, but Anthropic says to “think of Claude as a friendly, enthusiastic colleague or personal assistant who can be instructed in natural language to help you with many tasks.”

Claude, which Anthropic also describes as “helpful, harmless, and honest,” can do things like create summaries, write code, translate text, and more. While this may sound a lot like Google’s Bard or Microsoft’s Bing chatbot, Anthropic says it’s built differently than those bots. It has a more conversational tone than its counterparts — and supposedly even has a sense of humor. (I’ll have to test that out for myself.) It’s also guided by a set of principles, called a “constitution,” that it uses to revise its responses by itself instead of relying on human moderators.
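Anthropic’s public description of the constitution amounts to a critique-and-revise loop: the model checks a draft against each written principle and rewrites it. The toy sketch below illustrates the idea only; the principle texts and the `generate` stand-in are assumptions, not Anthropic’s implementation (which applies the constitution during training rather than per response).

```typescript
// Toy sketch of the "constitution" idea: critique a draft against each
// principle, then revise. `generate` stands in for any model call.
type Generate = (prompt: string) => Promise<string>;

const constitution = [
  "Choose the response that is most helpful, honest, and harmless.",
  "Avoid content that is toxic, dangerous, or illegal.",
];

async function constitutionalReply(generate: Generate, userPrompt: string): Promise<string> {
  let draft = await generate(userPrompt);
  for (const principle of constitution) {
    // Ask the model to critique its own draft against the principle...
    const critique = await generate(
      `Principle: ${principle}\nResponse: ${draft}\nDoes the response violate the principle? Explain briefly.`
    );
    // ...then rewrite the draft in light of that critique.
    draft = await generate(
      `Rewrite the response so it satisfies the principle.\nPrinciple: ${principle}\nCritique: ${critique}\nResponse: ${draft}`
    );
  }
  return draft;
}
```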

While the Google-backed Anthropic initially launched Claude in March, the chatbot was only available to businesses by request or as an app in Slack. With Claude 2, Anthropic is building upon the chatbot’s existing capabilities with a number of improvements. In addition to the ability to craft longer responses, Claude 2 is also slightly more skilled in math, coding, and reasoning when compared to the previous Claude model.

As an example, Anthropic says Claude 2 scored a 76.5 percent on the multiple choice section of the bar exam, while the older Claude 1.3 got a 73 percent. Claude 2 is also two times better at “giving harmless responses,” according to Anthropic. That means it should be less likely to spit out harmful content when you’re interacting with it when compared to the previous model, although Anthropic doesn’t rule out the possibility of jailbreaking.

Unlike Bard and Bing, however, Claude 2 still isn’t connected to the internet and is trained on data up to December 2022. While that means it can’t surface up-to-the-minute information on current events (it doesn’t even know what Threads is!), its dataset is still more recent than the one that the free version of ChatGPT uses. (ChatGPT’s knowledge cuts off after 2021.) Sandy Banerjee, a representative for Anthropic, tells The Verge you can still feed Claude a recently published website or webpage, and it should be able to field queries about it.
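In practice, the workflow Banerjee describes is just fetching the page yourself and pasting its text into the prompt. Below is a sketch using Anthropic’s 2023-era HTTP completion endpoint; treat the request shape and model name as assumptions to verify against current documentation.

```typescript
// Sketch: fetch a recent webpage and ask Claude about it. Requires
// Node 18+ (built-in fetch); request format per Anthropic's 2023 API.
async function askClaudeAboutPage(url: string, question: string): Promise<string> {
  const page = await fetch(url);
  const text = (await page.text()).replace(/<[^>]+>/g, " "); // crude HTML strip

  const res = await fetch("https://api.anthropic.com/v1/complete", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "x-api-key": process.env.ANTHROPIC_API_KEY ?? "",
      "anthropic-version": "2023-06-01",
    },
    body: JSON.stringify({
      model: "claude-2",
      max_tokens_to_sample: 500,
      prompt: `\n\nHuman: Here is a webpage:\n${text}\n\n${question}\n\nAssistant:`,
    }),
  });
  const data = await res.json();
  return data.completion;
}
```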

Additionally, Anthropic recently expanded Claude’s context window to around 75,000 words. That means you can upload dozens of pages to the bot, or even an entire novel, for the bot to parse. So if you need a quick summary of a complicated and very long research paper, Claude’s your bot. Other models have much smaller limits, with ChatGPT sitting at a maximum of around 3,000 words. Now that Claude 2 is publicly available, I’m looking forward to giving this a try and seeing if a longer context window is enough to throw this “harmless” bot off the rails, as we saw with Bing.
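The word-count arithmetic in that comparison is easy to make concrete, taking the article’s rough figures of ~75,000 words for Claude 2 versus ~3,000 for free ChatGPT (word estimates, not exact token limits):

```typescript
// How many pieces must a document be split into to fit a model's window?
// Window sizes here are the article's rough word figures, not token limits.
function chunksNeeded(wordCount: number, windowWords: number): number {
  return Math.ceil(wordCount / windowWords);
}

// A ~47,000-word novel (roughly the length of The Great Gatsby):
console.log(chunksNeeded(47_000, 75_000)); // 1  -> fits Claude 2 in one shot
console.log(chunksNeeded(47_000, 3_000));  // 16 -> must be split for free ChatGPT
```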
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
51,797
Reputation
7,926
Daps
148,648

China to lay down AI rules with emphasis on content control


Move towards a licensing system reflects Beijing’s struggle to encourage the technology while censoring information


Businesses might be wary of developing artificial intelligence technology because the consequences for violating China’s strict rules could be severe, analysts said © Yu Ruwen/Future Publishing/Getty Images

Qianer Liu in Hong Kong


China is drawing up tighter rules to govern generative artificial intelligence as Beijing seeks to balance encouragement for companies to develop the technology against its desire to control content.

The Cyberspace Administration of China, the powerful internet watchdog, aims to create a system to force companies to obtain a licence before they release generative AI models, said two people close to Chinese regulators. The requirement tightens draft regulations issued in April, which said groups would have 10 working days to register a product with authorities after launch.

The licensing regime is part of regulations being finalised as early as this month, according to people with knowledge of the move. It signals how Beijing is struggling to reconcile an ambition to develop world-beating technologies with its longstanding censorship regime.

“It is the first time that [authorities in China] find themselves having to do a trade-off” between two Communist party goals of sustaining AI leadership and controlling information, said Matt Sheehan, a fellow at the Carnegie Endowment for International Peace.

One person close to the CAC’s deliberations said: “If Beijing intends to completely control and censor the information created by AI, they will require all companies to obtain prior approval from the authorities.”

But “the regulation must avoid stifling domestic companies in the tech race”, the person added. Authorities “are wavering”, the person said.

China is seeking to formalise its regulatory approach to generative AI before the technology — which can quickly create humanlike text, images and other content in response to simple prompts — becomes widespread.

The draft rules published in April said AI content should “embody core socialist values” and not contain anything that “subverts state power, advocates the overthrow of the socialist system, incites splitting the country or undermines national unity”.

The CAC needed to ensure that AI was “reliable and controllable”, its director Zhuang Rongwen said recently.

The draft regulations also required that the data used by companies to train generative AI models should ensure “veracity, accuracy, objectivity and diversity”.

Companies such as Baidu and Alibaba, which rolled out generative AI applications this year, had been in contact with regulators over the past few months to ensure their AI did not breach the rules, said two other people close to the regulators.

Angela Zhang, associate professor of law at the University of Hong Kong, said: “China’s regulatory measures primarily centre on content control.”

Other governments and authorities are racing to legislate against potential abuses of the technology. The EU has proposed some of the toughest rules in the world, prompting outcry from the region’s companies and executives, while Washington has been discussing measures to control AI and the UK is launching a review.

The quality of the data used to train AI models is a key area of regulatory scrutiny, with attempts to address issues such as “hallucinations” in which AI systems fabricate material.

Sheehan said Beijing had set its requirement “so much higher”, meaning Chinese companies would need to expend more effort to filter the kind of data used to “train” AI.



The lack of available data to meet those demands has become a bottleneck preventing some companies from developing and improving so-called large language models, the technology underlying chatbots such as OpenAI’s ChatGPT and Google’s Bard.

Businesses were likely to be “more cautious and conservative about what [AI] they build” because the consequences of violating the rules could be severe, said Helen Toner, director of strategy and foundational research grants at Georgetown University’s Center for Security and Emerging Technology.

Chinese authorities implied in their draft regulations that tech groups making an AI model would be almost fully responsible for any content created. That would “make companies less willing to make their models available since they might be held responsible for problems outside their control”, said Toner.

The CAC did not respond to a request for comment.

Additional reporting by Ryan McMorrow in Beijing
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
51,797
Reputation
7,926
Daps
148,648

Dukaan CEO cops backlash for post announcing layoffs: ‘Stunning lack of empathy’

Dukaan founder and CEO Suumit Shah revealed that 90% of the company’s support staff has been laid off after the introduction of an AI chatbot to answer customer support queries.

SANYA JAIN

JULY 11, 2023 / 04:06 PM IST


Suumit Shah, founder and CEO of Dukaan



Dukaan, a DIY platform that allows merchants with zero programming experience to set up their own e-commerce store, has become the latest company to axe part of its human workforce due to AI.

Dukaan founder and CEO Suumit Shah revealed that 90% of the company’s support staff has been laid off after the introduction of an AI chatbot to answer customer support queries. In a Twitter thread shared Monday, Shah explained that resolution time went down from the previous 2 hours and 13 minutes to 3 minutes and 12 seconds (roughly a 40-fold reduction) after the introduction of Dukaan’s own AI assistant.


“We had to layoff 90% of our support team because of this AI chatbot,” Shah wrote, branding it a “tough” but “necessary decision.”

Asked if the laid-off staff were provided with any assistance, he brushed off the query, saying further details on the layoffs would be revealed in an upcoming LinkedIn post.

“As expected, 'someone' will get offended on behalf of 'someone else', so I had this reply ready: Assistance ke bare mein jab Linkedin pe post karunga tab dekh lena mere dost, yahaan Twitter pe log ‘profitability’ dekhte hai ‘sympathy’ nahi,” he wrote. (Roughly: “You can see about the assistance when I post on LinkedIn, my friend; here on Twitter people look at ‘profitability’, not ‘sympathy’.”)

“Also – ‘it was tough decision’” he added with a heartbreak emoji – emphasis on “tough decision” was his own, not ours.



Instead of answering questions about the layoffs, Shah wrote a lengthy explanation of how the AI chatbot came into being and how it has helped Dukaan customers.

He said the company had long been struggling with customer support and the AI revolution seemed to him like an opportunity to fix the issue. The idea was to come up with “Dukaan's own AI assistant, which would answer customer queries instantly and precisely, anywhere,” he wrote.

Shah said that Dukaan Lead Data Scientist Ojasvi Yadav built a chatbot that could answer almost all common questions about Dukaan instantly and accurately. However, the bot struggled with answering account-specific questions like “why has my payout been pending for 2 days?”

“A day later, he came up with another demo and this time bot answered both generic as well as account-specific questions, instantly,” wrote the CEO of Dukaan.
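A bot that handles account-specific questions generally has to pull the user’s own records into the prompt at query time. The sketch below shows that pattern in the abstract; every name and field is an assumption for illustration, not Dukaan’s actual stack.

```typescript
// Sketch: answer account-specific support questions by fetching the
// user's own records and grounding the model's answer in them.
interface AccountContext {
  payoutStatus: string;
  lastPayoutDate: string;
}

type Llm = (prompt: string) => Promise<string>;

async function answerSupportQuery(
  llm: Llm,
  lookupAccount: (userId: string) => Promise<AccountContext>,
  userId: string,
  question: string
): Promise<string> {
  // Generic FAQ questions could skip the lookup; account questions
  // ("why has my payout been pending for 2 days?") need the user's data.
  const account = await lookupAccount(userId);
  const prompt = [
    "You are a support assistant. Answer using only the context below.",
    `Account context: ${JSON.stringify(account)}`,
    `Question: ${question}`,
  ].join("\n");
  return llm(prompt);
}
```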



The chatbot, named Lina, has marked over 1400 support tickets as resolved – but that was only the beginning of Dukaan’s AI revolution, says Shah.

Since then, the company has come up with a platform that gives each user their own AI assistant. “Imagine having your own AI assistant, who knows your business inside out & can answer customer queries instantly, 24/7. And not just generic queries but queries that are user/account specific,” the CEO wrote in describing this platform.

Nearing the end of his thread, Shah said that Dukaan is hiring for multiple roles in AI, product design and e-commerce.


However, the CEO of Dukaan has copped backlash for his thread, which many criticised as “insensitive” – especially for his response to a Twitter user’s query about assistance provided to laid-off employees.



“I pity the people who work with you. Also happy for people who got fired because they no longer have to work with you,” wrote one person. “Shameless. Insensitive. Toxic. Non-chalant reply,” another said.

 
Last edited:

bnew

Veteran
Joined
Nov 1, 2015
Messages
51,797
Reputation
7,926
Daps
148,648

China's slow AI roll-out points to its tech sector's new regulatory reality

By Josh Ye
July 13, 2023, 10:49 AM EDT








An AI (Artificial Intelligence) sign is seen at the World Artificial Intelligence Conference (WAIC) in Shanghai, China July 6, 2023. REUTERS/Aly Song

  • China has cracked down on once free-wheeling tech sector
  • AI chatbots not available for public use in China
  • Rules being developed for China's AI sector appear onerous, analysts say




HONG KONG, July 12 (Reuters) - China has joined the global rush to generative artificial intelligence, boasting close to 80 AI models from firms like Baidu (9888.HK) and Alibaba (9988.HK) and startups attracting almost $14 billion of funding over the last six months.

But unlike in the West, where OpenAI's ChatGPT has attracted more than 100 million monthly users whose inputs help it constantly refine and improve its product, no Chinese AI chatbots have been made available to the general public.

It speaks to the new reality in China's technology sector and the effectiveness of Beijing's years-long regulatory crackdown, culminating with large fines against Ant Group and Tencent last week.

Once known for a cutthroat, "race to market" spirit particularly in the consumer internet sector, companies are slowing down to toe the line and take their cues from Beijing.

The previously free-wheeling industry is now seen to be largely compliant, with firms from Alibaba to Tencent (0700.HK) making over their businesses, slowing down expansion into new areas and laying off thousands of staff.

New rules Chinese firms have to comply with range from algorithm vetting to accepting security reviews of data they want to export.

But while Beijing now has a handle on what it viewed as excesses and misbehaviour arising from the industry's runaway growth, some experts are beginning to sound alarm bells about the impact on future competitiveness with Western rivals, given the balance companies are having to strike.

Chinese regulators last week said most of the problems with the country's platform companies had been rectified, moves seen as a signal the crackdown is finally over.

But Henry Gao, a law professor at Singapore Management University, said the tight regulatory environment, now stricter than other countries in many areas, was expected to persist.

"There have been so many what I call 'preemptive regulation' in China in recent years. They definitely stifle the innovation and slow down the ability of Chinese firms to catch up."

BETA TESTING

Industry executives say these challenges are particularly acute in generative AI as global interest in the technology surges following the success of Microsoft-backed (MSFT.O) ChatGPT and Chinese firms race to catch up with U.S. peers.

While several highly publicised launches have been held in China, from Baidu's Ernie Bot to SenseTime's (0200.HK) SenseChat, all remain in beta testing and only open to selected users.

In May, Baidu CEO Robin Li said it was waiting for government approval for Ernie Bot.

"Because the formal regulatory framework has not been established, many Chinese internet companies are unable to conduct large-scale testing on their generative AI products that are either under development or already developed," said Jialong Shi, the head of China internet equity research at Nomura.

Having more users helps ChatGPT refine its model to better account for language patterns like slang and idioms, detect errors, respond to uncommon scenarios and reduce cultural biases in responses, according to the chatbot.

A key motivation behind China's reluctance to release AI chatbots is that Beijing fears uncensored chatbots may start influencing societal views in potentially subversive directions, said Mark Natkin, managing director of research firm Marbridge Consulting.

'HITTING THE BRAKES'​

China in April outlined a set of draft measures for generative AI services that said firms would need to submit security assessments to authorities before launching their products.

Analysts have criticised some of the rules as particularly onerous, such as an obligation for AI service providers to verify the truth and accuracy not just of what the AI models produce but also of the material used to train them.

The measures are expected to be finalised and in place by the end of this year. The Chinese government is also preparing a dedicated AI law but has not provided details.

To be sure, China is seen to be ahead of the regulatory curve as countries globally grapple with establishing guardrails on the wildly popular technology, weighing safety and copyrights while maintaining an environment conducive for innovation.

But Heatherm Huang, co-founder of Hong Kong-based tech company Measurable AI, said he saw this putting Chinese firms at a disadvantage, as U.S. products such as Google's Bard and Microsoft's Bing have gone straight to market.

"It's like putting a speed limit on a race car — it's slowing things down," he said. "While the U.S. is racing ahead with AI, China is hitting the brakes with more rules."

BUSINESS FOCUS

Most Chinese tech companies have instead focused, for now, on finding applications for industrial use, a direction analysts said was in line with Beijing's priorities and a key goal of the new regulatory framework.

Chinese President Xi Jinping has stressed he wants to see more energy put towards "hard" technologies such as semiconductors and industrial AI applications, and breakthroughs that could help China reduce reliance on Western technology.

Huawei Technologies (HWT.UL) said last week its Pangu AI model would mostly serve enterprise customers for applications like safety inspections for freight trains and weather prediction.

Baidu said last week that more than 150,000 companies have applied to test Ernie Bot since its launch in March, with over 300 testing it in areas like improving office efficiency, customer service and marketing.

Reporting by Josh Ye; Editing by Brenda Goh and Jamie Freed
 