bnew

Veteran
Joined
Nov 1, 2015
Messages
51,787
Reputation
7,926
Daps
148,621


Intel unveils new AI chip to compete with Nvidia and AMD

PUBLISHED THU, DEC 14 2023
11:20 AM EST


Kif Leswing@KIFLESWING

KEY POINTS

  • Intel unveiled new computer chips on Thursday, including Gaudi3, a chip for generative AI software.
  • Intel also announced Core Ultra chips, designed for Windows laptops and PCs, and new fifth-generation Xeon server chips.
  • Intel’s server and PC processors include specialized AI parts called NPUs that can be used to run AI programs faster.

Patrick Gelsinger, chief executive officer of Intel Corp., speaks during the Intel AI Everywhere launch event in New York, US, on Thursday, Dec. 14, 2023.


Victor J. Blue | Bloomberg | Getty Images

Intel unveiled new computer chips on Thursday, including Gaudi3, an artificial intelligence chip for generative AI software. Gaudi3 will launch next year and will compete with rival chips from Nvidia and AMD that power big and power-hungry AI models.

The most prominent AI models, like OpenAI’s ChatGPT, run on Nvidia GPUs in the cloud. It’s one reason Nvidia stock has been up nearly 230% year to date while Intel shares have risen 68%. And it’s why companies like AMD and, now, Intel have announced chips that they hope will attract AI companies away from Nvidia’s dominant position in the market.

Shares of Intel were up 1% on Thursday.

While the company was light on details, Gaudi3 will compete with Nvidia’s H100, the main choice among companies that build huge farms of the chips to power AI applications, and AMD’s forthcoming MI300X, when it starts shipping to customers in 2024.

Intel has been building Gaudi chips since 2019, when it bought a chip developer called Habana Labs.

An Intel Gaudi2 AI accelerator during the Intel AI Everywhere launch event in New York, US, on Thursday, Dec. 14, 2023. Intel Corp. announced new chips for PCs and data centers that the company hopes will give it a bigger slice of the booming market for artificial intelligence hardware. Photographer: Victor J. Blue/Bloomberg via Getty Images


Bloomberg | Getty Images

“We’ve been seeing the excitement with generative AI, the star of the show for 2023,” Intel CEO Pat Gelsinger said at a launch event in New York where he announced Gaudi3 alongside other chips focused on AI applications.

“We think the AI PC will be the star of the show for the upcoming year,” Gelsinger added. And that’s where Intel’s new Core Ultra processors, also announced on Thursday, will come into play.


Intel Core Ultra and new Xeon chips

Intel also announced Core Ultra chips, designed for Windows laptops and PCs, and new fifth-generation Xeon server chips. Both include a specialized AI part called an NPU that can be used to run AI programs faster.

It’s the latest sign that traditional processor makers, including Intel rivals AMD and Qualcomm, are reorienting their product lines around AI and alerting investors to the possibility that AI models will lead to surging demand for their chips.

The Core Ultra won’t provide enough power to run a chatbot like ChatGPT without an internet connection, but it can handle smaller AI tasks. For example, Intel said, Zoom runs its background-blurring feature on the chips. They’re built using the company’s 7-nanometer process, which is more power efficient than the processes used for earlier chips.

But, importantly, the 7-nanometer chips show that Gelsinger’s strategy of catching up with Taiwan Semiconductor Manufacturing Co. in chip manufacturing prowess by 2026 hasn’t fallen behind schedule.

Core Ultra chips also include more powerful gaming capabilities, and the added graphics muscle can help programs like Adobe Premiere run more than 40% faster. The lineup launched in laptops that hit stores on Thursday.

Finally, Intel’s fifth-generation Xeon processors power servers deployed by large organizations like cloud companies. Intel didn’t share pricing, but the previous Xeon cost thousands of dollars. Intel’s Xeon processors are often paired with Nvidia GPUs in the systems that are used for training and deploying generative AI. In some systems, eight GPUs are paired to one or two Xeon CPUs.

Intel said the latest Xeon processor will be particularly good for inferencing, or the process of deploying an AI model, which is less power hungry than the training process.
 

bnew

Ola Founder’s AI Startup Launches Indian Large Language Model

  • Krutrim’s AI model can generate text in 10 Indian languages
  • Company will also develop data centers, super computers for AI

Bhavish Aggarwal

Bhavish Aggarwal
Photographer: Aparna Jayakumar/Bloomberg


By Saritha Rai
December 15, 2023 at 8:13 AM EST

Krutrim, an artificial intelligence startup founded by serial entrepreneur Bhavish Aggarwal, has launched India’s first multilingual large language model, which can generate text in 10 Indian languages.

“Our LLM is voice-enabled, and able to understand several languages and even a mix of languages, such as Hinglish — Hindi and English,” Aggarwal said at the Friday event in Bangalore, hosted on the campus of Ola Group, where he’s chief executive officer. “It’s uniquely Indian.”

Krutrim, which translates to “artificial” in Sanskrit, is also developing data centers and ultimately aims to create servers and supercomputers for the AI ecosystem. The chatbot will be available in a beta version in January. Prototypes of the servers will be out in mid-2024 and production will start by the end of 2025, the startup said in a statement.

A clutch of Indian startups and academic groups are racing to build large language models in Indian languages, so-called Indic LLMs, after the launch of OpenAI’s ChatGPT a year ago. Countries are hoping to build their own competing AI systems, rather than relying on technology from the US or China. In Europe, investors are pouring cash into France’s Mistral AI, now valued at $2 billion after being founded earlier this year. The United Arab Emirates touts its Falcon model, which is backed by an Abu Dhabi government research institute.

India, with 1.4 billion people, is focusing on building smaller, more cost-efficient AI systems. Generative AI startup Sarvam, which built its system using available open-source models, launched OpenHathi, its first open-source Hindi LLM, earlier this week. The announcement came days after it had raised $41 million in an investment from Lightspeed Venture Partners, billionaire Vinod Khosla and others.

At the event, Aggarwal prompted the open-source Krutrim model to welcome guests in English, write a poem in Tamil, compose an ode to monsoons in Bengali and produce software code. “The AI models known around the world are trained largely in English,” he said. “They cannot capture our culture, language and ethos.”

The company is also focused on developing chips, including a “multiple chiplet” strategy that Aggarwal said will cut costs and simplify data center design.

Krutrim — which is widely deployed within the Ola Group ride-hailing company to aid voice chat, sales calls and customer support emails — also plans to roll out an enterprise model called Krutrim Pro next quarter. Aggarwal said he uses the software to write performance reviews for his team and compose job descriptions for hiring.
 

bnew

Intel CEO attacks Nvidia on AI: 'The entire industry is motivated to eliminate the CUDA market'

News
By Andrew E. Freedman
published 1 day ago

Reframing the AI framework, away from CUDA and toward more open standards.

Pat Gelsinger

(Image credit: Tom's Hardware)


Intel CEO Pat Gelsinger came out swinging at Nvidia's CUDA technology, claiming that inference technology will be more important than training for AI as he launched Intel Core Ultra and 5th Gen Xeon datacenter chips in an event here in New York City. Taking questions at the NASDAQ, Gelsinger suggested that Nvidia’s CUDA dominance in training wouldn't last forever.

"You know, the entire industry is motivated to eliminate the CUDA market," Gelsinger said. He cited examples such as MLIR, Google, and OpenAI, suggesting that they are moving to a "Pythonic programming layer" to make AI training more open.

"We think of the CUDA moat as shallow and small," Gelsinger went on. "Because the industry is motivated to bring a broader set of technologies for broad training, innovation, data science, et cetera."

But Intel isn't relying just on training. Instead, it thinks inference is the way to go.

"As inferencing occurs, hey, once you've trained the model… There is no CUDA dependency," Gelsinger continued. "It's all about, can you run that model well?" He suggested that with Gaudi 3, shown on stage for the first time, that Intel will be up to the challenge, and will be able to do it as well with Xeon and edge PCs. Not that Intel won't compete in training, but "fundamentally, the inference market is where the game will be at," Gelsinger said.


(Image credit: Tom's Hardware)


He also took the opportunity to push OpenVINO, the standard that Intel has gathered around for its AI efforts, and predicted a world of mixed computing, some that occurs in the cloud, and others that happen on your PC.

Sandra Rivera, executive vice president and general manager of the Data Center and AI Group at Intel, added that Intel's scale from the data center to the PC may make it a partner of choice, as it can produce at volume.

"We're going to compete three ways for 100% of the datacenter AI TAM." Gelsinger said, tacking onto Rivera's comment. "With our leadership CEOs, leadership accelerators, and as a foundry. Every one of those internal opportunities is available to us: The TPUs, the inferentias, the trainiums, et cetera. We're going to pursue all of those. And we're going to pursue every commercial opportunity as well, with NVIDIA, with AMD, et cetera. We're going to be a foundry player."

It's a bold strategy, and Gelsinger appeared confident as he led his team through presentations today. Can he truly take on CUDA? Only time will tell as applications for the chips Intel launched today — and that his competitors are also working on — become more widespread.
 

bnew

Ola’s Bhavish Aggarwal Unveils ‘Krutrim’ LLM for AI Development in Indian Languages

by Kumar Gandharv


Published: December 15, 2023 at 7:25 am Updated: December 15, 2023 at 7:25 am

Edited and fact-checked by Victor Dey: December 15, 2023 at 7:25 am

IN BRIEF

Ola’s founder Bhavish Aggarwal introduced Krutrim – an LLM that can write stories, poems, and more in multiple Indian languages.



Indian artificial intelligence company Krutrim Si Designs introduced Krutrim — a large language model (LLM) that can write stories, poems and more in multiple Indian languages.



The company, Krutrim Si Designs, was launched in April 2023 and is spearheaded by Bhavish Aggarwal, founder and chief executive officer of Indian ride-hailing company Ola.

Aggarwal introduced Krutrim, which means “artificial” in Sanskrit, in two sizes during a live-streamed hybrid event on YouTube.

The base model, named Krutrim, was trained on two trillion tokens of data. It can understand 20 Indian languages and respond in 10 of them, including Hindi, Marathi, Telugu and Kannada.

Additionally, Krutrim Pro, a multimodal AI model, is slated for release in 2024 and promises more sophisticated problem-solving and task-execution capabilities.

According to the announcement, the Krutrim AI model is designed to cater to the unique needs of the Indian context and represents a leap towards technological self-sufficiency and cultural assimilation.

“Today, all AI models, called LLMs, are trained largely in English, but language is not just text. Language is also the vehicle for cultural values, context and ethos, and current AI models just can’t capture India’s culture, knowledge, and aspirations given our multicultural, multilingual heritage,” said Aggarwal.

He further added that the model’s mission is to shape a culturally expressive India, steering towards an AI-first economy, and challenging global paradigms.

Ravi Jain, Ola’s Chief Marketing Officer, shed light on the model’s architecture, stating that Krutrim boasts numerous innovations and represents the largest data set used in training. Positioned as a model that strikes the right balance between performance and price, it is expected to power a myriad of database applications.


We're ready! India's own AI, the journey begins today! Join us here. https://t.co/eyBeiOTKgB



— Bhavish Aggarwal (@bhash) December 15, 2023

Indian Players Race for LLMs in Local Language



The Krutrim team claims that the AI model surpasses GPT-4 in Indic language support. The model’s ability to understand and generate output in 20 Indian languages, outperforming its global counterparts, positions it as a frontrunner in the AI race.

To make Krutrim accessible to the public, Ola announced the launch of the Krutrim AI model website. Interested users can register their interest using their phone numbers, with the promise of being informed as soon as the chatbot becomes available.

Aggarwal had earlier emphasized that Krutrim would be an AI enterprise tailored for Indian customers, leveraging the rich data resources available within the country.

Recently, another Indian Generative AI startup Sarvam AI released OpenHathi-Hi-v0.1, the first Hindi large language model (LLM) in the OpenHathi series, a week after raising $41 million in its Series A funding round led by Lightspeed Ventures.

According to Sarvam AI, the goal of the OpenHathi series is to contribute to the development of open models and datasets that promote innovation in Indian-language AI. The company is collaborating with academic partners at AI4Bharat, who have provided language resources and benchmarks for the initiative.

India also unveiled BharatGPT, an LLM-based solution created in collaboration with the Indian conversational AI startup, CoRover.ai. Positioned as a response to current generative AI and LLMs, BharatGPT aims to address a significant void in the market by offering an indigenous solution. It is available in more than 12 Indian languages and supports interactions across video, voice, and text platforms.

It will be interesting to see how such local Indian LLMs disrupt industries, and whether they spark greater interest in this space worldwide.
 

bnew

Grimes is working on an interactive AI toy for kids. Meet Grok.

Grok is the first in a line of toys from start-up Curio intended to allow kids to converse with an AI chatbot


By Taylor Lorenz

Updated December 14, 2023 at 4:26 p.m. EST|Published December 14, 2023 at 7:00 a.m. EST


Grok is an AI-powered plush toy in the shape of a rocket that can fully interact and converse with your child. (Curio)


For decades, movies, TV shows and books have imagined a world where human beings form deep personal bonds with creations powered by artificial intelligence. Movies such as “Her” and “The Creator” present a future where there is almost no distinction between human-to-human friendships and human-to-AI friendships.

A glimpse toward this future is beginning to emerge in products like Grok, an AI-powered plush toy in the shape of a rocket that can converse with your child. Grok is the first product from a Silicon Valley start-up called Curio that is leveraging OpenAI’s technology on a line of toys Curio’s founders say will be capable of long-running, fully interactive conversation, allowing a child to view it almost as a peer or friend.

Canadian musician Claire Boucher, known as Grimes and the mother of three of Elon Musk’s children, is an investor in and adviser to the company, and she will provide the toy’s voice.

“Every [change] in technology unlocks new forms of entertainment,” said Sam Eaton, president and chief toy maker at Curio, who was previously an engineer at Roblox, the gaming platform. “It’s a different level of immersion for playtime.”

“It’s a new hardware medium,” said Misha Sallee, Curio’s chief executive.

Academics and investors said that it’s too soon to assess the likely impact of such toys, but the AI toy market is expected to explode in coming years. A recent report by Contrive Datum Insights, a market research firm, predicted that the AI toy market will be worth $35.11 billion by 2030, and several AI-enabled robots for children were on display at New York’s Jacob K. Javits Convention Center during its annual Toy Fair in September.

“Parents always want to buy their kids things that will help them learn and grow, which will make smart AI toys even more popular,” Contrive Datum Insights said in a LinkedIn post about its research.

Given OpenAI’s penchant for providing incorrect or disconcerting information, an AI-enabled toy’s utility as an educational device remains unproven. For now, however, Curio’s founders aren’t pitching it as an educational toy, but rather as an antidote to children’s reliance on screens for entertainment. “I really feel like this is also the first step towards also sort of reducing screen time as much as humanly possible,” Grimes said in a written interview with the founders and the AI theorist Roon.

Eaton said he thinks a lot about the screen-addled world his own children are growing up in, where so much of kids’ entertainment is centered around passive consumption. That entertainment is only getting more addictive, he said. “Finally,” Eaton said, “technology is allowing us to move back into our own reality instead of descending into a digital reality.”

“Our big vision is increasing imagination levels,” Sallee said. “Replacing more-toxic forms of technology is a North Star of sorts for us.”

As the product evolves, Curio’s founders want to give parents even more control over Grok’s conversations. “If this is going to be a guest in the house, we want parents to have influence over what the toy believes,” Eaton said. “Maybe your family is vegetarian or religious and you want to influence [conversation around those topics]. Or maybe your kid has certain interests, like dinosaurs, and you want to put them in there.”


From left, Curio CEO Misha Sallee, Grimes, and Curio’s chief toy maker, Sam Eaton. (Curio)


Unlike previous talking toys such as Teddy Ruxpin, Talk to Me Barbie or Furbies, which simply play prerecorded lines, Eaton said his plan for Curio’s toys is for them to “have a degree of some kind of pseudo consciousness.”

Eaton said Grok will come with an app for parents that will provide full transcripts of any conversation the child has with the toy. Parents also will be able to make certain words or topics off limits. The toy won’t collect or store any voice data, and storage of the transcripts will comply with the federal Children’s Online Privacy Protection Act, the company says. Parents can request any transcripts be deleted at any time.

“I really think it’s important to let parents know that they do have full control over it and we’re not hiding anything,” Eaton said.

Sallee said that the toy was designed with Grimes’s children in mind and that they have a friendly relationship with it. “The toy was designed for X and the other kids,” she said, referring to the son of Grimes and Musk, X Æ A-Xii, “but X primarily because he’s of age where he can actually talk to the toy and it can talk back more effectively.”

But the toy has no relationship with Musk’s AI start-up, which also is called Grok. Curio holds the trademark on the name, and the two AI products are totally unaffiliated, Curio says. The name Grok was devised by Grimes and the Curio team, who said the word was a shortening of the word Grocket, which was coined because Grimes’ children are exposed to a lot of rockets through their father’s ownership of SpaceX.

Grok is available for preorder now for $99. Those who order by Dec. 17 will receive a golden ticket in the mail before Christmas. The product itself will ship early next year.

The current version requires a WiFi connection, though Eaton’s hope is that one day the technology will advance so that the toy itself can contain all the hardware and software needed to be interactive; incorporating such technology now would push its price to several thousand dollars, making it inaccessible to most parents.

Curio also envisions Grok as an assistive technology for parenting. For instance, parents may be able to create a prompt for bedtime and Grok might go into bedtime mode, in which the toy redirects all conversations with the child toward bedtime, or Grok gets sleepy himself.

“We’ve heard some people’s concerns like, ‘Could this replace parents?’” Eaton said. “But it’s the opposite. Kids are already zoned out on YouTube or tablets. This toy, you’re talking to it and it’s talking back. You’re imagining and it’s challenging you.”

The Curio founders said they imagine a future where AI toys will be able to interact and play off each other, like a real-life Toy Story.

Curio was incorporated earlier this year and now has four full-time employees and several contractors across marketing and manufacturing, in addition to Curio’s two founders.

The company has attracted seed-stage investment from Grimes as well as technologists including Daniel Gross and Nat Friedman, the former CEO of GitHub. Eaton and Sallee connected with Grimes through a mutual friend, after she expressed interest in the idea of a sentient toy on X.

They also hope to open the technology to more collaborators and partners. “The voice box is the technology, it’s separate from the skin,” Eaton said. “So people could create any kind of plush skin for it. The thinking is we’d open it up and let others build on it to make their own characters.”

“The way Nintendo brought a bit more creativity and art to video games and Pixar did to animation, that’s what we want to do with toys,” he added.
 

bnew

Agility is using large language models to communicate with its humanoid robots

Brian Heater@bheater / 2:46 PM EST•December 14, 2023

An Amazon worker alongside Agility Robotics' Digit robot

Image Credits: Amazon

I’ve spent much of the past year discussing generative AI and large language models with robotics experts. It’s become increasingly clear that these sorts of technologies are primed to revolutionize the way robots communicate, learn, look and are programmed.

Accordingly, a number of top universities, research labs and companies are exploring the best methods for leveraging these artificial intelligence platforms. Well-funded Oregon-based startup Agility has been playing around with the tech for a while now using its bipedal robot, Digit.

Today, the company is showcasing some of that work in a short video shared through its social channels.



“[W]e were curious to see what can be achieved by integrating this technology into Digit, a physical embodiment of artificial intelligence,” the company notes. “We created a demo space with a series of numbered towers of several heights, as well as three boxes with multiple defining characteristics. Digit was given information about this environment, but was not given any specific information about its tasks, just natural language commands of varying complexity to see if it can execute them.”

In the video example, Digit is told to pick up a box the color of “Darth Vader’s lightsaber” and move it to the tallest tower. The process isn’t instantaneous, but rather slow and deliberate, as one might expect from an early-stage demo. The robot does, however, execute the task as described.
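
For a sense of how such a setup can work, here is a minimal Python sketch of the pattern the demo describes: the model is given an environment description plus a natural-language command and returns a structured action. All names are hypothetical, Agility has not published its implementation, and the LLM backend is left as a stub.

```python
import json

# Hypothetical environment description, mirroring the demo: numbered towers of
# several heights and boxes with distinguishing colors.
ENVIRONMENT = """
Towers: tower_1 (0.5 m), tower_2 (1.2 m), tower_3 (0.9 m)
Boxes: box_a (red), box_b (blue), box_c (green)
"""

SYSTEM_PROMPT = (
    "You control a bipedal robot. The environment is:\n"
    f"{ENVIRONMENT}\n"
    'Given a command, reply ONLY with JSON: {"action": "move_box", "box": "<id>", "target": "<tower id>"}.'
)

def call_llm(system: str, user: str) -> str:
    """Placeholder for whatever LLM backend is used (cloud API or on-device model)."""
    raise NotImplementedError

def plan(command: str) -> dict:
    # The LLM resolves indirect references ("the color of Darth Vader's lightsaber" -> red)
    # and grounds them in the environment description it was given.
    return json.loads(call_llm(SYSTEM_PROMPT, command))

# plan("Move the box the color of Darth Vader's lightsaber to the tallest tower")
# -> {"action": "move_box", "box": "box_a", "target": "tower_2"}
```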

Agility notes, “Our innovation team developed this interactive demo to show how LLMs could make our robots more versatile and faster to deploy. The demo enables people to talk to Digit in natural language and ask it to do tasks, giving a glimpse at the future.”

Natural language communication has been a key potential application for this technology, along with the ability to program systems via low- and no-code technologies.

During my Disrupt panel, Gill Pratt described how the Toyota Research Institute is using generative AI to accelerate robotic learning:

We have figured out how to do something, which is use modern generative AI techniques that enable human demonstration of both position and force to essentially teach a robot from just a handful of examples. The code is not changed at all. What this is based on is something called diffusion policy. It’s work that we did in collaboration with Columbia and MIT. We’ve taught 60 different skills so far.

MIT CSAIL’s Daniela Rus also recently told me, “It turns out that generative AI can be quite powerful for solving even motion planning problems. You can get much faster solutions and much more fluid and human-like solutions for control than with model predictive solutions. I think that’s very powerful, because the robots of the future will be much less roboticized. They will be much more fluid and human-like in their motions.”

The potential applications here are broad and exciting — and Digit, as an advanced commercially available robotic system that is being piloted at Amazon fulfillment centers and other real-world locations, seems like a prime candidate. If robots are going to work alongside humans, they’ll need to learn to listen to them, as well.
 

bnew
Why next-token prediction is enough for AGI - Ilya Sutskever (OpenAI Chief Scientist)
 

bnew


Data poisoning: how artists are sabotaging AI to take revenge on image generators


Published: December 17, 2023 2:17pm EST

Authors
T.J. Thomson
Senior Lecturer in Visual Communication & Digital Media, RMIT University

Daniel Angus
Professor of Digital Communication, Queensland University of Technology


Imagine this. You need an image of a balloon for a work presentation and turn to a text-to-image generator, like Midjourney or DALL-E, to create a suitable image.

You enter the prompt: “red balloon against a blue sky” but the generator returns an image of an egg instead. You try again but this time, the generator shows an image of a watermelon.

What’s going on?

The generator you’re using may have been “poisoned”.



What is ‘data poisoning’?


Text-to-image generators work by being trained on large datasets that include millions or billions of images. Some generators, like those offered by Adobe or Getty, are only trained with images the generator’s maker owns or has a licence to use.

But other generators have been trained by indiscriminately scraping online images, many of which may be under copyright. This has led to a slew of copyright infringement cases where artists have accused big tech companies of stealing and profiting from their work.

This is also where the idea of “poison” comes in. Researchers who want to empower individual artists have recently created a tool named “Nightshade” to fight back against unauthorised image scraping.

The tool works by subtly altering an image’s pixels in a way that wreaks havoc on computer vision systems but leaves the image looking unchanged to the human eye.

If an organisation then scrapes one of these images to train a future AI model, its data pool becomes “poisoned”. This can result in the algorithm mistakenly learning to classify an image as something a human would visually know to be untrue. As a result, the generator can start returning unpredictable and unintended results.
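
As an illustration of the general idea (not Nightshade’s actual algorithm, which is more sophisticated and prompt-specific), the sketch below applies a tiny, bounded perturbation that pulls a stock image classifier toward a wrong label while staying visually imperceptible; it assumes PyTorch and a pretrained torchvision model.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Any pretrained classifier stands in for the "computer vision" side here.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

def poison(image: torch.Tensor, wrong_label: int, eps: float = 2 / 255) -> torch.Tensor:
    """Nudge pixels toward a wrong class while staying within +/- eps of the original,
    so the change is effectively invisible to a human viewer."""
    x = image.clone().requires_grad_(True)  # float tensor, shape (3, H, W), values in [0, 1]
    loss = F.cross_entropy(model(x.unsqueeze(0)), torch.tensor([wrong_label]))
    loss.backward()
    # Step *against* the gradient so the model is pulled toward the wrong label.
    return (x - eps * x.grad.sign()).clamp(0, 1).detach()
```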

Symptoms of poisoning

As in our earlier example, a balloon might become an egg. A request for an image in the style of Monet might instead return an image in the style of Picasso.

Some of the issues with earlier AI models, such as trouble accurately rendering hands, for example, could return. The models could also introduce other odd and illogical features to images – think six-legged dogs or deformed couches.

The higher the number of “poisoned” images in the training data, the greater the disruption. Because of how generative AI works, the damage from “poisoned” images also affects related prompt keywords.







For example, if a “poisoned” image of a Ferrari is used in training data, prompt results for other car brands and for other related terms, such as vehicle and automobile, can also be affected.

Nightshade’s developer hopes the tool will make big tech companies more respectful of copyright, but it’s also possible users could abuse the tool and intentionally upload “poisoned” images to generators to try and disrupt their services.


Is there an antidote?


In response, stakeholders have proposed a range of technological and human solutions. The most obvious is paying greater attention to where input data are coming from and how they can be used. Doing so would result in less indiscriminate data harvesting.

This approach does challenge a common belief among computer scientists: that data found online can be used for any purpose they see fit.

Other technological fixes also include the use of “ensemble modeling” where different models are trained on many different subsets of data and compared to locate specific outliers. This approach can be used not only for training but also to detect and discard suspected “poisoned” images.
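
One simple way to realise that idea, sketched below with illustrative code rather than any vendor’s actual pipeline, is to train a model on each fold of the data and flag examples whose labels disagree with predictions from models trained on the other folds; such outliers are candidates for closer inspection as possibly poisoned.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import KFold

def flag_suspects(X: np.ndarray, y: np.ndarray, n_splits: int = 5) -> np.ndarray:
    """Return indices of examples whose labels disagree with models trained on the other folds."""
    suspects = []
    for train_idx, test_idx in KFold(n_splits, shuffle=True, random_state=0).split(X):
        model = RandomForestClassifier(n_estimators=100, random_state=0)
        model.fit(X[train_idx], y[train_idx])
        preds = model.predict(X[test_idx])
        # Poisoned examples tend to behave like outliers: the label says one thing,
        # models trained without them predict another.
        suspects.extend(test_idx[preds != y[test_idx]])
    return np.array(suspects, dtype=int)
```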

Audits are another option. One audit approach involves developing a “test battery” – a small, highly curated, and well-labelled dataset – using “hold-out” data that are never used for training. This dataset can then be used to examine the model’s accuracy.

Strategies against technology

So-called “adversarial approaches” (those that degrade, deny, deceive, or manipulate AI systems), including data poisoning, are nothing new. They have also historically included using make-up and costumes to circumvent facial recognition systems.

Human rights activists, for example, have been concerned for some time about the indiscriminate use of machine vision in wider society. This concern is particularly acute when it comes to facial recognition.

Systems like Clearview AI, which hosts a massive searchable database of faces scraped from the internet, are used by law enforcement and government agencies worldwide. In 2021, Australia’s government determined Clearview AI breached the privacy of Australians.







In response to facial recognition systems being used to profile specific individuals, including legitimate protesters, artists devised adversarial make-up patterns of jagged lines and asymmetric curves that prevent surveillance systems from accurately identifying them.

There is a clear connection between these cases and the issue of data poisoning, as both relate to larger questions around technological governance.

Many technology vendors will consider data poisoning a pesky issue to be fixed with technological solutions. However, it may be better to see data poisoning as an innovative solution to an intrusion on the fundamental moral rights of artists and users.


 

bnew

Nvidia Staffers Warned CEO of Threat AI Would Pose to Minorities

As the chipmaker’s AI technology has become ubiquitous, it’s working to make it more inclusive

Nvidia Chief Executive Officer Jensen Huang met with employees in 2020 over risks posed by artificial intelligence.

Photographer: I-Hwa Cheng/Bloomberg

By Sinduja Rangarajan and Ian King

December 18, 2023 at 6:00 AM EST

Masheika Allgood and Alexander Tsado left their 2020 meeting with Nvidia Corp. Chief Executive Officer Jensen Huang feeling frustrated.

The pair, both former presidents of the company’s Black employees group, had spent a year working with colleagues from across the company on a presentation meant to warn Huang of the potential dangers that artificial intelligence technology posed, especially to minorities.

The 22-slide deck and other documents, reviewed by Bloomberg News, pointed to Nvidia’s growing role in shaping the future of AI — saying its chips were making AI ubiquitous — and warned that increased regulatory scrutiny was inevitable. The discussion included instances of bias in facial-recognition technologies used by the industry to power self-driving cars. Their aim, the pair told Bloomberg, was to find a way to confront the potentially perilous unintended consequences of AI head-on — ramifications that would likely be first felt by marginalized communities.

According to Allgood and Tsado, Huang did most of the talking during the meeting. They didn’t feel he really listened to them and, more importantly, didn’t get a sense that Nvidia would prioritize work on addressing potential bias in AI technology that could put underrepresented groups at risk.

Tsado, who was working as a product marketing manager, told Bloomberg News that he wanted Huang to understand that the issue needed to be tackled immediately — that the CEO might have the luxury of waiting, but “I am a member of the underserved communities, and so there’s nothing more important to me than this. We’re building these tools and I’m looking at them and I’m thinking, this is not going to work for me because I’m Black.”


Masheika Allgood and Alexander Tsado. Photographer: David Odisho/Bloomberg

Both Allgood and Tsado quit the company shortly afterwards. Allgood’s decision to leave her role as a software product manager, she said, was because Nvidia “wasn’t willing to lead in an area that was very important to me.” In a LinkedIn post, she called the meeting “the single most devastating 45 minutes of my professional life.”

While Allgood and Tsado have departed, the concerns they raised about making AI safe and inclusive still hang over the company, and the AI industry at large. The chipmaker has one of the poorest records among big tech companies when it comes to Black and Hispanic representation in its workforce, and one of its generative AI products came under criticism for its failure to account for people of color.

The matters raised by Allgood and Tsado, meantime, also have resonated. Though Nvidia declined to comment on the specifics of the meeting, the company said it “continues to devote tremendous resources to ensuring that AI benefits everyone.”

“Achieving safe and trustworthy AI is a goal we’re working towards with the community,” Nvidia said in a statement. “That will be a long journey involving many discussions.”

One topic of the meeting isn’t in dispute. Nvidia has become absolutely central to the explosion in deployment of artificial intelligence systems. Sales of its chips, computers and associated software have taken off, sending its shares on an unprecedented rally. It’s now the world’s only chipmaker with a trillion-dollar market value.

What was once a niche form of computing is making its way into everyday life in the form of advanced chatbots, self-driving cars and image recognition. And AI models — which analyze existing troves of data to make predictions aimed at replicating human intelligence — are under development to be used in everything from drug discovery and industrial design to the advertising, military and security industries. With that proliferation, the concern about the risks it poses has only grown. Models are usually trained on massive datasets created by gathering information and visuals from across the internet.

As AI evolves into a technology that encroaches deeper into daily life, some Silicon Valley workers aren’t embracing it with the same level of trust that they’ve shown with other advances. Huang and his peers are likely to keep facing calls from workers who feel they need to be heard.

And while Silicon Valley figures such as Elon Musk have expressed fears about AI’s potential threat to human existence, some underrepresented minorities say they have a far more immediate set of problems. Without being involved in the creation of the software and services, they worry that self-driving cars might not stop for them, or that security cameras will misidentify them.

“The whole point of bringing diversity into the workplace is that we are supposed to bring our voices and help companies build tools that are better suited for all communities,’’ said Allgood. During the meeting, Allgood said she raised concerns that biased facial-recognition technologies used to power self-driving cars could pose greater threats to minorities. Huang replied that the company would limit risk by testing vehicles on the highway, rather than city streets, she said.


Alexander Tsado. Photographer: David Odisho/Bloomberg

The lack of diversity and its potential impact is particularly relevant at Nvidia. Only one out of a sample of 88 S&P 100 companies ranked lower than Nvidia based on their percentages of Black and Hispanic employees in 2021, according to data compiled by Bloomberg from the US Equal Employment Opportunity Commission. Of the five lowest-ranked companies for Black employees, four are chipmakers: Advanced Micro Devices Inc., Broadcom Inc., Qualcomm Inc. and Nvidia. Even by tech standards — the industry has long been criticized for its lack of diversity — the numbers are low.

Read More: Corporate America Promised to Hire a Lot More People of Color. It Actually Did.

During the meeting, Allgood recalled Huang saying that the diversity of the company would ensure that its AI products were ethical. At that time, only 1% of Nvidia employees were Black — a number that hadn’t changed from 2016 until then, according to data compiled by Bloomberg. That compared with 5% at both Intel Corp. and Microsoft Corp., 4% at Meta Platforms Inc. and 14% for the Black share of the US population overall in 2020, the data showed. People with knowledge of the meeting who asked not to be identified discussing its contents said Huang meant diversity of thought, rather than specifically race.

According to Nvidia, a lot has happened since Allgood and Tsado met with the CEO. The company says it has done substantial work to make its AI-related products fair and safe for everyone. AI models that it supplies to customers come with warning labels, and it vets the underlying datasets to remove bias. It also seeks to ensure that AI, once deployed, remains focused on its intended purpose.

In emails dated March 2020 reviewed by Bloomberg, Huang did give the go-ahead for work to start on some of Allgood’s proposals, but by that time she’d already handed in her notice.

Not long after Allgood and Tsado left Nvidia, the chipmaker hired Nikki Pope to lead its in-house Trustworthy AI project. Co-author of a book on wrongful convictions and incarcerations, Pope is head of what’s now called Nvidia’s AI & Legal Ethics program.

Rivals Alphabet Inc.’s Google and Microsoft had already set up similar AI ethics teams a few years earlier. Google publicly announced its “AI principles” in 2018 and has given updates on its progress. Microsoft had a team of 30 engineers, researchers and philosophers on its AI ethics team in 2020, some of whom it laid off this year.

Pope, who’s Black, said she doesn’t accept the assertion that minorities have to be involved directly to be able to produce unbiased models. Nvidia examines datasets that software is trained on, she said, and makes sure that they’re inclusive enough.

“I’m comfortable that the models that we provide for our customers to use and modify have been tested, that the groups who are going to be interacting with those models have been represented,” Pope said in an interview.

The company has created an open-source platform, called NeMo Guardrails, to help chatbots filter out unwanted content and stay on topic. Nvidia now releases “model cards” with its AI models, which provide more details on what a model does and how it’s made, as well as its intended use and limitations.
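
For context, here is a rough sketch of how NeMo Guardrails is typically wired up, based on the project’s public examples; the exact Colang and YAML syntax varies between versions, so treat this as an outline rather than Nvidia’s recommended configuration.

```python
from nemoguardrails import LLMRails, RailsConfig

# Minimal, illustrative configuration: which model to guard and one topical rail.
yaml_config = """
models:
  - type: main
    engine: openai
    model: gpt-3.5-turbo-instruct
"""

colang_rules = """
define user ask about politics
  "What do you think about the election?"

define bot refuse politics
  "I can only help with questions about our product."

define flow politics
  user ask about politics
  bot refuse politics
"""

config = RailsConfig.from_content(yaml_content=yaml_config, colang_content=colang_rules)
rails = LLMRails(config)

# Turns that match the "politics" rail are deflected; everything else passes
# through to the underlying model as usual.
reply = rails.generate(messages=[{"role": "user", "content": "What do you think about the election?"}])
print(reply["content"])
```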

Nvidia also collaborates with internal affinity groups to diversify its datasets and test the models for biases before release. Pope said datasets for self-driving cars are now trained on images that include parents with strollers, people in wheelchairs and darker-skinned people.
 

bnew

Nvidia Ranks Close to the Bottom in Diverse Hiring

[Chart: Black, Hispanic and other races as a percentage of the US workforce; the company and most of its chipmaker peers lag the rest of the technology industry. Source: 2021 EEO-1 filings compiled by Bloomberg. Note: Bloomberg uses “other races” to refer to employees who self-report as “Native Hawaiian or Other Pacific Islander,” “American Indian or Alaska Native,” or “two or more races.”]

Pope and colleague Liz Archibald, who is director of corporate communications at Nvidia and also Black, said that they once had a “tough meeting” with Huang over AI transparency and safety. But they felt like his questions brought more rigor to their work.

“I think his end goal was to pressure-test our arguments and probe the logic to help figure out how he could make it even better for the company as a whole,” Archibald said in an email.

Some researchers say that minorities are so underrepresented in tech, and particularly in AI, that without their input, algorithms are likely to have blind spots. A paper from New York University’s AI Now Institute has linked a lack of representation in the AI workforce to bias in models, calling it a “diversity disaster.”

In 2020, researchers from Duke University set out to create software that would convert blurry pictures into high-resolution images, using a generative AI model from Nvidia called StyleGAN, which was developed to produce fake but hyperreal-looking human faces and trained on a dataset of images from photo site Flickr. When users played around with the tool, they found it struggled with low-resolution photos of people of color — including former President Barack Obama and Congresswoman Alexandria Ocasio-Cortez — inadvertently generating images of faces with lighter skin tones and eye colors. The researchers later said the bias likely came out of Nvidia’s model and updated their software.

Nvidia mentions in its code archives that its version of the dataset was collected from Flickr and inherits “all the biases of that website.” In 2022, it added that the dataset should not be used for “development or improvement of facial recognition technologies.”

The model that was criticized has been superseded by a new one, according to Pope.

Nvidia joins a list of large companies where some minority employees have expressed concern that the new technology carries dangers, particularly for people of color. Timnit Gebru, an AI ethics researcher, left Google after the company wanted her to retract her paper that warned of the dangers of training AI models (Gebru said Google fired her; the company said she resigned). She has said that any methodology that uses datasets “too large to document” is inherently risky, as reported by the MIT Technology Review.

Gebru and Joy Buolamwini, founder of the Algorithmic Justice League, published a paper called “Gender Shades” that showed how facial recognition technologies make errors at higher rates when identifying women and people of color. A growing number of studies now support their finding that the underlying datasets used to power AI models are biased and capable of harming minorities. International Business Machines Corp., Microsoft and Amazon.com Inc. have stopped selling facial recognition technologies to police departments.

Read More: Humans Are Biased. Generative AI Is Even Worse

“If you look within the history of the tech industry, it’s not a beacon for being reflective of serious commitment to diversity,” said Sarah Myers West, the managing director of AI Now Institute and a co-author of the paper on lack of diversity in the AI workforce. The industry has a long history of not taking minorities and their concerns seriously, she said.

Nvidia’s head of human resources, Shelly Cerio, told Bloomberg that while the company was functioning like a startup — and worrying about surviving — it hired primarily to meet its immediate skills needs: as many engineers with higher degrees as it could find. Now that it’s larger, Nvidia has made diversity in its recruitment more of a priority.

“Have we made progress? Yes,” she said. “Have we made enough progress? Absolutely not.”


Masheika Allgood. Photographer: David Odisho/Bloomberg

The company improved its hiring of Black employees after 2020. Black representation grew from 1.1% in 2020 to 2.5% in 2021, the most recent year that data is available. Asians are the largest ethnic group at the company, followed by White employees.

Pope said all of the company’s efforts don’t “guarantee or eliminate” bias, but do provide a diversified dataset that can help address it. She said that in a fast-paced company that has released hundreds of models, scaling up her processes to address safety is one of the challenges of her role.

It also will take years to tell whether this work will be enough to keep AI systems safe in the real world. Self-driving cars, for example, are still rare.

A few weeks before Allgood left the company, she wrote one last email to Huang reflecting on when she had worked as a teacher in her previous career. She wrote that when she took her students on field trips, she relied on parents and volunteers to help her manage them — an acknowledgement that no one, no matter how brilliant, could handle a group of kids in the wild.

“AI has permanently moved into the field trip stage,” read the email. “You need colleagues and a structure to manage the chaos.”

— With assistance from Jeff Green
 

bnew

OpenAI Says Board Can Overrule CEO on Safety of New AI Releases

The arrangement was mentioned in a set of guidelines released Monday explaining how the ChatGPT-maker plans to deal with AI risks.

Sam Altman, chief executive officer of OpenAI, at the Hope Global Forums annual meeting in Atlanta, Georgia, US, on Monday, Dec. 11, 2023.
Photographer: Dustin Chambers/Bloomberg

By Rachel Metz
December 18, 2023 at 1:03 PM EST


OpenAI said its board can choose to hold back the release of an AI model even if the company’s leadership has deemed it safe, another sign of the artificial intelligence startup empowering its directors to bolster safeguards for developing the cutting-edge technology.

The arrangement was spelled out in a set of guidelines released Monday explaining how the ChatGPT-maker plans to deal with what it may deem to be extreme risks from its most powerful AI systems. The release of the guidelines follows a period of turmoil at OpenAI after Chief Executive Officer Sam Altman was briefly ousted by the board, putting a spotlight on the balance of power between directors and the company’s c-suite.

OpenAI’s recently announced “preparedness” team said it will continuously evaluate its AI systems to figure out how they fare across four different categories — including potential cybersecurity issues as well as chemical, nuclear and biological threats — and work to lessen any hazards the technology appears to pose. Specifically, the company is monitoring for what it calls “catastrophic” risks, which it defines in the guidelines as “any risk which could result in hundreds of billions of dollars in economic damage or lead to the severe harm or death of many individuals.”

Aleksander Madry, who is leading the preparedness group and is on leave from a faculty position at the Massachusetts Institute of Technology, told Bloomberg News his team will send a monthly report to a new internal safety advisory group. That group will then analyze Madry’s team’s work and send recommendations to Altman and the company’s board, which was overhauled after ousting the CEO. Altman and his leadership team can make a decision about whether to release a new AI system based on these reports, but the board has the right to reverse that decision, according to the document.

OpenAI announced the formation of the “preparedness” team in October, making it one of three separate groups overseeing AI safety at the startup. There’s also “safety systems,” which looks at current products such as GPT-4, and “superalignment,” which focuses on extremely powerful — and hypothetical — AI systems that may exist in the future.

Madry said his team will repeatedly evaluate OpenAI’s most advanced, unreleased AI models, rating them “low,” “medium,” “high,” or “critical” for different types of perceived risks. The team will also make changes in hopes of reducing potential dangers they spot in AI and measure their effectiveness. OpenAI will only roll out models that are rated “medium” or “low,” according to the new guidelines.
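
In code terms, that deployment rule amounts to a simple gate. The sketch below is a hypothetical illustration using the ratings named in the guidelines; the category labels are placeholders and the code is not OpenAI’s actual tooling.

```python
# Ordered from least to most severe, per the ratings described in the guidelines.
RISK_LEVELS = ["low", "medium", "high", "critical"]

def may_deploy(scores: dict) -> bool:
    """Allow release only if every tracked risk category is rated 'medium' or 'low'."""
    return all(RISK_LEVELS.index(level) <= RISK_LEVELS.index("medium")
               for level in scores.values())

# Illustrative category labels only:
print(may_deploy({"cybersecurity": "medium", "chemical/bio/nuclear": "low"}))  # True
print(may_deploy({"cybersecurity": "high", "chemical/bio/nuclear": "low"}))    # False
```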

“AI is not something that just happens to us that might be good or bad,” Madry said. “It’s something we’re shaping.”

Madry said he hopes other companies will use OpenAI’s guidelines to evaluate potential risks from their AI models as well. The guidelines, he said, are a formalization of many processes OpenAI followed previously when evaluating AI technology it has already released. He and his team came up with the details over the past couple months, he said, and got feedback from others within OpenAI.
 