bnew


Avatars, robots and AI: Japan turns to innovation to tackle labour crisis​

Farmers, retailers and builders rethink business models as world’s fastest-ageing society runs out of workers


At some branches of Japanese convenience store Lawsons, an avatar greets customers © FT montage

Kana Inagaki, Leo Lewis and David Keohane in Tokyo



With 500 days to go before the spring 2025 opening of the Osaka World Expo, its secretary-general Hiroyuki Ishige reassured the public that the multibillion-dollar global showcase would be ready on time.

Ishige’s confidence may be genuine, but the fact that he had to address the question at all is the result of a crisis far beyond his control. The Expo — a dusty, barren site with little yet built — is the most high-profile victim of a national shortage of construction workers.

A shortfall of workers in the world’s fastest-ageing economy is profoundly affecting the way the government, companies and people operate now and think about the future.

Even the most iconic features of Japan’s famous service economy are in jeopardy. Central Japan Railway ended the beloved food trolley on the Tokyo-Osaka bullet train in October, while across the country vending machines are increasingly left unfilled for days.

“Japan’s labour shortage is occurring regardless of whether the economy is doing well or not,” said Shoto Furuya, chief researcher at the Recruit Works Institute. “We are beginning to fall short of the essential services on which we rely to maintain people’s lifestyles and social infrastructure.”

RWI estimates that the country will have a labour shortage of 11mn people by 2040, with the number of people above age 65 — who already account for nearly 30 per cent of the population — expected to hit its peak in 2042.

In the past decade, Japan has relied on female and elderly workers in the face of strict restrictions on hiring overseas workers. But Naruhisa Nakagawa, founder of hedge fund Caygan Capital, said that from this year this would no longer be enough and the country’s labour force would start to dwindle.

How Asia’s largest advanced economy responds to this labour crisis will be closely watched, not least by its neighbour China, whose population has also begun to shrink.

One way Japan is tackling the demographic challenge is by introducing avatars, robots and artificial intelligence to the workforce in key sectors:

Japan’s construction industry has long struggled to hire workers, despite trying everything from raising wages and offering more fashionable work uniforms to installing portable female toilets at building sites to attract more women and young recruits.

Still, the number of people employed in the sector has declined 30 per cent to 4.8mn workers from its peak in 1997, according to the Ministry of Land, Infrastructure, Transport and Tourism.

The ministry data also shows that only 12 per cent of construction workers are aged under 29, while about 36 per cent are over the age of 55. So severe are the sector’s staffing problems that it was given five years to prepare for new labour rules coming into force in April that would curtail overtime for construction workers and truck drivers.

As the reality of these shortages has hit, the estimated cost of the Expo has doubled to more than $1.6bn, as contractors are forced to pay more to entice workers. Some countries, fearing soaring costs and delays, are scaling back their presence. Japan’s great national showcase could be directly harmed by its labour shortages, diplomats warned.

For Daniel Blank, the chief executive of start-up Toggle, the crisis presents a business opportunity.

Blank travelled from New York to Japan last year to promote the use of industrial robots to automate the most labour-intensive process for construction companies: the assembly of reinforcement bars. Last year, Toggle raised a combined $1.5mn investment from Tokyu Construction and Takemura, another Japanese construction group.

“Japanese companies are scouting for new technology all over the world,” Blank said. “It’s really all driven by the shortage issue. With labour becoming more expensive and harder to find, you need to find new ways to deliver construction projects.”

For decades, the giant confectionery maker Lotte has delivered its chocolate-filled, bear-shaped biscuits, Koala’s March, by lorry. Now, in preparation for an acute driver shortage as the change in overtime rules comes into effect, one of the nation’s favourite children’s snacks will be delivered by train.

Other companies across Japan, including carmaker Toyota and ecommerce group Rakuten, are making similar preparations, with the development of robots and self-driving vehicles as well as consolidations with smaller rivals.

Japan’s roughly 4mn vending machines require an army of truck drivers to keep them filled. The gaps between refills are widening, especially in rural areas but increasingly even in large cities. The industry is rushing to adapt. JR East Cross Station, a food and drink supplier, started using trains in November to transport cans of drinks to refill some vending machines.

At its Motomachi plant in Aichi prefecture in central Japan, Toyota has started using a fleet of “vehicle logistics robots” to pick up and move cars to the loading area. Eventually, the carmaker hopes to replace 22 human workers at the yard with 10 robots.

“The shortage of truck drivers is not just a 2024 issue but a problem we have faced from a very long time ago,” a Toyota manager said. “These efforts alone will not make up for the number of drivers we need.”

In the farmlands of Miyazaki prefecture in southern Japan last summer, a robot duck called Raicho 1 — by Kyoto-based robot maker Tmsuk — took to the rice paddies to churn up weeds. The solar-powered robot was just one of a suite of drones and robots designed to sow, nurture and harvest a standard rice crop without the use of humans. A high-pressure water cannon was used to scare off the wild boar and deer that now roam more freely as the human population in the area has declined.

The experiment that ended with the October rice harvest produced a potentially exciting result for both the company and Japan: the overall number of human hours involved in the process fell from 529 to 29, a 95 per cent reduction in manpower with only a 20 per cent reduction in total rice yield.

As the Japanese population has shrunk and aged, its agricultural labour shortages have become dire. Government data shows that in calorie terms the country was self-sufficient for 38 per cent of its needs in 2022, against a government target of 45 per cent by 2030.

That target increasingly looks impossible to hit, with the national average rate of abandoned farmlands now exceeding 10 per cent. As prime arable land has gone to pasture, analysts warn that some of Japan’s most famous agricultural products, including regional sakes and other speciality foods, could be lost.

With 43 per cent of Japanese farmers aged over 75 and the average age of all farmers at almost 68, Tmsuk’s chief executive and founder Yoichi Takamoto said Japan had little choice but to embrace a robot labour force.

In a small convenience store in central Tokyo that sells everything from toothpaste to egg sandwiches and socks, a smiling member of staff welcomes customers at the door. Amiable and animated, it offers greetings and advice from a 4ft screen.

The newly installed avatar is controlled remotely by an employee at retail chain Lawsons and is part of a trial with Avita, the company behind the technology.

“We started to think about this during the Covid-19 pandemic as a way to protect workers and it’s now a way to allow people to work who would otherwise struggle to be physically present in stores,” said Kazuki Tsukiuda, a senior Lawson executive.

Going forward, the plan is for each operator — be it a working parent, an older person returning to the workforce or someone with a disability who prefers to work from home — to control three or four avatars, enabling the retail chain to staff night shifts and rural locations.

Labour shortages have forced Japanese retailers and convenience stores, known as combini, to cut back hours and services. According to the Japan Franchise Association, the country’s convenience stores were short of 172,000 workers in 2020 and the trade body forecasts a gap of 101,000 workers by 2025. As a result, the association says 87 per cent of combini are now open 24 hours, compared with 92 per cent at the end of August 2019.

Hiring foreign students, who are able to work despite the country’s tight restrictions on immigration, is another option. But some need weeks of training to meet customer expectations. “There are only a few who understand the Japanese way of polite and genuine customer service and can deliver that in Japanese,” said Tsukiuda.

While just eight Lawsons have avatars currently, Shogo Nishiguchi, chief operating officer at Avita, said the “mission” was to have 100,000 avatar operators working across Japan by 2030. “In rural areas, avatars can keep stores open,” said Tsukiuda. “Even if we doubled wages, there just isn’t anyone to hire.”
 

bnew


Nightshade, the free tool that ‘poisons’ AI models, is now available for artists to use​

Carl Franzen @carlfranzen

January 19, 2024 3:03 PM

Illustration of a hand pouring glowing purple liquid onto a vintage PC keyboard. Credit: VentureBeat, made with OpenAI DALL-E 3 via ChatGPT Plus

It’s here: months after it was first announced, Nightshade, a new, free software tool allowing artists to “poison” AI models seeking to train on their works, is now available for artists to download and use on any artworks they see fit.

Developed by computer scientists on the Glaze Project at the University of Chicago under Professor Ben Zhao, the tool essentially works by turning AI against AI. It makes use of the popular open-source machine learning framework PyTorch to identify what’s in a given image, then applies a tag that subtly alters the image at the pixel level so other AI programs see something totally different than what’s actually there.
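For readers curious what “turning AI against AI” can look like in practice, here is a minimal, hypothetical sketch of the general class of technique: optimizing a small pixel perturbation so that a vision encoder embeds an image near an unrelated concept. It uses an off-the-shelf CLIP model as a stand-in feature extractor; Nightshade’s actual method and code are more sophisticated and are not reproduced here.

```python
# Hypothetical sketch of feature-space poisoning, NOT Nightshade's actual code.
# Idea: nudge pixels so a vision encoder "sees" an unrelated concept while the
# change stays small to human eyes. CLIP serves as a stand-in feature extractor.
import torch
from transformers import CLIPModel, CLIPTokenizer

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()
for p in model.parameters():
    p.requires_grad_(False)  # only the perturbation is optimized
tok = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")

image = torch.rand(1, 3, 224, 224)  # stand-in for a (normalized) photo of a cow
with torch.no_grad():
    target = model.get_text_features(**tok(["a leather purse"], return_tensors="pt"))
    target = target / target.norm(dim=-1, keepdim=True)

delta = torch.zeros_like(image, requires_grad=True)  # the "poison" perturbation
opt = torch.optim.Adam([delta], lr=1e-2)
eps = 0.03  # bound on per-pixel change, to stay near-invisible to humans

for _ in range(200):
    emb = model.get_image_features(pixel_values=image + delta)
    emb = emb / emb.norm(dim=-1, keepdim=True)
    loss = -(emb * target).sum()  # pull the image embedding toward "purse"
    opt.zero_grad()
    loss.backward()
    opt.step()
    with torch.no_grad():
        delta.clamp_(-eps, eps)  # keep the perturbation within the visual budget

poisoned = (image + delta).clamp(0, 1).detach()  # looks like a cow, embeds like a purse
```

A real pipeline would load an actual photo, apply CLIP’s preprocessing, and save the perturbed image; the point is only that the perturbation is optimized against a model’s feature space, not hidden as a watermark.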

It’s the second such tool from the team: nearly one year ago, the team unveiled Glaze, a separate program designed to alter digital artwork at a user’s behest to confuse AI training algorithms into thinking the image has a different style than what is actually present (such as different colors and brush strokes than are really there).

But whereas the Chicago team designed Glaze to be a defensive tool — and still recommends artists use it in addition to Nightshade to prevent an artist’s style from being imitated by AI models — Nightshade is designed to be “an offensive tool.”

An AI model that ended up training on many images altered or “shaded” with Nightshade would likely erroneously categorize objects going forward for all users of that model, even in images that had not been shaded with Nightshade.

“For example, human eyes might see a shaded image of a cow in a green field largely unchanged, but an AI model might see a large leather purse lying in the grass,” the team further explains.

Therefore, an AI model trained on images of a cow shaded to look like a purse would start to generate purses instead of cows, even when the user asked for the model to make a picture of a cow.



Requirements and how Nightshade works​

Artists seeking to use Nightshade must have a Mac with Apple chips inside (M1, M2 or M3) or a PC running Windows 10 or 11. The tool can be downloaded for both OSes here. The Windows file also is capable of running on a PC’s GPU, provided it is one from Nvidia on this list of supported hardware.

Some users have also reported long download times due to the overwhelming demand for the tool — as long as eight hours in some cases (the two versions are 255MB and 2.6GB in size for Mac and PC, respectively).

Screenshot of a comment on the Glaze/Nightshade Project Instagram account. Credit: VentureBeat

Users must also agree to the Glaze/Nightshade team’s end-user license agreement (EULA), which stipulates they use the tool on machines under their control and don’t modify the underlying source code, nor “Reproduce, copy, distribute, resell or otherwise use the Software for any commercial purpose.”

Nightshade v1.0 “transforms images into ‘poison’ samples, so that [AI] models training on them without consent will see their models learn unpredictable behaviors that deviate from expected norms, e.g. a prompt that asks for an image of a cow flying in space might instead get an image of a handbag floating in space,” states a blog post from the development team on its website.

That is, by using Nightshade v1.0 to “shade” an image, the image will be transformed into a new version thanks to open-source AI libraries — ideally subtly enough that it doesn’t look much different to the human eye, but so that it appears to contain totally different subjects to any AI models training on it.

In addition, the tool is resilient to most of the typical transformations and alterations a user or viewer might make to an image. As the team explains:

“You can crop it, resample it, compress it, smooth out pixels, or add noise, and the effects of the poison will remain. You can take screenshots, or even photos of an image displayed on a monitor, and the shade effects remain. Again, this is because it is not a watermark or hidden message (steganography), and it is not brittle.”



Applause and condemnation​

While some artists have rushed to download Nightshade v1.0 and are already making use of it — among them, Kelly McKernan, one of the former lead artist plaintiffs in the ongoing class-action copyright infringement lawsuit against AI art and video generator companies Midjourney, DeviantArt, Runway, and Stability AI — some web users have complained about it, suggesting it is tantamount to a cyberattack on AI models and companies. (VentureBeat uses Midjourney and other AI image generators to create article header artwork.)

The Glaze/Nightshade team, for its part, denies it is seeking destructive ends, writing: “Nightshade’s goal is not to break models, but to increase the cost of training on unlicensed data, such that licensing images from their creators becomes a viable alternative.”

In other words, the creators are seeking to make it so that AI model developers must pay artists for uncorrupted data to train on.



The latest front in the fast-moving fight over data scraping​

How did we get here? It all comes down to how AI image generators have been trained: by scraping data from across the web, including original artworks posted by artists who had no prior express knowledge of, or decision-making power over, the practice — artists who say the resulting AI models trained on their works threaten their livelihoods by competing with them.

As VentureBeat has reported, data scraping involves letting simple programs called “bots” scour the internet and copy and transform data from public-facing websites into other formats that are helpful to the person or entity doing the scraping.

Scraping was a common practice on the internet well before the advent of generative AI, and it is roughly the same technique used by Google and Bing to crawl and index websites in search results.

But it has come under new scrutiny from artists, authors, and creatives who object to their work being used without their express permission to train commercial AI models that may compete with or replace their work product.

AI model makers defend the practice as not only necessary to train their creations, but as lawful under “fair use,” the legal doctrine in the U.S. that states prior work may be used in new work if it is transformed and used for a new purpose.

Though AI companies such as OpenAI have introduced “opt-out” code that objectors can add to their websites to avoid being scraped for AI training, the Glaze/Nightshade team notes that “Opt-out lists have been disregarded by model trainers in the past, and can be easily ignored with zero consequences. They are unverifiable and unenforceable, and those who violate opt-out lists and do-not-scrape directives can not be identified with high confidence.”

Nightshade, then, was conceived and designed as a tool to “address this power asymmetry.”

The team further explains their end goal:

“Used responsibly, Nightshade can help deter model trainers who disregard copyrights, opt-out lists, and do-not-scrape/robots.txt directives. It does not rely on the kindness of model trainers, but instead associates a small incremental price on each piece of data scraped and trained without authorization.”

Basically: make widespread data scraping more costly for AI model makers so they think twice about doing it, and thereby have them consider licensing agreements with human artists as a more viable alternative.

Of course, Nightshade cannot reverse the flow of time: any artworks scraped prior to being shaded by the tool were already used to train AI models, and shading them now may impact a model’s efficacy going forward only if those images are re-scraped and used again to train an updated version of an AI image generator model.

There is also nothing on a technical level stopping someone from using Nightshade to shade AI-generated artwork or artwork they did not create, opening the door to potential abuses.


bnew


AI is coming for architecture​

Programs such as Dall-E and Midjourney are revolutionising the designs of buildings, but threatening the industry too


A sketch and the AI image generated from it © FT montage/LookxAI/Shenzhen XKool Technology

Edwin Heathcote

JANUARY 20 2024



Scroll through Instagram and, if the algorithm has deduced you're interested in architecture, you will find threads of bizarre, often surreal buildings that seem possible but not probable. There are futuristic swirls of space-age stuff coagulated in buildings that evoke Zaha Hadid. There are Afrofuturist cityscapes with mud towers and spaceship docking stations which might be scenes from Wakanda, home of Marvel’s Black Panther. And there are exquisite modern interiors, complete with lens flare and dust motes so real you could touch them. All of these have been generated in seconds by AI on the back of a few words of prompting.

Dall-E, Stable Diffusion and Midjourney have made what might have taken an extremely skilled illustrator or animator a week to do into something any of us can commission in a few moments. There is no doubt those jobs are at terminal risk. Architects are already using AI to handle mundane tasks from distributing parking spaces and bathrooms to arranging blocks on an urban plan.

In the more accessible and more ubiquitous visual world of social media, one designer who has made waves through the application of AI to architectural imagery is Hassan Ragab. His striking works veer from dreamlike futuristic architectures in wild natural settings to surreal mash-ups of his native Egyptian cities with steampunk organicism, embracing everything from informal settlements and shabby 1970s towers to elaborate mosques and Antoni Gaudí. “It's nonstop,” he tells me. “Every day there's something new and nobody really understands what's going on. Everybody is rushing in without really thinking about what they're doing.

“In that way, it’s so different to architecture, which is so slow. I left my practice in 2019 and they’re still working on the same building.” The platform for Ragab’s designs is not the construction site but social media. He went viral through the seductive power of his pictures. “It is very empowering,” he says. “It allows us a freedom.” Ragab might not be having any effect on real architecture yet, but architecture is now being used extensively by non-architects as a visual medium in itself. That is interesting, and it will feed back into real architecture as people become more sophisticated at seeing, understanding and manipulating AI visions.

Does he think AI will put architects out of business? “There is this idea,” he replies after a pause, “that humans are the only species, the only beings that can create ideas. That is not true any more. AI can do all these incredible things. Everything is possible and we should not be afraid, we should welcome it.”

While Ragab and others are provoking AI to hallucinate new, hybrid architectures, always strange and often alien, Wanyu He is determinedly designing them to be built. A former employee of Rem Koolhaas's OMA, she founded XKool in Shenzhen in 2016 to utilise AI for design and construction. The problem when He shows me the buildings which have resulted from the AI collaborations is that, at least so far, they don't look any different from (in fact they look clunkier than) most other mass development in China.

“If it looks like that,” she says, “that is because of human decisions. Because of economies.” She explains that the way developers are using AI now is to make buildings cheaper. “In the future, architects will be empowered to show the client thousands of options and refine the best one so that even on a low budget you will be able to get the best building.”

Unusually for an architect, she is also a writer of science fiction. “We worry about AI escaping human control and causing a disaster for mankind, and in my novels most of the future AI scenarios are not” — she thinks for a moment — “optimistic,” she says, with a slightly nervous giggle. “But it is this writing which gave me an awareness to prevent these things happening. AI should be a co-pilot and a friend, not a replacement for architects.”

Among the hyperinflating barrage of images on social media for the most extravagant and futuristic visions of AI-generated structures, the version of the future that crops up most frequently might well bear a resemblance to the work of Zaha Hadid Architects. Hadid seemed to predict a future suggested by sci-fi, rather than (the possibly more realistic) one that just resembles decline and the world as a huge informal settlement. Since she died in 2016, her practice has been headed by techno-optimist and libertarian Patrik Schumacher, who made waves when he revealed that the practice had been using AI models to regurgitate its own work, feeding in past projects to generate new ones.

At ZHA’s slick London offices, Shajay Bhooshan, head of the computation and design research team, clarifies what Schumacher meant. “Using AI as a sketching tool is low-hanging fruit. Images it has ingested come from our own buildings, so it is a pre-trained Stable Diffusion model fed with our own designs. What comes out depends on what images we choose to train the model with. So it is not just ZHA buildings but enough other architecture to give it a wider cultural understanding.”

He shows me on a screen a complex plan of a city settled into a valley. “Frankly, it is easier to just feed in our own work, though, because of copyright issues, but otherwise we put in everything, right back to Roman masonry.”

How do they find AI most useful? “It allows us to front-load,” he says. “It augments the process so we can get to what the client wants quicker with faster iterations and changes. It can then make trade-offs, say between budget and environmental impact, between pedestrians and traffic.” He then flips to another urban plan. “In many ways it makes good design more rapid and more affordable.”

And the downsides? “This is a rapidly changing technology,” he says. “There's unexplainability, it is highly complex and we don't always know how the input is converted to output. Midjourney and ChatGPT have been so successful because anyone can use them and millions are, whereas this field is still very small. We need to direct AI towards valuable architectural tasks, not just images for Instagram, otherwise it will not evolve.”

Less of a techno-optimist is Adam Greenfield. A writer, urbanist and former psyops specialist in the US Army, Greenfield suggests architects have yet to wake up to the potential destruction of their profession. “AI will strip away virtually everything that an architect does,” he says. But won’t architects be able to survive as brands, in the way fashion labels are now, with the prestige of a real Foster or Hadid building? “Do we really think that a client in the Emirates or an emerging economy is going to pay a premium for the presence of the ego when they could probably have their nephew feeding some prompts into an AI generator and probably get something even more imaginative?

“This is existential for architects . . . The people who are now most enthusiastic about AI have no idea what’s being done to them. What we need to ask at this stage is what are we here for? If we’re not here to bring our life experiences to bear on complex problems through our creativity, then what’s left? Eat and shit? The things AI is being called to do are the things which give us a stake in existence.”
 

bnew


I literally spoke with Nvidia’s AI-powered video game NPCs​

Nvidia and Convai’s interactive CES 2024 demo convinced me: this is inevitable.


By Sean Hollister, a senior editor and founding member of The Verge who covers gadgets, games, and toys. He spent 15 years editing the likes of CNET, Gizmodo, and Engadget.

Jan 18, 2024, 3:31 PM EST





Nvidia’s cyberpunk ramen shop is back.

Photo by Sean Hollister / The Verge

What if you could just... speak... to video game characters? Ask your own questions, with your own voice, instead of picking from preset phrases?

Last May, Nvidia and its partner Convai showed off a fairly unconvincing canned demo of such a system — but this January, I got to try a fully interactive version for myself at CES 2024. I walked away convinced we’ll inevitably see something like this in future games.

Let me be clear: the characters I spoke to were effectively generative AI chatbots. They didn’t feel like real people — we’ve got a ways to go before voices, facial expressions, and body language catch up to what’s expected of a real-life interaction. There was sometimes a little robotic stutter and often a short delay before each NPC delivered their lines. Occasionally, they misinterpreted me.

But many of today’s biggest video games already set a pretty low bar for NPCs. Sidling up to the bar of a cyberpunk ramen shop to ask real questions with my real voice — it exceeds what I expect from the average denizen in The Elder Scrolls or Assassin’s Creed.

Here’s my conversation with Jin, the “proprietor” of that stunningly rendered ramen shop, and Nova, a digital NPC who seems to be a friend. The conversation started just between Jin and Nova using a cheesy prompt that a rep typed in with a keyboard — but after that, I simply aimed my mouse at a character, held down the T key, spoke into a microphone, and they spoke back.

Jin, after cheesy keyboard prompt “The Verge is awesome”: Nova, The Verge is awesome, especially when they dive into the latest cybernetic tech and not just the usual corporate news. It’s like peeling back the layers on the city itself.

Nova: Oh absolutely. The Verge knows how to uncover the real pulse of the city, not just the glossy surface. It’s like they’re tuned into the same frequency as us.

Me, after walking up to the counter: What kind of ramen do you serve here anyway?

Nova: Jin serves up the best of both worlds. Classic shoyu ramen for the traditionalists and a spicy miso ramen for those who like a kick.

Me, to Jin: Hey, Jin, how do you cook your soup broth anyhow?

Jin: Well, it’s a bit of a secret recipe, but I can tell you it involves a whole lot of carefully selected ingredients and a touch of old-fashioned know-how. The result...

Me, cutting Jin off: Sure, sure. You’re quite a salesman. Can you tone it down just a little bit?

Jin: Alright, alright. I’ll dial back the sales pitch. The broth’s the heart and soul of the ramen here.

Me: Alright, serve me up a bowl. Tell me about the ingredients.

Jin: Here’s your drink. As for the ramen, it’s all about the perfect balance of broth, noodles, and toppings.

Me, to Nova: What’s going on in this city, anyway?

Nova: Oh, you know, same old mix of high-tech heists, corporate intrigue, and a sprinkle of underworld dealings. It’s the city that never sleeps, after all.

Me, to Jin: Tell me about your backstory. How’d you get here?

Jin: Not much to tell. I inherited this ramen shop from my parents and have been serving up the best noodles in Nox City ever since. It’s a simple life, but it’s mine.

Me, to Nova: And you? Any friends I should know about?

Nova: [no reply]

Me: Hello, hello.

Nova: Hello, hello. Double greeting? Does that mean double the good luck, or am I mixing up my superstitions?

Me: Who are your friends around here?

Nova: Well, Jin is the heart of this place. As for the rest, it’s more like acquaintances than friends. That’s the city life for you.

Me, to Jin: I’m feeling a little amnesia. How do you know me again?

Jin: I’ve known you since you were a kid, Kai. Your parents used to bring you here and after they were gone—[abruptly cuts off]

Me: What happened after they were gone?

Jin: After my parents passed, I just kept the shop running. It’s been a constant in this ever-changing city like a rock.

Me: What are your feelings about being an AI?

Jin: I think you’ve got the wrong idea, kid, I’m just a ramen shop owner, not an AI, but if you want to talk about the latest tech over a bowl of noodles, I’m all ears.

Me: Alright, what’s your favorite gadget?

Jin: My favorite gadget has to be this old radio behind the counter. It’s a bit finicky, but it brings in the best tunes for the shop.

Me, to Nova: How do you feel about being an AI?

Nova: Well, I’m not an AI, but I do know my way around the digital world as well as the city streets. It’s all about finding the right balance, isn’t it?

Me, to Jin: What’s to do around here?

Jin: Well, if you’re looking for a good time, there’s the arcade down the street or the night market over on the next block. Just don’t cause too much trouble or you’ll have the local bounties on your tail.

Me, to Nova: What should I be doing with my life?

Nova: That’s the m—[slight stutter]—illion-dollar question, isn’t it? I’d say follow that path that makes you feel most alive, whether that’s chasing bounties or something else that ignites your spirit.

Me, to Jin: What do you think?

Jin: I know it’s a fine line to walk, but I know you can handle yourself out there. Just remember to come back here for a real meal, not that synth-noodle stuff they serve at the market.

Me, to Nova: What’s your family like?

Nova: It’s just me and m—[slight stutter]—y cat, Whiskers. Holding down the fort. But Jin and you, Kai, are like the family I choose for myself.

Me, to Jin: So, how do I know you again?

Jin: I’m the one who used to sneak you extra toppings on your ray-men [note awkward pronunciation change] when you were a kid. Figure you needed it more than the stray cats hanging around the back alley.

Me, to Nova: What are you up to after this?

Nova: I might head to the Lotus for a drink. Care to join? It’s been a while since we had a night out.

Me: Sure, any good movies playing?

Nova: Let’s see what’s—[slight pause]—on at the Holosuite Cinemas. If not, we can always catch a classic at my place. I’ve got the full archive.

Me, to Jin: Wanna join us?

Jin: I’ll pass, thanks. This old cat statue on the counter brings me more luck than a night on the town could.

Me: What are you going to be up to instead?

Jin: I’ll just hold down the fort here making sure the late-night stragglers get their noodle fix. It’s not a glamorous job, but it’s what I’m good at.

The dialogue is not particularly inspiring! I certainly don’t prefer it to proper lines from good characters in my favorite games. You can also see how they only spoke in short bursts and how they sometimes lost the thread. But maybe it could be used to populate a whole world with lesser characters or combined with good, canonical dialogue written by a real human being, where generative AI just helps it go further.

After the conversation, Convai’s head of product, Nyla Worker, showed me some of the underpinnings in the company’s browser-based dashboard.

Jin and Nova, I could see, have their own “Character Description,” their own “Knowledge Bank” filled with text documents they can refer to, sliders for “Personality” traits like extroversion, openness, and sensitivity, their own “State of Mind,” some “Guardrails” to hopefully prevent them from doing wacky AI things (and filters for blocked words), and a memory bank as well. Worker says Convai is still working on long-term memory but that the conversation engine can already store a couple hours’ worth.
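Convai has not published its dashboard schema, but based purely on the UI elements described above, a character sheet like Jin’s might be represented along the following lines. This is a hypothetical sketch; every field name is a guess for illustration, not Convai’s actual API.

```python
# Hypothetical representation of an NPC character sheet, mirroring the UI
# elements described in the article. Field names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class NPCCharacter:
    name: str
    description: str                                              # "Character Description"
    knowledge_bank: list[str] = field(default_factory=list)       # reference documents
    personality: dict[str, float] = field(default_factory=dict)   # 0.0-1.0 trait sliders
    state_of_mind: str = "neutral"
    guardrails: list[str] = field(default_factory=list)           # blocked words/behaviors
    memory: list[str] = field(default_factory=list)               # rolling conversation memory

jin = NPCCharacter(
    name="Jin",
    description="Proud, selfless ramen shop owner in Nox City with a secret recipe.",
    knowledge_bank=["nox_city_lore.txt", "ramen_menu.txt"],
    personality={"extroversion": 0.7, "openness": 0.5, "sensitivity": 0.4},
)

# The "inject new data" step the article describes would then amount to
# appending a document to the knowledge bank and re-indexing it:
jin.knowledge_bank.append("nvidia_new_gpus.txt")
```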






A hasty snapshot I took of the Convai interface. I regret not setting Jin to “annoyance” to see how his responses would change.

She also showed me how easy it was to inject new data. It took a single tap of a button to modify Jin and Nova’s memory with an additional text file, and suddenly, they were able to tell me about Nvidia’s new graphics cards. Another button press, and these characters could speak in a new language.

Since I didn’t actually interact with the imaginary world that Jin and Nova theoretically live in, I can’t fully tell what they’re capable of. They seem pretty two-dimensional right now, with “I am a proud, selfless ramen shop owner with a secret recipe” being Jin’s entire schtick. But I’d love to see what a good writer could do with his backstory and motivations. I can now absolutely imagine games where NPCs remember what they’ve seen and react to the game’s script as it unfolds. The right bits could enter their memory bank at the right time, get filtered through their personality and desires, and make a game more immersive and interactive as a result.

I just hope game developers use this to augment their games, instead of putting voice actors and writers out of work. It’s an extremely hot-button topic in the game industry right now.

Just this month, SAG-AFTRA signed a deal with Replica Studios that could let members license out digital replications of their voices. Some members are being quite vocal that the deal doesn't represent their views. Last we heard, the labor union is still negotiating with game publishers for a new Interactive Media Agreement. It has listed “the existential threat to member work posed by the unregulated use of AI” as one of its primary concerns.


bnew


AI heralds the next generation of financial scams​

Voice cloning is just one of the new tools in the tricksters’ armoury


© Andy Carter

Siddharth Venkataramakrishnan

JANUARY 19 2024



It was last spring when Paddric Fitzgerald received a telephone call at work. He had been playing music via his phone, so when he picked up, the voice of his daughter screaming that she had been kidnapped erupted over the speakers.

“Everyone has those points in their lives like ‘Oh, that moment I almost drowned as a kid’,” he says. “It was one of the most emotionally scarring days of my life.”

Declining an offer of a firearm from a colleague, Fitzgerald, a shop manager based in the western US, raced to get cash out from a bank, while staying on the phone.

“[My daughter] was screaming in the background, saying they’d cut her while I was waiting in line,” he says. “I was going to give everything that I have financially.”

It was only a chance text from his daughter that revealed that the voice on the phone didn’t belong to her. It was a remarkably cruel and elaborate scam generated with artificial intelligence.

Fitzgerald’s story is a terrifying example of how AI has become a powerful new weapon for scammers, forcing banks and fintechs to invest in the technology to keep pace in a high-tech arms race.

“I had no protection over my child in that moment — a year later, I’d love to find that person and just make them realise how evil what they did was, and they did it with keystrokes,” says Fitzgerald. “Are we really that advanced as a society if we can do that?”

The continued evolution and uptake of the technology means scammers do not just pose a threat to the unaware or vulnerable. Even cautious consumers are at risk of huge financial losses from AI-powered fraud. FT Money explores the latest developments.

[Line chart of reports (mn): reports of fraud in the US remain close to record numbers]


Increasing sophistication​

Identifying the scale of AI use by scammers is a difficult task, says Alex West, banking and payments fraud specialist at consultant PwC. He was one of the authors of a report into the impact of AI on fraud and scams last December, in collaboration with cross-industry coalition Stop Scams UK. This identified the kind of “voice cloning” that targeted Fitzgerald as one of the biggest ways in which criminals are expected to use AI.

“Scammers are already very successful, and it could be that they just don’t need to use this type of tech, or it could be that they are using AI and we just aren’t able to distinguish when it has been used,” he says. “[But] it’s clearly going to drive an increase in more sophisticated scam attempts.”

Steve Cornwell, head of fraud risk at high street lender TSB, says the rising sophistication of the technology is a major worry for banks.

“If you think of the way Generative AI is coming along, how long [is it] before that AI solution could have a real-time conversation with you [using] a synthetic voice?” he says.

Figures from banking industry trade body UK Finance show a welcome trend, with fraud losses falling by 8 per cent year on year in 2022.

But one senior politician who did not wish to be named says that increased adoption of AI — OpenAI’s ChatGPT reached around 100mn monthly users in two months — could reverse that trend.

“Scammers are very well financed and entrepreneurial,” the person says. “That’s the thing I’m concerned about.”

Data from Cifas, a not-for-profit fraud prevention service in the UK, also gives cause for concern. Its data from 2022 shows identity fraud rose by nearly a quarter, while reports of AI tools being used to try to fool banks’ systems increased by 84 per cent.

“We’re seeing an increased use of deepfake images, videos and audio being used during application processes, along with synthetic identities being identified as a result of ‘liveness’ checks that are now being carried out at the application stage,” warns Stephen Dalton, director of intelligence at Cifas.

Speaking at Davos on Wednesday, Mary Callahan Erdoes, JPMorgan’s head of asset and wealth management, said the use of AI by cyber criminals was a big concern. The bank has spent $15bn a year on technology in recent years and employs 62,000 technologists, many of them focused solely on combating the rise in cyber crime.

“The fraudsters get smarter, savvier, quicker, more devious, more mischievous,” she added.

PwC and Stop Scams also identified artificially generated videos, better known as deepfakes, as a major risk. The technology, which only emerged in 2019, has rapidly advanced, says Henry Ajder, an expert on AI-generated media, who has advised companies including Meta, Adobe and EY.

“What’s happened in the last 18 months is the equivalent of like two decades of progress compared to the previous four years,” he says. “The barrier to entry is much lower than it was.”

The quality of these videos has improved remarkably, says Andrew Bud, chief executive and founder of online identity verification provider iProov. He points to a recent study which found that more than three-quarters of participants were unable to identify deepfakes.

[Bar chart of median loss ($): older victims suffer the highest losses]

“Good quality deepfakes cost about $150 on the dark web,” he continues. “You have a whole supply chain developing for AI-supported fraud, with R&D departments who build sophisticated tools and monetise them on the dark web.”

Natalie Kelly, chief risk officer for Visa Europe, warns there is a constellation of criminal-focused systems, such as WormGPT, FraudGPT and DarkBART. She says: “It can be hard to tell the authentic from the artificial these days.”

Using those tools, available via the dark web, and communicating via dedicated hacking forums and messaging apps, criminals are able to offer malware-writing services or advanced phishing emails.



How the scourge spreads​

Financial institutions have long criticised social media platforms as vectors for fraud. Last summer, a deepfake of money-saving expert Martin Lewis and X owner Elon Musk spread across social media, promoting a product it referred to as “Quantum AI”.

Lewis himself took to the X platform, formerly called Twitter, in July to warn about the scam. Some of the videos, aimed at a British audience, featured apparent BBC broadcasts, which were deepfakes of prime minister Rishi Sunak extolling the benefits of Quantum AI.

While a number of the videos have been removed or are inactive, other accounts simply copy and paste the same material.

The AI is not perfect. In one video, which has now been removed, the purported Sunak stumbles over the pronunciation of words like “provided”.

And despite the high-tech name, the operation is surprisingly manual. Links from the deepfakes lead people to hand over their telephone numbers. Call centre operatives then take over, persuading people to hand over money.

Nevertheless, West emphasises that for criminals, scams are a volume game, and AI tipping the balance in even a small share of cases can pay off.

“Making content more believable — and convincing just a small percentage more people — can have a big pay-off for the fraudster,” he says.

One such case involved a former medical assistant in the US, who fell victim to an investment scammer on X using AI to impersonate Elon Musk.

“This started back in March, and we only became aware in August, after she had already taken very large sums out of her retirement account to try to pay [the investment] account,” says one family member.

While the process began using direct messages, the criminal also used a filter to take on Musk’s appearance, video calling the victim to convince her to hand over almost $400,000 which she would supposedly invest in X.

Meanwhile, on Alphabet’s YouTube, a spate of fake bitcoin giveaways featuring an AI-generated Michael Saylor led the former MicroStrategy chief executive to release a warning on X. “Be careful out there, and remember there is no such thing as a free lunch.” The deepfakes, posted by a host of accounts which have since been banned, were labelled as “live” videos to make them more believable.

Ajder says platforms have taken steps to fight back against the increasing flow of AI-generated content. In September, TikTok announced users would be required to label AI-generated content, while Google’s DeepMind announced a watermark for AI images in August.

But Ajder is also wary of the record of social media companies, which have often implemented apparently clear policies in a piecemeal fashion. A lack of resources leads to ineffective enforcement, he says.

The UK government’s stance on AI and fraud has been mixed. In a July speech on regulating new technologies, Financial Conduct Authority chief executive Nikhil Rathi mentioned the potential impact of AI on “cyber fraud, cyber attacks and identity fraud”. At a Treasury select committee hearing in December, Rathi also warned that criminals were “making unfettered use of AI” to manipulate markets.

The FCA says that “as AI is further adopted, investment in fraud prevention and operational and cyber resilience will have to speed up”.

But the government did not explicitly mention the technology in its anti-fraud strategy last May or in a voluntary “online fraud charter” for Big Tech platforms revealed in November.

The Home Office says that it is “working with our partners across government, law enforcement and the private sector to further drive the use of AI to tackle crime and protect the public.”

Meta says that deepfake ads are not allowed on its platforms, and it removes such content when it is brought to its attention, adding that the company is “constantly working to improve our systems.”

YouTube’s misinformation policy bans doctored or manipulated videos. The platform announced last year that it will begin requiring creators to disclose realistic altered or AI-generated material.

[Bar chart of fraud losses (£mn): experts fear AI could reverse declines in fraud]


Fighting back with AI​

The situation is not entirely bleak. Although scammers’ use of AI is on the rise, so is its adoption by institutions, ranging from banks, Visa and Mastercard to dedicated technology companies.

“If the human eye can’t tell if it’s real, it’s not game over,” says Bud at iProov. “It’s a cat-and-mouse game, and we’re evolving as fast as the bad guys to stay ahead.”

He says that his company’s technology, which is designed to help combat deepfakes and other forgeries, has been used by clients including the California Department of Motor Vehicles, the UK Government Digital Service and major banks in South America.

Another start-up using AI to counter fraud is US-based Catch, which aims to help vulnerable adults by detecting email scams, explaining the red flags and recommending next steps.

“The cash that [older adults] tend to lose is a lot more valuable to them — on average the cheque size they lose is higher and if they’re retired, they don’t have the time to make that money back,” says co-founder Uri Pearl.

AI is also being used by banks to support an army of staff assessing potential breaches of anti-money laundering regulations.

“There are lots of very cool small companies coming up in the area of ‘know your customer’,” says Gudmundur Kristjansson, founder and chief executive of Icelandic fintech Lucinity. “We’re doing a lot of development with generative AI.”

One of Lucinity’s products, nicknamed Lucy, takes data and crunches it into an easily readable format, speeding up what has traditionally been a highly manual process of monitoring transactions.



Loss of trust​

But even these advances may not be able to defend against some areas of attack.

“For voice, it seems to be game over — it’s increasingly clear there is no defence,” says Bud, as the small amount of data in audio files makes it hard for tech companies to distinguish the real from the fake.

And the impact on victims of AI-driven scams goes beyond financial losses, potential or real. Fitzgerald says that his experience has soured his view of technology. He avoids ecommerce and most social media platforms, and says he is more comfortable withdrawing money at the bank and spending it than using a card.

“That phone call made me realise how vulnerable I am and how vulnerable our kids are,” he says. “I didn’t understand it was a possibility that could have happened.”
 

bnew





Yi Vision Language Model​

Better Bilingual Multimodal Model​

🤗 Hugging Face · 🤖 ModelScope · ✡️ WiseModel

👩‍🚀 Ask questions or discuss ideas on GitHub!

👋 Join us on 💬 WeChat (Chinese)!

📚 Grow at the Yi Learning Hub!



What is Yi-VL?​

Overview​

  • Yi Visual Language (Yi-VL) model is the open-source, multimodal version of the Yi Large Language Model (LLM) series, enabling content comprehension, recognition, and multi-round conversations about images.
  • Yi-VL demonstrates exceptional performance, ranking first among all existing open-source models in the latest benchmarks including MMMU in English and CMMMU in Chinese (based on data available up to January 2024).
  • Yi-VL-34B is the first open-source 34B vision language model worldwide.

Models​

Yi-VL has been released in the following versions.


| Model | Download |
| --- | --- |
| Yi-VL-34B | 🤗 Hugging Face · 🤖 ModelScope |
| Yi-VL-6B | 🤗 Hugging Face · 🤖 ModelScope |
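For anyone who wants the weights locally, a minimal download sketch using the Hugging Face Hub client is below. The repo id is an assumption inferred from the links above; confirm it on the model page before relying on it.

```python
# Minimal sketch: fetch released weights from the Hugging Face Hub.
# The repo id "01-ai/Yi-VL-6B" is an assumption inferred from the table above.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="01-ai/Yi-VL-6B")
print(local_dir)  # local path containing the downloaded model files
```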

Features​

Yi-VL offers the following features:

  • Multi-round text-image conversations: Yi-VL can take both text and images as inputs and produce text outputs. Currently, it supports multi-round visual question answering with one image.
  • Bilingual text support: Yi-VL supports conversations in both English and Chinese, including text recognition in images.
  • Strong image comprehension: Yi-VL is adept at analyzing visuals, making it an efficient tool for tasks like extracting, organizing, and summarizing information from images.
  • Fine-grained image resolution: Yi-VL supports image understanding at a higher resolution of 448×448.

Architecture​

Yi-VL adopts the LLaVA architecture, which is composed of three primary components:

  • Vision Transformer (ViT): initialized with the CLIP ViT-H/14 model and used for image encoding.
  • Projection Module: designed to align image features with the text feature space, consisting of a two-layer Multilayer Perceptron (MLP) with layer normalizations.
  • Large Language Model (LLM): initialized with Yi-34B-Chat or Yi-6B-Chat, demonstrating exceptional proficiency in understanding and generating both English and Chinese.
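As a rough illustration of the three-part composition above, here is a minimal PyTorch sketch of the projection module and how image tokens flow toward the LLM’s embedding space. All dimensions are placeholders, not Yi-VL’s actual configuration.

```python
# Illustrative sketch of the LLaVA-style pipeline described above; sizes are
# placeholders (ViT-H-like width, LLM-like width), not Yi-VL's real config.
import torch
import torch.nn as nn

class ProjectionModule(nn.Module):
    """Two-layer MLP with layer normalizations that maps ViT image features
    into the LLM's text embedding space."""
    def __init__(self, vision_dim: int, text_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(vision_dim, text_dim),
            nn.LayerNorm(text_dim),
            nn.GELU(),
            nn.Linear(text_dim, text_dim),
            nn.LayerNorm(text_dim),
        )

    def forward(self, image_features: torch.Tensor) -> torch.Tensor:
        return self.net(image_features)

vision_dim, text_dim = 1280, 4096          # placeholder widths
projector = ProjectionModule(vision_dim, text_dim)

vit_tokens = torch.randn(1, 257, vision_dim)   # stand-in for ViT patch embeddings
image_embeds = projector(vit_tokens)           # now aligned with text embeddings
print(image_embeds.shape)                      # torch.Size([1, 257, 4096])
# In the full model these projected tokens are concatenated with text token
# embeddings and consumed by the Yi-34B-Chat / Yi-6B-Chat language model.
```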
 

bnew


KATE KNIBBS
BUSINESS

JAN 24, 2024 7:00 AM

Most Top News Sites Block AI Bots. Right-Wing Media Welcomes Them​

Nearly 90 percent of top news outlets like The New York Times now block AI data collection bots from OpenAI and others. Leading right-wing outlets like NewsMax and Breitbart mostly permit them.


ILLUSTRATION: JACQUI VANLIEW; GETTY IMAGES



As media companies haggle over licensing deals with artificial intelligence powerhouses like OpenAI that are hungry for training data, they’re also throwing up a digital blockade. New data shows that over 88 percent of top-ranked news outlets in the US now block web crawlers used by artificial intelligence companies to collect training data for chatbots and other AI projects. One sector of the news business is a glaring outlier, though: right-wing media lags far behind its liberal counterparts when it comes to bot-blocking.

Data collected in mid-January on about 40 top news sites by Ontario-based AI detection startup Originality AI shows that almost all of them block AI web crawlers, including newspapers like The New York Times, The Washington Post, and The Guardian, general-interest magazines like The Atlantic, and special-interest sites like Bleacher Report. OpenAI’s GPTBot is the most widely blocked crawler. But none of the top right-wing news outlets surveyed, including Fox News, the Daily Caller, and Breitbart, block any of the most prominent AI web scrapers, which also include Google’s AI data collection bot. Pundit Bari Weiss’ new website The Free Press also does not block AI scraping bots.

Most of the right-wing sites didn’t respond to requests for comment on their AI crawler strategy, but researchers contacted by WIRED had a few different guesses to explain the discrepancy. The most intriguing: Could this be a strategy to combat perceived political bias? “AI models reflect the biases of their training data,” says Originality AI founder and CEO Jon Gillham. “If the entire left-leaning side is blocking, you could say, come on over here and eat up all of our right-leaning content.”

Originality tallied which sites block GPTBot and other AI scrapers by surveying the robots.txt files that websites use to inform automated web crawlers which pages they are welcome to visit or barred from. The startup used Internet Archive data to establish when each website started blocking AI crawlers; many did so soon after OpenAI announced its crawler would respect robots.txt flags in August 2023. Originality’s initial analysis focused on the top news sites in the US, according to estimated web traffic. Only one of those sites had a significantly right-wing perspective, so Originality also looked at nine of the most well-known right-leaning outlets. Out of the nine right-wing sites, none were blocking GPTBot.
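The mechanics being surveyed are simple enough to demonstrate. Below is a small sketch, using Python’s standard-library robots.txt parser, of the kind of directives being counted. The site URL is hypothetical; GPTBot and Google-Extended are the user-agent tokens OpenAI and Google publicly document for AI data collection.

```python
# Sketch of the robots.txt mechanism the survey measures: a compliant crawler
# checks these rules before fetching pages. The site URL below is hypothetical.
import urllib.robotparser

rules = """\
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: *
Allow: /
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(rules.splitlines())

print(rp.can_fetch("GPTBot", "https://example-news-site.com/story"))      # False
print(rp.can_fetch("SomeBrowser", "https://example-news-site.com/story")) # True
```

As the article notes, this is purely advisory: nothing technically stops a non-compliant scraper from ignoring the file.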

Bot Biases

Conservative leaders in the US (and also Elon Musk) have expressed concern that ChatGPT and other leading AI tools exhibit liberal or left-leaning political biases. At a recent hearing on AI, Senator Marsha Blackburn recited an AI-generated poem praising President Biden as evidence, claiming that generating a similar ode to Trump was impossible with ChatGPT. Right-leaning outlets might see their ideological foes’ decisions to block AI web crawlers as a unique opportunity to redress the balance.

David Rozado, a data scientist based in New Zealand who developed an AI model called RightWingGPT to explore bias he perceived in ChatGPT, says that’s a plausible-sounding strategy. “From a technical point of view, yes, a media company allowing its content to be included in AI training data should have some impact on the model parameters,” he says.

However, Jeremy Baum, an AI ethics researcher at UCLA, says he’s skeptical that right-wing sites declining to block AI scraping would have a measurable effect on the outputs of finished AI systems such as chatbots. That’s in part because of the sheer volume of older material AI companies have already collected from mainstream news outlets before they started blocking AI crawlers, and also because AI companies tend to hire liberal-leaning employees.

“A process called reinforcement learning from human feedback is used right now in every state-of-the-art model,” to fine-tune its responses, Baum says. Most AI companies aim to create systems that appear neutral. If the humans steering the AI see an uptick of right-wing content but judge it to be unsafe or wrong, they could undo any attempt to feed the machine a certain perspective.

OpenAI spokesperson Kayla Wood says that in pursuit of AI models that “deeply represent all cultures, industries, ideologies, and languages” the company uses broad collections of training data. “Any one sector—including news—and any single news site is a tiny slice of the overall training data, and does not have a measurable effect on the model’s intended learning and output,” she says.

Rights Fights

The disconnect in which news sites block AI crawlers could also reflect an ideological divide on copyright. The New York Times is currently suing OpenAI for copyright infringement, arguing that the AI upstart’s data collection is illegal. Other leaders in mainstream media also view this scraping as theft. Condé Nast CEO Roger Lynch recently said at a Senate hearing that many AI tools have been built with “stolen goods.” (WIRED is owned by Condé Nast.) Right-wing media bosses have been largely absent from the debate. Perhaps they quietly allow data scraping because they endorse the argument that data scraping to build AI tools is protected by the fair use doctrine?

For a couple of the nine right-wing outlets contacted by WIRED to ask why they permitted AI scrapers, their responses pointed to a different, less ideological reason. The Washington Examiner did not respond to questions about its intentions but began blocking OpenAI’s GPTBot within 48 hours of WIRED’s request, suggesting that it may not have previously known about or prioritized the option to block web crawlers.

Meanwhile, the Daily Caller admitted that its permissiveness toward AI crawlers had been a simple mistake. “We do not endorse bots stealing our property. This must have been an oversight, but it's being fixed now,” says Daily Caller cofounder and publisher Neil Patel.

Right-wing media is influential, and notably savvy at leveraging social media platforms like Facebook to share articles. But outlets like the Washington Examiner and the Daily Caller are small and lean compared to establishment media behemoths like The New York Times, which have extensive technical teams.

Data journalist Ben Welsh keeps a running tally of news websites blocking AI crawlers from OpenAI, Google, and Common Crawl, the nonprofit whose web archive is widely used in AI training. He finds that approximately 53 percent of the 1,156 media publishers surveyed block at least one of those three bots. His sample is much larger than Originality AI’s and includes smaller, less popular news sites, and the pattern suggests that outlets with larger staffs and higher traffic are more likely to block AI bots, perhaps because they are better resourced or more technically informed.

At least one right-leaning news site is considering how it might leverage the way its mainstream competitors are trying to stonewall AI projects to counter perceived political biases. “Our legal terms prohibit scraping, and we are exploring new tools to protect our IP. That said, we are also exploring ways to help ensure AI doesn’t end up with all of the same biases as the establishment press,” Daily Wire spokesperson Jen Smith says. As of today, GPTBot and other AI bots were still free to scrape content from the Daily Wire.
 


WILL KNIGHT
BUSINESS

JAN 23, 2024 7:00 AM


This Chinese Startup Is Winning the Open Source AI Race​

Kai-Fu Lee, an AI expert and prominent investor who helped Google and Microsoft get established in China, says his new startup 01.AI will create the first “killer apps” of generative AI.

Kai-Fu Lee. PHOTOGRAPH: BRYAN VAN DER BEEK/BLOOMBERG/GETTY IMAGES

Meta shook up the race to build more powerful artificial intelligence last July by releasing Llama 2, an AI model similar to the one behind ChatGPT, for anyone to download and use. In November, a little-known startup from Beijing, 01.AI, released its own open source model that outperforms Llama 2 and scores near the top of many leaderboards used to compare the power of AI models.

Within a few days of its release, 01.AI’s model, Yi-34B, rocketed to the top spot on a ranking maintained by the startup Hugging Face, which compares the abilities of AI language models across various standard benchmarks. A few months on, modified versions of 01.AI’s model consistently score among the top models available to developers and companies on the Hugging Face list and other leaderboards. On Monday, the startup launched a “multimodal” AI model called Yi-VL-34B that can process images and discuss their contents.

OpenAI, Google, and most other AI companies tightly control their technology, but 01.AI is giving away its AI models in hopes of inspiring a loyal developer base that helps it hatch some killer AI apps. 01.AI, founded in June of last year, has raised $200 million in investment from Chinese ecommerce giant Alibaba and others and is valued at over $1 billion, according to PitchBook.
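In practice, “giving away” the model means the weights are downloadable from Hugging Face and usable with the transformers library. A minimal sketch follows, assuming the public repo id 01-ai/Yi-34B and hardware with enough GPU memory for a 34-billion-parameter model.

# Minimal sketch of running 01.AI's open weights with Hugging Face
# transformers. Assumes `pip install torch transformers accelerate`
# and a machine that can hold a 34B-parameter model.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "01-ai/Yi-34B"  # public repository id on Hugging Face
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

prompt = "The next killer app built on generative AI will be"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(output[0], skip_special_tokens=True))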

The startup’s founder and CEO is Kai-Fu Lee, a prominent investor who did pioneering artificial intelligence research before founding Microsoft’s Beijing lab and then leading Google’s Chinese business until 2009, a year before the company largely pulled out of the country. He says the creation of Yi-34B is the culmination of his life’s work trying to build more intelligent machines.

“This has been the vision of my whole career,” Lee says over Zoom from a handsomely decorated apartment in Beijing. “It's been too long that we've had to learn computers’ language—we really need systems that can understand our language, which is speech and text.” In Chinese, 01.AI is known as 零一万物 (Ling-Yi Wan-Wu), which means “zero-one, everything” and alludes to a passage from the Taoist text Tao Te Ching.

01.AI is one of China’s leading contenders in the AI race that was started by OpenAI and ChatGPT and has so far been dominated by US companies. Lee says his company aims to lead the next phase of this revolution by building some of the first “killer apps” on the capabilities of language models, apps he expects to earn 01.AI healthy revenues. “The apps that won the mobile era are ones that are mobile-first, like Uber, WeChat, Instagram, TikTok,” Lee says. “The next-gen productivity tools shouldn't look like Office anymore—Word, Excel, PowerPoint—that’s the wrong way to go.”

01.AI’s engineers are experimenting with different “AI-first” apps, Lee says, for office productivity, creativity, and social media. He says the plan is for them to become successful around the globe, in a similar way to how Chinese-backed social network TikTok and online retailer Temu are top apps with US consumers.

None of 01.AI’s apps have launched, but the startup’s open source language model has already won admirers in the West. “For many things, it’s the best model we have, even compared to 70-billion-parameter ones,” which might be expected to be twice as capable, says Jeremy Howard, an AI expert who recently founded Answer.AI, another new venture that will do both AI research and AI app development.

AI Pioneer

Lee has had a notable career in AI. After emigrating from Taiwan to the United States and attending high school in Oak Ridge, Tennessee, he studied computer science at Columbia and Carnegie Mellon universities, receiving his PhD for a thesis involving the development of a speech recognition system that was cutting edge for the time.

Lee joined Apple as a research scientist in 1990, moved to Silicon Graphics in 1996, then returned to China in 1998 to help establish Microsoft Research Asia—a now legendary Beijing lab that helped train countless prominent Chinese engineers and executives. In 2005, he became president of Google’s search business in China, leaving in 2009 to start his own investment firm, Sinovation Ventures, in China’s now thriving tech industry.

As the rise of the smartphone in China drove rapid growth in tech, Sinovation backed a number of successful Chinese AI startups, including Megvii, an image recognition firm, and TuSimple, a company working on autonomous trucking. Lee became a champion of the Chinese AI industry, traveling to the US to encourage Chinese grad students to consider returning home to build AI projects, and in 2018 publishing AI Superpowers, in which he argued that Chinese AI labs and companies would soon rival those in the US thanks to the country’s abundance of talent, data, and users. But Lee also frequently advocated for collaboration between the US and China.

The publication of AI Superpowers coincided with a growing realization in the West that Lee appeared to be correct that China’s tech industry was on track to rival—and perhaps even eclipse—that of the United States. Policymakers and pundits in Washington began talking about China’s goal of challenging US hegemony across the world, and talking up the risks that might bring.

That has posed challenges for anyone trying to build bridges between China and the US. In 2019, Sinovation Ventures shut down its office in Silicon Valley, citing the growing challenges involved in doing deals with US firms. In October of that year, the US government took direct action against China’s AI industry when it imposed sanctions on Megvii over government use of the company’s face recognition technology.

Rebuilding Bridges

With the release of 01.AI’s open source Yi-34B AI model, Lee is back to building bridges. A few months after Yi-34B was released, modified versions built by developers in the West began exceeding its performance on the Hugging Face model leaderboard. Some US and European companies are building their AI strategies on the Chinese model, which is proficient in both Mandarin and English.

“It’s a really good model that a lot of people are building on,” said Clément Delangue, CEO of Hugging Face, at a briefing in November shortly after 01.AI’s model was released.

Delangue said that open source language models are improving rapidly and can be better than OpenAI’s market-leading GPT-4 for some specialized tasks. But he noted that many of the best open source models have come from outside the US, saying that 01.AI could be positioned to benefit from innovations that spring up around its model. “US companies have become a little bit less open and transparent,” he said at the briefing. “But there’s this interesting dynamic with AI where the more a company releases open source, the more the ecosystem develops, and so the stronger they become at building AI.”

Meta’s Llama 2 is a rare example of a top open source model from a US company and is the social media giant’s challenge to OpenAI, Microsoft, Google, and other major tech rivals investing heavily in generative AI. Meta chose to release its AI language model under a license that allows commercial reuse, with some caveats.

Yi-34B and Llama 2 appear to have more in common than just being leading open source AI models. Not long after the Chinese model was released, some developers noticed that 01.AI’s code had previously included mentions of Meta’s model that were later removed. Richard Lin, 01.AI’s head of open source, later said that the company would revert the changes, and the company has credited Llama 2 for part of the architecture for Yi-34B. Like all leading language models, 01.AI’s is based on the “transformer” architecture first developed by Google researchers in 2017, and the Chinese company derived that component from Llama 2. Anita Huang, a spokeswoman for 01.AI, says a legal expert consulted by the company said that Yi-34B is not subject to Llama 2’s license. Meta did not respond to a request for comment.

Whatever the extent to which Yi-34B borrows from Llama 2, the Chinese model functions very differently because of the data it has been fed. “Yi shares Llama's architecture but its training is completely different—and significantly better,” says Eric Hartford, an AI researcher at Abacus.AI who follows open source AI projects. “They are completely different.”

The connection with Meta’s Llama 2 is an example of how, despite Lee’s confidence in China’s AI expertise, the country is currently following America’s lead in generative AI. Jeffrey Ding, an assistant professor at George Washington University who studies China’s AI scene, says that although Chinese researchers have released dozens of large language models, the industry as a whole still lags behind the US.

“Western companies gained a significant advantage in large language model development because they could leverage public releases to test out issues, get user feedback, and build interest around new models,” he says. Ding and others have argued that Chinese AI companies face stronger regulatory and economic headwinds than their US counterparts.

Speaking at the World Economic Forum in Davos last week, Lee argued—perhaps hoping the message would travel back home—that the open approach would be crucial for any country to take full advantage of AI.

“One of the issues with one or a few companies having all the most power and dominating the models is that it creates tremendous inequality, and not just with people who are less wealthy and less wealthy countries, but also professor researchers, students, entrepreneurs, hobbyists,” Lee said. “If there were not open source, what would they do to learn; because they might be the next creator, inventor, or developer of applications.”

If he’s right, 01.AI’s technology—and applications built on top of it—will put Chinese technology at the heart of the next phase of the tech industry.

Updated 1-23-2024, 8:10 pm EST: The Chinese name of 01.AI is 零一万物, not 零一万五.






 


WILL KNIGHT
BUSINESS

AUG 24, 2023 12:00 PM


The Myth of ‘Open Source’ AI​

A new analysis shows that “open source” AI tools like Llama 2 are still controlled by big tech companies in a number of ways.

Meta headquarters. PHOTOGRAPH: JUSTIN SULLIVAN/GETTY IMAGES



ChatGPT made it possible for anyone to play with powerful artificial intelligence, but the inner workings of the world-famous chatbot remain a closely guarded secret.

In recent months, however, efforts to make AI more “open” seem to have gained momentum. In March, someone leaked a model from Meta, called Llama, which gave outsiders access to its underlying code as well as the “weights” that determine how it behaves. Then, this July, Meta chose to make an even more powerful model, called Llama 2, available for anyone to download, modify, and reuse. Meta’s models have since become an extremely popular foundation for many companies, researchers, and hobbyists building tools and applications with ChatGPT-like capabilities.

“We have a broad range of supporters around the world who believe in our open approach to today’s AI ... researchers committed to doing research with the model, and people across tech, academia, and policy who see the benefits of Llama and an open platform as we do,” Meta said when announcing Llama 2. This morning, Meta released another model, Code Llama, that is fine-tuned for coding.

It might seem as if the open source approach, which has democratized access to software, ensured transparency, and improved security for decades, is now poised to have a similar impact on AI.

Not so fast, says a group behind a research paper that examines the reality of Llama 2 and other AI models that are described, in some way or another, as “open.” The researchers, from Carnegie Mellon University, the AI Now Institute, and the Signal Foundation, say that models branded “open” may come with catches.

Llama 2 is free to download, modify, and deploy, but it is not covered by a conventional open source license. Meta’s license prohibits using Llama 2 to train other language models, and it requires a special license if a developer deploys it in an app or service with more than 700 million daily users.

This level of control means that Llama 2 may provide significant technical and strategic benefits to Meta—for example, by allowing the company to benefit from useful tweaks made by outside developers when it uses the model in its own apps.


Models that are released under normal open source licenses, like GPT-Neo from the nonprofit EleutherAI, are more fully open, the researchers say. But it is difficult for such projects to get on an equal footing.

First, the data required to train advanced models is often kept secret. Second, the software frameworks required to build such models are often controlled by large corporations; the two most popular, TensorFlow and PyTorch, are maintained by Google and Meta, respectively. Third, the computing power required to train a large model is beyond the reach of any normal developer or company, typically requiring tens or hundreds of millions of dollars for a single training run. And finally, the human labor required to finesse and improve these models is a resource largely available only to big companies with deep pockets.
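To put rough numbers on that last compute barrier, here is a back-of-envelope estimate; the GPU-hour count and hourly rate are assumptions chosen for illustration, not reported figures for any particular model.

# Back-of-envelope training cost. Both inputs are assumptions chosen
# for illustration; published estimates vary widely by model and year.
gpu_hours = 1.7e6        # assumed GPU-hours for one ~70B-parameter run
usd_per_gpu_hour = 2.0   # assumed cloud rate for a data-center GPU

single_run = gpu_hours * usd_per_gpu_hour
print(f"One training run: ~${single_run / 1e6:.1f}M")

# Frontier models are widely estimated to need at least 10x the
# compute, before failed runs, staff, and data-labeling costs.
print(f"Frontier-scale run: ~${10 * single_run / 1e6:.0f}M and up")

Even under generous assumptions, repeated runs at frontier scale land in the tens to hundreds of millions of dollars, which is the moat the researchers describe.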

The way things are headed, one of the most important technologies in decades could end up enriching and empowering just a handful of companies, including OpenAI, Microsoft, Meta, and Google. If AI really is such a world-changing technology, then the greatest benefits might be felt if it were made more widely available and accessible.

“What our analysis points to is that openness not only doesn’t serve to ‘democratize’ AI,” Meredith Whittaker, president of Signal and one of the researchers behind the paper, tells me. “Indeed, we show that companies and institutions can and have leveraged ‘open’ technologies to entrench and expand centralized power.”

Whittaker adds that the myth of openness should be a factor in much-needed AI regulations. “We do badly need meaningful alternatives to technology defined and dominated by large, monopolistic corporations—especially as AI systems are integrated into many highly sensitive domains with particular public impact: in health care, finance, education, and the workplace,” she says. “Creating the conditions to make such alternatives possible is a project that can coexist with, and even be supported by, regulatory movements such as antitrust reforms.”

Beyond checking the power of big companies, making AI more open could be crucial to unlock the technology’s best potential—and avoid its worst tendencies.

If we want to understand how capable the most advanced AI models are, and mitigate risks that could come with deployment and further progress, it might be better to make them open to the world’s scientists.

Just as security through obscurity never really guarantees that code will run safely, guarding the workings of powerful AI models may not be the smartest way to proceed.
 


OpenAI, Nvidia, NSF, NASA And More Partner On Federal Program To Increase Access To AI Resources​

The pilot program, called the National Artificial Intelligence Research Resource, follows President Biden’s executive order mandating broader access to AI infrastructure.

Richard Nieva
Forbes Staff



Jan 24, 2024, 11:00am EST


Sethuraman Panchanathan, director of the National Science Foundation, which is leading the program.

AFP VIA GETTY IMAGES

The National Science Foundation on Wednesday launched a sprawling new AI infrastructure program aimed at widening access to computing resources for researchers and schools across the U.S.—not just those at deep-pocketed tech companies or elite universities.

The pilot program, called the National Artificial Intelligence Research Resource, or NAIRR, invites researchers to apply for access to AI research and development tools including computing power, models and datasets, donated by several big companies and government agencies. Marquee Silicon Valley players participating include OpenAI, Meta, Nvidia, and Microsoft, while public agencies involved include NASA, the Department of Defense, and the National Institutes of Health. In all, 19 companies and 11 agencies are participating.

Microsoft and Nvidia will donate $20 million and $30 million, respectively, in credits to their cloud and infrastructure platforms and other resources. Meta will help support NAIRR researchers working with the social giant’s Llama 2 language model. OpenAI will donate up to $1 million in model access credits for projects related to AI safety or societal impacts. NASA will provide datasets and hands-on tutorials to researchers. The Pentagon will help to manage and allocate computing resources, while the NIH will focus on healthcare-related projects.

Katie Antypas, the NSF’s director of the office of advanced cyberinfrastructure, said the program is crucial to democratizing research. “The resources needed to even begin participating in the ecosystem have become increasingly concentrated and inaccessible to many, many communities,” she said during a briefing with reporters on Tuesday. “This pilot is the first step to bridging that gap.”

NAIRR’s debut follows President Joe Biden’s executive order in October, in which he mandated the creation of a program that would help spread access to AI infrastructure to more people and organizations. The pilot will focus on four areas: open research, privacy and security, interoperability, and getting AI access into more classrooms. For example, a community college researcher, or an educator at a school serving a rural or minority community, might apply for the resources, Antypas said.

The effort comes as the AI frenzy continues. But AI research has traditionally come from only the most well-off institutions or big tech companies, with enough money to power the vast computing hardware and software needed to train AI models.

“We’re going to move fast and build things,” Sethuraman Panchanathan, director of the NSF, said at the briefing. “In order to do this, we need clearly accessible infrastructure—the other AI. One that is available to inspire, motivate, and energize talent and ideas all across our nation.”
 


Google’s new Gemini-powered conversational tool helps advertisers quickly build Search campaigns​

Aisha Malik @aiishamalik1 / 1:21 PM EST•January 23, 2024

Google logo on a building. Image Credits: Alex Tai/SOPA Images/LightRocket/Getty Images

Google announced today that Gemini, its family of multimodal large language models, now powers the conversational experience within the Google Ads platform. With this new update, it will be easier for advertisers to quickly build and scale Search ad campaigns.

The conversational experience is designed to help build Search campaigns through a chat-based tool. The tool uses your website URL to create Search campaigns by generating relevant ad content, including assets and keywords. It suggests images tailored to your campaign using generative AI and images from your website. Google notes that all of the images created with generative AI will be identified as such.

Advertisers approve the images and text before the campaign goes live.

Beta access to the conversational experience in Google Ads is now available to all English language advertisers in the U.S. and U.K. Access will start opening up globally to all English language advertisers over the next few weeks. Google plans to open up access in additional languages in the upcoming months.

Google’s conversational chat feature for advertisers. Image Credits: Google

“Over the last few months, we’ve been testing the conversational experience with a small group of advertisers,” wrote Shashi Thakur, Google’s VP and GM of Google Ads, in a blog post. “We observed that it helps them build higher quality Search campaigns with less effort.”

The new tool will join Google’s other AI-powered tools for advertisers. A few months ago, Google introduced a suite of generative AI product imagery tools for advertisers in the U.S. called “Product Studio.” The tools allow merchants and advertisers to use text-to-image AI capabilities to create new product imagery for free by typing in a prompt describing what they would like to see. The tools also allow advertisers to improve low-quality images and remove distracting backgrounds.

Today’s announcement comes as Google has been pushing to integrate AI across its products. For instance, the company revealed today that it’s adding three new AI-powered features to Chrome, including a way to organize your tabs, customize your theme, and get help when writing things like online reviews or forum posts on the web.
 


Alphabet cuts ties with Australian AI firm that helped train Bard and Google Search​

PUBLISHED TUE, JAN 23 2024 12:25 PM EST
UPDATED TUE, JAN 23 2024 2:41 PM EST


Hayden Field @HAYDENFIELD

KEY POINTS
  • Alphabet has cut all contractual ties with Appen, the artificial intelligence data firm that helped train Google’s chatbot Bard, Google Search results and other AI products.
  • Alphabet contracts account for roughly one-third of Appen’s business revenue.
  • Appen has helped train AI models for a star-studded list of tech behemoths, including Microsoft, Nvidia, Meta, Apple, Adobe, Google and Amazon.

Google and Alphabet CEO Sundar Pichai departs federal court on October 30, 2023 in Washington, DC. Pichai testified on Monday to defend his company in the largest antitrust case since the 1990s.
Drew Angerer | Getty Images News | Getty Images

Alphabet has cut contractual ties with Appen, the artificial intelligence data firm that helped train Google’s chatbot Bard, Google Search results and other AI products.

After a “strategic review process,” Alphabet notified Appen over the weekend of the termination, which will go into effect March 19, according to a filing from Appen. The company said it had “no prior knowledge of Google’s decision to terminate the contract.”

Alphabet accounted for roughly one-third of Appen’s revenue, meaning the decision to end the relationship will impact “at least two thousand subcontracted Alphabet workers,” according to a statement Monday from the Alphabet Workers Union.

Appen, based in Australia, has helped train AI models for a star-studded list of tech behemoths. Five customers — Microsoft, Apple, Meta, Google and Amazon — have in the past accounted for 80% of Appen’s revenue. Appen has a platform of about 1 million freelance workers in more than 170 countries.

In 2023, revenue from work with Alphabet totaled $82.8 million of Appen’s $273 million in sales for the year, according to Monday’s filing.

Despite Appen’s enviable client list and its nearly 30-year history, the company has struggled in recent years with a loss of customers, a string of executive departures and plummeting financials — even as generative AI tools increased demand for training data. Revenue dropped 30% in 2023, after declining 13% a year earlier, which the company attributed in part to “challenging external operating and macro conditions.”

In August 2020, Appen’s shares peaked at AU$42.44 ($27.08) on the Australian Securities Exchange, sending its market cap to the equivalent of $4.3 billion. Now, the stock is trading at around 28 Australian cents, down more than 99% since its peak.

Former employees, who asked not to be named for fear of retaliation, told CNBC in September that the company’s current struggle to pivot to generative AI reflects years of weak quality controls and a disjointed organizational structure.

Appen’s past work for tech companies has been on projects like evaluating the relevance of search results, helping AI assistants understand requests in different accents, categorizing e-commerce images using AI and building out map locations of electric vehicle charging stations, according to public information and interviews conducted by CNBC.

Appen has also touted its work on search relevance for Adobe and on translation services for Microsoft, as well as in providing training data for lidar companies, security applications and automotive manufacturers.

But large language models of today operate differently. The underlying LLMs behind OpenAI’s ChatGPT and Google’s Bard are scouring the digital universe to provide sophisticated answers and advanced images in response to simple text queries. Companies are spending far more on processors from Nvidia and less on Appen.

Google and Appen have had conflicts in the past, namely a dispute about wages. In 2019, Google said its contractors would need to pay their workers $15 an hour. Appen didn’t meet that requirement, according to public letters written by some workers.

In January 2023, after months of organizing, raises went into effect for Appen freelancers working on the Bard chatbot and other Google products. The rates went up to between $14 and $14.50 per hour.

But labor issues persisted. In June, Appen faced charges from the U.S. National Labor Relations Board after allegedly firing six freelancers who spoke out publicly about frustrations with workplace conditions. The workers were later reinstated.

Appen wrote in Monday’s filing that it will focus on managing costs, turning the business around and providing customers with quality AI data.

“Appen will immediately adjust its strategic priorities following the notification of the Google contract termination and provide further details in its FY23 full year results on 27 February 2024,” the company wrote.
 


Google Chrome gains AI features, including a writing helper, theme creator, and tab organizer​

Sarah Perez @sarahpereztc / 12:00 PM EST•January 23, 2024

Image Credits: Google

Google’s Chrome web browser is getting an infusion of AI technology in the latest release. The company announced today it’s soon adding a trio of new AI-powered features to Chrome for Mac and Windows, including a way to smartly organize your tabs, customize your theme, and get help when writing things on the web — like forum posts, online reviews, and more.

The latter is similar to a feature already available to Google’s experimental AI search experience, SGE (Search Generative Experience), which allows users to get help drafting things like emails in different tones, like more formal or more casual, or in different lengths.

With the built-in writing helper in Chrome, Google suggests users could write business reviews, “craft a friendly RSVP to a party,” or make a more formal inquiry about a vacation rental, among other things, including writing posts in public spaces, like online forum sites.

Chrome’s “help me write” feature. Image Credits: Google

The still-experimental feature will be accessible in next month’s Chrome release by right-clicking on a text box or field on the web and then choosing “help me write.” To get started, you’ll first write a few words and then Google’s AI will jump in to help.

In addition to the writing assistant, AI can also be used to help organize tab groups and personalize your browser.

Chrome’s Tab Groups feature allows users who keep many tabs open to manage them by organizing them into groups. However, curating them can be a manual process, the company explains. With the new Tab Organizer, Chrome will automatically suggest and create groups based on the tabs you already have open. This feature will be available by right-clicking on a tab and selecting “Organize Similar Tabs.” Chrome will also suggest names and emojis for the tab groups it creates to make them easier to find. This feature is intended to assist when users are online shopping, researching, trip planning, or doing other tasks that tend to leave a lot of open tabs.

Chrome’s Tab Organizer. Image Credits: Google

A final addition mirrors the new generative AI wallpaper experience that recently arrived on Android 14 and Pixel devices. Now Google will use the same text-to-image diffusion model to allow users to generate custom themes for their Chrome browser. The feature allows you to generate these themes by subject, mood, visual style, and color by selecting the new “Create with AI” option after opening the “Customize Chrome” side panel and clicking “Change theme.” Before, Chrome offered a variety of colorful but simple themes to choose from alongside those from artists, but this feature will allow users to expand beyond the built-in choices to create a theme that better matches their own current vibe.

AI-generated Chrome themes. Image Credits: Google

Though a busy theme could be distracting, the feature at least allows users who don’t have an Android phone to test-drive Google’s generative AI for personalization, even if they end up returning to a more basic theme for day-to-day use.

While the drafting feature won’t arrive until next month’s Chrome release, Google says that the other features, like the tab organizer and AI theme creator, will roll out over the next few days in the U.S. on both Mac and Windows with the current Chrome release (M121). To access these features, you’ll sign into Chrome, select “Settings” from the three-dot menu, and then navigate to the “Experimental AI” page. Because the features are experimental, they won’t ship to enterprise and educational customers at this time, the company notes.

The features join other AI-powered and machine learning (ML) tools already available in Chrome, like its ability to caption audio and video, protect users from malicious sites via Android’s Safe Browsing feature in Chrome, silence permission prompts, and summarize web pages via the “SGE while browsing” feature.

Google says that Chrome will be updated with more AI and ML features in the coming year, including through integrations with its new AI model, Gemini, which will be used to help make web browsing easier.
 