bnew





In our benchmarking set of 30 Olympiad geometry problems (IMO-AG-30), compiled from the Olympiads from 2000 to 2022, AlphaGeometry solved 25 problems under competition time limits. This is approaching the average score of human gold medalists on these same problems. The previous state-of-the-art approach, known as “Wu’s method”, solved 10.

AI systems often struggle with complex problems in geometry and mathematics because they lack reasoning skills and training data. AlphaGeometry combines the predictive power of a neural language model with a rule-bound deduction engine, and the two work in tandem to find solutions. And by developing a method to generate a vast pool of synthetic training data (100 million unique examples), we can train AlphaGeometry without any human demonstrations, sidestepping the data bottleneck.
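
The neuro-symbolic loop is easier to see as code. Here is a minimal sketch of the alternation described above, in Python with every name hypothetical: a symbolic engine exhausts its rule-based deductions, and whenever it stalls short of the goal, the language model proposes one auxiliary construction and deduction resumes.

Code:
# Hypothetical sketch of AlphaGeometry's loop; `lm` and `engine` stand in
# for the neural language model and the rule-bound deduction engine.
def solve(premises, goal, lm, engine, max_constructions=16):
    facts = set(premises)
    for _ in range(max_constructions):
        facts |= engine.deduce_closure(facts)      # exhaust symbolic deduction
        if goal in facts:
            return engine.extract_proof(facts, goal)
        # Deduction is stuck: ask the language model for one auxiliary
        # construction (e.g. a new point or line) and try again.
        facts.add(lm.propose_construction(facts, goal))
    return None  # unsolved within the construction budget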

With AlphaGeometry, we demonstrate AI’s growing ability to reason logically, and to discover and verify new knowledge. Solving Olympiad-level geometry problems is an important milestone in developing deep mathematical reasoning on the path towards more advanced and general AI systems. We are open-sourcing the AlphaGeometry code and model, and hope that together with other tools and approaches in synthetic data generation and training, it helps open up new possibilities across mathematics, science, and AI.

It makes perfect sense to me now that researchers in AI are trying their hands on the IMO geometry problems first because finding solutions for them works a little bit like chess in the sense that we have a rather small number of sensible moves at every step. But I still find it stunning that they could make it work. It's an impressive achievement.
NGÔ BẢO CHÂU, FIELDS MEDALIST AND IMO GOLD MEDALIST


:gladbron:

this is the sort of breakthrough needed for AI to engineer things and create solutions accurately.
 

bnew


Congress Is Trying to Stop AI Nudes and Deepfake Scams Because Celebrities Are Mad​

Lawmakers are introducing new bills to protect famous actors and musicians from ‘AI Fraud’—and maybe the rest of us.


By Janus Rose

NEW YORK, US

January 16, 2024, 11:12am


IMAGE: TIKTOK

If you’ve been on TikTok lately, you may have noticed weird videos of celebrities promoting extremely shady products, such as a robotic-sounding Taylor Swift promising viewers a free cookware set. All of these videos are scams created with generative AI—the latest example of how the technology is being used to create disturbing virtual clones of people without their consent.

Needless to say, this kind of thing has pissed off a lot of famous people. And now, Congress is proposing new legislation that aims to combat AI deepfakes—specifically when it comes to things like fake celebrity endorsements and non-consensual AI-generated nudes, which have become a problem online and in high schools. Despite the surging popularity of websites and apps designed to generate deepfakes, there’s no comprehensive law on the books banning the creation of such AI images.

The new bill, called the No AI FRAUD Act and introduced by Rep. María Elvira Salazar (R-FL) and Rep. Madeleine Dean (D-PA), would establish legal definitions for “likeness and voice rights,” effectively banning the use of AI deepfakes to nonconsensually mimic another person, living or dead. The draft bill proclaims that “every individual has a property right in their own likeness and voice,” and cites several recent incidents where people have been turned into weird AI robots. It specifically mentions recent viral videos that featured AI-generated songs mimicking the voices of pop artists like Justin Bieber, Bad Bunny, Drake, and The Weeknd.



The bill also specifically targets AI deepfake porn, saying that “any digital depiction or digital voice replica which includes child sexual abuse material, is sexually explicit, or includes intimate images” meets the definition of harm under the act.

The proposed Act is a companion to a similar bill in the Senate, called the Nurture Originals, Foster Art, and Keep Entertainment Safe Act (NO FAKES Act), which was introduced last October in the aftermath of the viral deepfaked Drake song. The new bill was also introduced the same day as another measure proposed by lawmakers in Tennessee, called the Ensuring Likeness Voice and Image Security Act (ELVIS Act).

Given that these bills seem to be a response to celebrities getting mad, either in whole or in part, the big question is whether or not they would in practice protect normal people—and not just the intellectual property rights of pop stars with multi-million dollar record deals.

“It’s really drafted with an eye toward the property rights that celebrities and recording artists have in their likeness and voices,” Carrie Goldberg, an attorney who specializes in deepfakes and other internet-based harassment, told Motherboard. “However, our legal system treats the intellectual property of celebrities differently than those of people not in the public eye.”

The most common example is paparazzi photos, Goldberg said. The law allows some redress for celebrities when their photos are taken without permission and used for commercial gain. But for the average person, the rights to their photos belong solely to the person who took them, and there’s not much they can do about someone reproducing their image for reasons other than profit—unless they have the money to spend on an expensive and often lengthy legal process.

“For normal people, when their image is exploited, it’s not usually for commercial gain but instead to embarrass or harass them; and the wrongdoer in these situations is rarely somebody who has the resources to make a lawsuit worthwhile for the victim,” said Goldberg.

The new bill states that everyone has a right to control their own voice and likeness against deepfakes, but the provisions for non-famous people depend heavily on the victim proving harm. Specifically, that means proving that the deepfake has resulted in “physical or financial injury,” caused “severe emotional distress,” or is sexually explicit in nature.

Of course, all of this is an attempt to regulate a symptom of a larger problem, which is that tech companies are building massive AI systems with data scraped from the internet and no robust mitigations against the harm they inevitably cause. In an ongoing lawsuit against ChatGPT creator OpenAI, the company recently argued that it shouldn’t be punished for training its AI models with illegal and copyrighted material because it’s “impossible” to create AI systems without doing so.

But the nature of black box AI systems built by companies like OpenAI, Microsoft, and Meta virtually guarantees that these bad things will happen. Recently, researchers found over 3,000 images of child sexual abuse material in a massive dataset used to train almost every major AI system on the market. Companies are also struggling to ensure that their generative AI systems will filter out illegal content, and deepfake porn has been found at the top of Google and Bing image search results. A major issue is that there are numerous apps made by smaller companies or individuals that are designed solely to create non-consensual AI nudes, which advertise their services on major social media platforms and are available on app stores.

Ultimately, says Goldberg, these problems won’t be fully addressed until the companies building these AI systems are held responsible.

“What our society really needs is to be attacking AI and deepfakes on a systemic level and going after the malicious products that are available on mainstream places like the AppStore and GooglePlay that are on the market solely to manipulate images,” said Goldberg. “We need to pressure search engines to not guide people to these products or promote sites that publish these images and we need to require that they make content removal simple for victims.”

 

bnew


Samsung’s latest Galaxy phones offer live translation over phone calls, texts​

Sarah Perez @sarahpereztc / 1:33 PM EST • January 17, 2024


Image Credits: Samsung

At Samsung’s launch event today, where the company introduced its AI-powered Galaxy S24 line of smartphones, it also showed how AI could improve the calling experience with a new Live Translation feature. The feature, one of several translation features coming to the new smartphones, allows customers to make or receive a call in a language they don’t speak and then receive a live translation of the call both audibly and on the screen.

The company demoed the AI technology onstage at today’s Unpacked event, showing how someone could use the new feature to make a restaurant reservation, even though they didn’t speak Spanish. Within the native Call app, the conversation was translated immediately after they spoke to the other person on the phone. That person could then respond in their own language — in this case, Spanish, which was then translated again into the caller’s language, English.

“It’s like having your own personal interpreter on your calls,” noted Samsung VP of Product Management Drew Blackard.

The live translation feature will launch with support for audio and text translations for up to 13 languages, Samsung said. In addition, all the translation happens on-device, so Samsung owners’ phone calls remain private.

The feature will also remember your language settings and which language each of your contacts speaks, so you don’t have to make updates before your conversation. This could be useful for people making a lot of international calls, as well as for frequent travelers, the company suggested.
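
Samsung hasn’t published implementation details, but the behavior described maps onto a standard speech-translation pipeline: on-device speech recognition, then machine translation, then text-to-speech, run once per speaker turn. A minimal sketch, with every component name hypothetical:

Code:
# Hypothetical on-device call-translation loop; `asr`, `translator`, and
# `tts` stand in for local models and are not real Samsung APIs.
def translate_call(utterances, asr, translator, tts, src="es", dst="en"):
    for audio in utterances:                  # one speaker turn at a time
        text = asr.transcribe(audio, language=src)
        translated = translator.translate(text, source=src, target=dst)
        yield translated                      # shown on screen as a caption
        tts.speak(translated, language=dst)   # and spoken aloud to the caller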
 

bnew


DeepMind co-founder Mustafa Suleyman warns AI is a ‘fundamentally labor replacing’ tool over the long term​

BY WILL DANIEL

January 17, 2024 at 5:27 PM EST


Mustafa Suleyman, co-founder at DeepMind Technologies, pauses during Bloomberg's Sooner Than You Think technology conference in Paris, France, on Wednesday, May 23, 2018.

PHOTOGRAPHER: MARLENE AWAAD/BLOOMBERG VIA GETTY IMAGES

DeepMind co-founder Mustafa Suleyman is a heavyweight in the AI space. The Oxford dropout worked as a negotiator for the United Nations and the Dutch government early in his career, but then pivoted to AI and founded DeepMind in 2010 alongside Demis Hassabis and Shane Legg.

The machine learning lab grew like a weed under Suleyman, with the backing of Peter Thiel’s Founders Fund, before selling to Google for £400 million in 2014. Suleyman then took on several roles at DeepMind before stepping down five years later.
Now, the veteran AI founder is working on a new company called Inflection AI, which offers personalized AI assistants. And while Suleyman remains an avid supporter of AI, he expressed concerns about the industry’s possible negative effects—in particular on workers.

“In the long term…we have to think very hard about how we integrate these tools, because left completely to the market and to their own devices, these are fundamentally labor replacing tools,” Suleyman told CNBC on Wednesday at the World Economic Forum’s annual gathering in Davos, Switzerland.

AI tools do two main things fundamentally differently, the DeepMind co-founder said. First, they make existing operations more efficient, which can lead to huge savings for businesses, but often by replacing the humans who did those jobs. Second, they allow entirely new operations and processes to be created, which can lead to job creation. Both forces will hit the labor market in the coming years, with a serious but unpredictable impact.

While Suleyman expects AI to “augment us and make us smarter and more productive for the next couple decades,” over the long term, its impact is still “an open question.”

Experts have been debating whether AI will replace human workers for over a decade. Some researchers argue that AI will lead to a wave of unemployment and economic disruption as it takes jobs worldwide, but others believe that the technology will create new job opportunities and spur economic growth by boosting worker productivity.

There’s been a steady stream of academic papers on the topic. A 2013 study by Carl Benedikt Frey and Michael Osborne, for example, estimated that 47% of US jobs are at high risk of being automated in the coming decades. And a July McKinsey study found that nearly 12 million Americans will need to switch jobs by 2030 as AI takes over their roles.

On the other hand, some researchers have found that AI could boost economic growth and offer new opportunities for workers. A 2022 study from the United Nations’ International Labour Organization (ILO) found that most AI systems will complement workers rather than replace them.

Still, Suleyman isn’t the only big name in the AI industry to warn about the scary implications of AI for the labor market.

In a Jan. 10 Wired article, MIT professor Daron Acemoglu predicted that AI would disappoint everyone in 2024, proving itself merely a form of “so-so automation” that will take jobs from workers but fail to deliver the expected monumental improvements to productivity.

Researchers have yet to solve the problem of hallucinations—where generative AI systems exaggerate or fabricate facts—and that could lead to a whole host of issues in coming years, the noted economist argued, adding that there’s “no quick fix” to the problem.

“Generative AI is an impressive technology, and it provides tremendous opportunities for improving productivity in a number of tasks. But because the hype has gone so far ahead of reality, the setbacks of the technology in 2024 will be more memorable,” Acemoglu wrote.

For Suleyman, unlike Acemoglu, it’s not that the hype surrounding AI is unfounded; to him, AI is definitely a “truly transformational technology.”

“Everything that is of value in our world has been created by our intelligence, our ability to reason over information and make predictions. These tools do exactly that, so it’s going to be very fundamental,” he explained Wednesday.

Suleyman instead fears that AI will be so good at replicating humans that it will eventually displace workers, and without regulation, that could lead to serious economic consequences.

That being said, like Acemoglu, Suleyman argued that AI’s proponents might be getting ahead of themselves with their optimistic near-term outlooks for rising productivity. The true impact of AI, from its ability to birth revolutionary technologies to its potential to stoke epic job losses, likely won’t hit for years.

“AI is truly one of the most incredible technologies of our lifetimes, but at the same time, it feels like expectations about its delivery are higher than they’ve ever been and maybe we have hit a kind of peak hype for this moment,” he explained.
 

bnew


META

Mark Zuckerberg’s new goal is creating artificial general intelligence​

And he wants Meta to open source it. Eventually. Maybe.​


By Alex Heath, a deputy editor at The Verge and author of the Command Line newsletter. He’s covered the tech industry for over a decade at The Information and other outlets.

Jan 18, 2024, 12:59 PM EST




Cath Virginia / The Verge | Photos by Getty Images

Fueling the generative AI craze is a belief that the tech industry is on a path to achieving superhuman, god-like intelligence.

OpenAI’s stated mission is to create this artificial general intelligence, or AGI. Demis Hassabis, the leader of Google’s AI efforts, has the same goal.

Now, Meta CEO Mark Zuckerberg is entering the race. While he doesn’t have a timeline for when AGI will be reached, or even an exact definition for it, he wants to build it. At the same time, he’s shaking things up by moving Meta’s AI research group, FAIR, to the same part of the company as the team building generative AI products across Meta’s apps. The goal is for Meta’s AI breakthroughs to more directly reach its billions of users.

“We’ve come to this view that, in order to build the products that we want to build, we need to build for general intelligence,” Zuckerberg tells me in an exclusive interview. “I think that’s important to convey because a lot of the best researchers want to work on the more ambitious problems.”

Here, Zuckerberg is saying the quiet part aloud. The battle for AI talent has never been more fierce, with every company in the space vying for an extremely small pool of researchers and engineers. Those with the needed expertise can command eye-popping compensation packages to the tune of over $1 million a year. CEOs like Zuckerberg are routinely pulled in to try to win over a key recruit or keep a researcher from defecting to a competitor.

“We’re used to there being pretty intense talent wars,” he says. “But there are different dynamics here with multiple companies going for the same profile, [and] a lot of VCs and folks throwing money at different projects, making it easy for people to start different things externally.”

After talent, the scarcest resource in the AI field is the computing power needed to train and run large models. On this topic, Zuckerberg is ready to flex. He tells me that, by the end of this year, Meta will own more than 340,000 of Nvidia’s H100 GPUs — the industry’s chip of choice for building generative AI.


External research has pegged Meta’s H100 shipments for 2023 at 150,000, a number that is tied only with Microsoft’s shipments and at least three times larger than everyone else’s. When its Nvidia A100s and other AI chips are accounted for, Meta will have a stockpile of almost 600,000 GPUs by the end of 2024, according to Zuckerberg.

“We have built up the capacity to do this at a scale that may be larger than any other individual company,” he says. “I think a lot of people may not appreciate that.”

The realization

No one working on AI, including Zuckerberg, seems to have a clear definition for AGI or an idea of when it will arrive.

“I don’t have a one-sentence, pithy definition,” he tells me. “You can quibble about if general intelligence is akin to human level intelligence, or is it like human-plus, or is it some far-future super intelligence. But to me, the important part is actually the breadth of it, which is that intelligence has all these different capabilities where you have to be able to reason and have intuition.”


He sees its eventual arrival as being a gradual process, rather than a single moment. “I’m not actually that sure that some specific threshold will feel that profound.”

As Zuckerberg explains it, Meta’s new, broader focus on AGI was influenced by the release of Llama 2, its latest large language model, last year. The company didn’t think that code generation made sense for how people would use an LLM in Meta’s apps, but it’s still an important skill to develop for building smarter AI, so Meta built it anyway.

“One hypothesis was that coding isn’t that important because it’s not like a lot of people are going to ask coding questions in WhatsApp,” he says. “It turns out that coding is actually really important structurally for having the LLMs be able to understand the rigor and hierarchical structure of knowledge, and just generally have more of an intuitive sense of logic.”


Meta is training Llama 3 now, and it will have code-generating capabilities, he says. As with Google’s new Gemini model, another focus is more advanced reasoning and planning abilities.

“Llama 2 wasn’t an industry-leading model, but it was the best open-source model,” he says. “With Llama 3 and beyond, our ambition is to build things that are at the state of the art and eventually the leading models in the industry.”

Open versus closed

The question of who gets to eventually control AGI is a hotly debated one, as the near implosion of OpenAI recently showed the world.

Zuckerberg wields total power at Meta thanks to his voting control over the company’s stock. That puts him in a uniquely powerful position that could be dangerously amplified if AGI is ever achieved. His answer is the playbook that Meta has followed so far for Llama, which can — at least for most use cases — be considered open source.

“I tend to think that one of the bigger challenges here will be that if you build something that’s really valuable, then it ends up getting very concentrated,” Zuckerberg says. “Whereas, if you make it more open, then that addresses a large class of issues that might come about from unequal access to opportunity and value. So that’s a big part of the whole open-source vision.”

Without naming names, he contrasts Meta’s approach with that of OpenAI, which began with the intention of open sourcing its models but has become increasingly less transparent. “There were all these companies that used to be open, used to publish all their work, and used to talk about how they were going to open source all their work. I think you see the dynamic of people just realizing, ‘Hey, this is going to be a really valuable thing, let’s not share it.’”

While Sam Altman and others espouse the safety benefits of a more closed approach to AI development, Zuckerberg sees a shrewd business play. Meanwhile, the models that have been deployed so far have yet to cause catastrophic damage, he argues.

“The biggest companies that started off with the biggest leads are also, in a lot of cases, the ones calling the most for saying you need to put in place all these guardrails on how everyone else builds AI,” he tells me. “I’m sure some of them are legitimately concerned about safety, but it’s a hell of a thing how much it lines up with the strategy.”


Zuckerberg has his own motivations, of course. The end result of his open vision for AI is still a concentration of power, just in a different shape. Meta already has more users than almost any company on Earth and a wildly profitable social media business. AI features can arguably make his platforms even stickier and more useful. And if Meta can effectively standardize the development of AI by releasing its models openly, its influence over the ecosystem will only grow.

There’s another wrinkle: If AGI is ever achieved at Meta, the call to open source it or not is ultimately Zuckerberg’s. He’s not ready to commit either way.

“For as long as it makes sense and is the safe and responsible thing to do, then I think we will generally want to lean towards open source,” he says. “Obviously, you don’t want to be locked into doing something because you said you would.”

Don’t call it a pivot

In the broader context of Meta, the timing of Zuckerberg’s new AGI push is a bit awkward.

It has been only two years since he changed the company name to focus on the metaverse. Meta’s latest smart glasses with Ray-Ban are showing early traction, but full-fledged AR glasses feel increasingly further out. Apple, meanwhile, has recently validated his bet on headsets with the launch of the Vision Pro, even though VR is still a niche industry.

Zuckerberg, of course, disagrees with the characterization of his focus on AI being a pivot.

“I don’t know how to more unequivocally state that we’re continuing to focus on Reality Labs and the metaverse,” he tells me, pointing to the fact that Meta is still spending north of $15 billion a year on the initiative. Its Ray-Ban smart glasses recently added a visual AI assistant that can identify objects and translate languages. He sees generative AI playing a more critical role in Meta’s hardware efforts going forward.


He sees a future in which virtual worlds are generated by AI and filled with AI characters that accompany real people. He says a new platform is coming this year to let anyone create their own AI characters and distribute them across Meta’s social apps. Perhaps, he suggests, these AIs will even be able to post their own content to the feeds of Facebook, Instagram, and Threads.

Meta is still a metaverse company. It’s the biggest social media company in the world. It’s now trying to build AGI. Zuckerberg frames all this around the overarching mission of “building the future of connection.”

To date, that connection has been mostly humans interacting with each other. Talking to Zuckerberg, it’s clear that, going forward, it’s increasingly going to be about humans talking to AIs, too. It’s obvious that he views this future as inevitable and exciting, whether the rest of us are ready for it or not.

bnew


Microsoft makes its AI-powered reading tutor free​

Kyle Wiggers @kyle_l_wiggers / 1:00 PM EST • January 18, 2024

Image Credits: Nicolas Economou/NurPhoto / Getty Images

Microsoft today made Reading Coach, its AI-powered tool that provides learners with personalized reading practice, available at no cost to anyone with a Microsoft account.

As of this morning, Reading Coach is accessible on the web in preview — a Windows app is forthcoming. And soon (in late spring), Reading Coach will integrate with learning management systems such as Canvas, Microsoft says.

“It’s well known that reading is foundational to a student’s academic success; studies show that fluent readers are four times more likely to graduate high school and get better jobs,” Microsoft writes in a blog post. “With the latest AI technology, we have an opportunity to provide learners with personalized, engaging, and transformative reading experiences.”

Reading Coach builds on Reading Progress, a plug-in for the education-focused version of Microsoft Teams, Teams for Education, designed to help teachers foster reading fluency in their students. Inspired by the success of Reading Progress (evidently), Microsoft launched Reading Coach in 2022 as a part of Teams for Education and Immersive Reader, the company’s cross-platform assistive service for language and reading comprehension.


Image Credits: Microsoft

Reading Coach works by having learners identify words they struggle with the most and presenting them with tools to support independent, individualized practice. Based on an educator’s preferences, the tools available can include text to speech, syllable breaking and picture dictionaries.

After a learner practices in Reading Coach, educators can view their work, including which words the student practiced, how many attempts they made and which tools they used. Educators can also share this information with students if they choose.

Recently, Reading Coach received a spruce-up in the form of a “choose your own story” feature, powered by Microsoft’s Azure OpenAI Service, that lets learners tap AI to generate their own narrative adventure.

Akin to the AI-generated story tool on the Amazon Echo Show, Reading Coach’s “choose your own story” has learners select a character, setting and reading level and have AI create content based on these selections and the learner’s most challenging words. (Microsoft says that story content is moderated and filtered for things like “quality, safety and age appropriateness.”) Reading Coach provides feedback on pronunciation, listening to the learner read the story and awarding badges that unlock new characters and scenes as they progress.
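
Microsoft hasn’t documented the exact prompting, but the inputs it describes (character, setting, reading level, and the learner’s practice words) map naturally onto a single templated request to a chat model. Here is a minimal sketch against an OpenAI-style API; the model name, prompt wording, and moderation step are all assumptions:

Code:
# Hypothetical sketch of "choose your own story" generation.
from openai import OpenAI

client = OpenAI()  # assumes an OpenAI-compatible (e.g. Azure OpenAI) endpoint

def generate_story(character, setting, reading_level, practice_words):
    prompt = (
        f"Write a short, age-appropriate story at a {reading_level} reading "
        f"level, starring {character} in {setting}. Naturally include these "
        f"practice words: {', '.join(practice_words)}."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; the production model isn't public
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content  # would be moderated before use

print(generate_story("a brave otter", "a rainy lighthouse", "grade 2",
                     ["harbor", "signal", "courage"]))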

Learners who opt not to create their own story can pick from curated passages in ReadWorks, a library of resources for reading comprehension.

“Reading Coach intrinsically motivates learners to continue advancing their skills in several ways,” Microsoft continues. “With the use of AI in an impactful, safe, responsible way, we believe that personalized learning at scale is within reach.”

It’s key to note that Microsoft’s rosy view of AI for teaching reading comprehension isn’t shared by all educators. Experts say that there isn’t a foolproof tool on the market for measuring comprehension, which involves assessing what students know and the strength of their vocabulary as well as whether they can sound out and pronounce words. Students can inadvertently affect evaluations by pressing a wrong button. Or they might get bored with a task a tool presents to them and disengage, leading to a low score.

All that being said, teachers don’t think tools like Reading Coach can hurt. In a recent EdWeek Research Center survey, 44% of educators said they think adaptive tech does a better job of accurately assessing a student’s reading level than non-adaptive software or pen-and-paper methods.
 

bnew


Forta scores $55M to scale parent-led autism therapy platform​

The company will use the funds to grow its autism therapy practice, continue AI and LLM research, and develop clinical algorithms.

By Jessica Hagen

January 18, 2024
12:20 pm



Photo: Image taken by Mayte Torres/Getty Images

AI-enabled, family-powered autism therapy company Forta announced it closed a $55 million Series A funding round led by Insight Partners.

Alumni Ventures and Exor Ventures also participated in the round, as did the founders of Forward, Flexport, Curative, 23andMe, Prelude Fertility, Warby Parker, Harry's and Allbirds.

WHAT IT DOES

Forta combines AI and LLMs with a 50-hour training course for parents to provide their child with personalized applied behavior analysis (ABA) therapy for autism.

ABA therapy is based on learning and behavior. It involves various techniques, such as positive reinforcement and understanding antecedent, behavior and consequence.

The San Francisco-based company will use the Series A funds to grow its autism therapy practice, continue its AI and LLM research, and develop its clinical algorithms further.

"When we identify areas of care shortages, we find ways to empower local and family caregivers with AI-based tools and education to deliver earlier and more effective interventions," Ritankar Das, cofounder and CEO of Forta, said in a statement.

"[Forta] started by applying our platform to autism therapy because many families face years of waiting for care. Now, we are exploring how to expand the empowered home caregiver model to memory care and other chronic conditions."

MARKET SNAPSHOT

As of 2020, about one in 36 children had been identified as having autism spectrum disorder, according to the CDC.

Several digital health companies focus on providing autism care, including Los Angeles-based SpectrumAi, which offers tools to help analyze quality and outcomes data on ABA therapy in order to improve communication, social and learning skills for those with conditions like autism.

SpectrumAi raised up to $20 million in Series A funding last year and $9 million in seed funding in 2022.

Other companies in the space include Sprout, a tech-enabled autism care and treatment company offering in-home assessments and applied behavior-analysis therapy, and virtual reality behavioral-therapy platform Floreo, which teaches individuals with autism, ADHD, anxiety and other conditions independent-living skills, communication, and social and emotional regulation.
 

bnew


OpenAI partners with Arizona State University to use ChatGPT in classrooms​

Students and faculty can expect to use more AI on campus.​


By Emilia David, a reporter who covers AI. Prior to joining The Verge, she covered the intersection between technology, finance, and the economy.

Jan 18, 2024, 6:20 PM EST


Illustration by Alex Castro / The Verge

Arizona State University (ASU) and OpenAI announced a partnership to bring ChatGPT into ASU’s classrooms.

In a press release, ASU stated that it wants to focus on “three key areas of concentration” where it can use ChatGPT Enterprise, like “enhancing student success, forging new avenues for innovative research, and streamlining organizational processes.”

ASU deputy chief information officer Kyle Bowen told The Verge, “Our faculty and staff were already using ChatGPT, and after the launch of ChatGPT Enterprise, which for us addressed a lot of the security concerns we had, we believed it made sense to connect with OpenAI.” He added that ASU faculty members, some of whom have expertise in AI, will help guide the usage of generative AI on campus.

The university will begin taking project submissions from faculty and students on where to use ChatGPT in February. Anne Jones, vice provost for undergraduate education, said in an interview that some professors already use generative AI in their classes. She mentioned composition classes that use AI to improve writing and journalism classes that use AI platforms to make multimedia stories. There may even be room for chatbots to act as personalized tutors for ASU students, said Jones.

Jones and Bowen say that universities offer a live testing ground for many generative AI use cases.

“Universities hope to foster critical thinking, so we never considered closing ourselves off from the technology. We want to help determine the conditions in which this technology can be used in education,” Jones said.

Last year, ASU launched an AI accelerator program, bringing researchers and engineers together to create AI-powered services. The university also began offering prompt engineering classes to promote AI literacy.

This is the first partnership between OpenAI and an educational institution. The company has slowly begun to forge collaborations with more public-facing organizations. It announced a deal with the Pennsylvania state government to bring ChatGPT Enterprise to some state employees.
 

bnew


AI-Generated, Virtual Products Are Coming to a Podcast Near You​


A startup backed by United Talent Agency and L’Oreal is aiming to give podcasters a new line of revenue


A screenshot of the AsianBossGirl podcast and an AI-generated Garnier Fructis poster

By Ashley Carman

January 18, 2024 at 2:26 PM EST


Hello and welcome back to Soundbite. After a year of unrelenting bad podcast-industry news, we’re looking ahead today to new technology that could help podcasters make more money from their shows as they pivot to video.

As always, reach me through email, and if you haven’t yet subscribed to this newsletter, please do so here. Tell a friend to sign up, too!

First up, a few stories I’m reading:

Audible laid off around 5% of its workforce, or about 100 employees. The content teams were not impacted, and from a quick LinkedIn search, it seems folks on the recruitment team were among those cut.

Condé Nast announced it will fold Pitchfork under the GQ team and is laying off some staff, including Editor-In-Chief Puja Patel. It’s unclear whether this means the Pitchfork brand will cease to exist or what to expect of the site in the future, but as someone who religiously read and checked out every “Best New” music pick, it feels like the end of an era (and my youth).

My colleagues Devin Leonard and Dasha Afanasieva cover the “downfall of Diddy Inc.” A must read.

Podcasters’ newest money-making effort? Using AI to generate virtual products

When watching a somewhat recent episode of the AsianBossGirl podcast, you might not immediately register the lime-green Garnier Fructis poster in the background, just behind the heads of the hosts. That is, until a disclaimer unfurls on the lower third of the screen disclosing Garnier’s sponsorship. But even then, you might be additionally surprised to learn that this poster neatly hanging on the wall doesn’t physically exist at all. In fact, you might have no idea something is up until the poster disappears midway through the episode, making you wonder how it just went...poof.

As it turns out, a company called Rembrand generated the image of the poster using artificial intelligence and placed it in the video of the show after the whole thing had been recorded.

Rembrand believes its technology could power the future of podcasting, and, more ambitiously, all video product-placement in general, moving the form beyond visual effects and relying entirely on AI to do the creating.

The company began formally selling and placing ads on shows this past June and worked consistently with around 50 creators last quarter, said Cory Treffiletti, Rembrand’s chief marketing officer.

To date, the team has raised $9 million, according to Omar Tawakol, chief executive officer. Investors include UTA.VC, an arm of United Talent Agency, and BOLD, the venture-capital fund from L’Oreal SA.

Some UTA-represented podcasts, including AsianBossGirl and Zane and Heath: Unfiltered, have embraced the technology. Janet Wang, co-host and co-founder of AsianBossGirl, said her UTA agent introduced her and her co-hosts to Rembrand. The revenue they make from the virtual products supplements their typical host-read ads. They only began regularly uploading a video version of their show nine months ago, so they hadn’t previously dealt with product placement.

Typically, they record their episodes in batches. With Rembrand, they don’t have to swap out the parts of the set or the items being promoted – they just send the final video to Rembrand, and its team places the ads.

“We thought it was really cool they could add a poster on the wall or products on tables, as long as the brands aligned with our branding and what our viewers would be receptive to,” she said.

The pitch for brands is to make product placement easier and more affordable because they don’t have to ship physical items.

It takes Rembrand’s team a few hours to generate a synthetic version of the product and place it in videos, said Tawakol, though they eventually hope to get the process down to a few minutes.
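
Rembrand’s pipeline is proprietary, but the basic post-hoc insertion step can be illustrated with a naive frame-by-frame overlay. Below is a minimal sketch using OpenCV that assumes a static camera and a known wall region; a real system would add surface tracking, occlusion handling, and relighting:

Code:
# Naive illustration of post-production product placement: paste a generated
# poster into a fixed region of every frame of a finished video.
import cv2

def overlay_poster(video_in, video_out, poster_path, x, y):
    poster = cv2.imread(poster_path)
    ph, pw = poster.shape[:2]
    cap = cv2.VideoCapture(video_in)
    fps = cap.get(cv2.CAP_PROP_FPS)
    size = (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)),
            int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))
    out = cv2.VideoWriter(video_out, cv2.VideoWriter_fourcc(*"mp4v"), fps, size)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frame[y:y + ph, x:x + pw] = poster   # paste poster into the wall region
        out.write(frame)
    cap.release()
    out.release()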

He said the team pays anywhere from $10 to $30 per thousand impressions, or CPM. Currently, the average podcast CPM sits at $25 for a 60-second ad, according to Advertisecast.
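
As an illustration of the math: CPM is revenue per thousand impressions, so a hypothetical episode that draws 500,000 views at a $20 CPM would gross (500,000 / 1,000) × $20 = $10,000 for the placement.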

The podcast industry, still reeling from a tough year of tighter ad budgets, might see this as a salve. But Tawakol’s ambitions for the field come from a more practical place.

“What’s good about podcasts is they’re usually indoors and have less motion,” he said, adding that they initially started to work in the space because it’s a relatively easy environment in which to operate.

This isn’t the first time the podcast industry has seen potential in product placement. QCODE, a network known for its fiction podcasts starring A-list talent, forged partnerships with various brands to include in its shows. Its 2022 thriller, Listening In, starring Rachel Brosnahan, featured ads for Johnnie Walker.

Episodes began with the sound of ice cubes and liquid being poured into a glass and a producer asking a smart speaker to play the podcast. A fictional smart speaker voice then says, “Playing Listening In, presented by Johnnie Walker blended Scotch whiskey.”

QCODE also sells ads on AsianBossGirl, according to Wang.

At its core, this is what the pivot to video in podcasting has always been about – the ability to make more money off shows through additional audience and YouTube monetization. (Video also gives shows a better shot at being discovered through YouTube’s own recommendation system and short-form clip platforms.) And while the AI aspect of this technology is just getting started, for podcasters, any new revenue line feels like a win.

Odds and ends

EU Parliament calls for new rules around music streaming services


The EU Parliament is looking to ensure European artists not only receive more attention from music streaming services and their audiences but also are paid fairly as a result. Members of the European Parliament called for new rules, including a guarantee that European musical works are “visible, prominent and accessible” and a public label disclosing when a song is AI-generated. The push follows various efforts with similar aims in several other countries, including Uruguay and Canada.
 

bnew


Garbage AI on Google News​

JOSEPH COX

JAN 18, 2024 AT 9:33 AM

404 Media reviewed multiple examples of AI rip-offs making their way into Google News. Google said it doesn't focus on how an article was produced—by an AI or human—opening the way for more AI-generated articles.



Google News is boosting sites that rip-off other outlets by using AI to rapidly churn out content, 404 Media has found. Google told 404 Media that although it tries to address spam on Google News, the company ultimately does not focus on whether a news article was written by an AI or a human, opening the way for more AI-generated content making its way onto Google News.

The presence of AI-generated content on Google News signals two things: first, the black-box nature of Google News, where entry into its rankings is an opaque, but apparently gameable, system; second, that Google may not be ready to moderate its News service in the age of consumer-access AI, where essentially anyone is able to churn out a mass of content with little to no regard for its quality or originality.

“I want to read the original stories written by journalists who actually researched them and spoke to primary sources. Any news junkie would,” Brian Penny, a ghostwriter who first flagged some of the seemingly AI-generated articles to 404 Media, said.


Do you know about any other AI-generated content farms? I would love to hear from you. Using a non-work device, you can message me securely on Signal at +44 20 8133 5190. Otherwise, send me an email at joseph@404media.co.

One example was a news site called Worldtimetodays.com, which is littered with full-page and other ads. On Wednesday it published an article about Star Wars fandom. The article was very similar to one published a day earlier on the website Distractify, with even the same author photo. One major difference, though, was that Worldtimetodays.com wrote “Let’s be honest, war of stars fans,” rather than Star Wars fans. Another article is a clear rip-off of a piece from Heavy.com, with Worldtimetodays.com not even bothering to replace the Heavy.com watermarked artwork. Gary Graves, the listed author on Worldtimetodays.com, has published more than 40 articles in a 24-hour period.

Both of these rip-off articles appear in Google News search results. The first appears when searching for “Star Wars theory” and setting the results to the past 24 hours. The second appears when searching for the subject of the article with a similar 24 hour setting.

LEFT: THE DISTRACTIFY ARTICLE. RIGHT: THE ARTICLE ON WORLDTIMETODAYS.COM.

Aaron Nobel, editor-in-chief of Heavy.com, told 404 Media in an email that “I was not aware of this particular ripoff or this particular website. But over the years we've encountered many other sites that rip and republish content at scale.” Neither Distractify or Worldtimetodays.com responded to a request for comment.

There are a few different ways to use Google News. One is to simply open the main Google News homepage, where Google surfaces what it thinks are the most important stories of the day. Another is to search for a particular outlet, where you’ll then see recent stories from just that site. A third is to search by “topic,” such as “artificial intelligence,” “Taylor Swift,” or whatever it is you’re interested in. Appearing in topic searches is especially important for outlets looking to garner more attention for their writings on particular beats. 404 Media, at the time of writing, does not appear in topic searches (except, funnily enough, when people write about 404 Media, like this Fast Company article about us and other worker-owned media outlets). As in, if you searched “CivitAI,” an artificial intelligence company we’ve investigated extensively, our investigations would not appear in Google News; only people aggregating our work or producing their own would.


In another example of AI-generated rip-off content, Penny sent screenshots of search results for news related to the AI tool “midjourney.” At one point, those included articles from sites such as “WatchdogWire” and “Examiner.com.” These articles appear to use the same images, very similar or identical headlines, and pockets of similar text.
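
404 Media spotted these rip-offs by eye, but the kind of overlap described here (near-identical headlines and pockets of similar text) is straightforward to flag automatically. A minimal sketch using only Python’s standard library; the threshold is an arbitrary assumption:

Code:
# Flag likely rewrites by measuring character-level similarity between two
# articles; SequenceMatcher.ratio() returns 1.0 for identical strings.
from difflib import SequenceMatcher

def similarity(text_a, text_b):
    return SequenceMatcher(None, text_a.lower(), text_b.lower()).ratio()

original = "Star Wars fans have debated the theory for years..."
suspect = "Let's be honest, war of stars fans have debated the theory for years..."

score = similarity(original, suspect)
if score > 0.6:   # arbitrary threshold for "suspiciously similar"
    print(f"possible rip-off (similarity {score:.2f})")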

The Examiner.com domain was once used by a legitimate news service and went through various owners and iterations. The site adopted its current branding in around 2022, according to archived versions of the site on the Wayback Machine. With that in mind, it’s worth remembering that some of these sites that more recently pivoted to AI-generated content may have been accepted into Google News long ago, even before the advent of consumer-level AI.

A SERIES OF GOOGLE NEWS SCREENSHOTS PROVIDED BY PENNY.

Looking at WatchdogWire and Examiner.com more broadly, both sites regularly publish content with the same art and identical or very similar headlines in quick succession every day. Ahmed Baig, one of the listed authors on WatchdogWire, has published more than 500 articles in the past 30 days, according to his author page. Baig did not respond to a request for comment sent over LinkedIn asking whether he was taking work from other outlets and using AI to reword them. Baig lists himself as the editor-in-chief of WatchdogWire, as well as the head of SEO for a company called Sproutica. A contact email for Examiner.com uses the Sproutica domain.

Someone who replied to a request for comment to that address, and who signed off as “Nabeel,” confirmed Examiner.com is using AI to copy other peoples’ articles. “Sometimes it doesn’t perform well by answering out of context text, therefore, my writer proofread the content,” they wrote. “It's an experiment for now which isn't responding as expected in terms of Google Search. Despite publishing 400+ stories it attracted less than 1000 visits.”

The articles on WatchdogWire and Examiner.com are almost always very similar to those published on Watcher.Guru, another news site which also has a popular Twitter account with 2.1 million followers and which regularly goes viral on the platform. When asked if Watcher.Guru has any connection to WatchdogWire or Examiner.com, a person in control of the Watcher.Guru Twitter account told 404 Media in a direct message that “we are not affiliated with these sites. These sites are using AI to steal our content and featured images.”

In another case, Penny sent a screenshot of a Google News result that showed articles from CBC and another outlet called “PiPa News.” The PiPa News piece appears to be a rewrite of the CBC one, with a very similar headline and body of text. PiPa News did not respond to an emailed request for comment. Kerry Kelly from CBC’s public affairs department said in an email that “We are aware of an increase in outlets and individuals using CBC News articles without proper licensing or attribution, and are working to curb this trend through media monitoring, takedown requests for individual sites, and connecting with social media platforms when appropriate.”




A SCREENSHOT OF WATCHER.GURU'S WEBSITE ON THURSDAY.

A SCREENSHOT OF EXAMINER.COM'S WEBSITE ON THURSDAY.

A Google spokesperson said the company focuses on the quality of the content, and not how it was created. Their statement read: “Our focus when ranking content is on the quality of the content, rather than how it was produced. Automatically-generated content produced primarily for ranking purposes is considered spam, and we take action as appropriate under our policies.” Google reiterated that websites are automatically considered for Google News, and that it can take time for the system to identify new websites. The company added that its Google News ranking systems aim to reward original content that demonstrates things such as expertise and trustworthiness.

With that in mind, after 404 Media approached Google for comment, Penny found that the WatchdogWire and Examiner.com results had apparently been removed from search results for the “midjourney” query and another for “stable diffusion.” Google did not respond when asked multiple times to confirm whether it took any action.

404 Media remains outside of news topics results for the beats we cover.

bnew


TikTok can generate AI songs, but it probably shouldn’t​


These AI-generated songs are no AI Drake — yet.

By Emilia David, a reporter who covers AI. Prior to joining The Verge, she covered the intersection between technology, finance, and the economy.

Jan 18, 2024, 6:16 PM EST



Illustration by Alex Castro / The Verge

TikTok has launched many songs that have gone viral over the years, but now it’s testing a feature that lets more people exercise their songwriting skills… with some help from AI.

AI Song generates songs from text prompts with help from the large language model Bloom. Users can write out lyrics in the text field when making a post; TikTok will then recommend AI Song to add sounds to the post, and they can toggle the song’s genre.
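
TikTok hasn’t said exactly how AI Song calls Bloom, but lyric generation with the open Bloom family is easy to sketch with Hugging Face’s transformers library. An illustrative example using a small public checkpoint; the production model size, prompting, and the separate melody and vocal synthesis step are not public:

Code:
# Illustrative lyric generation with an open Bloom checkpoint. This only
# produces text; turning lyrics into a sung, auto-tuned track is a separate
# (non-public) synthesis step.
from transformers import pipeline

generator = pipeline("text-generation", model="bigscience/bloom-560m")

prompt = "Write two upbeat pop verses about dancing in the rain:\n"
result = generator(prompt, max_new_tokens=80, do_sample=True, temperature=0.9)
print(result[0]["generated_text"])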

The Verge reached out to TikTok for comment. The feature was first spotted last week.

AI Song doesn’t seem to be available to everyone yet, but some TikTok users have already begun experimenting with it. The results so far are not great. Many are out of tune despite the availability of auto-tuning vocals. Take this one from TikTok user Jonah Manzano, who created a song that somehow tried to make the word comedy have more syllables than it needs. Another user, Kristi Leilani, sort of recreated a Britney Spears song but, again, with severely out-of-tune vocals.


@kristileilani

What is TikTok AI Song? The new experimental feature, powered by the Bloom LLM, lets creators generate unique songs and lyrics for videos and photos. 🎧 #tiktoknews #newfeature #generativeai #aimusic #aimusicvideo #tiktokai #tiktokaisong #bloom #llm #macinelearning #whatisit #testing

♬ I'm an a i - Kristi Leilani


AI-generated songs, however, are not new to TikTok. The now-infamous AI Drake and The Weeknd song “Heart on My Sleeve” went viral on the platform. Bad Bunny also criticized people for listening to an AI sound-alike posted on TikTok.

TikTok is not the only platform to lean into generative AI features for its users. YouTube began testing a music creation functionality that lets users make songs from either text prompts or a hummed tune. Dream Track allows for 30-second snippets in the style of other popular artists.

For transparency, TikTok has rolled out other features that help identify AI-created content on the app and updated its rules to require users to be upfront about using AI in their content.
 

bnew


Google Responds To Claims Of Google News Boosting Garbage AI Content​

Jan 19, 2024 - 7:51 am | by Barry Schwartz

Filed Under Google Search Engine Optimization



Danny Sullivan, the Google Search Liaison, responded to the article from 404 Media titled Google News Is Boosting Garbage AI-Generated Articles. In short, Sullivan said the article’s examples came from filtering Google News results by date rather than by relevance (relevance filtering is the default), and that Google can always do better.

First, go read the article if you can get through the paywall. :smile:

Then here is Danny Sullivan's response, which he posted on X, Mastodon and on Bluesky:

Let me summarize before I quote:

(1) Google News is not boosting AI content

(2) The examples in the article show that the author used filters and special searches to showcase the AI content above the default settings in Google News.

(3) He said focus on quality, it is not about how it is produced but if the content is quality or not.

(4) Google is not perfect, Google will do better.

(5) 404 Media doesn't rank because it is a new site, and Google News needs time to trust it, but Google will look for ways to improve this process for new news sites entering the market (we've been asking for this for years).

(6) 404 Media uses a paywall or subscription process for its content and thus should use the paywalled structured data so Google can understand the content and rank it.

Now here is what Danny Sullivan posted:

Jason, I’d like to clarify Google News is not somehow “boosting” AI content to the top of search results. This isn't the case. I also appreciate the frustration of a new publication like yours wanting to quickly appear in Google News. I do. It’s something I hope we’ll improve on. Here’s more on both.

Google News, like Google Search, indexes content from across the web. However, appearing in our index isn’t a guarantee content will rank well. In today’s story, the screenshots of AI content supposedly being boosted involve overriding our default ranking to manually force it higher in the results.

It’s possible to use the News tab on Google Search to show news-related content sorted by date, as opposed to by relevance, as done with the screenshots. Doing this is expressly asking our systems to ignore the regular relevance ranking they do and simply show the latest content in descending order.

As for AI content, as we’ve said before, our focus is on quality of content, not production. This shouldn’t be misinterpreted as if we’ve granted a free pass to churning out lots of low quality content. It isn't & doing so is against our policies. More here.

No automated systems are perfect, and this isn’t to say that our default ranking systems would never show sub-par content. We are also always working to improve them to ensure that quality, original journalism is successful...

As for your content appearing in Google News, it can take time for our systems to recognize & surface material from new pubs. The systems tend to want to see some period of news publishing over time. That said, we’ll look to see if we can find ways to improve the process generally to do a better job.

I’d also encourage your publication (or any pub) that has paywall or gated content to provide our access to our crawler, so we can fully understand the work your doing. This can be done in a way that it does not allow readers to bypass registration. More here.

Forum discussion at X, Mastodon and on Bluesky.
 