bnew
NICOLE KOBIE

CULTURE


DEC 24, 2023 7:00 AM

AI Is Telling Bedtime Stories to Your Kids Now

Artificial intelligence can now tell tales featuring your kids’ favorite characters. It’s copyright chaos—and a major headache for parents and guardians.

PHOTOGRAPH: DANIEL GRIZELJ/GETTY IMAGES


The problem with Bluey is there's not enough of it. Even with 151 seven-minute-long episodes of the popular children's animated show out there, parents of toddlers still desperately wait for Australia’s Ludo Studio to release another season. The only way to get more Bluey more quickly is if they create their own stories starring the Brisbane-based family of blue heeler dogs.


Luke Warner did this—with generative AI. The London-based developer and father used OpenAI's latest tool, customizable bots called GPTs, to create a story generator for his young daughter. The bot, which he calls Bluey-GPT, begins each session by asking people their name, age, and a bit about their day, then churns out personalized tales starring Bluey and her sister Bingo. "It names her school, the area she lives in, and talks about the fact it's cold outside," Warner says. "It makes it more real and engaging."

The main version of ChatGPT has, since its launch last year, been able to write a children's story, but GPTs allow parents—or anyone, really—to constrain the topic and start with specific prompts, such as a child's name. This means anyone can generate personalized stories starring their kid and their favorite character—no need to wait for Ludo to drop fresh content.
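
Warner hasn't shared his bot's internals, but the basic mechanics are easy to approximate with OpenAI's public API. Here is a minimal sketch, assuming the OpenAI Python client; the system prompt, model choice, and child details are illustrative, and trademarked characters are deliberately left out:

```python
# Minimal sketch of a personalized story bot, assuming the OpenAI Python
# client (openai>=1.0). Prompt wording and model choice are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You write short, gentle bedtime stories for young children. "
    "Weave the child's name, age, and details from their day into the plot."
)

def bedtime_story(name: str, age: int, about_today: str) -> str:
    """Generate a personalized bedtime story from a child's details."""
    response = client.chat.completions.create(
        model="gpt-4",  # any chat model works here
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {
                "role": "user",
                "content": f"The child is {name}, age {age}. "
                           f"Today: {about_today}. Tell tonight's story.",
            },
        ],
    )
    return response.choices[0].message.content

print(bedtime_story("Ada", 4, "she visited a cold, windy park"))
```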

That said, the stories churned out by AI aren't anywhere near as good as the show itself, and they raise legal and ethical concerns. At the moment, OpenAI’s GPTs are only available to those with a Plus or Enterprise account. The company has suggested they may be rolled out to other users, but as custom agents are believed to be one of the concerns that led to the company's recent board-level drama, and given that researchers have flagged privacy concerns with GPTs, that release could be a ways out. (OpenAI has yet to reply to requests for comment for this story.)

When Warner built his GPT at the beginning of November, he intended to put it up on the GPT Store that OpenAI had in the works. That never came to pass. Just five days after he advertised Bluey-GPT on Instagram, he got a takedown notice from OpenAI, which disabled public sharing of the GPT. Warner knew using Bluey as the basis for his GPT would be fraught, so he wasn't surprised. Trademarked names and images are almost always a no-go, but the laws around stories "written" by AI are murky—and Warner's Bluey bedtime stories are just the beginning.

UNPACKING WHICH LAWS apply isn't simple: Warner is based in the UK, OpenAI is in the US, and Ludo is in Australia. Fictional characters can be protected by copyright in the UK and the US, but it's more complicated in Australia, where simply naming a character may not be an infringement without including further elements from the work.


In the UK, the legal protections for characters include names as well as backstory, mannerisms, and expressions, says Xuyang Zhu, a lawyer on the technology, IP, and information team at the firm Taylor Wessing. "That copyright can be infringed if a character is replicated in another context in a way that reproduces enough of these aspects," Zhu says, adding that rights holders may also take action if their characters are at risk of being damaged reputationally. Alternatively, they may see fan creations as a way to drive engagement—after all, fan fiction is hardly new online, and is generally protected by fair use. These stories are just made by AI.

Still, the stories produced by ChatGPT, and therefore Bluey-GPT, are so generic they have little in common with the actual Bluey characters beyond names. Matthew Sag, a professor of law and AI at Emory University, says character-specific GPTs create more trademark problems than copyright ones. Around the time Warner created his Bluey bot, he also made one using Paw Patrol characters. “It didn't produce any content that was remotely similar to the children's cartoon series,” says Sag, who played around with the bot. “But I still don't think that you should be able to do this without permission from the rights holders. Just like you shouldn't be able to market a Coca-Cola GPT."

If generative AI were good enough to truly mimic Bluey, or if prompts were detailed enough to produce a more specific output, it's possible that a children's story generated by a GPT could infringe on the copyright of the show's creators. But Sag argues that responsibility would lie with the person asking for those stories rather than OpenAI, which does filter for requests that may infringe copyright. When I prompted ChatGPT to write a story about Bluey and Bingo from the TV show, the results varied. Some of the time ChatGPT did as asked, but just as often it refused on copyright grounds—once, it offered to write a story starring dogs named "Lulu" and "Bongo" instead.

The Bluey-GPT is still available to Warner and his daughter, but can't be shared publicly or monetized in the forthcoming GPT Store. However, anyone with access to ChatGPT, free or paid, can still ask the chatbot to write a personalized story starring Bluey and her sister Bingo. It appears in this case the problem wasn't the AI-generated fan fiction, but the attempt to sell it.


WARNER ISN'T THE first to see financial potential in AI's ability to generate stories, though story-making apps such as Oscar, Once Upon a Bot, and Bedtimestory.ai use generic characters or those that are in the public domain. Some apps include AI-generated illustrations or the option to have the story read aloud.

Oscar has tight constraints instead of an open prompt like ChatGPT. People can request stories in one of two ways: They can supply their child's name, age, and gender and then prompt the bot with an animal, profession, and moral—an astronaut fox that learns perseverance, for example. Or they can provide a child's details and then choose a story from an existing (and out-of-copyright) universe such as The Wizard of Oz or Alice in Wonderland. That differs from Bedtimestory.ai and Once Upon a Bot, which have open prompts similar to Warner's Bluey-GPT, allowing people to ask for stories about Bluey characters. Both delivered such stories when prompted—though neither came anywhere near the quality of the content that Ludo Studio produces, with the former app misgendering Bluey and the latter treating the characters as pets.

Putting aside Bluey and her sister Bingo, are the customized stories generated by AI even worth reading to your children? Research suggests personalized stories do help grab their attention, says Natalia Kucirkova, a professor in reading and development at The Open University—though it works best if the books are well written and designed to appeal to children and their parents.

Writing a high-quality storybook is no easy task. Olaf Falafel is a comedian, illustrator, and author of Blobfish, a children’s book about a lonely blobfish’s search for a friend, featuring moral lessons on litter, stepping out of your comfort zone, and looking past appearances, as well as plenty of excellent jokes and puns. AI can’t do this. Falafel is sure of it. "I tried to get it to write a joke and it failed—no matter how much I tweaked it and pushed it, it just couldn't get how a joke works and what makes something funny," he says.

With or without Bluey and friends, the biggest challenge facing these AI-generated stories is that they're dull. "I want a twist or something that's different or unique," Falafel says, but his attempts to run his own characters through AI kept coming up with the same plot: "There's a lot of buried treasure."


ANOTHER CONCERN WITH bots that create stories for kids is making sure what they churn out is actually safe for children. Giving Bedtimestory.ai a prompt that included profanity, fecal matter, and crime led to a story entitled "The Crapulent Bandit's Heist," which may offend some but was more goofy than scary. More violent prompts, however, did create content not suitable for the requested age range of 2 to 3 years old, including a story that begins: "Once upon a time, she shot her mom in the face." The promised murder doesn't actually happen, though there are gun references.

Cofounder Linus Ekenstam says Bedtimestory.ai relies on OpenAI's moderation API to ensure content is family-friendly. "It’s not 100 percent watertight, but we are constantly improving, and … it will steer away from this type of input," he says. "As the available tools become better, and we learn more about how to guide and steer the model, this too will improve over time."
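
For a sense of what that pre-screening might look like, here is a minimal sketch using OpenAI's moderation endpoint via the official Python client; note the endpoint flags categories such as violence and sexual content, so any stricter "suitable for toddlers" filtering would have to be the app's own logic on top:

```python
# Minimal sketch of prompt pre-screening with OpenAI's moderation endpoint.
# The accept/reject flow is illustrative, not Bedtimestory.ai's actual code.
from openai import OpenAI

client = OpenAI()

def is_safe_prompt(prompt: str) -> bool:
    """Return False if the moderation endpoint flags the prompt."""
    result = client.moderations.create(input=prompt)
    return not result.results[0].flagged

prompt = "A story where she shot her mom in the face"
if is_safe_prompt(prompt):
    print("Prompt accepted; generate the story.")
else:
    print("Prompt rejected; ask for something gentler.")
```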

While traditionally published children's books generally avoid random forays into violence, they aren't perfect. Many parents have books they grow to hate after multiple rereadings, while older classics get criticized for outdated values. Roald Dahl's stories, for example, have been edited to remove negative references in characters' descriptions including weight, baldness, and skin color. Publishers continue to struggle with diversity and representation. A UK study recently found just 5 percent of children's books have Black, Asian, or minority ethnic protagonists. AI-generated books could be one way for parents to make characters that represent their own families, though that could also prove problematic given the bias found in many generative AIs.

ChatGPT wasn't designed to write children's books. An AI model embedded with knowledge of how to teach reading and trained on only the best children's books would produce better outputs—in theory at least, says Kucirkova. She points to LitLab, which uses AI to generate “decodable content,” short mini stories centered on phonics lessons, though the platform was created to be used by teachers rather than parents.

IN THE MEANTIME, AI-generated bedtime stories may infringe copyright, lack quality, and require safeguards, but if they get a book-bored child to read, harried parents may not mind. Kucirkova worries that while AI might be a fun tool in families that already read widely, it might become a crutch in others. "For children who might need personalized attention the most, they may be left to navigate various open solutions without the necessary safeguards or care," she says.

For those who want an AI-generated tale of their own, Falafel argues it’s best to use a story-bot as a starting point, then create something that builds on whatever the tool spits out. "Have the confidence to edit and make it your own," he says. "Then parents can have a little bit of pride that it wasn't just completely a computer making this."

Falafel says he would use AI stories with his own children, though he's perturbed by the possibility that these AI models were trained on the work of writers, only to now replace them. "I'm not against the technology, I'm against the ethics of how it was made," he says. "If the technology was clean, in terms of everyone who's contributing to that machine learning being made aware and paid for it, I wouldn't have a problem with parents using it."

With that in mind, I asked ChatGPT to write a story featuring my 2-year-old daughter, using Bluey and Bingo to teach a lesson about copyright infringement. The ChatGPT-generated moral of the story is delivered by Bingo: "It's always best to tell stories that you make up yourself, or if you're sharing someone else's story, to tell everyone who really made it up. That way, everyone gets the credit they deserve for their creativity."

Despite its attribution to her favorite character, my toddler tuned out before the end of that sentence. AI and app developers might want to pay closer attention.
 

bnew





YAYI2-30B - New Chinese base LLM just launched.
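
The announcement gives no usage details, but a 30B base model would presumably load like any other Hugging Face causal LM. A generic sketch; the repo id is an assumption:

```python
# Generic loading sketch for a Hugging Face causal LM; the repo id below
# is an assumption, not confirmed by the announcement.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "wenge-research/yayi2-30b"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    repo, device_map="auto", torch_dtype="auto", trust_remote_code=True
)

inputs = tokenizer("The capital of Australia is", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```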
 

bnew



The timeline split of AI vs Robot Hardware has changed

over the last 90 days i’ve witnessed industry-leading AI in our lab running on humanoid hardware, and frankly it’s blown me away

i’m watching robots perform complex tasks entirely with neural nets: AI-trained tasks that i didn’t think were feasible until the end of the decade

when starting Figure in 2022, i believed we’d have reliable humanoid hardware well in advance of reliable real-world neural nets running on robots

basically, i thought training a robot to do “household” type tasks would size our timeline

my view on this has changed the last several months

i now feel that we’ll likely have reliable AI that can run on hardware around the same time or slightly before the humanoid hardware is highly reliable and starting volume manufacturing

and the path to delivering reliable robotic hardware seems clear and predictable to me, give us time and this will be solved

the Figure AI team is running humanoid robots with end-to-end neural nets performing highly complicated and dexterous tasks. these are tasks too complicated to write heuristics in C++

this is exciting as you can teach robots how to do tasks, and as the robot fleet grows so will the training sets. as the fleets grow in size they will continue to learn, become smarter and more capable every day

2024 will be the year of Embodied AI

we’ll be racing to get reliable hardware, vast training sets, and designing manufacturing processes for billions of units. i can’t think of a more exciting future

we’ll be demonstrating our Embodied AI in 2024, stay tuned!

 

bnew




LongAnimateDiff

Sapir Weissbuch, Naomi Ken Korem, Daniel Shalem, Yoav HaCohen | Lightricks Research

Hugging Face Spaces

We are pleased to release the "LongAnimateDiff" model, which has been trained to generate videos with a variable frame count, ranging from 16 to 64 frames. This model is compatible with the original AnimateDiff model.

We release two models:

  1. The LongAnimateDiff model, capable of generating videos with frame counts ranging from 16 to 64 (a loading sketch follows after this list). You can download the weights from either Google Drive or HuggingFace. For optimal results, we recommend using a motion scale of 1.28.
  2. A specialized model designed to generate 32-frame videos. This model typically produces higher quality videos compared to the LongAnimateDiff model supporting 16-64 frames. Please download the weights from Google Drive or HuggingFace. For better results, use a motion scale of 1.15.
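
For those who want to try it, here is a speculative loading sketch using the AnimateDiff support in Hugging Face's diffusers library; the repo id and weight layout are assumptions based on this announcement, and the recommended motion-scale settings apply to the original AnimateDiff codebase rather than this pipeline:

```python
# Speculative loading sketch via diffusers' AnimateDiff pipeline. The
# motion-module repo id is an assumption; the released weights may need
# conversion to diffusers' MotionAdapter layout.
import torch
from diffusers import AnimateDiffPipeline, MotionAdapter
from diffusers.utils import export_to_gif

adapter = MotionAdapter.from_pretrained("Lightricks/LongAnimateDiff")  # assumed id

pipe = AnimateDiffPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # any Stable Diffusion 1.5 base works
    motion_adapter=adapter,
    torch_dtype=torch.float16,
).to("cuda")

# The release notes say frame counts from 16 to 64 are supported.
output = pipe("A swan swims in the lake", num_frames=32)
export_to_gif(output.frames[0], "swan.gif")
```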

Results

Sample prompts from the results gallery (videos omitted here):

- Walking astronaut on the moon
- A corgi dog skying on skis down a snowy mountain
- A pizza spinning inside a wood fired pizza oven
- A young man is dancing in a Paris nice street
- A ship in the ocean
- A hamster is riding a bicycle on the road
- A drone is flying in the sky above a forest
- A drone is flying in the sky above the mountains
- A swan swims in the lake
- A ginger woman in space future
- Photo portrait of old lady with glasses
- Small fish swimming in an aquarium



 

bnew


[Submitted on 26 Dec 2023]

Principled Instructions Are All You Need for Questioning LLaMA-1/2, GPT-3.5/4

Sondos Mahmoud Bsharat, Aidar Myrzakhan, Zhiqiang Shen
This paper introduces 26 guiding principles designed to streamline the process of querying and prompting large language models. Our goal is to simplify the underlying concepts of formulating questions for various scales of large language models, examine their abilities, and enhance users' comprehension of how different scales of large language models behave when given different prompts. Extensive experiments are conducted on LLaMA-1/2 (7B, 13B, and 70B) and GPT-3.5/4 to verify the effectiveness of the proposed principles for instruction and prompt design. We hope this work provides a better guide for researchers working on the prompting of large language models. The project page is available at this https URL.
Comments: GitHub at: this https URL
Subjects: Computation and Language (cs.CL); Artificial Intelligence (cs.AI)
Cite as: arXiv:2312.16171 [cs.CL] (or arXiv:2312.16171v1 [cs.CL] for this version)

Submission history

From: Zhiqiang Shen
[v1] Tue, 26 Dec 2023 18:59:33 UTC (1,127 KB)



 

bnew

Microsoft Azure AI Services Launches Personal Voice for Customized AI Audio


Microsoft's Personal Voice lets users generate their own AI-based voice using a 60-second audio recording.

By Luke Jones
November 17, 2023 3:41 pm CET




Microsoft has introduced Personal Voice, a new feature allowing individuals and businesses to generate AI-based voices using their own vocal samples. Announced during the Ignite 2023 developers conference, the Azure AI Speech service's latest addition is set to revolutionize the way AI voices are created, providing opportunities in gaming, language dubbing, and personalized voice assistants.

Simplifying Voice Synthesis


The new feature builds upon Microsoft's existing custom neural voice capabilities, streamlining the process for creating a synthetic voice that closely resembles a specific person's speech. Compared to traditional methods that may be complex or expensive, Personal Voice enables users to synthesize a voice that mirrors their own with just a 60-second audio recording.
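
As a rough illustration of how such synthesis is typically driven, here is a sketch using the Azure Speech SDK for Python; the voice name, SSML shape, and speaker-profile id follow Azure's custom neural voice conventions but are assumptions here, not a verified Personal Voice recipe:

```python
# Rough sketch of SSML-driven synthesis with the Azure Speech SDK. The
# DragonLatestNeural voice and speakerProfileId element are assumptions
# based on Azure's custom-voice conventions.
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(
    subscription="YOUR_KEY", region="westeurope"  # one of the launch regions
)
# audio_config=None keeps the synthesized audio in memory.
synthesizer = speechsdk.SpeechSynthesizer(
    speech_config=speech_config, audio_config=None
)

ssml = """
<speak version='1.0' xmlns='http://www.w3.org/2001/10/synthesis'
       xmlns:mstts='http://www.w3.org/2001/mstts' xml:lang='en-US'>
  <voice name='DragonLatestNeural'>
    <mstts:ttsembedding speakerProfileId='YOUR_PROFILE_ID'>
      Hello! This should sound like the enrolled speaker.
    </mstts:ttsembedding>
  </voice>
</speak>
"""

result = synthesizer.speak_ssml_async(ssml).get()
if result.reason == speechsdk.ResultReason.SynthesizingAudioCompleted:
    print("Synthesized", len(result.audio_data), "bytes of audio")
```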

This technological advancement is seen as particularly transformative for the entertainment industry, where it could be employed to dub an actor's voice across various languages, thereby maintaining a consistent vocal presence. In gaming, players might imbue their characters with a voice that reflects their actual speech, offering a more immersive experience.



Microsoft uses similar AI capabilities in the Skype TruVoice feature, which supports real-time translation for video calls using your personal voice: the person hearing the translation of your speech hears it in your actual voice.


Ethical Considerations and Availability


In light of potential misuses, such as creating deceptive audio clips, Microsoft emphasizes the importance of ethical conduct. Users must consent via a recorded statement, acknowledging they are aware that a digital version of their voice will be created and utilized. Adherence to Microsoft's established guidelines and conduct code is mandatory for all users.

Initially, Personal Voice will be accessible within limited regions, including West Europe, East US, and South East Asia. The company is preparing for a public preview slated for December 1. Microsoft's initiative arguably represents a step forward in naturalistic AI interactions by melding cutting-edge AI with the uniqueness of individual human voices. With meticulous guidelines and responsible usage, Personal Voice may soon redefine synthetic voice applications across various sectors.





 

Micky Mikey
Anyone know of any A.I. music tools that'll generate ambient music or just plain instrumentals? I tried suno but it seems to only generate music with lyrics.
 