Apple’s latest AI research could completely transform your iPhone

bnew


Apple’s latest AI research could completely transform your iPhone

Michael Nuñez
@MichaelFNunez


December 20, 2023 1:01 PM

Apple unveils transformative AI research, enhancing large language model efficiency and enabling powerful AI capabilities on devices with limited memory.


Apple, a company practically synonymous with technological innovation, has once again positioned itself at the forefront of the AI revolution.

The Cupertino, Calif.-based company recently announced significant strides in artificial intelligence research through two papers introducing novel techniques for 3D avatars and efficient language model inference. The advancements could enable more immersive visual experiences and allow complex AI systems to run on consumer devices such as the iPhone and iPad.

In the first research paper, Apple scientists propose HUGS (Human Gaussian Splats) to generate animated 3D avatars from short monocular videos (i.e. videos taken from a single camera). “Our method takes only a monocular video with a small number of (50-100) frames, and it automatically learns to disentangle the static scene and a fully animatable human avatar within 30 minutes,” said lead author Muhammed Kocabas.

The training video (left upper), the reconstructed canonical human avatar (right upper), the reconstructed scene model (left bottom), and the animated reposed human together with the scene (right bottom). (Credit: Apple)

HUGS represents both the human and background scene using 3D Gaussian splatting, an efficient rendering technique. The human model is initialized from a statistical body shape model called SMPL. But HUGS allows the Gaussians to deviate, enabling capture of details like clothing and hair.
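To make the representation concrete, here is a minimal sketch of the state a single splat typically carries in 3D Gaussian splatting. This follows the standard formulation rather than Apple’s HUGS code, and the `Gaussian3D` structure and field names are illustrative assumptions.

```python
# A generic 3D Gaussian splat primitive (illustrative; not Apple's HUGS code).
from dataclasses import dataclass

import numpy as np


@dataclass
class Gaussian3D:
    mean: np.ndarray      # (3,) center position in world space
    rotation: np.ndarray  # (4,) unit quaternion (w, x, y, z)
    scale: np.ndarray     # (3,) per-axis standard deviations
    opacity: float        # alpha used when compositing splats front to back
    color: np.ndarray     # (3,) RGB (full systems use spherical harmonics)

    def covariance(self) -> np.ndarray:
        """Sigma = R S S^T R^T: the splat's anisotropic 3D covariance."""
        w, x, y, z = self.rotation
        R = np.array([
            [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
            [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
            [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
        ])
        S = np.diag(self.scale)
        return R @ S @ S.T @ R.T
```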

A novel neural deformation module animates the Gaussians in a realistic fashion using linear blend skinning. This coordinated movement avoids artifacts while reposing the avatar. According to Kocabas, HUGS “enables novel-pose synthesis of human and novel view synthesis of both the human and the scene.”
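For reference, classic linear blend skinning is simple enough to show in a few lines; the sketch below applies it to a point cloud such as the Gaussian centers. Shapes and names are illustrative, and Apple’s learned neural deformation module goes beyond this plain version.

```python
# Plain linear blend skinning (LBS), shown for intuition; HUGS learns a
# neural deformation on top of this classic mechanism.
import numpy as np


def linear_blend_skinning(points, weights, bone_transforms):
    """Repose points as a weighted blend of per-bone rigid motions.

    points:          (N, 3) rest-pose positions (e.g., Gaussian centers)
    weights:         (N, B) skinning weights; each row sums to 1
    bone_transforms: (B, 4, 4) homogeneous transform per bone for the target pose
    """
    homo = np.concatenate([points, np.ones((len(points), 1))], axis=1)  # (N, 4)
    blended = np.einsum("nb,bij->nij", weights, bone_transforms)        # (N, 4, 4)
    posed = np.einsum("nij,nj->ni", blended, homo)                      # (N, 4)
    return posed[:, :3]
```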

Compared to previous avatar generation methods, HUGS is up to 100 times faster in training and rendering. The researchers demonstrate photorealistic results after optimizing the system for just 30 minutes on a typical gaming GPU. HUGS also outperforms state-of-the-art techniques like Vid2Avatar and NeuMan on 3D reconstruction quality.

The new technology lets people put different digital characters, or “avatars,” into a new scene using just one video of the person and the place. This can be done quickly, with the image updating 60 times every second to make it look smooth and realistic. (Credit: Apple)

The new 3D modeling capability is an impressive achievement from Apple’s researchers. The real-time performance and the ability to create avatars from in-the-wild videos could unlock new possibilities for virtual try-on, telepresence, and synthetic media in the relatively near future. Imagine being able to create novel 3D scenes like this right from your iPhone camera!

Bridging the memory gap in AI inference

In the second paper, Apple researchers tackled a key challenge in deploying large language models (LLMs) on devices with limited memory. Modern natural language models like GPT-4 contain hundreds of billions of parameters, making inference expensive on consumer hardware.

The proposed system minimizes data transfer from flash storage into scarce DRAM during inference. “Our method involves constructing an inference cost model that harmonizes with the flash memory behavior, guiding us to optimize in two critical areas: reducing the volume of data transferred from flash and reading data in larger, more contiguous chunks,” explained lead author Keivan Alizadeh.

Two main techniques are introduced. “Windowing” reuses activations from recent inferences, while “row-column bundling” reads larger blocks of data by storing rows and columns together. On an Apple M1 Max CPU, these methods improve inference latency by 4-5x compared to naive loading. On a GPU, the speedup reaches 20-25x.
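To make those two ideas concrete, here is a rough sketch of a DRAM-side cache that combines a sliding activation window with bundled flash reads. The class, file layout, and bookkeeping are assumptions based on the paper’s description, not Apple’s implementation.

```python
# Illustrative sketch of "windowing" + "row-column bundling" (assumed layout;
# not Apple's code). Weights for feed-forward neurons live in flash; only
# neurons active for recent tokens are kept resident in DRAM.
import numpy as np


class FlashWeightCache:
    def __init__(self, flash_file, d_model, window_size=5):
        self.flash = np.memmap(flash_file, dtype=np.float16, mode="r")
        self.d_model = d_model
        self.window_size = window_size
        self.window = []   # sets of neuron ids active for the last few tokens
        self.dram = {}     # neuron id -> (up-projection row, down-projection column)

    def load_active(self, active_neurons):
        """Fetch weights for newly active neurons; reuse the rest (windowing)."""
        resident = set().union(*self.window) if self.window else set()
        for n in set(active_neurons) - resident:
            # Row-column bundling: neuron n's up-projection row and
            # down-projection column are stored back to back in flash,
            # so one contiguous read of 2 * d_model values fetches both.
            start = n * 2 * self.d_model
            chunk = np.asarray(self.flash[start:start + 2 * self.d_model])
            self.dram[n] = (chunk[:self.d_model], chunk[self.d_model:])
        # Slide the window and evict neurons no longer referenced by it.
        self.window.append(set(active_neurons))
        if len(self.window) > self.window_size:
            expired = self.window.pop(0)
            still_needed = set().union(*self.window)
            for n in expired - still_needed:
                self.dram.pop(n, None)
```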

“This breakthrough is particularly crucial for deploying advanced LLMs in resource-limited environments, thereby expanding their applicability and accessibility,” said co-author Mehrdad Farajtabar. The optimizations could soon allow complex AI assistants and chatbots to run smoothly on iPhones, iPads, and other mobile devices.


Apple’s strategic vision

Both papers demonstrate Apple’s growing leadership in AI research and applications. While promising, experts caution that Apple will need to exercise great care and responsibility when incorporating these technologies into consumer products. From privacy protection to mitigating misuse, the societal impact must be considered.

As Apple potentially integrates these innovations into its product lineup, it’s clear that the company is not just enhancing its devices but also anticipating the future needs of AI-infused services. By allowing more complex AI models to run on devices with limited memory, Apple is potentially setting the stage for a new class of applications and services that leverage the power of LLMs in a way that was previously unfeasible.

Furthermore, by publishing this research, Apple is contributing to the broader AI community, which could stimulate further advancements in the field. It’s a move that reflects Apple’s confidence in its position as a tech leader and its commitment to pushing the boundaries of what’s possible.

If applied judiciously, Apple’s latest innovations could take artificial intelligence to the next level. Photorealistic digital avatars and powerful AI assistants on portable devices once seemed far off — but thanks to Apple’s scientists, the future is rapidly becoming reality.



 

bnew


Apple Develops Breakthrough Method for Running LLMs on iPhones


Thursday December 21, 2023 2:26 am PST by Tim Hardwick

Apple GPT in your pocket? It could be a reality sooner than you think. Apple AI researchers say they have made a key breakthrough in deploying large language models (LLMs) on iPhones and other Apple devices with limited memory by inventing an innovative flash memory utilization technique.



LLMs and Memory Constraints

LLM-based chatbots like ChatGPT and Claude are incredibly data- and memory-intensive, typically requiring vast amounts of memory to function, which is a challenge for devices like iPhones that have limited memory capacity. To tackle this issue, Apple researchers have developed a novel technique that uses flash memory – the same memory where your apps and photos live – to store the AI model's data.


Storing AI on Flash Memory

In a new research paper titled "LLM in a flash: Efficient Large Language Model Inference with Limited Memory," the authors note that flash storage is more abundant in mobile devices than the RAM traditionally used for running LLMs. Their method cleverly bypasses the limitation using two key techniques that minimize data transfer and maximize flash memory throughput:

  1. Windowing: Think of this as a recycling method. Instead of loading new data every time, the AI model reuses some of the data it already processed. This reduces the need for constant memory fetching, making the process faster and smoother.
  2. Row-Column Bundling: This technique is like reading a book in larger chunks instead of one word at a time. By grouping data more efficiently, it can be read faster from the flash memory, speeding up the AI's ability to understand and generate language.
The combination of these methods allows devices to run AI models up to twice the size of their available memory, according to the paper. This translates to a 4-5 times increase in speed on standard processors (CPUs) and an impressive 20-25 times speedup on graphics processors (GPUs). "This breakthrough is particularly crucial for deploying advanced LLMs in resource-limited environments, thereby expanding their applicability and accessibility," write the authors.
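A quick back-of-envelope calculation shows why that headroom matters on phone-class hardware. The figures below are illustrative assumptions, not numbers from the paper:

```python
# Illustrative arithmetic only; these figures are assumptions, not the paper's.
params = 7e9                    # a 7B-parameter model (Falcon 7B class)
bytes_per_param = 2             # fp16 weights
model_gb = params * bytes_per_param / 1e9   # ~14 GB of weights
dram_gb = 8                     # a generous phone-class DRAM budget

print(f"weights: {model_gb:.0f} GB vs DRAM: {dram_gb} GB")
# The weights alone exceed DRAM, but keeping roughly half resident and
# streaming the rest from flash on demand brings a model nearly twice the
# size of available memory within reach.
```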


Faster AI on iPhone

The breakthrough in AI efficiency opens new possibilities for future iPhones, such as more advanced Siri capabilities, real-time language translation, and sophisticated AI-driven features in photography and augmented reality. The technology also sets the stage for iPhones to run complex AI assistants and chatbots on-device, something Apple is already said to be working on.

Apple's work on generative AI could eventually be incorporated into its Siri voice assistant. Apple in February 2023 held an AI summit and briefed employees on its large language model work. According to Bloomberg, Apple is aiming for a smarter version of Siri that's deeply integrated with AI. Apple is planning to update the way that Siri interacts with the Messages app, allowing users to field complex questions and auto-complete sentences more effectively. Beyond that, Apple is rumored to be planning to add AI to as many Apple apps as possible.


Apple GPT

Apple is reportedly developing its own generative AI model called "Ajax". Designed to rival the likes of OpenAI's GPT-3 and GPT-4, Ajax operates on 200 billion parameters, suggesting a high level of complexity and capability in language understanding and generation. Internally known as "Apple GPT," Ajax aims to unify machine learning development across Apple, suggesting a broader strategy to integrate AI more deeply into Apple's ecosystem.

As of the latest reports, Ajax is considered more capable than the earlier-generation GPT-3.5. However, it's also suggested that OpenAI's newer models may have advanced beyond Ajax's capabilities as of September 2023.

Both The Information and analyst Jeff Pu claim that Apple will have some kind of generative AI feature available on the iPhone and iPad around late 2024, which is when iOS 18 will be coming out. Pu said in October that Apple is building a few hundred AI servers in 2023, with more to come in 2024. Apple will reportedly offer a combination of cloud-based AI and AI with on-device processing.

 

bnew


Apple wants AI to run directly on its hardware instead of in the cloud

iPhone maker wants to catch up to its rivals when it comes to AI.

TIM BRADSHAW, FINANCIAL TIMES - 12/21/2023, 9:43 AM

The iPhone 15 Pro. (Credit: Apple)

Apple’s latest research about running large language models on smartphones offers the clearest signal yet that the iPhone maker plans to catch up with its Silicon Valley rivals in generative artificial intelligence.

The paper, entitled “LLM in a Flash,” offers a “solution to a current computational bottleneck,” its researchers write.

Its approach “paves the way for effective inference of LLMs on devices with limited memory,” they said. Inference refers to how large language models, the AI systems that power apps like ChatGPT, generate responses to users’ queries. Chatbots and LLMs normally run in vast data centers with much greater computing power than an iPhone.

The paper was published on December 12 but caught wider attention after Hugging Face, a popular site for AI researchers to showcase their work, highlighted it late on Wednesday. It is the second Apple paper on generative AI this month and follows earlier moves to enable image-generating models such as Stable Diffusion to run on its custom chips.

Device manufacturers and chipmakers are hoping that new AI features will help revive the smartphone market, which has had its worst year in a decade, with shipments falling an estimated 5 percent, according to Counterpoint Research.

Despite launching one of the first virtual assistants, Siri, back in 2011, Apple has been largely left out of the wave of excitement about generative AI that has swept through Silicon Valley in the year since OpenAI launched its breakthrough chatbot ChatGPT. Apple has been viewed by many in the AI community as lagging behind its Big Tech rivals, despite hiring Google’s top AI executive, John Giannandrea, in 2018.

While Microsoft and Google have largely focused on delivering chatbots and other generative AI services over the Internet from their vast cloud computing platforms, Apple’s research suggests that it will instead focus on AI that can run directly on an iPhone.

Apple’s rivals, such as Samsung, are gearing up to launch a new kind of “AI smartphone” next year. Counterpoint estimated more than 100 million AI-focused smartphones would be shipped in 2024, with 40 percent of new devices offering such capabilities by 2027.

The head of the world’s largest mobile chipmaker, Qualcomm chief executive Cristiano Amon, forecast that bringing AI to smartphones would create a whole new experience for consumers and reverse declining mobile sales.

“You’re going to see devices launch in early 2024 with a number of generative AI use cases,” he told the Financial Times in a recent interview. “As those things get scaled up, they start to make a meaningful change in the user experience and enable new innovation which has the potential to create a new upgrade cycle in smartphones.”

More sophisticated virtual assistants will be able to anticipate users’ actions such as texting or scheduling a meeting, he said, while devices will also be capable of new kinds of photo editing techniques.

Google this month unveiled a version of its new Gemini LLM that will run “natively” on its Pixel smartphones.

Running the kind of large AI model that powers ChatGPT or Google’s Bard on a personal device brings formidable technical challenges, because smartphones lack the huge computing resources and energy available in a data center. Solving this problem could mean that AI assistants respond more quickly than they do from the cloud and even work offline.

Ensuring that queries are answered on an individual’s own device without sending data to the cloud is also likely to bring privacy benefits, a key differentiator for Apple in recent years.

“Our experiment is designed to optimize inference efficiency on personal devices,” its researchers said. Apple tested its approach on models including Falcon 7B, a smaller version of an open source LLM originally developed by the Technology Innovation Institute in Abu Dhabi.

Optimizing LLMs to run on battery-powered devices has been a growing focus for AI researchers. Academic papers are not a direct indicator of how Apple intends to add new features to its products, but they offer a rare glimpse into its secretive research labs and the company’s latest technical breakthroughs.

“Our work not only provides a solution to a current computational bottleneck but also sets a precedent for future research,” wrote Apple’s researchers in the conclusion to their paper. “We believe as LLMs continue to grow in size and complexity, approaches like this work will be essential for harnessing their full potential in a wide range of devices and applications.”

Apple did not immediately respond to a request for comment.
 

Wargames

After the Google video fiasco, I would need to see this released before I believe it. Also, the other issue I have is: will this be preloaded data, or will you be able to give the phone information and have it access it as needed?

If you can, say, fill a folder on your phone with files and then have the phone access those files exclusively, then I think this could challenge ChatGPT.
 

bnew


Apple Explores A.I. Deals With News Publishers

The company has discussed multiyear deals worth at least $50 million to train its generative A.I. systems on publishers’ news articles.


The negotiations mark one of the earliest examples of how Apple is trying to catch up to rivals in the race to develop generative A.I. (Credit: Karsten Moran for The New York Times)

By Benjamin Mullin and Tripp Mickle

Benjamin Mullin covers the companies behind news and entertainment from New York. Tripp Mickle covers Apple from San Francisco.

Dec. 22, 2023

Apple has opened negotiations in recent weeks with major news and publishing organizations, seeking permission to use their material in the company’s development of generative artificial intelligence systems, according to four people familiar with the discussions.

The technology giant has floated multiyear deals worth at least $50 million to license the archives of news articles, said the people with knowledge of talks, who spoke on the condition of anonymity to discuss sensitive negotiations. The news organizations contacted by Apple include Condé Nast, publisher of Vogue and The New Yorker; NBC News; and IAC, which owns People, The Daily Beast and Better Homes and Gardens.

The negotiations mark one of the earliest examples of how Apple is trying to catch up to rivals in the race to develop generative A.I., which allows computers to create images and chat like a human. The technology, which artificial intelligence experts refer to as neural networks, is built by using troves of photos or digital text to recognize patterns. By analyzing thousands of cat photos, for instance, a computer can learn to recognize a cat.

Microsoft, OpenAI, Google, Meta and other companies have released chatbots and other products built with the technology. The tools could change the way people work and generate billions of dollars in sales.

But Apple has been absent from the public discussion of A.I. Its virtual assistant, Siri, has remained largely stagnant in the decade since its release.

A spokeswoman for Apple declined to comment. During a call with analysts last month, Tim Cook, the company’s chief executive, said Apple has work “going on” connected to A.I. but declined to elaborate.

Some of the publishers contacted by Apple were lukewarm on the overture. After years of on-again-off-again commercial deals with tech companies like Meta, the owner of Facebook, publishers have grown wary of jumping into business with Silicon Valley.

Several publishing executives were concerned that Apple’s terms were too expansive, according to three people familiar with the negotiations. The initial pitch covered broad licensing of publishers’ archives of published content, with publishers potentially on the hook for any legal liabilities that could stem from Apple’s use of their content.

Apple was also vague about how it intended to apply generative A.I. to the news industry, the people said, a potential competitive risk given Apple’s substantial audience for news on its devices.

Still, some news executives were optimistic that Apple's approach might eventually lead to a meaningful partnership. Two people familiar with the discussions struck a positive note on the long-term prospects of a deal, contrasting Apple's approach of asking for permission with the behavior of other artificial intelligence companies, which have been accused of seeking licensing deals with news organizations only after having already used their content to train their generative models.

In recent years, Apple executives have been debating how to accumulate the data needed to build generative A.I. products, according to two people familiar with the work. Some of its rivals have been accused of taking written material from across the internet without the permission of the artists, writers and coders who created it, leading to several copyright lawsuits.

Apple has been reluctant to take information from the internet, partly because of its commitment to privacy. After it acquired the social analytics start-up Topsy in 2013, Apple’s leadership asked that Topsy stop collecting information from Twitter, saying that doing so violated the company’s policy against collecting data on Apple customers, who might also post on the social media site, these two people said.

The explosion of artificial intelligence has raised alarms among news executives, many of whom are concerned that generative A.I. products like OpenAI's ChatGPT could draw in readers who would otherwise consume their news on the publishers' own platforms, which depend on subscribers and advertisers.

Print news organizations, which decades ago saw their lucrative classifieds business demolished by online competitors, have been particularly wary about striking deals with A.I. organizations, engaging cautiously with an eye toward preserving their existing businesses.

In a statement, an OpenAI spokesman said that the company respects “the rights of content creators and owners and believes they should benefit from A.I. technology,” citing its recent deals with the American Journalism Project and the German publisher Axel Springer.

“We’re optimistic we will continue to find mutually beneficial ways to work together in support of a rich news ecosystem,” the OpenAI spokesman said.
 

bnew


Apple quietly released an open source multimodal LLM in October

Sharon Goldman @sharongoldman

December 23, 2023 6:44 AM

With little fanfare, researchers from Apple and Columbia University released an open source multimodal LLM, called Ferret, in October 2023. At the time, the release — which included the code and weights, but for research use only, not a commercial license — did not receive much attention. But now that may be changing: with open source models from Mistral making recent headlines and Google’s Gemini model coming to the Pixel 8 Pro and eventually to Android, there has been increased chatter about the potential for local LLMs to power small devices.

That chatter increased recently because Apple announced it had made a key breakthrough in deploying LLMs on iPhones: The company released two new research papers introducing new techniques for 3D avatars and efficient language model inference. The advancements were hailed as potentially enabling more immersive visual experiences and allowing complex AI systems to run on consumer devices such as the iPhone and iPad.

Many in the AI community who belatedly noticed the Ferret release celebrated Apple’s unexpected entry into the open source LLM landscape, especially since Apple has traditionally been known as a “walled garden.”

This morning, Bart de Witte, who runs a European non-profit focused on open source AI in medicine, posted on X: “I somehow missed this,” he wrote. “Apple joined the open source AI community in October. Ferret’s introduction is a testament to Apple’s commitment to impactful AI research, solidifying its place as a leader in the multimodal AI space…ps: I’m looking forward to the day when Local Large Language Models (LLLMs) run on my iPhone as an integrated service of a re-designed iOS.”
 

bnew






Apple's 'Ferret' is a new open-source machine learning model



Malcolm Owen | Dec 24, 2023

A ferret in the wild [Pixabay/Michael Sehlmeyer]



Researchers from Apple and Columbia University quietly pushed an open-source multimodal LLM in October, a research release called "Ferret" that can use regions of images for queries.

The October release on GitHub largely flew under the radar, arriving with no announcement or fanfare. The code for Ferret was released alongside Ferret-Bench on October 30, with checkpoint releases following on December 14.

While it didn't receive much attention at first, the release became more of a big deal to AI researchers on Saturday, reports VentureBeat. Bart De Witte, operator of an AI-in-medicine non-profit, posted to X about the "missed" release, calling it a "testament to Apple's commitment to impactful AI research."

Ferret's release is under a non-commercial license, so it cannot be commercialized in its current state. However, there's always a possibility of it being used in a future Apple product or service in some way.

An October tweet from Apple AI/ML research scientist Zhe Gan describes Ferret as a system that can "refer and ground anything anywhere at any granularity" in an image. It can also do so using any shape of region within an image.

In simpler terms, the model can examine a region drawn on an image, determine the elements within it that are relevant to a user's query, identify them, and draw bounding boxes around the detected elements. It can then use those identified elements as part of a query, which it can respond to in a typical fashion.

For example, a user could highlight an animal in an image and ask the LLM what the animal is; the model could determine the creature's species and that the user is referring to an individual animal from a group. It could then use the context of other items detected in the image to offer up further responses.
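As a sketch of that interaction pattern, the snippet below shows what a refer-and-ground query could look like. The `model.ask` call and every name here are hypothetical, invented purely for illustration; the real interface is the one published in Ferret's GitHub repository.

```python
# Hypothetical refer-and-ground flow (invented API for illustration only).
from dataclasses import dataclass


@dataclass
class RegionQuery:
    points: list[tuple[float, float]]  # free-form region: box, polygon, or stroke
    prompt: str                        # e.g., "What animal is this?"


def refer_and_ground(model, image, query: RegionQuery):
    """Ask about a user-drawn region and get an answer plus grounded boxes."""
    # "Refer": the model pools visual features inside the free-form region.
    # "Ground": the answer includes bounding boxes for objects it mentions.
    answer, boxes = model.ask(image, region=query.points, prompt=query.prompt)
    return answer, boxes
```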



The release is important to researchers, as it shows Apple is keen to be more open with its AI work, rather than its usual secretive stance.

There's also the problem of infrastructure for Apple: while it is working to increase the number of AI servers it owns, it may not currently have the scale to go toe-to-toe with ChatGPT. Apple could work with other firms to expand its capabilities, or it could take the route it has just taken and release an open-source model.

In one interesting detail from the GitHub release, Reddit's r/Apple spotted that Ferret is "trained on 8 A100 GPUs with 80GB memory." Given Apple's history with Nvidia GPU support, this was seen as a rare acknowledgment of the GPU producer.
 