bnew

Veteran
Joined
Nov 1, 2015
Messages
55,698
Reputation
8,224
Daps
157,233



Tiny matrix multiplication ASIC for 1.58-bit (aka ternary) weight LLMs

This work is inspired by the paper The Era of 1-bit LLMs: All Large Language Models are in 1.58 Bits, which reduces the weights of a large language model to the ternary representation {-1, 0, 1}.

Preliminary performance results, based on simulations:

  • eFabless 130nm ASIC - 1 GigaOPS per 0.2 square millimeter of chip area @ 50 MHz
  • $99 FPGA - 0.6 TeraOPS @ 500 MHz (thanks to @samsoniuk for quick synthesis!)


Observation: doubling the chip area yields a 50% increase in performance, given constant memory bandwidth and clock frequency.




Intent & ASIC

This implementation is an exploration of the design space: the intent is to measure how chip area, precision, and memory bandwidth affect the performance of systolic arrays and AI accelerators.

This ASIC will be fabricated using eFabless 130 nm process via Tiny Tapeout.




Considerations

This implementation takes the following considerations into account:

  • Extremely limited chip area: ~0.1 to 0.3 square millimeters.
  • Extremely low memory bandwidth, limited to ~100 MB/s by the 16 IO pins available in Tiny Tapeout.
  • The ability to scale compute regardless of memory bandwidth.
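Taken together, the considerations above are what make ternary weights attractive in hardware: with weights restricted to {-1, 0, 1}, every multiply collapses into an add, a subtract, or a skip, so a systolic array needs only adders rather than multipliers. A minimal software sketch of the arithmetic (illustrative only, not derived from this ASIC's actual RTL):

```python
import numpy as np

def ternary_matmul(x, w):
    """Multiply activations x by ternary weights w, with entries in {-1, 0, 1}.

    Because each weight is -1, 0, or +1, every "multiplication" is really an
    add, a subtract, or a no-op -- which is why a tiny array can implement
    matmul with adders instead of hardware multipliers.
    """
    assert set(np.unique(w)).issubset({-1, 0, 1})
    out = np.zeros((x.shape[0], w.shape[1]), dtype=np.int64)
    for j in range(w.shape[1]):
        col = w[:, j]
        # accumulate +x where the weight is +1, -x where it is -1
        out[:, j] = x[:, col == 1].sum(axis=1) - x[:, col == -1].sum(axis=1)
    return out

x = np.array([[2, 3, 5]])
w = np.array([[ 1, -1],
              [ 0,  1],
              [-1,  0]])
print(ternary_matmul(x, w))  # same result as x @ w
```

The hardware analogue replaces the per-column loop with a grid of accumulate cells, which is where the area-versus-throughput trade-off measured here comes from.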

 







1/11
Claude 3.5 Sonnet transformed a research paper into an interactive learning dashboard in just 30 seconds.

It goes beyond the capabilities of GPT-4o, Gemini Pro, Llama and other existing LLMs.

Education will never be the same again with AI.

2/11
Find all the awesome LLM Apps demo with RAG in the following Github Repo.

P.S: Don't forget to star the repo to show your support 🌟
GitHub - Shubhamsaboo/awesome-llm-apps: Collection of awesome LLM apps with RAG using OpenAI, Anthropic, Gemini and opensource models.

3/11
Stay tuned to learn more about my experiments with Claude Sonnet 3.5.

If you find this useful, RT to share it with your friends.

Don't forget to follow me @Saboo_Shubham_ for more such LLMs tips and tutorials.

4/11
This is a great use case... the best way to understand a concept is through interactive learning

5/11
Couldn't agree more.

6/11
This is seriously good.

7/11
Thank you!

8/11
Would be great to export it to an LMS directly.

9/11
That's a great idea. It could be the beginning of a new kind of product.

10/11
awesome

11/11
Thanks!


 


Perplexity’s grand theft AI



More like Perfidy?

By Elizabeth Lopatto, a reporter who writes about tech, money, and human behavior. She joined The Verge in 2014 as science editor. Previously, she was a reporter at Bloomberg.

Jun 27, 2024, 4:32 PM EDT




What, exactly, is Perplexity’s innovation? Image: The Verge

In every hype cycle, certain patterns of deceit emerge. In the last crypto boom, it was “ponzinomics” and “rug pulls.” In self-driving cars, it was “just five years away!” In AI, it’s seeing just how much unethical shyt you can get away with.

Perplexity is basically a rent-seeking middleman on high-quality sources

Perplexity, which is in ongoing talks to raise hundreds of millions of dollars, is trying to create a Google Search competitor. Perplexity isn’t trying to create a “search engine,” though — it wants to create an “answer engine.” The idea is that instead of combing through a bunch of results to answer your own question with a primary source, you’ll simply get an answer Perplexity has found for you. “Factfulness and accuracy is what we care about,” Perplexity CEO Aravind Srinivas told The Verge.

That means that Perplexity is basically a rent-seeking middleman on high-quality sources. The value proposition on search, originally, was that by scraping the work done by journalists and others, Google’s results sent traffic to those sources. But by providing an answer, rather than pointing people to click through to a primary source, these so-called “answer engines” starve the primary source of ad revenue — keeping that revenue for themselves. Perplexity is among a group of vampires that include Arc Search and Google itself.

But Perplexity has taken it a step further with its Pages product, which creates a summary “report” based on those primary sources. It’s not just quoting a sentence or two to directly answer a user’s question — it’s creating an entire aggregated article, and it’s accurate in the sense that it is actively plagiarizing the sources it uses.

Forbes discovered Perplexity was dodging the publication’s paywall in order to provide a summary of an investigation the publication did of former Google CEO Eric Schmidt’s drone company. Though Forbes has a metered paywall on some of its work, the premium work — like that investigation — is behind a hard paywall. Not only did Perplexity somehow dodge the paywall but it barely cited the original investigation and ganked the original art to use for its report. (For those keeping track at home, the art thing is copyright infringement.)

“Someone else did it” is a fine argument for a five-year-old

Aggregation is not a particularly new phenomenon — but the scale at which Perplexity can aggregate, along with the copyright violation of using the original art, is pretty, hmm, remarkable. In an attempt to calm everyone down, the company’s chief business officer went to Semafor to say Perplexity was developing revenue sharing plans with publications, and aw gee whiz, how come everyone was being so mean to a product still in development?

At this point, Wired jumped in, confirming a finding from Robb Knight: Perplexity’s scraping of Forbes’ work wasn’t an exception. In fact, Perplexity has been ignoring the robots.txt code that explicitly asks web crawlers not to scrape the page. Srinivas responded in Fast Company that actually, Perplexity wasn’t ignoring robots.txt; it was just using third-party scrapers that ignored it. Srinivas declined to name the third-party scraper and didn’t commit to asking that crawler to stop violating robots.txt.

“Someone else did it” is a fine argument for a five-year-old. And consider the response further. If Srinivas wanted to be ethical, he had some options here. Option one is to terminate the contract with the third-party scraper. Option two is to try to convince the scraper to honor robots.txt. Srinivas didn’t commit to either, and it seems to me, there’s a clear reason why. Even if Perplexity itself isn’t violating the code, it is reliant on someone else violating the code for its “answer engine” to work.

To add insult to injury, Perplexity plagiarized Wired’s article about it — even though Wired explicitly blocks Perplexity in its text file. The bulk of Wired’s article about the plagiarism is about legal remedies, but I’m interested in what’s going on here with robots.txt. It’s a good-faith agreement that has held up for decades now, and it’s falling apart thanks to unscrupulous AI companies — that’s right, Perplexity isn’t the only one — hoovering up just about anything that’s available in order to train their bullshyt models. And remember how Srinivas said he was committed to “factfulness?” I’m not sure that’s true, either: Perplexity is now surfacing AI-generated results and actual misinformation, Forbes reports.
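The robots.txt convention the article describes is machine-checkable, and Python's standard library even ships a parser for it. A quick sketch of the check a well-behaved crawler performs before fetching a page (the "ExampleBot" user agent and example.com URLs are illustrative, not Perplexity's actual crawler):

```python
from urllib.robotparser import RobotFileParser

# Parse a robots.txt body directly; a real crawler would first fetch
# https://example.com/robots.txt. These rules block a hypothetical
# "ExampleBot" from the entire site while allowing everyone else.
rp = RobotFileParser()
rp.parse([
    "User-agent: ExampleBot",
    "Disallow: /",
    "",
    "User-agent: *",
    "Allow: /",
])

print(rp.can_fetch("ExampleBot", "https://example.com/story"))  # False
print(rp.can_fetch("OtherBot", "https://example.com/story"))    # True
```

The protocol has no enforcement mechanism: a crawler that never runs this check (or outsources fetching to one that doesn't) sees no error, which is exactly the good-faith gap at issue here.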

To my ear, Srinivas was bragging about how charming and clever his lie was

We’ve seen a lot of AI giants engage in questionably legal and arguably unethical practices in order to get the data they want. In order to prove the value of Perplexity to investors, Srinivas built a tool to scrape Twitter by pretending to be an academic researcher using API access for research. “I would call my [fake academic] projects just like Brin Rank and all these kinds of things,” Srinivas told Lex Fridman on the latter’s podcast. I assume “Brin Rank” is a reference to Google co-founder Sergey Brin; to my ear, Srinivas was bragging about how charming and clever his lie was.

I’m not the one who’s telling you the foundation of Perplexity is lying to dodge established principles that hold up the web. Its CEO is. That’s clarifying about the actual value proposition of “answer engines.” Perplexity cannot generate actual information on its own and relies instead on third parties whose policies it abuses. The “answer engine” was developed by people who feel free to lie whenever it is more convenient, and that preference is necessary for how Perplexity works.

So that’s Perplexity’s real innovation here: shattering the foundations of trust that built the internet. The question is if any of its users or investors care.

Correction June 27th: Removes erroneous reference to Axios — the interview in question was with Semafor.
 


1/1
Exciting news from #WAVESUMMIT, Baidu’s deep learning developer conference in Beijing! Baidu CTO Haifeng Wang unveiled the ERNIE 4.0 Turbo. With its faster responses and superior performance, it advances the capabilities of the ERNIE 4.0 foundation. Plus, Baidu’s gen AI product ERNIE Bot has hit 300 million users, handling 500 million daily queries. The future is here!
#BaiduAI


 


1/1
WhatsApp beta for Android 2.24.14.7: what's new?

WhatsApp is working on a feature to choose the Meta AI Llama model, and it will be available in a future update!

WhatsApp beta for Android 2.24.14.7: what's new? | WABetaInfo










WhatsApp beta for Android 2.24.14.7: what’s new?





Jun 27, 2024

WhatsApp is rolling out a new update through the Google Play Beta Program, bringing the version up to 2.24.14.7.

What’s new in this update? WhatsApp is working on a feature to choose the Meta AI Llama model, and it will be available in a future update!


Common questions and answers:

  • Name of the feature? Meta Llama model
  • Status? Under development
  • Compatibility? WhatsApp is working on bringing a feature to choose the Meta AI Llama model to a future update of the app.
  • I’ve installed this update but I don’t have this feature. Why? This feature is under development, so it’s not ready for beta testers. However, it’s possible to discover a preview in this article.
  • Previous news? WhatsApp beta for iOS 24.13.10.71: what’s new? WhatsApp is rolling out a notification feature for media upload events!

META LLAMA MODEL

In the article about the WhatsApp beta for Android 2.24.9.9 update, we announced that WhatsApp started limited testing of Meta AI on WhatsApp in different countries. This new phase of testing aimed to further explore how users interact with Meta AI through its integration into the search bar for select users and an alternative entry point within the top app bar for users in India, while preserving privacy of these interactions. It appears that WhatsApp is now committed to further improving Meta AI through advanced models, as discovered in the latest WhatsApp beta for Android 2.24.14.7 update from the Google Play Store.


As you can see from the attached screenshot, WhatsApp is exploring a feature to allow users to choose which model to use for Meta Llama, scheduled for release in a future update of the app. This feature appears to be intended for AI conversations, as indicated in the description of the section. The default option is Llama 3-70B, but WhatsApp has plans to offer a preview of the latest Llama 3-405B model for handling more complex prompts in the future. It’s important to note that the Llama 3-405B model will be available for a limited number of prompts each week. Once users reach this limit, they will be able to continue their conversation using the default model.

In our opinion, users will gain more control over their AI interactions by choosing which model to use based on their specific needs. This flexibility will allow users to tailor their experience, opting for the default model, Llama 3-70B, for quicker and simpler prompts or selecting the advanced Llama 3-405B model for more complex queries. Users will be able to leverage the faster response times of the default model for everyday use while choosing the latest model’s advanced capabilities for more intricate tasks, ensuring they get the best possible assistance for any situation.

A feature to choose the Meta AI Llama model is under development and it will be available in a future update. We will keep you updated with an additional article as soon as we have more details to share about this feature.

 


China AI Startups Head to Singapore in Bid for Global Growth


  • AI founders lured by access to foreign capital, technologies
  • Young AI companies finding it hard to go global from China
Inside the TikTok office in Singapore in 2023.

Photographer: Ore Huiying/Bloomberg

By Jane Zhang and Saritha Rai

June 30, 2024 at 6:00 PM EDT

Updated on

July 1, 2024 at 6:15 AM EDT

When Wu Cunsong and Chen Binghui founded their artificial intelligence startup two years ago in Hangzhou, China, they quickly ran into obstacles, including a dearth of venture capital. This March, they did what scores of other Chinese AI firms have done and moved their company, Tabcut, 2,500 miles southwest to Singapore.

The business-friendly country offers Wu and Chen better access to global investors and customers at a time when elevated geopolitical tensions keep many US and international firms away from China. Equally crucial for an AI startup, they can buy Nvidia Corp.’s latest chips and other cutting-edge technologies in the politically neutral island nation, something that would have been impossible in China because of US export controls.

“We wanted to go to a place abundant with capital for financing, rather than a place where the availability of funds is rapidly diminishing,” Wu said in an interview.

Singapore is emerging as a favorite destination for Chinese AI startups seeking to go global. While the city-state — with an ethnic Chinese majority — has long attracted companies from China, AI entrepreneurs in particular are accelerating the shift because trade sanctions imposed by the US on their homeland block their access to the newest technologies.

A base in Singapore is also a way for companies to distance themselves from their Chinese origins, a move often called “Singapore-washing.” That’s an attempt to reduce scrutiny from customers and regulators in countries that are China’s political opponents, such as the US.

The strategy doesn’t always work: Beijing-based ByteDance Ltd. moved the headquarters for its TikTok business to Singapore, but the popular video service was still hit by a new US law requiring the sale or ban of its American operations over security concerns. Chinese fashion giant Shein, which also moved its base to Singapore, has faced intense criticism in the US and is now aiming to go public in London instead of New York.

But for AI startups, more is at stake than just perception. AI companies amass large amounts of data and rely on cutting-edge chips to train their systems, and if access is restricted the quality of their product will suffer. The US has blocked sales of the most sophisticated chips and other technologies to China, to prevent them from being used for military and other purposes. OpenAI, the American generative-AI leader, is curbing China’s access to its software tools.

China has also taken a strict approach to AI-generated content, trying to ensure it complies with the ruling Communist Party’s policies and propaganda. The country made one of the world’s first major moves to regulate the nascent technology last July, asking companies to register their algorithms with the government before they roll out consumer-facing services.

That means that AI developers “won’t be able to engage in free explorations if they are in China,” said a founder of consulting firm Linkloud, who asked to be identified only by his first name Adam because of the sensitivity of the subject. He estimated that 70% to 80% of Chinese software and AI startups target customers globally, with many now choosing to skip China altogether. Linkloud is building a community for Chinese AI entrepreneurs exploring global markets.

Singapore’s AI regulations are less stringent and it’s known for the ease of setting up a company. The country wants to be a bridge between entrepreneurs from Asia and the world, said Chan Ih-Ming, executive vice president of the Singapore Economic Development Board.

“Many businesses and startups, including Chinese ones, choose Singapore as their hub for Southeast Asia and see Singapore as a springboard to global markets,” he said. The city-state was home to more than 1,100 AI startups at the end of 2023, he said. While Singapore doesn’t disclose data by country, evidence of China-based AI companies setting up shop is mounting.

Jianfeng Lu. Source: Wiz Holdings Pte.

Jianfeng Lu is a pioneer of the trend, having moved to Singapore from the eastern Chinese city of Nanjing to establish his AI startup Wiz Holdings Pte. in 2019. With backing from Tiger Global, GGV Capital and Hillhouse Capital, he built its speech recognition AI engine from the ground up, and sold customer-service bots to clients in Latin America, Southeast Asia and northern Africa. He didn’t sell in China, a move his fellow founders term prescient.

He is now a sought-after mentor for his Chinese peers who want his advice on how to set up a business and settle in Singapore. An online chat group Lu runs for Chinese entrepreneurs wishing to relocate to the city-state has 425 members. (Not all are AI founders.)

“If you want to be a global startup, better begin as a global startup,” the 52-year-old entrepreneur said. “There’s complete predictability about how systems work here.”

Meanwhile, fundraising in China has become more difficult because of its slowing economy and rising tensions with the US, which is prompting global VC firms to reduce their exposure to the country.

Wu and Chen’s Tabcut had a frustrating and arduous experience finding backers in China, with local VC firms demanding financial and operating details for months before making a call, Wu said. Tabcut ended up going with Singapore-based Kamet Capital instead, raising $5.6 million from the firm late last year. The startup moved its global headquarters to the country in March, while launching a beta version of its AI video generating tool for global users.

Climind, a startup that builds large language models and productivity AI tools for professionals in the environmental, social and governance field, is preparing to move in the coming weeks from Hong Kong to Singapore, where its co-founder and chief technology officer Qian Yiming is already based. The company founded last year has a small team of 10.

Qian Yiming. Source: Climind

Besides the cultural and linguistic affinity, Singapore is attractive because its government offers help, including financial backing and technical support, Qian said over a video call. His company is among those that’s received funding from the state, and startup incubators also abound in the country, he said.

“Access to global markets is easy, the environment is good and politics is stable,” Qian said.

To be sure, some Chinese AI companies have scored early successes in their domestic market and remained there. China itself is pushing for AI, robotics and other deep tech startups to stay domiciled within the country and, eventually, list in the local stock markets. Beijing supports the most promising of them by backing them with capital, and providing low-interest loans and tax breaks.

But such companies will struggle to expand globally because their services are typically tailored for the Chinese audience and regulatory environment, said Yiu-Ting Tsoi, founding partner of the Hong Kong-headquartered HB Ventures, which invests in Chinese as well as regional tech and AI startups. The more successful an AI startup is in China, the more challenging it is for it to go global, said Tsoi, a former JPMorgan banker.

Karen Wong. Source: Climind

It’s a reversal from a decade ago when China’s technology giants like Alibaba Group Holding Ltd. and Didi Global Inc. aggressively expanded outside the country, amassing customers for their consumer-friendly apps. Now the escalating geopolitical tension means that young Chinese AI companies are increasingly having to choose whether to try to grow in China, under Chinese rules, or abroad — a combination of both is impossible.

“More regulations are coming out and navigating all of that becomes complicated,” said Karen Wong, 28, Climind’s chief executive officer. “From branding, PR, regulations and compliance angles, Singapore makes sense.”
 


Character.AI now allows users to talk with AI avatars over calls

Ivan Mehta

7:30 AM PDT • June 27, 2024

Comment

Image Credits: Character.ai

a16z-backed Character.AI said today that it is now allowing users to talk to AI characters over calls. The feature currently supports multiple languages, including English, Spanish, Portuguese, Russian, Korean, Japanese and Chinese.

The startup tested the calling feature ahead of today’s public launch. During that time, it said that more than 3 million users had made over 20 million calls. The company also noted that calls with AI characters can be useful for practicing language skills, giving mock interviews, or adding them to the gameplay of role-playing games.

Users can initiate calls with a user-generated AI character directly with a tap of a button. Character.AI said that this feature is built to make the calling experience smooth with reduced latency.

You can switch between calling and texting seamlessly, as well as stop the AI from talking through a “Tap to interrupt” option.

Earlier in March, Character.AI launched Character Voices, which lets users hear characters speak in one-on-one chats. The company said that, with the feature allowing users to create their own voices, there are now over 1 million voices in the app’s library.

The company’s new calling feature comes at a time when users on Reddit are complaining about their characters’ changing personalities. In response to these complaints, the startup told 404 Media that the company didn’t make any major changes, but some users could have been part of a test and noticed behavioral differences in characters because of that.

Earlier this week, The Information reported that Google is working on its own customized chatbot platform to take on Meta and Character.AI.
 


MIT robotics pioneer Rodney Brooks thinks people are vastly overestimating generative AI

Ron Miller

8:00 AM PDT • June 29, 2024

Comment

Image Credits: Paul Marotta / Getty Images

When Rodney Brooks talks about robotics and artificial intelligence, you should listen. Currently the Panasonic Professor of Robotics Emeritus at MIT, he also co-founded three key companies: Rethink Robotics, iRobot and his current endeavor, Robust.ai. Brooks also ran the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) for a decade starting in 1997.

In fact, he likes to make predictions about the future of AI and keeps a scorecard on his blog of how well he’s doing.

He knows what he’s talking about, and he thinks maybe it’s time to put the brakes on the screaming hype that is generative AI. Brooks thinks it’s impressive technology, but maybe not quite as capable as many are suggesting. “I’m not saying LLMs are not important, but we have to be careful [with] how we evaluate them,” he told TechCrunch.

He says the trouble with generative AI is that, while it’s perfectly capable of performing a certain set of tasks, it can’t do everything a human can, and humans tend to overestimate its capabilities. “When a human sees an AI system perform a task, they immediately generalize it to things that are similar and make an estimate of the competence of the AI system; not just the performance on that, but the competence around that,” Brooks said. “And they’re usually very over-optimistic, and that’s because they use a model of a person’s performance on a task.”

He added that the problem is that generative AI is not human or even human-like, and it’s flawed to try and assign human capabilities to it. He says people see it as so capable they even want to use it for applications that don’t make sense.

Brooks offers his latest company, Robust.ai, a warehouse robotics system, as an example of this. Someone suggested to him recently that it would be cool and efficient to tell his warehouse robots where to go by building an LLM for his system. In his estimation, however, this is not a reasonable use case for generative AI and would actually slow things down. It’s instead much simpler to connect the robots to a stream of data coming from the warehouse management software.

“When you have 10,000 orders that just came in that you have to ship in two hours, you have to optimize for that. Language is not gonna help; it’s just going to slow things down,” he said. “We have massive data processing and massive AI optimization techniques and planning. And that’s how we get the orders completed fast.”

Another lesson Brooks has learned when it comes to robots and AI is that you can’t try to do too much. You should solve a solvable problem where robots can be integrated easily.

“We need to automate in places where things have already been cleaned up. So the example of my company is we’re doing pretty well in warehouses, and warehouses are actually pretty constrained. The lighting doesn’t change with those big buildings. There’s not stuff lying around on the floor because the people pushing carts would run into that. There’s no floating plastic bags going around. And largely it’s not in the interest of the people who work there to be malicious to the robot,” he said.

Brooks explains that it’s also about robots and humans working together, so his company designed these robots for practical purposes related to warehouse operations, as opposed to building a human-looking robot. In this case, it looks like a shopping cart with a handle.

“So the form factor we use is not humanoids walking around — even though I have built and delivered more humanoids than anyone else. These look like shopping carts,” he said. “It’s got a handlebar, so if there’s a problem with the robot, a person can grab the handlebar and do what they wish with it,” he said.

After all these years, Brooks has learned that it’s about making the technology accessible and purpose-built. “I always try to make technology easy for people to understand, and therefore we can deploy it at scale, and always look at the business case; the return on investment is also very important.”

Even with that, Brooks says we have to accept that there are always going to be hard-to-solve outlier cases when it comes to AI, that could take decades to solve. “Without carefully boxing in how an AI system is deployed, there is always a long tail of special cases that take decades to discover and fix. Paradoxically all those fixes are AI complete themselves.”

Brooks adds that there’s this mistaken belief, mostly thanks to Moore’s law, that there will always be exponential growth when it comes to technology — the idea that if ChatGPT 4 is this good, imagine what ChatGPT 5, 6 and 7 will be like. He sees this flaw in that logic, that tech doesn’t always grow exponentially, in spite of Moore’s law.

He uses the iPod as an example. For a few iterations, it did in fact double in storage size from 10 all the way to 160GB. If it had continued on that trajectory, he figured out we would have an iPod with 160TB of storage by 2017, but of course we didn’t. The models being sold in 2017 actually came with 256GB or 160GB because, as he pointed out, nobody actually needed more than that.
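The arithmetic behind Brooks’s iPod example is easy to reproduce: 10 GB doubled once a year for 14 years is 10 × 2^14 = 163,840 GB, roughly the 160 TB he cites. A quick sketch of the naive projection (the annual-doubling assumption is exactly the flawed extrapolation Brooks is criticizing, not a real roadmap):

```python
def project_capacity(start_gb, doublings):
    """Capacity after n doublings, assuming exponential growth never stops."""
    return start_gb * 2 ** doublings

# 10 GB, one doubling per year, for 14 years:
gb = project_capacity(10, 14)
print(gb, "GB")        # 163840 GB
print(gb / 1024, "TB") # 160.0 TB
```

The projection fails for the same reason Brooks says AI extrapolation fails: the trend stops once demand, physics, or economics no longer support it.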

Brooks acknowledges that LLMs could help at some point with domestic robots, where they could perform specific tasks, especially with an aging population and not enough people to take care of them. But even that, he says, could come with its own set of unique challenges.

“People say, ‘Oh, the large language models are gonna make robots be able to do things they couldn’t do.’ That’s not where the problem is. The problem with being able to do stuff is about control theory and all sorts of other hardcore math optimization,” he said.

Brooks explains that this could eventually lead to robots with useful language interfaces for people in care situations. “It’s not useful in the warehouse to tell an individual robot to go out and get one thing for one order, but it may be useful for eldercare in homes for people to be able to say things to the robots,” he said.
 