bnew

Veteran
Joined
Nov 1, 2015
Messages
61,775
Reputation
9,318
Daps
169,668

1/42
@reach_vb
"DeepSeek-R1-Distill-Qwen-1.5B outperforms GPT-4o and Claude-3.5-Sonnet on math benchmarks with 28.9% on AIME and 83.9% on MATH."

1.5B did WHAT?



GhvJ-_XWYAAHmxB.png


2/42
@reach_vb
repo:

deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B · Hugging Face



3/42
@thegenioo
but is it better at coding compared to sonnet new?



4/42
@reach_vb
No, the R1 series models aren’t better than V3 at coding; that’s what DeepSeek will likely work on next

They are still pretty powerful tho



5/42
@iatharvkulkarni
How do you run these models? I see that you post a lot of hugging face models, how do you personally run them? Any tool that would help me get going as quickly as possible locally?



6/42
@reach_vb
llama.cpp or transformers, let me make a quick notebook actually



7/42
@leo_grundstrom
How fast is it?



8/42
@reach_vb
Reasonably fast on an old T4, try it out yourself:

[Quoted tweet]
Try out R1 Distill Qwen 1.5B in a FREE Google Colab! 🔥

The vibes are looking gooood!


https://video.twimg.com/ext_tw_video/1881377422135676928/pu/vid/avc1/1670x1080/wUafm6PII4xXizvR.mp4

9/42
@antonio_spie
Can SOMEONE PLEASE TELL ME HOW MANY GBS TO INSTALL LOCALLY??



10/42
@reach_vb
You can try the 1.5B directly here on a free Google colab

[Quoted tweet]
Try out R1 Distill Qwen 1.5B in a FREE Google Colab! 🔥

The vibes are looking gooood!


https://video.twimg.com/ext_tw_video/1881377422135676928/pu/vid/avc1/1670x1080/wUafm6PII4xXizvR.mp4

11/42
@ArpinGarre66002
Dubious



12/42
@reach_vb
Ha, the vibes are strong, I don’t care much about the benchmark, but for a 1.5B it’s pretty strong, try it out yourself:

[Quoted tweet]
Try out R1 Distill Qwen 1.5B in a FREE Google Colab! 🔥

The vibes are looking gooood!


https://video.twimg.com/ext_tw_video/1881377422135676928/pu/vid/avc1/1670x1080/wUafm6PII4xXizvR.mp4

13/42
@gordic_aleksa
overfit is the word



14/42
@reach_vb
Vibe checks look pretty good tho, been playing with it on Colab - not sonnet or 4o like but deffo pretty strong for a 1.5B

[Quoted tweet]
Try out R1 Distill Qwen 1.5B in a FREE Google Colab! 🔥

The vibes are looking gooood!


https://video.twimg.com/ext_tw_video/1881377422135676928/pu/vid/avc1/1670x1080/wUafm6PII4xXizvR.mp4

15/42
@CEOofFuggy
I doubt it's good in general tho, but I'll definitely have to try 14B version etc.



16/42
@reach_vb
think 32B or 70B would be golden



17/42
@snowclipsed
@vikhyatk what if you used this as the text model



18/42
@reach_vb
that would be fire - but I doubt you'd get as much benefit from this, model is a yapper



19/42
@victor_explore


[Quoted tweet]
DeepSeek today 😎


https://video.twimg.com/ext_tw_video/1881340096995364864/pu/vid/avc1/720x720/yN45olzqLvBZt_f_.mp4

20/42
@reach_vb
hahaha, how do you create these, it's amazing!



21/42
@AILeaksAndNews
We are accelerating quickly

3.5 sonnet that can run locally



22/42
@reach_vb
maybe the 32B/ 70B is at that level, I doubt 1.5B will be at that level haha



23/42
@edwardcfrazer
How fast is it 👀



24/42
@reach_vb
On their API it’s pretty fast! 🏎️



25/42
@anushkmittal
1.5b? more like 1.5 based



26/42
@reach_vb
hahahaha!



27/42
@Yuchenj_UW
Unbelievable.

We will have super smart 1B models in the future, running locally on our phone.



28/42
@reach_vb
I call it Baby AGI 😂

[Quoted tweet]
Try out R1 Distill Qwen 1.5B in a FREE Google Colab! 🔥

The vibes are looking gooood!


https://video.twimg.com/ext_tw_video/1881377422135676928/pu/vid/avc1/1670x1080/wUafm6PII4xXizvR.mp4

29/42
@dhruv2038
Yeah this is sus.



30/42
@reach_vb
gotta vibe check it ofc



31/42
@ftmoose
lol



GhwOr3BW4AAes5B.png


32/42
@reach_vb
I love the way it thinks 😂



33/42
@_ggLAB
yet cannot answer historical events.



GhveofdaQAESQqp.png


34/42
@reach_vb
literally doesn't matter, as long as it works for your own use-cases.



35/42
@AntDX316
👍



36/42
@nooriefyi
this is huge. parameter count is so last year.



37/42
@baileygspell
"few shot prompts degrade peformance" def overfit



38/42
@seo_leaders
Going to have to give that model a run up for sure



39/42
@priontific
I'm so psyched, holy moly - I think we can squeeze so much out of speculative drafting w/ these tiny models. Results in a few hours once I've finished downloading -- stay tuned!!
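For context on the speculative drafting idea above: a small draft model proposes several tokens ahead, and the large target model verifies them in one pass, keeping the longest agreed-upon prefix plus one correction, so the output matches what the target alone would produce while most tokens come cheap. A toy sketch of the greedy variant (the deterministic character "models" and function names here are illustrative stand-ins, not real LLMs):

```python
def speculative_generate(target, draft, prompt, k=4, steps=6):
    """Greedy speculative decoding: the draft proposes k tokens,
    the target keeps the longest prefix it agrees with, then adds
    one token of its own. The result always equals target-only
    greedy decoding."""
    seq = list(prompt)
    for _ in range(steps):
        # 1) draft proposes k tokens autoregressively (cheap)
        ctx, proposal = seq[:], []
        for _ in range(k):
            tok = draft(ctx)
            proposal.append(tok)
            ctx.append(tok)
        # 2) target verifies the proposal position by position
        #    (in a real LLM this is one parallel forward pass)
        ctx = seq[:]
        for tok in proposal:
            if target(ctx) != tok:
                break
            ctx.append(tok)
        seq = ctx
        # 3) target contributes one token past the accepted prefix
        seq.append(target(seq))
    return "".join(seq)

# Deterministic stand-in "models": each returns the next character
# of a fixed string given the context so far.
TARGET_TEXT = "speculative decoding works"
DRAFT_TEXT = "speculative decoding wins!"  # diverges near the end
target = lambda ctx: TARGET_TEXT[len(ctx)] if len(ctx) < len(TARGET_TEXT) else "."
draft = lambda ctx: DRAFT_TEXT[len(ctx)] if len(ctx) < len(DRAFT_TEXT) else "."

out = speculative_generate(target, draft, "spec")
print(out)  # speculative decoding work
```

Production implementations (e.g. in llama.cpp) verify against token distributions rather than greedy picks, but the accept-prefix-then-correct loop has the same shape.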



40/42
@ionet
at first it seemed like humans being outperformed like that by AI, now AI doing the same to other AI 😂😂



41/42
@joysectomy
Western cultures are bad at math, Eastern cultures teach math in a bottoms up way. Wonder how much of a factor that is in the consistent perf gap between these models



42/42
@ChaithanyaK42
This is awesome 👌




To post tweets in this format, more info here: https://www.thecoli.com/threads/tips-and-tricks-for-posting-the-coli-megathread.984734/post-52211196
 




Someone bought the domain ‘OGOpenAI’ and redirected it to a Chinese AI lab


Maxwell Zeff

8:56 PM PST · January 22, 2025



A software engineer has bought the website “OGOpenAI.com” and redirected it to DeepSeek, a Chinese AI lab that’s been making waves in the open source AI world lately.

Software engineer Ananay Arora tells TechCrunch that he bought the domain name for “less than a Chipotle meal,” and that he plans to sell it for more.

The move was an apparent nod to how DeepSeek releases cutting-edge open AI models, just as OpenAI did in its early years. DeepSeek’s models can be used offline and for free by any developer with the necessary hardware, similar to older OpenAI models like Point-E and Jukebox.

DeepSeek caught the attention of AI enthusiasts last week when it released an open version of its DeepSeek-R1 model, which the company claims performs better than OpenAI’s o1 on certain benchmarks. Outside of models such as Whisper, OpenAI rarely releases its flagship AI in an “open” format these days, drawing criticism from some in the AI industry. In fact, OpenAI’s reticence to release its most powerful models is cited in a lawsuit from Elon Musk, who claims that the startup isn’t staying true to its original nonprofit mission.

just found the actual openai https://t.co/wEF0kRNfLA

— Ananay (@ananayarora)
January 22, 2025

Arora says he was inspired by a now-deleted post on X from Perplexity’s CEO, Aravind Srinivas, comparing DeepSeek to OpenAI in its more “open” days. “I thought, hey, it would be cool to have [the] domain go to DeepSeek for fun,” Arora told TechCrunch via DM.

DeepSeek joins Alibaba’s Qwen in the list of Chinese AI labs releasing open alternatives to OpenAI’s models.

The American government has tried to curb China’s AI labs for years with chip export restrictions, but it may need to do more if the latest AI models coming out of the country are any indication.

Topics

AI, deepseek, domain, domain name, ogopenai.com, open source ai
 

Hugging Face claims its new AI models are the smallest of their kind


A team at AI dev platform Hugging Face has released what they’re claiming are the smallest AI models that can analyze images, short videos, and text.

The models, SmolVLM-256M and SmolVLM-500M, are designed to work well on “constrained devices” like laptops with around 1GB of RAM or less. The team says they’re also ideal for developers trying to process large amounts of data very cheaply.

SmolVLM-256M and SmolVLM-500M are just 256 million parameters and 500 million parameters in size, respectively. (Parameters roughly correspond to a model’s problem-solving abilities, such as its performance on math tests.) Both models can perform tasks like describing images or video clips and answering questions about PDFs and the elements within them, including scanned text and charts.

To train SmolVLM-256M and SmolVLM-500M, the Hugging Face team used The Cauldron, a collection of 50 “high-quality” image and text datasets, and Docmatix, a set of file scans paired with detailed captions. Both were created by Hugging Face’s M4 team, which develops multimodal AI technologies.

smoller_vlm_benchmarks.png

Benchmarks comparing the new SmolVLM models to other multimodal models. Image Credits: SmolVLM

The team claims that both SmolVLM-256M and SmolVLM-500M outperform a much larger model, Idefics 80B, on benchmarks including AI2D, which tests the ability of models to analyze grade-school-level science diagrams. SmolVLM-256M and SmolVLM-500M are available on the web as well as for download from Hugging Face under an Apache 2.0 license, meaning they can be used without restrictions.

Small models like SmolVLM-256M and SmolVLM-500M may be inexpensive and versatile, but they can also contain flaws that aren’t as pronounced in larger models. A recent study from Google DeepMind, Microsoft Research, and the Mila research institute in Quebec found that many small models perform worse than expected on complex reasoning tasks. The researchers speculated that this could be because smaller models recognize surface-level patterns in data, but struggle to apply that knowledge in new contexts.

Topics

AI, Hugging Face, models, multimodal models, open models, open source ai, SmolVLM, vlms
Kyle-Wiggers.jpg
Kyle Wiggers, Senior Reporter, Enterprise. Kyle Wiggers is a senior reporter at TechCrunch with a special interest in artificial intelligence. His writing has appeared in VentureBeat and Digital Trends, as well as a range of gadget blogs including Android Police, Android Authority, Droid-Life, and XDA-Developers. He lives in Brooklyn with his partner, a piano educator, and dabbles in piano himself occasionally, if mostly unsuccessfully.

 

Perplexity launches Sonar, an API for AI search

Perplexity on Tuesday launched an API service called Sonar, allowing enterprises and developers to build the startup’s generative AI search tools into their own applications.

“While most generative AI features today have answers informed only by training data, this limits their capabilities,” Perplexity wrote in a blog post. “To optimize for factuality and authority, APIs require a real-time connection to the Internet, with answers informed by trusted sources.”

To start, Perplexity is offering two tiers that developers can choose from: a base version that’s cheaper and faster, Sonar, and a pricier version that’s better for tough questions, Sonar Pro. Perplexity says the Sonar API also gives enterprises and developers the ability to customize the sources its AI search engine pulls from.

Introducing Sonar: Perplexity’s API. Sonar is the most affordable search API product on the market. Use it to build generative search, powered by real-time information and citations, into your apps. We’re also offering a Pro version with deeper functionality. pic.twitter.com/CWpVUUKYtW

— Perplexity (@perplexity_ai)
January 21, 2025

With the launch of its API, Perplexity is making its AI search engine available in more places than just its app and website. Perplexity says that Zoom, among other companies, is already using Sonar to power an AI assistant for its video conferencing platform. Sonar is allowing Zoom’s AI chatbot to give real-time answers, informed by web searches with citations, without requiring users to leave the video chat window.

Sonar could also give Perplexity another source of revenue, which could be particularly important to the startup’s investors. Perplexity only offers a subscription service for unlimited access to its AI search engine and some additional features. However, the tech industry has slashed prices to access AI tools via APIs in the last year, and Perplexity claims to be offering the cheapest AI search API on the market via Sonar.

The base version of Sonar offers a cheaper and quicker version of the company’s AI search tools. Sonar’s base version has flat pricing and uses a lightweight model. It costs $5 for every 1,000 searches, plus $1 for every 750,000 words you type into the AI model (roughly 1 million input tokens), and another $1 for every 750,000 words the model spits out (roughly 1 million output tokens).

The pricier Sonar Pro gives more-detailed answers and is capable of handling more-complex questions. This version will run multiple searches on top of a user prompt, meaning the pricing could be more unpredictable. Perplexity also says this version offers twice as many citations as the base version of Sonar. Sonar Pro costs $5 for every 1,000 searches, plus $3 for every 750,000 words you type into the AI model (roughly 1 million input tokens), and $15 for every 750,000 words the model spits out (roughly 1 million output tokens).
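Putting the quoted rates together, a back-of-envelope cost estimator (the function name and tier labels are illustrative; the per-search and per-token rates are the ones listed above):

```python
def sonar_cost_usd(searches, input_tokens, output_tokens, tier="base"):
    """Estimate a Sonar bill from the rates quoted above.

    Both tiers: $5 per 1,000 searches.
    base: $1 per 1M input tokens, $1 per 1M output tokens.
    pro:  $3 per 1M input tokens, $15 per 1M output tokens.
    """
    per_million = {"base": (1.0, 1.0), "pro": (3.0, 15.0)}
    in_rate, out_rate = per_million[tier]
    return (searches / 1_000 * 5.0
            + input_tokens / 1_000_000 * in_rate
            + output_tokens / 1_000_000 * out_rate)

# 1,000 searches with 1M tokens in and 1M tokens out:
print(sonar_cost_usd(1_000, 1_000_000, 1_000_000, "base"))  # 7.0
print(sonar_cost_usd(1_000, 1_000_000, 1_000_000, "pro"))   # 23.0
```

Note the Pro caveat in the article: a single prompt may trigger multiple searches, so the `searches` count (and therefore the bill) can exceed the number of user prompts.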

Perplexity claims Sonar Pro outperformed leading models from Google, OpenAI, and Anthropic on a benchmark that measures factual correctness in AI chatbot answers, SimpleQA.

In December, Perplexity raised a $500 million funding round led by Institutional Venture Partners, valuing the company at $9 billion.

Correction: A previous version of this story included outdated figures on Perplexity’s recent funding and annual recurring revenue.

Topics

AI, AI search, Perplexity

Maxwell-Zeff.jpg
Maxwell Zeff, Senior Reporter, Consumer. Maxwell Zeff is a senior reporter at TechCrunch specializing in AI and emerging technologies. Previously with Gizmodo, Bloomberg, and MSNBC, Zeff has covered the rise of AI and the Silicon Valley Bank crisis. He is based in San Francisco. When not reporting, he can be found hiking, biking, and exploring the Bay Area’s food scene.
 

Meta’s Yann LeCun predicts ‘new paradigm of AI architectures’ within 5 years and ‘decade of robotics’

META's Chief Artificial Intelligence (AI) scientist Yann LeCun addresses a speech as he attends the World Economic Forum (WEF) annual meeting in Davos on January 23, 2025.
Image Credits: FABRICE COFFRINI / AFP / Getty Images



Paul Sawers

7:28 AM PST · January 23, 2025



Meta’s chief AI scientist, Yann LeCun, says that a “new paradigm of AI architectures” will emerge in the next three to five years, going far beyond the capabilities of existing AI systems.

LeCun also predicted that the coming years could be the “decade of robotics,” where advances in AI and robotics combine to unlock a new class of intelligent applications.

Speaking in a session dubbed “Debating Technology” at Davos on Thursday, LeCun said that the “flavor of AI” that we have at the moment — that is, generative AI and large language models (LLMs) — isn’t really up to all that much. It’s useful, sure, but falls short on many fronts.

“I think the shelf life of the current [LLM] paradigm is fairly short, probably three to five years,” LeCun said. “I think within five years, nobody in their right mind would use them anymore, at least not as the central component of an AI system. I think […] we’re going to see the emergence of a new paradigm for AI architectures, which may not have the limitations of current AI systems.”

These “limitations” inhibit truly intelligent behavior in machines, LeCun says. This is down to four key reasons: a lack of understanding of the physical world; a lack of persistent memory; a lack of reasoning; and a lack of complex planning capabilities.

“LLMs really are not capable of any of this,” LeCun said. “So there’s going to be another revolution of AI over the next few years. We may have to change the name of it, because it’s probably not going to be generative in the sense that we understand it today.”



“World models”​


This echoes sentiments that LeCun has espoused in the past. At the heart of this is what are coming to be known as “world models,” which promise to help machines understand the dynamics of the real world. This includes having a memory, common sense, intuition, reasoning capabilities — traits far beyond those of current systems, which are mostly about pattern recognition.

Previously, LeCun has said this could still be some 10 years away, but today’s estimate brings that horizon closer. How far the technology will actually get in that time frame isn’t exactly clear, though.

“LLMs are good at manipulating language, but not at thinking,” LeCun said. “So that’s what we’re working on — having systems build mental models of the world. If the plan that we’re working on succeeds, with the timetable that we hope, within three to five years we’ll have systems that are a completely different paradigm. They may have some level of common sense. They may be able to learn how the world works from observing the world and maybe interacting with it.”



“The decade of robotics”​


As impressive as generative AI is, capable of passing the bar exam or unearthing new drugs, LeCun reckons that robotics could be a central component of the next wave of AI applications in such real-world scenarios.

Meta itself is doing some research work in the robotics realm, but so is the AI darling of the moment, ChatGPT-creator OpenAI. Earlier this month, new job listings emerged detailing a new OpenAI robotics team focused on “general-purpose,” “adaptive,” and “versatile” robots capable of human-like intelligence in real-world settings.

“We don’t have robots that can do what a cat can do — understanding the physical world of a cat is way superior to everything we can do with AI,” he said. “Maybe the coming decade will be the decade of robotics, maybe we’ll have AI systems that are sufficiently smart to understand how the real world works.”

TechCrunch has an AI-focused newsletter! Sign up here to get it in your inbox every Wednesday.

Topics

AI, davos, Facebook, Generative AI, Meta, Robotics, Yann LeCun

PS_f47025.jpg
Paul Sawers, Senior Reporter. Paul is a senior writer based in London, focused largely (but not exclusively) on the world of UK and European startups. He also writes about other subjects that he’s passionate about, such as the business of open source software.

Prior to joining TechCrunch in June 2022, Paul had gained more than a decade’s experience covering consumer and enterprise technologies for The Next Web (now owned by the Financial Times) and VentureBeat.

Pitches on: paul.sawers [at] techcrunch.com

Secure/anon tip-offs via Signal: PSTC.08

See me on Bluesky: @jambo.bsky.social



 


1/13
@MatthewBerman
OpenAI just dropped Operator, its first agent, which can use web browsers to complete tasks for you.

For the first time, OpenAI's agents can directly impact the real world.

The AI industry had strong reactions!

Here’s a roundup of reactions and incredible use cases. 🧵👇



GiJvsf8XsAAgndC.png


2/13
@MatthewBerman
Andrej Karpathy, cofounder of OpenAI, compares Operator to humanoid robots in the physical world.

Why? Because both are designed to interact with systems built for humans (browsers, factories, streets).

[Quoted tweet]
Projects like OpenAI’s Operator are to the digital world as Humanoid robots are to the physical world. One general setting (monitor keyboard and mouse, or human body) that can in principle gradually perform arbitrarily general tasks, via an I/O interface originally designed for humans. In both cases, it leads to a gradually mixed autonomy world, where humans become high-level supervisors of low-level automation. A bit like a driver monitoring the Autopilot. This will happen faster in digital world than in physical world because flipping bits is somewhere around 1000X less expensive than moving atoms. Though the market size and opportunity feels a lot bigger in physical world.

We actually worked on this idea in very early OpenAI (see Universe and World of Bits projects), but it was incorrectly sequenced - LLMs had to happen first. Even now I am not 100% sure if it is ready. Multimodal (images, video, audio) just barely got integrated with LLMs last 1-2 years, often bolted on as adapters. Worse, we haven’t really been to the territory of very very long task horizons. E.g. videos are a huge amount of information and I’m not sure that we can expect to just stuff it all into context windows (current paradigm) and then expect it to also work. I could imagine a breakthrough or two needed here, as an example.

People on my TL are saying 2025 is the year of agents. Personally I think 2025-2035 is the decade of agents. I feel a huge amount of work across the board to make it actually work. But it *should* work. Today, Operator can find you lunch on DoorDash or check a hotel etc, sometimes and maybe. Tomorrow, you’ll spin up organizations of Operators for long-running tasks of your choice (eg running a whole company). You could be a kind of CEO monitoring 10 of them at once, maybe dropping in to the trenches sometimes to unblock something. And things will get pretty interesting.


3/13
@MatthewBerman
👀 Greg Brockman, president of OpenAI, hints that Operator is just the start. Expect agents that can control your desktop, phone, and more.

[Quoted tweet]
Operator — research preview of an agent that can use its own browser to perform tasks for you.

2025 is the year of agents.


Gh_2y1CakAAgEFe.jpg


4/13
@MatthewBerman
💡 Aaron Levie, CEO of Box, believes giving agents full browser access unlocks 100x more use cases.

Most web tasks lack APIs—agents solve this gap.

[Quoted tweet]
AI Agents having full browser access is going to open up 100x more use cases for AI. The web doesn’t have APIs for the long tail of tasks that we do every day on computers, and browser use is a major missing link. Another building block for AI is here.


https://video.twimg.com/ext_tw_video/1882496963976871936/pu/vid/avc1/1162x720/EWV5Y5IqcAMMRg2B.mp4

5/13
@MatthewBerman
Open source takes on Operator! 👇

• @_akhaliq shares BrowserGPT
• @hwchase17 recommends BrowserUse
• @pk_iv shares BrowserBase

[Quoted tweet]
You don't need to pay $200 for AI.

We're launching Open Operator - an open source reference project that shows how easy it is to add web browsing capabilities to your existing AI tool.

It's early, slow, and might not work everywhere. But it's free and open source! 🔗👇


https://video.twimg.com/ext_tw_video/1882837132450082817/pu/vid/avc1/1694x1080/ps39hpEL-nrdARdv.mp4

6/13
@MatthewBerman
Data advantage: Greg Kamradt, President of @arcprize points out that Operator collects procedural data as it learns to navigate websites, improving over time.

This “memory” gives OpenAI a major edge in the agent race.

[Quoted tweet]
Imagine the procedural memory OpenAI is building up about how to navigate every website operator touches

Once they jump out of the browser to the desktop, no app is safe.


7/13
@MatthewBerman
But it's not perfect...yet.

[Quoted tweet]
My favorite thing about the AI agents is that they can help me get something done in half an hour, what used to take me less than a minute.


8/13
@MatthewBerman
🌟 Use cases highlight Operator’s potential:

@garrytan Planned an impromptu Vegas trip, navigating complex booking

[Quoted tweet]
OpenAI Operator is very impressive - planning an impromptu trip to Vegas — it's able to navigate JSX's website and handle unusual cases and basically figure out sold out scenarios, change dates and times, and now it's figuring out where to eat for Friday night for 2.

Bravo.


GiBA4sGbcAAPoAR.jpg

GiBBIJlaoAAKONM.jpg


9/13
@MatthewBerman
.@omooretweets: Paid a bill from just a photo.

[Quoted tweet]
I just gave Operator a picture of a paper bill I got in the mail.

From only the bill picture, it navigated to the website, pulled up my account, entered my info, and asked for my credit card number to complete payment.

We are so back 🚀


GiBBsbobwAAEPjz.jpg


10/13
@MatthewBerman
.@daniel_mac8: Built a website using Gemini AI + Operator.

[Quoted tweet]
😂 well played @OpenAI

tried to access Operator through Operator

check out the message that was waiting for me:


GiBV6JbXcAAiXeS.jpg


11/13
@MatthewBerman
🚀 The coolest demo? @kieranklaassen used Operator to QA test a local dev environment, tunneling it through for 24/7 bug checks.

Imagine having an Agent QA engineer ready at all times to work alongside you.

[Quoted tweet]
This is extremely promising and the best use case of Operator so far!
@OpenAI ChatGPT Operator is great for testing my local dev environment to see if my feature is working!

Tunnel Operator to your local dev env and let it test your feature. Waiting for an API and @cursor_ai to integrate it.


https://video.twimg.com/ext_tw_video/1882585578962817024/pu/vid/avc1/1112x720/lRjJyXbBTM7Bc7AT.mp4

12/13
@MatthewBerman
🤯 Interesting insight from @emollick:

Operator’s brand preferences (e.g., choosing Bing or 1-800-Flowers) may inadvertently create new SEO industries.

Agents may define how brands compete in the future.

[Quoted tweet]
Next big thing for brands: knowing what brands agents prefer.

If you ask for stock prices, Claude with Computer Use goes to Yahoo Finance while Operator does a Bing search

Operator loves buying from the top search result on Bing. Claude has direct preferences like 1-800-Flowers


GiBh7lQXoAE_8na.jpg

GiBiMtSW8AAicu_.jpg


13/13
@MatthewBerman
If you enjoy this kind of stuff, check out my newsletter: Forward Future Daily

And check out my full video breakdown of the industry's reactions here:

https://invidious.poast.org/watch?v=i9s4fqhSvz8








1/5
@Techmeme
OpenAI partners with DoorDash, Instacart, Priceline, StubHub, and Uber to ensure that Operator respects these businesses' terms of service agreements (@zeffmax / TechCrunch)

OpenAI launches Operator, an AI agent that performs tasks autonomously | TechCrunch

OpenAI partners with DoorDash, eBay, Instacart, Priceline, StubHub, Uber, and other companies to ensure that Operator respects their terms of service agreements



GdsHsdtXgAAKf04.jpg


2/5
@FindKar
also making it easier to build AI agents — but more for one-off workflows vs. one-off tasks

[Quoted tweet]
Watch @BytespaceAI web-agents control the web

With a few prompts, I built a web-agent that:

- Find prospects on LinkedIn
- Scrapes structured data about their profiles
- And uses Claude to send a personalized message

Whole salesteam on auto-pilot. When? 👩‍💼


https://video.twimg.com/amplify_video/1878274284881399808/vid/avc1/1920x1080/_jbpiUi-UnRVx010.mp4

3/5
@evans4fintech
Regulatory compliance should be a top priority for Operator AI. That US policymakers aren't moving to regulate applications like Operator baffles me.



4/5
@JOSourcing
Sam Altman's own words:

Do NOT trust me.

https://invidious.poast.org/watch?v=dY1VK8oHj5s



5/5
@JOSourcing


[Quoted tweet]
'Suchir Killed By OpenAI': #SuchirBalaji's mother's explosive claim

'My son had documents against OpenAI. They have attacked him and killed him...: Poornima Rao, Suchir Balaji's mother.

@PriyaBahal22 shares more details.


https://video.twimg.com/amplify_video/1880319397006352384/vid/avc1/1280x720/qs_8Tc_qvVTtclRZ.mp4


 


1/16
@rowancheung
I got early access to ChatGPT Operator.

It's OpenAI's new AI agent that autonomously takes action across the web on your behalf.

The 9 most impressive use cases I’ve tried (videos sped up):

1. Ordering dinner ingredients based on a picture and a recipe



https://video.twimg.com/ext_tw_video/1882489343752728576/pu/vid/avc1/1244x720/Ct2fzxz0ssYu3qW0.mp4

2/16
@rowancheung
2. Planning a weekend trip based on hidden gems off Reddit, my budget and interests

Notice how at 0:06, ChatGPT Operator was blocked from Reddit but then decided to just do a Bing search with "Reddit" at the end

Very impressive decision-making



https://video.twimg.com/ext_tw_video/1882489626616528896/pu/vid/avc1/1244x720/erZTghINsnK5ASXv.mp4

3/16
@rowancheung
3. Crypto investment research based on tokens that are actually worth looking into

Notice how ChatGPT Operator got hit with a "Are you human" CAPTCHA, then pinged me to take control to confirm

Wild workaround



https://video.twimg.com/ext_tw_video/1882489844712038400/pu/vid/avc1/1244x720/jBMibaWjRV1I50M_.mp4

4/16
@rowancheung
4. Booking a one-way flight from Zurich to Vienna using the Booking integration

This one required a bit of back and forth, with ChatGPT Operator pinging me and asking for my flight preference and having me take control of entering payment details



https://video.twimg.com/ext_tw_video/1882490043731685376/pu/vid/avc1/1244x720/q_94qL3cvwSgfOtr.mp4

5/16
@rowancheung
5. Scheduling an appointment with my barber after looking at my Google Calendar schedule/availability

Note that in this demo, ChatGPT Operator pinged me that I needed to sign in to Google to check my calendar

I tried a second time, and my login was saved session-to-session



https://video.twimg.com/ext_tw_video/1882490239035244544/pu/vid/avc1/1244x720/CbkPOU5GDOEs8nZs.mp4

6/16
@rowancheung
6. Researching a good birthday gift for my mom based on what she likes

Similar to the Reddit block, ChatGPT Operator couldn't access NYTimes, so it pivoted and found another site.

Really neat.

Also cool to see it compare and find the best price across the web for me, too



https://video.twimg.com/ext_tw_video/1882490462667235328/pu/vid/avc1/1244x720/gO3HHASSsNE1Joge.mp4

7/16
@rowancheung
7. Booking a one-time house cleaner for my home through the Thumbtack integration based on my budget

ChatGPT Operator came back to me with four highly rated options within my price range



https://video.twimg.com/ext_tw_video/1882490730100199424/pu/vid/avc1/1244x720/7MBsfPaNkkyc2IlJ.mp4

8/16
@rowancheung
8. Finding the best/cheapest health insurance coverage in Switzerland

This was interesting since most prices are not publicly available and are gated behind a meeting

ChatGPT Operator did what it could, and presented me with a good blog for me to read further



https://video.twimg.com/ext_tw_video/1882490918210592768/pu/vid/avc1/1244x720/_5FYnUDAh66_4MgJ.mp4

9/16
@rowancheung
9. Finding a top-rated dog walker in Vancouver BC

This is no easy task, so I wanted to test how well ChatGPT Operator could handle it

To my surprise, I got 3 really solid options at the end



https://video.twimg.com/ext_tw_video/1882491085848530944/pu/vid/avc1/1244x720/J7lzRiosRKQMuF4i.mp4

10/16
@rowancheung
Overall, I was very impressed by the research preview of Operator.

I loved that it can do tasks for me as I do other work, and simply ping me when it needs me to "take over"

I also really enjoyed the saved tasks tab, and adding Custom Instructions for specific websites.



Gh_zzFxXcAAhsl_.jpg


11/16
@rowancheung
But it's important to note that Operator is still a research preview and is improving.

I found that:

-Quite a few sites were blocked after they detected the AI
-There's a limited set of partner integrations
-Its true purpose is to take actions across the web (more below)



Gh_z1lEXQAAc8FS.jpg


12/16
@rowancheung
Operator *operates* within ChatGPT, but it's a completely different tool.

Its output lengths are small, and its true purpose is to take actions across the web (typing, clicking, scrolling).

Meaning it's not like ChatGPT, which can produce essays and write long code



13/16
@rowancheung
With every new tool, comes a new way of using it optimally.

E.g. with GPT-4, CoT prompting produced the best results, but the best way to prompt o1 is completely different.

The exact same thing is happening here with Operator, and I'm 100% just scratching the surface with these tests.



14/16
@rowancheung
The future of tech work is here. And personally, I'm incredibly excited about it.

Agents can do the boring work, so I can spend more time doing what I love.

I'll be publicly sharing all the ways I automate my work with agents, so follow me @rowancheung for more.



15/16
@rowancheung
Lastly, big thanks to @OpenAI for granting me early access. I had a ton of fun early testing Operator.

If you want to support my work, like/retweet the first tweet of this thread to share with friends:

[Quoted tweet]
I got early access to ChatGPT Operator.

It's OpenAI's new AI agent that autonomously takes action across the web on your behalf.

The 9 most impressive use cases I’ve tried (videos sped up):

1. Ordering dinner ingredients based on a picture and a recipe


https://video.twimg.com/ext_tw_video/1882489343752728576/pu/vid/avc1/1244x720/Ct2fzxz0ssYu3qW0.mp4

16/16
@rowancheung
I'll be writing more about my early experiences and how Operator works in tomorrow's newsletter.

If you want it, you can join 900,000 other readers keeping up with everything going on in AI here (it's free): The Rundown AI




To post tweets in this format, more info here: https://www.thecoli.com/threads/tips-and-tricks-for-posting-the-coli-megathread.984734/post-52211196




1/2
@BillSun_AI
My first run of OPENAI operator, it is not in ChatGPT app, it is in this website: Operator

Trying to let it dig out @PINAI_IO team member, it indeed found the official PIN AI website, found the team and team LinkedIn (blocked by log in); but made a mistake on our funding amount in the first trial ( the truth there was a 10M funding round announcement on the web but it found 3M)



GiCgSqRaMAAHhf7.jpg

GiCgSqSagAAud64.jpg

GiCgSqRb0AAAZHn.jpg

GiCgSqZbAAAPgH8.jpg


2/2
@gmposter
but he tried, and thanks for that)








1/11
@goodside
OpenAI Operator visits Zombo.com

Elapsed time: 30 minutes and counting



GiFRkXEXkAAKDb1.jpg

GiFRkXDWYAATiAd.jpg


2/11
@MikePFrank
Lol I doubt it can actually hear anything...



3/11
@goodside
Anything is possible at Zombo com.



4/11
@pwlot
Roflmao



5/11
@Ysqander
Samuel Beckett would have been proud



6/11
@m1kehouston
Here for this



7/11
@SterlingCooley
Lmaooo



8/11
@PaulYacoubian
I clicked the link

I elapsed 30min 🤗



9/11
@untitled01ipynb
❤️



10/11
@cr_dandrea
😁😁such nostalgia of discovering this as a kid



11/11
@jordanjhamel
Undefeated - you can do anything with Zombocom




 

bnew

Veteran
Joined
Nov 1, 2015
Messages
61,775
Reputation
9,318
Daps
169,668





1/15
@jamesmurdza
I made an open source version of @OpenAI's operator in one hour on YouTube Live.

I used @GroqInc for inference, @E2B_dev for cloud sandboxes, @AIatMeta's Llama 3.2 and @zywu_hku's OS-Atlas.



GiFYq3HaUAE9BuL.png

GiFYzQFa8AAPEuF.png


2/15
@jamesmurdza
📹 YouTube: https://invidious.poast.org/watch?v=Qnp4PQTE1Ag



3/15
@anushkmittal
open source > closed source. nice work.



4/15
@jamesmurdza
I agree!



5/15
@DentonRosegoood
How's @sama gonna get ur $200 😆



6/15
@jamesmurdza
The idea is to make an open source one for anyone to use!



7/15
@samuelekpe
Come on!!!



8/15
@lovemitt
Very cool!



9/15
@diegoguerror
This is pretty cool!



10/15
@bisi_kuku
Cool



11/15
@RazberryChai
why are u so cracked



12/15
@xiaoze_jin
🔥🎩



13/15
@0x1cec01d
Do you get coords form a vision model or use the html elements?



14/15
@jamesmurdza
They're just for educational purposes, but the vision agent is based on a larger project that I've been working on for over two months: GitHub - e2b-dev/open-computer-use: Secure AI computer use powered by E2B Desktop Sandbox



15/15
@AgentLocker
Hi James, this is amazing! I'm keen to add your agent to AgentLocker - The World's Largest AI Agent Directory - Do you have a landing page for this or shall i just use:

Google Colab




 




Anthropic chief says AI could surpass “almost all humans at almost everything” shortly after 2027​


Amodei: "I think progress really is as fast as people think it is."

Benj Edwards – Jan 22, 2025 3:33 PM





A photo of co-founder and CEO of Anthropic, Dario Amodei, dated May 22, 2024. Credit: Chesnot via Getty Images

On Tuesday, Anthropic CEO Dario Amodei predicted that AI models may surpass human capabilities "in almost everything" within two to three years, according to a Wall Street Journal interview at the World Economic Forum in Davos, Switzerland.

Speaking at Journal House in Davos, Amodei said, "I don't know exactly when it'll come, I don't know if it'll be 2027. I think it's plausible it could be longer than that. I don't think it will be a whole bunch longer than that when AI systems are better than humans at almost everything. Better than almost all humans at almost everything. And then eventually better than all humans at everything, even robotics."

Amodei co-founded Anthropic in 2021 with his sister, Daniela Amodei, and five other former OpenAI employees. Not long after, Anthropic emerged as a strong technological competitor to OpenAI's AI products (such as GPT-4 and ChatGPT). Most recently, its Claude 3.5 Sonnet model has remained highly regarded among some AI users and highly ranked among AI benchmarks.

During the WSJ interview, Amodei also spoke some about the potential implications of highly intelligent AI systems when these AI models can control advanced robotics.

"[If] we make good enough AI systems, they'll enable us to make better robots. And so when that happens, we will need to have a conversation... at places like this event, about how do we organize our economy, right? How do humans find meaning?"

He then shared his concerns about how human-level AI models and robotics that are capable of replacing all human labor may require a complete re-think of how humans value both labor and themselves.

"We've recognized that we've reached the point as a technological civilization where the idea, there's huge abundance and huge economic value, but the idea that the way to distribute that value is for humans to produce economic labor, and this is where they feel their sense of self worth," he added. "Once that idea gets invalidated, we're all going to have to sit down and figure it out."

The eye-catching comments, similar to comments about AGI made recently by OpenAI CEO Sam Altman, come as Anthropic negotiates a $2 billion funding round that would value the company at $60 billion. Amodei disclosed that Anthropic's revenue multiplied tenfold in 2024.



Amodei distances himself from “AGI” term​


Even with his dramatic predictions, Amodei distanced himself from the term Altman favors for this advanced, labor-replacing AI, "artificial general intelligence" (AGI), calling it a marketing term in a separate CNBC interview at the same event in Switzerland.

Instead, he prefers to describe future AI systems as a "country of geniuses in a data center," he told CNBC. Amodei wrote in an October 2024 essay that such systems would need to be "smarter than a Nobel Prize winner across most relevant fields."

On Monday, Google announced an additional $1 billion investment in Anthropic, bringing its total commitment to $3 billion. This follows Amazon's $8 billion investment over the past 18 months. Amazon plans to integrate Claude models into future versions of its Alexa speaker.
 




OpenAI launches Operator, an AI agent that can do tasks on the web​


New research "Computer-Use Agent" AI model can perform multi-step tasks through a web browser.

Benj Edwards – Jan 23, 2025 5:24 PM



Two tin toy robots looking at laptop screen.


Credit: josefkubes via Getty Images

On Thursday, OpenAI released a research preview of "Operator," a web automation tool that uses a new AI model called Computer-Using Agent (CUA) to control a web browser through a visual interface. The system performs tasks by viewing and interacting with on-screen elements like buttons and text fields similar to how a human would.

Operator is available today for subscribers of the $200-per-month ChatGPT Pro plan at operator.chatgpt.com. The company plans to expand to Plus, Team, and Enterprise users later. OpenAI intends to integrate these capabilities directly into ChatGPT and later release CUA through its API for developers.

Operator watches on-screen content in its virtual environment while it uses an internal browser and executes tasks through simulated keyboard and mouse inputs. The Computer-Using Agent processes screenshots of its browser interface to understand the browser's state and then makes decisions about clicking, typing, and scrolling based on its observations.

OpenAI's release follows other tech companies as they push into what are often called "agentic" AI systems, which can take actions on a user's behalf. Google announced Project Mariner in December 2024, which performs automated tasks through the Chrome browser, and two months earlier, in October 2024, Anthropic launched a web automation tool called "Computer Use" focused on developers that can control a user's mouse cursor and take actions on a computer.

"The Operator interface looks very similar to Anthropic's Claude Computer Use demo from October," wrote AI researcher Simon Willison on his blog, "even down to the interface with a chat panel on the left and a visible interface being interacted with on the right."


An Operator demo video created by OpenAI.



Watch and take action​


To use a browser like you would, the Computer-Using Agent works in multiple steps. First, it captures screenshots to monitor its progress, then analyzes those images (using GPT-4o's vision capabilities with additional reinforcement learning) to process raw pixel data. Next, it determines what actions to take and then performs virtual inputs to control the browser. This iterative loop design reportedly lets the system recover from errors and handle complex tasks across different applications.

While it's working, Operator shows a view of its browser window and of the actions taking place.
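The screenshot-analyze-act loop described above can be sketched as a minimal agent loop. Every name here (`browser`, `model`, the action kinds) is a hypothetical stand-in for illustration, not OpenAI's actual API:

```python
def run_agent_loop(browser, model, task, max_steps=50):
    """Iterative loop: capture a screenshot, let a vision model pick the next
    action, execute it as simulated input, and repeat until the task is done."""
    for _ in range(max_steps):
        screenshot = browser.capture_screenshot()   # monitor progress visually
        action = model.decide(task, screenshot)     # analyze pixels, choose action
        if action.kind == "done":
            return action.result
        if action.kind == "needs_user":             # e.g. payment or login takeover
            browser.pause_for_user()
            continue
        browser.perform(action)                     # simulated click/type/scroll
    raise TimeoutError("task did not finish within max_steps")
```

The loop shape is what lets such a system recover from errors: a misclick simply shows up in the next screenshot, and the model can choose a corrective action.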

The technology behind Operator is still relatively new and far from perfect. The model reportedly performs best at repetitive web tasks like creating shopping lists or playlists. It struggles more with unfamiliar interfaces like tables and calendars, and does poorly with complex text editing (with a 40 percent success rate), according to OpenAI's internal testing data.

OpenAI reported the system achieved an 87 percent success rate on the WebVoyager benchmark, which tests live sites like Amazon and Google Maps. On WebArena, which uses offline test sites for training autonomous agents, Operator's success rate dropped to 58.1 percent. For computer operating system tasks, CUA set an apparent record of 38.1 percent success on the OSWorld benchmark, surpassing previous models but still falling short of human performance at 72.4 percent.

With this imperfect research preview, OpenAI hopes to gather user feedback and refine the system's capabilities. The company acknowledges CUA won't perform reliably in all scenarios but plans to improve its reliability across a wider range of tasks through user testing.



Safety and privacy concerns​


At the moment, all of the browsing and activity takes place within Operator's virtual environment. OpenAI says it built multiple safety controls into Operator, requiring user confirmation before completing sensitive actions like sending emails or making purchases. Operator also has limits on what it can browse, set by OpenAI. It cannot access certain website categories, including gambling and adult content.

To catch attempts at subverting Operator with jailbreaks and prompt injections, which might hypothetically be embedded in websites that the AI model browses, OpenAI says it has implemented real-time moderation and detection systems. OpenAI reports the system recognized all but one case of prompt injection attempts during an early internal red-teaming session.

However, Willison, who frequently covers AI security issues, isn't convinced Operator can't be subverted, especially as new threats emerge. "Color me skeptical," he wrote in his blog post. "I imagine we'll see all kinds of novel successful prompt injection style attacks against this model once the rest of the world starts to explore it."

As Willison points out, OpenAI acknowledges these risks in its System Card documentation: "Despite proactive testing and mitigation efforts, certain challenges and risks remain due to the difficulty of modeling the complexity of real-world scenarios and the dynamic nature of adversarial threats."

OpenAI says it has implemented several privacy controls: Users can opt out of having their data used for model training through ChatGPT settings, delete all browsing data with one click in Operator settings, and log out of all sites simultaneously. When users need to input sensitive information like passwords or payment details, a "takeover mode" activates where Operator stops collecting screenshots.

Even with these precautions, Willison provided his own Operator privacy advice on his blog: "Start a fresh session for each task you outsource to Operator to ensure it doesn't have access to your credentials for any sites that you have used via the tool in the past. If you're having it spend money on your behalf, let it get to the checkout, then provide it with your payment details and wipe the session straight afterwards."

This article was updated on January 25, 2025 to clarify that Operator works within its own virtual browser environment.
 




Getting an all-optical AI to handle non-linear math​


Instead of sensing photons and processing the results, why not process the photons?

Jacek Krywko – Jan 12, 2025 7:07 AM



Image of a small processor with individual functional units highlighted in different colors. It's in a package with lots of connections, and the four directions (north, south, east and west) labelled.


An optical processor built by researchers at MIT. Credit: MIT

A standard digital camera used in a car for stuff like emergency braking has a perceptual latency of a hair above 20 milliseconds. That’s just the time needed for a camera to transform the photons hitting its aperture into electrical charges using either CMOS or CCD sensors. It doesn’t count the further milliseconds needed to send that information to an onboard computer or process it there.

A team of MIT researchers figured that if you had a chip that could process photons directly, you could skip the entire digitization step and perform calculations with the photons themselves, which has the potential to be mind-bogglingly faster.

“We’re focused on a very specific metric here, which is latency. We aim for applications where what matters the most is how fast you can produce a solution. That’s why we are interested in systems where we’re able to do all the computations optically,” says Saumil Bandyopadhyay, an MIT researcher. The team implemented a complete deep neural network on a photonic chip, achieving a latency of 410 picoseconds. To put that in perspective, a single tick of the 4 GHz clock on a standard CPU lasts 250 picoseconds, so Bandyopadhyay’s chip ran its entire onboard neural net in under two clock ticks.
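A quick back-of-the-envelope check using only the numbers quoted in this article (20 ms camera digitization, 410 ps chip latency, a 4 GHz clock):

```python
camera_latency_s = 20e-3   # ~20 ms to digitize photons in a standard camera
chip_latency_s = 410e-12   # 410 ps for the full photonic deep neural net
cpu_tick_s = 1 / 4e9       # one tick of a 4 GHz CPU clock is 250 ps

ratio_vs_camera = camera_latency_s / chip_latency_s
ticks_per_inference = chip_latency_s / cpu_tick_s
print(f"~{ratio_vs_camera:,.0f}x shorter than camera digitization alone")
print(f"{ticks_per_inference:.2f} CPU clock ticks per full inference")
```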



Matrices and nonlinearity​


Neural networks work with multiple layers of computational units that function as neurons. “Each neuron can take an input, and those inputs can be, let’s say, numbers,” says Bandyopadhyay. Those numbers are then multiplied by constants called weights, the network’s tunable parameters, as they are passed on to the next layer of neurons. Each layer takes a weighted sum of the preceding layer’s outputs and sends it forward.

This is the equivalent of linear algebra—performing matrix multiplication. However, AI models are often used to find intricate patterns in data where the output is not always proportional to the input. For this, you also need non-linear thresholding functions that adjust the weights between the layers of neurons. “What makes deep neural networks so powerful is that we’re able to map very complicated relationships in data by repeatedly cascading both these linear operations and non-linear thresholding functions between the layers,” Bandyopadhyay says.
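The linear-then-nonlinear cascade described above can be sketched in a few lines of plain Python; the weights below are arbitrary illustration values, not anything from the chip:

```python
def dense_layer(inputs, weights):
    """Linear step: each output neuron is a weighted sum of all inputs."""
    return [sum(w * x for w, x in zip(row, inputs)) for row in weights]

def relu(values):
    """One common nonlinear thresholding function applied between layers."""
    return [max(0.0, v) for v in values]

# Cascade: linear -> nonlinear -> linear, as in the text.
x = [1.0, -2.0]
w1 = [[0.5, -1.0], [2.0, 0.25]]   # arbitrary illustrative weights
w2 = [[1.0, 1.0]]
out = dense_layer(relu(dense_layer(x, w1)), w2)
print(out)
```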

The problem is that this cascading requires massive parallel computations that, when done on standard computers, take tons of energy and time. Bandyopadhyay’s team feels this problem can be solved by performing the equivalent operations using photons rather than electrons. In photonic chips, information can be encoded in optical properties like polarization, phase, magnitude, frequency, and wavevector. While this would be extremely fast and energy-efficient, building such chips isn’t easy.



Siphoning light​


“Conveniently, photonics turned out to be particularly good at linear matrix operations,” Bandyopadhyay claims. A group at MIT led by Dirk Englund, a professor who is a co-author of Bandyopadhyay’s study, demonstrated a photonic chip doing matrix multiplication entirely with light in 2017. What the field struggled with, though, was implementing non-linear functions in photonics.

The usual solution, so far, relied on bypassing the problem by doing linear algebra on photonic chips and offloading non-linear operations to external electronics. This, however, increased latency, since the information had to be converted from light to electrical signals, processed on an external processor, and converted back to light. “And bringing the latency down is the primary reason why we want to build neural networks in photonics,” Bandyopadhyay says.

To solve this problem, Bandyopadhyay and his colleagues designed and built what is likely the world’s first chip that can compute the entire deep neural net, including both linear and non-linear operations, using photons. “The process starts with an external laser with a modulator that feeds light into the chip through an optical fiber. This way we convert electrical inputs to light,” Bandyopadhyay explains.

The light is then fanned out to six channels and fed into a layer of six neurons that perform linear matrix multiplication using an array of devices called Mach-Zehnder interferometers. “They are essentially programmable beam splitters, taking two optical fields and mixing them coherently to produce two output optical fields. By applying the voltage, you can control how much the two inputs mix,” Bandyopadhyay says.

What a single Mach-Zehnder interferometer does in this context is a two-by-two matrix operation, performed on a pair of optical signals. With a rectangular array of those interferometers, the team could realize a larger set of matrix operations across all six optical channels.
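A single Mach-Zehnder interferometer can be modeled numerically with the standard textbook decomposition, two 50:50 couplers around a programmable phase shift; this parameterization is generic optics, not taken from the paper:

```python
import cmath

def mzi(in1, in2, theta):
    """One Mach-Zehnder interferometer: two ideal 50:50 couplers around a
    voltage-controlled phase shift theta, coherently mixing two complex
    optical fields into two outputs (a programmable 2x2 unitary)."""
    def coupler(a, b):   # ideal 50:50 beam splitter
        return (a + 1j * b) / 2 ** 0.5, (1j * a + b) / 2 ** 0.5
    a, b = coupler(in1, in2)
    a *= cmath.exp(1j * theta)   # phase on one arm sets the mixing ratio
    return coupler(a, b)

out1, out2 = mzi(1.0 + 0j, 0j, theta=0.0)
# The device only redistributes power between the two outputs,
# so |out1|^2 + |out2|^2 equals the input power.
```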

Once matrix multiplication is done in the first layer, the information goes to another layer through a unit that is responsible for nonlinear thresholding. “We did this by co-integrating electronics and optics,” Bandyopadhyay says. This process works by sending a tiny bit of the optical signal to a photodiode that measures how much optical power is there. The result of this measurement is used to manipulate the rest of the photons passing through the device. “We use that little bit of optical signal siphoned to the diode to modulate the rest of the optical signal,” Bandyopadhyay explains.
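The siphon-and-modulate scheme can be sketched the same way: measure a tapped fraction of the optical power, then gate the remaining field with the result, making the output a nonlinear function of the input intensity. The tap fraction and response curve here are illustrative assumptions, not the chip's actual values:

```python
def optical_nonlinearity(field, tap_fraction=0.1):
    """Sketch of the siphon-and-modulate unit: tap a small fraction of the
    optical power to a photodiode, then use that reading to set the
    transmission of the light that continues on."""
    tapped_power = tap_fraction * abs(field) ** 2       # photodiode measurement
    passthrough = (1 - tap_fraction) ** 0.5 * field     # remaining optical field
    transmission = tapped_power / (1.0 + tapped_power)  # intensity-dependent gate
    return transmission * passthrough
```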

The entire chip had three layers of neurons performing matrix multiplications and two nonlinear function units in between. Overall, the network implemented on the chip could work with 132 parameters.

This, in a way, highlights some of the limitations optical chips have today. The number of parameters used in the GPT-4 large language model is reportedly 1 trillion. Compared to this trillion, the 132 parameters supported by Bandyopadhyay’s chip looked like… less.



Modest beginnings​


“Large language models are basically the biggest models you could have, right? They are the hardest to tackle. We are focused more on sort of applications where you benefit from lower latency, and models like that turn out to be smaller,” Bandyopadhyay says. His team gears its chip toward powering AIs that work with up to 100,000 parameters. “It’s not like we have to go straight to Chat GPT to do something that is commercially useful. We can target these smaller models first,” Bandyopadhyay adds.

The smaller model Bandyopadhyay implemented on the chip in his study recognized spoken vowels, which is a task commonly used as a benchmark in research on AI-focused hardware. It scored 92 percent accuracy, which was on par with neural networks run on standard computers.

But there are other and way cooler things small models can do. One of them is keeping self-driving cars from crashing. “The idea is you have an autonomous navigation system where you want to repeatedly classify lidar signals with very fast latency, at speeds that are way faster than human reflexes,” Bandyopadhyay says. According to his team, chips like the one they are working on should make it possible to classify lidar data directly, pushing photons straight into photonic chips without converting them to electrical signals.

Other things Bandyopadhyay thinks could be powered by photonic chips are automotive vision systems that are entirely different from the camera-based systems we use today. “You can essentially replace the camera as we know it. Instead, you could have a large array of inputs taking optical signals, sampling them, and sending them directly to optical processors for machine-learning computations,” Bandyopadhyay says. “It’s just a question of engineering the system.”

The team built the chip using standard CMOS processes, which Bandyopadhyay says should make scaling it easier. “You are not limited by just what can fit on a single chip. You can make multi-chip systems to realize bigger networks. This is a promising direction for photonic chips technology—this is something you can already see happening in electronics,” Bandyopadhyay claims.

Nature Photonics, 2024. DOI: Single-chip photonic deep neural network with forward-only training - Nature Photonics
 