Micky Mikey

Sam Altman:
"Today they have chatgpt, it's not very good. Next they have the world's best chief of staff. And then after that every person has a company of 20 or 50 experts that can work super well together. And after that everybody has a company of 10000 experts in every field that can work super well together."

It's hard to imagine what the world will look like when everyone has access to models with that level of capability and intelligence. If nothing else, this statement provides insight into what future models will be like.
 


bnew


Human brain-like supercomputer with 228 trillion links coming in 2024


Australians develop a supercomputer capable of simulating networks at the scale of the human brain.


Sejal Sharma
Published: Dec 13, 2023 07:27 AM EST
INNOVATION


An artist's impression of the DeepSouth supercomputer. Image: WSU



Australian scientists are building a groundbreaking supercomputer that aims to simulate the synapses of a human brain at full scale.

The neuromorphic supercomputer will be capable of 228 trillion synaptic operations per second, which is on par with the estimated number of operations in the human brain.

A team of researchers at the International Centre for Neuromorphic Systems (ICNS) at Western Sydney University has named it DeepSouth.


To be operational by April 2024

The incredible computational power of the human brain can be seen in the way it performs a billion-billion (10^18) mathematical operations per second using only 20 watts of power. DeepSouth achieves similar levels of parallel processing by employing neuromorphic engineering, a design approach that mimics the brain's functioning.
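As a rough back-of-the-envelope check on that efficiency claim, the short Python sketch below simply divides the quoted power budget by the quoted operation rate; both figures are the article's estimates, not measured values.

```python
# Back-of-the-envelope energy efficiency, using the figures quoted above.
# Assumptions: roughly 1e18 (a billion-billion) operations per second at about 20 W.
ops_per_second = 1e18   # rough estimate quoted for the human brain
power_watts = 20.0      # approximate power budget of the brain

joules_per_op = power_watts / ops_per_second
print(f"energy per operation: {joules_per_op:.1e} J")  # about 2e-17 J per operation
```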


Highlighting the distinctive features of DeepSouth, Professor André van Schaik, the Director of the ICNS, emphasized that the supercomputer is designed with a unique purpose – to operate in a manner similar to networks of neurons, the basic units of the human brain.

Neuromorphic systems utilize interconnected artificial neurons and synapses to perform tasks. These systems attempt to emulate the brain's ability to learn, adapt, and process information in a highly parallel and distributed manner.

Often applied in the field of AI and machine learning, a neuromorphic system is used with the goal of creating more efficient and brain-like computing systems.

Traditional computers are typically based on the von Neumann architecture, in which separate CPU and memory units exchange data, with both data and instructions stored in memory.

DeepSouth can handle large amounts of data at a rapid pace while consuming significantly less power and being physically smaller than conventional supercomputers.

"Progress in our understanding of how brains compute using neurons is hampered by our inability to simulate brain-like networks at scale. Simulating spiking neural networks on standard computers using Graphics Processing Units (GPUs) and multicore Central Processing Units (CPUs) is just too slow and power intensive. Our system will change that," Professor van Schaik said.




The system is scalable

The team named the supercomputer DeepSouth as a nod to IBM's TrueNorth system, which pioneered the idea of building computers that act like large networks of neurons, and to Deep Blue, the first computer to defeat a reigning world chess champion.

The name also gives a nod to where the supercomputer is located geographically: Australia, which is situated in the southern hemisphere.

The team believes DeepSouth will help drive advances in diverse fields such as sensing, biomedicine, robotics, space, and large-scale AI applications.

The team believes DeepSouth will also revolutionize smart devices. This includes devices like mobile phones and sensors used in manufacturing and agriculture.




 

bnew


The brain may learn about the world the same way some computational models do

Two studies find “self-supervised” models, which learn about their environment from unlabeled data, can show activity patterns similar to those of the mammalian brain.

To make our way through the world, our brain must develop an intuitive understanding of the physical world around us, which we then use to interpret sensory information coming into the brain.

How does the brain develop that intuitive understanding? Many scientists believe that it may use a process similar to what’s known as “self-supervised learning.” This type of machine learning, originally developed as a way to create more efficient models for computer vision, allows computational models to learn about visual scenes based solely on the similarities and differences between them, with no labels or other information.
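As a concrete illustration of learning from "similarities and differences" with no labels, here is a toy numpy sketch of a SimCLR-style contrastive objective: embeddings of two augmented views of the same image are pulled together while views of different images are pushed apart. It is a generic example of one common self-supervised family, not the specific models or training setup used in the MIT studies.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """SimCLR-style contrastive loss on two batches of embeddings.

    z1[i] and z2[i] are embeddings of two augmented views of the same image;
    every other pairing in the batch is treated as a negative. No labels needed.
    """
    z = np.concatenate([z1, z2], axis=0)               # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # L2-normalise embeddings
    sim = z @ z.T / temperature                        # scaled cosine similarities
    np.fill_diagonal(sim, -np.inf)                     # exclude self-similarity

    n = z1.shape[0]
    # The positive for row i is row i + n (and vice versa).
    pos_idx = np.concatenate([np.arange(n, 2 * n), np.arange(0, n)])
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos_idx].mean()

# Toy usage: random "embeddings" standing in for a network's outputs.
rng = np.random.default_rng(0)
z1 = rng.normal(size=(8, 16))
z2 = z1 + 0.1 * rng.normal(size=(8, 16))   # slightly perturbed second views
print(f"contrastive loss: {nt_xent_loss(z1, z2):.3f}")
```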

A pair of studies from researchers at the K. Lisa Yang Integrative Computational Neuroscience (ICoN) Center at MIT offers new evidence supporting this hypothesis. The researchers found that when they trained models known as neural networks using a particular type of self-supervised learning, the resulting models generated activity patterns very similar to those seen in the brains of animals that were performing the same tasks as the models.

The findings suggest that these models are able to learn representations of the physical world that they can use to make accurate predictions about what will happen in that world, and that the mammalian brain may be using the same strategy, the researchers say.


[Submitted on 19 May 2023 (v1), last revised 25 Oct 2023 (this version, v2)]

Neural Foundations of Mental Simulation: Future Prediction of Latent Representations on Dynamic Scenes

Aran Nayebi, Rishi Rajalingham, Mehrdad Jazayeri, Guangyu Robert Yang
Humans and animals have a rich and flexible understanding of the physical world, which enables them to infer the underlying dynamical trajectories of objects and events, plausible future states, and use that to plan and anticipate the consequences of actions. However, the neural mechanisms underlying these computations are unclear. We combine a goal-driven modeling approach with dense neurophysiological data and high-throughput human behavioral readouts to directly impinge on this question. Specifically, we construct and evaluate several classes of sensory-cognitive networks to predict the future state of rich, ethologically-relevant environments, ranging from self-supervised end-to-end models with pixel-wise or object-centric objectives, to models that future predict in the latent space of purely static image-based or dynamic video-based pretrained foundation models. We find strong differentiation across these model classes in their ability to predict neural and behavioral data both within and across diverse environments. In particular, we find that neural responses are currently best predicted by models trained to predict the future state of their environment in the latent space of pretrained foundation models optimized for dynamic scenes in a self-supervised manner. Notably, models that future predict in the latent space of video foundation models that are optimized to support a diverse range of sensorimotor tasks, reasonably match both human behavioral error patterns and neural dynamics across all environmental scenarios that we were able to test. Overall, these findings suggest that the neural mechanisms and behaviors of primate mental simulation are thus far most consistent with being optimized to future predict on dynamic, reusable visual representations that are useful for Embodied AI more generally.
Comments: 20 pages, 10 figures, NeurIPS 2023 Camera Ready Version (spotlight)
Subjects: Artificial Intelligence (cs.AI); Computer Vision and Pattern Recognition (cs.CV); Robotics (cs.RO); Neurons and Cognition (q-bio.NC)
Cite as: arXiv:2305.11772 [cs.AI] (or arXiv:2305.11772v2 [cs.AI] for this version)

Submission history

From: Aran Nayebi
[v1] Fri, 19 May 2023 15:56:06 UTC (4,080 KB)
[v2] Wed, 25 Oct 2023 15:34:16 UTC (4,347 KB)
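To make the abstract's core setup more tangible, here is a minimal numpy sketch of future prediction in the latent space of a frozen encoder: a stand-in "pretrained" encoder maps frames to latent vectors, and only a small predictor from latent_t to latent_{t+1} is trained. The encoder, shapes, and toy data are illustrative assumptions, not the paper's actual models, data, or results.

```python
import numpy as np

rng = np.random.default_rng(0)

def frozen_encoder(frames):
    """Stand-in for a pretrained, frozen foundation-model encoder."""
    w = rng.normal(size=(frames.shape[-1], 64)) / np.sqrt(frames.shape[-1])
    return np.tanh(frames @ w)

frames = rng.normal(size=(500, 128))   # toy "frames" as feature vectors
latents = frozen_encoder(frames)       # (500, 64) latent trajectory

# Train only the predictor latent_t -> latent_{t+1}; the encoder stays frozen.
x, y = latents[:-1], latents[1:]
w_pred = np.zeros((64, 64))
lr = 0.1
for _ in range(200):                   # plain least-squares via gradient descent
    err = x @ w_pred - y
    w_pred -= lr * x.T @ err / len(x)

mse = float(((x @ w_pred - y) ** 2).mean())
print(f"one-step latent prediction MSE: {mse:.4f}")
```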

 

bnew


Researchers perform AI speech recognition with human brain cells


DECEMBER 11, 2023

  • Researchers used lab-grown brain cells to perform complex speech recognition tasks
  • The 'mini brain' organoid system, called Brainoware, demonstrates a new form of AI
  • It was capable of distinguishing a single voice from 240 audio recordings of multiple Japanese vowel sounds



SAM JEANS


Clusters of human brain cells cultivated in Petri dishes have been integrated with computers to achieve a fundamental level of speech recognition.

Feng Guo of Indiana University Bloomington, whose study is published in Nature Electronics, explains: “This is a preliminary demonstration to show the feasibility of the concept. There’s still a considerable journey ahead.”

Guo points out two primary challenges in conventional AI that this form of biological AI seeks to address: high energy consumption and the inherent limitations of silicon chips, such as the separation of their processing and information-storage functions.

Guo’s team, along with others, such as Australia’s Cortical Labs, which trained brain cells to play Pong in 2022, are exploring biocomputing with living nerve cells as a potential solution to these challenges.

These brain organoids (self-organized, three-dimensional tissue cultures resembling mini-brains) emerge from stem cells under specific growth conditions.

They can grow to a few millimeters in diameter and contain up to 100 million nerve cells. By comparison, a human brain has approximately 100 billion nerve cells. The organoids are positioned atop a microelectrode array, which both stimulates the organoid and records neuronal activity. Guo’s team refers to this setup as “Brainoware.”

Essentially, Brainoware is a new form of AI quite different from what we usually see in computers and smartphones.

Instead of using regular chips, researchers have created a small cluster of human brain cells – the brain organoid. This tiny ‘mini-brain’ is grown in a lab from stem cells, and it can perform some basic tasks that we usually associate with AI, like recognizing speech patterns.



A) A diagram of the “Brainoware” system, which shows a brain organoid (a lab-grown mini-brain) connected to a device that records and stimulates its electrical activity. B) A microscopic image of the brain organoid, stained to highlight its different cell types, such as mature neurons, astrocytes, early-stage neurons, and progenitor cells, showing its complex 3D structure. Source: Nature Electronics.


How it works

The brain organoid is placed on a special device that can send and read electrical signals.

By doing this, the researchers can communicate with the organoid, kind of like teaching it to respond to certain patterns or inputs. In the study, they trained it to recognize different voices from audio clips.

One of the most remarkable aspects of Brainoware is that it learns and adapts. Just like a human brain gets better at tasks with practice, the organoid improves its ability to recognize voices the more it’s exposed to them.

This brings us a step closer to creating AI that works more like the human brain, which is super efficient and doesn’t need a lot of energy.
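The training loop described here, fixed biological dynamics plus an adaptable mapping from stimulation patterns to read-out responses, resembles reservoir computing. The numpy sketch below captures that idea with a random recurrent "reservoir" standing in for the organoid and a trained linear readout. It is only an analogy to the published setup (which adapts the living tissue itself, largely without labels), not its implementation, and every size and data point here is made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random fixed "reservoir" standing in for the organoid: inputs perturb a
# nonlinear dynamical system, and only a linear readout is trained on its state.
n_in, n_res, n_classes = 12, 300, 8            # illustrative sizes
w_in = rng.normal(scale=0.5, size=(n_res, n_in))
w_res = rng.normal(scale=1.0 / np.sqrt(n_res), size=(n_res, n_res))

def reservoir_response(seq):
    """Drive the reservoir with an input sequence and return its final state."""
    state = np.zeros(n_res)
    for frame in seq:
        state = np.tanh(w_in @ frame + w_res @ state)
    return state

# Toy data: 240 random "utterances" of 20 frames each, with 8 speaker labels.
X = rng.normal(size=(240, 20, n_in))
y = rng.integers(0, n_classes, size=240)

states = np.stack([reservoir_response(seq) for seq in X])   # (240, n_res)
targets = np.eye(n_classes)[y]                              # one-hot labels

# Train only the linear readout (ridge regression); the reservoir stays fixed.
ridge = 1e-2
w_out = np.linalg.solve(states.T @ states + ridge * np.eye(n_res), states.T @ targets)
pred = (states @ w_out).argmax(axis=1)
print(f"training accuracy on toy data: {(pred == y).mean():.2f}")
```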

However, there are challenges. Growing these brain organoids is tricky – they’re hard to create, tough to replicate consistently, and don’t last long, but the team is working on solutions.


Brainoware performance

In an unsupervised speech recognition experiment, the organoids were trained to distinguish a single voice from 240 audio recordings of eight individuals uttering Japanese vowel sounds. These sounds were converted into signal sequences and spatial patterns for the organoids.

Initially, the organoids showed an accuracy rate of approximately 30 to 40%, which improved to 70 to 80% after two days of training.


More about the study

Bio-inspired AI takes a few different forms, such as neuromorphic chips based on biological neurons. Brainoware goes a step further by creating the computational architecture from a biological organoid itself.

Here’s more detail about how it works:

  1. Bio-inspired AI hardware: The study, published in Nature Electronics, introduces Brainoware, a novel AI hardware that employs biological neural networks within a brain organoid. This marks a fundamental shift from traditional brain-inspired silicon chips, offering a more authentic emulation of brain function.
  2. Brainoware’s structure and functionality: Brainoware operates by interfacing a brain organoid, grown from human pluripotent stem cells, with a high-density multielectrode array. This setup allows for both the transmission of electrical signals to the organoid and the detection of neural responses. The organoid exhibits properties like nonlinear dynamics, memory, and the ability to process spatial information.
  3. Applications demonstrated in the study: The team successfully applied Brainoware in practical scenarios, such as speech recognition and predicting nonlinear chaotic equations like the Hénon map (generated in the sketch after this list). This shows Brainoware’s ability to improve its computing performance through training, emphasizing its potential for tasks requiring adaptive learning.
  4. Challenges and limitations: Despite its innovative approach, Brainoware faces several technical challenges, including the generation and maintenance of brain organoids. Additionally, the hardware’s reliance on peripheral equipment limits its potential; in other words, a lot of supporting equipment is needed to keep the brain organoids working correctly.
  5. Future directions and potential: The study suggests that with advancements in organoid cultivation and solutions to the practical issues associated with organoids, Brainoware could evolve into a more efficient and sophisticated system. This could lead to AI hardware that more closely mimics human brain function, potentially lowering energy consumption.
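For reference, the Hénon map mentioned in point 3 is a standard two-dimensional chaotic benchmark. The sketch below generates the kind of trajectory such a predictor would be trained on, using the classic parameters a = 1.4 and b = 0.3; it shows the benchmark itself, not how Brainoware was driven.

```python
import numpy as np

def henon_map(n_steps, a=1.4, b=0.3, x0=0.0, y0=0.0):
    """Generate the classic Hénon map trajectory, a simple chaotic benchmark."""
    xs, ys = np.empty(n_steps), np.empty(n_steps)
    x, y = x0, y0
    for i in range(n_steps):
        x, y = 1.0 - a * x * x + y, b * x   # Hénon update rule
        xs[i], ys[i] = x, y
    return xs, ys

xs, ys = henon_map(1000)
print(f"first few x values: {np.round(xs[:5], 3)}")
```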

In the future, these types of biocomputing systems might eventually perform AI tasks more energy-efficiently than traditional silicon-based chips.

Developments in bio-inspired AI from this year show immense promise in helping the AI industry overcome the confines of brute-force computing and create energy-efficient technologies as elegant as nature.
 

bnew
