bnew

snippet:

Abstract

Mapping behavioural actions to neural activity is a fundamental goal of neuroscience. As our ability to record large neural and behavioural data increases, there is growing interest in modelling neural dynamics during adaptive behaviours to probe neural representations [1,2,3]. In particular, although neural latent embeddings can reveal underlying correlates of behaviour, we lack nonlinear techniques that can explicitly and flexibly leverage joint behaviour and neural data to uncover neural dynamics [3,4,5]. Here, we fill this gap with a new encoding method, CEBRA, that jointly uses behavioural and neural data in a (supervised) hypothesis- or (self-supervised) discovery-driven manner to produce both consistent and high-performance latent spaces. We show that consistency can be used as a metric for uncovering meaningful differences, and the inferred latents can be used for decoding. We validate its accuracy and demonstrate our tool’s utility for both calcium and electrophysiology datasets, across sensory and motor tasks and in simple or complex behaviours across species. It allows leverage of single- and multi-session datasets for hypothesis testing or can be used label free. Lastly, we show that CEBRA can be used for the mapping of space, uncovering complex kinematic features, for the production of consistent latent spaces across two-photon and Neuropixels data, and can provide rapid, high-accuracy decoding of natural videos from visual cortex.
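To make the workflow concrete, here is a minimal sketch using the published cebra Python package in its supervised, hypothesis-driven mode. The array shapes and training settings are illustrative assumptions, not the paper's configuration.

Code:
# Hedged sketch (not the paper's settings): fit a behaviour-informed CEBRA
# embedding, then project neural activity into the learned latent space.
import numpy as np
from cebra import CEBRA

neural = np.random.randn(10_000, 120)     # time bins x neurons (placeholder data)
behavior = np.random.randn(10_000, 2)     # e.g. position/velocity labels per bin

model = CEBRA(output_dimension=3,         # 3D latent space, e.g. for visualization
              batch_size=512,
              max_iterations=5_000)
model.fit(neural, behavior)               # supervised, hypothesis-driven mode
embedding = model.transform(neural)       # (10_000, 3) latent trajectory for decoding

Calling fit(neural) without the behaviour labels gives the self-supervised, discovery-driven variant the abstract describes.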
 

bnew

Google and OpenAI will lose the AI arms race to open-source engineers, a Googler said in a leaked document

Hasan Chowdhury
May 5, 2023, 7:30 AM EDT

A leaked document claims Google has "no secret sauce" in the AI race. Brandon Wade/Reuters

  • Google and OpenAI are not positioned to win the AI arms race, a Google engineer said in a leaked doc.
  • A document written by a senior Google engineer suggests the open-source community is rivaling them.
  • The document published by SemiAnalysis said Google had "no secret sauce" to help them succeed.


Google and OpenAI are not positioned to win a hotly contested AI arms race as the open-source community is "lapping" them, a senior engineer at the search giant has said.

A leaked internal document, first shared by an anonymous account on a Discord server and then published by consulting firm SemiAnalysis after verifying its authenticity, said Google had "no secret sauce" in an AI race that has taken shape in recent months.

The document, written by Google senior software engineer Luke Sernau, according to Bloomberg, suggested that although Google has "done a lot of looking over" its shoulders at rival OpenAI, open source engineers have been quietly surpassing them both.

"The uncomfortable truth is, we aren't positioned to win this arms race and neither is OpenAI. While we've been squabbling, a third faction has been quietly eating our lunch," the document noted. "I'm talking, of course, about open source. Plainly put, they are lapping us."

Sernau added that without "secret sauce," Google's best hope was "to learn from and collaborate with what others are doing outside Google," and prioritize third-party integrations.

Sernau said open-source engineers were doing things with $100 that "we struggle with" at $10 million, "doing so in weeks, not months."

The comments come amid a fierce new race for AI supremacy, which has been underway since the release of ChatGPT in November, with both Google and Microsoft-backed OpenAI attempting to keep pace with each other on advancements in their AI models.

Though Sernau said that "our models still hold a slight edge in terms of quality," open-source engineers have managed to solve several things that the likes of Google still consider "major open problems."

He cited running AI models on a phone as one such problem, which he said open-source engineers have already managed on a Google Pixel 6. "Open-source models are faster, more customizable, more private, and pound-for-pound more capable," he wrote.

OpenAI and Sernau did not immediately respond to Insider's request for comment made outside of normal working hours. Google declined to comment on the record.
 

bnew

Researchers translate brain waves into scarily accurate images using Stable Diffusion AI


By Katie Wickens

published March 06, 2023

Researchers have been able to use Stable Diffusion models to reconstruct images from human brain waves.


Imagine having the ability to recall your memories with near perfect precision. As we move into an ever more AI-centric future, that dream looks set to come true. With researchers now having used Stable Diffusion to reconstruct pretty damn accurate, high resolution images by reading human brain waves, we could one day be pulling up images from the annals of our minds without having taken a single photograph.

Researchers Yu Takagi and Shinji Nishimoto, from the Graduate School of Frontier Biosciences at Osaka University, recently wrote a paper outlining how it's possible to reconstruct high-res images (PDF) using latent diffusion models, by reading human brain activity gained from functional Magnetic Resonance Imaging (fMRI), "without the need for training or fine-tuning of complex deep generative models" (via Vice).

Considering we don't fully understand how the underlying translation happens in the brain, the results from this research are incredible. The fact Takagi and Nishimoto were able to coax high resolution images from the latent space using human brain waves is astonishing.

Alright, they say high resolution, but we're talking 512 x 512 pixels. Still, it's a darn sight better than the 256p the competition had managed, and with a much higher "semantic fidelity", too. In other words, the translations are actually vaguely recognisable, and representative of the original images participants had been shown.

Shown images (top) and those reconstructed through stable diffusion by reading fMRI scans (bottom). (Image credit: Takagi & Nishimoto)


Previous studies involved "training and possibly fine-tuning of generative models, such as GANs, with the same dataset used in the fMRI experiments", the researchers explain. It's challenging as these generative models are not only awkward as heck to work with, but the training material is super limited. It looks like the Osaka researchers have been able to circumvent these limitations using Stable Diffusion, and the results are looking genuinely impressive.
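The pipeline the researchers describe can be sketched in a few lines: simple linear models map fMRI voxel patterns into Stable Diffusion's latent space, and the pretrained decoder does the rest, with no generative model trained or fine-tuned. The sketch below covers only the image-latent half (the paper also maps brain activity to text conditioning for the diffusion step), and the data loader, model ID, and regularization value are assumptions for illustration.

Code:
# Hedged sketch of the latent-decoding half: ridge regression from fMRI voxels
# to Stable Diffusion image latents, then decoding with the pretrained VAE.
import torch
from sklearn.linear_model import Ridge
from diffusers import AutoencoderKL

# X: fMRI voxels per trial; Z: flattened SD latents of the images shown
X_train, Z_train, X_test = load_fmri_and_latents()   # hypothetical loader

reg = Ridge(alpha=1000.0)                 # linear map only, no deep-model training
reg.fit(X_train, Z_train)
z = torch.tensor(reg.predict(X_test), dtype=torch.float32).reshape(-1, 4, 64, 64)

vae = AutoencoderKL.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="vae")
with torch.no_grad():
    images = vae.decode(z / 0.18215).sample    # undo SD v1's latent scaling factor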

We've all seen the Black Mirror episode. The Entire History of You is a terrifying look into a future in which an implant records our daily life so we can later put every moment under intense scrutiny and ruin our relationships.

But before you consign the idea of using AI for visual recall to the dystopian segment of your brainhole, think of the practical uses! We could one day see nonverbal people, or paralysed people who can't simply snap a pic of something to show later, able to show us exactly what they're thinking of by running their brain waves through an artificial intelligence.

As one of the first studies (if not the first) to use diffusion models in this way, it may at least help to paint such algorithms in a better light. Stable Diffusion has come under fire of late, at least in the art space. Rightly so when some diffusion models scrape the internet and regurgitate the front page of ArtStation, only to be used for some lazy party's own financial gain.

But if the data is used right, and the ease of training these models can benefit the field of accessibility so that people can give accurate representations of their own inner worlds and communicate in new ways, I'm all for it.
 

bnew

Google "We Have No Moat, And Neither Does OpenAI"​

Leaked Internal Google Document Claims Open Source AI Will Outcompete Google and OpenAI​



Dylan Patel and Afzal Ahmad
May 4, 2023


The text below is a very recent leaked document, which was shared by an anonymous individual on a public Discord server who has granted permission for its republication. It originates from a researcher within Google. We have verified its authenticity. The only modifications are formatting and removing links to internal web pages. The document is only the opinion of a Google employee, not the entire firm. We do not agree with what is written below, nor do other researchers we asked, but we will publish our opinions on this in a separate piece for subscribers. We simply are a vessel to share this document, which raises some very interesting points.


{continue reading on site}
 

#BOTHSIDES

OP, do you have a list of recommended sites, accounts/pages to follow? It would be great to add them on your first post if possible
 

bnew

snippet:

Unlimiformer: Long-Range Transformers with Unlimited Length Input

Amanda Bertsch, Uri Alon, Graham Neubig, Matthew R. Gormley
Transformer-based models typically have a predefined bound to their input length, because of their need to potentially attend to every token in the input. In this work, we propose Unlimiformer: a general approach that can wrap any existing pretrained encoder-decoder transformer, and offload the attention computation across all layers to a single k-nearest-neighbor index; this index can be kept on either the GPU or CPU memory and queried in sub-linear time. This way, we can index extremely long input sequences, while every attention head in every decoder layer retrieves its top-k keys, instead of attending to every key. We demonstrate Unlimiformer's efficacy on several long-document and multi-document summarization benchmarks, showing that it can summarize even 350k token-long inputs from the BookSum dataset, without any input truncation at test time. Unlimiformer improves pretrained models such as BART and Longformer by extending them to unlimited inputs without additional learned weights and without modifying their code. We make our code and models publicly available at this https URL.
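The retrieval step is simple to illustrate: encoder states for the whole, arbitrarily long input live in one nearest-neighbour index, and each decoder attention head attends only to its top-k retrieved keys. Below is a minimal PyTorch sketch of that idea; the brute-force search stands in for the shared index (Faiss or similar), and the shapes and value of k are illustrative assumptions.

Code:
import torch
import torch.nn.functional as F

def knn_cross_attention(query, keys, values, k=16):
    # query: (heads, d)  -- one decoding step's queries, one row per attention head
    # keys, values: (n, d) -- encoder states for an arbitrarily long input
    # 1. Retrieve: brute-force inner-product search (stand-in for a Faiss index).
    scores = query @ keys.T                        # (heads, n)
    top_scores, top_idx = scores.topk(k, dim=-1)   # keep only the k best keys per head
    # 2. Attend over just the retrieved keys/values instead of all n of them.
    weights = F.softmax(top_scores / keys.shape[-1] ** 0.5, dim=-1)  # (heads, k)
    retrieved = values[top_idx]                                      # (heads, k, d)
    return torch.einsum("hk,hkd->hd", weights, retrieved)

# Toy usage: a 100k-token "input" that standard full attention could not hold.
enc = torch.randn(100_000, 64)
out = knn_cross_attention(torch.randn(8, 64), enc, enc)
print(out.shape)  # torch.Size([8, 64])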

 

bnew

OP, do you have a list of recommended sites, accounts/pages to follow? It would be great to add them on your first post if possible

I find some info from r/singularity, r/futurology, tech news sites, searching Twitter, and browsing the profiles of people commenting on AI development.

I'll try to make a list this weekend of the accounts I've posted tweets from here.
 

bnew

Today is the Llama moment for coding!

StarCoder-15B reaches 40.8% on HumanEval benchmark, beating the 30x bigger PaLM.

Coding holds a very special place in NLP. Most software in the world has AI-friendly APIs. LLMs good at coding will master the digital tools, greatly improve productivity, and some day become full-blown autonomous agents.

StarCoder's training corpus includes permissively licensed data from GitHub covering over 80 programming languages. VSCode integration is available. The open-source LLM community is advancing at superhuman speed.
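For anyone wanting to try it, a rough sketch of loading StarCoder through Hugging Face transformers is below. The checkpoint name follows the BigCode announcement and is an assumption here; the model is gated behind a license agreement on the Hub, and the 15B weights need a sizeable GPU.

Code:
# Hedged sketch: generate a code completion with StarCoder via transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "bigcode/starcoder"   # gated; accept the license on the Hub first
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto")

prompt = "def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0]))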












 

bnew

GPT4All

A free-to-use, locally running, privacy-aware chatbot. No GPU or internet required.
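A quick sketch of what "locally running, no GPU required" looks like with the gpt4all Python bindings. The model name here is an assumption; the library ships a catalogue of quantized checkpoints it can download on first use and then run fully offline on CPU.

Code:
# Hedged sketch of running GPT4All locally on CPU via its Python bindings.
from gpt4all import GPT4All

# Model filename is an assumption; the package downloads it (a few GB) once,
# after which generation runs offline on CPU.
model = GPT4All("ggml-gpt4all-j-v1.3-groovy")
print(model.generate("Explain what a latent space is, in two sentences.", max_tokens=128))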

 

TQbrit

Researchers translate brain waves into scarily accurate images using Stable Diffusion AI

This won't end well
 